Statistically significant relational data mining :
Energy Technology Data Exchange (ETDEWEB)
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.
2014-02-01
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Statistical Significance for Hierarchical Clustering
Kimes, Patrick K.; Liu, Yufeng; Hayes, D. Neil; Marron, J. S.
2017-01-01
Summary: Cluster analysis has proved to be an invaluable tool for the exploratory and unsupervised analysis of high-dimensional datasets. Among methods for clustering, hierarchical approaches have enjoyed substantial popularity in genomics and other fields for their ability to simultaneously uncover multiple layers of clustering structure. A critical and challenging question in cluster analysis is whether the identified clusters represent important underlying structure or are artifacts of natural sampling variation. Few approaches have been proposed for addressing this problem in the context of hierarchical clustering, for which the problem is further complicated by the natural tree structure of the partition, and the multiplicity of tests required to parse the layers of nested clusters. In this paper, we propose a Monte Carlo based approach for testing statistical significance in hierarchical clustering which addresses these issues. The approach is implemented as a sequential testing procedure guaranteeing control of the family-wise error rate. Theoretical justification is provided for our approach, and its power to detect true clustering structure is illustrated through several simulation studies and applications to two cancer gene expression datasets. PMID:28099990
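The Monte Carlo testing idea described in this abstract can be illustrated with a small sketch. This is not the authors' procedure: the Ward linkage, the cluster index, the single-Gaussian null model, and all parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

def cluster_index(X, labels):
    # within-cluster sum of squares over total sum of squares; small => tight clusters
    total = ((X - X.mean(axis=0)) ** 2).sum()
    within = sum(((X[labels == k] - X[labels == k].mean(axis=0)) ** 2).sum()
                 for k in np.unique(labels))
    return within / total

def two_way_split_index(X):
    labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
    return cluster_index(X, labels)

# two well-separated Gaussian clusters
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(6, 1, (40, 2))])
observed = two_way_split_index(X)

# Monte Carlo null: a single Gaussian fitted to the pooled data
null = [two_way_split_index(
            rng.multivariate_normal(X.mean(axis=0), np.cov(X.T), size=len(X)))
        for _ in range(200)]
p_value = (1 + sum(n <= observed for n in null)) / (1 + len(null))
```

A small p_value indicates the observed two-way split is tighter than expected under a single-cluster null; a sequential version with family-wise error control, as in the paper, would repeat such a test down the dendrogram.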
Statistical significance versus clinical relevance.
van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G
2017-04-01
In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
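The P-value interpretation given here is easy to verify by simulation, and a confidence interval can be computed alongside the test. A minimal sketch only; the sample sizes and the true effect are arbitrary assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# when the null hypothesis is true, ~5% of tests give P < 0.05 by construction
reps = 4000
false_pos = sum(
    stats.ttest_ind(rng.normal(0, 1, 30), rng.normal(0, 1, 30)).pvalue < 0.05
    for _ in range(reps))
rate = false_pos / reps   # close to the nominal 0.05

# a 95% confidence interval conveys magnitude and precision, not just a verdict
a = rng.normal(0.0, 1, 200)
b = rng.normal(0.3, 1, 200)
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
lo, hi = diff - 1.96 * se, diff + 1.96 * se
```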
Statistical significance of cis-regulatory modules
Directory of Open Access Journals (Sweden)
Smith Andrew D
2007-01-01
Full Text Available Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM) and MODSTORM software.
The thresholds for statistical and clinical significance
DEFF Research Database (Denmark)
Jakobsen, Janus Christian; Gluud, Christian; Winkel, Per
2014-01-01
BACKGROUND: Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore … of the probability that a given trial result is compatible with a 'null' effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance …
The insignificance of statistical significance testing
Johnson, Douglas H.
1999-01-01
Despite their use in scientific journals such as The Journal of Wildlife Management, statistical hypothesis tests add very little value to the products of research. Indeed, they frequently confuse the interpretation of data. This paper describes how statistical hypothesis tests are often viewed, and then contrasts that interpretation with the correct one. I discuss the arbitrariness of P-values, conclusions that the null hypothesis is true, power analysis, and distinctions between statistical and biological significance. Statistical hypothesis testing, in which the null hypothesis about the properties of a population is almost always known a priori to be false, is contrasted with scientific hypothesis testing, which examines a credible null hypothesis about phenomena in nature. More meaningful alternatives are briefly outlined, including estimation and confidence intervals for determining the importance of factors, decision theory for guiding actions in the face of uncertainty, and Bayesian approaches to hypothesis testing and other statistical practices.
Swiss solar power statistics 2007 - Significant expansion
International Nuclear Information System (INIS)
Hostettler, T.
2008-01-01
This article presents and discusses the 2007 statistics for solar power in Switzerland. A significant number of new installations is noted, as are the high production figures from newer installations. The basics behind the compilation of the Swiss solar power statistics are briefly reviewed and an overview for the period 1989 to 2007 is presented which includes figures on the number of photovoltaic plants in service and installed peak power. Typical production figures in kilowatt-hours (kWh) per installed kilowatt-peak power (kWp) are presented and discussed for installations of various sizes. Increased production after inverter replacement in older installations is noted. Finally, the general political situation in Switzerland as far as solar power is concerned is briefly discussed, as are international developments.
Significant Statistics: Viewed with a Contextual Lens
Tait-McCutcheon, Sandi
2010-01-01
This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…
Testing the Difference of Correlated Agreement Coefficients for Statistical Significance
Gwet, Kilem L.
2016-01-01
This article addresses the problem of testing the difference between two correlated agreement coefficients for statistical significance. A number of authors have proposed methods for testing the difference between two correlated kappa coefficients, which require either the use of resampling methods or the use of advanced statistical modeling…
Significance levels for studies with correlated test statistics.
Shi, Jianxin; Levinson, Douglas F; Whittemore, Alice S
2008-07-01
When testing large numbers of null hypotheses, one needs to assess the evidence against the global null hypothesis that none of the hypotheses is false. Such evidence typically is based on the test statistic of the largest magnitude, whose statistical significance is evaluated by permuting the sample units to simulate its null distribution. Efron (2007) has noted that correlation among the test statistics can induce substantial interstudy variation in the shapes of their histograms, which may cause misleading tail counts. Here, we show that permutation-based estimates of the overall significance level also can be misleading when the test statistics are correlated. We propose that such estimates be conditioned on a simple measure of the spread of the observed histogram, and we provide a method for obtaining conditional significance levels. We justify this conditioning using the conditionality principle described by Cox and Hinkley (1974). Application of the method to gene expression data illustrates the circumstances when conditional significance levels are needed.
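The permutation-based max-statistic baseline that this paper builds on can be sketched as follows. This illustrates only the unconditional procedure the authors critique, not their conditional refinement; the data and all parameters are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# 200 correlated test statistics from 50 samples; the global null is true here
n, m = 50, 200
shared = rng.normal(size=(n, 1))                  # common factor induces correlation
X = 0.7 * shared + 0.7 * rng.normal(size=(n, m))
labels = np.array([0] * 25 + [1] * 25)

def max_abs_t(X, labels):
    a, b = X[labels == 0], X[labels == 1]
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.abs((a.mean(axis=0) - b.mean(axis=0)) / se).max()

observed = max_abs_t(X, labels)

# permute sample labels to simulate the null distribution of the largest statistic
perm = np.array([max_abs_t(X, rng.permutation(labels)) for _ in range(500)])
p_global = (1 + (perm >= observed).sum()) / (1 + len(perm))
```

The paper's point is that when the statistics are correlated, this unconditional p_global can mislead; the proposed remedy conditions the significance level on a measure of the spread of the observed histogram.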
Caveats for using statistical significance tests in research assessments
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg
2013-01-01
This article raises concerns about the advantages of using statistical significance tests in research assessments, as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice … We argue that applying statistical significance tests and mechanically adhering to their results are highly problematic and detrimental to critical thinking. We claim that the use of such tests does not provide any advantages in relation to deciding whether differences between citation indicators …
On detection and assessment of statistical significance of Genomic Islands
Directory of Open Access Journals (Sweden)
Chaudhuri Probal
2008-04-01
Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements confirming their horizontal acquirement. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
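The flavor of Monte Carlo test used here can be illustrated on the simplest GI signal, a GC-content bias. This is a toy sketch only: the real method combines several biases and a refinement phase, and all sequence parameters below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def gc(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

# toy chromosome: ~50% GC background with a 5 kb high-GC insert at position 40 kb
bases = np.array(list("ACGT"))
chrom = "".join(rng.choice(bases, size=100_000))
island = "".join(rng.choice(bases, size=5_000, p=[0.15, 0.35, 0.35, 0.15]))
chrom = chrom[:40_000] + island + chrom[45_000:]

win = 5_000
observed = gc(chrom[40_000:45_000])   # GC content of the candidate segment

# Monte Carlo null: GC content of randomly placed windows of the same length
starts = rng.integers(0, len(chrom) - win, size=300)
null = np.array([gc(chrom[s:s + win]) for s in starts])
p_value = (1 + (null >= observed).sum()) / (1 + len(null))
```

Because the null distribution comes from random segments of the same chromosome, the resulting P-value needs no parametric assumption, which is the spirit of the Design-Island test.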
Your Chi-Square Test Is Statistically Significant: Now What?
Sharpe, Donald
2015-01-01
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
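The first follow-up approach the paper evaluates, calculating residuals, can be sketched as follows. The table counts are invented, and the adjusted-residual formula shown is the standard one, not necessarily the paper's exact variant.

```python
import numpy as np
from scipy import stats

# toy 2x3 contingency table
table = np.array([[30, 10, 20],
                  [10, 30, 20]])
chi2, p, dof, expected = stats.chi2_contingency(table)

# adjusted standardized residuals pinpoint the cells driving a significant result
n = table.sum()
row = table.sum(axis=1, keepdims=True) / n
col = table.sum(axis=0, keepdims=True) / n
adj = (table - expected) / np.sqrt(expected * (1 - row) * (1 - col))
# cells with |adj| > 2 depart notably from independence
```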
Statistical Significance and Effect Size: Two Sides of a Coin.
Fan, Xitao
This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…
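The complementarity of the two "sides of the coin" is easy to demonstrate in a few lines. This is a hedged sketch, not the paper's simulation design; the sample sizes and the true effect are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def cohens_d(a, b):
    # pooled-standard-deviation standardized mean difference
    na, nb = len(a), len(b)
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / sp

# a tiny true effect becomes "significant" once the sample is large enough
a = rng.normal(0.00, 1, 50_000)
b = rng.normal(0.05, 1, 50_000)
res = stats.ttest_ind(b, a)
d = cohens_d(b, a)
# res.pvalue is far below 0.05, yet d stays near 0.05: a trivial effect size
```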
Reporting effect sizes as a supplement to statistical significance ...
African Journals Online (AJOL)
The purpose of the article is to review the statistical significance reporting practices in reading instruction studies and to provide guidelines for when to calculate and report effect sizes in educational research. A review of six readily accessible (online) and accredited journals publishing research on reading instruction ...
Test for the statistical significance of differences between ROC curves
International Nuclear Information System (INIS)
Metz, C.E.; Kronman, H.B.
1979-01-01
A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions.
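Once the maximum likelihood fits are in hand, the final step of such a test reduces to a quadratic form in the parameter differences. A sketch with invented parameter estimates and covariance matrices; the binormal fitting itself is omitted, and independent curves are assumed so the covariances add.

```python
import numpy as np
from scipy import stats

# hypothetical binormal fits: (intercept a, slope b) on double normal-deviate axes
theta1 = np.array([1.20, 0.90])
cov1 = np.array([[0.010, 0.002],
                 [0.002, 0.008]])
theta2 = np.array([0.85, 1.05])
cov2 = np.array([[0.012, 0.003],
                 [0.003, 0.009]])

# approximate chi-square statistic with two degrees of freedom
d = theta1 - theta2
chi2 = d @ np.linalg.inv(cov1 + cov2) @ d
p_value = stats.chi2.sf(chi2, df=2)
```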
Increasing the statistical significance of entanglement detection in experiments.
Jungnitsch, Bastian; Niekamp, Sönke; Kleinmann, Matthias; Gühne, Otfried; Lu, He; Gao, Wei-Bo; Chen, Yu-Ao; Chen, Zeng-Bing; Pan, Jian-Wei
2010-05-28
Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. Experimentally, we observe this phenomenon in a four-photon experiment, testing the Mermin and Ardehali inequality for different levels of noise. Furthermore, we provide a way to develop entanglement tests with high statistical significance.
Directory of Open Access Journals (Sweden)
Priya Ranganathan
2015-01-01
Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.
Statistical significance of epidemiological data. Seminar: Evaluation of epidemiological studies
International Nuclear Information System (INIS)
Weber, K.H.
1993-01-01
In stochastic damages, the numbers of events, e.g. the persons who are affected by or have died of cancer, and thus the relative frequencies (incidence or mortality) are binomially distributed random variables. Their statistical fluctuations can be characterized by confidence intervals. For epidemiologic questions, especially for the analysis of stochastic damages in the low dose range, the following issues are interesting: - Is a sample (a group of persons) with a definite observed damage frequency part of the whole population? - Is an observed frequency difference between two groups of persons random or statistically significant? - Is an observed increase or decrease of the frequencies with increasing dose random or statistically significant and how large is the regression coefficient (= risk coefficient) in this case? These problems can be solved by statistical tests. So-called distribution-free tests and tests which are not bound to the supposition of normal distribution are of particular interest, such as: - χ²-independence test (test in contingency tables); - Fisher-Yates test; - trend test according to Cochran; - rank correlation test given by Spearman. These tests are explained in terms of selected epidemiologic data, e.g. of leukaemia clusters, of the cancer mortality of the Japanese A-bomb survivors especially in the low dose range as well as on the sample of the cancer mortality in the high background area in Yangjiang (China). (orig.) [de]
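Three of the four distribution-free tests listed are available directly in scipy.stats. A sketch with invented counts and dose-response values; the Cochran trend test has no single scipy equivalent and is omitted.

```python
import numpy as np
from scipy import stats

# 2x2 contingency table: exposed vs unexposed by diseased vs healthy (toy counts)
table = np.array([[12, 5],
                  [4, 15]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)  # χ²-independence test
odds, p_fisher = stats.fisher_exact(table)            # exact test for small counts

# Spearman rank correlation for a dose-response trend
dose = np.array([0, 1, 2, 5, 10, 20])
response = np.array([1.0, 1.1, 1.3, 1.2, 1.8, 2.3])
rho, p_trend = stats.spearmanr(dose, response)
```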
Systematic reviews of anesthesiologic interventions reported as statistically significant
DEFF Research Database (Denmark)
Imberger, Georgina; Gluud, Christian; Boylan, John
2015-01-01
… statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may … RESULTS: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number …
Increasing the statistical significance of entanglement detection in experiments
Energy Technology Data Exchange (ETDEWEB)
Jungnitsch, Bastian; Niekamp, Soenke; Kleinmann, Matthias; Guehne, Otfried [Institut fuer Quantenoptik und Quanteninformation, Innsbruck (Austria); Lu, He; Gao, Wei-Bo; Chen, Zeng-Bing [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Chen, Yu-Ao; Pan, Jian-Wei [Hefei National Laboratory for Physical Sciences at Microscale and Department of Modern Physics, University of Science and Technology of China, Hefei (China); Physikalisches Institut, Universitaet Heidelberg (Germany)
2010-07-01
Entanglement is often verified by a violation of an inequality like a Bell inequality or an entanglement witness. Considerable effort has been devoted to the optimization of such inequalities in order to obtain a high violation. We demonstrate theoretically and experimentally that such an optimization does not necessarily lead to a better entanglement test, if the statistical error is taken into account. Theoretically, we show for different error models that reducing the violation of an inequality can improve the significance. We show this to be the case for an error model in which the variance of an observable is interpreted as its error and for the standard error model in photonic experiments. Specifically, we demonstrate that the Mermin inequality yields a Bell test which is statistically more significant than the Ardehali inequality in the case of a photonic four-qubit state that is close to a GHZ state. Experimentally, we observe this phenomenon in a four-photon experiment, testing the above inequalities for different levels of noise.
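The paper's central point, that a larger violation need not mean a more significant test, comes down to one ratio: the violation divided by its statistical error. All numbers below are invented for illustration, including the classical bounds.

```python
# hypothetical measured operator values, 1-sigma errors, and classical bounds
mermin_value, mermin_err = 3.4, 0.10
ardehali_value, ardehali_err = 4.0, 0.18
mermin_bound, ardehali_bound = 2.0, 2.83

# significance in standard deviations of violation above the classical bound
sig_mermin = (mermin_value - mermin_bound) / mermin_err
sig_ardehali = (ardehali_value - ardehali_bound) / ardehali_err
# here the smaller raw violation (Mermin) yields the statistically stronger test
```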
Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.
Kramer, Karen L; Veile, Amanda; Otárola-Castillo, Erik
2016-01-01
Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.
A tutorial on hunting statistical significance by chasing N
Directory of Open Access Journals (Sweden)
Denes Szucs
2016-09-01
Full Text Available There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking) is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticise some typical problems, here I systematically illustrate the large impact of some easy-to-implement and therefore perhaps frequent data dredging techniques on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post hoc along potential but unplanned independent grouping variables. The first approach 'hacks' the number of participants in studies; the second approach 'hacks' the number of variables in the analysis. I demonstrate the high number of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented data dredging steps can easily lead to having 20-50% or more false positives.
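The first dredging technique, repeatedly checking for significance while data collection is under way, is simple to simulate. A sketch under assumed batch sizes, not the author's exact protocol.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def peeking_study(max_n=100, start_n=10, step=10, alpha=0.05):
    """One study under a true null, testing after every batch of participants."""
    a = rng.normal(size=max_n)
    b = rng.normal(size=max_n)
    for n in range(start_n, max_n + 1, step):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
            return True   # stop early and declare "significance"
    return False

reps = 1000
rate = sum(peeking_study() for _ in range(reps)) / reps
# rate ends up well above the nominal 5% Type I error rate
```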
Conducting tests for statistically significant differences using forest inventory data
James A. Westfall; Scott A. Pugh; John W. Coulston
2013-01-01
Many forest inventory and monitoring programs are based on a sample of ground plots from which estimates of forest resources are derived. In addition to evaluating metrics such as number of trees or amount of cubic wood volume, it is often desirable to make comparisons between resource attributes. To properly conduct statistical tests for differences, it is imperative...
Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks
2016-04-26
Systems, Statistics & Management Science, University of Alabama, USA. DISTRIBUTION A: Distribution approved for public release. [Only table-of-contents fragments are indexed; the report's applications to real networks include the 2012 FBS college football schedule network.]
After statistics reform : Should we still teach significance testing?
A. Hak (Tony)
2014-01-01
In the longer term, null hypothesis significance testing (NHST) will disappear because p-values are not informative and not replicable. Should we continue to teach in the future the procedures of then-abolished routines (i.e., NHST)? Three arguments are discussed for not teaching NHST in …
Statistical universals reveal the structures and functions of human music.
Savage, Patrick E; Brown, Steven; Sakai, Emi; Currie, Thomas E
2015-07-21
Music has been called "the universal language of mankind." Although contemporary theories of music evolution often invoke various musical universals, the existence of such universals has been disputed for decades and has never been empirically demonstrated. Here we combine a music-classification scheme with statistical analyses, including phylogenetic comparative methods, to examine a well-sampled global set of 304 music recordings. Our analyses reveal no absolute universals but strong support for many statistical universals that are consistent across all nine geographic regions sampled. These universals include 18 musical features that are common individually as well as a network of 10 features that are commonly associated with one another. They span not only features related to pitch and rhythm that are often cited as putative universals but also rarely cited domains including performance style and social context. These cross-cultural structural regularities of human music may relate to roles in facilitating group coordination and cohesion, as exemplified by the universal tendency to sing, play percussion instruments, and dance to simple, repetitive music in groups. Our findings highlight the need for scientists studying music evolution to expand the range of musical cultures and musical features under consideration. The statistical universals we identified represent important candidates for future investigation.
Wilkinson, Michael
2014-03-01
Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.
Beyond δ : Tailoring marked statistics to reveal modified gravity
Valogiannis, Georgios; Bean, Rachel
2018-01-01
Models that seek to explain cosmic acceleration through modifications to general relativity (GR) evade stringent Solar System constraints through a restoring, screening mechanism. Down-weighting the high-density, screened regions in favor of the low-density, unscreened ones offers the potential to enhance the amount of information carried in such modified gravity models. In this work, we assess the performance of a new "marked" transformation and perform a systematic comparison with the clipping and logarithmic transformations, in the context of ΛCDM and the symmetron and f(R) modified gravity models. Performance is measured in terms of the fractional boost in the Fisher information and the signal-to-noise ratio (SNR) for these models relative to the statistics derived from the standard density distribution. We find that all three statistics provide improved Fisher boosts over the basic density statistics. The model parameters for the marked and clipped transformations that best enhance signals and the Fisher boosts are determined. We also show that the mark is useful both as a Fourier- and real-space transformation; a marked correlation function also enhances the SNR relative to the standard correlation function, and can on mildly nonlinear scales show a significant difference between the ΛCDM and the modified gravity models. Our results demonstrate how a series of simple analytical transformations could dramatically increase the predicted information extracted on deviations from GR from large-scale surveys, and give the prospect for a much more feasible potential detection.
Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.
Breunig, Nancy A.
Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…
Changing viewer perspectives reveals constraints to implicit visual statistical learning.
Jiang, Yuhong V; Swallow, Khena M
2014-10-07
Statistical learning, the learning of environmental regularities to guide behavior, likely plays an important role in natural human behavior. One potential use is in search for valuable items. Because visual statistical learning can be acquired quickly and without intention or awareness, it could optimize search and thereby conserve energy. For this to be true, however, visual statistical learning needs to be viewpoint invariant, facilitating search even when people walk around. To test whether implicit visual statistical learning of spatial information is viewpoint independent, we asked participants to perform a visual search task from variable locations around a monitor placed flat on a stand. Unbeknownst to participants, the target was more often in some locations than others. In contrast to previous research on stationary observers, visual statistical learning failed to produce a search advantage for targets in high-probability regions that were stable within the environment but variable relative to the viewer. This failure was observed even when conditions for spatial updating were optimized. However, learning was successful when the rich locations were referenced relative to the viewer. We conclude that changing viewer perspective disrupts implicit learning of the target's location probability. This form of learning shows limited integration with spatial updating or spatiotopic representations. © 2014 ARVO.
Kellerer-Pirklbauer, Andreas
2016-04-01
Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding regarding the effects of present climate change on distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c. 60 ground temperature (surface and near-surface) monitoring sites which are located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, trends during the observation period, and regional patterns of changes. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale, a region-wide statistically significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale, no significant trend of any temperature-related parameter was revealed in most cases for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season in general reveals a significant warming, as confirmed by several different temperature-related parameters such as e.g. mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis, August shows the statistically most robust and strongest warming of all months, although regional differences occur. Despite the fact that the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise from temperature anomalies (e.g. warm winter 2006/07) or substantial variations in the winter…
"What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"
Ozturk, Elif
2012-01-01
The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
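The sample-size sensitivity that such "what if" analyses are meant to teach can be sketched in a few lines. The following is an illustrative sketch, not the paper's Excel/"R" material: `approx_p_value` and the effect size d = 0.4 are invented, and a large-sample two-group z-approximation stands in for a full t-test.

```python
import math

def approx_p_value(d, n):
    """Two-sided p for a standardized effect d with n subjects per group,
    using the large-sample normal approximation z = d * sqrt(n / 2)."""
    z = d * math.sqrt(n / 2)
    return math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))

# Same effect size, different sample sizes: only n changes the verdict.
for n in (10, 30, 100, 300):
    print(f"n = {n:3d}  p = {approx_p_value(0.4, n):.4f}")
```

Running the loop shows p shrinking from clearly non-significant to far below 0.05 while the effect size never changes, which is precisely the lesson such "what if" exercises convey.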
Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza
2014-01-01
This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…
Brouwer, D.; Meijer, R.R.; Zevalkink, D.J.
2013-01-01
Several researchers have emphasized that item response theory (IRT)-based methods should be preferred over classical approaches in measuring change for individual patients. In the present study we discuss and evaluate the use of IRT-based statistics to measure statistically significant individual…
Directory of Open Access Journals (Sweden)
Sadreyev Ruslan I
2004-08-01
Full Text Available Abstract Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities between (1) an MSA position and a set of predicted residue frequencies, and (2) two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by the ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.
Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior
2011-09-23
Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
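The procedure described above can be sketched as follows. This is a toy reconstruction under stated assumptions, not the author's code: it uses the Euclidean distance between normalized histograms, a pooled-resampling bootstrap null, and invented samples and bin edges in place of the cloud-object data.

```python
import random

def normalized_hist(data, edges):
    """Histogram of data over [edges[i], edges[i+1]) bins, normalized to sum to 1."""
    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return [c / len(data) for c in counts]

def euclidean(h1, h2):
    """Euclidean distance between two histograms of equal bin count."""
    return sum((a - b) ** 2 for a, b in zip(h1, h2)) ** 0.5

def bootstrap_p(sample_a, sample_b, edges, n_boot=1000, seed=0):
    """Significance level of the observed histogram distance under the
    null that both samples come from the pooled distribution."""
    rng = random.Random(seed)
    observed = euclidean(normalized_hist(sample_a, edges),
                         normalized_hist(sample_b, edges))
    pooled = list(sample_a) + list(sample_b)
    hits = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in sample_a]
        rb = [rng.choice(pooled) for _ in sample_b]
        if euclidean(normalized_hist(ra, edges),
                     normalized_hist(rb, edges)) >= observed:
            hits += 1
    return hits / n_boot
```

Other distance statistics (Jeffries-Matusita, Kuiper) drop into `euclidean`'s place unchanged, which is how the paper's three test statistics can be compared within one bootstrap harness.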
Health significance and statistical uncertainty. The value of P-value.
Consonni, Dario; Bertazzi, Pier Alberto
2017-10-27
The P-value is widely used as a summary statistic of scientific results. Unfortunately, there is a widespread tendency to dichotomize its value into "P<0.05" ("statistically significant") and "P>0.05" ("statistically not significant"), with the former implying a "positive" result and the latter a "negative" one. To show the unsuitability of such an approach when evaluating the effects of environmental and occupational risk factors, we provide examples of distorted use of P-value and of the negative consequences for science and public health of such a black-and-white vision. The rigid interpretation of P-value as a dichotomy favors the confusion between health relevance and statistical significance, discourages thoughtful thinking, and distracts attention from what really matters, the health significance. A much better way to express and communicate scientific results involves reporting effect estimates (e.g., risks, risk ratios or risk differences) and their confidence intervals (CI), which summarize and convey both health significance and statistical uncertainty. Unfortunately, many researchers do not usually consider the whole interval of the CI but only examine whether it includes the null value, thereby degrading this procedure to the same P-value dichotomy (statistical significance or not). In reporting statistical results of scientific research, present effect estimates with their confidence intervals and do not qualify the P-value as "significant" or "not significant".
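The recommended reporting style, an effect estimate with its confidence interval rather than a dichotomized P-value, can be sketched for a risk ratio. This is a generic Wald interval on the log scale with invented counts, not an example from the paper:

```python
import math

def risk_ratio_ci(cases_exp, n_exp, cases_unexp, n_unexp, z=1.96):
    """Risk ratio with an approximate 95% Wald confidence interval."""
    rr = (cases_exp / n_exp) / (cases_unexp / n_unexp)
    log_se = math.sqrt(1 / cases_exp - 1 / n_exp
                       + 1 / cases_unexp - 1 / n_unexp)
    lo = math.exp(math.log(rr) - z * log_se)
    hi = math.exp(math.log(rr) + z * log_se)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(30, 100, 20, 100)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

With these counts the estimate is RR = 1.50 with an interval spanning 1, so a dichotomizer would call the study "negative"; reporting the whole interval instead conveys both the range of plausible effects and the uncertainty.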
DEFF Research Database (Denmark)
Engsted, Tom
I comment on the controversy between McCloskey & Ziliak and Hoover & Siegler on statistical versus economic significance, in the March 2008 issue of the Journal of Economic Methodology. I argue that while McCloskey & Ziliak are right in emphasizing 'real error', i.e. non-sampling error that cannot be eliminated through specification testing, they fail to acknowledge those areas in economics, e.g. rational expectations macroeconomics and asset pricing, where researchers clearly distinguish between statistical and economic significance and where statistical testing plays a relatively minor role in model…
Zhang, Zhang
2012-03-22
Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. © 2012 Zhang et al; licensee BioMed Central Ltd.
Directory of Open Access Journals (Sweden)
Zhang Zhang
2012-03-01
Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.
Directory of Open Access Journals (Sweden)
Melissa Coulson
2010-07-01
Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.
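The "consistent versus conflicting" judgment studied above can be made concrete. A rough sketch with invented effect estimates and standard errors; note that interval overlap is only a heuristic for consistency, not a formal equivalence test:

```python
def wald_ci(estimate, se, z=1.96):
    """Approximate 95% confidence interval for an effect estimate."""
    return estimate - z * se, estimate + z * se

def overlap(ci1, ci2):
    """True when two intervals share any common values."""
    return ci1[0] <= ci2[1] and ci2[0] <= ci1[1]

study1 = wald_ci(0.50, 0.20)  # CI excludes 0: "statistically significant"
study2 = wald_ci(0.45, 0.25)  # CI includes 0: "non-significant"
print(study1, study2, overlap(study1, study2))
```

The two fictitious studies differ in significance status, yet their intervals overlap almost entirely, exactly the situation in which NHST-minded readers in the survey wrongly concluded the results conflicted.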
Farrell, Mary Beth
2018-06-01
This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic is, the higher is the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of the P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around the mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being…
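The Cohen κ-statistic described above can be computed directly from an agreement table. A minimal sketch with an invented two-reader, two-category table (the formula generalizes to any number of categories; the Fleiss variant for 3 or more readers is not shown):

```python
def cohen_kappa(table):
    """Cohen's kappa from a square table where table[i][j] counts cases
    rated category i by reader A and category j by reader B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: product of the two readers' marginal proportions.
    p_chance = sum((sum(table[i]) / n) * (sum(row[i] for row in table) / n)
                   for i in range(k))
    return (p_observed - p_chance) / (1 - p_chance)

# 80% raw agreement, but kappa = 0.6 once chance agreement is removed.
print(cohen_kappa([[40, 10], [10, 40]]))
```

This illustrates why κ is the more robust measure: the raw percentage (80%) overstates agreement that the marginals alone would produce half the time.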
Statistical significance of trends in monthly heavy precipitation over the US
Mahajan, Salil
2011-05-11
Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
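A minimal sketch of the kind of test compared above: a Mann-Kendall style τ for trend against time, with significance from Monte Carlo permutation rather than the parametric null. This is an illustrative reconstruction on a toy series, not the study's bootstrap code or data:

```python
import random

def kendall_tau(series):
    """Tau of a series against time: concordant minus discordant pairs,
    normalized by the total number of pairs."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n) for j in range(i + 1, n))
    return s / (n * (n - 1) / 2)

def trend_p(series, n_perm=1000, seed=1):
    """Two-sided Monte Carlo p-value for a monotonic trend: how often a
    random reshuffling of the series shows at least as strong a |tau|."""
    rng = random.Random(seed)
    observed = abs(kendall_tau(series))
    shuffled = list(series)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if abs(kendall_tau(shuffled)) >= observed:
            hits += 1
    return hits / n_perm
```

For a strongly increasing series the permuted |τ| essentially never reaches the observed value, so the estimated p is near zero, mirroring how the paper's Monte Carlo results agree with the traditional Kendall's τ test.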
Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.
Kieffer, Kevin M.; Thompson, Bruce
As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.
Deegear, James
This paper summarizes the literature regarding statistical significance testing with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…
Sierevelt, Inger N.; van Oldenrijk, Jakob; Poolman, Rudolf W.
2007-01-01
In this paper we describe several issues that influence the reporting of statistical significance in relation to clinical importance, since misinterpretation of p values is a common issue in orthopaedic literature. Orthopaedic research is tormented by the risks of false-positive (type I error) and…
Linting, Marielle; van Os, Bart Jan; Meulman, Jacqueline J.
2011-01-01
In this paper, the statistical significance of the contribution of variables to the principal components in principal components analysis (PCA) is assessed nonparametrically by the use of permutation tests. We compare a new strategy to a strategy used in previous research consisting of permuting the columns (variables) of a data matrix…
van Tulder, M.W.; Malmivaara, A.; Hayden, J.; Koes, B.
2007-01-01
STUDY DESIGN. Critical appraisal of the literature. OBJECTIVES. The objective of this study was to assess if results of back pain trials are statistically significant and clinically important. SUMMARY OF BACKGROUND DATA. There seems to be a discrepancy between conclusions reported by authors and…
P-Value, a true test of statistical significance? a cautionary note ...
African Journals Online (AJOL)
While it was not the intention of the founders of significance testing and hypothesis testing to have the two ideas intertwined as if they were complementary, the inconvenient marriage of the two practices into one coherent, convenient, incontrovertible and misinterpreted practice has dotted our standard statistics textbooks and…
DEFF Research Database (Denmark)
Jones, Allan; Sommerlund, Bo
2007-01-01
The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis…
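The advocated power analysis can be sketched for the simplest case, a two-sided two-sample comparison with standardized effect size d and n subjects per group, using the large-sample normal approximation. This is an illustrative sketch, not the authors' procedure:

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05."""
    z_crit = 1.96  # normal critical value for alpha = 0.05, two-sided
    noncentrality = d * math.sqrt(n_per_group / 2)
    return (normal_cdf(noncentrality - z_crit)
            + normal_cdf(-noncentrality - z_crit))

# A medium effect (d = 0.5) needs roughly 64 per group for ~80% power.
print(round(power_two_sample(0.5, 64), 3))
```

Evaluating the function across candidate sample sizes makes the small-sample problem the abstract describes concrete: with n = 20 per group the same effect would be detected well under half the time.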
DEFF Research Database (Denmark)
Jakobsen, Janus Christian; Wetterslev, Jorn; Winkel, Per
2014-01-01
BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews: (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most…
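Step (1) of the proposed procedure starts from fixed-effect and random-effects meta-analyses. The fixed-effect half can be sketched with standard inverse-variance weighting (the study estimates below are invented; the random-effects extension and the remaining seven steps are not shown):

```python
import math

def fixed_effect_meta(estimates, standard_errors, z=1.96):
    """Inverse-variance pooled estimate with an approximate 95% CI."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = 1 / math.sqrt(sum(weights))
    return pooled, pooled - z * pooled_se, pooled + z * pooled_se

pooled, lo, hi = fixed_effect_meta([0.2, 0.4], [0.1, 0.1])
print(f"pooled = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

Precisely because pooling narrows the interval, the abstract argues that a bare 95% CI or P-value is insufficient on its own and must be supplemented by the later steps of the procedure.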
Testing statistical significance scores of sequence comparison methods with structure similarity
Directory of Open Access Journals (Sweden)
Leunissen Jack AM
2006-10-01
Full Text Available Abstract Background In recent years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.
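The Monte-Carlo Z-score discussed above can be sketched generically: score the real pair, score many shuffled decoys, and standardize. The toy `identity_score` below stands in for Smith-Waterman, and the sequences are invented:

```python
import random
import statistics

def identity_score(seq_a, seq_b):
    """Toy alignment score: count of matching positions
    (a stand-in for a real Smith-Waterman score)."""
    return sum(x == y for x, y in zip(seq_a, seq_b))

def monte_carlo_z(score, seq_a, seq_b, n_shuffles=200, seed=7):
    """Standardize the observed score against scores of shuffled copies
    of seq_b, which preserve composition but destroy order."""
    rng = random.Random(seed)
    observed = score(seq_a, seq_b)
    letters = list(seq_b)
    null_scores = []
    for _ in range(n_shuffles):
        rng.shuffle(letters)
        null_scores.append(score(seq_a, "".join(letters)))
    return ((observed - statistics.mean(null_scores))
            / statistics.pstdev(null_scores))

print(monte_carlo_z(identity_score, "MKVLITAGPT" * 3, "MKVLITAGPT" * 3))
```

The cost the paper highlights is visible here: every query pays for hundreds of shuffled re-scorings, which is why the e-value's analytic form is attractive when it predicts structural similarity equally well.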
Diedrich, Alice; Schlegl, Sandra; Greetfeld, Martin; Fumi, Markus; Voderholzer, Ulrich
2018-03-01
This study examines the statistical and clinical significance of symptom changes during an intensive inpatient treatment program with a strong psychotherapeutic focus for individuals with severe bulimia nervosa. 295 consecutively admitted bulimic patients were administered the Structured Interview for Anorexic and Bulimic Syndromes-Self-Rating (SIAB-S), the Eating Disorder Inventory-2 (EDI-2), the Brief Symptom Inventory (BSI), and the Beck Depression Inventory-II (BDI-II) at treatment intake and discharge. Results indicated statistically significant symptom reductions with large effect sizes regarding severity of binge eating and compensatory behavior (SIAB-S), overall eating disorder symptom severity (EDI-2), overall psychopathology (BSI), and depressive symptom severity (BDI-II) even when controlling for antidepressant medication. The majority of patients showed either reliable (EDI-2: 33.7%, BSI: 34.8%, BDI-II: 18.1%) or even clinically significant symptom changes (EDI-2: 43.2%, BSI: 33.9%, BDI-II: 56.9%). Patients with clinically significant improvement were less distressed at intake and less likely to suffer from a comorbid borderline personality disorder when compared with those who did not improve to a clinically significant extent. Findings indicate that intensive psychotherapeutic inpatient treatment may be effective in about 75% of severely affected bulimic patients. For the remaining non-responding patients, inpatient treatment might be improved through an even stronger focus on the reduction of comorbid borderline personality traits.
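The reliable-change classification reported above is conventionally computed with the Jacobson-Truax reliable change index; the sketch below uses that standard construction with invented values (the pre/post scores, SD, and reliability are illustrative, not the study's data):

```python
import math

def reliable_change_index(pre, post, sd_baseline, reliability):
    """Jacobson-Truax RCI: raw change divided by the standard error of
    the difference; |RCI| > 1.96 indicates reliable change."""
    se_measurement = sd_baseline * math.sqrt(1 - reliability)
    se_difference = math.sqrt(2) * se_measurement
    return (post - pre) / se_difference

# Symptom score falling from 30 to 18 on a scale with SD 8, reliability .90:
print(reliable_change_index(30, 18, 8, 0.90))
```

An RCI this far below -1.96 counts as reliable improvement; clinically significant change additionally requires crossing a cutoff into the functional range, which this sketch does not model.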
Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B
2013-03-23
Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. To date, multiple challenges in protein MS data analysis remain: large-scale and complex data set management; MS peak identification and indexing; and high-dimensional peak differential analysis with the concurrent statistical-test-based false discovery rate (FDR). "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets to identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The presented web application supplies large-scale MS data online uploading and analysis with a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
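FDR control over many concurrent peak tests is typically done with the Benjamini-Hochberg step-up rule on the per-peak p-values. A minimal sketch of that standard rule, not the portal's implementation:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Indices of hypotheses rejected at false discovery rate q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    n_reject = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= q * rank / m:
            n_reject = rank  # largest rank passing the step-up criterion
    return sorted(order[:n_reject])

print(benjamini_hochberg([0.001, 0.01, 0.02, 0.04, 0.3]))
```

Unlike a Bonferroni cutoff, the threshold rises with rank, so a large peak panel keeps power while bounding the expected fraction of false discoveries at q.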
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
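The recommendation above, reporting an effect size with its confidence interval rather than a bare p-value, can be sketched as follows; the two groups and the normal-approximation CI are illustrative assumptions, not data or methods from the reviewed studies.

```python
import math, statistics

def cohens_d_with_ci(group_a, group_b):
    """Cohen's d with an approximate 95% CI (normal approximation to the
    standard error of d; a sketch, not a substitute for exact methods)."""
    na, nb = len(group_a), len(group_b)
    ma, mb = statistics.fmean(group_a), statistics.fmean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    d = (ma - mb) / sp
    se = math.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical outcome scores for two small groups (illustrative only).
a = [5.1, 4.8, 6.0, 5.5, 5.9, 5.2, 4.9, 5.7]
b = [4.2, 4.5, 4.1, 4.8, 4.3, 4.6, 4.0, 4.4]
d, ci = cohens_d_with_ci(a, b)
```

Reporting `d` with `ci` conveys both the magnitude of the effect and its precision, which a "significant/not significant" verdict alone does not.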
Statistical determination of significant curved I-girder bridge seismic response parameters
Seo, Junwon
2013-06-01
Curved steel bridges are commonly used at interchanges in transportation networks, and more of these structures continue to be designed and built in the United States. Though the use of these bridges continues to increase in locations that experience high seismicity, the effects of curvature and other parameters on their seismic behaviors have been neglected in current risk assessment tools. These tools can evaluate the seismic vulnerability of a transportation network using fragility curves. One critical component of fragility curve development for curved steel bridges is the completion of sensitivity analyses that help identify influential parameters related to their seismic response. In this study, an accessible inventory of existing curved steel girder bridges located primarily in the Mid-Atlantic United States (MAUS) was used to establish statistical characteristics used as inputs for a seismic sensitivity study. Critical seismic response quantities were captured using 3D nonlinear finite element models. Influential parameters for these quantities were identified using statistical tools based on an experimental Plackett-Burman Design (PBD), including Pareto optimal plots and prediction profiler techniques. The findings revealed that the influential parameters included the number of spans, radius of curvature, maximum span length, girder spacing, and cross-frame spacing. These parameters showed varying levels of influence on the critical bridge response.
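A Plackett-Burman screening design like the one used here can be built from a cyclic generator; below is the classical 12-run design for up to 11 factors, with a simple main-effect estimator. The response values are invented placeholders, not the study's bridge-response data.

```python
# Classical 12-run Plackett-Burman design: 11 cyclic shifts of the
# generator row plus a final row of all -1 levels.
GENERATOR = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]  # 11 factors

def plackett_burman_12():
    rows = [[GENERATOR[(j - shift) % 11] for j in range(11)]
            for shift in range(11)]
    rows.append([-1] * 11)
    return rows

def main_effects(design, responses):
    """Main effect of each factor: mean response at +1 minus mean at -1."""
    k = len(design[0])
    return [sum(r * row[j] for row, r in zip(design, responses))
            / (len(design) / 2) for j in range(k)]

design = plackett_burman_12()
# Hypothetical bridge-response values, one per run (illustrative only):
y = [3.1, 2.9, 4.2, 3.8, 2.7, 3.5, 4.0, 3.3, 2.8, 3.9, 3.6, 3.0]
effects = main_effects(design, y)
```

The design's columns are balanced and mutually orthogonal, which is what lets 11 factors be screened in only 12 runs; the largest `effects` entries would be the candidates for Pareto plots.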
Statistical significance estimation of a signal within the GooFit framework on GPUs
Directory of Open Access Journals (Sweden)
Cristella Leonardo
2017-01-01
Full Text Available In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is an open-source data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi-Process Service with a RooFit/PROOF-Lite process using multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood-ratio test statistic in different situations in which Wilks' theorem may or may not apply, depending on whether its regularity conditions are satisfied.
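The toy Monte Carlo logic described above can be illustrated in miniature: generate the likelihood-ratio statistic under the null many times, check its agreement with the chi-square law predicted by Wilks' theorem, and estimate a p-value directly from the toys. This sketch uses a trivial Gaussian-mean model, not the CMS amplitude fit, and the "observed" value is invented.

```python
import random

random.seed(7)

def lr_statistic(sample):
    """-2 ln(lambda) for H0: mu = 0 vs free mu, known sigma = 1.
    Under Wilks' theorem this is asymptotically chi-square with 1 dof."""
    n = len(sample)
    xbar = sum(sample) / n
    return n * xbar ** 2

# Toy Monte Carlo: distribution of the LR statistic under the null.
toys = [lr_statistic([random.gauss(0.0, 1.0) for _ in range(50)])
        for _ in range(2000)]

# Fraction of toys above the chi2(1) 95% quantile (3.841); near 0.05
# when Wilks' regularity conditions hold, as in this simple model.
frac = sum(t > 3.841 for t in toys) / len(toys)

# Hypothetical observed statistic -> p-value taken directly from the toys,
# the fallback used when Wilks' conditions are in doubt.
observed = 6.5
p_toy = sum(t >= observed for t in toys) / len(toys)
```

In boundary situations like the B+ → J/ψϕK+ analysis, `frac` would deviate from 0.05 and only the toy-based `p_toy` remains trustworthy, which is why high-statistics toys (and hence GPU speed-ups) matter.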
International Nuclear Information System (INIS)
Dudek, J; Szpak, B; Fornal, B; Porquet, M-G
2011-01-01
In this and the follow-up article we briefly discuss what we believe represents one of the most serious problems in contemporary nuclear structure: the question of the statistical significance of parametrizations of nuclear microscopic Hamiltonians and the implied predictive power of the underlying theories. In the present Part I, we introduce the main lines of reasoning of the so-called Inverse Problem Theory, an important sub-field of contemporary applied mathematics, illustrated here with the example of the Nuclear Mean-Field Approach.
Directory of Open Access Journals (Sweden)
Chen Shaotian
2012-04-01
Full Text Available Abstract Background Incarvillea sinensis is widely distributed from Southwest China to Northeast China and the Russian Far East. The distribution of this species was thought to be influenced by the uplift of the Qinghai-Tibet Plateau and Quaternary glaciation. To reveal the imprints of geological events on the spatial genetic structure of Incarvillea sinensis, we examined two cpDNA segments (trnH-psbA and trnS-trnfM) in 705 individuals from 47 localities. Results A total of 16 haplotypes was identified, and significant genetic differentiation was revealed (GST = 0.843, NST = 0.975). Conclusions The results revealed that the uplift of the Qinghai-Tibet Plateau likely resulted in the significant divergence between the lineage in the eastern Qinghai-Tibet Plateau and the one outside this area. The diverse niches in the eastern Qinghai-Tibet Plateau created a wide spectrum of habitats to accumulate and accommodate new mutations. The features of the genetic diversity of populations outside the eastern Qinghai-Tibet Plateau seem to reveal the imprints of extinction during the glacial periods and of interglacial and postglacial recolonization. Our study is a typical case of the significance of the uplift of the Qinghai-Tibet Plateau and Quaternary glaciation in the spatial genetic structure of East Asian plants, and it sheds new light on the evolution of biodiversity in the Qinghai-Tibet Plateau at the intraspecific level.
Van Aert, R.C.M.; Van Assen, M.A.L.M.
2018-01-01
The unrealistically high rate of positive results within psychology has increased the attention to replication research. However, researchers who conduct a replication and want to statistically combine the results of their replication with a statistically significant original study encounter
A Note on Comparing the Power of Test Statistics at Low Significance Levels.
Morris, Nathan; Elston, Robert
2011-01-01
It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as α = 5 × 10^-8, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
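The phenomenon can be reproduced with a small Monte Carlo: compare a 1-df test against a 4-df test that captures more signal (a larger noncentrality), at α = 0.05 and at α = 5 × 10^-8. The noncentrality values and critical values below are our own illustrative choices, not the note's examples.

```python
import random

random.seed(1)

def power_mc(df, ncp, alpha_crit, n_sim=20000):
    """Monte Carlo power of a chi-square test: P(stat > critical value)
    when the statistic is noncentral chi-square(df, ncp)."""
    mu = (ncp / df) ** 0.5  # spread the noncentrality evenly over df terms
    hits = 0
    for _ in range(n_sim):
        stat = sum(random.gauss(mu, 1.0) ** 2 for _ in range(df))
        hits += stat > alpha_crit
    return hits / n_sim

# Central chi-square critical values (from standard tables/approximations):
# df=1: 3.841 at alpha=0.05, ~29.72 at alpha=5e-8
# df=4: 9.488 at alpha=0.05, ~39.70 at alpha=5e-8
p_a_05 = power_mc(df=1, ncp=25.0, alpha_crit=3.841)
p_b_05 = power_mc(df=4, ncp=35.0, alpha_crit=9.488)
p_a_low = power_mc(df=1, ncp=25.0, alpha_crit=29.72)
p_b_low = power_mc(df=4, ncp=35.0, alpha_crit=39.70)
```

At α = 0.05 both tests are near-saturated and look equivalent, while at α = 5 × 10^-8 the 4-df test pulls clearly ahead: the extra degrees of freedom cost comparatively little at very low alpha, exactly the point of the note.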
Energy Technology Data Exchange (ETDEWEB)
Crow, C.J.
1985-01-01
Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.
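The G-test of homogeneity used in the study can be sketched as follows; the count table is an invented stand-in for the bioherm census data, which the abstract does not reproduce.

```python
import math

def g_test(table):
    """G-test of homogeneity: G = 2 * sum(O * ln(O/E)), compared against a
    chi-square distribution with (rows-1)*(cols-1) degrees of freedom."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            if obs > 0:
                exp = row_tot[i] * col_tot[j] / grand
                g += 2.0 * obs * math.log(obs / exp)
    dof = (len(table) - 1) * (len(col_tot) - 1)
    return g, dof

# Hypothetical counts of two organism groups across three bioherms
# (illustrative numbers only, not the study's data).
counts = [[40, 10, 25],
          [15, 30, 20]]
g, dof = g_test(counts)
heterogeneous = g > 5.991  # chi-square 95% critical value for 2 dof
```

A large G relative to the critical value indicates that the organism-group composition differs across bioherms, i.e. the lack of homogeneity the study reports.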
Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos
2015-10-01
To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significance findings. The results of this study were compared with similar studies in dentistry with random-effects meta-analyses. From the 2323 included studies 71% of them reported statistically significant results, with the significant results ranging from 47% to 86%. Multivariable modeling identified that geographical area and involvement of statistician were predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results was increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to various study designs (difference: 30%, 95% CI: 11-49%). Likewise the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significant (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.
Statistical evaluation of waveform collapse reveals scale-free properties of neuronal avalanches
Directory of Open Access Journals (Sweden)
Aleena Shaukat
2016-04-01
Full Text Available Neural avalanches are a prominent form of brain activity characterized by network-wide bursts whose statistics follow a power-law distribution with a slope near 3/2. Recent work suggests that avalanches of different durations can be rescaled and thus collapsed together. This collapse mirrors work in statistical physics where it is proposed to form a signature of systems evolving in a critical state. However, no rigorous statistical test has been proposed to examine the degree to which neuronal avalanches collapse together. Here, we describe a statistical test based on functional data analysis, where raw avalanches are first smoothed with a Fourier basis, then rescaled using a time-warping function. Finally, an F-ratio test combined with a bootstrap permutation is employed to determine if avalanches collapse together in a statistically reliable fashion. To illustrate this approach, we recorded avalanches from cortical cultures on multielectrode arrays as in previous work. Analyses show that avalanches of various durations can be collapsed together in a statistically robust fashion. However, a principal components analysis revealed that the offset of avalanches resulted in marked variance in the time-warping function, thus arguing for limitations to the strict fractal nature of avalanche dynamics. We compared these results with those obtained from cultures treated with an AMPA/NMDA receptor antagonist (APV/DNQX), which yield a power-law of avalanche durations with a slope greater than 3/2. When collapsed together, these avalanches showed marked misalignments both at onset and offset time-points. In sum, the proposed statistical evaluation suggests the presence of scale-free avalanche waveforms and constitutes an avenue for examining critical dynamics in neuronal systems.
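The core of the collapse procedure, rescaling avalanches of different durations onto a common grid and measuring how tightly they overlap, can be sketched in a much-simplified form (no Fourier smoothing, time-warping, F-test, or bootstrap). The synthetic parabolic avalanches below are a stand-in for real multielectrode recordings.

```python
import random

random.seed(3)

def rescale(profile, n_points=20):
    """Linearly interpolate an avalanche profile onto a common unit-time grid."""
    m = len(profile)
    out = []
    for k in range(n_points):
        t = k * (m - 1) / (n_points - 1)
        i = min(int(t), m - 2)
        frac = t - i
        out.append(profile[i] * (1 - frac) + profile[i + 1] * frac)
    return out

# Synthetic avalanches of different durations sharing a parabolic shape,
# amplitude scaled with duration, plus noise (illustrative only).
avalanches = []
for dur in (10, 20, 40, 80):
    shape = [4 * x * (1 - x) for x in (i / (dur - 1) for i in range(dur))]
    avalanches.append([dur * s + random.gauss(0, 0.5) for s in shape])

curves = [rescale(a) for a in avalanches]
# Normalize amplitude, then measure residual spread around the mean curve;
# a small residual indicates the avalanches collapse onto one waveform.
norm = [[v / max(c) for v in c] for c in curves]
mean_curve = [sum(c[k] for c in norm) / len(norm) for k in range(20)]
residual = sum((c[k] - mean_curve[k]) ** 2 for c in norm for k in range(20))
```

The published test replaces this residual with an F-ratio whose null distribution is obtained by bootstrap permutation, but the collapse step itself is the rescaling shown here.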
Wu, Jing Qin; Wang, Xi; Beveridge, Natalie J.; Tooney, Paul A.; Scott, Rodney J.; Carr, Vaughan J.; Cairns, Murray J.
2012-01-01
Background While hybridization based analysis of the cortical transcriptome has provided important insight into the neuropathology of schizophrenia, it represents a restricted view of disease-associated gene activity based on predetermined probes. By contrast, sequencing technology can provide un-biased analysis of transcription at nucleotide resolution. Here we use this approach to investigate schizophrenia-associated cortical gene expression. Methodology/Principal Findings The data was generated from 76 bp reads of RNA-Seq, aligned to the reference genome and assembled into transcripts for quantification of exons, splice variants and alternative promoters in postmortem superior temporal gyrus (STG/BA22) from 9 male subjects with schizophrenia and 9 matched non-psychiatric controls. Differentially expressed genes were then subjected to further sequence and functional group analysis. The output, amounting to more than 38 Gb of sequence, revealed significant alteration of gene expression including many previously shown to be associated with schizophrenia. Gene ontology enrichment analysis followed by functional map construction identified three functional clusters highly relevant to schizophrenia including neurotransmission related functions, synaptic vesicle trafficking, and neural development. Significantly, more than 2000 genes displayed schizophrenia-associated alternative promoter usage and more than 1000 genes showed differential splicing (FDR<0.05). Both types of transcriptional isoforms were exemplified by reads aligned to the neurodevelopmentally significant doublecortin-like kinase 1 (DCLK1) gene. Conclusions/Significance This study provided the first deep and un-biased analysis of schizophrenia-associated transcriptional diversity within the STG, and revealed variants with important implications for the complex pathophysiology of schizophrenia. PMID:22558445
Indirectional statistics and the significance of an asymmetry discovered by Birch
International Nuclear Information System (INIS)
Kendall, D.G.; Young, G.A.
1984-01-01
Birch (1982, Nature, 298, 451) reported an apparent 'statistical asymmetry of the Universe'. The authors here develop 'indirectional analysis' as a technique for investigating statistical effects of this kind and conclude that the reported effect (whatever may be its origin) is strongly supported by the observations. The estimated pole of the asymmetry is at RA 13h 30m, Dec. -37deg. The angular error in its estimation is unlikely to exceed 20-30deg. (author)
Directory of Open Access Journals (Sweden)
Jing Qin Wu
Full Text Available While hybridization based analysis of the cortical transcriptome has provided important insight into the neuropathology of schizophrenia, it represents a restricted view of disease-associated gene activity based on predetermined probes. By contrast, sequencing technology can provide un-biased analysis of transcription at nucleotide resolution. Here we use this approach to investigate schizophrenia-associated cortical gene expression. The data was generated from 76 bp reads of RNA-Seq, aligned to the reference genome and assembled into transcripts for quantification of exons, splice variants and alternative promoters in postmortem superior temporal gyrus (STG/BA22) from 9 male subjects with schizophrenia and 9 matched non-psychiatric controls. Differentially expressed genes were then subjected to further sequence and functional group analysis. The output, amounting to more than 38 Gb of sequence, revealed significant alteration of gene expression including many previously shown to be associated with schizophrenia. Gene ontology enrichment analysis followed by functional map construction identified three functional clusters highly relevant to schizophrenia including neurotransmission related functions, synaptic vesicle trafficking, and neural development. Significantly, more than 2000 genes displayed schizophrenia-associated alternative promoter usage and more than 1000 genes showed differential splicing (FDR<0.05). Both types of transcriptional isoforms were exemplified by reads aligned to the neurodevelopmentally significant doublecortin-like kinase 1 (DCLK1) gene. This study provided the first deep and un-biased analysis of schizophrenia-associated transcriptional diversity within the STG, and revealed variants with important implications for the complex pathophysiology of schizophrenia.
Directory of Open Access Journals (Sweden)
Mashhood Ahmed Sheikh
2017-08-01
mediate the association between childhood adversity and ADS in adulthood. However, when education was excluded as a mediator-response confounding variable, the indirect effect of childhood adversity on ADS in adulthood was statistically significant (p < 0.05). This study shows that a careful inclusion of potential confounding variables is important when assessing mediation.
Directory of Open Access Journals (Sweden)
Vujović Svetlana R.
2013-01-01
Full Text Available This paper illustrates the utility of multivariate statistical techniques for the analysis and interpretation of water-quality data sets and for the identification of pollution sources/factors, with a view to obtaining better information about water quality and designing a monitoring network for the effective management of water resources. Multivariate statistical techniques, such as factor analysis (FA)/principal component analysis (PCA) and cluster analysis (CA), were applied to evaluate variations in, and to interpret, a water-quality data set obtained from natural water bodies during 2010, when 13 parameters were monitored at 33 different sites. FA/PCA attempts to explain the correlations between the observations in terms of underlying factors, which are not directly observable. Factor analysis is applied to the physico-chemical parameters of natural water bodies with the aim of classification and data summation, as well as segmentation of heterogeneous data sets into smaller homogeneous subsets. Factor loadings were categorized as strong and moderate, corresponding to absolute loading values of >0.75 and 0.75-0.50, respectively. Four principal factors were obtained with eigenvalues >1, together explaining more than 78% of the total variance in the water data sets, which is adequate to give good prior information regarding data structure. Each factor that is significantly related to specific variables represents a different dimension of water quality. The first factor F1, accounting for 28% of the total variance, represents the hydrochemical dimension of water quality. The second factor F2, accounting for 18% of the total variance, may be taken as a factor of water eutrophication. The third factor F3, accounting for 17% of the total variance, represents the influence of point sources of pollution on water quality. The fourth factor F4, accounting for 13% of the total variance, may be taken as an ecological dimension of water quality. Cluster analysis (CA) is an
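The PCA workflow described above (standardize, decompose the correlation matrix, retain components with eigenvalues > 1) can be sketched as follows. The 33-site by 13-parameter matrix below is synthetic, generated from four latent factors to mimic the study's setting; the monitoring data itself is not reproduced in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the water-quality data: 33 sites x 13 parameters
# driven by four underlying "pollution factors" plus noise (illustrative).
latent = rng.normal(size=(33, 4))
loadings = rng.normal(scale=0.8, size=(4, 13))
data = latent @ loadings + rng.normal(scale=0.4, size=(33, 13))

# Standardize, then PCA via the correlation matrix.
z = (data - data.mean(axis=0)) / data.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
eigvals = eigvals[::-1]  # eigh returns ascending order; make it descending

n_retained = int(np.sum(eigvals > 1.0))  # Kaiser criterion, as in the study
explained = float(eigvals[:n_retained].sum() / eigvals.sum())
```

Each retained component would then be interpreted by inspecting which parameters load strongly (|loading| > 0.75) or moderately (0.50-0.75) on it, as done for factors F1-F4 above.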
Energy Technology Data Exchange (ETDEWEB)
Fhager, V
2000-01-01
In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator-driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We proceed towards the requested results in two steps. First, models of the number-dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second-moment calculations using number-dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only the slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy
International Nuclear Information System (INIS)
Fhager, V.
2000-01-01
In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator-driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We proceed towards the requested results in two steps. First, models of the number-dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second-moment calculations using number-dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only the slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy
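The central comparison of the report, second moments computed with number-dependent versus averaged source-energy distributions, can be caricatured with a deliberately crude toy model. Everything below (multiplicities, the exponential energy model, the fission probability curve) is our own invented stand-in, not the report's physics.

```python
import random

random.seed(11)

def simulate(number_dependent, n_events=20000):
    """Toy model: each source event injects n spallation neutrons; each
    neutron triggers a fission with an energy-dependent probability.
    If number_dependent, the mean neutron energy falls as multiplicity
    grows (a fixed energy budget shared among more neutrons)."""
    counts = []
    for _ in range(n_events):
        n = random.choice([5, 10, 15, 20])
        mean_e = 20.0 / n if number_dependent else 2.0  # same overall average
        fissions = 0
        for _ in range(n):
            e = random.expovariate(1.0 / mean_e)
            p = min(0.9, 0.1 * e)  # crude energy-dependent fission probability
            fissions += random.random() < p
        counts.append(fissions)
    mean = sum(counts) / len(counts)
    second = sum(c * c for c in counts) / len(counts)
    return mean, second

m_avg, s_avg = simulate(number_dependent=False)
m_dep, s_dep = simulate(number_dependent=True)
```

Even in this caricature the two treatments give clearly different second moments of the fission count, which is the qualitative effect the report quantifies with proper transport physics.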
DEFF Research Database (Denmark)
Madsen, Tobias
2017-01-01
In the present thesis I develop, implement and apply statistical methods for detecting genomic elements implicated in cancer development and progression. This is done in two separate bodies of work. The first uses the somatic mutation burden to distinguish cancer driver mutations from passenger m...
Lin, Hong; Rao, Jun; Shi, Jianxin; Hu, Chaoyang; Cheng, Fang; Wilson, Zoe A; Zhang, Dabing; Quan, Sheng
2014-09-01
Soybean [Glycine max (L.) Merr.] is one of the world's major crops, and soybean seeds are a rich and important resource for proteins and oils. While "omics" studies, such as genomics, transcriptomics, and proteomics, have been widely applied in soybean molecular research, fewer metabolomic studies have been conducted for large-scale detection of low molecular weight metabolites, especially in soybean seeds. In this study, we investigated the seed metabolomes of 29 common soybean cultivars through combined gas chromatography-mass spectrometry and ultra-performance liquid chromatography-tandem mass spectrometry. One hundred sixty-nine named metabolites were identified and subsequently used to construct a metabolic network of mature soybean seed. Among the 169 detected metabolites, 104 were found to be significantly variable in their levels across tested cultivars. Metabolite markers that could be used to distinguish genetically related soybean cultivars were also identified, and metabolite-metabolite correlation analysis revealed some significant associations within the same or among different metabolite groups. Findings from this work may potentially provide the basis for further studies on both soybean seed metabolism and metabolic engineering to improve soybean seed quality and yield. © 2014 Institute of Botany, Chinese Academy of Sciences.
Institute of Scientific and Technical Information of China (English)
Hong Lin; Jun Rao; Jianxin Shi; Chaoyang Hu; Fang Cheng; Zoe A Wilson; Dabing Zhang; Sheng Quan
2014-01-01
Soybean [Glycine max (L.) Merr.] is one of the world’s major crops, and soybean seeds are a rich and important resource for proteins and oils. While “omics” studies, such as genomics, transcriptomics, and proteomics, have been widely applied in soybean molecular research, fewer metabolomic studies have been conducted for large-scale detection of low molecular weight metabolites, especially in soybean seeds. In this study, we investigated the seed metabolomes of 29 common soybean cultivars through combined gas chromatography-mass spectrometry and ultra-performance liquid chromatography-tandem mass spectrometry. One hundred sixty-nine named metabolites were identified and subsequently used to construct a metabolic network of mature soybean seed. Among the 169 detected metabolites, 104 were found to be significantly variable in their levels across tested cultivars. Metabolite markers that could be used to distinguish genetically related soybean cultivars were also identified, and metabolite-metabolite correlation analysis revealed some significant associations within the same or among different metabolite groups. Findings from this work may potentially provide the basis for further studies on both soybean seed metabolism and metabolic engineering to improve soybean seed quality and yield.
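A metabolite-metabolite correlation analysis of the kind described above reduces to computing pairwise correlations across cultivars and keeping pairs above a cutoff as network edges. The metabolite names, levels, and the simulated covariation below are illustrative assumptions, not the study's measurements.

```python
import math, random

random.seed(5)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Hypothetical levels of four metabolites across 29 cultivars; the two
# sugars are simulated to covary (names and numbers illustrative only).
cultivars = 29
sucrose = [random.gauss(10, 2) for _ in range(cultivars)]
raffinose = [0.5 * s + random.gauss(0, 0.5) for s in sucrose]
oleate = [random.gauss(5, 1) for _ in range(cultivars)]
linoleate = [random.gauss(8, 1.5) for _ in range(cultivars)]

metabolites = {"sucrose": sucrose, "raffinose": raffinose,
               "oleate": oleate, "linoleate": linoleate}

# Edges of a simple correlation network: |r| above a chosen cutoff.
edges = [(a, b) for a in metabolites for b in metabolites
         if a < b and abs(pearson(metabolites[a], metabolites[b])) > 0.7]
```

Only the genuinely covarying pair should survive the cutoff, which is how such an analysis surfaces associations "within the same or among different metabolite groups".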
Thomas, David; Finan, Chris; Newport, Melanie J; Jones, Susan
2015-10-01
The complexity of DNA can be quantified using estimates of entropy. Variation in DNA complexity is expected between the promoters of genes with different transcriptional mechanisms, namely housekeeping (HK) and tissue specific (TS). The former are transcribed constitutively to maintain general cellular functions, and the latter are transcribed in restricted tissue and cell types for specific molecular events. It is known that promoter features in the human genome are related to tissue specificity, but this has been difficult to quantify on a genomic scale. If entropy effectively quantifies DNA complexity, calculating the entropies of HK and TS gene promoters as profiles may reveal significant differences. Entropy profiles were calculated for a total dataset of 12,003 human gene promoters and for 501 housekeeping (HK) and 587 tissue specific (TS) human gene promoters. The mean profiles show that the TS promoters have a significantly lower entropy than the HK promoters. The entropy distributions for the 3 datasets show that promoter entropies could be used to identify novel HK genes. Functional features comprise DNA sequence patterns that are non-random, and hence they have lower entropies. The lower entropy of TS gene promoters can be explained by a higher density of positive and negative regulatory elements, required for genes with complex spatial and temporal expression. Copyright © 2015 Elsevier Ltd. All rights reserved.
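The entropy-profile idea can be sketched with sliding-window Shannon entropy over base composition; the two toy "promoters" below are invented to show the contrast between patterned and uniform sequence, and are not from the study's dataset.

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits) of the base composition of a sequence window."""
    counts = Counter(seq)
    total = len(seq)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def entropy_profile(promoter, window=20, step=10):
    """Sliding-window entropy profile along a promoter sequence."""
    return [shannon_entropy(promoter[i:i + window])
            for i in range(0, len(promoter) - window + 1, step)]

# Toy promoters (illustrative): a repetitive, element-dense stretch vs a
# more uniform one. Lower entropy means more patterned sequence.
patterned = "TATATATATATATATATATA" * 3
uniform = "ACGTTGCAACGGATCCGTAC" * 3

low = sum(entropy_profile(patterned)) / len(entropy_profile(patterned))
high = sum(entropy_profile(uniform)) / len(entropy_profile(uniform))
```

The dinucleotide repeat uses only two bases equally, so every window scores exactly 1 bit, while the mixed-composition sequence scores close to the 2-bit maximum; the study's HK/TS contrast is the same comparison made genome-wide.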
Survey of French spine surgeons reveals significant variability in spine trauma practices in 2013.
Lonjon, G; Grelat, M; Dhenin, A; Dauzac, C; Lonjon, N; Kepler, C K; Vaccaro, A R
2015-02-01
In France, attempts to define common ground during spine surgery meetings have revealed significant variability in clinical practices across different schools of surgery and the two specialities involved in spine surgery, namely, neurosurgery and orthopaedic surgery. To objectively characterise this variability by performing a survey based on a fictitious spine trauma case. Our working hypothesis was that significant variability existed in trauma practices and that this variability was related to a lack of strong scientific evidence in spine trauma care. We performed a cross-sectional survey based on a clinical vignette describing a 31-year-old male with an L1 burst fracture and neurologic symptoms (numbness). Surgeons received the vignette and a 14-item questionnaire on the management of this patient. For each question, surgeons had to choose among five possible answers. Differences in answers across surgeons were assessed using the Index of Qualitative Variability (IQV), in which 0 indicates no variability and 1 maximal variability. Surgeons also received a questionnaire about their demographics and surgical experience. Of 405 invited spine surgeons, 200 responded to the survey. Five questions had an IQV greater than 0.9, seven an IQV between 0.5 and 0.9, and two an IQV lower than 0.5. Variability was greatest about the need for MRI (IQV=0.93), degree of urgency (IQV=0.93), need for fusion (IQV=0.92), need for post-operative bracing (IQV=0.91), and routine removal of instrumentation (IQV=0.94). Variability was lowest for questions about the need for surgery (IQV=0.42) and use of the posterior approach (IQV=0.36). Answers were influenced by surgeon specialty, age, experience level, and type of centre. Clinical practice regarding spine trauma varies widely in France. Little published evidence is available on which to base recommendations that would diminish this variability. Copyright © 2015. Published by Elsevier Masson SAS.
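The Index of Qualitative Variability used to score each survey item has a simple closed form; a sketch follows, with invented answer tallies rather than the survey's actual responses.

```python
from collections import Counter

def iqv(answers, n_categories=5):
    """Index of Qualitative Variability: 0 = complete agreement,
    1 = answers spread evenly across all categories.
    IQV = (k / (k - 1)) * (1 - sum of squared answer proportions)."""
    n = len(answers)
    props = [c / n for c in Counter(answers).values()]
    k = n_categories
    return (k / (k - 1)) * (1 - sum(p * p for p in props))

# Hypothetical tallies for two items with five answer options each:
consensus = ["surgery"] * 180 + ["no surgery"] * 20   # strong agreement
split = ["A"] * 40 + ["B"] * 40 + ["C"] * 40 + ["D"] * 40 + ["E"] * 40

iqv_low = iqv(consensus)   # near 0, like the "need for surgery" item
iqv_high = iqv(split)      # 1 when answers are evenly split, like "need for MRI"
```

Items where 200 surgeons scatter evenly across the five options score near 1 (maximal practice variability), while items with a dominant answer score near 0.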
Meckfessel, Sandra; Stühmer, Constantin; Bormann, Kai-Hendrik; Kupka, Thomas; Behrends, Marianne; Matthies, Herbert; Vaske, Bernhard; Stiesch, Meike; Gellrich, Nils-Claudius; Rücker, Martin
2011-01-01
Because a traditionally instructed dental radiology lecture course is very time-consuming and labour-intensive, online courseware, including an interactive-learning module, was implemented to support the lectures. The purpose of this study was to evaluate the perceptions of students who have worked with web-based courseware as well as the effect on their results in final examinations. Users (n(3+4)=138) had access to the e-program from any networked computer at any time. Two groups (n(3)=71, n(4)=67) had to pass a final exam after using the e-course. Results were compared with two groups (n(1)=42, n(2)=48) who had studied the same content by attending traditional lectures. In addition a survey of the students was statistically evaluated. Most of the respondents reported a positive attitude towards e-learning and would have appreciated more access to computer-assisted instruction. Two years after initiating the e-course the failure rate in the final examination dropped significantly, from 40% to less than 2%. The very positive response to the e-program and improved test scores demonstrated the effectiveness of our e-course as a learning aid. Interactive modules in step with clinical practice provided learning that is not achieved by traditional teaching methods alone. To what extent staff savings are possible is part of a further study. Copyright © 2010 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Statistical Analysis and Evaluation of the Depth of the Ruts on Lithuanian State Significance Roads
Directory of Open Access Journals (Sweden)
Erinijus Getautis
2011-04-01
Full Text Available The aim of this work is to gather information about rut depth on the national flexible-pavement roads, to determine its statistical dispersion indices and to assess their compliance with the applicable requirements. An analysis of scientific works on rut appearance in asphalt and its influence on driving is presented, as are dynamic models of rut formation in asphalt. Experimental data on rut depth dispersion on the Lithuanian national highway Vilnius–Kaunas are presented. Conclusions are formulated and presented. Article in Lithuanian
Directory of Open Access Journals (Sweden)
Dominic Beaulieu-Prévost
2006-03-01
Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles have demonstrated that this method is problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
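The three-value logic the article describes can be sketched directly: choose a minimal substantial effect, compute a confidence interval, and read off one of the three conclusions. A minimal illustration (the normal-approximation CI and the 0.3 threshold are assumptions made for the example, not values from the article):

```python
import math
import statistics

def mean_ci(sample, z=1.96):
    """Normal-approximation 95% confidence interval for a sample mean."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

def three_value_conclusion(ci, minimal_effect):
    """CI-based three-value logic replacing the notion of statistical power."""
    lo, hi = ci
    if lo >= minimal_effect:
        return "probable presence of a substantial effect"
    if hi < minimal_effect:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"

# CI lies entirely above the minimal substantial effect of 0.3
print(three_value_conclusion((0.45, 0.80), minimal_effect=0.3))
```

Note how the decision tests a range hypothesis (effect ≥ 0.3) rather than the point null hypothesis of zero effect.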
The SACE Review Panel's Final Report: Significant Flaws in the Analysis of Statistical Data
Gregory, Kelvin
2006-01-01
The South Australian Certificate of Education (SACE) is a credential and formal qualification within the Australian Qualifications Framework. A recent review of the SACE outlined a number of recommendations for significant changes to this certificate. These recommendations were the result of a process that began with the review panel…
Holtzman, Jessica N; Miller, Shefali; Hooshmand, Farnaz; Wang, Po W; Chang, Kiki D; Hill, Shelley J; Rasgon, Natalie L; Ketter, Terence A
2015-07-01
The strengths and limitations of considering childhood- and adolescent-onset bipolar disorder (BD) separately versus together remain to be established. We assessed this issue. BD patients referred to the Stanford Bipolar Disorder Clinic during 2000-2011 were assessed with the Systematic Treatment Enhancement Program for BD Affective Disorders Evaluation. Patients with childhood- and adolescent-onset were compared to those with adult-onset for 7 unfavorable bipolar illness characteristics with replicated associations with early-onset patients. Among 502 BD outpatients, those with childhood- and adolescent-onset (13-18 years, N=218) had significantly higher rates for 4/7 unfavorable illness characteristics, including lifetime comorbid anxiety disorder, at least ten lifetime mood episodes, lifetime alcohol use disorder, and prior suicide attempt, than those with adult-onset (>18 years, N=174). Childhood- but not adolescent-onset BD patients also had significantly higher rates of first-degree relative with mood disorder, lifetime substance use disorder, and rapid cycling in the prior year. Patients with pooled childhood/adolescent-onset compared to adult-onset had significantly higher rates for 5/7 of these unfavorable illness characteristics, while patients with childhood- compared to adolescent-onset had significantly higher rates for 4/7 of these unfavorable illness characteristics. The Caucasian, insured, suburban, low-substance-abuse, American specialty-clinic-referred sample limits generalizability. Onset age is based on retrospective recall. Childhood- compared to adolescent-onset BD was more robustly related to unfavorable bipolar illness characteristics, so pooling these groups attenuated such relationships. Further study is warranted to determine the extent to which adolescent-onset BD represents an intermediate phenotype between childhood- and adult-onset BD. Copyright © 2015 Elsevier B.V. All rights reserved.
2013-09-11
..., 1965 (79 Stat. 985; 22 U.S.C. 2459), Executive Order 12047 of March 27, 1978, the Foreign Affairs...). The mailing address is U.S. Department of State, SA-5, L/PD, Fifth Floor (Suite 5H03), Washington, DC... Determinations: ``Beauty Revealed: Images of Women in Qing Dynasty Chinese Painting'' SUMMARY: Notice is hereby...
2012-06-08
... exhibition ``Revealing the African Presence in Renaissance Europe'' imported from abroad for temporary... exhibit objects at The Walters Art Museum, Baltimore, MD, from on or about October 14, 2012, until on or about January 21, 2013; at the Princeton University Art Museum, Princeton, NJ, from on or about February...
Massey, J. L.
1976-01-01
The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
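For context, the standard interval that such an extension must address is the exact (Clopper-Pearson) binomial confidence interval, which remains computable even for one or two observed errors in millions of trials. A stdlib-only sketch of that standard interval (this is the conventional construction, not Massey's extension):

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); cheap when k is small."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided (1 - alpha) CI for an error probability,
    given k observed errors in n decoding trials."""
    def solve(kk, goal):
        lo, hi = 0.0, 1.0
        for _ in range(100):           # bisection: binom_cdf is decreasing in p
            mid = (lo + hi) / 2
            if binom_cdf(kk, n, mid) > goal:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else solve(k - 1, 1 - alpha / 2)
    upper = 1.0 if k == n else solve(k, alpha / 2)
    return lower, upper

# Two decoding errors observed in a million simulated trials
lo, hi = clopper_pearson(2, 10**6)
print(f"95% CI for the error probability: ({lo:.2e}, {hi:.2e})")
```

Even with only two observed errors, the exact interval pins the error probability within roughly an order of magnitude, which illustrates the paper's point that a handful of decoding errors carries surprising significance.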
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K
2016-11-05
The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
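The geometric-mean criterion used above reduces to a simple comparison between the two sides of the reaction. A sketch with hypothetical frontier-orbital energies in eV (the numbers are illustrative, not from the paper), using the Koopmans-type working definition of hardness η ≈ (E_LUMO − E_HOMO)/2:

```python
import math

def hardness(e_homo, e_lumo):
    """Global hardness from frontier-orbital energies (Koopmans-type approximation)."""
    return (e_lumo - e_homo) / 2

def geometric_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

def obeys_mhp(reactant_hardnesses, product_hardnesses):
    """MHP check for an exothermic reaction: products should, on
    (geometric) average, be harder than reactants."""
    return geometric_mean(product_hardnesses) > geometric_mean(reactant_hardnesses)

# Hypothetical two-reactant, two-product reaction (orbital energies in eV)
reactants = [hardness(-9.0, -1.0), hardness(-8.0, -2.0)]    # 4.0 and 3.0 eV
products = [hardness(-10.0, -1.0), hardness(-9.5, -0.5)]    # 4.5 and 4.5 eV
print(obeys_mhp(reactants, products))  # True
```

The paper's 82-84% figures are the fraction of the 50 reactions for which this comparison comes out true at the respective level of theory.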
Zhen, Shoumin; Dong, Kun; Deng, Xiong; Zhou, Jiaxing; Xu, Xuexin; Han, Caixia; Zhang, Wenying; Xu, Yanhao; Wang, Zhimin; Yan, Yueming
2016-08-01
Metabolites in wheat grains greatly influence nutritional values. Wheat provides proteins, minerals, B-group vitamins and dietary fiber to humans. These metabolites are important to human health. However, the metabolome of the grain during the development of bread wheat has not been studied so far. In this work the first dynamic metabolome of the developing grain of the elite Chinese bread wheat cultivar Zhongmai 175 was analyzed, using non-targeted gas chromatography/mass spectrometry (GC/MS) for metabolite profiling. In total, 74 metabolites were identified over the grain developmental stages. Metabolite-metabolite correlation analysis revealed that the metabolism of amino acids, carbohydrates, organic acids, amines and lipids was interrelated. An integrated metabolic map revealed a distinct regulatory profile. The results provide information that can be used by metabolic engineers and molecular breeders to improve wheat grain quality. The present metabolome approach identified dynamic changes in metabolite levels, and correlations among such levels, in developing seeds. The comprehensive metabolic map may be useful when breeding programs seek to improve grain quality. The work highlights the utility of GC/MS-based metabolomics, in conjunction with univariate and multivariate data analysis, when it is sought to understand metabolic changes in developing seeds. © 2015 Society of Chemical Industry.
Liébana, Raquel; Arregui, Lucía; Belda, Ignacio; Gamella, Luis; Santos, Antonio; Marquina, Domingo; Serrano, Susana
2015-01-01
The yeast community was studied in a municipal full-scale membrane bioreactor wastewater treatment plant (MBR-WWTP). The unexpectedly high diversity of yeasts indicated that the activated sludge formed a suitable environment for them to proliferate, with cellular concentrations of 2.2 ± 0.8 × 10³ CFU ml⁻¹. Sixteen species of seven genera were present in the biological reactor, with Ascomycetes being the most prevalent group (93%). Most isolates were able to grow in a synthetic wastewater medium, adhere to polyethylene surfaces, and develop biofilms of variable complexity. The relationship between yeast populations and the protists in the MBR-WWTP was also studied, revealing that some protist species preyed on and ingested yeasts. These results suggest that yeast populations may play a role in the food web of a WWTP and, to some extent, contribute to membrane biofouling in MBR systems.
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Hashim, Muhammad Jawad
2010-09-01
Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.
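The cost of sigsearching is easy to quantify: with m independent comparisons, each tested at α = 0.05, the chance of at least one spurious "significant" finding under the global null is 1 − (1 − α)^m. A quick illustration (independence of the tests is an assumption of this simple formula):

```python
# Family-wise error rate under the global null hypothesis,
# assuming m independent comparisons at per-test alpha = 0.05.
alpha = 0.05
for m in (1, 5, 20, 50):
    fwer = 1 - (1 - alpha) ** m
    print(f"{m:2d} unplanned comparisons -> P(>=1 false positive) = {fwer:.2f}")
```

With 20 unplanned subgroup comparisons the chance of at least one false positive is already about 0.64, which is why unplanned post-hoc searches for significance deserve a name of their own.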
Duncan, Laramie; Yilmaz, Zeynep; Gaspar, Helena; Walters, Raymond; Goldstein, Jackie; Anttila, Verneri; Bulik-Sullivan, Brendan; Ripke, Stephan; Thornton, Laura; Hinney, Anke; Daly, Mark; Sullivan, Patrick F; Zeggini, Eleftheria; Breen, Gerome; Bulik, Cynthia M
2017-09-01
The authors conducted a genome-wide association study of anorexia nervosa and calculated genetic correlations with a series of psychiatric, educational, and metabolic phenotypes. Following uniform quality control and imputation procedures using the 1000 Genomes Project (phase 3) in 12 case-control cohorts comprising 3,495 anorexia nervosa cases and 10,982 controls, the authors performed standard association analysis followed by a meta-analysis across cohorts. Linkage disequilibrium score regression was used to calculate genome-wide common variant heritability (single-nucleotide polymorphism [SNP]-based heritability, h²SNP), partitioned heritability, and genetic correlations (rg) between anorexia nervosa and 159 other phenotypes. Results were obtained for 10,641,224 SNPs and insertion-deletion variants with minor allele frequencies >1% and imputation quality scores >0.6. The h²SNP of anorexia nervosa was 0.20 (SE=0.02), suggesting that a substantial fraction of the twin-based heritability arises from common genetic variation. The authors identified one genome-wide significant locus on chromosome 12 (rs4622308) in a region harboring a previously reported type 1 diabetes and autoimmune disorder locus. Significant positive genetic correlations were observed between anorexia nervosa and schizophrenia, neuroticism, educational attainment, and high-density lipoprotein cholesterol, and significant negative genetic correlations were observed between anorexia nervosa and body mass index, insulin, glucose, and lipid phenotypes. Anorexia nervosa is a complex heritable phenotype for which this study has uncovered the first genome-wide significant locus. Anorexia nervosa also has large and significant genetic correlations with both psychiatric phenotypes and metabolic traits. The study results encourage a reconceptualization of this frequently lethal disorder as one with both psychiatric and metabolic etiology.
Directory of Open Access Journals (Sweden)
Shuji Mizumoto
2017-03-01
Full Text Available The indispensable roles of dermatan sulfate-proteoglycans (DS-PGs have been demonstrated in various biological events including construction of the extracellular matrix and cell signaling through interactions with collagen and transforming growth factor-β, respectively. Defects in the core proteins of DS-PGs such as decorin and biglycan cause congenital stromal dystrophy of the cornea, spondyloepimetaphyseal dysplasia, and Meester-Loeys syndrome. Furthermore, mutations in human genes encoding the glycosyltransferases, epimerases, and sulfotransferases responsible for the biosynthesis of DS chains cause connective tissue disorders including Ehlers-Danlos syndrome and spondyloepimetaphyseal dysplasia with joint laxity characterized by skin hyperextensibility, joint hypermobility, and tissue fragility, and by severe skeletal disorders such as kyphoscoliosis, short trunk, dislocation, and joint laxity. Glycobiological approaches revealed that mutations in DS-biosynthetic enzymes cause reductions in enzymatic activities and in the amount of synthesized DS and also disrupt the formation of collagen bundles. This review focused on the growing number of glycobiological studies on recently reported genetic diseases caused by defects in the biosynthesis of DS and DS-PGs.
Directory of Open Access Journals (Sweden)
Veerla Srinivas
2006-06-01
Full Text Available Abstract Background An alternative to standard approaches to uncover biologically meaningful structures in microarray data is to treat the data as a blind source separation (BSS) problem. BSS attempts to separate a mixture of signals into their different sources and refers to the problem of recovering signals from several observed linear mixtures. In the context of microarray data, "sources" may correspond to specific cellular responses or to co-regulated genes. Results We applied independent component analysis (ICA) to three different microarray data sets; two tumor data sets and one time series experiment. To obtain reliable components we used iterated ICA to estimate component centrotypes. We found that many of the low-ranking components may indeed show strong biological coherence and hence be of biological significance. Generally, ICA achieved a higher resolution than results based on correlated expression, and yielded a larger number of gene clusters significantly enriched for gene ontology (GO) categories. In addition, components characteristic for molecular subtypes and for tumors with specific chromosomal translocations were identified. ICA also identified more than one gene cluster significant for the same GO categories and hence disclosed a higher level of biological heterogeneity, even within coherent groups of genes. Conclusion Although the ICA approach primarily detects hidden variables, these surfaced as highly correlated genes in time series data and in one instance in the tumor data. This further strengthens the biological relevance of latent variables detected by ICA.
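The GO-category enrichment scored in this study is conventionally computed with a one-sided hypergeometric (Fisher) test. A stdlib-only sketch with hypothetical counts (the specific numbers are illustrative, not from the study):

```python
import math

def go_enrichment_p(k, K, n, N):
    """Upper-tail hypergeometric p-value P(X >= k) for GO enrichment:
    N genes on the array, K annotated to the GO term, n genes in the
    ICA-derived cluster, k of them carrying the annotation."""
    denom = math.comb(N, n)
    return sum(math.comb(K, i) * math.comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / denom

# 5 of 50 cluster genes hit a term annotating 100 of 10,000 array genes;
# only 0.5 hits would be expected by chance.
p = go_enrichment_p(5, 100, 50, 10_000)
print(f"enrichment p-value = {p:.2e}")
```

In practice such p-values are corrected for the multiplicity of GO terms tested, since thousands of categories are scored per cluster.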
DEFF Research Database (Denmark)
Serviss, Jason T.; Gådin, Jesper R.; Eriksson, Per
2017-01-01
, e.g. genes in a specific pathway, alone can separate samples into these established classes. Despite this, the evaluation of class separations is often subjective and performed via visualization. Here we present the ClusterSignificance package: a set of tools designed to assess the statistical significance of class separations downstream of dimensionality reduction algorithms. In addition, we demonstrate the design and utility of the ClusterSignificance package and utilize it to determine the importance of long non-coding RNA expression in the identity of multiple hematological malignancies.
Diurnal sampling reveals significant variation in CO2 emission from a tropical productive lake.
Reis, P C J; Barbosa, F A R
2014-08-01
It is well accepted in the literature that lakes are generally net heterotrophic and supersaturated with CO2 because they receive allochthonous carbon inputs. However, autotrophy and CO2 undersaturation may happen for at least part of the time, especially in productive lakes. Since diurnal scale is particularly important to tropical lakes dynamics, we evaluated diurnal changes in pCO2 and CO2 flux across the air-water interface in a tropical productive lake in southeastern Brazil (Lake Carioca) over two consecutive days. Both pCO2 and CO2 flux were significantly different between day (9:00 to 17:00) and night (21:00 to 5:00) confirming the importance of this scale for CO2 dynamics in tropical lakes. Net heterotrophy and CO2 outgassing from the lake were registered only at night, while significant CO2 emission did not happen during the day. Dissolved oxygen concentration and temperature trends over the diurnal cycle indicated the dependence of CO2 dynamics on lake metabolism (respiration and photosynthesis). This study indicates the importance of considering the diurnal scale when examining CO2 emissions from tropical lakes.
Melatonin Distribution Reveals Clues to Its Biological Significance in Basal Metazoans
Roopin, Modi; Levy, Oren
2012-01-01
Although nearly ubiquitous in nature, the precise biological significance of endogenous melatonin is poorly understood in phylogenetically basal taxa. In the present work, we describe insights into the functional role of melatonin at the most “basal” level of metazoan evolution. Hitherto unknown morphological determinants of melatonin distribution were evaluated in Nematostella vectensis by detecting melatonin immunoreactivity and examining the spatial gene expression patterns of putative melatonin biosynthetic and receptor elements that are located at opposing ends of the melatonin signaling pathway. Immuno-melatonin profiling indicated an elaborate interaction with reproductive tissues, reinforcing previous conjectures of a melatonin-responsive component in anthozoan reproduction. In situ hybridization (ISH) to putative melatonin receptor elements highlighted the possibility that the bioregulatory effects of melatonin in anthozoan reproduction may be mediated by interactions with membrane receptors, as in higher vertebrates. Another intriguing finding of the present study pertains to the prevalence of melatonin in centralized nervous structures. This pattern may be of great significance given that it 1) identifies an ancestral association between melatonin and key neuronal components and 2) potentially implies that certain effects of melatonin in basal species may be spread widely by regionalized nerve centers. PMID:23300630
International Nuclear Information System (INIS)
2005-01-01
For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: changes in GDP, energy consumption and electricity consumption; carbon dioxide emissions from fossil fuel use; coal consumption; consumption of natural gas; peat consumption; domestic oil deliveries; import prices of oil; consumer prices of principal oil products; fuel prices in heat production; fuel prices in electricity production; price of electricity by type of consumer; average monthly spot prices at the Nord Pool power exchange; total energy consumption by source and CO2 emissions; supplies and total consumption of electricity (GWh); energy imports by country of origin in January-June 2003; energy exports by recipient country in January-June 2003; consumer prices of liquid fuels; consumer prices of hard coal, natural gas and indigenous fuels; price of natural gas by type of consumer; price of electricity by type of consumer; price of district heating by type of consumer; excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and energy taxes, precautionary stock fees and oil pollution fees.
International Nuclear Information System (INIS)
2001-01-01
For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: changes in the volume of GNP and energy consumption; changes in the volume of GNP and electricity; coal consumption; natural gas consumption; peat consumption; domestic oil deliveries; import prices of oil; consumer prices of principal oil products; fuel prices for heat production; fuel prices for electricity production; carbon dioxide emissions from the use of fossil fuels; total energy consumption by source and CO2 emissions; electricity supply; energy imports by country of origin in 2000; energy exports by recipient country in 2000; consumer prices of liquid fuels; consumer prices of hard coal, natural gas and indigenous fuels; average electricity price by type of consumer; price of district heating by type of consumer; excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and energy taxes and precautionary stock fees on oil products.
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: changes in the volume of GNP and energy consumption; changes in the volume of GNP and electricity; coal consumption; natural gas consumption; peat consumption; domestic oil deliveries; import prices of oil; consumer prices of principal oil products; fuel prices for heat production; fuel prices for electricity production; carbon dioxide emissions; total energy consumption by source and CO2 emissions; electricity supply; energy imports by country of origin in January-March 2000; energy exports by recipient country in January-March 2000; consumer prices of liquid fuels; consumer prices of hard coal, natural gas and indigenous fuels; average electricity price by type of consumer; price of district heating by type of consumer; excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and energy taxes and precautionary stock fees on oil products.
International Nuclear Information System (INIS)
1999-01-01
For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics, issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: changes in the volume of GNP and energy consumption; changes in the volume of GNP and electricity; coal consumption; natural gas consumption; peat consumption; domestic oil deliveries; import prices of oil; consumer prices of principal oil products; fuel prices for heat production; fuel prices for electricity production; carbon dioxide emissions; total energy consumption by source and CO2 emissions; electricity supply; energy imports by country of origin in January-June 1999; energy exports by recipient country in January-June 1999; consumer prices of liquid fuels; consumer prices of hard coal, natural gas and indigenous fuels; average electricity price by type of consumer; price of district heating by type of consumer; excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources; and energy taxes and precautionary stock fees on oil products.
van de Merwe, Jason P; Neale, Peta A; Melvin, Steven D; Leusch, Frederic D L
2018-06-01
Pesticides commonly used around households can contain additives of unknown concentrations and toxicity. Given the likelihood of these chemicals washing into urban waterways, it is important to understand the effects that these additives may have on aquatic organisms. The aim of this study was to compare the toxicity of commercially available household pesticides to that of the active ingredient(s) alone. The toxicity of five household pesticides (three herbicides and two insecticides) was investigated using a bacterial cytotoxicity bioassay and an algal photosynthesis bioassay. The commercial products were up to an order of magnitude more toxic than the active ingredient(s) alone. In addition, two commercial products with the same listed active ingredients in the same ratio had a 600× difference in potency. These results clearly demonstrate that additives in commercial formulations are significant contributors to the toxicity of household pesticides. The toxicity of pesticides in aquatic systems is therefore likely underestimated by conventional chemical monitoring and risk assessment when only the active ingredients are considered. Regulators and customers should require more clarity from pesticide manufacturers about the nature and concentrations of not only the active ingredients, but also additives used in commercial formulations. In addition, monitoring programmes and chemical risk assessment schemes should develop a structured approach to assessing the toxic effects of commercial formulations, including additives, rather than simply those of the listed active ingredients. Copyright © 2018. Published by Elsevier B.V.
Kim, Sung-Min; Choi, Yosoon
2017-06-18
To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1-4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
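The Getis-Ord Gi* statistic and the four-group scheme described above can be sketched as follows (the binary neighbour weights, the 0.5 content cut-off, and the toy values are illustrative assumptions, not data from the study):

```python
import math

def gi_star(values, weights_i):
    """Getis-Ord Gi* z-score for one location; weights_i is that location's
    row of the spatial-weights matrix (self-weight included for Gi*)."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - xbar ** 2)
    sw = sum(weights_i)
    sw2 = sum(w * w for w in weights_i)
    num = sum(w * v for w, v in zip(weights_i, values)) - xbar * sw
    den = s * math.sqrt((n * sw2 - sw ** 2) / (n - 1))
    return num / den

def classify(norm_content, z, content_cut=0.5, z_cut=1.96):
    """HH/HL/LH/LL grouping; HL and LH samples flag possible measurement errors."""
    return ("H" if norm_content >= content_cut else "L") + \
           ("H" if z >= z_cut else "L")

# A pocket of three high-content samples among ten
values = [10, 10, 10, 1, 1, 1, 1, 1, 1, 1]
w0 = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # neighbourhood of sample 0 (incl. itself)
z0 = gi_star(values, w0)
print(classify(1.0, z0))  # -> "HH": a statistically significant hot spot
```

A high-content sample whose neighbourhood scores a low z (HL), or a low-content sample inside a high-z neighbourhood (LH), is exactly the kind of suspect point the study recommends re-surveying.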
Rare human papillomavirus 16 E6 variants reveal significant oncogenic potential
Directory of Open Access Journals (Sweden)
Tommasino Massimo
2011-06-01
Full Text Available Abstract The aim of this study was to determine whether low prevalence human papillomavirus (HPV) 16 E6 variants differ from high prevalence types in their functional abilities. We evaluated functions relevant to carcinogenesis for the rarely-detected European variants R8Q, R10G and R48W as compared to the commonly detected L83V. Human immortalized keratinocytes (NIKS) stably transduced with the E6 variants were used in most functional assays. Low and high prevalence E6 variants displayed similar abilities in abrogation of growth arrest and inhibition of p53 elevation induced by actinomycin D. Differences were detected in the abilities to dysregulate stratification and differentiation of NIKS in organotypic raft cultures, modulate detachment induced apoptosis (anoikis) and hyperactivate Wnt signaling. No distinctive phenotype could be assigned to include all rare variants. Like L83V, raft cultures derived from variants R10G and R48W similarly induced hyperplasia and aberrantly expressed keratin 5 in the suprabasal compartment with significantly lower expression of keratin 10. Unlike L83V, both variants, and particularly R48W, induced increased levels of anoikis upon suspension in semisolid medium. R8Q induced a unique phenotype characterized by thin organotypic raft cultures, low expression of keratin 10, and high expression of keratins 5 and 14 throughout all raft layers. Interestingly, in a reporter based assay R8Q exhibited a higher ability to augment TCF/β-catenin transcription. The data suggest that differences in E6 variant prevalence in cervical carcinoma may not be related to the carcinogenic potential of the E6 protein.
Torrado, Mario; Iglesias, Raquel; Centeno, Alberto; López, Eduardo; Mikhailov, Alexander T.
2011-01-01
Background Myocardin (MYOCD), a potent transcriptional coactivator of smooth muscle (SM) and cardiac genes, is upregulated in failing myocardium in animal models and human end-stage heart failure (HF). However, the molecular and functional consequences of myocd upregulation in HF are still unclear. Methodology/Principal Findings The goal of the present study was to investigate if targeted inhibition of upregulated expression of myocd could influence failing heart gene expression and function. To this end, we used the doxorubicin (Dox)-induced diastolic HF (DHF) model in neonatal piglets, in which, as we show, not only myocd but also myocd-dependent SM-marker genes are highly activated in failing left ventricular (LV) myocardium. In this model, intra-myocardial delivery of short-hairpin RNAs, designed to target myocd variants expressed in porcine heart, leads on day 2 post-delivery to: (1) a decrease in the activated expression of myocd and myocd-dependent SM-marker genes in failing myocardium to levels seen in healthy control animals, (2) amelioration of impaired diastolic dysfunction, and (3) higher survival rates of DHF piglets. The posterior restoration of elevated myocd expression (on day 7 post-delivery) led to overexpression of myocd-dependent SM-marker genes in failing LV-myocardium that was associated with a return to altered diastolic function. Conclusions/Significance These data provide the first evidence that a moderate inhibition (e.g., normalization) of the activated MYOCD signaling in the diseased heart may be promising from a therapeutic point of view. PMID:22028870
Directory of Open Access Journals (Sweden)
Mario Torrado
Full Text Available BACKGROUND: Myocardin (MYOCD), a potent transcriptional coactivator of smooth muscle (SM) and cardiac genes, is upregulated in failing myocardium in animal models and human end-stage heart failure (HF). However, the molecular and functional consequences of myocd upregulation in HF are still unclear. METHODOLOGY/PRINCIPAL FINDINGS: The goal of the present study was to investigate if targeted inhibition of upregulated expression of myocd could influence failing heart gene expression and function. To this end, we used the doxorubicin (Dox)-induced diastolic HF (DHF) model in neonatal piglets, in which, as we show, not only myocd but also myocd-dependent SM-marker genes are highly activated in failing left ventricular (LV) myocardium. In this model, intra-myocardial delivery of short-hairpin RNAs, designed to target myocd variants expressed in porcine heart, leads on day 2 post-delivery to: (1) a decrease in the activated expression of myocd and myocd-dependent SM-marker genes in failing myocardium to levels seen in healthy control animals, (2) amelioration of impaired diastolic dysfunction, and (3) higher survival rates of DHF piglets. The posterior restoration of elevated myocd expression (on day 7 post-delivery) led to overexpression of myocd-dependent SM-marker genes in failing LV-myocardium that was associated with a return to altered diastolic function. CONCLUSIONS/SIGNIFICANCE: These data provide the first evidence that a moderate inhibition (e.g., normalization) of the activated MYOCD signaling in the diseased heart may be promising from a therapeutic point of view.
International Nuclear Information System (INIS)
2003-01-01
For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products
International Nuclear Information System (INIS)
2004-01-01
For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees
International Nuclear Information System (INIS)
2000-01-01
For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside back cover of the Review shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity consumption, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products
Directory of Open Access Journals (Sweden)
E. A. Tatokchin
2017-01-01
Full Text Available The development of modern educational technologies, driven by the broad introduction of computer testing and the growth of distance learning, makes it necessary to revise the methods used to examine pupils. This work shows the need for a transition to mathematical examination criteria that are free of subjectivity. The article reviews the problems arising in this task and proposes approaches to their solution. The greatest attention is paid to the problem of objectively transforming an expert's raw ratings onto the student's score scale. The discussion of this question concludes that the solution lies in the creation of specialized intelligent systems. The basis for constructing such an intelligent system is a mathematical model of a self-organizing, nonequilibrium dissipative system, namely a group of students: the dissipative character is provided by the constant influx of new test items from the expert, and the nonequilibrium character by the individual psychological characteristics of the students in the group. As a result, the system self-organizes into stable patterns. These patterns make it possible, relying on large amounts of data, to obtain a statistically significant assessment of a student. To justify the proposed approach, the work presents a statistical analysis of the test results of a large sample of students (>90). The conclusions from this statistical analysis allowed the development of an intelligent system for statistically significant examination of student performance. It is based on a data clustering algorithm (k-means) over three key parameters. It is shown that this approach makes it possible to track dynamics and to produce an objective expert evaluation.
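The clustering step described above can be illustrated with a bare-bones k-means (Lloyd's algorithm) over three parameters per student. The three parameter names and the data are invented for this sketch; the article does not specify its actual features or implementation.

```python
import numpy as np

def kmeans(X, k, iters=100):
    """Minimal Lloyd's algorithm with deterministic farthest-point seeding."""
    centres = [X[0]]
    for _ in range(1, k):                      # seed: repeatedly take the point
        d2 = ((X[:, None] - np.array(centres)[None]) ** 2).sum(-1).min(1)
        centres.append(X[d2.argmax()])         # farthest from existing centres
    centres = np.array(centres, dtype=float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)             # nearest-centre assignment
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centres):
            break
        centres = new
    return labels, centres

# Invented example: (test score, time per item, answer variance) for 6 students.
X = np.array([[90, 1.0, 0.1], [88, 1.1, 0.2], [92, 0.9, 0.1],
              [40, 3.0, 1.5], [42, 2.8, 1.4], [38, 3.2, 1.6]], dtype=float)
labels, centres = kmeans(X, k=2)
```

With two well-separated groups of students, the algorithm recovers the grouping regardless of which group it seeds from first.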
Statistical language learning in neonates revealed by event-related brain potentials
Directory of Open Access Journals (Sweden)
Näätänen Risto
2009-03-01
Full Text Available Abstract Background Statistical learning is a candidate for one of the basic prerequisites underlying the expeditious acquisition of spoken language. Infants from 8 months of age exhibit this form of learning to segment fluent speech into distinct words. To test the statistical learning skills at birth, we recorded event-related brain responses of sleeping neonates while they were listening to a stream of syllables containing statistical cues to word boundaries. Results We found evidence that sleeping neonates are able to automatically extract statistical properties of the speech input and thus detect the word boundaries in a continuous stream of syllables containing no morphological cues. Syllable-specific event-related brain responses found in two separate studies demonstrated that the neonatal brain treated the syllables differently according to their position within pseudowords. Conclusion These results demonstrate that neonates can efficiently learn transitional probabilities or frequencies of co-occurrence between different syllables, enabling them to detect word boundaries and in this way isolate single words out of fluent natural speech. The ability to adopt statistical structures from speech may play a fundamental role as one of the earliest prerequisites of language acquisition.
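The transitional probabilities that the neonates are thought to track can be computed directly from a syllable stream as TP(B given A) = count(AB) / count(A). The trisyllabic pseudowords below are invented for illustration.

```python
from collections import Counter

def transitional_probs(stream):
    """TP(b | a): count of bigram (a, b) divided by count of a as a bigram start."""
    bigrams = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in bigrams.items()}

# Two invented trisyllabic pseudowords. Within a word the next syllable is
# fully predictable; at a word boundary the continuation varies.
w1, w2 = ["tu", "pi", "ro"], ["go", "la", "bu"]
stream = (w1 + w2) * 25 + (w1 + w1) * 25
tp = transitional_probs(stream)
```

Within-word transitions come out at probability 1.0, while the word-boundary transition from "ro" to "go" drops well below 1.0, which is exactly the dip in transitional probability that marks a word boundary.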
Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo
2018-06-05
Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
Method of Check of Statistical Hypotheses for Revealing of “Fraud” Point of Sale
Directory of Open Access Journals (Sweden)
T. M. Bolotskaya
2011-06-01
Full Text Available The application of a statistical hypothesis testing method to "fraud" points of sale that process purchasing cards and are suspected of carrying out unauthorized operations is analyzed. On the basis of the results obtained, an algorithm is developed that provides an assessment of terminal operation in off-line mode.
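One simple way to cast the terminal check as a statistical hypothesis test, sketched here as an assumption rather than the authors' actual procedure, is a one-sided binomial test of whether a terminal's rate of suspicious operations exceeds the network-wide baseline.

```python
from math import comb

def binom_pvalue(k, n, p0):
    """One-sided p-value P(X >= k) for X ~ Binomial(n, p0).

    k  : suspicious operations observed at the terminal (hypothetical count)
    n  : total operations at the terminal
    p0 : baseline rate of suspicious operations across all terminals
    """
    return sum(comb(n, j) * p0 ** j * (1 - p0) ** (n - j) for j in range(k, n + 1))

# Hypothetical numbers: 5 flagged operations out of 100, baseline rate 1%.
p = binom_pvalue(5, 100, 0.01)
flagged = p < 0.05   # reject H0 that the terminal behaves like the baseline
```

Under the baseline rate, five or more flagged operations in a hundred would be very unlikely, so such a terminal would be flagged for review.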
Oliveira Mendes, Thiago de; Pinto, Liliane Pereira; Santos, Laurita dos; Tippavajhala, Vamshi Krishna; Téllez Soto, Claudio Alberto; Martin, Airton Abrahão
2016-07-01
The analysis of biological systems by spectroscopic techniques involves the evaluation of hundreds to thousands of variables. Hence, different statistical approaches are used to elucidate regions that discriminate classes of samples and to propose new vibrational markers for explaining various phenomena like disease monitoring, mechanisms of action of drugs, food, and so on. However, these statistical techniques are not always discussed in detail in the applied sciences. In this context, this work presents a detailed discussion of the various steps necessary for proper statistical analysis. It includes univariate parametric and nonparametric tests, as well as multivariate unsupervised and supervised approaches. The main objective of this study is to promote proper understanding of the application of various statistical tools in these spectroscopic methods used for the analysis of biological samples. The discussion of these methods is performed on a set of in vivo confocal Raman spectra of human skin that aims to identify skin aging markers. In the Appendix, a complete routine of data analysis is executed in free software that can be used by the scientific community involved in these studies.
Liu, Wei; Ding, Jinhui
2018-04-01
The application of the principle of the intention-to-treat (ITT) to the analysis of clinical trials is challenged in the presence of missing outcome data. The consequences of stopping an assigned treatment in a withdrawn subject are unknown. It is difficult to make a single assumption about missing mechanisms for all clinical trials because there are complicated reactions in the human body to drugs due to the presence of complex biological networks, leading to data missing randomly or non-randomly. Currently there is no statistical method that can tell whether a difference between two treatments in the ITT population of a randomized clinical trial with missing data is significant at a pre-specified level. Making no assumptions about the missing mechanisms, we propose a generalized complete-case (GCC) analysis based on the data of completers. An evaluation of the impact of missing data on the ITT analysis reveals that a statistically significant GCC result implies a significant treatment effect in the ITT population at a pre-specified significance level unless, relative to the comparator, the test drug is poisonous to the non-completers as documented in their medical records. Applications of the GCC analysis are illustrated using literature data, and its properties and limits are discussed.
Perneger, Thomas V; Combescure, Christophe
2017-07-01
Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001). Several irregularities were noted, including an excess of P-values equal to 1 and about twice as many P-values less than 0.05 as more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
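The null/alternative mixture reasoning can be illustrated with a small simulation, assuming (as a toy model, not the paper's method) that null tests yield uniform P-values and alternative tests arise from a normally shifted test statistic.

```python
import random
from statistics import NormalDist

def simulate_pvalues(n=20000, null_frac=0.5, effect=2.8, seed=1):
    """Mixture of P-values: uniform under H0, two-sided z-test under H1."""
    rng = random.Random(seed)
    phi = NormalDist().cdf
    ps = []
    for _ in range(n):
        if rng.random() < null_frac:
            ps.append(rng.random())                 # H0: P ~ Uniform(0, 1)
        else:
            z = rng.gauss(effect, 1.0)              # H1: shifted test statistic
            ps.append(2 * (1 - phi(abs(z))))
    return ps

ps = simulate_pvalues()
frac_sig = sum(p < 0.05 for p in ps) / len(ps)
```

With a 50/50 mixture and an effect size of 2.8 (roughly 80% power at the 5% level), far more than 5% of the simulated P-values fall below 0.05, mirroring the asymmetry around 0.05 that the study observed.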
Directory of Open Access Journals (Sweden)
Leitner Dietmar
2005-04-01
Full Text Available Abstract Background A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has a predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformation in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.
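The Chou-Fasman-style parameters mentioned above amount to propensity ratios: how over- or under-represented a residue is before a cis Xaa-Pro bond relative to its overall frequency. A minimal sketch, with invented toy data rather than the Protein Data Bank counts used in the study:

```python
from collections import Counter

def propensities(residues, labels):
    """Chou-Fasman-style propensity for the residue at the Xaa position of an
    Xaa-Pro bond: its frequency among cis bonds divided by its frequency
    among all bonds. >1 means enriched before cis-proline, <1 depleted."""
    total = Counter(residues)
    cis = Counter(r for r, lab in zip(residues, labels) if lab == "cis")
    n_cis, n_all = sum(cis.values()), sum(total.values())
    return {aa: (cis[aa] / n_cis) / (total[aa] / n_all) for aa in total}

# Invented toy data: the Xaa residue of six Xaa-Pro bonds and their conformations.
residues = ["Gly", "Gly", "Ala", "Ala", "Ala", "Gly"]
labels = ["cis", "cis", "trans", "trans", "trans", "trans"]
props = propensities(residues, labels)
```

In this toy table, Gly appears before cis-proline twice as often as chance predicts, while Ala never does.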
Kossobokov, V.G.; Romashkova, L.L.; Keilis-Borok, V. I.; Healy, J.H.
1999-01-01
Algorithms M8 and MSc (i.e., the Mendocino Scenario) were used in a real-time intermediate-term research prediction of the strongest earthquakes in the Circum-Pacific seismic belt. Predictions are made by M8 first. Then, the areas of alarm are reduced by MSc at the cost that some earthquakes are missed in the second approximation of prediction. In 1992-1997, five earthquakes of magnitude 8 and above occurred in the test area: all of them were predicted by M8 and MSc identified correctly the locations of four of them. The space-time volume of the alarms is 36% and 18%, correspondingly, when estimated with a normalized product measure of empirical distribution of epicenters and uniform time. The statistical significance of the achieved results is beyond 99% both for M8 and MSc. For magnitude 7.5+, 10 out of 19 earthquakes were predicted by M8 in 40% and five were predicted by M8-MSc in 13% of the total volume considered. This implies a significance level of 81% for M8 and 92% for M8-MSc. The lower significance levels might result from a global change in seismic regime in 1993-1996, when the rate of the largest events doubled and all of them became exclusively normal or reversed faults. The predictions are fully reproducible; the algorithms M8 and MSc in complete formal definitions were published before we started our experiment [Keilis-Borok, V.I., Kossobokov, V.G., 1990. Premonitory activation of seismic flow: Algorithm M8, Phys. Earth and Planet. Inter. 61, 73-83; Kossobokov, V.G., Keilis-Borok, V.I., Smith, S.W., 1990. Localization of intermediate-term earthquake prediction, J. Geophys. Res., 95, 19763-19772; Healy, J.H., Kossobokov, V.G., Dewey, J.W., 1992. A test to evaluate the earthquake prediction algorithm, M8. U.S. Geol. Surv. OFR 92-401]. M8 is available from the IASPEI Software Library [Healy, J.H., Keilis-Borok, V.I., Lee, W.H.K. (Eds.), 1997. Algorithms for Earthquake Statistics and Prediction, Vol. 6. IASPEI Software Library]. © 1999 Elsevier.
Nacher, Jose C; Ochiai, Tomoshiro
2012-05-01
Increasingly accessible financial data allow researchers to infer market-dynamics-based laws and to propose models that are able to reproduce them. In recent years, several stylized facts have been uncovered. Here we perform an extensive analysis of foreign exchange data that leads to the unveiling of a statistical financial law. First, our findings show that, on average, volatility increases more when the price exceeds the highest (or lowest) value, i.e., breaks the resistance line. We call this the breaking-acceleration effect. Second, our results show that the probability P(T) of breaking the resistance line within the past time T follows a power law in both real data and theoretically simulated data. However, the probability calculated using real data is rather lower than the one obtained using a traditional Black-Scholes (BS) model. Taken together, the present analysis characterizes a different stylized fact of financial markets and shows that the market exceeds a past (historical) extreme price fewer times than expected by the BS model (the resistance effect). However, when the market does, we predict that the average volatility at that time point will be much higher. These findings indicate that no Markovian model faithfully captures the market dynamics.
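The quantity P(T) studied above can be estimated by simulation. The sketch below uses a discrete-time multiplicative random walk as a stand-in for the Black-Scholes benchmark; the horizon, volatility, and trial counts are invented parameters, not the paper's.

```python
import random

def prob_break(T, trials=3000, history=100, sigma=0.01, seed=7):
    """Fraction of simulated paths whose price exceeds its own past maximum
    (the 'resistance line' built over `history` steps) within the next T steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        price = 1.0
        peak = 1.0
        for _ in range(history):                 # establish the historical extreme
            price *= 1.0 + rng.gauss(0.0, sigma)
            peak = max(peak, price)
        for _ in range(T):                       # watch for a break within T steps
            price *= 1.0 + rng.gauss(0.0, sigma)
            if price > peak:
                hits += 1
                break
    return hits / trials

p_short, p_long = prob_break(10), prob_break(100)
```

Longer horizons break the historical extreme more often, and plotting the estimate against T on log-log axes is the standard way to check the power-law form reported in the abstract.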
Directory of Open Access Journals (Sweden)
Lutz Bornmann
Full Text Available Using the InCites tool of Thomson Reuters, this study compares normalized citation impact values calculated for China, Japan, France, Germany, United States, and the UK throughout the time period from 1981 to 2010. InCites offers a unique opportunity to study the normalized citation impacts of countries using (i a long publication window (1981 to 2010, (ii a differentiation in (broad or more narrow subject areas, and (iii allowing for the use of statistical procedures in order to obtain an insightful investigation of national citation trends across the years. Using four broad categories, our results show significantly increasing trends in citation impact values for France, the UK, and especially Germany across the last thirty years in all areas. The citation impact of papers from China is still at a relatively low level (mostly below the world average, but the country follows an increasing trend line. The USA exhibits a stable pattern of high citation impact values across the years. With small impact differences between the publication years, the US trend is increasing in engineering and technology but decreasing in medical and health sciences as well as in agricultural sciences. Similar to the USA, Japan follows increasing as well as decreasing trends in different subject areas, but the variability across the years is small. In most of the years, papers from Japan perform below or approximately at the world average in each subject area.
International Nuclear Information System (INIS)
Shakespeare, T.P.; Mukherjee, R.K.; Gebski, V.J.
2003-01-01
Confidence levels, clinical significance curves, and risk-benefit contours are tools improving analysis of clinical studies and minimizing misinterpretation of published results, however no software has been available for their calculation. The objective was to develop software to help clinicians utilize these tools. Excel 2000 spreadsheets were designed using only built-in functions, without macros. The workbook was protected and encrypted so that users can modify only input cells. The workbook has 4 spreadsheets for use in studies comparing two patient groups. Sheet 1 comprises instructions and graphic examples for use. Sheet 2 allows the user to input the main study results (e.g. survival rates) into a 2-by-2 table. Confidence intervals (95%), p-value and the confidence level for Treatment A being better than Treatment B are automatically generated. An additional input cell allows the user to determine the confidence associated with a specified level of benefit. For example if the user wishes to know the confidence that Treatment A is at least 10% better than B, 10% is entered. Sheet 2 automatically displays clinical significance curves, graphically illustrating confidence levels for all possible benefits of one treatment over the other. Sheet 3 allows input of toxicity data, and calculates the confidence that one treatment is more toxic than the other. It also determines the confidence that the relative toxicity of the most effective arm does not exceed user-defined tolerability. Sheet 4 automatically calculates risk-benefit contours, displaying the confidence associated with a specified scenario of minimum benefit and maximum risk of one treatment arm over the other. The spreadsheet is freely downloadable at www.ontumor.com/professional/statistics.htm. A simple, self-explanatory, freely available spreadsheet calculator was developed using Excel 2000. The incorporated decision-making tools can be used for data analysis and improve the reporting of results of any
Statistical modeling reveals the effect of absolute humidity on dengue in Singapore.
Directory of Open Access Journals (Sweden)
Hai-Yan Xu
2014-05-01
Full Text Available Weather factors are widely studied for their effects on indicating dengue incidence trends. However, these studies have been limited due to the complex epidemiology of dengue, which involves dynamic interplay of multiple factors such as herd immunity within a population, distinct serotypes of the virus, environmental factors and intervention programs. In this study, we investigate the impact of weather factors on dengue in Singapore, considering the disease epidemiology and profile of virus serotypes. A Poisson regression combined with Distributed Lag Non-linear Model (DLNM) was used to evaluate and compare the impact of weekly Absolute Humidity (AH) and other weather factors (mean temperature, minimum temperature, maximum temperature, rainfall, relative humidity and wind speed) on dengue incidence from 2001 to 2009. The same analysis was also performed on three sub-periods, defined by predominant circulating serotypes. The performance of DLNM regression models were then evaluated through the Akaike's Information Criterion. From the correlation and DLNM regression modeling analyses of the studied period, AH was found to be a better predictor for modeling dengue incidence than the other unique weather variables. Whilst mean temperature (MeanT) also showed significant correlation with dengue incidence, the relationship between AH or MeanT and dengue incidence, however, varied in the three sub-periods. Our results showed that AH had a more stable impact on dengue incidence than temperature when virological factors were taken into consideration. AH appeared to be the most consistent factor in modeling dengue incidence in Singapore. Considering the changes in dominant serotypes, the improvements in vector control programs and the inconsistent weather patterns observed in the sub-periods, the impact of weather on dengue is modulated by these other factors. Future studies on the impact of climate change on dengue need to take all the other contributing
Statistical modeling reveals the effect of absolute humidity on dengue in Singapore.
Xu, Hai-Yan; Fu, Xiuju; Lee, Lionel Kim Hock; Ma, Stefan; Goh, Kee Tai; Wong, Jiancheng; Habibullah, Mohamed Salahuddin; Lee, Gary Kee Khoon; Lim, Tian Kuay; Tambyah, Paul Anantharajah; Lim, Chin Leong; Ng, Lee Ching
2014-05-01
Weather factors are widely studied for their effects on indicating dengue incidence trends. However, these studies have been limited due to the complex epidemiology of dengue, which involves dynamic interplay of multiple factors such as herd immunity within a population, distinct serotypes of the virus, environmental factors and intervention programs. In this study, we investigate the impact of weather factors on dengue in Singapore, considering the disease epidemiology and profile of virus serotypes. A Poisson regression combined with Distributed Lag Non-linear Model (DLNM) was used to evaluate and compare the impact of weekly Absolute Humidity (AH) and other weather factors (mean temperature, minimum temperature, maximum temperature, rainfall, relative humidity and wind speed) on dengue incidence from 2001 to 2009. The same analysis was also performed on three sub-periods, defined by predominant circulating serotypes. The performance of DLNM regression models were then evaluated through the Akaike's Information Criterion. From the correlation and DLNM regression modeling analyses of the studied period, AH was found to be a better predictor for modeling dengue incidence than the other unique weather variables. Whilst mean temperature (MeanT) also showed significant correlation with dengue incidence, the relationship between AH or MeanT and dengue incidence, however, varied in the three sub-periods. Our results showed that AH had a more stable impact on dengue incidence than temperature when virological factors were taken into consideration. AH appeared to be the most consistent factor in modeling dengue incidence in Singapore. Considering the changes in dominant serotypes, the improvements in vector control programs and the inconsistent weather patterns observed in the sub-periods, the impact of weather on dengue is modulated by these other factors. Future studies on the impact of climate change on dengue need to take all the other contributing factors into
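A full DLNM, with its lag-specific basis functions, is beyond a short sketch, but its core, a Poisson regression of weekly case counts on a lagged weather covariate, can be written as a few lines of iteratively reweighted least squares. The coefficients and the simulated humidity series are invented for illustration.

```python
import numpy as np

def poisson_glm(X, y, iters=25):
    """Poisson regression with log link, fitted by IRLS (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)                     # expected weekly counts
        z = eta + (y - mu) / mu              # working response
        XtW = X.T * mu                       # weights W = diag(mu)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

rng = np.random.default_rng(42)
n = 400
ah = rng.normal(0.0, 1.0, n)                 # standardized absolute humidity
ah_lag = np.roll(ah, 1)                      # humidity lagged by one week
ah_lag[0] = 0.0
X = np.column_stack([np.ones(n), ah_lag])
y = rng.poisson(np.exp(1.0 + 0.3 * ah_lag))  # counts driven by lagged humidity
beta = poisson_glm(X, y)
```

The fit recovers the intercept and the lagged-humidity coefficient to within sampling error; a DLNM generalizes this by replacing the single lag with a cross-basis over many lags, and model variants are then compared by AIC as in the study.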
International Nuclear Information System (INIS)
Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus
2013-01-01
The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, a significantly lower tube current was used compared with the FBP group, 550mA (450–600) vs. 650mA (500–711.25) (median (interquartile range)), respectively, P<0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29mSv (2.84–6.02) vs. 5.84mSv (3.88–8.39) (median (interquartile range)), respectively, P<0.001. Although ASIR was associated with increased image noise compared with FBP (39.93±10.22 vs. 37.63±18.79 (mean ±standard deviation), respectively, P<0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality.
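As a quick arithmetic check, the reported 27% effective dose reduction follows directly from the two median doses quoted above:

```python
# median effective doses (mSv) reported in the abstract
fbp_dose, asir_dose = 5.84, 4.29

# percent reduction relative to the FBP group
reduction_pct = 100 * (fbp_dose - asir_dose) / fbp_dose
print(round(reduction_pct))  # → 27, matching the reported 27% reduction
```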
Tumur, Odgerel; Soon, Kean; Brown, Fraser; Mykytowycz, Marcus
2013-06-01
The aims of our study were to evaluate the effect of application of the Adaptive Statistical Iterative Reconstruction (ASIR) algorithm on the radiation dose of coronary computed tomography angiography (CCTA) and its effects on image quality of CCTA and to evaluate the effects of various patient and CT scanning factors on the radiation dose of CCTA. This was a retrospective study that included 347 consecutive patients who underwent CCTA at a tertiary university teaching hospital between 1 July 2009 and 20 September 2011. Analysis was performed comparing patient demographics, scan characteristics, radiation dose and image quality in two groups of patients in whom conventional Filtered Back Projection (FBP) or ASIR was used for image reconstruction. There were 238 patients in the FBP group and 109 patients in the ASIR group. There was no difference between the groups in the use of prospective gating, scan length or tube voltage. In the ASIR group, a significantly lower tube current was used compared with the FBP group, 550 mA (450-600) vs. 650 mA (500-711.25) (median (interquartile range)), respectively, P < 0.001. There was a 27% effective radiation dose reduction in the ASIR group compared with the FBP group, 4.29 mSv (2.84-6.02) vs. 5.84 mSv (3.88-8.39) (median (interquartile range)), respectively, P < 0.001. Although ASIR was associated with increased image noise compared with FBP (39.93 ± 10.22 vs. 37.63 ± 18.79 (mean ± standard deviation), respectively, P < 0.001), it did not affect the signal intensity, signal-to-noise ratio, contrast-to-noise ratio or the diagnostic quality of CCTA. Application of ASIR reduces the radiation dose of CCTA without affecting the image quality. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
Xia, Li C; Ai, Dongmei; Cram, Jacob A; Liang, Xiaoyi; Fuhrman, Jed A; Sun, Fengzhu
2015-09-21
Local trend (i.e. shape) analysis of time series data reveals co-changing patterns in the dynamics of biological systems. However, slow permutation procedures for evaluating the statistical significance of local trend scores have limited its applications to high-throughput time series data analysis, e.g. data from studies based on next-generation sequencing technology. By extending the theories for the tail probability of the range of a sum of Markovian random variables, we propose formulae for approximating the statistical significance of local trend scores. Using simulations and real data, we show that the approximate p-value is close to that obtained using a large number of permutations (starting at time points >20 with no delay and >30 with delay of at most three time steps) in that the non-zero decimals of the p-values obtained by the approximation and the permutations are mostly the same when the approximate p-value is less than 0.05. In addition, the approximate p-value is slightly larger than that based on permutations, making hypothesis testing based on the approximate p-value conservative. The approximation enables efficient calculation of p-values for pairwise local trend analysis, making large-scale all-versus-all comparisons possible. We also propose a hybrid approach that integrates the approximation and permutations to obtain accurate p-values for significantly associated pairs. We further demonstrate its use with the analysis of the Plymouth Marine Laboratory (PML) microbial community time series from high-throughput sequencing data and found interesting organism co-occurrence dynamic patterns. The software tool is integrated into the eLSA software package, which now provides accelerated local trend and similarity analysis pipelines for time series data. The package is freely available from the eLSA website: http://bitbucket.org/charade/elsa.
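The permutation procedure whose cost motivates the analytic approximation can be sketched as follows. This is a simplified, hypothetical version of local trend scoring (no time delay, crude normalization), not the eLSA implementation; the series are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def trend(x):
    """Discretize a series into local trends: +1 up, -1 down, 0 flat."""
    return np.sign(np.diff(x))

def local_trend_score(u, v):
    """Best contiguous stretch where two trend series co-change
    (maximum over partial sums that reset at zero, no delay allowed)."""
    best = run = 0.0
    for a, b in zip(u, v):
        run = max(0.0, run + a * b)
        best = max(best, run)
    return best / len(u)

def permutation_pvalue(x, y, n_perm=2000):
    """The slow route the paper's approximation replaces: estimate
    P(score >= observed) by re-scoring against permuted series."""
    u = trend(x)
    obs = local_trend_score(u, trend(y))
    hits = sum(local_trend_score(u, trend(rng.permutation(y))) >= obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

t = np.arange(60)
x = np.sin(t / 4) + rng.normal(0, 0.05, t.size)
y = np.sin(t / 4) + rng.normal(0, 0.05, t.size)   # a genuinely co-changing pair
p = permutation_pvalue(x, y)
print(p)
```

For an all-versus-all scan over thousands of series, this inner permutation loop is what becomes prohibitive, which is exactly the bottleneck the closed-form tail approximation removes.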
Directory of Open Access Journals (Sweden)
Alireza Tavakkoli
2015-06-01
Full Text Available This paper presents the comprehensive results of a study of a cohort of college graduate and undergraduate students who participated in playing a Massively Multiplayer Online Role Playing Game (MMORPG) as gameplay rich with social interaction as well as intellectual and aesthetic features. We present the full results of the study in the form of inferential statistics and a review of our descriptive statistics previously reported in [46]. Separate one-way independent-measures multivariate analyses of variance (MANOVAs) were used to analyze the data from several instruments to determine if there were statistically significant differences first by gender, then by age group, and then by degree. Moreover, a one-way repeated-measures analysis of variance (ANOVA) was used to determine if there was a statistically significant difference among the 5 gaming clusters on the Game Characteristic Survey. Follow-up paired-samples t-tests were used to see if there was a statistically significant difference between each of the 10 possible combinations of paired clusters. Our results support the hypotheses and outline the features that may need to be taken into account in support of tailoring gamified educational content targeting a certain demographic. Sections 1, 2, and 3 below from our previous study [46] are included because this is the second part of a two-part study. [46] Tavakkoli, A., Loffredo, D., Ward, M., Sr. (2014). "Insights from Massively Multiplayer Online Role Playing Games to Enhance Gamification in Education", Journal of Systemics, Cybernetics, and Informatics, 12(4), 66-78.
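The follow-up step described above, paired-samples t-tests over all 10 cluster pairs with family-wise error control, can be sketched as below. The cluster names, means, and Bonferroni adjustment are illustrative assumptions, not the study's instruments or data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(2)

# hypothetical survey: 40 respondents each rate 5 game-characteristic clusters
clusters = ["achievement", "social", "immersion", "challenge", "aesthetics"]
means = [3.0, 3.2, 3.9, 3.1, 3.0]
data = {c: rng.normal(m, 0.6, 40) for c, m in zip(clusters, means)}

# paired t-tests over all 10 cluster pairs, Bonferroni-adjusted
# to control the family-wise error rate across the 10 tests
pairs = list(combinations(clusters, 2))
alpha = 0.05 / len(pairs)
significant = [(a, b) for a, b in pairs
               if stats.ttest_rel(data[a], data[b]).pvalue < alpha]
print(significant)
```

A paired (rather than independent) test is appropriate here because the same respondents rate every cluster, so ratings are correlated within subjects.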
Liu, Feng; Sun, Fei; Xia, Jun Hong; Li, Jian; Fu, Gui Hong; Lin, Grace; Tu, Rong Jian; Wan, Zi Yi; Quek, Delia; Yue, Gen Hua
2014-01-01
Growth is an important trait in animal breeding. However, the genetic effects underpinning fish growth variability are still poorly understood. QTL mapping and analysis of candidate genes are effective methods to address this issue. We conducted a genome-wide QTL analysis for growth in tilapia. A total of 10, 7 and 8 significant QTLs were identified for body weight, total length and standard length at 140 dph, respectively. The majority of these QTLs were sex-specific. One major QTL for growth traits was identified in the sex-determining locus in LG1, explaining 71.7%, 67.2% and 64.9% of the phenotypic variation (PV) of body weight, total length and standard length, respectively. In addition, a candidate gene GHR2 in a QTL was significantly associated with body weight, explaining 13.1% of PV. Real-time qPCR revealed that different genotypes at the GHR2 locus influenced the IGF-1 expression level. The markers located in the major QTL for growth traits could be used in marker-assisted selection of tilapia. The associations between GHR2 variants and growth traits suggest that the GHR2 gene should be an important gene that explains the difference in growth among tilapia species. PMID:25435025
Peter, Elizabeth
2002-06-01
The history of nursing in the home: revealing the significance of place in the expression of moral agency The relationship between place and moral agency in home care nursing is explored in this paper. The notion of place is argued to have relevance to moral agency beyond moral context. This argument is theoretically located in feminist ethics and human geography and is supported through an examination of historical documents (1900-33) that describe the experiences and insights of American home care/private duty nurses or that are related to nursing ethics. Specifically, the role of place in inhibiting and enhancing care, justice, good relationships, and power in the practice of private duty nurses is explored. Several implications for current nursing ethics come out of this analysis. (i) The moral agency of nurses is highly nuanced. It is not only structured by nurses' relationships to patients and health professionals, i.e. moral context, it is also structured by the place of nursing care. (ii) Place has the potential to limit and enhance the power of nurses. (iii) Some aspects of nursing's conception of the good, such as what constitutes a good nurse-patient relationship, are historically and geographically relative.
Directory of Open Access Journals (Sweden)
J Tilak eRatnanather
2014-08-01
Full Text Available Differences in cortical thickness in the lateral temporal lobe, including the planum temporale (PT), have been reported in MRI studies of schizophrenia (SCZ) and bipolar disorder (BPD) patients. Most of these studies have used a single-valued global or local measure for thickness. However, additional and complementary information can be obtained by generating Labelled Cortical Depth Maps (LCDMs), which are distances of labeled gray matter voxels from the nearest point on the gray/white matter (inner) cortical surface. Statistical analyses of pooled and censored LCDM distances reveal subtle differences in PT between SCZ and BPD groups from data generated by Ratnanather et al. (Schizophrenia Research, http://dx.doi.org/10.1016/j.schres.2013.08.014). These results confirm that the left PT (LPT) is more sensitive than the right PT in distinguishing between SCZ, BPD and healthy controls. Also confirmed is a strong gender effect, with a thicker PT seen in males than in females. The differences between groups at smaller distances in the LPT revealed by pooled and censored LCDM analysis suggest that SCZ and BPD have different effects on the cortical mantle close to the gray/white matter surface. This is consistent with reported subtle changes in the cortical mantle observed in postmortem studies.
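The idea of comparing pooled, censored LCDM distances can be sketched as below. Everything here is a stand-in: the gamma-distributed "distances", the censoring depth, and the use of a two-sample KS test are illustrative assumptions, not the paper's statistical machinery.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# hypothetical LCDM distances (mm from the gray/white surface), pooled over
# each group's labeled voxels; gamma shapes chosen only for illustration
control = rng.gamma(2.0, 1.0, 5000)
patient = rng.gamma(1.6, 1.0, 5000)   # relatively more voxels near the surface

# censor at a fixed depth so the comparison focuses on the cortical mantle
# close to the gray/white surface, then compare the pooled distributions
cutoff = 3.0
d_stat, p_value = stats.ks_2samp(control[control < cutoff],
                                 patient[patient < cutoff])
print(d_stat, p_value)
```

Censoring is what lets the comparison target "smaller distances", i.e. differences concentrated near the inner cortical surface, rather than averaging them away over the full mantle.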
Directory of Open Access Journals (Sweden)
Justin London
2010-01-01
Full Text Available In “National Metrical Types in Nineteenth Century Art Song” Leigh Van Handel gives a sympathetic critique of William Rothstein’s claim that in western classical music of the late 18th and 19th centuries there are discernible differences in the phrasing and metrical practice of German versus French and Italian composers. This commentary (a) examines just what Rothstein means in terms of his proposed metrical typology, (b) questions Van Handel on how she has applied it to a purely melodic framework, (c) amplifies Van Handel’s critique of Rothstein, and then (d) concludes with a rumination on the reach of quantitative (i.e., statistically driven) versus qualitative claims regarding such things as “national metrical types.”
Yokoyama, Shozo; Takenaka, Naomi
2005-04-01
Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.
Cramer, Grant R; Ghan, Ryan; Schlauch, Karen A; Tillett, Richard L; Heymann, Hildegarde; Ferrarini, Alberto; Delledonne, Massimo; Zenoni, Sara; Fasoli, Marianna; Pezzotti, Mario
2014-12-19
Grapevine berry, a nonclimacteric fruit, has three developmental stages; the last one is when berry color and sugar increase. Flavors derived from terpenoid and fatty acid metabolism develop at the very end of this ripening stage. The transcriptomic response of pulp and skin of Cabernet Sauvignon berries in the late stages of ripening between 22 and 37 °Brix was assessed using whole-genome microarrays. The transcript abundance of approximately 18,000 genes changed with °Brix and tissue type. There were a large number of changes in many gene ontology (GO) categories involving metabolism, signaling and abiotic stress. GO categories reflecting tissue differences were overrepresented in photosynthesis, isoprenoid metabolism and pigment biosynthesis. Detailed analysis of the interaction of the skin and pulp with °Brix revealed that there were statistically significantly higher abundances of transcripts changing with °Brix in the skin that were involved in ethylene signaling, isoprenoid and fatty acid metabolism. Many transcripts were peaking around known optimal fruit stages for flavor production. The transcript abundance of approximately two-thirds of the AP2/ERF superfamily of transcription factors changed during these developmental stages. The transcript abundance of a unique clade of ERF6-type transcription factors had the largest changes in the skin and clustered with genes involved in ethylene, senescence, and fruit flavor production including ACC oxidase, terpene synthases, and lipoxygenases. The transcript abundance of important transcription factors involved in fruit ripening was also higher in the skin. A detailed analysis of the transcriptome dynamics during late stages of ripening of grapevine berries revealed that these berries went through massive transcriptional changes in gene ontology categories involving chemical signaling and metabolism in both the pulp and skin, particularly in the skin. Changes in the transcript abundance of genes involved in
Szabo, Linda; Morey, Robert; Palpant, Nathan J; Wang, Peter L; Afari, Nastaran; Jiang, Chuan; Parast, Mana M; Murry, Charles E; Laurent, Louise C; Salzman, Julia
2015-06-16
The pervasive expression of circular RNA is a recently discovered feature of gene expression in highly diverged eukaryotes, but the functions of most circular RNAs are still unknown. Computational methods to discover and quantify circular RNA are essential. Moreover, discovering biological contexts where circular RNAs are regulated will shed light on potential functional roles they may play. We present a new algorithm that increases the sensitivity and specificity of circular RNA detection by discovering and quantifying circular and linear RNA splicing events at both annotated and un-annotated exon boundaries, including intergenic regions of the genome, with high statistical confidence. Unlike approaches that rely on read count and exon homology to determine confidence in prediction of circular RNA expression, our algorithm uses a statistical approach. Using our algorithm, we unveiled striking induction of general and tissue-specific circular RNAs, including in the heart and lung, during human fetal development. We discover regions of the human fetal brain, such as the frontal cortex, with marked enrichment for genes where circular RNA isoforms are dominant. The vast majority of circular RNA production occurs at major spliceosome splice sites; however, we find the first examples of developmentally induced circular RNAs processed by the minor spliceosome, and an enriched propensity of minor spliceosome donors to splice into circular RNA at un-annotated, rather than annotated, exons. Together, these results suggest a potentially significant role for circular RNA in human development.
Directory of Open Access Journals (Sweden)
Marianna Caterino
2018-03-01
Full Text Available Background/Aims: Renal disease is a common cause of morbidity in patients with Bardet-Biedl syndrome (BBS); however, the severity of kidney dysfunction is highly variable. To date, there is little information on the pathogenesis and on the risk and predictor factors for poor renal outcome in this setting. The present study aims to analyze the spectrum of urinary proteins in BBS patients, in order to potentially identify (1) disease-specific proteomic profiles that may differentiate the patients from normal subjects; (2) urinary markers of renal dysfunction. Methods: Fourteen individuals (7 males and 7 females) with a clinical diagnosis of BBS were selected for this study. A pool of 10 age-matched males and 10 age-matched females was used as controls for proteomic analysis. The glomerular filtration rate (eGFR) was estimated using the CKD-EPI formula. Variability of eGFR was retrospectively assessed by calculating the average annual eGFR decline (ΔeGFR) over a mean follow-up period of 4 years (range 3-7). Results: 42 proteins were significantly over- or under-represented in BBS patients compared with controls; the majority of these proteins are involved in fibrosis, cell adhesion and extracellular matrix organization. Statistical analysis revealed a significant correlation between urine fibronectin (u-FN) (r2=0.28; p<0.05), CD44 antigen (r2=0.35; p<0.03) and lysosomal alpha-glucosidase (r2=0.27; p<0.05) abundance and the eGFR. In addition, u-FN (r2=0.2389; p<0.05) was significantly correlated with ΔeGFR. Conclusion: The present study demonstrates that the urine proteome of BBS patients differs from that of normal subjects; in addition, kidney dysfunction correlated with the urine abundance of known markers of renal fibrosis.
Fang, Yongxiang; Wit, Ernst
2008-01-01
Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, Fisher’s statistic is more sensitive to small p-values than to large ones, so a single small p-value may overrule the other p-values.
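Fisher's method combines k independent p-values through X = -2·Σ ln(p_i), which under the global null follows a chi-square distribution with 2k degrees of freedom. The sketch below also illustrates the sensitivity noted above: one very small p-value dominates the combined result, while several moderate p-values do not.

```python
import math
from scipy import stats

def fisher_combined(pvalues):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k df
    under the global null hypothesis."""
    x = -2.0 * sum(math.log(p) for p in pvalues)
    return stats.chi2.sf(x, df=2 * len(pvalues))

# one very small p-value can dominate several unremarkable ones ...
p_one_small = fisher_combined([1e-4, 0.6, 0.7, 0.8])
# ... while uniformly moderate evidence does not reach significance
p_moderate = fisher_combined([0.2, 0.2, 0.2, 0.2])
print(p_one_small, p_moderate)
```

The same computation is available as `scipy.stats.combine_pvalues(..., method="fisher")`.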
Ziółkowska, Natalia; Lewczuk, Bogdan; Petryński, Wojciech; Palkowska, Katarzyna; Prusik, Magdalena; Targońska, Krystyna; Giżejewski, Zygmunt; Przybylska-Gornowicz, Barbara
2014-01-01
Anatomical, histological, and ultrastructural studies of the European beaver stomach revealed several unique morphological features. The prominent attribute of its gross morphology was the cardiogastric gland (CGG), located near the oesophageal entrance. Light microscopy showed that the CGG was formed by invaginations of the mucosa into the submucosa, which contained densely packed proper gastric glands comprised primarily of parietal and chief cells, with mucous neck cells also represented. Another unique feature of the beaver stomach was the presence of a specific mucus, with a thickness of up to 950 µm (in frozen, unfixed sections), that coated the mucosa. Our observations suggest that the formation of this mucus is complex and includes the accumulation of secretory granules in the cytoplasm of pit cells, the aggregation of granules inside cells, and the incorporation of degenerating cells into the mucus.
Directory of Open Access Journals (Sweden)
Anita Lindmark
Full Text Available When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence.
Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie
2016-01-01
When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. The study emphasizes the importance of combining clinical relevance and level of statistical confidence when profiling hospital performance.
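The flagging rule described above, declaring a hospital an outlier only when there is statistical confidence that its risk exceeds a clinically motivated benchmark, can be sketched as follows. This crude version uses a one-sided exact binomial test and ignores the case-mix adjustment the actual method performs; all counts and the 1.25 relative benchmark are invented.

```python
from scipy import stats

def flag_outlier(events, n, pop_risk, rel_benchmark=1.25, confidence=0.95):
    """Flag a hospital if, with the given statistical confidence, its event
    risk exceeds pop_risk * rel_benchmark. rel_benchmark encodes the
    clinically relevant deviation to be detected."""
    benchmark = pop_risk * rel_benchmark
    # P(X >= events | risk == benchmark): small means the excess is real
    p = stats.binom.sf(events - 1, n, benchmark)
    return bool(p < 1 - confidence)

# large hospital with a small, clinically irrelevant excess over a 30% risk:
# not flagged, even though a comparison to the national average would flag it
large = flag_outlier(events=330, n=1000, pop_risk=0.30)
# small hospital with a clinically important excess: flagged
small = flag_outlier(events=32, n=60, pop_risk=0.30)
print(large, small)
```

Testing against the benchmark rather than the population mean is what prevents large hospitals from being flagged for trivial deviations, the failure mode the abstract highlights.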
DEFF Research Database (Denmark)
Skov, Vibe; Riley, Caroline Hasselbalch; Thomassen, Mads
2013-01-01
Gene expression profiling studies in the Philadelphia-negative chronic myeloproliferative neoplasms have revealed significant deregulation of several immune and inflammation genes that might be of importance for clonal evolution due to defective tumor immune surveillance. Other mechanisms might be...
Directory of Open Access Journals (Sweden)
Mauro Nirchio
Full Text Available Karyotypes of M. curema from the Gulf of Mexico and Brazil have been reported as possessing a chromosome complement with 2n=28 and FN=48, whereas specimens from Venezuela have been reported as possessing a diploid number 2n=24 and a conserved FN (48). Although at first sight this variation suggests the presence of chromosomal intraspecific (interpopulational) variability, the possibility that we are dealing with two different species was examined. This work revisits the karyotypes of M. curema from Venezuela and Brazil, including new data on C-banding and NOR localization, and compares morphologic characteristics of samples from both localities. Thus, besides diploid number, the distribution of constitutive heterochromatin and the location of NORs mark further differences between M. curema Cytotype 1 (2n=28; FN=48) and Cytotype 2 (2n=24; FN=48). Moreover, morphologic comparison revealed differences in the scale counts and pectoral fin rays: 35 scales in the middle body line and 15 pectoral fin rays in specimens possessing the karyotype 2n=28, compared with 37-39 scales in the middle body line and 17 pectoral fin rays in specimens with the karyotype 2n=24. These differences lead us to suggest that the two cytotypes do not represent mere geographic polytypic variation but could correspond to different species.
International Nuclear Information System (INIS)
Daziano, C.
2010-01-01
Statistical analysis of trace elements in these volcanic rocks allowed two independent populations sharing the same geochemical environment to be distinguished. Each component has a variable homogeneity index, resulting in dissimilar average values that reveal intratelluric geochemical phenomena. On the other hand, the inhomogeneities observed in these rocks, as reflected in their petrochemical characters, could be exacerbated by the remote and dispersed locations of the outcrops, their relations with the enclosing rocks, the ranges of compositional variation, and differences in relative ages.
Dyble, Mark; Thompson, James; Smith, Daniel; Salali, Gul Deniz; Chaudhary, Nikhil; Page, Abigail E; Vinicuis, Lucio; Mace, Ruth; Migliano, Andrea Bamberg
2016-08-08
Like many other mammalian and primate societies [1-4], humans are said to live in multilevel social groups, with individuals situated in a series of hierarchically structured sub-groups [5, 6]. Although this multilevel social organization has been described among contemporary hunter-gatherers [5], questions remain as to the benefits that individuals derive from living in such groups. Here, we show that food sharing among two populations of contemporary hunter-gatherers, the Palanan Agta (Philippines) and Mbendjele BaYaka (Republic of Congo), reveals similar multilevel social structures, with individuals situated in households, within sharing clusters of 3-4 households, within the wider residential camps, which vary in size. We suggest that these groupings serve to facilitate inter-sexual provisioning, kin provisioning, and risk reduction reciprocity, three levels of cooperation argued to be fundamental in human societies [7, 8]. Humans have a suite of derived life history characteristics, including a long childhood and short inter-birth intervals, that make offspring energetically demanding [9] and have moved to a dietary niche that often involves the exploitation of difficult-to-acquire foods with highly variable return rates [10-12]. This means that human foragers face both day-to-day and more long-term energetic deficits that conspire to make humans energetically interdependent. We suggest that a multilevel social organization allows individuals access to both the food sharing partners required to buffer themselves against energetic shortfalls and the cooperative partners required for skill-based tasks such as cooperative foraging. Copyright © 2016 Elsevier Ltd. All rights reserved.
Yellapu, Nanda Kumar; Gopal, Jeyakodi; Kasinathan, Gunasekaran; Purushothaman, Jambulingam
2018-06-01
Voltage gated sodium channels (VGSC) of mosquito vectors are the primary targets of dichlorodiphenyltrichloroethane (DDT) and other synthetic pyrethroids used in public health programmes. The knockdown resistance (kdr) mutations in VGSC are associated with insecticide resistance, especially in Anophelines. The present study aims to demarcate the impact of three kdr mutations, L1014S, L1014F and L1014H, on insecticide resistance. The membrane model of the sodium transport domain of VGSC (STD-VGSC) was constructed using a de novo approach based on domain and trans-membrane predictions. Comparative molecular modelling studies of wild type and mutant models of STD-VGSC revealed that the L1014F mutant was near native to the wild type model in all respects, whereas the L1014S and L1014H mutations showed drastic variations in energy levels and root mean square fluctuations (RMSF) that resulted in conformational variations. The predicted binding sites also showed variable cavity volumes and RMSF in the L1014S and L1014H mutants. Further, DDT was found to bind the L1014F mutant in a near-native manner relative to wild type, but with variable orientations and affinities in the L1014S and L1014H mutants. The variations and fluctuations observed in the mutant structures indicate that each mutation has its own specific impact on the conformation of VGSC and its binding with DDT. The study provides new insights into the structure-function correlations of mutant STD-VGSC structures and demonstrates the role and effects of kdr mutations on insecticide resistance in mosquito vectors.
Wang, Xiaoming; Zhou, Guofa; Zhong, Daibin; Wang, Xiaoling; Wang, Ying; Yang, Zhaoqing; Cui, Liwang; Yan, Guiyun
2016-06-06
Many developing countries are experiencing rapid ecological changes such as deforestation and shifting agricultural practices. These environmental changes may have an important consequence for malaria due to their impact on vector survival and reproduction. Despite intensive deforestation and malaria transmission in the China-Myanmar border area, the impact of deforestation on malaria vectors there is unknown. We conducted life table studies on Anopheles minimus larvae to determine the pupation rate and development time in microcosms under deforested, banana plantation, and forested environments. The pupation rate of An. minimus was 3.8% in the forested environment. It significantly increased to 12.5% in banana plantations and to 52.5% in the deforested area. Deforestation reduced larval-to-pupal development time by 1.9-3.3 days. Food supplementation to aquatic habitats in forested environments and banana plantations significantly increased the larval survival rate to a level similar to that in the deforested environment. Deforestation thus enhanced the survival and development of An. minimus larvae, a major malaria vector in the China-Myanmar border area. Experimental determination of life table parameters of mosquito larvae under a variety of environmental conditions is valuable for modeling malaria transmission dynamics and the impact of climate and environmental changes.
Meta-Analysis Reveals Significant Association of the 3'-UTR VNTR in SLC6A3 with Alcohol Dependence.
Ma, Yunlong; Fan, Rongli; Li, Ming D
2016-07-01
Although many studies have analyzed the association of the 3'-untranslated region variable-number tandem repeat (VNTR) polymorphism in SLC6A3 with alcohol dependence (AD), the results remain controversial. This study aimed to determine whether this variant has any genetic effect on AD by integrating 17 reported studies comprising 5,929 participants. The A9-dominant genetic model, which treats A9-repeat and non-A9-repeat carriers as 2 genotypes and compares their frequencies in alcoholics with those in controls, was adopted. Considering the potential influence of ethnicity, differences in diagnostic criteria of AD, and alcoholic subgroups, stratified meta-analyses were conducted. There was no evidence of heterogeneity among the studied samples, indicating that results under the fixed-effects model are acceptable. We found a significant association of VNTR A9 genotypes with AD in all ethnic populations (pooled odds ratio [OR] 1.12; 95% confidence interval [CI] 1.00, 1.25; p = 0.045) and in the Caucasian population (pooled OR 1.15; 95% CI 1.01, 1.31; p = 0.036). We also found VNTR A9 genotypes to be significantly associated with alcoholism as defined by the DSM-IV criteria (pooled OR 1.18; 95% CI 1.03, 1.36; p = 0.02). Further, we found a significant association between VNTR A9 genotypes and alcoholism accompanied by alcohol withdrawal seizure or delirium tremens (pooled OR 1.55; 95% CI 1.24, 1.92; p = 1.0 × 10^-4). In all these meta-analyses, no evidence of publication bias was detected. We conclude that the VNTR polymorphism plays an important role in the etiology of AD, and that individuals with at least 1 A9 allele are more likely to be dependent on alcohol than persons carrying no A9 allele. Copyright © 2016 by the Research Society on Alcoholism.
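The fixed-effects pooling used above can be sketched as inverse-variance weighting of per-study log odds ratios. The abstract does not state which fixed-effects estimator was used, so inverse-variance is an assumption here, and the 2x2 counts below are hypothetical placeholders, not data from the 17 studies:

```python
import math

# Hypothetical per-study 2x2 counts: (A9 cases, non-A9 cases, A9 controls, non-A9 controls).
# Illustrative only -- not the counts from the meta-analyzed studies.
studies = [
    (120, 180, 100, 200),
    (80, 120, 70, 130),
    (60, 90, 50, 100),
]

def fixed_effects_pooled_or(studies):
    """Inverse-variance fixed-effects pooling of log odds ratios."""
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1/a + 1/b + 1/c + 1/d      # Woolf variance of the log OR
        w = 1.0 / var                    # inverse-variance weight
        num += w * log_or
        den += w
    pooled_log_or = num / den
    se = math.sqrt(1.0 / den)            # standard error of the pooled log OR
    ci = (math.exp(pooled_log_or - 1.96 * se),
          math.exp(pooled_log_or + 1.96 * se))
    return math.exp(pooled_log_or), ci
```

A Mantel-Haenszel estimator is a common alternative when some cell counts are small; under a fixed-effects assumption both give similar pooled ORs for large studies.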
Park, Jeong Hyun; Lee, Se Kyung; Lee, Jeong Eon; Kim, Seok Won; Nam, Seok Jin; Kim, Ji-Yeon; Ahn, Jin-Seok; Park, Won; Yu, Jonghan; Park, Yeon Hee
2018-03-01
In this study, we aimed to evaluate the economic loss due to the diagnosis of breast cancer within the female South Korean working-age population. A population-based cost analysis was performed for cancer-related diagnoses between 1999 and 2014, using the respective publicly funded government databases. Among the five most common cancers, breast cancer mortality was strongly associated with growth in gross domestic product between 1999 and 2014 (R=0.98). In the female population, breast cancer represented the greatest productivity loss among all cancers, a consequence of the peak in breast cancer incidence during mid-working age, in addition to its being the most common and fastest-growing cancer among South Korean women. Our study shows that breast cancer not only represents a significant disease burden for individual patients but also contributes a real, non-negligible productivity loss to the South Korean economy.
Ghiani, Maria Elena; Varesi, Laurent; Mitchell, Robert John; Vona, Giuseppe
2009-12-01
Using allelic and haplotypic frequencies of 10 Y-chromosome short tandem repeats, we examined genetic variation within the population of Corsica and its relationship to other Mediterranean populations. The most significant finding is the high level of genetic differentiation within Corsica, with strong evidence of an effective barrier to male-mediated gene flow between the south and the rest of the island. This internal differentiation most probably results from low exogamy among small isolated populations and from the orography of the island, with a central mountain chain running the length of the island restricting human movement. This physical barrier is reflected not only in present-day intra-island linguistic and genetic differences but also in the relatedness of Corsican regions to other Mediterranean groups. Northwest and Central Corsica are much closer to West Mediterranean populations, whereas South Corsica is closer to Central-North Sardinia and East Mediterranean populations.
Fritz, Stefan; Fernandez-del Castillo, Carlos; Mino-Kenudson, Mari; Crippa, Stefano; Deshpande, Vikram; Lauwers, Gregory Y; Warshaw, Andrew L; Thayer, Sarah P; Iafrate, A John
2009-03-01
To determine whether intraductal papillary mucinous neoplasms of the pancreas (IPMNs) have a different genetic background compared with pancreatic ductal adenocarcinoma (PDAC). The biologic and clinical behavior of IPMNs and IPMN-associated adenocarcinomas differs from that of PDAC in having less aggressive tumor growth and significantly improved survival. To date, the molecular mechanisms underlying the clinical behavior of IPMNs are incompletely understood. A total of 128 cystic pancreatic lesions were prospectively identified over the course of 2 years. From the corresponding surgical specimens, 57 IPMNs were separated and subdivided by histologic criteria into those with low-grade dysplasia, moderate dysplasia, high-grade dysplasia, and invasive cancer. Twenty specimens were suitable for DNA isolation and subsequent array CGH. While none of the IPMNs with low-grade dysplasia displayed detectable chromosomal aberrations, IPMNs with moderate and high-grade dysplasia showed frequent copy number alterations. Commonly lost regions were located on chromosomes 5q, 6q, 10q, 11q, 13q, 18q, and 22q. The incidence of loss on chromosomes 5q, 6q, and 11q was significantly higher in IPMNs with high-grade dysplasia or invasion than in PDAC. Ten of 13 IPMNs with moderate dysplasia or malignancy had loss of part or all of chromosome 6q, with a minimal deleted region between linear positions 78.0 and 130.0. This study is the first to use array CGH to characterize IPMNs. Recurrent cytogenetic alterations were identified and were different from those described in PDAC. Array CGH may help distinguish between these 2 entities and give insight into the differences in their biology and prognosis.
Wartenberg, Martin; Cibin, Silvia; Zlobec, Inti; Vassella, Erik; Eppenberger-Castori, Serenella M M; Terracciano, Luigi; Eichmann, Micha; Worni, Mathias; Gloor, Beat; Perren, Aurel; Karamitopoulou, Eva
2018-04-16
Current clinical classification of pancreatic ductal adenocarcinoma (PDAC) is unable to predict prognosis or response to chemo- or immunotherapy and does not take into account the host reaction to PDAC cells. Our aim is to classify PDAC according to host- and tumor-related factors into clinically/biologically relevant subtypes by integrating molecular and microenvironmental findings. A well-characterized PDAC cohort (n=110) underwent next-generation sequencing with a hotspot cancer panel, while next-generation tissue microarrays were immunostained for CD3, CD4, CD8, CD20, PD-L1, p63, hyaluronan-mediated motility receptor (RHAMM) and DNA mismatch-repair proteins. Previous data on FOXP3 were integrated. Immune-cell counts and protein expression were correlated with tumor-derived driver mutations, clinicopathologic features (TNM, 8th edition, 2017), survival and epithelial-mesenchymal transition (EMT)-like tumor budding. Three PDAC subtypes were identified: the "immune-escape" subtype (54%), poor in T and B cells and enriched in FOXP3+ Tregs, with high-grade budding, frequent CDKN2A, SMAD4 and PIK3CA mutations and poor outcome; the "immune-rich" subtype (35%), rich in T and B cells and poorer in FOXP3+ Tregs, with infrequent budding, lower CDKN2A and PIK3CA mutation rates and better outcome, including a subpopulation with tertiary lymphoid tissue (TLT), mutations in DNA damage response genes (STK11, ATM) and the best outcome; and the "immune-exhausted" subtype (11%), with an immunogenic microenvironment and two subpopulations, one with PD-L1 expression and a high PIK3CA mutation rate, and a microsatellite-unstable subpopulation with a high prevalence of JAK3 mutations. The combination of low budding, low stromal FOXP3 counts, presence of TLTs and absence of CDKN2A mutations confers a significant survival advantage in PDAC patients. Immune host responses correlate with tumor characteristics, leading to morphologically recognizable PDAC subtypes with prognostic/predictive significance. Copyright ©2018
Le Roy, Caroline Ivanne; Passey, Jade Louise; Woodward, Martin John; La Ragione, Roberto Marcello; Claus, Sandrine Paule
2017-06-01
Pathogenic anaerobes of the genus Brachyspira are responsible for an increasing number of intestinal spirochaetosis (IS) cases in livestock, against which few approved treatments are available. Tiamulin is used to treat swine dysentery caused by Brachyspira spp. and recently has been used to manage avian intestinal spirochaetosis (AIS). The therapeutic dose used in chickens requires further evaluation, since cases of bacterial resistance to tiamulin have been reported. In this study, we evaluated the impact of tiamulin at varying concentrations on the metabolism of B. pilosicoli using a 1H-NMR-based metabonomics approach, allowing the capture of the overall bacterial metabolic response to antibiotic treatment. Based on growth curve studies, tiamulin impacted bacterial growth even at a very low concentration (0.008 μg/mL), although metabolic activity was barely affected 72 h after exposure to the antibiotic. Only the highest dose of tiamulin tested (0.250 μg/mL) caused a major metabolic shift. Results showed that below this concentration, bacteria could maintain a normal metabolic trajectory despite significant growth inhibition by the antibiotic, which may contribute to disease reemergence after antibiotic treatment. Indeed, we confirmed that B. pilosicoli remained viable even after exposure to the highest antibiotic dose. This paper stresses the need for renewed evaluation of bacterial viability after exposure to bacteriostatic agents such as tiamulin, to guarantee treatment efficacy and reduce the development of antibiotic resistance. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Anne L Baldock
Full Text Available Malignant gliomas are incurable, primary brain neoplasms noted for their potential to extensively invade brain parenchyma. Current methods of clinical imaging do not elucidate the full extent of brain invasion, making it difficult to predict which, if any, patients are likely to benefit from gross total resection. Our goal was to apply a mathematical modeling approach to estimate the overall tumor invasiveness on a patient-by-patient basis and determine whether gross total resection would improve survival in patients with relatively less invasive gliomas. In 243 patients presenting with contrast-enhancing gliomas, estimates of the relative invasiveness of each patient's tumor, in terms of the ratio of the net proliferation rate of the glioma cells to their net dispersal rate, were derived by applying a patient-specific mathematical model to routine pretreatment MR imaging. The effect of varying degrees of extent of resection on overall survival was assessed for cohorts of patients grouped by tumor invasiveness. We demonstrate that patients with more diffuse tumors showed no survival benefit (P = 0.532) from gross total resection over subtotal resection/biopsy, while those with nodular (less diffuse) tumors showed a significant benefit (P = 0.00142), with a striking median survival benefit of over eight months compared to subtotally resected tumors in the same cohort (an 80% improvement in survival time for gross total resection, seen only for nodular tumors). These results suggest that our patient-specific, model-based estimates of tumor invasiveness have clinical utility in surgical decision making. Quantification of relative invasiveness assessed from routinely obtained preoperative imaging provides a practical predictor of the benefit of gross total resection.
Kratz, Anna L; Murphy, Susan L; Braley, Tiffany J
2017-11-01
To describe the daily variability and patterns of pain, fatigue, depressed mood, and cognitive function in persons with multiple sclerosis (MS). Repeated-measures observational study of 7 consecutive days of home monitoring, including ecological momentary assessment (EMA) of symptoms. Multilevel mixed models were used to analyze data. General community. Ambulatory adults (N=107) with MS recruited through the University of Michigan and surrounding community. Not applicable. EMA measures of pain, fatigue, depressed mood, and cognitive function rated on a 0 to 10 scale, collected 5 times a day for 7 days. Cognitive function and depressed mood exhibited more stable within-person patterns than pain and fatigue, which varied considerably within person. All symptoms increased in intensity across the day (all P values statistically significant), with fatigue showing the most substantial increase. Notably, this diurnal increase varied by sex and age; women showed a continuous increase from wake to bedtime, whereas fatigue plateaued after 7 pm for men (wake-bed B=1.04, P=.004). For the oldest subgroup, diurnal increases were concentrated in the middle of the day compared with younger subgroups, which showed an earlier onset of fatigue increase and sustained increases until bedtime (wake-3 pm B=.04, P=.01; wake-7 pm B=.03, P=.02). Diurnal patterns of cognitive function varied by education; those with advanced college degrees showed a more stable pattern across the day, with significant differences compared with those with bachelor-level degrees in the evening (wake-7 pm B=-.47, P=.02; wake-bed B=-.45, P=.04). Findings suggest that chronic symptoms in MS are not static, even over a short time frame; rather, symptoms, fatigue and pain in particular, vary dynamically across and within days. Incorporation of EMA methods should be considered in the assessment of these chronic MS symptoms to enhance assessment and treatment strategies. Copyright © 2017 American Congress of Rehabilitation Medicine. Published by Elsevier
Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram
2017-10-07
From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has monitored participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative forced-choice questions that follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and that it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures into our theory of SL. © 2017 Cognitive Science Society, Inc.
Di Florio, Adriano
2017-10-01
In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented in both the ROOT/RooFit and GooFit frameworks, with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψφ invariant mass in the three-body decay B+ → J/ψφK+. GooFit is an open-source data-analysis tool under development that interfaces ROOT/RooFit to the CUDA platform on nVidia GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-ups with respect to the RooFit application parallelized on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process using multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood-ratio test statistic in different situations in which the Wilks theorem may or may not apply because its regularity conditions are not satisfied.
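The role of the Wilks theorem above can be illustrated with a small toy Monte Carlo in pure Python (a stand-in for the GooFit/CUDA machinery, not the CMS analysis itself): when the regularity conditions hold, the likelihood-ratio statistic computed on null-hypothesis toys follows a chi-squared distribution. The Gaussian-mean test below is the textbook case where this is exact:

```python
import random
import math

# Toy illustration: for n draws from N(mu, 1), the likelihood-ratio statistic
# for H0: mu = 0 versus a free mu is q = n * xbar^2, which by Wilks' theorem
# is distributed as chi2 with 1 degree of freedom under H0.
random.seed(1)

def lrt_statistic(sample):
    n = len(sample)
    xbar = sum(sample) / n
    return n * xbar * xbar

n_toys, n_obs = 20000, 50
q_values = [lrt_statistic([random.gauss(0.0, 1.0) for _ in range(n_obs)])
            for _ in range(n_toys)]

# If Wilks applies, P(q > 3.841) should be close to 5% (chi2(1) critical value).
frac = sum(q > 3.841 for q in q_values) / n_toys
```

When the regularity conditions fail (e.g. a parameter on the boundary, as for a signal yield constrained to be non-negative), the toy distribution of q departs from chi2(1), which is exactly what high-statistics toy generation is used to quantify.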
Mirosław Mrozkowiak; Hanna Żukowska
2015-01-01
Mrozkowiak Mirosław, Żukowska Hanna. Znaczenie Dobrego Krzesła, jako elementu szkolnego i domowego środowiska ucznia, w profilaktyce zaburzeń statyki postawy ciała = The significance of the Good Chair as part of children's school and home environment in the preventive treatment of body statics distortions. Journal of Education, Health and Sport. 2015;5(7):179-215. ISSN 2391-8306. DOI 10.5281/zenodo.19832 http://ojs.ukw.edu.pl/index.php/johs/article/view/2015%3B5%287%29%3A179-215 https:...
Fang, Yongxiang; Wit, Ernst
2008-01-01
Fisher’s combined probability test is the most commonly used method to test the overall significance of a set of independent p-values. However, Fisher’s statistic is clearly more sensitive to smaller p-values than to larger ones, and a single small p-value may overrule the other p-values and decide the test result. This is, in some cases, viewed as a flaw. In order to overcome this flaw and improve the power of the test, the joint tail probability of a set of p-values is proposed as a ...
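A minimal sketch of Fisher's method makes the described sensitivity concrete: the combined statistic is -2 * sum(ln p_i), which follows a chi-squared distribution with 2k degrees of freedom for k independent p-values. Because the degrees of freedom are always even, the chi-squared survival function has a closed form and no external library is needed:

```python
import math

def fisher_combined(pvalues):
    """Fisher's combined probability test: X = -2*sum(ln p_i) ~ chi2(2k)."""
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    k = len(pvalues)              # chi2 df = 2k (even), so sf has a closed form
    x = stat / 2.0
    term, total = 1.0, 0.0
    for i in range(k):            # sf(stat; 2k) = exp(-x) * sum_{i<k} x^i / i!
        total += term
        term *= x / (i + 1)
    return stat, math.exp(-x) * total
```

With p-values (0.5, 0.5) the combined p is about 0.60, but adding one extreme value, e.g. (1e-6, 0.5, 0.5), drives the combined p below 10^-3 even though two of the three tests are unremarkable. That single-value dominance is precisely the flaw the proposed joint tail probability aims to overcome.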
Souto, R Seoane; Martín-Rodero, A; Yeyati, A Levy
2016-12-23
We analyze the quantum quench dynamics in the formation of a phase-biased superconducting nanojunction. We find that in the absence of an external relaxation mechanism and for very general conditions the system gets trapped in a metastable state, corresponding to a nonequilibrium population of the Andreev bound states. The use of the time-dependent full counting statistics analysis allows us to extract information on the asymptotic population of even and odd many-body states, demonstrating that a universal behavior, dependent only on the Andreev state energy, is reached in the quantum point contact limit. These results shed light on recent experimental observations on quasiparticle trapping in superconducting atomic contacts.
Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M
2015-03-01
Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty of applying existing statistical procedures. In this article, we describe a data-analytic method for analyzing univariate (i.e., one-symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.
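The core phase-comparison idea can be sketched in a few lines of Python (the article's own method is implemented in SPSS and, unlike this simplified illustration, also handles serial dependence between successive observations). The symptom ratings below are hypothetical:

```python
import math

# Hypothetical 0-10 symptom ratings for a baseline phase and an intervention
# phase of a single client; higher scores mean worse symptoms.
baseline     = [7, 8, 6, 7, 8, 7, 6, 8]
intervention = [6, 5, 5, 4, 4, 3, 4, 3]

def phase_effect(a, b):
    """Mean phase difference and a Welch-type t statistic (ignores autocorrelation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    return mb - ma, (mb - ma) / se
```

Because single-case observations are typically autocorrelated, a naive t statistic like this one is anticonservative; the simulation studies mentioned above concern exactly how many observation points per phase are needed for a trustworthy test.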
Pardo-Igúzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2012-12-01
Many spectral analysis techniques have been designed assuming sequences sampled at a constant interval. However, there are empirical time series in the geosciences (sediment cores, fossil abundance data, isotope analyses, …) that do not follow regular sampling because of missing data, gapped data, random sampling or incomplete sequences, among other reasons. In general, interpolating an uneven series in order to obtain a succession with a constant sampling interval alters the spectral content of the series. In such cases it is preferable to follow an approach that works with the uneven data directly, avoiding the need for an explicit interpolation step. The Lomb-Scargle periodogram is a popular choice in such circumstances, as programs for its computation are available in the public domain. One new computer program for spectral analysis improves the standard Lomb-Scargle periodogram approach in two ways: (1) it explicitly adjusts the statistical significance for any bias introduced by variance-reduction smoothing, and (2) it uses a permutation test to evaluate confidence levels, which is better suited than parametric methods when neighbouring frequencies are highly correlated. Another novel program for cross-spectral analysis offers the advantage of estimating the Lomb-Scargle cross-periodogram of two uneven time series defined on the same interval, and it evaluates the confidence levels of the estimated cross-spectra by a non-parametric, computer-intensive permutation test. Thus, the cross-spectrum, the squared coherence spectrum, the phase spectrum, and the Monte Carlo statistical significance of the cross-spectrum and the squared-coherence spectrum can be obtained. Both programs are written in ANSI Fortran 77, in view of its simplicity and compatibility. The program code is in the public domain, provided on the website of the journal (http://www.iamg.org/index.php/publisher/articleview/frmArticleID/112/). Different examples (with simulated and
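The pairing of a Lomb-Scargle ordinate with a permutation test can be sketched as follows (in Python rather than the authors' Fortran 77, and without the variance-reduction smoothing adjustment the program additionally provides):

```python
import math
import random

def lomb_scargle(t, y, omega):
    """Classical Lomb-Scargle periodogram ordinate at angular frequency omega."""
    ybar = sum(y) / len(y)
    yc = [yi - ybar for yi in y]                     # work with the centred series
    tau = math.atan2(sum(math.sin(2 * omega * ti) for ti in t),
                     sum(math.cos(2 * omega * ti) for ti in t)) / (2 * omega)
    c = [math.cos(omega * (ti - tau)) for ti in t]
    s = [math.sin(omega * (ti - tau)) for ti in t]
    sc = sum(yi * ci for yi, ci in zip(yc, c))
    ss = sum(yi * si for yi, si in zip(yc, s))
    return 0.5 * (sc * sc / sum(ci * ci for ci in c) +
                  ss * ss / sum(si * si for si in s))

def permutation_pvalue(t, y, omega, n_perm=200, seed=0):
    """Significance of the ordinate at omega: shuffle y over the fixed times t."""
    rng = random.Random(seed)
    observed = lomb_scargle(t, y, omega)
    y_perm, hits = list(y), 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if lomb_scargle(t, y_perm, omega) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)                 # conservative permutation p-value
```

Shuffling the observations over the fixed sampling times preserves the uneven sampling pattern while destroying any periodic signal, which is why no parametric null distribution is needed, the key advantage when neighbouring frequencies are correlated.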
Directory of Open Access Journals (Sweden)
Ahmed Seid
2014-05-01
Full Text Available Cutaneous leishmaniasis (CL) is a neglected tropical disease strongly associated with poverty. Treatment is problematic and no vaccine is available. Ethiopia has seen new outbreaks in areas not previously known to be endemic, often with co-infection by the human immunodeficiency virus (HIV), at rates reaching 5.6% of cases. The present study concerns the development of a risk model based on environmental factors using geographical information systems (GIS), statistical analysis and modelling. Odds ratios (OR) from bivariate and multivariate logistic regression were used to evaluate the relative importance of environmental factors, accepting P ≤0.056 as the inclusion level for the model’s environmental variables. When estimating risk over the geographical surface, slope, elevation and annual rainfall were found to be good predictors of CL presence under both probabilistic and weighted overlay approaches. However, when considering Ethiopia as a whole, a minor difference was observed between the two methods, with the probabilistic technique giving a 22.5% estimate and the weighted overlay approach 19.5%. Calculating the population within the land surface estimated by the latter method, the total Ethiopian population at risk for CL was estimated at 28,955,035, mainly comprising people in the highlands of the regional states of Amhara, Oromia, Tigray and the Southern Nations, Nationalities and Peoples’ Region, one of the nine ethnic divisions in Ethiopia. Our environmental risk model provided an overall prediction accuracy of 90.4%. The approach proposed here can be replicated for other diseases to facilitate implementation of evidence-based, integrated disease control activities.
Dubois, Albertine; Hérard, Anne-Sophie; Delatour, Benoît; Hantraye, Philippe; Bonvento, Gilles; Dhenain, Marc; Delzescaux, Thierry
2010-06-01
Biomarkers and technologies similar to those used in humans are essential for the follow-up of Alzheimer's disease (AD) animal models, particularly for the clarification of mechanisms and the screening and validation of new candidate treatments. In humans, changes in brain metabolism can be detected by 2-deoxy-2-[(18)F]fluoro-D-glucose PET (FDG-PET) and assessed in a user-independent manner with dedicated software, such as Statistical Parametric Mapping (SPM). FDG-PET can be carried out in small animals, but its resolution is low compared with the size of rodent brain structures. In mouse models of AD, changes in cerebral glucose utilization are usually detected by [(14)C]-2-deoxyglucose (2DG) autoradiography, but this requires prior manual outlining of regions of interest (ROI) on selected sections. Here, we evaluate the feasibility of applying the SPM method to 3D autoradiographic data sets mapping brain metabolic activity in a transgenic mouse model of AD. We report the preliminary results obtained with 4 APP/PS1 (64+/-1 weeks) and 3 PS1 (65+/-2 weeks) mice. We also describe new procedures for the acquisition and use of "blockface" photographs and provide the first demonstration of their value for the 3D reconstruction and spatial normalization of post mortem mouse brain volumes. Despite the limited sample size, our results appear to be meaningful, consistent, and more comprehensive than findings from previously published studies based on conventional ROI-based methods. The establishment of statistical significance at the voxel level, rather than within a user-defined ROI, makes it possible to detect subtle differences more reliably in geometrically complex regions, such as the hippocampus. Our approach is generic and could easily be applied to other biomarkers and extended to other species and applications. Copyright 2010 Elsevier Inc. All rights reserved.
Xiong, Wu; Zhao, Qingyun; Zhao, Jun; Xun, Weibing; Li, Rong; Zhang, Ruifu; Wu, Huasong; Shen, Qirong
2015-07-01
In the present study, soil bacterial and fungal communities across vanilla continuous cropping time-series fields were assessed through deep pyrosequencing of 16S ribosomal RNA (rRNA) genes and internal transcribed spacer (ITS) regions. The results demonstrated that the long-term monoculture of vanilla significantly altered soil microbial communities. Soil fungal diversity index increased with consecutive cropping years, whereas soil bacterial diversity was relatively stable. Bray-Curtis dissimilarity cluster and UniFrac-weighted principal coordinate analysis (PCoA) revealed that monoculture time was the major determinant for fungal community structure, but not for bacterial community structure. The relative abundances (RAs) of the Firmicutes, Actinobacteria, Bacteroidetes, and Basidiomycota phyla were depleted along the years of vanilla monoculture. Pearson correlations at the phyla level demonstrated that Actinobacteria, Armatimonadetes, Bacteroidetes, Verrucomicrobia, and Firmicutes had significant negative correlations with vanilla disease index (DI), while no significant correlation for fungal phyla was observed. In addition, the amount of the pathogen Fusarium oxysporum accumulated with increasing years and was significantly positively correlated with vanilla DI. By contrast, the abundance of beneficial bacteria, including Bradyrhizobium and Bacillus, significantly decreased over time. In sum, soil weakness and vanilla stem wilt disease after long-term continuous cropping can be attributed to the alteration of the soil microbial community membership and structure, i.e., the reduction of the beneficial microbes and the accumulation of the fungal pathogen.
Directory of Open Access Journals (Sweden)
Yasumoto Matsui
2012-01-01
Full Text Available Areal bone mineral density (aBMD) is equivalent to bone mineral content (BMC) divided by area. We re-examined the significance of aBMD changes in aging by analyzing BMC and area separately. Subjects were 1167 community-dwelling Japanese men and women, aged 40–79 years. aBMDs of the femoral neck and lumbar spine were assessed by DXA twice, at a 6-year interval. The rates of change of BMC and area, as well as of aBMD, were calculated and described separately by age stratum and by sex. In the femoral neck region, aBMDs were significantly decreased in all age strata by an increase in area as well as by BMC loss, in the same pattern in both sexes. In the lumbar spine region, aBMDs decreased until the age of 60 in women, caused by a significant BMC decrease accompanying a small area change. In men, very differently, aBMDs increased after their 50s due to a BMC increase accompanied by an area increase. Separate analyses of BMC and area change revealed that the significance of aBMD changes in aging is very divergent among sites and between sexes. This may explain in part the dissociation between aBMD change and bone strength, suggesting that we should be more cautious when interpreting the meaning of aBMD changes.
Wilson, Robert M.
2001-01-01
Since 1750, the number of cataclysmic volcanic eruptions (volcanic explosivity index (VEI) >= 4) per decade has ranged from 2 to 11, with 96 percent located in the tropics and extratropical Northern Hemisphere. A two-point moving average of the volcanic time series has had higher values since the 1860s than before, being 8.00 in the 1910s (the highest value) and 6.50 in the 1980s, the highest since the 1910s peak. Because of the usual behavior of the first difference of the two-point moving averages, one infers that its value for the 1990s will measure approximately 6.50 +/- 1, implying that approximately 7 +/- 4 cataclysmic volcanic eruptions should be expected during the present decade (2000-2009). Because cataclysmic volcanic eruptions (especially those having VEI >= 5) nearly always have been associated with short-term episodes of global cooling, the occurrence of even one might confound our ability to assess the effects of global warming. Poisson probability distributions reveal that the probability of one or more events with a VEI >= 4 within the next ten years is >99 percent. It is approximately 49 percent for an event with a VEI >= 5, and 18 percent for an event with a VEI >= 6. Hence, the likelihood that a climatically significant volcanic eruption will occur within the next ten years appears reasonably high.
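The Poisson computation behind the quoted probabilities is P(N >= 1) = 1 - exp(-lambda) for a per-decade rate lambda. The VEI >= 4 rate (~7 per decade) comes from the text; the VEI >= 5 and VEI >= 6 rates below are back-solved from the reported 49% and 18% probabilities and are therefore illustrative, not values stated in the abstract:

```python
import math

def p_at_least_one(lam):
    """Poisson probability of one or more events: P(N >= 1) = 1 - exp(-lam)."""
    return 1.0 - math.exp(-lam)

# Per-decade eruption rates. The VEI>=4 rate follows the abstract's ~7/decade;
# the other two are chosen so the probabilities match the reported 49% and 18%.
rates = {"VEI>=4": 7.0, "VEI>=5": 0.67, "VEI>=6": 0.20}
probs = {cls: p_at_least_one(lam) for cls, lam in rates.items()}
```

This shows why the VEI >= 4 probability saturates near certainty: at 7 expected events per decade, only the exp(-7) ≈ 0.1% chance of zero events remains.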
Maric, M.; de Haan, M.; Hogendoorn, S.M.; Wolters, L.H.; Huizenga, H.M.
2015-01-01
Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a
Maric, Marija; de Haan, Else; Hogendoorn, Sanne M.; Wolters, Lidewij H.; Huizenga, Hilde M.
2015-01-01
Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a
Zha, Shanjie; Liu, Saixi; Su, Wenhao; Shi, Wei; Xiao, Guoqiang; Yan, Maocang; Liu, Guangxu
2017-12-01
It has been suggested that climate change may promote outbreaks of disease in the sea by altering host susceptibility, pathogen virulence, and host-pathogen interactions. However, the impacts of ocean acidification (OA) on the pathogen components of the bacterial community and on the host-pathogen interactions of marine bivalves are still poorly understood. Therefore, 16S rRNA high-throughput sequencing and a host-pathogen interaction analysis between the blood clam (Tegillarca granosa) and Vibrio harveyi were conducted in the present study to gain a better understanding of the ecological impacts of ocean acidification. The results revealed a significant impact of ocean acidification on the composition of the microbial community at laboratory scale. Notably, the abundance of Vibrio, a major group of pathogens of many marine organisms, was significantly increased under the ocean acidification condition. In addition, the survival rate and haemolytic activity of V. harveyi were significantly higher in the presence of haemolymph of OA-treated T. granosa, indicating a compromised immunity of the clam and an enhanced virulence of V. harveyi under future ocean acidification scenarios. In conclusion, the results obtained in this study suggest that future ocean acidification may increase the risk of Vibrio pathogen infection for marine bivalve species, such as blood clams. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lai, Jonathan H; Fleming, Kirsten E; Ly, Thai Yen; Pasternak, Sylvia; Godlewski, Marek; Doucette, Steve; Walsh, Noreen M
2015-09-01
Merkel cell polyomavirus is of oncogenic significance in approximately 80% of Merkel cell carcinomas. Morphological subcategories of the tumor differ in regard to viral status, the rare combined type being uniformly virus negative and the predominant pure type being mainly virus positive. Indications that different biological subsets of the tumor exist led us to explore this diversity. In an Eastern Canadian cohort of cases (75 patients; mean age, 76 years [range, 43-91]; male/female ratio, 43:32; 51 [68%] pure and 24 [32%] combined tumors), we semiquantitatively compared the immunohistochemical expression of 3 cellular proteins (p53, Bcl-2, and c-kit) in pure versus combined groups. Viral status was known in a subset of cases. The significant overexpression of p53 in the combined group (mean [SD], 153.8 [117.8] versus 121.6 [77.9]; P = .01) and the increased epidermal expression of this protein (p53 patches) in the same group lend credence to a primary etiologic role for sun damage in these cases. Expression of Bcl-2 and c-kit did not differ significantly between the 2 morphological groups. A relative increase in c-kit expression was significantly associated with a virus-negative status (median [interquartile range], 100 [60-115] versus 70 [0-100]; P = .03). Emerging data reveal divergent biological pathways in Merkel cell carcinoma, each with a characteristic immunohistochemical profile. Virus-positive tumors (all pure) exhibit high retinoblastoma protein and low p53 expression, whereas virus-negative cases (few pure and all combined) show high p53 and relatively high c-kit expression. The potential biological implications of this dichotomy call for consistent stratification of these tumors in future studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Herold, Christine; Hooli, Basavaraj V.; Mullin, Kristina; Liu, Tian; Roehr, Johannes T; Mattheisen, Manuel; Parrado, Antonio R.; Bertram, Lars; Lange, Christoph; Tanzi, Rudolph E.
2015-01-01
The genetic basis of Alzheimer's disease (AD) is complex and heterogeneous. Over 200 highly penetrant pathogenic variants in the genes APP, PSEN1 and PSEN2 cause a subset of early-onset familial Alzheimer's disease (EOFAD). On the other hand, susceptibility to late-onset forms of AD (LOAD) is indisputably associated with the ε4 allele of the gene APOE, and more recently with variants in more than two dozen additional genes identified in large-scale genome-wide association studies (GWAS) and meta-analyses. Nevertheless, although the heritability of AD is estimated to be as high as 80%, a large proportion of the underlying genetic factors remains to be elucidated. In this study we performed a systematic family-based genome-wide association study and meta-analysis on close to 15 million imputed variants from three large collections of AD families (~3,500 subjects from 1,070 families). Using a multivariate phenotype combining affection status and onset age, meta-analysis of the association results revealed three single nucleotide polymorphisms (SNPs) that achieved genome-wide significance for association with AD risk: rs7609954 in the gene PTPRG (P-value = 3.98·10−08), rs1347297 in the gene OSBPL6 (P-value = 4.53·10−08), and rs1513625 near PDCL3 (P-value = 4.28·10−08). In addition, rs72953347 in OSBPL6 (P-value = 6.36·10−07) and two SNPs in the gene CDKAL1 showed marginally significant association with LOAD (rs10456232, P-value: 4.76·10−07; rs62400067, P-value: 3.54·10−07). In summary, family-based GWAS meta-analysis of imputed SNPs revealed novel genomic variants in (or near) PTPRG, OSBPL6, and PDCL3 that influence risk for AD with genome-wide significance. PMID:26830138
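The genome-wide significance criterion invoked above is conventionally P < 5·10^−8, i.e. a Bonferroni correction of α = 0.05 for roughly one million independent common variants. A minimal Python sketch (the SNP list simply restates the P-values from the abstract; the threshold choice is the conventional one, not something specified by the authors):

```python
# Bonferroni-derived genome-wide significance threshold:
# alpha divided by ~1e6 independent common variants.
GENOME_WIDE_ALPHA = 0.05 / 1_000_000  # = 5e-8

# P-values reported in the abstract above.
snps = {
    "rs7609954 (PTPRG)": 3.98e-8,
    "rs1347297 (OSBPL6)": 4.53e-8,
    "rs1513625 (PDCL3)": 4.28e-8,
    "rs72953347 (OSBPL6)": 6.36e-7,
    "rs10456232 (CDKAL1)": 4.76e-7,
    "rs62400067 (CDKAL1)": 3.54e-7,
}

def genome_wide_significant(pvals, alpha=GENOME_WIDE_ALPHA):
    """Return only the SNPs whose P-value passes the threshold."""
    return {snp: p for snp, p in pvals.items() if p < alpha}

hits = genome_wide_significant(snps)
```

Applied to the six reported P-values, exactly the three SNPs the abstract calls genome-wide significant pass, and the three "marginal" ones do not.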
Tsai, M. C.
2017-12-01
High strain accumulation across the fold-and-thrust belt in southwestern Taiwan is revealed by continuous GPS (cGPS) and SAR interferometry. This high strain is generally accommodated by the major active structures in the fold-and-thrust belt of the Western Foothills in SW Taiwan, which connects to the accretionary wedge in the incipient arc-continent collision zone. The active structures across the zone of high strain accumulation include the deformation front around the Tainan Tableland and the Hochiali, Hsiaokangshan, Fangshan and Chishan faults. Among these active structures, the deformation pattern revealed by cGPS and SAR interferometry suggests that the Fangshan transfer fault may be a left-lateral fault zone with a thrust component, accommodating the westward differential motion of thrust sheets on both sides of the fault. In addition, the Chishan fault connects to the splay fault bordering the lower and upper slopes of the accretionary wedge, and could be the major seismogenic fault and an out-of-sequence thrust in SW Taiwan. Large earthquakes resulting from the reactivation of out-of-sequence thrusts have been observed along the Nankai accretionary wedge; assessment of the major seismogenic structures by strain accumulation between the frontal décollement and the out-of-sequence thrusts is therefore a crucial topic. The background seismicity is low, with mid-crust to mantle events observed inland and in the lower- and upper-slope domains offshore SW Taiwan, which rheologically implies that the upper crust of the accretionary wedge is more or less aseismic. This result may suggest that excess fluid pressure from the accretionary wedge has not only significantly weakened the prism materials and the major fault zones, but also drives landward extension of the accretionary wedge, which is why low seismicity is observed in the SW Taiwan area. Key words: continuous GPS, SAR interferometry, strain rate, out-of-sequence thrust.
Takebe, Jun; Ito, Shigeki; Miura, Shingo; Miyata, Kyohei; Ishibashi, Kanji
2012-01-01
A method of coating commercially pure titanium (cpTi) implants with a highly crystalline, thin hydroxyapatite (HA) layer using spark-discharge anodic oxidation followed by hydrothermal treatment (SA-treated cpTi) has been reported for use in clinical dentistry. We hypothesized that a thin, highly crystalline HA layer on the nanostructured anodic titanium oxide film of such SA-treated cpTi implant surfaces might be crucial to their surface-specific potential energy. To test this, we analyzed anodic oxide (AO) cpTi and SA-treated cpTi disks by SEM and AFM. Contact angles and the surface free energy of each disk surface were measured using FAMAS software. High-magnification SEM and AFM revealed the nanotopographic structure of the anodic titanium oxide film on SA-treated cpTi; this was not observed on the AO cpTi surface. The contact angle and surface free energy measurements also differed significantly between AO cpTi and SA-treated cpTi surfaces (Tukey's test, P<0.05). These data indicate that the changed physicochemical properties of an anodic titanium oxide film bearing HA crystals on an SA-treated cpTi surface may play a key role in osteoconduction during the process of osseointegration. Copyright © 2011 Elsevier B.V. All rights reserved.
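Surface free energy is derived from contact angles by resolving dispersive and polar components; the Owens-Wendt two-liquid method is one common scheme. The source does not state which model FAMAS applies, so the sketch below is an illustrative assumption, using standard literature surface-tension components for water and diiodomethane and invented contact angles:

```python
import math

# Standard literature surface-tension components (mJ/m^2) for two
# probe liquids commonly used in contact-angle goniometry.
WATER = {"total": 72.8, "disp": 21.8, "polar": 51.0}
DIIODOMETHANE = {"total": 50.8, "disp": 50.8, "polar": 0.0}

def owens_wendt(theta_water_deg, theta_dim_deg):
    """Solve the two Owens-Wendt equations
        gamma_L*(1 + cos theta) = 2*sqrt(gd_s*gd_L) + 2*sqrt(gp_s*gp_L)
    for the dispersive and polar components of the solid surface energy."""
    def coeffs(liquid, theta_deg):
        c = liquid["total"] * (1.0 + math.cos(math.radians(theta_deg)))
        return 2.0 * math.sqrt(liquid["disp"]), 2.0 * math.sqrt(liquid["polar"]), c
    a1, b1, c1 = coeffs(DIIODOMETHANE, theta_dim_deg)  # b1 == 0 (apolar probe)
    a2, b2, c2 = coeffs(WATER, theta_water_deg)
    x = c1 / a1                   # sqrt(dispersive component of the solid)
    y = (c2 - a2 * x) / b2        # sqrt(polar component of the solid)
    return x * x, y * y

# Hypothetical contact angles, not values from the study.
gamma_d, gamma_p = owens_wendt(theta_water_deg=70.0, theta_dim_deg=40.0)
total_energy = gamma_d + gamma_p
```

A more hydrophilic surface (smaller water contact angle) raises the polar component, which is the usual reading of "higher surface free energy" in such comparisons.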
Cardenas, Erick; Wu, Wei-Min; Leigh, Mary Beth; Carley, Jack; Carroll, Sue; Gentry, Terry; Luo, Jian; Watson, David; Gu, Baohua; Ginder-Vogel, Matthew; Kitanidis, Peter K.; Jardine, Philip M.; Zhou, Jizhong; Criddle, Craig S.; Marsh, Terence L.; Tiedje, James M.
2010-01-01
Massively parallel sequencing has provided a more affordable and high-throughput method to study microbial communities, although it has mostly been used in an exploratory fashion. We combined pyrosequencing with a strict indicator species statistical analysis to test whether bacteria specifically responded to the ethanol injection that successfully promoted dissimilatory uranium(VI) reduction in the subsurface of a uranium contamination plume at the Oak Ridge Field Research Center in Tennessee. Remediation was achieved with a hydraulic flow control consisting of an inner loop, where ethanol was injected, and an outer loop for flow-field protection. This strategy reduced uranium concentrations in groundwater to levels below 0.126 μM and created geochemical gradients in electron donors from the inner-loop injection well toward the outer loop and downgradient flow path. Our analysis of 15 sediment samples from the entire test area found significant indicator species that showed a high degree of adaptation to the three hydrochemically distinct conditions. Castellaniella and Rhodanobacter characterized areas with low pH, heavy metals, and low bioactivity, while sulfate-, Fe(III)-, and U(VI)-reducing bacteria (Desulfovibrio, Anaeromyxobacter, and Desulfosporosinus) were indicators of areas where U(VI) reduction occurred. The abundance of these bacteria, as well as that of the Fe(III) and U(VI) reducer Geobacter, correlated with the hydraulic connectivity to the substrate injection site, suggesting that the selected populations were a direct response to electron donor addition along the groundwater flow path. A false-discovery-rate approach was implemented to discard false positives arising by chance, given the large number of comparisons made. PMID:20729318
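The false-discovery-rate step mentioned above is commonly implemented with the Benjamini-Hochberg procedure; a minimal sketch (the p-values are hypothetical, and the original analysis may have used a different FDR method):

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg procedure: return indices of the tests declared
    significant while controlling the false discovery rate at level q."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m)*q; all ranks up to k pass.
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            threshold_rank = rank
    return sorted(order[:threshold_rank])

# Hypothetical indicator-value p-values for eight taxa.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
significant = benjamini_hochberg(pvals, q=0.05)
```

Note the step-up logic: a p-value can pass because a larger one further down the sorted list clears its own threshold, which is what distinguishes FDR control from a plain Bonferroni cut.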
Lippuner, Christoph; Ramakrishnan, Chandra; Basso, Walter U; Schmid, Marc W; Okoniewski, Michal; Smith, Nicholas C; Hässig, Michael; Deplazes, Peter; Hehl, Adrian B
2018-05-01
Cryptosporidium parvum is a major cause of diarrhoea in humans and animals. There are no vaccines and few drugs available to control C. parvum. In this study, we used RNA-Seq to compare gene expression in sporozoites and intracellular stages of C. parvum to identify genes likely to be important for successful completion of the parasite's life cycle and, thereby, possible targets for drugs or vaccines. We identified 3774 protein-encoding transcripts in C. parvum. Applying a stringent cut-off of eight fold for determination of differential expression, we identified 173 genes (26 coding for predicted secreted proteins) upregulated in sporozoites. On the other hand, expression of 1259 genes was upregulated in intestinal stages (merozoites/gamonts) with a gene ontology enrichment for 63 biological processes and upregulation of 117 genes in 23 metabolic pathways. There was no clear stage specificity of expression of AP2-domain containing transcription factors, although sporozoites had a relatively small repertoire of these important regulators. Our RNA-Seq analysis revealed a new calcium-dependent protein kinase, bringing the total number of known calcium-dependent protein kinases (CDPKs) in C. parvum to 11. One of these, CDPK1, was expressed in all stages, strengthening the notion that it is a valid drug target. By comparing parasites grown in vivo (which produce bona fide thick-walled oocysts) and in vitro (which are arrested in sexual development prior to oocyst generation) we were able to confirm that genes encoding oocyst wall proteins are expressed in gametocytes and that the proteins are stockpiled rather than generated de novo in zygotes. RNA-Seq analysis of C. parvum revealed genes expressed in a stage-specific manner and others whose expression is required at all stages of development. The functional significance of these can now be addressed through recent advances in transgenics for C. parvum, and may lead to the identification of viable drug and vaccine
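The eight-fold cut-off used above for calling stage-specific expression can be sketched as a simple log2 fold-change filter (gene names and counts below are hypothetical; a real RNA-Seq analysis would first normalise counts and test statistical significance):

```python
import math

def stage_regulated(expr_a, expr_b, fold_cutoff=8.0):
    """Return genes upregulated in stage A vs stage B by at least
    `fold_cutoff`. expr_a/expr_b map gene -> normalised expression."""
    cutoff = math.log2(fold_cutoff)
    up = []
    for gene in expr_a:
        a, b = expr_a[gene], expr_b[gene]
        if a > 0 and b > 0 and math.log2(a / b) >= cutoff:
            up.append(gene)
    return sorted(up)

# Hypothetical normalised read counts for three genes.
sporozoite = {"cgd1_100": 800.0, "cgd2_200": 64.0, "cgd3_300": 50.0}
intracellular = {"cgd1_100": 90.0, "cgd2_200": 8.0, "cgd3_300": 45.0}
up_in_sporozoites = stage_regulated(sporozoite, intracellular)
```

With the stringent 8-fold cut-off only the first two genes qualify; a gene expressed in all stages (like CDPK1 above) would fall below the threshold in every pairwise comparison.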
Directory of Open Access Journals (Sweden)
Raymond Kiu
2017-12-01
Full Text Available Clostridium perfringens is an important cause of animal and human infections; however, information about the genetic makeup of this pathogenic bacterium is currently limited. In this study, we sought to understand and characterise the genomic variation, pangenomic diversity, and key virulence traits of 56 C. perfringens strains, comprising 51 public genomes and 5 newly sequenced and annotated genomes, using Whole Genome Sequencing. Our investigation revealed that C. perfringens has an "open" pangenome comprising 11667 genes with 12.6% core genes, identified as the most divergent single-species Gram-positive bacterial pangenome currently reported. Our computational analyses also defined C. perfringens phylogeny (16S rRNA gene) in relation to some 25 Clostridium species, with C. baratii and C. sardiniense determined to be the closest relatives. Profiling virulence-associated factors confirmed the presence of well-characterised C. perfringens-associated exotoxin genes including α-toxin (plc), enterotoxin (cpe), and Perfringolysin O (pfo or pfoA), although interestingly there did not appear to be a close correlation between encoded toxin type and disease phenotype. Furthermore, genomic analysis indicated significant horizontal gene transfer events, as defined by the presence of prophage genomes, and a notable absence of CRISPR defence systems in >70% (40/56) of the strains. In relation to antimicrobial resistance mechanisms, tetracycline resistance genes (tet) and anti-defensin genes (mprF) were consistently detected in silico (tet: 75%; mprF: 100%). However, pre-antibiotic-era strain genomes did not encode tet, thus implying antimicrobial selective pressures in C. perfringens evolutionary history over the past 80 years. This study provides new genomic understanding of this genetically divergent multi-host bacterium, and further expands our knowledge of this medically and veterinarily important pathogen.
Kiu, Raymond; Caim, Shabhonam; Alexander, Sarah; Pachori, Purnima; Hall, Lindsay J
2017-01-01
Clostridium perfringens is an important cause of animal and human infections; however, information about the genetic makeup of this pathogenic bacterium is currently limited. In this study, we sought to understand and characterise the genomic variation, pangenomic diversity, and key virulence traits of 56 C. perfringens strains, comprising 51 public genomes and 5 newly sequenced and annotated genomes, using Whole Genome Sequencing. Our investigation revealed that C. perfringens has an "open" pangenome comprising 11667 genes with 12.6% core genes, identified as the most divergent single-species Gram-positive bacterial pangenome currently reported. Our computational analyses also defined C. perfringens phylogeny (16S rRNA gene) in relation to some 25 Clostridium species, with C. baratii and C. sardiniense determined to be the closest relatives. Profiling virulence-associated factors confirmed the presence of well-characterised C. perfringens-associated exotoxin genes including α-toxin (plc), enterotoxin (cpe), and Perfringolysin O (pfo or pfoA), although interestingly there did not appear to be a close correlation between encoded toxin type and disease phenotype. Furthermore, genomic analysis indicated significant horizontal gene transfer events, as defined by the presence of prophage genomes, and a notable absence of CRISPR defence systems in >70% (40/56) of the strains. In relation to antimicrobial resistance mechanisms, tetracycline resistance genes (tet) and anti-defensin genes (mprF) were consistently detected in silico (tet: 75%; mprF: 100%). However, pre-antibiotic-era strain genomes did not encode tet, thus implying antimicrobial selective pressures in C. perfringens evolutionary history over the past 80 years. This study provides new genomic understanding of this genetically divergent multi-host bacterium, and further expands our knowledge of this medically and veterinarily important pathogen.
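The pangenome statistics quoted above (total gene families and core fraction) follow from the presence/absence of gene families across strains. A toy sketch using the strict definition of "core" as present in every strain (real tools such as Roary typically relax this to ~99% of strains; strain names and gene sets below are invented):

```python
def pangenome_summary(strain_genes):
    """Given {strain: set of gene families}, return (pangenome size,
    core size, core fraction). Core = families present in all strains."""
    families = set().union(*strain_genes.values())
    core = set.intersection(*strain_genes.values())
    return len(families), len(core), len(core) / len(families)

# Toy presence/absence data for three hypothetical strains.
strains = {
    "S1": {"plc", "cpe", "pfoA", "tet", "mprF"},
    "S2": {"plc", "cpe", "mprF", "colA"},
    "S3": {"plc", "mprF", "tet", "nanI"},
}
pan, core, frac = pangenome_summary(strains)
```

An "open" pangenome is one whose total family count keeps growing as more strains are added, which is why a large pangenome can coexist with a small core fraction such as the 12.6% reported above.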
Hoover, Malachia; Adamian, Yvess; Brown, Mark; Maawy, Ali; Chang, Alexander; Lee, Jacqueline; Gharibi, Armen; Katz, Matthew H; Fleming, Jason; Hoffman, Robert M; Bouvet, Michael; Doebler, Robert; Kelber, Jonathan A
2017-01-24
Next-generation sequencing (NGS) can identify and validate new biomarkers of cancer onset, progression and therapy resistance. Substantial archives of formalin-fixed, paraffin-embedded (FFPE) cancer samples from patients represent a rich resource for linking molecular signatures to clinical data. However, performing NGS on FFPE samples is limited by poor RNA purification methods. To address this hurdle, we developed an improved methodology for extracting high-quality RNA from FFPE samples. By briefly integrating a newly-designed micro-homogenizing (mH) tool with commercially available FFPE RNA extraction protocols, RNA recovery is increased by approximately 3-fold while maintaining standard A260/A280 ratios and RNA quality index (RQI) values. Furthermore, we demonstrate that the mH-purified FFPE RNAs are longer and of higher integrity. Previous studies have suggested that pancreatic ductal adenocarcinoma (PDAC) gene expression signatures vary significantly under in vitro versus in vivo and in vivo subcutaneous versus orthotopic conditions. By using our improved mH-based method, we were able to preserve established expression patterns of KRas-dependency genes within these three unique microenvironments. Finally, expression analysis of novel biomarkers in KRas mutant PDAC samples revealed that PEAK1 decreases and MST1R increases by over 100-fold in orthotopic versus subcutaneous microenvironments. Interestingly, however, only PEAK1 levels remain elevated in orthotopically grown KRas wild-type PDAC cells. These results demonstrate the critical nature of the orthotopic tumor microenvironment when evaluating the clinical relevance of new biomarkers in cells or patient-derived samples. Furthermore, this new mH-based FFPE RNA extraction method has the potential to enhance and expand future FFPE-RNA-NGS cancer biomarker studies.
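The yield and A260/A280 figures above follow standard spectrophotometric conventions: an A260 of 1.0 corresponds to roughly 40 µg/mL of single-stranded RNA, and pure RNA shows an A260/A280 ratio near 2.0. A small sketch of those conversions (thresholds are the usual rule-of-thumb values, not ones specified in the study):

```python
def rna_concentration(a260, dilution=1.0):
    """Approximate RNA concentration in ug/mL from absorbance at 260 nm
    (A260 of 1.0 ~ 40 ug/mL for single-stranded RNA)."""
    return a260 * 40.0 * dilution

def purity_ok(a260, a280, lo=1.8, hi=2.2):
    """Pure RNA typically shows an A260/A280 ratio near 2.0;
    lower ratios suggest protein or phenol carry-over."""
    ratio = a260 / a280
    return lo <= ratio <= hi

conc = rna_concentration(0.5)       # 0.5 absorbance units of RNA
acceptable = purity_ok(0.5, 0.25)   # ratio of exactly 2.0
```

A ~3-fold recovery improvement, as reported for the mH tool, would show up as a proportionally higher A260 at the same input, with the ratio-based purity check unchanged.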
Directory of Open Access Journals (Sweden)
Prasanna Vidyasekar
Full Text Available Zero gravity causes several changes in metabolic and functional aspects of the human body, and experiments in space flight have demonstrated alterations in cancer growth and progression. This study reports the genome-wide expression profiling of a colorectal cancer cell line, DLD-1, and a lymphoblast leukemic cell line, MOLT-4, under simulated microgravity in an effort to understand central processes and cellular functions that are dysregulated in both cell lines. Altered cell morphology, reduced cell viability and an aberrant cell cycle profile in comparison to their static controls were observed in both cell lines under microgravity. The cell cycle of DLD-1 cells was markedly affected, with reduced viability, reduced colony-forming ability, an apoptotic population and dysregulation of cell cycle genes, oncogenes, and cancer progression and prognostic markers. DNA microarray analysis revealed 1801 upregulated and 2542 downregulated genes (>2 fold) in DLD-1 cultures under microgravity, while MOLT-4 cultures differentially expressed 349 upregulated and 444 downregulated genes (>2 fold) under microgravity. The loss in cell proliferative capacity was corroborated by the downregulation of the cell cycle process, as demonstrated by functional clustering of DNA microarray data using gene ontology terms. The genome-wide expression profile also showed significant dysregulation of the post-transcriptional gene silencing machinery and of multiple microRNA host genes that are potential tumor suppressors and proto-oncogenes, including MIR22HG, MIR17HG and MIR21HG. MIR22HG, a tumor-suppressor gene, was one of the most upregulated genes in the microarray data, showing a 4.4 log fold upregulation under microgravity. Real-time PCR validated the dysregulation of the host gene by demonstrating a 4.18 log fold upregulation of the miR-22 microRNA. Microarray data also showed dysregulation of the direct miR-22 targets SP1, CDK6 and CCNA2.
Yin, Ke; Dou, Xiaomin; Ren, Fei; Chan, Wei-Ping; Chang, Victor Wei-Chung
2018-02-15
Bottom ashes generated from municipal solid waste incineration have gained increasing popularity as alternative construction materials; however, they contain elevated levels of heavy metals, posing a challenge for their unrestricted use. Different leaching methods have been developed to quantify the leaching potential of incineration bottom ashes (IBA) and thereby guide their environmentally friendly application. Yet IBA applications are diverse and the in situ environment is often complicated, which challenges regulation. In this study, leaching tests were conducted using batch and column leaching methods with seawater, as opposed to deionized water, to unveil the metal leaching potential of IBA subjected to a salty environment, which is commonly encountered when using IBA in land reclamation yet is not well understood. Statistical analysis of the different leaching methods suggested disparate performance between seawater and deionized water, primarily ascribed to ionic strength. The impact of the leachant is metal-specific, depends on the leaching method, and is a function of the intrinsic characteristics of the incineration bottom ashes. Leaching performance was further compared on additional perspectives, e.g. leaching approach and liquid-to-solid ratio, indicating complex leaching potentials dominated by combined geochemistry. It is necessary to develop application-oriented leaching methods with corresponding leaching criteria to preclude discrimination between different applications, e.g., terrestrial applications vs. land reclamation. Copyright © 2017 Elsevier B.V. All rights reserved.
Hsu, Tung-Shin; McPherron, R. L.
2002-11-01
An outstanding problem in magnetospheric physics is deciding whether substorms are always triggered by external changes in the interplanetary magnetic field (IMF) or solar wind plasma, or whether they sometimes occur spontaneously. Over the past decade, arguments have been made on both sides of this issue. In fact, there is considerable evidence that some substorms are triggered. However, equally persuasive examples of substorms with no obvious trigger have been found. Because of conflicting views on this subject, further work is required to determine whether there is a physical relation between IMF triggers and substorm onset. In the work reported here a list of substorm onsets was created using two independent substorm signatures: sudden changes in the slope of the AL index and the start of a Pi 2 pulsation burst. Possible IMF triggers were determined from ISEE-2 observations. With the ISEE spacecraft near local noon immediately upstream of the bow shock, there can be little question about the propagation delay to the magnetopause or whether a particular IMF feature hits the subsolar magnetopause. This eliminates the objections that the calculated arrival time is subject to a large error or that the solar wind monitor missed a potential trigger incident at the subsolar point. Using a less familiar technique, point-process statistics, we find that the time delays between substorm onsets and the propagated arrival times of IMF triggers are clustered around zero. We estimate for independent processes that the probability of this clustering arising by chance alone is about 10^−11. If we take into account the requirement that the IMF must have been southward prior to the onset, the probability of clustering is higher, ~10^−5, but still extremely unlikely. Thus it is not possible to ascribe the apparent relation between IMF northward turnings and substorm onset to coincidence.
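The chance-coincidence probability argued above can be approximated by Monte Carlo simulation of two independent point processes (all rates and window widths below are invented for illustration, not the paper's values):

```python
import random

def chance_clustering_probability(n_onsets, n_triggers, span_hours,
                                  window, trials=300, seed=1):
    """Monte Carlo estimate of the probability that an onset falls within
    +/- `window` hours of the nearest trigger purely by chance, assuming
    onsets and triggers are independent and uniformly timed."""
    rng = random.Random(seed)
    close = total = 0
    for _ in range(trials):
        triggers = [rng.uniform(0.0, span_hours) for _ in range(n_triggers)]
        for _ in range(n_onsets):
            onset = rng.uniform(0.0, span_hours)
            nearest = min(abs(onset - t) for t in triggers)
            total += 1
            if nearest <= window:
                close += 1
    return close / total

# With 100 triggers over 1000 hours and a +/- 0.5 h window, chance
# coincidence covers roughly 10% of the timeline.
p_chance = chance_clustering_probability(
    n_onsets=40, n_triggers=100, span_hours=1000.0, window=0.5)
```

An observed coincidence rate far above this null estimate is what allows the paper's extremely small by-chance probabilities to be claimed.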
Harris, Joshua D; Brand, Jefferson C; Cote, Mark P; Dhawan, Aman
2017-08-01
Within the health care environment, there has been a recent and appropriate trend towards emphasizing the value of care provision. Reduced cost and higher quality improve the value of care. Quality is a challenging, heterogeneous, variably defined concept. At the core of quality is the patient's outcome, quantified by a vast assortment of subjective and objective outcome measures. There has been a recent evolution towards evidence-based medicine in health care, clearly elucidating the role of high-quality evidence across groups of patients and studies. Synthetic studies, such as systematic reviews and meta-analyses, are at the top of the evidence-based medicine hierarchy. Thus, these investigations may be the best potential source of guidance for diagnostic, therapeutic, prognostic, and economic medical decision making. Systematic reviews critically appraise and synthesize the best available evidence to provide a conclusion statement (a "take-home point") in response to a specific answerable clinical question. A meta-analysis uses statistical methods to quantitatively combine data from single studies. Meta-analyses should be performed on homogeneous studies of high methodological quality and high-level evidence (Level I or II, i.e., randomized studies) to minimize bias from confounding variables. When it is known that the literature is inadequate, or a recent systematic review has already been performed and demonstrated insufficient data, a new systematic review does not add anything meaningful to the literature. PROSPERO registration and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines assist authors in the design and conduct of systematic reviews and should always be used. Complete transparency in the conduct of the review permits reproducibility and improves the fidelity of the conclusions. Pooling of data from overly dissimilar investigations should be avoided. This particularly applies to Level IV evidence, that is, noncomparative investigations
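The quantitative combination step of a meta-analysis is classically an inverse-variance (fixed-effect) weighted average: precise studies get large weights, imprecise ones small weights. A minimal sketch with invented study effects (a real meta-analysis would also assess heterogeneity and consider a random-effects model):

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical treatment effects (mean differences) from three RCTs,
# with their within-study variances.
effects = [1.2, 0.8, 1.0]
variances = [0.04, 0.09, 0.06]
pooled, se = fixed_effect_meta(effects, variances)
```

The pooled standard error is smaller than any single study's, which is the statistical payoff of combining homogeneous studies, and also why pooling dissimilar studies (as cautioned above) produces misleadingly precise answers.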
Berry, Luke; Poudel, Saroj; Tokmina-Lukaszewska, Monika; Colman, Daniel R; Nguyen, Diep M N; Schut, Gerrit J; Adams, Michael W W; Peters, John W; Boyd, Eric S; Bothner, Brian
2018-01-01
Recent investigations into ferredoxin-dependent transhydrogenases, a class of enzymes responsible for electron transport, have highlighted the biological importance of flavin-based electron bifurcation (FBEB). FBEB generates biomolecules with very low reduction potential by coupling the oxidation of an electron donor of intermediate potential to the reduction of high- and low-potential molecules. Bifurcating systems can generate biomolecules with very low reduction potentials, such as reduced ferredoxin (Fd), from species such as NADPH. Metabolic systems that use bifurcation are more efficient and confer a competitive advantage on the organisms that harbor them. Structural models are now available for two NADH-dependent ferredoxin-NADP+ oxidoreductase (Nfn) complexes. These models, together with spectroscopic studies, have provided considerable insight into the catalytic process of FBEB. However, much about the mechanism and regulation of these multi-subunit proteins remains unclear. Using hydrogen/deuterium exchange mass spectrometry (HDX-MS) and statistical coupling analysis (SCA), we identified specific pathways of communication within the model FBEB system, Nfn from Pyrococcus furiosus, under conditions corresponding to each step of the catalytic cycle. HDX-MS revealed evidence for allosteric coupling across protein subunits upon nucleotide and ferredoxin binding. SCA uncovered a network of co-evolving residues that can provide connectivity across the complex. Together, the HDX-MS and SCA data show that protein allostery occurs across the ensemble of iron-sulfur cofactors and ligand binding sites via specific pathways that connect domains, allowing them to function as dynamically coordinated units. Copyright © 2017 Elsevier B.V. All rights reserved.
Kim, Yee Suk; Lee, Sungin; Zong, Nansu; Kahng, Jimin
2017-01-01
The present study aimed to investigate differences in prognosis based on human papillomavirus (HPV) infection, persistent infection and genotype variations for patients exhibiting atypical squamous cells of undetermined significance (ASCUS) in their initial Papanicolaou (PAP) test results. A latent Dirichlet allocation (LDA)-based tool was developed that may offer a facilitated means of communication to be employed during patient-doctor consultations. The present study assessed 491 patients (139 HPV-positive and 352 HPV-negative cases) with a PAP test result of ASCUS and a follow-up period ≥2 years. Patients underwent PAP and HPV DNA chip tests between January 2006 and January 2009. The HPV-positive subjects were followed up with at least 2 instances of PAP and HPV DNA chip tests. The most common genotypes observed were HPV-16 (25.9%, 36/139), HPV-52 (14.4%, 20/139), HPV-58 (13.7%, 19/139), HPV-56 (11.5%, 16/139), HPV-51 (9.4%, 13/139) and HPV-18 (8.6%, 12/139). A total of 33.3% (12/36) of patients positive for HPV-16 had cervical intraepithelial neoplasia (CIN)2 or a worse result, which was significantly higher than the prevalence of CIN2 of 1.8% (8/455) in patients negative for HPV-16. The rate was also higher in patients aged ≥51 years (38.7%) than in those aged ≤50 years (20.4%; P=0.036). Progression from persistent infection to CIN2 or worse (19/34, 55.9%) was higher than clearance (0/105, 0.0%), and was associated with age and a long infection period. Therefore, LDA results may be presented as explanatory evidence during time-constrained patient-doctor consultations in order to deliver information regarding the patient's status. PMID:28587376
Versace, Amelia; Almeida, Jorge R C; Hassel, Stefanie; Walsh, Nicholas D; Novelli, Massimiliano; Klein, Crystal R; Kupfer, David J; Phillips, Mary L
2008-09-01
Diffusion tensor imaging (DTI) studies in adults with bipolar disorder (BD) indicate altered white matter (WM) in the orbitomedial prefrontal cortex (OMPFC), potentially underlying abnormal prefrontal corticolimbic connectivity and mood dysregulation in BD. To use tract-based spatial statistics (TBSS) to examine the WM skeleton (ie, the most compact whole-brain WM) in subjects with BD vs healthy control subjects. Cross-sectional, case-control, whole-brain DTI using TBSS. University research institute. Fifty-six individuals, 31 having a DSM-IV diagnosis of BD type I (mean age, 35.9 years [age range, 24-52 years]) and 25 controls (mean age, 29.5 years [age range, 19-52 years]). Fractional anisotropy (FA) and longitudinal and radial diffusivities in subjects with BD vs controls (covarying for age) and their relationships with clinical and demographic variables. Subjects with BD vs controls had significantly greater FA (t > 3.0) in the left uncinate fasciculus (reduced radial diffusivity distally and increased longitudinal diffusivity centrally), left optic radiation (increased longitudinal diffusivity), and right anterothalamic radiation (no significant diffusivity change). Subjects with BD vs controls had significantly reduced FA (t > 3.0) in the right uncinate fasciculus (greater radial diffusivity). Among subjects with BD, there were significant negative correlations between age and FA in the right anterothalamic radiation, as well as between medication load and FA in the left optic radiation. Decreased FA was observed in the left optic radiation and in the right anterothalamic radiation among subjects with BD taking vs those not taking mood stabilizers, as well as in the left optic radiation among depressed vs remitted subjects with BD. Subjects having BD with vs without lifetime alcohol or other drug abuse had significantly decreased FA in the left uncinate fasciculus. To our knowledge, this is the first study to use TBSS to examine WM in subjects with BD. Subjects with BD vs controls showed greater WM FA in the left OMPFC that
Ma, Ji; Yang, Bingxian; Zhu, Wei; Sun, Lianli; Tian, Jingkui; Wang, Xumin
2013-10-10
Mahonia bealei (Berberidaceae) is a frequently used traditional Chinese medicinal plant with efficient anti-inflammatory ability. This plant is one of the sources of berberine, a new cholesterol-lowering drug with anti-diabetic activity. We have sequenced the complete nucleotide sequence of the chloroplast (cp) genome of M. bealei. The complete cp genome of M. bealei is 164,792 bp in length, and has a typical structure with large (LSC, 73,052 bp) and small (SSC, 18,591 bp) single-copy regions separated by a pair of inverted repeats (IRs, 36,501 bp) of large size. The Mahonia cp genome contains 111 unique genes, and 39 genes are duplicated in the IR regions. The gene order and content of M. bealei are largely unrearranged, which is consistent with the hypothesis that large IRs stabilize the cp genome and reduce the probability of gene loss and gain during evolution. A large IR expansion of over 12 kb has occurred in M. bealei: 15 genes (rps19, rpl22, rps3, rpl16, rpl14, rps8, infA, rpl36, rps11, petD, petB, psbH, psbN, psbT and psbB) have gained an additional copy in the IRs. The IR expansion rearrangement occurred via a double-strand DNA break and subsequent repair, which differs from the ordinary gene conversion mechanism. Repeat analysis identified 39 direct/inverted repeats 30 bp or longer with a sequence identity ≥ 90%. Analysis also revealed 75 simple sequence repeat (SSR) loci, almost all composed of A or T, contributing to a distinct bias in base composition. Comparison of protein-coding sequences with ESTs revealed 9 putative RNA editing sites, 5 of which result in non-synonymous modifications in rpoC1, rps2, rps19 and ycf1. Phylogenetic analysis using maximum parsimony (MP) and maximum likelihood (ML) was performed on a dataset of 65 protein-coding genes from 25 taxa; it yielded a tree topology identical to previous plastid-based trees and provides strong support for the sister relationship between Ranunculaceae and Berberidaceae.
Directory of Open Access Journals (Sweden)
Lu Xin-Yan
2009-11-01
Abstract Background Plasmablastic lymphoma (PL) is a subtype of diffuse large B-cell lymphoma (DLBCL). Studies have suggested that tumors with PL morphology represent a group of neoplasms with clinicopathologic characteristics corresponding to different entities, including extramedullary plasmablastic tumors associated with plasma cell myeloma (PCM). The goal of the current study was to evaluate the genetic similarities and differences among PL, DLBCL (AIDS-related and non-AIDS-related) and PCM using array-based comparative genomic hybridization. Results Examination of genomic data in PL revealed that the most frequent segmental gains (> 40%) include: 1p36.11-1p36.33, 1p34.1-1p36.13, 1q21.1-1q23.1, 7q11.2-7q11.23, 11q12-11q13.2 and 22q12.2-22q13.3. This correlated with segmental gains occurring at high frequency in DLBCL (AIDS-related and non-AIDS-related) cases. Some segmental gains and losses occurred in PL but not in the other types of lymphoma, suggesting that these foci may contain genes responsible for the differentiation of this lymphoma. Additionally, some segmental gains and losses occurred only in PL and AIDS-associated DLBCL, suggesting that these foci may be associated with HIV infection. Furthermore, some segmental gains and losses occurred only in PL and PCM, suggesting that these lesions may be related to plasmacytic differentiation. Conclusion To the best of our knowledge, the current study represents the first genomic exploration of PL. The genomic aberration pattern of PL appears to be more similar to that of DLBCL (AIDS-related or non-AIDS-related) than to PCM. Our findings suggest that PL may remain best classified as a subtype of DLBCL, at least at the genome level.
International Nuclear Information System (INIS)
Cheng, S.; Chen, M.; Li, Y.; Wang, J.; Sun, X.; Wang, J.
2017-01-01
To identify genetic components involved in high growth vigor in F1 Populus section Tacamahaca hybrid plants, high- and low-vigor plants showing significant differences in apical dominance during a rapid growth period were selected. Apical bud transcriptomes of the high- and low-growth-vigor hybrids and their parents were analyzed using high-throughput RNA sequencing on an Illumina HiSeq 2000 platform. A total of 5,542 genes were differentially expressed between the high-growth-vigor hybrid and its parents; these genes were significantly enriched in pathways related to processes such as photosynthesis, pyrimidine ribonucleotide biosynthesis and nucleoside metabolism. There were 1,410 differentially expressed genes between the high- and low-growth-vigor hybrids; these genes were mainly involved in photosynthesis, the chlorophyll biosynthetic process, carbon fixation in photosynthetic organisms, porphyrin and chlorophyll metabolism, and nitrogen metabolism. Moreover, a k-core gene co-expression network analysis was performed to identify the potential functions of genes related to high growth vigor. The functions of 8 selected candidate genes were associated mainly with circadian rhythm, water transport, cellulose catabolic processes, sucrose biosynthesis, pyrimidine ribonucleotide biosynthesis, purine nucleotide biosynthesis, meristem maintenance, and carbohydrate metabolism. Our results may contribute to a better understanding of the molecular basis of high growth vigor in hybrids and its regulation. (author)
Sivayoganathan, Dhakshana; Maruthini, Deivanayagam; Glanville, Julie M; Balen, Adam H
2011-12-01
This study aimed to compare the spectrum of polycystic ovary syndrome (PCOS) symptoms in patients from four different specialist clinics. A prospective cross-sectional observational study. The study was conducted at the infertility, gynaecology, endocrine and dermatology clinics at Leeds General Infirmary, U.K. Seventy women presenting with features of PCOS: 20 from infertility, 17 from gynaecology, 17 from dermatology and 16 from endocrine clinics. Participants were assessed for symptoms and signs of PCOS and underwent a full endocrine and metabolic profile and a pelvic ultrasound scan. All subjects had experienced menstrual problems, 81% were overweight, 86% had polycystic ovaries on ultrasound, 56% had hirsutism, 53% had acne, 23% had acanthosis nigricans, 16% had alopecia and 38% had previously undiagnosed impaired glucose tolerance (IGT) or diabetes. Significant differences existed between the four clinic groups with regard to menstrual patterns (p = 0.0234), the frequency distribution of presenting symptoms, and the percentage of patients who had already been diagnosed as having PCOS (p = 0.0088). This study emphasizes the importance of understanding the full spectrum of PCOS as presented to different specialty clinics. Not only is the syndrome underdiagnosed, but so are its significant associated morbidities, such as IGT and type 2 diabetes. Different specialists need to appreciate the spectrum of health problems for women with PCOS that may extend beyond the specific symptoms that precipitated the initial referral.
Directory of Open Access Journals (Sweden)
Helene eFalentin
2016-04-01
Mastitis is a mammary gland inflammatory disease often due to bacterial infections. Like many other infections, it used to be considered a host-pathogen interaction driven by host and bacterial determinants. Until now, the involvement of the bovine mammary gland microbiota in the host-pathogen interaction has been poorly investigated, and mainly during the infectious episode. In this study, the bovine teat microbiome was investigated in 31 quarters corresponding to 27 animals, all free of inflammation at sampling time but with different histories regarding mastitis: from no episode of mastitis in any previous lactation (Healthy quarter, Hq) to one or several clinical mastitis events (Mastitic quarter, Mq). Several quarters whose status was unclear (possible history of subclinical mastitis) were classified as NDq. Total bacterial DNA was extracted from foremilk samples and swab samples of the teat canal. Taxonomic profiles were determined by pyrosequencing of 16S amplicons of the V3-4 region. Hq quarters showed a higher diversity than Mq ones (Shannon index: ~8 and 6, respectively). Clustering of the quarters based on their bacterial composition separated Mq and Hq quarters into two clusters (C1 and C2, respectively). Discriminant analysis of taxonomic profiles between these clusters revealed several differences and allowed the identification of taxonomic markers related to mastitis history. C2 quarters were associated with a higher proportion of the Clostridia class (including genera such as Ruminococcus, Oscillospira, Roseburia, Dorea, etc.), the Bacteroidetes phylum (Prevotella, Bacteroides, Paludibacter, etc.), and the Bifidobacteriales order (Bifidobacterium), whereas C1 quarters showed a higher proportion of the Bacilli class (Staphylococcus) and the Chlamydiia class. These results indicate that the microbiota is altered in udders which have already developed mastitis, even far from the...
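The Shannon index contrasted above (~8 for healthy vs ~6 for mastitic quarters) is computed from taxonomic relative abundances. A minimal sketch follows; the count vectors and the choice of log base 2 are our illustrative assumptions, not the study's data or pipeline.

```python
import math

def shannon_index(counts, base=2):
    """Shannon diversity H = -sum(p_i * log(p_i)) over taxa with nonzero counts."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p, base) for p in props)

# Hypothetical OTU count vectors for two quarters (illustration only):
# the "healthy" community is more even, so its index is higher.
healthy = [120, 95, 80, 60, 55, 40, 30, 20, 10, 5]
mastitic = [400, 50, 30, 10, 5]
print(shannon_index(healthy), shannon_index(mastitic))
```

A community dominated by one taxon scores near zero; a perfectly even community of k taxa scores log2(k).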
Understanding Statistics - Cancer Statistics
Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.
DEFF Research Database (Denmark)
Kantelinen, Jukka; Hansen, Thomas V O; Kansikas, Minttu
2011-01-01
Inherited pathogenic mutations in the mismatch repair (MMR) genes, MSH2, MLH1, MSH6, and PMS2 predispose to Lynch syndrome (LS). However, the finding of a variant or variants of uncertain significance (VUS) in affected family members complicates the risk assessment. Here, we describe a putative LS...... and the tumor pathological data suggested that the missense variation in MSH2, the more common susceptibility gene in LS, would be the predisposing alteration. However, MSH2 VUS was surprisingly found to be MMR proficient in an in vitro MMR assay and a tolerant alteration in silico. By supplying evidence...... identified VUS before predictive gene testing and genetic counseling are offered to a family....
Directory of Open Access Journals (Sweden)
Molloy Mark P
2011-06-01
Abstract Background Glucagon is an important hormone in the regulation of glucose homeostasis, particularly in the maintenance of euglycemia and prevention of hypoglycemia. In type 2 Diabetes Mellitus (T2DM), glucagon levels are elevated in both the fasted and postprandial states, which contributes to inappropriate hyperglycemia through excessive hepatic glucose production. Efforts to discover and evaluate glucagon receptor antagonists for the treatment of T2DM have been ongoing for approximately two decades, with the challenge being to identify an agent with appropriate pharmaceutical properties and efficacy relative to potential side effects. We sought to determine the hepatic and systemic consequences of full glucagon receptor antagonism through the study of the glucagon receptor knock-out mouse (Gcgr-/-) compared to wild-type littermates. Results Liver transcriptomics was performed using Affymetrix expression array profiling, and liver proteomics was performed by iTRAQ global protein analysis. To complement the transcriptomic and proteomic analyses, we also conducted metabolite profiling (~200 analytes) using mass spectrometry in plasma. Overall, there was excellent concordance (R = 0.88) between the transcript and protein analyses for changes associated with receptor knock-out. Pathway analysis tools were used to map the metabolic processes in liver altered by glucagon receptor ablation, the most notable being significant down-regulation of gluconeogenesis, amino acid catabolism, and fatty acid oxidation, with significant up-regulation of glycolysis, fatty acid synthesis, and cholesterol biosynthesis. These changes at the level of the liver were manifested through an altered plasma metabolite profile in the receptor knock-out mice, e.g. decreased glucose and glucose-derived metabolites, and increased amino acid, cholesterol, and bile acid levels. Conclusions In sum, the results of this study suggest that the complete ablation...
Directory of Open Access Journals (Sweden)
Lucero David E
2007-06-01
' and TCZ2 (5'-CCT CCA AGC AGC GGA TAG TTC AGG-3') primers. Amplicons were chromatographed on a 2% agarose gel with a 100 bp size standard, stained with ethidium bromide and viewed with UV fluorescence. For both the microscopy and PCR assays, we calculated sensitivity (the number of positives by a method divided by the number of positives by either method) and discrepancy (one method negative and the other positive) at the locality, life stage and habitat level. The degree of agreement between PCR and microscopy was determined by calculating Kappa (κ) values with 95% confidence intervals. Results We observed a high prevalence of T. cruzi infection in T. infestans (81.16% by PCR and 56.52% by microscopy) and found that PCR is significantly more sensitive than microscopic observation. The overall degree of agreement between the two methods was moderate (Kappa = 0.43 ± 0.07). The level of infection differed significantly among communities; however, prevalence was similar among habitats and life stages. Conclusion PCR was significantly more sensitive than microscopy in all habitats, developmental stages and localities in Chuquisaca, Bolivia. Overall we observed a high prevalence of T. cruzi infection in T. infestans in this area of Bolivia; however, microscopy underestimated infection at all levels examined.
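The two agreement measures described above (sensitivity relative to either-method positives, and Cohen's Kappa for two binary raters) can be sketched as follows; the function names are ours and the example inputs are illustrative, not the study's data.

```python
def sensitivity(pos_by_method, pos_by_either):
    """Positives detected by one method / positives detected by either method."""
    return pos_by_method / pos_by_either

def cohens_kappa(a, b):
    """Cohen's kappa for paired 0/1 calls from two methods.
    kappa = (observed agreement - chance agreement) / (1 - chance agreement).
    Undefined (division by zero) if chance agreement is exactly 1."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n                  # positive rates per method
    p_e = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)          # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Illustrative paired calls (1 = T. cruzi detected) for PCR vs microscopy:
pcr = [1, 1, 1, 0, 1, 0, 1, 1]
micro = [1, 0, 1, 0, 1, 0, 0, 1]
print(cohens_kappa(pcr, micro))
```

Kappa is 1.0 for perfect agreement and 0.0 when agreement is no better than chance, matching the "moderate" interpretation of the study's 0.43.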
Directory of Open Access Journals (Sweden)
Cigudosa Juan C
2011-05-01
Abstract Background Recent observations point towards the existence of a large number of neighborhoods composed of functionally-related gene modules that lie together in the genome. This local component in the distribution of functionality across chromosomes probably affects the chromosomal architecture itself by limiting the ways in which genes can be arranged and distributed across the genome. As a direct consequence, it is presumable that diseases such as cancer, harboring DNA copy number alterations (CNAs), will have a symptomatology strongly dependent on modules of functionally-related genes rather than on a unique "important" gene. Methods We carried out a systematic analysis of more than 140,000 observations of CNAs in cancers and searched for enrichment of gene functional modules associated with high frequencies of losses or gains. Results The analysis of CNAs in cancers clearly demonstrates the existence of a significant pattern of loss of gene modules functionally related to cancer initiation and progression, along with the amplification of modules of genes related to unspecific defense against xenobiotics (probably chemotherapeutic agents). By extending this analysis to an Array-CGH dataset (glioblastomas from The Cancer Genome Atlas), we demonstrate the validity of this approach for investigating the functional impact of CNAs. Conclusions The presented results have promising clinical and therapeutic implications. Our findings also point directly to the necessity of adopting a function-centric, rather than gene-centric, view in the understanding of phenotypes or diseases harboring CNAs.
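The module-enrichment question described above — are genes hit by CNAs over-represented in a given functional module? — is commonly answered with a hypergeometric (one-sided Fisher) test. A minimal sketch under that assumption; the counts are hypothetical and this is not the authors' published pipeline.

```python
from scipy.stats import hypergeom

def module_enrichment_p(genome_genes, module_genes, altered_genes, altered_in_module):
    """Upper-tail hypergeometric P(X >= altered_in_module): the probability that
    a module of `module_genes` genes contains at least `altered_in_module` hits
    when `altered_genes` of the `genome_genes` genes fall in CNA regions."""
    return hypergeom.sf(altered_in_module - 1, genome_genes, module_genes, altered_genes)

# Hypothetical numbers: a 20,000-gene genome, a 100-gene module,
# 1,000 genes in lost regions, 20 of which land in the module
# (expected by chance: 1000 * 100 / 20000 = 5, so this is enriched).
p = module_enrichment_p(20000, 100, 1000, 20)
print(p)
```

In practice such P values would be corrected for testing many modules at once (e.g. Benjamini-Hochberg).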
McKenna, Duane D; Scully, Erin D; Pauchet, Yannick; Hoover, Kelli; Kirsch, Roy; Geib, Scott M; Mitchell, Robert F; Waterhouse, Robert M; Ahn, Seung-Joon; Arsala, Deanna; Benoit, Joshua B; Blackmon, Heath; Bledsoe, Tiffany; Bowsher, Julia H; Busch, André; Calla, Bernarda; Chao, Hsu; Childers, Anna K; Childers, Christopher; Clarke, Dave J; Cohen, Lorna; Demuth, Jeffery P; Dinh, Huyen; Doddapaneni, HarshaVardhan; Dolan, Amanda; Duan, Jian J; Dugan, Shannon; Friedrich, Markus; Glastad, Karl M; Goodisman, Michael A D; Haddad, Stephanie; Han, Yi; Hughes, Daniel S T; Ioannidis, Panagiotis; Johnston, J Spencer; Jones, Jeffery W; Kuhn, Leslie A; Lance, David R; Lee, Chien-Yueh; Lee, Sandra L; Lin, Han; Lynch, Jeremy A; Moczek, Armin P; Murali, Shwetha C; Muzny, Donna M; Nelson, David R; Palli, Subba R; Panfilio, Kristen A; Pers, Dan; Poelchau, Monica F; Quan, Honghu; Qu, Jiaxin; Ray, Ann M; Rinehart, Joseph P; Robertson, Hugh M; Roehrdanz, Richard; Rosendale, Andrew J; Shin, Seunggwan; Silva, Christian; Torson, Alex S; Jentzsch, Iris M Vargas; Werren, John H; Worley, Kim C; Yocum, George; Zdobnov, Evgeny M; Gibbs, Richard A; Richards, Stephen
2016-11-11
Relatively little is known about the genomic basis and evolution of wood-feeding in beetles. We undertook genome sequencing and annotation, gene expression assays, studies of plant cell wall degrading enzymes, and other functional and comparative studies of the Asian longhorned beetle, Anoplophora glabripennis, a globally significant invasive species capable of inflicting severe feeding damage on many important tree species. Complementary studies of genes encoding enzymes involved in digestion of woody plant tissues or detoxification of plant allelochemicals were undertaken with the genomes of 14 additional insects, including the newly sequenced emerald ash borer and bull-headed dung beetle. The Asian longhorned beetle genome encodes a uniquely diverse arsenal of enzymes that can degrade the main polysaccharide networks in plant cell walls, detoxify plant allelochemicals, and otherwise facilitate feeding on woody plants. It has the metabolic plasticity needed to feed on diverse plant species, contributing to its highly invasive nature. Large expansions of chemosensory genes involved in the reception of pheromones and plant kairomones are consistent with the complexity of chemical cues it uses to find host plants and mates. Amplification and functional divergence of genes associated with specialized feeding on plants, including genes originally obtained via horizontal gene transfer from fungi and bacteria, contributed to the addition, expansion, and enhancement of the metabolic repertoire of the Asian longhorned beetle, certain other phytophagous beetles, and to a lesser degree, other phytophagous insects. Our results thus begin to establish a genomic basis for the evolutionary success of beetles on plants.
Directory of Open Access Journals (Sweden)
Bipin Balan
2017-12-01
RNA sequencing (RNA-Seq) technology has enabled researchers to investigate global changes in host gene expression during plant-virus interactions, which has helped to clarify the molecular basis of virus diseases. Re-analysing RNA-Seq studies with the most up-to-date genome version and the best available analysis pipeline yields more accurate results. In this study, we re-analysed Apple stem grooving virus (ASGV)-infected apple shoots in comparison with virus-free in vitro shoots [1], using the most recent Malus x domestica genome downloaded from the Phytozome database. The re-analysis was performed with the HISAT2 software, and the Cufflinks program was used to mine the differentially expressed genes. We found that ~20% more reads were mapped to the latest genome using the updated pipeline, demonstrating the value of such re-analysis. The updated results were compared with the previous ones. In addition, we performed protein-protein interaction (PPI) analysis to investigate the proteins affected by ASGV infection.
Zhang, Wei; Zhou, Xin; Wen, Chi-Kuang
2012-01-01
Overexpression of Arabidopsis Reversion-To-ethylene Sensitivity1 (RTE1) results in whole-plant ethylene insensitivity dependent on the ethylene receptor gene Ethylene Response1 (ETR1). However, overexpression of the tomato RTE1 homologue Green Ripe (GR) delays fruit ripening but does not confer whole-plant ethylene insensitivity. It was decided to investigate whether aspects of ethylene-induced growth and development of the monocotyledonous model plant rice could be modulated by rice RTE1 homologues (OsRTH genes). Results from a cross-species complementation test in Arabidopsis showed that OsRTH1 overexpression complemented the rte1-2 loss-of-function mutation and conferred whole-plant ethylene insensitivity in an ETR1-dependent manner. In contrast, OsRTH2 and OsRTH3 overexpression did not complement rte1-2 or confer ethylene insensitivity. In rice, OsRTH1 overexpression substantially prevented ethylene-induced alterations in growth and development, including leaf senescence, seedling leaf elongation and development, coleoptile elongation or curvature, and adventitious root development. Results of subcellular localizations of OsRTHs, each fused with the green fluorescent protein, in onion epidermal cells suggested that the three OsRTHs were predominantly localized to the Golgi. OsRTH1 may be an RTE1 orthologue of rice and modulate rice ethylene responses. The possible roles of auxins and gibberellins in the ethylene-induced alterations in growth were evaluated and the biological significance of ethylene in the early stage of rice seedling growth is discussed. PMID:22451723
Sison-Young, Rowena L C; Mitsa, Dimitra; Jenkins, Rosalind E; Mottram, David; Alexandre, Eliane; Richert, Lysiane; Aerts, Hélène; Weaver, Richard J; Jones, Robert P; Johann, Esther; Hewitt, Philip G; Ingelman-Sundberg, Magnus; Goldring, Christopher E P; Kitteringham, Neil R; Park, B Kevin
2015-10-01
In vitro preclinical models for the assessment of drug-induced liver injury (DILI) are usually based on cryopreserved primary human hepatocytes (cPHH) or human hepatic tumor-derived cell lines; however, it is unclear how well such cell models reflect the normal function of liver cells. The physiological, pharmacological, and toxicological phenotyping of available cell-based systems is necessary in order to decide the testing purpose for which they are fit. We have therefore undertaken a global proteomic analysis of 3 human-derived hepatic cell lines (HepG2, Upcyte, and HepaRG) in comparison with cPHH with a focus on drug metabolizing enzymes and transport proteins (DMETs), as well as Nrf2-regulated proteins. In total, 4946 proteins were identified, of which 2722 proteins were common across all cell models, including 128 DMETs. Approximately 90% reduction in expression of cytochromes P450 was observed in HepG2 and Upcyte cells, and approximately 60% in HepaRG cells relative to cPHH. Drug transporter expression was also lower compared with cPHH with the exception of MRP3 and P-gp (MDR1) which appeared to be significantly expressed in HepaRG cells. In contrast, a high proportion of Nrf2-regulated proteins were more highly expressed in the cell lines compared with cPHH. The proteomic database derived here will provide a rational basis for the context-specific selection of the most appropriate 'hepatocyte-like' cell for the evaluation of particular cellular functions associated with DILI and, at the same time, assist in the construction of a testing paradigm which takes into account the in vivo disposition of a new drug. © The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology.
De Vreese, L P; Gomiero, T; Uberti, M; De Bastiani, E; Weger, E; Mantesso, U; Marangoni, A
2015-04-01
(a) A psychometric validation of an Italian version of the Alzheimer's Functional Assessment Tool scale (AFAST-I), designed for informant-based assessment of the degree of impairment and of assistance required in seven basic daily activities in adult/elderly people with intellectual disabilities (ID) and (suspected) dementia; (b) a pilot analysis of its clinical significance with traditional statistical procedures and with an artificial neural network. AFAST-I was administered to the professional caregivers of 61 adults/seniors with ID with a mean age (± SD) of 53.4 (± 7.7) years (36% with Down syndrome). Internal consistency (Cronbach's α coefficient), inter/intra-rater reliabilities (intra-class coefficients, ICC) and concurrent, convergent and discriminant validity (Pearson's r coefficients) were computed. Clinical significance was probed by analysing the relationships among AFAST-I scores and the Sum of Cognitive Scores (SCS) and the Sum of Social Scores (SOS) of the Dementia Questionnaire for Persons with Intellectual Disabilities (DMR-I) after standardisation of their raw scores in equivalent scores (ES). An adaptive artificial system (AutoContractive Maps, AutoCM) was applied to all the variables recorded in the study sample, aimed at uncovering which variable occupies a central position and supports the entire network made up of the remaining variables interconnected among themselves with different weights. AFAST-I shows a high level of internal homogeneity with a Cronbach's α coefficient of 0.92. Inter-rater and intra-rater reliabilities were also excellent with ICC correlations of 0.96 and 0.93, respectively. The results of the analyses of the different AFAST-I validities all go in the expected direction: concurrent validity (r=-0.87 with ADL); convergent validity (r=0.63 with SCS; r=0.61 with SOS); discriminant validity (r=0.21 with the frequency of occurrence of dementia-related Behavioral Excesses of the Assessment for Adults with Developmental
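The internal-consistency statistic reported above for AFAST-I (Cronbach's α = 0.92) is computed from an item-score matrix via the item and total-score variances. A minimal sketch; the 5 x 7 rating matrix below is invented for illustration and is not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical AFAST-like ratings: 5 subjects x 7 daily-activity items (0-3 scale),
# with items that rise and fall together, so alpha should be high.
scores = [[0, 0, 1, 0, 1, 0, 0],
          [1, 1, 1, 2, 1, 1, 2],
          [2, 2, 3, 2, 2, 3, 2],
          [3, 3, 3, 3, 2, 3, 3],
          [1, 2, 2, 1, 1, 2, 1]]
alpha = cronbach_alpha(scores)
print(alpha)
```

Values above ~0.9, as reported for AFAST-I, indicate that the seven activity items measure a single underlying construct.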
Energy Technology Data Exchange (ETDEWEB)
Kyle, Jennifer E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Casey, Cameron P. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Stratton, Kelly G. [National Security Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zink, Erika M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Kim, Young-Mo [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Zheng, Xueyun [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Monroe, Matthew E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Weitz, Karl K. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Bloodsworth, Kent J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Orton, Daniel J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Ibrahim, Yehia M. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Moore, Ronald J. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Lee, Christine G. [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Research Service, Portland Veterans Affairs Medical Center, Portland OR USA; Pedersen, Catherine [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Orwoll, Eric [Department of Medicine, Bone and Mineral Unit, Oregon Health and Science University, Portland OR USA; Smith, Richard D. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Burnum-Johnson, Kristin E. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA; Baker, Erin S. [Earth and Biological Sciences Directorate, Pacific Northwest National Laboratory, Richland WA USA
2017-02-05
The use of dried blood spots (DBS) has many advantages over traditional plasma and serum samples, such as the smaller blood volume required, storage at room temperature, and the ability to sample in remote locations. However, understanding the robustness of different analytes in DBS samples is essential, especially in older samples collected for longitudinal studies. Here we analyzed DBS samples collected in 2000-2001 and stored at room temperature, and compared them to matched serum samples stored at -80°C, to determine if they could be effectively used as specific time points in a longitudinal study following metabolic disease. Four hundred small molecules were identified in both the serum and DBS samples using gas chromatography-mass spectrometry (GC-MS), liquid chromatography-MS (LC-MS) and LC-ion mobility spectrometry-MS (LC-IMS-MS). The identified polar metabolites overlapped well between the sample types, though only one statistically significant polar metabolite in a case-control study was conserved, indicating that degradation occurring in the DBS samples affects quantitation. Differences in the lipid identifications indicated that some oxidation occurs in the DBS samples. However, thirty-six statistically significant lipids correlated in both sample types, indicating that lipid quantitation was more stable across the sample types.
Directory of Open Access Journals (Sweden)
Irakli Dzneladze
Our previous studies demonstrated that INPP4B, a member of the PI3K/Akt signaling pathway, is overexpressed in a subset of AML patients and is associated with lower response to chemotherapy and shorter survival. INPP4B expression analysis in AML revealed a right-skewed frequency distribution with 25% of patients expressing significantly higher levels than the majority. The 75% low/25% high cut-off revealed the prognostic power of INPP4B expression status in AML, which would not have been apparent with a standard median cut-off approach. Our identification of a clinically relevant non-median cut-off for INPP4B indicated a need for a generalizable non-median dichotomization approach to optimally study clinically relevant genes. To address this need, we developed Subgroup Identifier (SubID), a tool which examines the relationship between a continuous variable (e.g. gene expression) and a test parameter (e.g. CoxPH or Fisher's exact P values). In our study, Fisher's exact SubID was used to reveal EVI1 as a transcriptional regulator of INPP4B in AML; a finding which was validated in vitro. Next, we used CoxPH SubID to conduct a pan-cancer analysis of INPP4B's prognostic significance. Our analysis revealed that INPP4B-low is associated with shorter survival in kidney clear cell, liver hepatocellular, and bladder urothelial carcinomas. Conversely, INPP4B-low was shown to be associated with increased survival in pancreatic adenocarcinoma in three independent datasets. Overall, our study describes the development and application of a novel subgroup identification tool used to identify prognostically significant rare subgroups based upon gene expression, and for investigating the association between a gene with skewed frequency distribution and potentially important upstream and downstream genes that relate to the index gene.
Dzneladze, Irakli; Woolley, John F; Rossell, Carla; Han, Youqi; Rashid, Ayesha; Jain, Michael; Reimand, Jüri; Minden, Mark D; Salmena, Leonardo
2018-01-01
Our previous studies demonstrated that INPP4B, a member of the PI3K/Akt signaling pathway, is overexpressed in a subset of AML patients and is associated with lower response to chemotherapy and shorter survival. INPP4B expression analysis in AML revealed a right-skewed frequency distribution with 25% of patients expressing significantly higher levels than the majority. The 75% low/25% high cut-off revealed the prognostic power of INPP4B expression status in AML, which would not have been apparent with a standard median cut-off approach. Our identification of a clinically relevant non-median cut-off for INPP4B indicated a need for a generalizable non-median dichotomization approach to optimally study clinically relevant genes. To address this need, we developed Subgroup Identifier (SubID), a tool which examines the relationship between a continuous variable (e.g. gene expression), and a test parameter (e.g. CoxPH or Fisher's exact P values). In our study, Fisher's exact SubID was used to reveal EVI1 as a transcriptional regulator of INPP4B in AML; a finding which was validated in vitro. Next, we used CoxPH SubID to conduct a pan-cancer analysis of INPP4B's prognostic significance. Our analysis revealed that INPP4B-low is associated with shorter survival in kidney clear cell, liver hepatocellular, and bladder urothelial carcinomas. Conversely, INPP4B-low was shown to be associated with increased survival in pancreatic adenocarcinoma in three independent datasets. Overall, our study describes the development and application of a novel subgroup identification tool used to identify prognostically significant rare subgroups based upon gene expression, and for investigating the association between a gene with skewed frequency distribution and potentially important upstream and downstream genes that relate to the index gene.
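The core SubID idea described above — scanning candidate non-median cut-offs and scoring each high/low split with a test statistic — can be sketched with a Fisher's exact score, as in the EVI1 analysis. This is our illustrative reconstruction of the idea, not the published tool; the function name and the simulated data are hypothetical.

```python
import numpy as np
from scipy.stats import fisher_exact

def subid_fisher_scan(expression, outcome, percentiles=range(10, 91, 5)):
    """Scan dichotomization cut-offs (percentiles of `expression`) and return
    (best_percentile, best_p): the split most associated with a binary
    `outcome` by Fisher's exact test. Sketch of the SubID concept only."""
    expression = np.asarray(expression, dtype=float)
    outcome = np.asarray(outcome, dtype=int)
    best_pct, best_p = None, 1.0
    for pct in percentiles:
        high = expression > np.percentile(expression, pct)
        table = [[np.sum(high & (outcome == 1)), np.sum(high & (outcome == 0))],
                 [np.sum(~high & (outcome == 1)), np.sum(~high & (outcome == 0))]]
        p = fisher_exact(table)[1]
        if p < best_p:
            best_pct, best_p = pct, p
    return best_pct, best_p

# Simulated data where only the top ~25% of expressers carry the outcome,
# mimicking a right-skewed, non-median-split gene like INPP4B.
rng = np.random.default_rng(0)
expr = rng.normal(size=200)
out = (expr > np.percentile(expr, 75)).astype(int)
flip = rng.choice(200, size=20, replace=False)  # add some label noise
out[flip] = 1 - out[flip]
best_pct, best_p = subid_fisher_scan(expr, out)
print(best_pct, best_p)
```

In this simulation the scan recovers a cut-off near the true 75th percentile rather than the median, which is the motivation the abstract gives for a non-median approach. Multiple-testing correction over the scanned cut-offs would be needed in a real analysis.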
Directory of Open Access Journals (Sweden)
Madelaine Sarria Castro
2004-05-01
Full Text Available OBJECTIVES: To characterize the use of conventional tests of statistical significance and the current trends in their use in three biomedical journals from the Spanish-speaking world. METHODS: All descriptive or explanatory original articles published in the five-year period 1996-2000 were examined in three journals: Revista Cubana de Medicina General Integral, Revista Panamericana de Salud Pública/Pan American Journal of Public Health, and Medicina Clínica. RESULTS: In the three journals examined, several objectionable features were detected in the use of hypothesis tests based on "P values," together with a scant presence of the newer approaches proposed in their place: confidence intervals (CIs) and Bayesian inference. The principal findings were: minimal presence of CIs, whether as a complement to significance tests or as the sole statistical resource; mention of sample size as a possible explanation of the results; predominance of rigid alpha values; lack of uniformity in the presentation of results; and improper allusion in the conclusions of the research to the results of hypothesis tests. CONCLUSIONS: The results reflect a lack of compliance by authors and editors with accepted standards for the use of tests of statistical significance, and suggest that the routine use of these tests continues to occupy an important place in the biomedical literature of the Spanish-speaking world.
International Nuclear Information System (INIS)
Lim, Gyeong Hui
2008-03-01
This book consists of 15 chapters covering the basic concepts and meaning of statistical thermodynamics, Maxwell-Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, the statistical dynamics of independent-particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and non-ideal lattice models, the imperfect-gas theory of liquids, the theory of solutions, the statistical thermodynamics of interfaces, the statistical thermodynamics of macromolecular systems, and quantum statistics.
Okuda, Kosuke; Takao, Keizo; Watanabe, Aya; Miyakawa, Tsuyoshi; Mizuguchi, Masashi; Tanaka, Teruyuki
2018-01-01
Mutations in the Cyclin-dependent kinase-like 5 (CDKL5) gene cause severe neurodevelopmental disorders. We recently generated Cdkl5 knockout (KO) mice by targeting exon 2 on the C57BL/6N background, and demonstrated postsynaptic overaccumulation of GluN2B-containing N-methyl-D-aspartate (NMDA) receptors in the hippocampus. In the current study, we subjected the Cdkl5 KO mice to a battery of comprehensive behavioral tests, aiming to reveal the effects of loss of CDKL5 across motor, emotional, social, and cognitive/memory functions, and to identify its as-yet-undetermined roles. The neurological screen, rotarod, hot plate, prepulse inhibition, light/dark transition, open field, elevated plus maze, Porsolt forced swim, tail suspension, one-chamber and three-chamber social interaction, 24-h home cage monitoring, contextual and cued fear conditioning, Barnes maze, and T-maze tests were applied to adult Cdkl5 -/Y and +/Y mice. Cdkl5 -/Y mice showed a mild alteration in gait. Analyses of emotional behaviors revealed significantly enhanced anxiety-like behaviors in Cdkl5 -/Y mice. Depressive-like behaviors and social interaction of Cdkl5 -/Y mice were uniquely altered. Contextual and cued fear conditioning of Cdkl5 -/Y mice were comparable to those of control mice; however, Cdkl5 -/Y mice showed a significantly increased freezing time and a significantly decreased distance traveled during the pretone period in the altered context. Both acquisition and long-term retention of spatial reference memory were significantly impaired. Morphometric analysis of hippocampal CA1 pyramidal neurons revealed impaired dendritic arborization and immature spine development in Cdkl5 -/Y mice. These results indicate that CDKL5 plays significant roles in regulating emotional behaviors, especially anxiety- and fear-related responses, and in both acquisition and long-term retention of spatial reference memory, which suggests that focus and special attention should be paid to the
Statistical metamodeling for revealing synergistic antimicrobial interactions.
Directory of Open Access Journals (Sweden)
Hsiang Chia Chen
2010-11-01
Full Text Available Many bacterial pathogens are becoming drug resistant faster than we can develop new antimicrobials. To address this threat to public health, a metamodel antimicrobial cocktail optimization (MACO) scheme is demonstrated for rapid screening of potent antibiotic cocktails, using uropathogenic clinical isolates as model systems. With the MACO scheme, only 18 parallel trials were required to determine a potent antimicrobial cocktail out of hundreds of possible combinations. In particular, trimethoprim and gentamicin were identified to work synergistically in inhibiting bacterial growth. Sensitivity analysis indicated that gentamicin functions as a synergist for trimethoprim, reducing its minimum inhibitory concentration by 40-fold. A validation study also confirmed that the trimethoprim-gentamicin synergistic cocktail effectively inhibited the growth of multiple strains of uropathogenic clinical isolates. With its effectiveness and simplicity, the MACO scheme has the potential to serve as a generic platform for identifying synergistic antimicrobial cocktails for the management of bacterial infection.
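The MACO metamodel itself is not given in this record; as a minimal, conventional way to score the kind of pairwise synergy reported here, the fractional inhibitory concentration index (FICI) from checkerboard assays can be computed directly. The MIC values below are hypothetical, chosen only to mirror the reported 40-fold MIC reduction:

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index (checkerboard assay):
    values <= 0.5 are conventionally read as synergy, > 4 as antagonism."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

# hypothetical MICs mirroring a 40-fold reduction for drug A (8 -> 0.2),
# with drug B present at a tenth of its own MIC (4 -> 0.4)
score = fici(mic_a_alone=8.0, mic_b_alone=4.0,
             mic_a_combo=0.2, mic_b_combo=0.4)
print(score)  # 0.2/8 + 0.4/4 = 0.125, i.e. synergy
```

MACO's contribution is reaching such a combination in ~18 trials rather than exhaustively filling the checkerboard; the FICI is only the standard read-out for the result.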
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Zarrineh, Peyman; Sánchez-Rodríguez, Aminael; Hosseinkhan, Nazanin; Narimani, Zahra; Marchal, Kathleen; Masoudi-Nejad, Ali
2014-01-01
Availability of genome-wide gene expression datasets provides the opportunity to study gene expression across different organisms under a plethora of experimental conditions. In our previous work, we developed an algorithm called COMODO (COnserved MODules across Organisms) that identifies conserved expression modules between two species. In the present study, we expanded COMODO to detect co-expression conservation across three organisms by adapting the statistics behind it. We applied COMODO to study expression conservation/divergence between Escherichia coli, Salmonella enterica, and Bacillus subtilis. We observed that some parts of the regulatory interaction networks were conserved between E. coli and S. enterica, especially in the regulons of local regulators. However, such conservation was not observed between the regulatory interaction networks of B. subtilis and the two other species. We found co-expression conservation for a number of genes involved in quorum sensing, but almost no conservation for genes involved in pathogenicity across E. coli and S. enterica, which could partially explain their different lifestyles. We concluded that, despite their different lifestyles, no significant rewiring has occurred at the level of local regulons, and notable conservation can be detected in signaling pathways and stress sensing in the phylogenetically close species S. enterica and E. coli. Moreover, conservation of local regulons seems to depend on the evolutionary time of divergence across species, disappearing at larger distances as shown by the comparison with B. subtilis. Global regulons follow a different trend and show major rewiring even at the limited evolutionary distance that separates E. coli and S. enterica. PMID:25101984
Energy Technology Data Exchange (ETDEWEB)
Maeda, Azusa [Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario (Canada); Chen, Yonghong; Bu, Jiachuan; Mujcic, Hilda [Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario (Canada); Wouters, Bradly G. [Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario (Canada); Department of Radiation Oncology, University of Toronto, Toronto, Ontario (Canada); DaCosta, Ralph S., E-mail: rdacosta@uhnres.utoronto.ca [Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario (Canada); Department of Medical Biophysics, University of Toronto, Toronto, Ontario (Canada); Techna Institute, University Health Network, Toronto, Ontario (Canada)
2017-01-01
Purpose: To investigate the effect of high-dose irradiation on pancreatic tumor vasculature and microenvironment using in vivo imaging techniques. Methods and Materials: A BxPC3 pancreatic tumor xenograft was established in a dorsal skinfold window chamber model and a subcutaneous hind leg model. Tumors were irradiated with a single dose of 4, 12, or 24 Gy. The dorsal skinfold window chamber model was used to assess tumor response, vascular function and permeability, platelet and leukocyte adhesion to the vascular endothelium, and tumor hypoxia for up to 14 days after 24-Gy irradiation. The hind leg model was used to monitor tumor size, hypoxia, and vascularity for up to 65 days after 24-Gy irradiation. Tumors were assessed histologically to validate in vivo observations. Results: In vivo fluorescence imaging revealed temporary vascular dysfunction in tumors irradiated with a single dose of 4 to 24 Gy, but most significantly with a single dose of 24 Gy. Vascular functional recovery was observed by 14 days after irradiation in a dose-dependent manner. Furthermore, irradiation with 24 Gy caused platelet and leukocyte adhesion to the vascular endothelium within hours to days after irradiation. Vascular permeability was significantly higher in irradiated tumors compared with nonirradiated controls 14 days after irradiation. This observation corresponded with increased expression of hypoxia-inducible factor-1α in irradiated tumors. In the hind leg model, irradiation with a single dose of 24 Gy led to tumor growth delay, followed by tumor regrowth. Conclusions: Irradiation of the BxPC3 tumors with a single dose of 24 Gy caused transient vascular dysfunction and increased expression of hypoxia-inducible factor-1α. Such biological changes may impact tumor response to high single-dose and hypofractionated irradiation, and further investigations are needed to better understand the clinical outcomes of stereotactic body radiation therapy.
Sadovskii, Michael V
2012-01-01
This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.
Goodman, Joseph W
2015-01-01
This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i
Energy Technology Data Exchange (ETDEWEB)
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
2017-05-15
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
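A sketch of the object described above: a Poisson process with harmonic intensity lambda(x) = c/x. The full process on the positive half-line has infinite total intensity, so this sketch samples a finite window [a, b], where the expected count is c*ln(b/a) and, conditioned on the count, points are log-uniform. The function name and parameter values are illustrative assumptions:

```python
import numpy as np

def harmonic_poisson(c, a, b, rng):
    """Sample a Poisson process on [a, b] with intensity c / x.
    Mean count: c * ln(b / a).  Conditioned on the count, points have
    density proportional to 1/x, i.e. they are log-uniform on [a, b]."""
    n = rng.poisson(c * np.log(b / a))
    return a * (b / a) ** rng.random(n)   # inverse-CDF sampling of the 1/x density

rng = np.random.default_rng(1)
pts = harmonic_poisson(c=100.0, a=1.0, b=1000.0, rng=rng)
# scale invariance: every decade [s, 10s) carries the same mean count, c*ln(10)
d_low = int(np.sum(pts < 10.0))
d_high = int(np.sum(pts >= 100.0))
print(len(pts), d_low, d_high)
```

The equal mean count per decade is the scale invariance that links harmonic statistics to Benford's law and 1/f noise, as listed among the connections in the abstract.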
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
Glaz, Joseph
2009-01-01
Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.
Lyons, L.
2016-01-01
Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.
Nick, Todd G
2007-01-01
Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
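As a small worked illustration of the chapter's themes (measures of location and spread, and the use of transformations), assuming a synthetic right-skewed sample:

```python
import numpy as np

# A right-skewed sample: descriptive statistics before and after a log
# transformation, illustrating location (mean, median) and spread (SD, IQR).
rng = np.random.default_rng(4)
x = rng.lognormal(mean=1.0, sigma=0.8, size=500)

for label, v in [("raw", x), ("log-transformed", np.log(x))]:
    q1, q3 = np.percentile(v, [25, 75])
    print(f"{label}: mean={v.mean():.2f} median={np.median(v):.2f} "
          f"sd={v.std(ddof=1):.2f} iqr={q3 - q1:.2f}")
```

On the raw scale the mean sits well above the median (skew); after the log transformation the two nearly coincide, which is why transformations appear alongside location and spread in descriptive summaries.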
Blakemore, J S
1962-01-01
Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co
Wannier, Gregory Hugh
1966-01-01
Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Introductory statistical inference
Mukhopadhyay, Nitis
2014-01-01
This gracefully organized text reveals the rigorous theory of probability and statistical inference in the style of a tutorial, using worked examples, exercises, figures, tables, and computer simulations to develop and illustrate concepts. Drills and boxed summaries emphasize and reinforce important ideas and special techniques.Beginning with a review of the basic concepts and methods in probability theory, moments, and moment generating functions, the author moves to more intricate topics. Introductory Statistical Inference studies multivariate random variables, exponential families of dist
Tellinghuisen, Joel
2008-01-01
The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions-Gaussian, chi-square, and t-is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to nonnormal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
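Two of the points above, linear least squares in matrix notation and the interpretation of the resulting variance-covariance matrix, can be sketched compactly; the helper name and the simulated straight-line data are assumptions of this example, not taken from the text:

```python
import numpy as np

def linear_lsq(X, y, sigma):
    """Weighted linear least squares in matrix form: returns the best-fit
    parameters and their variance-covariance matrix (X'WX)^-1."""
    W = np.diag(1.0 / sigma**2)
    cov = np.linalg.inv(X.T @ W @ X)
    beta = cov @ X.T @ W @ y
    return beta, cov

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 50)
X = np.column_stack([np.ones_like(x), x])    # model: y = b0 + b1*x
sigma = np.full_like(x, 0.5)                 # known, equal measurement errors

# Monte Carlo check: does the empirical parameter scatter match (X'WX)^-1?
fits = []
for _ in range(2000):
    y = 1.0 + 2.0 * x + rng.normal(0, 0.5, size=x.size)
    beta, cov = linear_lsq(X, y, sigma)
    fits.append(beta)
fits = np.array(fits)
print(np.std(fits[:, 1]), np.sqrt(cov[1, 1]))   # the two should agree closely
```

The same variance-covariance matrix is what the abstract recommends using for experiment design in the isothermal titration calorimetry example.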
Energy Technology Data Exchange (ETDEWEB)
Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-08
In large datasets, it is time-consuming or even impossible to pick out interesting images by hand. Our proposed solution is to find statistics that quantify the information in each image and to use those statistics to identify and pick out images of interest.
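The report excerpt does not specify which statistics were used; as one hypothetical example of a cheap per-image score, the Shannon entropy of the intensity histogram (assuming intensities in [0, 1]) separates near-constant images from richly varying ones:

```python
import numpy as np

def histogram_entropy(img, bins=64):
    """Shannon entropy (bits) of the intensity histogram, assuming
    intensities in [0, 1]: near-constant images score low, richly
    varying images score close to log2(bins)."""
    counts, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
flat = np.full((64, 64), 0.5) + rng.normal(0, 1e-3, (64, 64))  # nearly constant
busy = rng.random((64, 64))                                    # high-variation
print(histogram_entropy(flat), histogram_entropy(busy))
```

Ranking a large collection by such a score, and triaging the extremes, is one simple way to realize the screening idea described above.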
Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...
U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...
Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.
Serdobolskii, Vadim Ivanovich
2007-01-01
This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way to the solution of central problems of multivariate statistics which up until now have not been solved. Traditional statistical methods based on the idea of an infinite sample often break down in the solution of real problems, and, depending on the data, can be inefficient, unstable and even not applicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...
DEFF Research Database (Denmark)
Tryggestad, Kjell
2004-01-01
The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...
MacKenzie, Dana
2004-01-01
The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue with the expansion of knowledge about the cosmos, scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, the physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and frequentist approaches. (Edited abstract).
Directory of Open Access Journals (Sweden)
Sreeram V Ramagopalan
2015-04-01
Full Text Available Background: We and others have shown that a significant proportion of interventional trials registered on ClinicalTrials.gov have their primary outcomes altered after the listed study start and completion dates. The objectives of this study were to investigate whether changes made to primary outcomes are associated with the likelihood of reporting a statistically significant primary outcome on ClinicalTrials.gov. Methods: A cross-sectional analysis of all interventional clinical trials registered on ClinicalTrials.gov as of 20 November 2014 was performed. The main outcome was any change made to the initially listed primary outcome and the time of the change in relation to the trial start and end date. Findings: 13,238 completed interventional trials were registered with ClinicalTrials.gov that also had study results posted on the website. 2555 (19.3%) had one or more statistically significant primary outcomes. Statistical analysis showed that registration year, funding source and primary outcome change after trial completion were associated with reporting a statistically significant primary outcome. Conclusions: Funding source and primary outcome change after trial completion are associated with a statistically significant primary outcome report on ClinicalTrials.gov.
Mauro, John
2013-01-01
Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual reported examples of misused statistical data reported by mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom g
Goodman, J. W.
This book is based on the thesis that some training in the area of statistical optics should be included as a standard part of any advanced optics curriculum. Random variables are discussed, taking into account definitions of probability and random variables, distribution functions and density functions, an extension to two or more random variables, statistical averages, transformations of random variables, sums of real random variables, Gaussian random variables, complex-valued random variables, and random phasor sums. Other subjects examined are related to random processes, some first-order properties of light waves, the coherence of optical waves, some problems involving high-order coherence, effects of partial coherence on imaging systems, imaging in the presence of randomly inhomogeneous media, and fundamental limits in photoelectric detection of light. Attention is given to deterministic versus statistical phenomena and models, the Fourier transform, and the fourth-order moment of the spectrum of a detected speckle image.
Schwabl, Franz
2006-01-01
The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...
Jana, Madhusudan
2015-01-01
This book on statistical mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of the universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is gradually developed in full. All three types of statistical distribution functions are derived separately, with their ranges of application and limitations. The non-interacting ideal Bose gas and Fermi gas are discussed thoroughly. Properties of liquid He-II and the corresponding models are described. White dwarfs and condensed matter physics, transport phenomena - thermal and electrical conductivity, the Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain phase transition. The book ends with a detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...
Guénault, Tony
2007-01-01
In this revised and enlarged second edition of an established text Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...
Mandl, Franz
1988-01-01
The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition E. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Levine-Wissing, Robin
2012-01-01
All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep
Davidson, Norman
2003-01-01
Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody
Indian Academy of Sciences (India)
inference and finite population sampling. Sudhakar Kunte. Elements of statistical computing are discussed in this series. ... which captain gets an option to decide whether to field first or bat first ... may of course not be fair, in the sense that the team which wins ... describe two methods of drawing a random number between 0.
Schrödinger, Erwin
1952-01-01
Nobel Laureate's brilliant attempt to develop a simple, unified standard method of dealing with all cases of statistical thermodynamics - classical, quantum, Bose-Einstein, Fermi-Dirac, and more. The work also includes discussions of the Nernst theorem, Planck's oscillator, fluctuations, the n-particle problem, the problem of radiation, and much more.
Statistics: a Bayesian perspective
National Research Council Canada - National Science Library
Berry, Donald A
1996-01-01
...: it is the only introductory textbook based on Bayesian ideas, it combines concepts and methods, it presents statistics as a means of integrating data into the significant process, it develops ideas...
International Nuclear Information System (INIS)
Anon.
1994-01-01
For the years 1992 and 1993, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period. The tables and figures shown in this publication are: Changes in the volume of GNP and energy consumption; Coal consumption; Natural gas consumption; Peat consumption; Domestic oil deliveries; Import prices of oil; Price development of principal oil products; Fuel prices for power production; Total energy consumption by source; Electricity supply; Energy imports by country of origin in 1993; Energy exports by recipient country in 1993; Consumer prices of liquid fuels; Consumer prices of hard coal and natural gas, prices of indigenous fuels; Average electricity price by type of consumer; Price of district heating by type of consumer and Excise taxes and turnover taxes included in consumer prices of some energy sources
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I Richard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
Pivato, Marcus
2013-01-01
We show that, in a sufficiently large population satisfying certain statistical regularities, it is often possible to accurately estimate the utilitarian social welfare function, even if we only have very noisy data about individual utility functions and interpersonal utility comparisons. In particular, we show that it is often possible to identify an optimal or close-to-optimal utilitarian social choice using voting rules such as the Borda rule, approval voting, relative utilitarianism, or a...
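The kind of result described, that a simple voting rule applied to noisy individual utility data often recovers the utilitarian optimum in a large population, can be sketched in a few lines. This is an illustration under assumptions of my own (the utility values, noise model, and population size are invented), not the paper's model:

```python
import random

random.seed(1)

alternatives = ["A", "B", "C", "D"]
true_util = {"A": 0.9, "B": 0.7, "C": 0.5, "D": 0.2}  # invented values
n_voters = 1001

borda = {a: 0 for a in alternatives}
for _ in range(n_voters):
    # Each voter reports a ranking based on a noisy view of the utilities.
    noisy = {a: true_util[a] + random.gauss(0, 0.5) for a in alternatives}
    ranking = sorted(alternatives, key=noisy.get)  # worst first
    for points, a in enumerate(ranking):  # Borda: 0 for worst, 3 for best
        borda[a] += points

winner = max(borda, key=borda.get)
print(winner, borda)
```

Although each individual ranking is very noisy, the per-voter errors average out, and with a thousand voters the Borda winner almost surely coincides with the alternative of highest true utility.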
Natrella, Mary Gibbons
1963-01-01
Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations
Directory of Open Access Journals (Sweden)
Andrew J Parker
2014-04-01
Full Text Available The power and significance of artwork in shaping human cognition is self-evident. The starting point for our empirical investigations is the view that the task of neuroscience is to integrate itself with other forms of knowledge, rather than to seek to supplant them. In our recent work, we examined a particular aspect of the appreciation of artwork using present-day functional magnetic resonance imaging (fMRI). Our results emphasised the continuity between viewing artwork and other human cognitive activities. We also showed that appreciation of a particular aspect of artwork, namely authenticity, depends upon the co-ordinated activity between the brain regions involved in multiple decision making and those responsible for processing visual information. The findings about brain function probably have no specific consequences for understanding how people respond to the art of Rembrandt in comparison with their response to other artworks. However, the use of images of Rembrandt's portraits, his most intimate and personal works, clearly had a significant impact upon our viewers, even though they were spatially confined to the interior of an MRI scanner at the time of viewing. Neuroscientific studies of humans viewing artwork have the capacity to reveal the diversity of human cognitive responses that may be induced by external advice or context as people view artwork in a variety of frameworks and settings.
Intervention for Maltreating Fathers: Statistically and Clinically Significant Change
Scott, Katreena L.; Lishak, Vicky
2012-01-01
Objective: Fathers are seldom the focus of efforts to address child maltreatment and little is currently known about the effectiveness of intervention for this population. To address this gap, we examined the efficacy of a community-based group treatment program for fathers who had abused or neglected their children or exposed their children to…
The questioned p value: clinical, practical and statistical significance
Directory of Open Access Journals (Sweden)
Rosa Jiménez-Paneque
2016-09-01
Full Text Available The use of the p value and statistical significance has been in question from the early 1980s to the present day. Much has been discussed about it in the field of statistics and its applications, particularly in epidemiology and public health. The p value and its equivalent, statistical significance, are moreover concepts that are difficult to grasp for the many health professionals involved in some way in research applied to their areas of work. Nevertheless, their meaning should be clear in intuitive terms, even though they rest on theoretical concepts from the field of mathematical statistics. This article attempts to present the p value as a concept that applies to everyday life and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of intrinsic complexity. The reasons behind the criticisms of the p value and of its use in isolation are also explained intuitively, chiefly the need to distinguish statistical significance from clinical significance, and some of the remedies proposed for these problems are mentioned. The article closes by alluding to the current tendency to vindicate the p value's use, appealing to the convenience of using it in certain situations, and to the recent statement of the American Statistical Association on the matter.
Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance
Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik
2016-01-01
Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger s...
DEFF Research Database (Denmark)
Bull, Tim J.; Munshil, Tulika; Melvang, Heidi Mikkelsen
2017-01-01
The quantitative detection of viable pathogen load is an important tool in determining the degree of infection in animals and contamination of foodstuffs. Current conventional culture methods are limited in their ability to determine these levels in Mycobacterium avium subspecies paratuberculosis (MAP) due to slow growth, clumping and low recoverability issues. The principal goal of this study was to evaluate a novel culturing process (TiKa) with a unique ability to stimulate MAP growth from low sample loads and dilutions. We demonstrate it was able to stimulate a mean 29-fold increase ... TiKa culture equates well with qPCR and provides important evidence that accuracy in estimating viable MAP load using DNA tests alone may vary significantly between samples of mucosal and lymphatic origin.
International Nuclear Information System (INIS)
Anon.
1989-01-01
World data from the United Nation's latest Energy Statistics Yearbook, first published in our last issue, are completed here. The 1984-86 data were revised and 1987 data added for world commercial energy production and consumption, world natural gas plant liquids production, world LP-gas production, imports, exports, and consumption, world residual fuel oil production, imports, exports, and consumption, world lignite production, imports, exports, and consumption, world peat production and consumption, world electricity production, imports, exports, and consumption (Table 80), and world nuclear electric power production
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
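The diagnostic-accuracy measures the review covers (sensitivity, specificity, accuracy, likelihood ratios) follow directly from a 2x2 confusion matrix. A minimal sketch, with invented counts chosen purely for illustration:

```python
# Hypothetical confusion-matrix counts for a diagnostic test
# (invented numbers, for illustration only).
tp, fp, fn, tn = 90, 15, 10, 185

sensitivity = tp / (tp + fn)          # true positive rate
specificity = tn / (tn + fp)          # true negative rate
accuracy = (tp + tn) / (tp + fp + fn + tn)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

print(round(sensitivity, 3), round(specificity, 3),
      round(accuracy, 3), round(lr_pos, 2), round(lr_neg, 3))
```

With these counts the test detects 90% of diseased cases (sensitivity 0.9) and correctly rules out 92.5% of healthy ones (specificity 0.925); a positive result multiplies the pre-test odds of disease by 12.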
Ludwig, Kerstin U.; Ahmed, Syeda Tasnim; Böhmer, Anne C.; Sangani, Nasim Bahram; Varghese, Sheryil; Klamt, Johanna; Schuenke, Hannah; Gültepe, Pinar; Hofmann, Andrea; Rubini, Michele; Aldhorae, Khalid Ahmed; Steegers-Theunissen, Regine P.; Rojas-Martinez, Augusto; Reiter, Rudolf; Borck, Guntram; Knapp, Michael; Nakatomi, Mitsushiro; Graf, Daniel; Mangold, Elisabeth; Peters, Heiko
2016-01-01
Nonsyndromic orofacial clefts are common birth defects with multifactorial etiology. The most common type is cleft lip, which occurs with or without cleft palate (nsCLP and nsCLO, respectively). Although genetic components play an important role in nsCLP, the genetic factors that predispose to palate involvement are largely unknown. In this study, we carried out a meta-analysis on genetic and clinical data from three large cohorts and identified strong association between a region on chromosome 15q13 and nsCLP (P = 8.13×10−14 for rs1258763; relative risk (RR): 1.46, 95% confidence interval (CI): 1.32–1.61) but not nsCLO (P = 0.27; RR: 1.09 (0.94–1.27)). The 5 kb region of strongest association maps downstream of Gremlin-1 (GREM1), which encodes a secreted antagonist of the BMP4 pathway. We show that during mouse embryogenesis, Grem1 is expressed in the developing lip and soft palate but not in the hard palate. This is consistent with genotype-phenotype correlations between rs1258763 and a specific nsCLP subphenotype, since a more than two-fold increase in risk was observed in patients displaying clefts of both the lip and soft palate but who had an intact hard palate (RR: 3.76, CI: 1.47–9.61, Pdiff < 0.05). Although lip or palate defects were not detected in Grem1-deficient mice, wild type embryonic palatal shelves developed divergent shapes when cultured in the presence of ectopic Grem1 protein (P = 0.0014). The present study identified a non-coding region at 15q13 as the second genome-wide significant locus specific for nsCLP, after 13q31. Moreover, our data suggest that the closely located GREM1 gene contributes to a rare clinical nsCLP entity. This entity specifically involves abnormalities of the lip and soft palate, which develop at different time-points and in separate anatomical regions. PMID:26968009
Santilli, S; Kast, D R; Grozdev, I; Cao, L; Feig, R L; Golden, J B; Debanne, S M; Gilkeson, R C; Orringer, C E; McCormick, T S; Ward, N L; Cooper, K D; Korman, N J
2016-07-22
Psoriasis is a chronic inflammatory disease of the skin and joints that may also have systemic inflammatory effects, including the development of cardiovascular disease (CVD). Multiple epidemiologic studies have demonstrated increased rates of CVD in psoriasis patients, although a causal link has not been established. A growing body of evidence suggests that sub-clinical systemic inflammation may develop in psoriasis patients, even from a young age. We aimed to evaluate the prevalence of atherosclerosis and identify specific clinical risk factors associated with early vascular inflammation. We conducted a cross-sectional study of a tertiary care cohort of psoriasis patients using coronary artery calcium (CAC) score and carotid intima-media thickness (CIMT) to detect atherosclerosis, along with high sensitivity C-reactive protein (hsCRP) to measure inflammation. Psoriasis patients and controls were recruited from our tertiary care dermatology clinic. Presence of atherosclerosis was defined using validated numeric values within CAC and CIMT imaging. Descriptive data comparing groups was analyzed using Welch's t test and Pearson Chi square tests. Logistic regression was used to analyze clinical factors associated with atherosclerosis, and linear regression to evaluate the relationship between psoriasis and hsCRP. 296 patients were enrolled, with 283 (207 psoriatic and 76 controls) having all data for the hsCRP and atherosclerosis analysis. Atherosclerosis was found in 67.6 % of psoriasis subjects versus 52.6 % of controls; Psoriasis patients were found to have a 2.67-fold higher odds of having atherosclerosis compared to controls [95 % CI (1.2, 5.92); p = 0.016], after adjusting for age, gender, race, BMI, smoking, HDL and hsCRP. In addition, a non-significant trend was found between HsCRP and psoriasis severity, as measured by PASI, PGA, or BSA, again after adjusting for confounders. A tertiary care cohort of psoriasis patients have a high prevalence of early
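The study's headline figure is an adjusted odds ratio with a 95% confidence interval. As a sketch of how an (unadjusted) odds ratio and its Woolf log-method interval are computed from a 2x2 table - with invented counts, not the study's data - one might write:

```python
import math

# Hypothetical 2x2 table (invented counts, for illustration only):
# atherosclerosis present / absent in each group.
a, b = 140, 67    # psoriasis patients: with / without atherosclerosis
c, d = 40, 36     # controls:           with / without atherosclerosis

odds_ratio = (a * d) / (b * c)
# Woolf (log) method for an approximate 95% confidence interval:
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```

An interval that excludes 1 (as here, roughly 1.10 to 3.21 around an odds ratio of 1.88) indicates a statistically significant association; the study's published estimate additionally adjusts for covariates via logistic regression.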
Directory of Open Access Journals (Sweden)
Bárbara Juarez Amorim
2005-12-01
Full Text Available OBJECTIVE: To investigate the pattern of perfusion abnormalities in ictal and interictal brain perfusion SPECT images (BSI) from patients with temporal lobe epilepsy (TLE). METHOD: Interictal and ictal BSIs were acquired from 24 patients with refractory TLE. BSIs were analyzed by visual inspection and statistical parametric mapping (SPM2). Statistical analysis compared the patient group to a control group of 50 volunteers. The images from patients with left-TLE were left-right flipped. RESULTS: No significant perfusion differences were observed in interictal scans with SPM. Ictal BSI in SPM analysis revealed hyperperfusion within the ipsilateral temporal lobe (epileptogenic focus) and also the contralateral parieto-occipital region, ipsilateral posterior cingulate gyrus, occipital lobes and ipsilateral basal ganglia. Ictal BSI also showed areas of hypoperfusion. CONCLUSION: In a group analysis of ictal BSI of patients with TLE, voxel-wise analysis detects a network of distant regions of perfusional alteration which may play an active role in seizure genesis and propagation.
National Statistical Commission and Indian Official Statistics*
Indian Academy of Sciences (India)
IAS Admin
a good collection of official statistics of that time. With more .... statistical agencies and institutions to provide details of statistical activities .... ing several training programmes. .... ful completion of Indian Statistical Service examinations, the.
Müller-Kirsten, Harald J W
2013-01-01
Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...
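The "different ways of counting arrangements in the three statistics" mentioned above can be made concrete with the standard textbook counting formulas; the function names below are mine:

```python
from math import comb

def maxwell_boltzmann(n, g):
    """Distinguishable particles, no occupancy limit: g**n arrangements."""
    return g ** n

def bose_einstein(n, g):
    """Indistinguishable bosons: C(n + g - 1, n) arrangements."""
    return comb(n + g - 1, n)

def fermi_dirac(n, g):
    """Indistinguishable fermions, at most one per state: C(g, n)."""
    return comb(g, n)

# 2 particles distributed over 3 single-particle states:
print(maxwell_boltzmann(2, 3), bose_einstein(2, 3), fermi_dirac(2, 3))
# → 9 6 3
```

The shrinking counts (9, 6, 3) show exactly what the text describes: indistinguishability removes arrangements relative to the classical count, and the exclusion principle removes still more.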
Directory of Open Access Journals (Sweden)
Joachim I. Krueger
2018-04-01
Full Text Available The practice of Significance Testing (ST remains widespread in psychological science despite continual criticism of its flaws and abuses. Using simulation experiments, we address four concerns about ST and for two of these we compare ST’s performance with prominent alternatives. We find the following: First, the 'p' values delivered by ST predict the posterior probability of the tested hypothesis well under many research conditions. Second, low 'p' values support inductive inferences because they are most likely to occur when the tested hypothesis is false. Third, 'p' values track likelihood ratios without raising the uncertainties of relative inference. Fourth, 'p' values predict the replicability of research findings better than confidence intervals do. Given these results, we conclude that 'p' values may be used judiciously as a heuristic tool for inductive inference. Yet, 'p' values cannot bear the full burden of inference. We encourage researchers to be flexible in their selection and use of statistical methods.
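The abstract's second finding, that low p values are most likely to occur when the tested hypothesis is false, is easy to reproduce with a small simulation. This is a sketch with settings of my own choosing (a two-sided z test with known variance), not the authors' code:

```python
import random, math

def two_sided_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p value for H0: mean = mu0 (known sigma)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))  # = 2 * (1 - Phi(|z|))

random.seed(0)
n, reps = 20, 2000

def rejection_rate(mu):
    """Fraction of simulated samples from N(mu, 1) with p < 0.05."""
    ps = [two_sided_p([random.gauss(mu, 1) for _ in range(n)])
          for _ in range(reps)]
    return sum(p < 0.05 for p in ps) / reps

print(rejection_rate(0.0))  # true null: p < .05 about 5% of the time
print(rejection_rate(0.5))  # false null: low p values far more frequent
```

Under the true null the p value is uniformly distributed, so rejections occur at the nominal 5% rate; under the false null they occur at the test's power, which is much higher.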
Can a significance test be genuinely Bayesian?
Pereira, Carlos A. de B.; Stern, Julio Michael; Wechsler, Sergio
2008-01-01
The Full Bayesian Significance Test, FBST, is extensively reviewed. Its test statistic, a genuine Bayesian measure of evidence, is discussed in detail. Its behavior in some problems of statistical inference like testing for independence in contingency tables is discussed.
Whither Statistics Education Research?
Watson, Jane
2016-01-01
This year marks the 25th anniversary of the publication of a "National Statement on Mathematics for Australian Schools", which was the first curriculum statement this country had including "Chance and Data" as a significant component. It is hence an opportune time to survey the history of the related statistics education…
Berman, Elizabeth
1979-01-01
Mathematics Revealed focuses on the principles, processes, operations, and exercises in mathematics.The book first offers information on whole numbers, fractions, and decimals and percents. Discussions focus on measuring length, percent, decimals, numbers as products, addition and subtraction of fractions, mixed numbers and ratios, division of fractions, addition, subtraction, multiplication, and division. The text then examines positive and negative numbers and powers and computation. Topics include division and averages, multiplication, ratios, and measurements, scientific notation and estim
Worry, Intolerance of Uncertainty, and Statistics Anxiety
Williams, Amanda S.
2013-01-01
Statistics anxiety is a problem for most graduate students. This study investigates the relationship between intolerance of uncertainty, worry, and statistics anxiety. Intolerance of uncertainty was significantly related to worry, and worry was significantly related to three types of statistics anxiety. Six types of statistics anxiety were…
[Statistics for statistics?--Thoughts about psychological tools].
Berger, Uwe; Stöbel-Richter, Yve
2007-12-01
Statistical methods take a prominent place in psychologists' educational programs. Known as difficult to understand and hard to learn, this content is feared by students. Those who do not aspire to a research career at a university quickly forget the drilled material. Furthermore, because at first glance it does not seem to apply to work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only insofar as it commands respect from other professions, namely physicians. For their own practice, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics deals with numbers, while psychotherapy deals with subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented within psychotherapeutic and psychological research. We therefore analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presumes a high degree of statistical education. To ignore statistics is to ignore research, and ultimately to leave one's own professional work open to arbitrariness.
Directory of Open Access Journals (Sweden)
Davis Woodworth
Studies have suggested that chronic pain syndromes are associated with neural reorganization in specific regions associated with the perception, processing, and integration of pain. Urological chronic pelvic pain syndrome (UCPPS) represents a collection of pain syndromes characterized by pelvic pain, namely Chronic Prostatitis/Chronic Pelvic Pain Syndrome (CP/CPPS) and Interstitial Cystitis/Painful Bladder Syndrome (IC/PBS), that are both poorly understood in their pathophysiology and treated ineffectively. We hypothesized that patients with UCPPS may have microstructural differences in the brain compared with healthy control subjects (HCs), as well as with patients with irritable bowel syndrome (IBS), a common gastrointestinal pain disorder. In the current study we performed population-based voxel-wise DTI and super-resolution track density imaging (TDI) in a large, two-center sample of phenotyped patients from the multicenter cohort with UCPPS (N = 45), IBS (N = 39), and HCs (N = 56) as part of the MAPP Research Network. Compared with HCs, UCPPS patients had lower fractional anisotropy (FA), lower generalized anisotropy (GA), lower track density, and higher mean diffusivity (MD) in brain regions commonly associated with the perception and integration of pain information. Results also showed significant differences in specific anatomical regions in UCPPS patients when compared with IBS patients, consistent with microstructural alterations specific to UCPPS. While IBS patients showed clear sex-related differences in FA, MD, GA, and track density consistent with previous reports, few such differences were observed in UCPPS patients. Heat maps illustrating the correlation between specific regions of interest and various pain and urinary symptom scores showed clustering of significant associations along the cortico-basal ganglia-thalamic-cortical loop associated with pain integration, modulation, and perception. Together, results suggest patients with UCPPS have extensive
Childhood Cancer Statistics – Graphs and Infographics: number of diagnoses and incidence rates ...
Statistics for experimentalists
Cooper, B E
2014-01-01
Statistics for Experimentalists aims to provide experimental scientists with a working knowledge of statistical methods and search approaches to the analysis of data. The book first elaborates on probability and continuous probability distributions. Discussions focus on properties of continuous random variables and normal variables, independence of two random variables, central moments of a continuous distribution, prediction from a normal distribution, binomial probabilities, and multiplication of probabilities and independence. The text then examines estimation and tests of significance. Topics include estimators and estimates, expected values, minimum variance linear unbiased estimators, sufficient estimators, methods of maximum likelihood and least squares, and the test of significance method. The manuscript ponders on distribution-free tests, Poisson process and counting problems, correlation and function fitting, balanced incomplete randomized block designs and the analysis of covariance, and experiment...
MQSA National Statistics – archived scorecard statistics (2018, 2017, 2016) ...
State Transportation Statistics 2014
2014-12-15
The Bureau of Transportation Statistics (BTS) presents State Transportation Statistics 2014, a statistical profile of transportation in the 50 states and the District of Columbia. This is the 12th annual edition of State Transportation Statistics, a ...
Significance evaluation in factor graphs
DEFF Research Database (Denmark)
Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet
2017-01-01
…in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating the statistical significance of observations from factor graph models. Results: Two novel numerical approximations for the evaluation of statistical significance are presented: first a method using importance sampling, second a saddlepoint-approximation-based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from… Conclusions: The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially reduce computational cost without compromising accuracy. This contribution allows analyses of large datasets…
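The importance-sampling idea named in this abstract can be illustrated on a toy significance problem. The factor-graph specifics are not reproduced here, so the binomial score model, the tilting parameter q, and all numbers below are assumptions: the tail probability of a score is estimated by sampling from a tilted distribution and reweighting each hit by its likelihood ratio.

```python
import math
import random

def binom_tail_exact(n, p, t):
    """Exact P(S >= t) for S ~ Binomial(n, p), for checking the estimate."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

def binom_tail_is(n, p, t, q, m, seed=0):
    """Importance-sampling estimate of P(S >= t): draw flips with tilted
    success probability q > p, and reweight each tail hit by the
    likelihood ratio (p/q)^s * ((1-p)/(1-q))^(n-s)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(m):
        s = sum(rng.random() < q for _ in range(n))
        if s >= t:
            total += (p / q) ** s * ((1 - p) / (1 - q)) ** (n - s)
    return total / m

exact = binom_tail_exact(20, 0.5, 15)          # small tail probability
est = binom_tail_is(20, 0.5, 15, q=0.75, m=20000)
```

With the tilt centred near the threshold, most samples land in the tail of interest, so far fewer draws are needed than with naive Monte Carlo for the same accuracy.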
Peculiarities of Teaching Medical Informatics and Statistics
Directory of Open Access Journals (Sweden)
Sergey Glushkov
2017-05-01
The article reviews features of teaching Medical Informatics and Statistics, a course belonging to the mathematical and natural science disciplines and provided in all faculties of I. M. Sechenov First Moscow State Medical University. For students of the Preventive Medicine Department, the time allotted to the course is significantly larger than for the similar course at other faculties. To improve the teaching methodology of the discipline, an analysis of the curriculum was carried out and attendance and student performance statistics were summarized. As a result, the main goals and objectives were identified; general educational functions and the contribution to the problems of education, upbringing, and development of students were revealed; and two stages of teaching were presented. Recommendations on the newest methodological developments aimed at improving the quality of teaching the discipline are provided, and ways of improving the methods and organizational forms of education are outlined.
Renyi statistics in equilibrium statistical mechanics
International Nuclear Information System (INIS)
Parvan, A.S.; Biro, T.S.
2010-01-01
The Renyi statistics in the canonical and microcanonical ensembles is examined, both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Exact analytical results for the ideal gas show that in the canonical ensemble, in the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics: the thermodynamic potential of the statistical ensemble is a homogeneous function of first degree in its extensive variables of state. We conclude that in this limit the Renyi statistics arrives at the same thermodynamic relations as those stemming from the Boltzmann-Gibbs statistics.
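The report's ensemble definitions are not reproduced in this abstract; for orientation, the standard Rényi entropy underlying any Rényi statistics, together with its Boltzmann-Gibbs limit, reads:

```latex
S_q = \frac{k}{1-q}\,\ln \sum_i p_i^{\,q}, \qquad q > 0,\; q \neq 1,
\qquad
\lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i ,
```

which is consistent with the abstract's claim that the Boltzmann-Gibbs results are recovered in the appropriate limit.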
Sampling, Probability Models and Statistical Reasoning – Statistical Inference
Indian Academy of Sciences (India)
Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...
Statistics Using Just One Formula
Rosenthal, Jeffrey S.
2018-01-01
This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
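Rosenthal's actual formula is not quoted in this summary, so the sketch below assumes the common conservative 95% margin of error for a sample proportion, 1/sqrt(n), and shows how confidence intervals and a rough two-sample comparison can both be derived from that single expression (function names and numbers are illustrative):

```python
import math

def margin_of_error(n):
    """Conservative 95% margin of error for a sample proportion: 1/sqrt(n)."""
    return 1.0 / math.sqrt(n)

def confidence_interval(p_hat, n):
    """95% confidence interval for a proportion, from the single formula."""
    moe = margin_of_error(n)
    return (p_hat - moe, p_hat + moe)

def significant_difference(p1, n1, p2, n2):
    """Rough significance check: do the two intervals fail to overlap?"""
    lo1, hi1 = confidence_interval(p1, n1)
    lo2, hi2 = confidence_interval(p2, n2)
    return hi1 < lo2 or hi2 < lo1
```

For example, with n = 400 the margin of error is 0.05, so sample proportions of 0.40 and 0.60 yield non-overlapping intervals and a "significant" difference, while 0.50 vs. 0.52 does not.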
Statistics Anxiety among Postgraduate Students
Koh, Denise; Zawi, Mohd Khairi
2014-01-01
Most postgraduate programmes that have research components require students to take at least one course in research statistics. Not all postgraduate programmes are science based; a significant number of postgraduate students from the social sciences will also be taking statistics courses as they try to complete their…
International Nuclear Information System (INIS)
Nemnes, G A; Anghel, D V
2010-01-01
We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems in non-equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size.
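No formulas are given in the abstract; assuming the standard form of Gentile's statistics, in which at most p particles may occupy a single state, the equilibrium mean occupation number interpolates between the familiar limits:

```latex
\bar{n}(\varepsilon) = \frac{1}{e^{x}-1} - \frac{p+1}{e^{(p+1)x}-1},
\qquad x = \frac{\varepsilon - \mu}{k_B T},
```

with p = 1 recovering the Fermi-Dirac result 1/(e^x + 1) and p → ∞ the Bose-Einstein result 1/(e^x − 1). (Fractional exclusion statistics, the other case studied, uses Haldane's state-counting instead and is not sketched here.)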
International Nuclear Information System (INIS)
Venkataraman, G.
1992-01-01
Treating the radiation gas as a classical gas, Einstein derived Planck's law of radiation by considering the dynamic equilibrium between atoms and radiation. Dissatisfied with this treatment, S.N. Bose derived Planck's law in another, original way. He treated the problem in full generality: he counted how many cells were available for the photon gas in phase space and distributed the photons into these cells. In this manner of distribution there were three radically new ideas: the indistinguishability of particles, the spin of the photon (with only two possible orientations), and the nonconservation of photon number. This gave rise to a new discipline of quantum statistical mechanics. The physics underlying Bose's discovery, its significance, and its role in the development of the concept of the ideal gas, the spin-statistics theorem, and spin particles are described. The book has been written in a simple and direct language, in an informal style, aiming to stimulate the curiosity of the reader. (M.G.B.)
Statistical and theoretical research
International Nuclear Information System (INIS)
Anon.
1983-01-01
Significant accomplishments include the creation of field designs to detect population impacts, new census procedures for small mammals, and methods for designing studies to determine where, and in what amounts, a contaminant is present across certain landscapes. A book describing these statistical methods is currently being written and will apply to a variety of environmental contaminants, including radionuclides. PNL scientists also have devised an analytical method for predicting the success of field experiments on wild populations. Two highlights of current research are the discoveries that populations of free-roaming horse herds can double in four years and that grizzly bear populations may be substantially smaller than once thought. As stray horses become a public nuisance at DOE and other large Federal sites, it is important to determine their number. Similar statistical theory can be readily applied to other situations where wild animals are a problem of concern to other government agencies. Another book, on statistical aspects of radionuclide studies, is written specifically for researchers in radioecology.
The foundations of statistics
Savage, Leonard J
1972-01-01
Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.
State Transportation Statistics 2010
2011-09-14
The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2010, a statistical profile of transportation in the 50 states and the District of Col...
State Transportation Statistics 2012
2013-08-15
The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2012, a statistical profile of transportation ...
Adrenal Gland Tumors: Statistics
Approved by the Cancer.Net Editorial Board. A primary adrenal gland tumor is very uncommon; exact statistics are not available for this type of tumor ...
State transportation statistics 2009
2009-01-01
The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2009, a statistical profile of transportation in the 50 states and the District ...
State Transportation Statistics 2011
2012-08-08
The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2011, a statistical profile of transportation in the 50 states and the District of Col...
Neuroendocrine Tumor: Statistics
Approved by the Cancer.Net Editorial Board. It is important to remember that statistics on the survival rates for people with a ...
State Transportation Statistics 2013
2014-09-19
The Bureau of Transportation Statistics (BTS), a part of the U.S. Department of Transportation's (USDOT) Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2013, a statistical profile of transportatio...
BTS statistical standards manual
2005-10-01
The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...
Detecting errors in micro and trace analysis by using statistics
DEFF Research Database (Denmark)
Heydorn, K.
1993-01-01
By assigning a standard deviation to each step in an analytical method, it is possible to predict the standard deviation of each analytical result obtained by that method. If the actual variability of replicate analytical results agrees with the expected variability, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, while agreement between laboratories indicates their absence. This statistical approach, referred to as the analysis of precision, was applied…
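Heydorn's analysis of precision is only named in this abstract; below is a minimal sketch of the underlying idea, assuming per-step standard deviations combine in quadrature and that replicate scatter is judged against the prediction with a chi-square-like statistic (all numbers are illustrative, not from the paper):

```python
import math

def expected_sigma(step_sigmas):
    """Predicted std. dev. of the final result: step uncertainties
    combined in quadrature."""
    return math.sqrt(sum(s * s for s in step_sigmas))

def precision_T(replicates, sigma):
    """Scaled sum of squared deviations from the mean. If the method is
    in statistical control, T is roughly chi-square with n-1 d.o.f."""
    n = len(replicates)
    mean = sum(replicates) / n
    return sum((x - mean) ** 2 for x in replicates) / sigma ** 2

# Hypothetical three-step method (e.g. weighing, irradiation, counting)
sigma = expected_sigma([0.02, 0.01, 0.02])      # predicted sigma = 0.03
T = precision_T([1.00, 1.03, 0.98, 1.01], sigma)
```

A T value far above the chi-square quantile for n−1 degrees of freedom would signal excess scatter, i.e. that the method is not in statistical control.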
Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely
2012-05-01
Due to the increasing popularity of collaborative tagging systems, the research on tagged networks, hypergraphs, ontologies, folksonomies and other related concepts is becoming an important interdisciplinary area with great potential and relevance for practical applications. In most collaborative tagging systems the tagging by the users is completely ‘flat’, while in some cases they are allowed to define a shallow hierarchy for their own tags. However, usually no overall hierarchical organization of the tags is given, and one of the interesting challenges of this area is to provide an algorithm generating the ontology of the tags from the available data. In contrast, there are also other types of tagged networks available for research, where the tags are already organized into a directed acyclic graph (DAG), encapsulating the ‘is a sub-category of’ type of hierarchy between each other. In this paper, we study how this DAG affects the statistical distribution of tags on the nodes marked by the tags in various real networks. The motivation for this research was the fact that understanding the tagging based on a known hierarchy can help in revealing the hidden hierarchy of tags in collaborative tagging systems. We analyse the relation between the tag-frequency and the position of the tag in the DAG in two large sub-networks of the English Wikipedia and a protein-protein interaction network. We also study the tag co-occurrence statistics by introducing a two-dimensional (2D) tag-distance distribution preserving both the difference in the levels and the absolute distance in the DAG for the co-occurring pairs of tags. Our most interesting finding is that the local relevance of tags in the DAG (i.e. their rank or significance as characterized by, e.g., the length of the branches starting from them) is much more important than their global distance from the root. Furthermore, we also introduce a simple tagging model based on random walks on the DAG, capable of
Statistical Seismology and Induced Seismicity
Tiampo, K. F.; González, P. J.; Kazemian, J.
2014-12-01
While seismicity triggered or induced by natural resources production such as mining or water impoundment in large dams has long been recognized, the recent increase in the unconventional production of oil and gas has been linked to a rapid rise in seismicity in many places, including central North America (Ellsworth et al., 2012; Ellsworth, 2013). Worldwide, induced events of M~5 have occurred and, although rare, have resulted in both damage and public concern (Horton, 2012; Keranen et al., 2013). In addition, over the past twenty years, the increase in both number and coverage of seismic stations has resulted in an unprecedented ability to precisely record the magnitude and location of large numbers of small magnitude events. The increase in the number and type of seismic sequences available for detailed study has revealed differences in their statistics that were previously difficult to quantify. For example, seismic swarms that produce significant numbers of foreshocks as well as aftershocks have been observed in different tectonic settings, including California, Iceland, and the East Pacific Rise (McGuire et al., 2005; Shearer, 2012; Kazemian et al., 2014). Similarly, smaller events have been observed prior to larger induced events in several occurrences from energy production. The field of statistical seismology has long focused on the question of triggering and the mechanisms responsible (Stein et al., 1992; Hill et al., 1993; Steacy et al., 2005; Parsons, 2005; Main et al., 2006). For example, in most cases the associated stress perturbations are much smaller than the earthquake stress drop, suggesting an inherent sensitivity to relatively small stress changes (Nalbant et al., 2005). Induced seismicity provides the opportunity to investigate triggering and, in particular, the differences between long- and short-range triggering. Here we investigate the statistics of induced seismicity sequences from around the world, including central North America and Spain, and
[Comment on] Statistical discrimination
Chinn, Douglas
In the December 8, 1981, issue of Eos, a news item reported the conclusion of a National Research Council study that sexual discrimination against women with Ph.D.'s exists in the field of geophysics. Basically, the item reported that even when allowances are made for motherhood the percentage of female Ph.D.'s holding high university and corporate positions is significantly lower than the percentage of male Ph.D.'s holding the same types of positions. The sexual discrimination conclusion, based only on these statistics, assumes that there are no basic psychological differences between men and women that might cause different populations in the employment group studied. Therefore, the reasoning goes, after taking into account possible effects from differences related to anatomy, such as women stopping their careers in order to bear and raise children, the statistical distributions of positions held by male and female Ph.D.'s ought to be very similar to one another. Any significant differences between the distributions must be caused primarily by sexual discrimination.
Preventing statistical errors in scientific journals.
Nuijten, M.B.
2016-01-01
There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error
Statistics in Schools – Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real-life data. Explore the site for standards-aligned, classroom-ready activities.
Transport Statistics - Transport - UNECE
Statistics in the 21st century
Wells, Martin T
2001-01-01
Exactly what is the state of the art in statistics as we move forward into the 21st century? What promises, what trends does its future hold? Through the reflections of 70 of the world's leading statistical methodologists, researchers, theorists, and practitioners, Statistics in the 21st Century answers those questions. Originally published in the Journal of the American Statistical Association, this collection of vignettes examines our statistical past, comments on our present, and speculates on our future. Although the coverage is broad and the topics diverse, it reveals the essential intell
Semiclassical statistical mechanics
International Nuclear Information System (INIS)
Stratt, R.M.
1979-04-01
On the basis of an approach devised by Miller, a formalism is developed which allows the nonperturbative incorporation of quantum effects into equilibrium classical statistical mechanics. The resulting expressions bear a close similarity to classical phase space integrals and, therefore, are easily molded into forms suitable for examining a wide variety of problems. As a demonstration of this, three such problems are briefly considered: the simple harmonic oscillator, the vibrational state distribution of HCl, and the density-independent radial distribution function of He-4. A more detailed study is then made of two more general applications involving the statistical mechanics of nonanalytic potentials and of fluids. The former, which is a particularly difficult problem for perturbative schemes, is treated with only limited success by restricting phase space and by adding an effective potential. The problem of fluids, however, is readily found to yield to a semiclassical pairwise interaction approximation, which in turn permits any classical many-body model to be expressed in a convenient form. The remainder of the discussion concentrates on some ramifications of having a phase space version of quantum mechanics. To test the breadth of the formulation, the task of constructing quantal ensemble averages of phase space functions is undertaken, and in the process several limitations of the formalism are revealed. A rather different approach is also pursued. The concept of quantum mechanical ergodicity is examined through the use of numerically evaluated eigenstates of the Barbanis potential, and the existence of this quantal ergodicity - normally associated with classical phase space - is verified. 21 figures, 4 tables
Generalized quantum statistics
International Nuclear Information System (INIS)
Chou, C.
1992-01-01
In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics
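The generalized statistics itself is not spelled out in this abstract; the two special cases it must contain are the standard mean occupation numbers:

```latex
\langle n(\varepsilon) \rangle_{\mathrm{FD}} = \frac{1}{e^{(\varepsilon-\mu)/k_B T} + 1},
\qquad
\langle n(\varepsilon) \rangle_{\mathrm{BE}} = \frac{1}{e^{(\varepsilon-\mu)/k_B T} - 1},
```

corresponding to a single-particle Fock space of dimension 2 (at most one particle per state) and of unbounded dimension, respectively.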
Significance of thymic scintigraphy
International Nuclear Information System (INIS)
Baba, Hiromi; Oshiumi, Yoshihiko; Nakayama, Chikashi; Morita, Kazunori; Koga, Ichinari
1978-01-01
Thymic scintigraphy with 67Ga-citrate and 75Se-methionine was performed on 6 cases of thymoma and 5 cases of myasthenia gravis. The scan was positive in 5 of the 6 thymoma cases; all patients with malignant thymoma were positive. Among the 7 cases of myasthenia gravis, scintigrams revealed 2 thymomas and 1 hyperplasia in patients in whom no thymic mass had been suspected. Thymic scintigraphy is a useful examination when dealing with myasthenia gravis. (auth.)
Statistical analysis and data management
International Nuclear Information System (INIS)
Anon.
1981-01-01
This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and the problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship between the variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine was found.
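The report's statistical model is not described in this summary; below is a minimal sketch of one way to test such a correlation, assuming a simple Pearson coefficient with its t statistic (the precipitation and capture figures are hypothetical illustrations, not PNL data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r * sqrt((n-2)/(1-r^2)); under the null of no correlation,
    t follows a t distribution with n-2 degrees of freedom."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# Hypothetical monthly precipitation (mm) and small-mammal trap captures
precip = [5, 12, 20, 33, 41, 55, 60, 48, 30, 18, 9, 4]
captures = [2, 4, 6, 9, 12, 15, 17, 13, 8, 5, 3, 1]
r = pearson_r(precip, captures)
t = t_statistic(r, len(precip))
```

The observed t can then be compared with the critical value of the t distribution with n−2 degrees of freedom to decide whether the correlation is statistically significant.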
Symmetries and statistical behavior in fermion systems
International Nuclear Information System (INIS)
French, J.B.; Draayer, J.P.
1978-01-01
The interplay between statistical behavior and symmetries in nuclei, as revealed, for example, by spectra and by distributions for various kinds of excitations is considered. Methods and general results, rather than specific applications, are given. 16 references
Detecting Novelty and Significance
Ferrari, Vera; Bradley, Margaret M.; Codispoti, Maurizio; Lang, Peter J.
2013-01-01
Studies of cognition often use an “oddball” paradigm to study effects of stimulus novelty and significance on information processing. However, an oddball tends to be perceptually more novel than the standard, repeated stimulus as well as more relevant to the ongoing task, making it difficult to disentangle effects due to perceptual novelty and stimulus significance. In the current study, effects of perceptual novelty and significance on ERPs were assessed in a passive viewing context by presenting repeated and novel pictures (natural scenes) that either signaled significant information regarding the current context or not. A fronto-central N2 component was primarily affected by perceptual novelty, whereas a centro-parietal P3 component was modulated by both stimulus significance and novelty. The data support an interpretation that the N2 reflects perceptual fluency and is attenuated when a current stimulus matches an active memory representation and that the amplitude of the P3 reflects stimulus meaning and significance. PMID:19400680
Significant NRC Enforcement Actions
Nuclear Regulatory Commission — This dataset provides a list of Nuclear Regulatory Commission (NRC) issued significant enforcement actions. These actions, referred to as "escalated", are issued by...
National Statistical Commission and Indian Official Statistics
Indian Academy of Sciences (India)
Author Affiliations. T J Rao1. C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science (AIMSCS) University of Hyderabad Campus Central University Post Office, Prof. C. R. Rao Road Hyderabad 500 046, AP, India.
Comparative Gender Performance in Business Statistics.
Mogull, Robert G.
1989-01-01
Comparative performance of male and female students in introductory and intermediate statistics classes was examined for over 16 years at a state university. Gender means from 97 classes and 1,609 males and 1,085 females revealed a probabilistic--although statistically insignificant--superior performance by female students that appeared to…
Statistics For Dummies
Rumsey, Deborah
2011-01-01
The fun and easy way to get down to business with statistics Stymied by statistics? No fear -- this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more.Tracks to a typical first semester statistics cou
Industrial statistics with Minitab
Cintas, Pere Grima; Llabres, Xavier Tort-Martorell
2012-01-01
Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry.Explores
International Nuclear Information System (INIS)
Dienes, J.K.
1993-01-01
Although it is possible to simulate the ground blast from a single explosive shot with a simple computer algorithm and appropriate constants, the most commonly used modelling methods do not account for major changes in geology or shot energy because mechanical features such as tectonic stresses, fault structure, microcracking, brittle-ductile transition, and water content are not represented in significant detail. An alternative approach for modelling called Statistical Crack Mechanics is presented in this paper. This method, developed in the seventies as a part of the oil shale program, accounts for crack opening, shear, growth, and coalescence. Numerous photographs and micrographs show that shocked materials tend to involve arrays of planar cracks. The approach described here provides a way to account for microstructure and give a representation of the physical behavior of a material at the microscopic level that can account for phenomena such as permeability, fragmentation, shear banding, and hot-spot formation in explosives
Graphene Statistical Mechanics
Bowick, Mark; Kosmrlj, Andrej; Nelson, David; Sknepnek, Rastko
2015-03-01
Graphene provides an ideal system to test the statistical mechanics of thermally fluctuating elastic membranes. The high Young's modulus of graphene means that thermal fluctuations over even small length scales significantly stiffen the renormalized bending rigidity. We study the effect of thermal fluctuations on graphene ribbons of width W and length L, pinned at one end, via coarse-grained Molecular Dynamics simulations and compare with analytic predictions of the scaling of width-averaged root-mean-squared height fluctuations as a function of distance along the ribbon. Scaling collapse as a function of W and L also allows us to extract the scaling exponent eta governing the long-wavelength stiffening of the bending rigidity. A full understanding of the geometry-dependent mechanical properties of graphene, including arrays of cuts, may allow the design of a variety of modular elements with desired mechanical properties starting from pure graphene alone. Supported by NSF grant DMR-1435794
[Big data in official statistics].
Zwick, Markus
2015-08-01
The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The task of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Before big data can be used correctly in official statistics, many questions need to be answered and problems solved: data quality, data protection, privacy, and sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have agreed on a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.
Recreational Boating Statistics 2012
Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...
Recreational Boating Statistics 2013
Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...
Statistical data analysis handbook
National Research Council Canada - National Science Library
Wall, Francis J
1986-01-01
It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...
U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...
Recreational Boating Statistics 2011
Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...
Uterine Cancer Statistics
... the most commonly diagnosed gynecologic cancer. U.S. Cancer Statistics Data Visualizations Tool: the Data Visualizations tool makes ...
Tuberculosis Data and Statistics
... Morbidity and Mortality Weekly Report: Decrease in Reported Tuberculosis Cases. MMWR 2010; 59 ( ...
National transportation statistics 2011
2011-04-01
Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record...
National Transportation Statistics 2008
2009-01-08
Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record...
... Research shows that mental illnesses are common in ... of mental illnesses, such as suicide and disability. Statistics topics: Mental Illness, Any Anxiety Disorder ...
School Violence: Data & Statistics
Caregiver Statistics: Demographics
... Selected Long-Term Care Statistics ... What is ... needs and services are wide-ranging and complex, statistics may vary from study to study. Sources for ...
Alcohol Facts and Statistics
... Alcohol Use in the United States: ... 1238–1245, 2004. PMID: 15010446. National Center for Statistics and Analysis. 2014 Crash Data Key Findings (Traffic ...
National Transportation Statistics 2009
2010-01-21
Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record, ...
National transportation statistics 2010
2010-01-01
National Transportation Statistics presents statistics on the U.S. transportation system, including its physical components, safety record, economic performance, the human and natural environment, and national security. This is a large online document...
Statistics for Finance
DEFF Research Database (Denmark)
Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard
Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...
Principles of applied statistics
National Research Council Canada - National Science Library
Cox, D. R; Donnelly, Christl A
2011-01-01
.... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...
Applying contemporary statistical techniques
Wilcox, Rand R
2003-01-01
Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advances ...
The Significance of Normativity
DEFF Research Database (Denmark)
Presskorn-Thygesen, Thomas
and Weber. It does so by situating Durkheim and Weber in the context of Neo-Kantian philosophy, which prevailed among their contemporaries, and the chapter thereby reveals a series of under-thematised similarities not only with regard to their methodological positions, but also in their conception of social...... of social theory. In pursuing this overall research agenda, the dissertation contributes to a number of specific research literatures. Following two introductory and methodological chapters, Chapter 3 thus critically examines the analysis of normativity suggested in the recent attempts at transforming...... the methods of neo-classical economics into a broader form of social theory. The chapter thereby contributes to the critical discourses, particularly in philosophy of science, that challenge the validity of neo-classical economics and its underlying conception of practical rationality. In examining...
Interactive statistics with ILLMO
Martens, J.B.O.S.
2014-01-01
Progress in empirical research relies on adequate statistical analysis and reporting. This article proposes an alternative approach to statistical modeling that is based on an old but mostly forgotten idea, namely Thurstone modeling. Traditional statistical methods assume that either the measured ...
Lenard, Christopher; McCarthy, Sally; Mills, Terence
2014-01-01
There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…
Youth Sports Safety Statistics
... 6):794-799. 31 American Heart Association. CPR statistics. www.heart.org/HEARTORG/CPRAndECC/WhatisCPR/CPRFactsandStats/CPR%20Statistics_ ... Mental Health Services Administration, Center for Behavioral Health Statistics and Quality. (January 10, 2013). The DAWN Report: ...
On two methods of statistical image analysis
Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.
1999-01-01
The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition, ...
The Math Problem: Advertising Students' Attitudes toward Statistics
Fullerton, Jami A.; Kendrick, Alice
2013-01-01
This study used the Students' Attitudes toward Statistics Scale (STATS) to measure attitude toward statistics among a national sample of advertising students. A factor analysis revealed four underlying factors make up the attitude toward statistics construct--"Interest & Future Applicability," "Confidence," "Statistical Tools," and "Initiative."…
Dowdy, Shirley; Chilko, Daniel
2011-01-01
Praise for the Second Edition: "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly." - The UMAP Journal. Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages has made it possible for ...
Boslaugh, Sarah
2013-01-01
Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.
Statistics & probability for dummies
Rumsey, Deborah J
2013-01-01
Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first semester statistics course ...
Nonparametric statistical inference
Gibbons, Jean Dickinson
2010-01-01
Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference. - Eugenia Stoimenova, Journal of Applied Statistics, June 2012. … one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students. - Biometrics, 67, September 2011. This excellently presented ...
Business statistics for dummies
Anderson, Alan
2013-01-01
Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, with ...
Griffiths, Dawn
2009-01-01
Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics
Kim, E.; Newton, A. P.
2012-04-01
One major problem in dynamo theory is the multi-scale nature of MHD turbulence, which requires a statistical theory in terms of probability distribution functions. In this contribution, we present the statistical theory of magnetic fields in a simplified mean field α-Ω dynamo model by varying the statistical properties of alpha, including marginal stability and intermittency, and then utilize observational data of solar activity to fine-tune the mean field dynamo model. Specifically, we first present a comprehensive investigation into the effect of the stochastic parameters in a simplified α-Ω dynamo model. By considering the manifold of marginal stability (the region of parameter space where the mean growth rate is zero), we show that stochastic fluctuations are conducive to dynamo action. Furthermore, by considering the cases of fluctuating alpha that are periodic and of Gaussian coloured random noise with identical characteristic time-scales and fluctuating amplitudes, we show that the transition to dynamo is significantly facilitated for stochastic alpha with random noise. We also show that probability density functions (PDFs) of the growth rate, magnetic field, and magnetic energy can provide a wealth of useful information regarding dynamo behaviour and intermittency. Finally, the precise statistical properties of the dynamo, such as temporal correlation and fluctuating amplitude, are found to depend on the distribution of the fluctuations of the stochastic parameters. We then use observations of solar activity to constrain parameters relating to the α effect in stochastic α-Ω nonlinear dynamo models. This is achieved by performing a comprehensive statistical comparison, computing PDFs of solar activity from observations and from our simulation of the mean field dynamo model. The observational data used are the time history of solar activity inferred from C14 data over the past 11,000 years on a long time scale and direct observations of the sunspot ...
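As a rough illustration of the PDF-based analysis this abstract describes, the sketch below simulates a growth rate driven by coloured (temporally correlated) noise and estimates its probability density with a histogram. The Ornstein-Uhlenbeck-style update and all parameter values are assumptions for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, tau = 100_000, 0.01, 0.5   # samples, time step, correlation time (assumed)
noise = rng.normal(size=n)

# Coloured (correlated) noise for the growth rate: each step relaxes toward
# zero on time-scale tau while being kicked by Gaussian noise.
gamma = np.empty(n)
gamma[0] = 0.0
for i in range(1, n):
    gamma[i] = gamma[i - 1] * (1 - dt / tau) + np.sqrt(2 * dt / tau) * noise[i]

# Histogram-based PDF estimate of the fluctuating growth rate
density, edges = np.histogram(gamma, bins=100, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print(f"mean = {gamma.mean():.2f}, PDF integral = {(density * np.diff(edges)).sum():.2f}")
```

With `density=True`, the histogram integrates to one by construction, so the estimated PDF can be compared directly against PDFs computed from observations, as the abstract proposes.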
Lectures on algebraic statistics
Drton, Mathias; Sullivant, Seth
2009-01-01
How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.
Naghshpour, Shahdad
2012-01-01
Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analysis ...
Baseline Statistics of Linked Statistical Data
Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe
2014-01-01
We are surrounded by an ever increasing ocean of information, everybody will agree to that. We build sophisticated strategies to govern this information: design data models, develop infrastructures for data sharing, build tools for data analysis. Statistical datasets curated by National ...
Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze
2014-08-01
Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.
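A minimal sketch of the pretest/posttest comparison-group analysis described above: compare anxiety gain scores (post minus pre) between a strategy group and a control group with a Welch t-test. All numbers below are invented for illustration; the study's actual analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical anxiety scores: the strategy group's anxiety drops more
pre_t = rng.normal(60, 8, 39); post_t = pre_t - rng.normal(8, 5, 39)   # treatment
pre_c = rng.normal(60, 8, 38); post_c = pre_c - rng.normal(2, 5, 38)   # control

gain_t = post_t - pre_t
gain_c = post_c - pre_c

def welch_t(a, b):
    """Welch's t statistic and Satterthwaite degrees of freedom."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
    return t, df

t, df = welch_t(gain_t, gain_c)
print(f"t = {t:.2f}, df = {df:.0f}")  # a large negative t favours the strategy group
```

Group sizes 39 and 38 mirror the 77 participants mentioned in the abstract, but the split is an assumption; a quasi-experimental design would typically also adjust for pretest differences (e.g. ANCOVA) rather than rely on gain scores alone.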
Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D.
2014-12-01
Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and their resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate their effects. The most significant tsunamis are, not surprisingly, often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria for determining the most significant tsunamis: the number of deaths, the amount of damage, the maximum runup height, a major impact on tsunami science or policy, etc. As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "orphan" tsunami, and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and 2004 Banda Aceh tsunamis all resulted in warning centers or systems being established. The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data, can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/
Conversion factors and oil statistics
International Nuclear Information System (INIS)
Karbuz, Sohbet
2004-01-01
World oil statistics, in scope and accuracy, are often far from perfect. They can easily lead to misguided conclusions regarding the state of market fundamentals. Without proper attention directed at statistical caveats, the ensuing interpretation of oil market data opens the door to unnecessary volatility, and can distort perception of market fundamentals. Among the numerous caveats associated with the compilation of oil statistics, conversion factors, used to produce aggregated data, play a significant role. Interestingly enough, little attention is paid to conversion factors, i.e. to the relation between different units of measurement for oil. Additionally, the underlying information regarding the choice of a specific factor when trying to produce measurements of aggregated data remains scant. The aim of this paper is to shed some light on the impact of conversion factors for two commonly encountered issues: mass to volume equivalencies (barrels to tonnes) and the broad energy measures encountered in world oil statistics. This paper will seek to demonstrate how inappropriate and misused conversion factors can yield wildly varying results and ultimately distort oil statistics. Examples will show that while discrepancies in commonly used conversion factors may seem trivial, their impact on the assessment of a world oil balance is far from negligible. A unified and harmonised convention for conversion factors is necessary to achieve accurate comparisons and aggregate oil statistics for the benefit of both end-users and policy makers.
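To make the barrels-to-tonnes issue concrete, the sketch below aggregates two hypothetical crude streams once with density-specific conversion factors and once with a single flat factor. The stream volumes and factor values are illustrative assumptions, not official figures.

```python
# Production reported in thousands of barrels (invented numbers)
production_kbbl = {"light_crude": 500.0, "heavy_crude": 300.0}

# Barrels per tonne depends on density: lighter crudes pack more barrels
# into each tonne. These factors are plausible but assumed.
barrels_per_tonne = {"light_crude": 7.6, "heavy_crude": 6.8}

def total_ktonnes_specific(prod, factors):
    """Convert each stream with its own density-specific factor."""
    return sum(kbbl / factors[name] for name, kbbl in prod.items())

def total_ktonnes_flat(prod, flat_factor=7.33):
    """Convert everything with one 'average' factor (a common shortcut)."""
    return sum(prod.values()) / flat_factor

specific = total_ktonnes_specific(production_kbbl, barrels_per_tonne)
flat = total_ktonnes_flat(production_kbbl)
print(round(specific, 1), round(flat, 1))  # prints: 109.9 109.1
```

Even in this two-stream toy case the totals disagree by almost one thousand tonnes; summed across a world oil balance, such per-stream discrepancies are exactly the non-negligible distortion the paper warns about.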
Statistical Physics An Introduction
Yoshioka, Daijiro
2007-01-01
This book provides a comprehensive presentation of the basics of statistical physics. The first part explains the essence of statistical physics and how it provides a bridge between microscopic and macroscopic phenomena, allowing one to derive quantities such as entropy. Here the author avoids going into details such as Liouville's theorem or the ergodic theorem, which are difficult for beginners and unnecessary for the actual application of statistical mechanics. In the second part, statistical mechanics is applied to various systems which, although they look different, share the same mathematical structure. In this way readers can deepen their understanding of statistical physics. The book also features applications to quantum dynamics, thermodynamics, the Ising model and the statistical dynamics of free spins.
Statistical symmetries in physics
International Nuclear Information System (INIS)
Green, H.S.; Adelaide Univ., SA
1994-01-01
Every law of physics is invariant under some group of transformations and is therefore the expression of some type of symmetry. Symmetries are classified as geometrical, dynamical or statistical. At the most fundamental level, statistical symmetries are expressed in the field theories of the elementary particles. This paper traces some of the developments from the discovery of Bose statistics, one of the two fundamental symmetries of physics. A series of generalizations of Bose statistics is described. A supersymmetric generalization accommodates fermions as well as bosons, and further generalizations, including parastatistics, modular statistics and graded statistics, accommodate particles with properties such as 'colour'. A factorization of elements of ggl(n_b, n_f) can be used to define truncated boson operators. A general construction is given for q-deformed boson operators, and explicit constructions of the same type are given for various 'deformed' algebras. A summary is given of some of the applications and potential applications. 39 refs., 2 figs
The statistical stability phenomenon
Gorban, Igor I
2017-01-01
This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers...
Equilibrium statistical mechanics
Jackson, E Atlee
2000-01-01
Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction to ...
Applied statistics for economists
Lewis, Margaret
2012-01-01
This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.
Mineral industry statistics 1975
Energy Technology Data Exchange (ETDEWEB)
1978-01-01
Production, consumption and marketing statistics are given for solid fuels (coal, peat), liquid fuels and gases (oil, natural gas), iron ore, bauxite and other minerals quarried in France in 1975. Accident statistics are also included. Production statistics are presented for the Overseas Departments and territories (French Guiana, New Caledonia, New Hebrides). An account of modifications in the mining field in 1975 is given. Concessions, exploitation permits, and permits solely for prospecting for mineral products are discussed. (In French)
Lectures on statistical mechanics
Bowler, M G
1982-01-01
Anyone dissatisfied with the almost ritual dullness of many 'standard' texts in statistical mechanics will be grateful for the lucid explanation and generally reassuring tone. Aimed at securing firm foundations for equilibrium statistical mechanics, topics of great subtlety are presented transparently and enthusiastically. Very little mathematical preparation is required beyond elementary calculus, and prerequisites in physics are limited to some elementary classical thermodynamics. Suitable as a basis for a first course in statistical mechanics, the book is an ideal supplement to more conventional ...
Directory of Open Access Journals (Sweden)
Mirjam Nielen
2017-01-01
Full Text Available. Always wondered why research papers often present rather complicated statistical analyses? Or wondered how to properly analyse the results of a pragmatic trial from your own practice? This talk will give an overview of basic statistical principles and focus on the why of statistics, rather than on the how. This is a podcast of Mirjam's talk at the Veterinary Evidence Today conference, Edinburgh, November 2, 2016.
Equilibrium statistical mechanics
Mayer, J E
1968-01-01
The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight the ...
Mahalanobis, P C
1965-01-01
Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines lattice ...
Safety significance evaluation system
International Nuclear Information System (INIS)
Lew, B.S.; Yee, D.; Brewer, W.K.; Quattro, P.J.; Kirby, K.D.
1991-01-01
This paper reports that the Pacific Gas and Electric Company (PG&E), in cooperation with ABZ, Incorporated and Science Applications International Corporation (SAIC), investigated the use of artificial intelligence-based programming techniques to assist utility personnel with regulatory compliance problems. The result of this investigation is that artificial intelligence-based programming techniques can successfully be applied to this problem. To demonstrate this, a general methodology was developed and several prototype systems based on this methodology were built. The prototypes address U.S. Nuclear Regulatory Commission (NRC) event reportability requirements, technical specification compliance based on plant equipment status, and quality assurance assistance. This collection of prototype modules is named the Safety Significance Evaluation System.
Predicting significant torso trauma.
Nirula, Ram; Talmor, Daniel; Brasel, Karen
2005-07-01
Identification of motor vehicle crash (MVC) characteristics associated with thoracoabdominal injury would advance the development of automatic crash notification systems (ACNS) by improving triage and response times. Our objective was to determine the relationships between MVC characteristics and thoracoabdominal trauma to develop a torso injury probability model. Drivers involved in crashes from 1993 to 2001 within the National Automotive Sampling System were reviewed. Relationships between torso injury and MVC characteristics were assessed using multivariate logistic regression. Receiver operating characteristic curves were used to compare the model to current ACNS models. There were a total of 56,466 drivers. Age, ejection, braking, avoidance, velocity, restraints, passenger-side impact, rollover, and vehicle weight and type were associated with injury (p < 0.05). The area under the receiver operating characteristic curve (83.9) was significantly greater than current ACNS models. We have developed a thoracoabdominal injury probability model that may improve patient triage when used with ACNS.
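The model comparison above rests on the standard area-under-the-ROC-curve statistic. A minimal sketch of computing AUC via its Mann-Whitney interpretation (an illustration of the general technique, not the paper's code; the function name and scores are hypothetical):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve, computed as the Mann-Whitney
    probability that a randomly chosen positive case outscores a
    randomly chosen negative case (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted injury probabilities from a fitted model:
injured = [0.91, 0.78, 0.65]    # scores for drivers with torso injury
uninjured = [0.22, 0.35, 0.70]  # scores for uninjured drivers
area = auc(injured, uninjured)
```

An AUC of 1.0 means perfect separation; 0.5 means the model is no better than chance, which is the baseline any ACNS triage model must beat.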
Gas revenue increasingly significant
International Nuclear Information System (INIS)
Megill, R.E.
1991-01-01
This paper briefly describes the wellhead prices of natural gas compared to crude oil over the past 70 years. Although natural gas prices have never reached price parity with crude oil, the relative value of a gas BTU has been increasing. This is one of the reasons that the total amount of money coming from natural gas wells is becoming more significant. From 1920 to 1955 the revenue at the wellhead for natural gas was only about 10% of the money received by producers. Most of the money needed for exploration, development, and production came from crude oil. At present, however, over 40% of the money from the upstream portion of the petroleum industry is from natural gas. As a result, in a few short years natural gas may account for 50% of the revenues generated from wellhead production facilities
Boslaugh, Sarah
2008-01-01
Need to learn statistics as part of your job, or want some help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference that's perfect for anyone with no previous background in the subject. This book gives you a solid understanding of statistics without being too simple, yet without the numbing complexity of most college texts. You get a firm grasp of the fundamentals and a hands-on understanding of how to apply them before moving on to the more advanced material that follows. Each chapter presents you with easy-to-follow descriptions illustrat
Understanding Computational Bayesian Statistics
Bolstad, William M
2011-01-01
A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
Annual Statistical Supplement, 2002
Social Security Administration — The Annual Statistical Supplement, 2002 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2010
Social Security Administration — The Annual Statistical Supplement, 2010 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2007
Social Security Administration — The Annual Statistical Supplement, 2007 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2001
Social Security Administration — The Annual Statistical Supplement, 2001 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2016
Social Security Administration — The Annual Statistical Supplement, 2016 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2011
Social Security Administration — The Annual Statistical Supplement, 2011 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2005
Social Security Administration — The Annual Statistical Supplement, 2005 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2015
Social Security Administration — The Annual Statistical Supplement, 2015 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2003
Social Security Administration — The Annual Statistical Supplement, 2003 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2017
Social Security Administration — The Annual Statistical Supplement, 2017 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2008
Social Security Administration — The Annual Statistical Supplement, 2008 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2014
Social Security Administration — The Annual Statistical Supplement, 2014 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2004
Social Security Administration — The Annual Statistical Supplement, 2004 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2000
Social Security Administration — The Annual Statistical Supplement, 2000 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2009
Social Security Administration — The Annual Statistical Supplement, 2009 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Annual Statistical Supplement, 2006
Social Security Administration — The Annual Statistical Supplement, 2006 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...
Bulmer, M G
1979-01-01
There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo
Kanji, Gopal K
2006-01-01
This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.
Statistical distribution sampling
Johnson, E. S.
1975-01-01
Determining the distribution of statistics by sampling was investigated. Characteristic functions, the quadratic regression problem, and the differential equations for the characteristic functions are analyzed.
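The idea of determining a statistic's distribution by sampling can be sketched in a few lines (a generic Monte Carlo illustration, not the report's actual procedure; the standard-normal population and the choice of statistic are assumptions):

```python
import random
import statistics

def sampling_distribution(stat, sample_size, n_draws, seed=0):
    """Approximate the distribution of `stat` by drawing `n_draws`
    samples of size `sample_size` from a standard normal population
    and evaluating the statistic on each sample."""
    rng = random.Random(seed)
    return [stat([rng.gauss(0.0, 1.0) for _ in range(sample_size)])
            for _ in range(n_draws)]

# Monte Carlo distribution of the sample mean for n = 25:
means = sampling_distribution(statistics.mean, sample_size=25, n_draws=2000)
# Theory predicts the sample mean has standard deviation 1/sqrt(25) = 0.2,
# which the empirical spread of `means` should approximate.
```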
Testing statistical hypotheses of equivalence
Wellek, Stefan
2010-01-01
Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the
International Nuclear Information System (INIS)
Supe, S.J.; Nagalaxmi, K.V.; Meenakshi, L.
1983-01-01
In the practice of radiotherapy, various concepts like NSD, CRE, TDF, and BIR are used to evaluate the biological effectiveness of treatment schedules on normal tissues. This has been accepted because the tolerance of the normal tissue is the limiting factor in the treatment of cancers. At present, when various schedules are tried, attention is therefore paid to the biological damage of the normal tissues only, and it is expected that the damage to the cancerous tissues would be extensive enough to control the cancer. An attempt is made in the present work to evaluate the concept of tumor significant dose (TSD), which represents the damage to the cancerous tissue. Strandquist, in the analysis of a large number of cases of squamous cell carcinoma, found that for the 5 fraction/week treatment, the total dose required to bring about the same damage to the cancerous tissue is proportional to T^(-0.22), where T is the overall time over which the dose is delivered. Using this finding, the TSD was defined as D·N^(-p)·T^(-q), where D is the total dose, N the number of fractions, T the overall time, and p and q the exponents to be suitably chosen. The values of p and q are adjusted such that p+q ≤ 0.24, with p varying from 0.0 to 0.24 and q from 0.0 to 0.22. Cases of cancer of cervix uteri treated between 1978 and 1980 in the V. N. Cancer Centre, Kuppuswamy Naidu Memorial Hospital, Coimbatore, India were analyzed on the basis of these formulations. These data, coupled with clinical experience, were used for the choice of a formula for the TSD. Further, the dose schedules used in the British Institute of Radiology fractionation studies were also used to propose that the tumor significant dose is represented by D·N^(-0.18)·T^(-0.06)
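As a numerical illustration of the TSD formula D·N^(-p)·T^(-q) with the proposed exponents p = 0.18 and q = 0.06 (the dose schedule below is hypothetical and not taken from the study):

```python
def tumor_significant_dose(D, N, T, p=0.18, q=0.06):
    """Tumor significant dose: TSD = D * N**(-p) * T**(-q).

    D: total dose (e.g. cGy), N: number of fractions, T: overall
    treatment time in days. Defaults are the exponents proposed in
    the abstract, which satisfy the constraint p + q <= 0.24.
    """
    assert p + q <= 0.24, "exponents must satisfy p + q <= 0.24"
    return D * N ** (-p) * T ** (-q)

# Hypothetical schedule: 6000 cGy in 30 fractions over 42 days.
tsd = tumor_significant_dose(D=6000, N=30, T=42)
```

Because both exponents are negative powers, stretching a schedule over more fractions or a longer overall time lowers the TSD for the same total dose, mirroring the time-dependence Strandquist observed.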
Uranium chemistry: significant advances
International Nuclear Information System (INIS)
Mazzanti, M.
2011-01-01
The author reviews recent progress in uranium chemistry achieved in CEA laboratories. Like its neighbors in the Mendeleev chart, uranium undergoes hydrolysis, oxidation and disproportionation reactions which make the chemistry of these species in water highly complex. The study of the chemistry of uranium in an anhydrous medium has made it possible to correlate the structural and electronic differences observed in the interaction of uranium(III) and the lanthanides(III) with nitrogen or sulfur molecules and the effectiveness of these molecules in An(III)/Ln(III) separation via liquid-liquid extraction. Recent work on the redox reactivity of trivalent uranium U(III) in an organic medium with molecules such as water or the azide ion (N3-) in stoichiometric quantities led to extremely interesting uranium aggregates, in particular those involved in actinide migration in the environment or in aggregation problems in the fuel processing cycle. Another significant advance was the discovery of a compound containing the uranyl ion with a degree of oxidation (V), UO2+, obtained by oxidation of uranium(III). Recently chemists have succeeded in blocking the disproportionation reaction of uranyl(V) and in stabilizing polymetallic complexes of uranyl(V), opening the way to a systematic study of the reactivity and the electronic and magnetic properties of uranyl(V) compounds. (A.C.)
Directory of Open Access Journals (Sweden)
Ph D Student Roman Mihaela
2011-05-01
The concept of "public accountability" is a challenge for political science: a new concept in this area, in full debate and development both in theory and practice. This paper is a theoretical approach displaying some definitions, relevant meanings, and the significance of the concept in political science. The importance of this concept is that although it was originally used as a tool to improve the effectiveness and efficiency of public governance, it has gradually become a purpose in itself. "Accountability" has become an image of good governance, first in the United States of America and then in the European Union. Nevertheless, the concept is vaguely defined and provides ambiguous images of good governance. This paper begins with the presentation of some general meanings of the concept as they emerge from specialized dictionaries and encyclopaedias, and continues with the meanings developed in political science. The concept of "public accountability" is rooted in the economics and management literature, becoming increasingly relevant in today's political science, both in theory and discourse as well as in practice, in formulating and evaluating public policies. A first conclusion that emerges from the analysis of the evolution of this term is that it requires a conceptual clarification in political science. A clear definition will then enable an appropriate model of improving the system of public accountability in formulating and assessing public policies, in order to implement a system of assessment and monitoring thereof.
Statistical Properties of Online Auctions
Namazi, Alireza; Schadschneider, Andreas
We characterize the statistical properties of a large number of online auctions run on eBay. Both stationary and dynamic properties, like distributions of prices, number of bids etc., as well as relations between these quantities are studied. The analysis of the data reveals surprisingly simple distributions and relations, typically of power-law form. Based on these findings we introduce a simple method to identify suspicious auctions that could be influenced by a form of fraud known as shill bidding. Furthermore the influence of bidding strategies is discussed. The results indicate that the observed behavior is related to a mixture of agents using a variety of strategies.
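Power-law distributions like those reported for the auction data are commonly fit with the continuous maximum-likelihood (Hill-type) estimator of the exponent. A sketch on synthetic data (this is a standard method, not necessarily the authors' procedure; the sample size and exponent are illustrative):

```python
import math
import random

def powerlaw_exponent_mle(xs, xmin):
    """Continuous power-law MLE (Hill-type estimator):
    alpha_hat = 1 + n / sum(ln(x / xmin)) over all x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check: draw from p(x) ~ x**(-2.5), x >= 1, by inverse
# transform sampling: x = xmin * (1 - u)**(-1 / (alpha - 1)).
rng = random.Random(1)
xs = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(50_000)]
alpha_hat = powerlaw_exponent_mle(xs, xmin=1.0)  # should be near 2.5
```

Recovering a known exponent from synthetic draws like this is a useful sanity check before trusting a fitted exponent on real auction data.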
Significant Radionuclides Determination
Energy Technology Data Exchange (ETDEWEB)
Jo A. Ziegler
2001-07-31
The purpose of this calculation is to identify radionuclides that are significant to offsite doses from potential preclosure events for spent nuclear fuel (SNF) and high-level radioactive waste expected to be received at the potential Monitored Geologic Repository (MGR). In this calculation, high-level radioactive waste is included in references to DOE SNF. A previous document, "DOE SNF DBE Offsite Dose Calculations" (CRWMS M&O 1999b), calculated the source terms and offsite doses for Department of Energy (DOE) and Naval SNF for use in design basis event analyses. This calculation reproduces only DOE SNF work (i.e., no naval SNF work is included in this calculation) created in "DOE SNF DBE Offsite Dose Calculations" and expands the calculation to include DOE SNF expected to produce a high dose consequence (even though the quantity of the SNF is expected to be small) and SNF owned by commercial nuclear power producers. The calculation does not address any specific off-normal/DBE event scenarios for receiving, handling, or packaging of SNF. The results of this calculation are developed for comparative analysis to establish the important radionuclides and do not represent the final source terms to be used for license application. This calculation will be used as input to preclosure safety analyses and is performed in accordance with procedure AP-3.12Q, "Calculations", and is subject to the requirements of DOE/RW-0333P, "Quality Assurance Requirements and Description" (DOE 2000) as determined by the activity evaluation contained in "Technical Work Plan for: Preclosure Safety Analysis, TWP-MGR-SE-000010" (CRWMS M&O 2000b) in accordance with procedure AP-2.21Q, "Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities".
Indian Academy of Sciences (India)
IAS Admin
Fermi–Dirac Statistics: Derivation and Consequences. S Chaturvedi and Shyamal Biswas. Keywords: Pauli exclusion principle, Fermi–Dirac statistics, identical and indistinguishable particles, Fermi gas. Subhash Chaturvedi is at University of Hyderabad. His current research interests include phase space descriptions.
Generalized interpolative quantum statistics
International Nuclear Information System (INIS)
Ramanathan, R.
1992-01-01
A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently
Handbook of Spatial Statistics
Gelfand, Alan E
2010-01-01
Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.
International Nuclear Information System (INIS)
2003-01-01
The energy statistical table is a selection of statistical data for energy sources and countries from 1997 to 2002. It covers petroleum, natural gas, coal, and electric power: production, the external market, consumption per sector, the 2002 energy accounts, and graphs of long-term forecasts. (A.L.B.)
Bayesian statistical inference
Directory of Open Access Journals (Sweden)
Bruno De Finetti
2017-04-01
This work was translated into English and published in the volume: Bruno De Finetti, Induction and Probability, Biblioteca di Statistica, eds. P. Monari, D. Cocchi, Clueb, Bologna, 1993. "Bayesian Statistical Inference" is one of the last fundamental philosophical papers in which we can find De Finetti's essential approach to statistical inference.
Practical statistics for educators
Ravid, Ruth
2014-01-01
Practical Statistics for Educators, Fifth Edition, is a clear and easy-to-follow text written specifically for education students in introductory statistics courses and in action research courses. It is also a valuable resource and guidebook for educational practitioners who wish to study their own settings.
DEFF Research Database (Denmark)
Lauritzen, Steffen Lilholt
This book studies the brilliant Danish 19th Century astronomer, T.N. Thiele who made important contributions to statistics, actuarial science, astronomy and mathematics. The most important of these contributions in statistics are translated into English for the first time, and the text includes...
Huizingh, Eelko K. R. E.
2007-01-01
Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…
This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. Provides access to incidence, mortality, and survival. Select the type of statistic, variables, format, and then extract the statistics in a delimited format for further analyses.
Energy statistics yearbook 2002
International Nuclear Information System (INIS)
2005-01-01
The Energy Statistics Yearbook 2002 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-sixth in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities
Howard Stauffer; Nadav Nur
2005-01-01
The papers included in the Advances in Statistics section of the Partners in Flight (PIF) 2002 Proceedings represent a small sample of statistical topics of current importance to Partners In Flight research scientists: hierarchical modeling, estimation of detection probabilities, and Bayesian applications. Sauer et al. (this volume) examines a hierarchical model...
Energy statistics yearbook 2001
International Nuclear Information System (INIS)
2004-01-01
The Energy Statistics Yearbook 2001 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-fifth in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities
Energy statistics yearbook 2000
International Nuclear Information System (INIS)
2002-01-01
The Energy Statistics Yearbook 2000 is a comprehensive collection of international energy statistics prepared by the United Nations Statistics Division. It is the forty-third in a series of annual compilations which commenced under the title World Energy Supplies in Selected Years, 1929-1950. It updates the statistical series shown in the previous issue. Supplementary series of monthly and quarterly data on production of energy may be found in the Monthly Bulletin of Statistics. The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from the annual energy questionnaire distributed by the United Nations Statistics Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistics Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities
Temperature dependent anomalous statistics
International Nuclear Information System (INIS)
Das, A.; Panda, S.
1991-07-01
We show that the anomalous statistics which arises in 2+1-dimensional Chern-Simons gauge theories can become temperature dependent in the most natural way. We analyze and show that a statistics-changing phase transition can happen in these theories only as T → ∞. (author). 14 refs
Introduction to Bayesian statistics
Bolstad, William M
2017-01-01
There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for a Multivariate Normal mean vector; Bayesian inference for the Multiple Linear Regression Model; and Computati...
Understanding advanced statistical methods
Westfall, Peter
2013-01-01
Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...
Ector, Hugo
2010-12-01
I still remember my first book on statistics: "Elementary Statistics with Applications in Medicine and the Biological Sciences" by Frederick E. Croxton. For me, it was the start of pursuing an understanding of statistics in daily life and in medical practice. It was the first volume in a long row of books. In his introduction, Croxton claims that "nearly everyone involved in any aspect of medicine needs to have some knowledge of statistics". The reality is that for many clinicians, statistics are limited to a "P statistical methods. They have never had the opportunity to learn concise and clear descriptions of the key features. I have experienced how some authors can describe difficult methods in a well understandable language. Others fail completely. As a teacher, I tell my students that life is impossible without a basic knowledge of statistics. This feeling has resulted in an annual seminar of 90 minutes. This tutorial is the summary of that seminar: a summary and a transcription of the best pages I have found.
International Nuclear Information System (INIS)
Tonchev, N.; Shumovskij, A.S.
1986-01-01
The history of investigations conducted at the JINR in the field of statistical mechanics is presented, beginning with N.N. Bogolyubov's fundamental works on the microscopic theory of superconductivity. Ideas introduced in these works, and methods developed in them, have largely determined the development of statistical mechanics at the JINR, and the Hartree-Fock-Bogolyubov variational principle has become an important method of modern nuclear theory. A brief review of the main achievements connected with the development of statistical mechanics methods and their application in different fields of physical science is given
Wallis, W Allen
2014-01-01
Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. "W. Allen Wallis and Harry V. Roberts have made statistics fascinating." - The New York Times "The authors have set out, with considerable success, to write a text which would be of interest and value to the student who,
D'Alessio, Michael
2012-01-01
AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da
Liao, Tim Futing
2011-01-01
An incomparably useful examination of statistical methods for comparisonThe nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde
Statistical Pattern Recognition
Webb, Andrew R
2011-01-01
Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions. It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,
Mineral statistics yearbook 1994
International Nuclear Information System (INIS)
1994-01-01
A summary of mineral production in Saskatchewan was compiled and presented as a reference manual. Statistical information on fuel minerals such as crude oil, natural gas, liquefied petroleum gas and coal, and on industrial and metallic minerals, such as potash, sodium sulphate, salt and uranium, was provided in a wide variety of tables. Production statistics, disposition and value of sales of industrial and metallic minerals were also made available. Statistical data on drilling of oil and gas reservoirs and crown land disposition were also included. figs., tabs
Evolutionary Statistical Procedures
Baragona, Roberto; Poli, Irene
2011-01-01
This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a
Methods of statistical physics
Akhiezer, Aleksandr I
1981-01-01
Methods of Statistical Physics is an exposition of the tools of statistical mechanics, which evaluates the kinetic equations of classical and quantized systems. The book also analyzes the equations of macroscopic physics, such as the equations of hydrodynamics for normal and superfluid liquids and macroscopic electrodynamics. The text gives particular attention to the study of quantum systems. This study begins with a discussion of problems of quantum statistics with a detailed description of the basics of quantum mechanics along with the theory of measurement. An analysis of the asymptotic be
Cancer Data and Statistics Tools
... Cancer Data and Statistics Tools ... Cancer Statistics Tools: United States Cancer Statistics: Data Visualizations ...
Breast cancer statistics, 2011.
DeSantis, Carol; Siegel, Rebecca; Bandi, Priti; Jemal, Ahmedin
2011-01-01
In this article, the American Cancer Society provides an overview of female breast cancer statistics in the United States, including trends in incidence, mortality, survival, and screening. Approximately 230,480 new cases of invasive breast cancer and 39,520 breast cancer deaths are expected to occur among US women in 2011. Breast cancer incidence rates were stable among all racial/ethnic groups from 2004 to 2008. Breast cancer death rates have been declining since the early 1990s for all women except American Indians/Alaska Natives, among whom rates have remained stable. Disparities in breast cancer death rates are evident by state, socioeconomic status, and race/ethnicity. While significant declines in mortality rates were observed for 36 states and the District of Columbia over the past 10 years, rates for 14 states remained level. Analyses by county-level poverty rates showed that the decrease in mortality rates began later and was slower among women residing in poor areas. As a result, the highest breast cancer death rates shifted from the affluent areas to the poor areas in the early 1990s. Screening rates continue to be lower in poor women compared with non-poor women, despite much progress in increasing mammography utilization. In 2008, 51.4% of poor women had undergone a screening mammogram in the past 2 years compared with 72.8% of non-poor women. Encouraging patients aged 40 years and older to have annual mammography and a clinical breast examination is the single most important step that clinicians can take to reduce suffering and death from breast cancer. Clinicians should also ensure that patients at high risk of breast cancer are identified and offered appropriate screening and follow-up. Continued progress in the control of breast cancer will require sustained and increased efforts to provide high-quality screening, diagnosis, and treatment to all segments of the population. Copyright © 2011 American Cancer Society, Inc.
Elements of statistical thermodynamics
Nash, Leonard K
2006-01-01
Encompassing essentially all aspects of statistical mechanics that appear in undergraduate texts, this concise, elementary treatment shows how an atomic-molecular perspective yields new insights into macroscopic thermodynamics. 1974 edition.
Department of Veterans Affairs — National-level, VISN-level, and/or VAMC-level statistics on the numbers and percentages of users of VHA care from the Northeast Program Evaluation Center (NEPEC)....
International Nuclear Information System (INIS)
Hilaire, S.
2001-01-01
A review of the statistical model of nuclear reactions is presented. The main relations are described, together with the ingredients necessary to perform practical calculations. In addition, a substantial overview of the width fluctuation correction factor is given. (author)
... Plague in the United States: Plague was first introduced ... them at higher risk. Reported Cases of Human Plague - United States, 1970-2016. Since the mid-20th ...
Statistical Measures of Marksmanship
National Research Council Canada - National Science Library
Johnson, Richard
2001-01-01
.... This report describes objective statistical procedures to measure both rifle marksmanship accuracy, the proximity of an array of shots to the center of mass of a target, and marksmanship precision...
Titanic: A Statistical Exploration.
Takis, Sandra L.
1999-01-01
Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
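The kind of classroom exploration this abstract describes can be sketched with a chi-square test of independence on survival-by-class counts. The figures below are the commonly cited Titanic passenger-class totals; treat them as illustrative and verify against your own data source:

```python
def chi_square_statistic(table):
    """Pearson chi-square statistic for a two-way contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Survived / died counts by passenger class (commonly cited Titanic figures).
table = [
    [203, 122],  # 1st class: survived, died
    [118, 167],  # 2nd class
    [178, 528],  # 3rd class
]
stat = chi_square_statistic(table)
# df = (3-1)*(2-1) = 2; the 5% critical value is 5.99
print(f"chi-square = {stat:.1f} (reject independence if > 5.99)")
```

The statistic vastly exceeds the critical value, so students can conclude that survival and passenger class were not independent.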
... Sickle cell ... 1999 through 2002. This drop coincided with the introduction in 2000 of a vaccine that protects against ...
U.S. Department of Health & Human Services — The United States Cancer Statistics (USCS) online databases in WONDER provide cancer incidence and mortality data for the United States for the years since 1999, by...
Probability and Statistical Inference
Prosper, Harrison B.
2006-01-01
These lectures introduce key concepts in probability and statistical inference at a level suitable for graduate students in particle physics. Our goal is to paint as vivid a picture as possible of the concepts covered.
On quantum statistical inference
Barndorff-Nielsen, O.E.; Gill, R.D.; Jupp, P.E.
2003-01-01
Interest in problems of statistical inference connected to measurements of quantum systems has recently increased substantially, in step with dramatic new developments in experimental techniques for studying small quantum systems. Furthermore, developments in the theory of quantum measurements have
CMS Statistics Reference Booklet
U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...
Statistical mechanics of superconductivity
Kita, Takafumi
2015-01-01
This book provides a theoretical, step-by-step comprehensive explanation of superconductivity for undergraduate and graduate students who have completed elementary courses on thermodynamics and quantum mechanics. To this end, it adopts the unique approach of starting with the statistical mechanics of quantum ideal gases and successively adding and clarifying elements and techniques indispensable for understanding it. They include the spin-statistics theorem, second quantization, density matrices, the Bloch–De Dominicis theorem, the variational principle in statistical mechanics, attractive interaction, and bound states. Ample examples of their usage are also provided in terms of topics from advanced statistical mechanics such as two-particle correlations of quantum ideal gases, derivation of the Hartree–Fock equations, and Landau’s Fermi-liquid theory, among others. With these preliminaries, the fundamental mean-field equations of superconductivity are derived with maximum mathematical clarity based on ...
Statistical electromagnetics: Complex cavities
Naus, H.W.L.
2008-01-01
A selection of the literature on the statistical description of electromagnetic fields and complex cavities is concisely reviewed. Some essential concepts, for example, the application of the central limit theorem and the maximum entropy principle, are scrutinized. Implicit assumptions, biased
Davison, Anthony C.; Huser, Raphaël
2015-01-01
Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event
Saffran, Jenny R.; Kirkham, Natasha Z.
2017-01-01
Perception involves making sense of a dynamic, multimodal environment. In the absence of mechanisms capable of exploiting the statistical patterns in the natural world, infants would face an insurmountable computational problem. Infant statistical learning mechanisms facilitate the detection of structure. These abilities allow the infant to compute across elements in their environmental input, extracting patterns for further processing and subsequent learning. In this selective review, we summarize findings that show that statistical learning is both a broad and flexible mechanism (supporting learning from different modalities across many different content areas) and input specific (shifting computations depending on the type of input and goal of learning). We suggest that statistical learning not only provides a framework for studying language development and object knowledge in constrained laboratory settings, but also allows researchers to tackle real-world problems, such as multilingualism, the role of ever-changing learning environments, and differential developmental trajectories. PMID:28793812
CSIR Research Space (South Africa)
Shepperson, L
1997-12-01
Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...
Genton, Marc G.; Castruccio, Stefano; Crippa, Paola; Dutta, Subhajit; Huser, Raphaël; Sun, Ying; Vettori, Sabrina
2015-01-01
This paper explores the use of visualization through animations, coined visuanimation, in the field of statistics. In particular, it illustrates the embedding of animations in the paper itself and the storage of larger movies in the online
Playing at Statistical Mechanics
Clark, Paul M.; And Others
1974-01-01
Discussed are the applications of counting techniques of a sorting game to distributions and concepts in statistical mechanics. Included are the following distributions: Fermi-Dirac, Bose-Einstein, and most probable. (RH)
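The three distributions named in the abstract differ only in a ±1 term in the denominator of the mean occupation number; a minimal sketch (units chosen so that k_B·T = 1, values illustrative):

```python
import math

def occupation(energy, mu, kind):
    """Mean occupation number at k_B*T = 1.

    kind: 'FD' (Fermi-Dirac), 'BE' (Bose-Einstein),
          'MB' (Maxwell-Boltzmann, the classical 'most probable' limit).
    """
    x = energy - mu
    if kind == "FD":
        return 1.0 / (math.exp(x) + 1.0)
    if kind == "BE":
        return 1.0 / (math.exp(x) - 1.0)  # requires energy > mu
    return math.exp(-x)  # MB

for kind in ("FD", "MB", "BE"):
    print(kind, occupation(2.0, 0.0, kind))
```

For energies above the chemical potential the three satisfy n_BE > n_MB > n_FD, and all converge in the dilute (large energy - mu) limit, which is the point such sorting-game exercises usually make.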
Illinois travel statistics, 2008
2009-01-01
The 2008 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...
Illinois travel statistics, 2009
2010-01-01
The 2009 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...
Cholesterol Facts and Statistics
... High Cholesterol Statistics and Maps; High Cholesterol Facts; High Cholesterol Maps ... Deo R, et al. Heart disease and stroke statistics—2017 update: a report from the American Heart ...
Illinois travel statistics, 2010
2011-01-01
The 2010 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...
U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...
Information theory and statistics
Kullback, Solomon
1968-01-01
Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.
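The logarithmic measure at the center of Kullback's treatment is the divergence now usually written D(P‖Q); a small sketch of the discrete case (distributions here are invented for illustration):

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(P || Q), in nats.

    Assumes q[i] > 0 wherever p[i] > 0 (otherwise the divergence is infinite).
    """
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))
```

D(P‖Q) is nonnegative, zero exactly when P = Q, and asymmetric in its arguments, which is why Kullback treats it as a directed measure of discrimination information between hypotheses.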
Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...
Medicaid Drug Claims Statistics
U.S. Department of Health & Human Services — The Medicaid Drug Claims Statistics CD is a useful tool that conveniently breaks up Medicaid claim counts and separates them by quarter and includes an annual count.
Scheck, Florian
2016-01-01
Scheck’s textbook starts with a concise introduction to classical thermodynamics, including geometrical aspects. Then a short introduction to probabilities and statistics lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics. Due to its general approach, thermodynamics has a bridging function between several areas such as the theory of condensed matter, elementary particle physics, astrophysics and cosmology. Classical thermodynamics describes predominantly averaged properties of matter, ranging from few-particle systems and states of matter to stellar objects. Statistical thermodynamics covers the same fields, but explores them in greater depth and unifies classical statistical mechanics with the quantum theory of many-particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...
Record Statistics and Dynamics
DEFF Research Database (Denmark)
Sibani, Paolo; Jensen, Henrik J.
2009-01-01
with independent random increments. The term record dynamics covers the rather new idea that records may, in special situations, have measurable dynamical consequences. The approach applies to the aging dynamics of glasses and other systems with multiple metastable states. The basic idea is that record-sized fluctuations of, e.g., the energy are able to push the system past some sort of 'edge of stability', inducing irreversible configurational changes, whose statistics then closely follows the statistics of record fluctuations.
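For an i.i.d. sequence the number of records up to time n has mean H_n = 1 + 1/2 + ... + 1/n, a fact a short simulation can reproduce (a sketch for orientation, not taken from the paper):

```python
import random

def count_records(seq):
    """Number of running maxima ('records') in a sequence."""
    best = float("-inf")
    records = 0
    for x in seq:
        if x > best:
            best = x
            records += 1
    return records

random.seed(0)
n, trials = 1000, 2000
mean_records = sum(
    count_records([random.random() for _ in range(n)]) for _ in range(trials)
) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))
print(f"simulated {mean_records:.2f} records vs H_n = {harmonic:.2f}")
```

The logarithmic growth of H_n is the reason records become exponentially rarer with time, which underlies the slow, intermittent aging dynamics the abstract describes.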
Statistical mechanics rigorous results
Ruelle, David
1999-01-01
This classic book marks the beginning of an era of vigorous mathematical progress in equilibrium statistical mechanics. Its treatment of the infinite system limit has not been superseded, and the discussion of thermodynamic functions and states remains basic for more recent work. The conceptual foundation provided by the Rigorous Results remains invaluable for the study of the spectacular developments of statistical mechanics in the second half of the 20th century.
Statistical mechanics of anyons
International Nuclear Information System (INIS)
Arovas, D.P.
1985-01-01
We study the statistical mechanics of a two-dimensional gas of free anyons - particles which interpolate between Bose-Einstein and Fermi-Dirac character. Thermodynamic quantities are discussed in the low-density regime. In particular, the second virial coefficient is evaluated by two different methods and is found to exhibit a simple, periodic, but nonanalytic behavior as a function of the statistics determining parameter. (orig.)
Mulholland, Henry
1968-01-01
Fundamentals of Statistics covers topics on the introduction, fundamentals, and science of statistics. The book discusses the collection, organization and representation of numerical data; elementary probability; the binomial and Poisson distributions; and the measures of central tendency. The text describes measures of dispersion for measuring the spread of a distribution; continuous distributions for measuring on a continuous scale; the properties and use of the normal distribution; and tests involving the normal or Student's t distributions. The use of control charts for sample means; the ranges
Business statistics I essentials
Clark, Louise
2014-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Business Statistics I includes descriptive statistics, introduction to probability, probability distributions, sampling and sampling distributions, interval estimation, and hypothesis t
Johnson, Norman
This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field, providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...
Practical Statistics for Particle Physicists
Lista, Luca
2017-01-01
These three lectures provide an introduction to the main concepts of statistical data analysis useful for precision measurements and searches for new signals in High Energy Physics. The frequentist and Bayesian approaches to probability theory will be introduced and, for both approaches, inference methods will be presented. Hypothesis tests will be discussed, then significance and upper limit evaluation will be presented with an overview of the modern and most advanced techniques adopted for data analysis at the Large Hadron Collider.
UN Data- Environmental Statistics: Waste
World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...
UN Data: Environment Statistics: Waste
World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...
Encounter Probability of Significant Wave Height
DEFF Research Database (Denmark)
Liu, Z.; Burcharth, H. F.
The determination of the design wave height (often given as the significant wave height) is usually based on statistical analysis of long-term extreme wave height measurement or hindcast. The result of such extreme wave height analysis is often given as the design wave height corresponding to a c...
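The encounter probability in the title is the standard return-period relation: an event with return period T years has probability P = 1 - (1 - 1/T)^L of occurring at least once during a structure lifetime of L years. A quick sketch:

```python
def encounter_probability(return_period_years, lifetime_years):
    """P(an event with the given return period occurs at least once
    during the lifetime), assuming independent years."""
    annual = 1.0 / return_period_years
    return 1.0 - (1.0 - annual) ** lifetime_years

# A 100-year design wave over a 50-year structure lifetime
print(f"{encounter_probability(100, 50):.3f}")
```

The result, about 0.395, is the familiar caution of extreme wave analysis: a "100-year" wave is roughly 40% likely to be encountered during a 50-year service life.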
A study of statistics anxiety levels of graduate dental hygiene students.
Welch, Paul S; Jacks, Mary E; Smiley, Lynn A; Walden, Carolyn E; Clark, William D; Nguyen, Carol A
2015-02-01
In light of increased emphasis on evidence-based practice in the profession of dental hygiene, it is important that today's dental hygienists comprehend statistical measures to fully understand research articles, and thereby apply scientific evidence to practice. Therefore, the purpose of this study was to investigate statistics anxiety among graduate dental hygiene students in the U.S. A web-based, self-report, anonymous survey was emailed to directors of 17 MSDH programs in the U.S. with a request to distribute it to graduate students. The survey collected data on statistics anxiety, sociodemographic characteristics and evidence-based practice. Statistics anxiety was assessed using the Statistical Anxiety Rating Scale. The study significance level was α=0.05. Only 8 of the 17 invited programs participated in the study. Statistical Anxiety Rating Scale data revealed that graduate dental hygiene students experience low to moderate levels of statistics anxiety. Specifically, the level of anxiety on the Interpretation Anxiety factor indicated this population could struggle with making sense of scientific research. A decisive majority (92%) of students indicated statistics is essential for evidence-based practice and should be a required course for all dental hygienists. This study served to identify statistics anxiety in a previously unexplored population. The findings should be useful in both theory building and in practical applications. Furthermore, the results can be used to direct future research. Copyright © 2015 The American Dental Hygienists’ Association.
Conformity and statistical tolerancing
Leblond, Laurent; Pillet, Maurice
2018-02-01
Statistical tolerancing was first proposed by Shewhart (Economic Control of Quality of Manufactured Product, 1931, reprinted 1980 by ASQC). In spite of this long history, its use remains moderate. One probable reason for this low utilization is the difficulty for designers of anticipating the risks of this approach. The arithmetic tolerance (worst case) allows a simple interpretation: conformity is defined by the presence of the characteristic in an interval. Statistical tolerancing is more complex in its definition: an interval is not sufficient to define conformance. To justify the statistical tolerancing formula used by designers, a tolerance interval should be interpreted as the interval where most of the parts produced should probably be located. This tolerance is justified by considering a conformity criterion for the parts that guarantees low offsets on the latter characteristics. Unlike traditional arithmetic tolerancing, statistical tolerancing requires a sustained exchange of information between design and manufacture to be used safely. This paper proposes a formal definition of conformity, which we apply successively to quadratic and arithmetic tolerancing. We introduce a concept of concavity, which helps us to demonstrate the link between the tolerancing approach and conformity. We use this concept to demonstrate the various acceptable propositions of statistical tolerancing (in the decentring-dispersion space).
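The contrast between arithmetic (worst-case) and quadratic (statistical) tolerancing can be seen in a simple stack-up: worst-case adds component tolerances linearly, while the quadratic rule adds them in root-sum-square, assuming independent, centred contributions. A sketch with invented component values:

```python
import math

tolerances = [0.10, 0.05, 0.08, 0.12]  # hypothetical component tolerances (mm)

worst_case = sum(tolerances)                              # arithmetic stack
statistical = math.sqrt(sum(t * t for t in tolerances))   # quadratic stack

print(f"arithmetic (worst case): +/-{worst_case:.3f} mm")
print(f"statistical (quadratic): +/-{statistical:.3f} mm")
```

The quadratic stack is always tighter, which is the design benefit; but, as the paper stresses, it is only safe if production actually keeps decentring and dispersion within the assumed limits.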
Intuitive introductory statistics
Wolfe, Douglas A
2017-01-01
This textbook is designed to give an engaging introduction to statistics and the art of data analysis. The unique scope includes, but also goes beyond, classical methodology associated with the normal distribution. What if the normal model is not valid for a particular data set? This cutting-edge approach provides the alternatives. It is an introduction to the world and possibilities of statistics that uses exercises, computer analyses, and simulations throughout the core lessons. These elementary statistical methods are intuitive. Counting and ranking feature prominently in the text. Nonparametric methods, for instance, are often based on counts and ranks and are very easy to integrate into an introductory course. The ease of computation with advanced calculators and statistical software, both of which factor into this text, allows important techniques to be introduced earlier in the study of statistics. This book's novel scope also includes measuring symmetry with Walsh averages, finding a nonp...
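The Walsh averages mentioned at the end of the abstract are the pairwise means (x_i + x_j)/2 for i ≤ j; their median is the Hodges-Lehmann location estimate. A short sketch with invented data:

```python
from statistics import median

def walsh_averages(data):
    """All pairwise averages (x_i + x_j)/2 with i <= j."""
    return [
        (data[i] + data[j]) / 2
        for i in range(len(data))
        for j in range(i, len(data))
    ]

data = [1.1, 2.4, 2.6, 3.0, 9.9]  # one outlier
w = walsh_averages(data)
print("Hodges-Lehmann estimate:", median(w))
```

Compared with the sample mean (3.8 here), the median of the Walsh averages resists the outlier, which is exactly the kind of count-and-rank intuition the book builds on.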
International Nuclear Information System (INIS)
Holttinen, H.; Tammelin, B.; Hyvoenen, R.
1997-01-01
The recording, analyzing and publishing of statistics on wind energy production has been reorganized in cooperation between VTT Energy, the Finnish Meteorological Institute (FMI Energy) and the Finnish Wind Energy Association (STY), supported by the Ministry of Trade and Industry (KTM). VTT Energy has developed a database that contains both monthly data and information on the wind turbines, sites and operators involved. The monthly production figures together with component failure statistics are collected from the operators by VTT Energy, which produces the final wind energy statistics to be published in Tuulensilmä and reported to energy statistics in Finland and abroad (Statistics Finland, Eurostat, IEA). To be able to compare the annual and monthly wind energy potential with the average wind energy climate, a production index is adopted. The index gives the expected wind energy production at various areas in Finland, calculated using real wind speed observations, air density and a power curve for a typical 500 kW wind turbine. FMI Energy has produced the average figures for four weather stations using data from 1985-1996, and produces the monthly figures. (orig.)
Rapid Statistical Learning Supporting Word Extraction From Continuous Speech.
Batterink, Laura J
2017-07-01
The identification of words in continuous speech, known as speech segmentation, is a critical early step in language acquisition. This process is partially supported by statistical learning, the ability to extract patterns from the environment. Given that speech segmentation represents a potential bottleneck for language acquisition, patterns in speech may be extracted very rapidly, without extensive exposure. This hypothesis was examined by exposing participants to continuous speech streams composed of novel repeating nonsense words. Learning was measured on-line using a reaction time task. After merely one exposure to an embedded novel word, learners demonstrated significant learning effects, as revealed by faster responses to predictable than to unpredictable syllables. These results demonstrate that learners gained sensitivity to the statistical structure of unfamiliar speech on a very rapid timescale. This ability may play an essential role in early stages of language acquisition, allowing learners to rapidly identify word candidates and "break in" to an unfamiliar language.
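The statistical structure learners exploit in such streams is usually summarized by syllable-to-syllable transitional probabilities, TP(y|x) = freq(xy)/freq(x), which are high inside words and low at word boundaries. A toy sketch (the nonsense words here are invented, not taken from the study):

```python
import random
from collections import Counter

words = [("go", "la", "bu"), ("pa", "bi", "ku"), ("ti", "du", "ro")]

# Build a continuous stream of syllables with no pauses between words.
random.seed(1)
stream = [s for _ in range(300) for s in random.choice(words)]

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(x, y):
    """Transitional probability P(next = y | current = x)."""
    return pair_counts[(x, y)] / syll_counts[x]

print("within word :", tp("go", "la"))   # 'go' is always followed by 'la'
print("across words:", tp("bu", "pa"))   # 'bu' ends a word; the next word varies
```

The within-word transition is deterministic while the boundary transition hovers near 1/3, and it is this dip that lets learners posit a word boundary after only brief exposure.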
Probability and Bayesian statistics
1987-01-01
This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to the memory of Bruno de Finetti and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum, ranging from foundations of probability, across psychological aspects of formulating subjective probability statements and abstract measure-theoretical considerations, to contributions to theoretical statistics an...
Multivariate Statistical Process Control
DEFF Research Database (Denmark)
Kulahci, Murat
2013-01-01
As sensor and computer technology continues to improve, it becomes a normal occurrence that we are confronted with high dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control (SPC) and monitoring, for which the aim...... is to identify an "out-of-control" state of a process using control charts in order to reduce the excessive variation caused by so-called assignable causes. In practice, the most common method of monitoring multivariate data is through a statistic akin to Hotelling’s T2. For high dimensional data with an excessive...... amount of cross correlation, practitioners are often recommended to use latent structure methods such as Principal Component Analysis to summarize the data in only a few linear combinations of the original variables that capture most of the variation in the data. Applications of these control charts...
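The per-observation Hotelling-type statistic referred to above can be sketched in a few lines of NumPy (phase-I style, using the sample mean and covariance of synthetic in-control data; the control-limit step is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))  # stand-in for multivariate process data

mean = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))  # (n-1)-divisor covariance

# T^2_i = (x_i - xbar)' S^{-1} (x_i - xbar) for each observation
centered = X - mean
t2 = np.einsum("ij,jk,ik->i", centered, S_inv, centered)
print(t2[:5])
```

A useful sanity check on any implementation: with the (n-1)-divisor covariance, the T² values average exactly to p(n-1)/n; in SPC practice, points are then flagged against a beta- or F-distribution-based control limit.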
DEFF Research Database (Denmark)
Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard
Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical...... and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives...
1992 Energy statistics Yearbook
International Nuclear Information System (INIS)
1994-01-01
The principal objective of the Yearbook is to provide a global framework of comparable data on long-term trends in the supply of mainly commercial primary and secondary forms of energy. Data for each type of fuel and aggregate data for the total mix of commercial fuels are shown for individual countries and areas and are summarized into regional and world totals. The data are compiled primarily from annual questionnaires distributed by the United Nations Statistical Division and supplemented by official national statistical publications. Where official data are not available or are inconsistent, estimates are made by the Statistical Division based on governmental, professional or commercial materials. Estimates include, but are not limited to, extrapolated data based on partial year information, use of annual trends, trade data based on partner country reports, breakdowns of aggregated data as well as analysis of current energy events and activities
Energy Technology Data Exchange (ETDEWEB)
NONE
2010-07-01
Detailed, complete, timely and reliable statistics are essential to monitor the energy situation at a country level as well as at an international level. Energy statistics on supply, trade, stocks, transformation and demand are indeed the basis for any sound energy policy decision. For instance, the market of oil -- which is the largest traded commodity worldwide -- needs to be closely monitored in order for all market players to know at any time what is produced, traded, stocked and consumed and by whom. In view of the role and importance of energy in world development, one would expect basic energy information to be readily available and reliable. This is not always the case, and one can even observe a decline in the quality, coverage and timeliness of energy statistics over the last few years.
Statistical Engine Knock Control
DEFF Research Database (Denmark)
Stotsky, Alexander A.
2008-01-01
A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency....... The control algorithm which is used for minimization of the regulation error realizes a simple count-up-count-down logic. A new adaptation algorithm for the knock detection threshold is also developed. The confidence interval method is used as the basis for adaptation. A simple statistical model which includes generation of the amplitude signals, a threshold value determination and a knock sound model is developed for evaluation of the control concept....
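The count-up-count-down logic mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the report's algorithm: the detection rule, step sizes, and counter limit are all assumptions, and the function names are invented.

```python
# Hypothetical sketch of a count-up-count-down knock-control counter.
# All parameters are illustrative; the report's actual logic may differ.

def knock_detected(amplitude, threshold):
    """Declare knock when the measured amplitude exceeds the adapted threshold."""
    return amplitude > threshold

def count_up_count_down(amplitudes, threshold, step_up=1, step_down=1, limit=3):
    """Count up on each detected knock event, count down otherwise.

    Returns the counter trace; in a real controller, the counter reaching
    `limit` would trigger a spark-timing retard to suppress knock.
    """
    counter, trace = 0, []
    for a in amplitudes:
        if knock_detected(a, threshold):
            counter = min(counter + step_up, limit)   # knock: count up
        else:
            counter = max(counter - step_down, 0)     # no knock: count down
        trace.append(counter)
    return trace
```

The appeal of this scheme is its simplicity: the counter acts as a crude low-pass filter, so a single noisy sensor reading does not immediately retard the timing.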
Forster, Malcolm R
2011-01-01
Statisticians and philosophers of science have many common interests but restricted communication with each other. This volume aims to remedy these shortcomings. It provides state-of-the-art research in the area of philosophy of statistics by encouraging numerous experts to communicate with one another without feeling “restricted” by their disciplines or thinking “piecemeal” in their treatment of issues. A second goal of this book is to present work in the field without bias toward any particular statistical paradigm. Broadly speaking, the essays in this Handbook are concerned with problems of induction, statistics and probability. For centuries, foundational problems like induction have been among philosophers' favorite topics; recently, however, non-philosophers have increasingly taken a keen interest in these issues. This volume accordingly contains papers by both philosophers and non-philosophers, including scholars from nine academic disciplines.
Perception in statistical graphics
VanderPlas, Susan Ruth
There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is sufficient to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.
READING STATISTICS AND RESEARCH
Directory of Open Access Journals (Sweden)
Reviewed by Yavuz Akbulut
2008-10-01
Full Text Available The book demonstrates the best and most conservative ways to decipher and critique research reports, particularly for social science researchers. In addition, new editions of the book are always better organized, effectively structured and meticulously updated in line with the developments in the field of research statistics. Even the most trivial issues are revisited and updated in new editions. For instance, purchasers of the previous editions might check the interpretation of skewness and kurtosis indices in the third edition (p. 34) and in the fifth edition (p. 29) to see how the author revisits every single detail. Theory and practice always go hand in hand in all editions of the book. Re-reading previous editions (e.g. the third edition) before reading the fifth edition gives the impression that the author never stops ameliorating his instructional text writing methods. In brief, “Reading Statistics and Research” is among the best sources showing research consumers how to understand and critically assess the statistical information and research results contained in technical research reports. In this respect, the review written by Mirko Savić in Panoeconomicus (2008, 2, pp. 249-252) will help readers to get a more detailed overview of each chapter. I cordially urge beginning researchers to pick up a highlighter to conduct a detailed reading of the book. A thorough reading of the source will make researchers quite selective in appreciating the harmony between the data analysis, results and discussion sections of typical journal articles. If interested, beginning researchers might begin with this book to grasp the basics of research statistics, and prop up their critical research reading skills with some statistics package applications through the help of Dr. Andy Field’s book, Discovering Statistics Using SPSS (second edition published by Sage in 2005).
Waller, Derek L
2008-01-01
Statistical analysis is essential to business decision-making and management, but the underlying theory of data collection, organization and analysis is one of the most challenging topics for business students and practitioners. This user-friendly text and CD-ROM package will help you to develop strong skills in presenting and interpreting statistical information in a business or management environment. Based entirely on using Microsoft Excel rather than more complicated applications, it includes a clear guide to using Excel with the key functions employed in the book, a glossary of terms and
Statistics As Principled Argument
Abelson, Robert P
2012-01-01
In this illuminating volume, Robert P. Abelson delves into the too-often dismissed problems of interpreting quantitative data and then presenting them in the context of a coherent story about one's research. Unlike too many books on statistics, this is a remarkably engaging read, filled with fascinating real-life (and real-research) examples rather than with recipes for analysis. It will be of true interest and lasting value to beginning graduate students and seasoned researchers alike. The focus of the book is that the purpose of statistics is to organize a useful argument from quantitative
International Nuclear Information System (INIS)
1998-01-01
The international office of energy information and studies (Enerdata) has published the second edition of its 1997 statistical yearbook, which includes consolidated 1996 data with respect to the previous version from June 1997. The CD-Rom comprises the annual worldwide petroleum, natural gas, coal and electricity statistics from 1991 to 1996 with information about production, external trade, consumption, market shares, sectoral distribution of consumption and energy balance sheets. The world is divided into 12 zones (52 countries available). It contains also energy indicators: production and consumption tendencies, supply and production structures, safety of supplies, energy efficiency, and CO2 emissions. (J.S.)
Einstein's statistical mechanics
Energy Technology Data Exchange (ETDEWEB)
Baracca, A; Rechtman S, R
1985-08-01
The foundations of equilibrium classical statistical mechanics were laid down in 1902 independently by Gibbs and Einstein. The latter's contribution, developed in three papers published between 1902 and 1904, is usually forgotten and, when not, rapidly dismissed as equivalent to Gibbs's. We review in detail Einstein's ideas on the foundations of statistical mechanics and show that they constitute the beginning of a research program that led Einstein to quantum theory. We also show how these ideas may be used as a starting point for an introductory course on the subject.
Einstein's statistical mechanics
International Nuclear Information System (INIS)
Baracca, A.; Rechtman S, R.
1985-01-01
The foundations of equilibrium classical statistical mechanics were laid down in 1902 independently by Gibbs and Einstein. The latter's contribution, developed in three papers published between 1902 and 1904, is usually forgotten and, when not, rapidly dismissed as equivalent to Gibbs's. We review in detail Einstein's ideas on the foundations of statistical mechanics and show that they constitute the beginning of a research program that led Einstein to quantum theory. We also show how these ideas may be used as a starting point for an introductory course on the subject. (author)
Neave, Henry R
2012-01-01
This book, designed for students taking a basic introductory course in statistical analysis, is far more than just a book of tables. Each table is accompanied by a careful but concise explanation and useful worked examples. Requiring little mathematical background, Elementary Statistics Tables is thus not just a reference book but a positive and user-friendly teaching and learning aid. The new edition contains a new and comprehensive "teach-yourself" section on a simple but powerful approach, now well-known in parts of industry but less so in academia, to analysing and interpreting process data
Energy Technology Data Exchange (ETDEWEB)
Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H
1999-08-01
This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs.
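The uncertainty estimation this report describes rests on the standard Poisson model of radiation counting, which can be sketched as follows. This is a generic illustration of counting statistics, not code from the report; the function names and the background-subtraction setup are assumptions.

```python
import math

def count_uncertainty(n):
    """Under the Poisson model, a gross count N has standard deviation sqrt(N)."""
    return math.sqrt(n)

def net_rate(gross, t_gross, background, t_bg):
    """Net count rate and its standard uncertainty.

    The gross and background counts are independent Poisson variables, so
    their rate uncertainties combine in quadrature:
        sigma^2 = gross / t_gross^2 + background / t_bg^2
    """
    rate = gross / t_gross - background / t_bg
    sigma = math.sqrt(gross / t_gross**2 + background / t_bg**2)
    return rate, sigma
```

For example, 900 gross counts in 10 s against 100 background counts in 10 s gives a net rate of 80 counts/s with an uncertainty of about 3.2 counts/s, which illustrates why longer counting times (more accumulated counts) reduce the relative error.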
Energy Technology Data Exchange (ETDEWEB)
Suh, M. Y.; Jee, K. Y.; Park, K. K. [Korea Atomic Energy Research Institute, Taejon (Korea)
1999-08-01
This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiments. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. 11 refs., 6 figs., 8 tabs. (Author)
Bayesian statistics an introduction
Lee, Peter M
2012-01-01
Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as well
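The prior-times-likelihood-to-posterior mechanism the blurb describes has a classic closed form in the conjugate Beta-Binomial case, which can be sketched in a few lines. The function names are illustrative, not from the book.

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate Bayesian update for a binomial proportion.

    A Beta(alpha, beta) prior combined with `successes` and `failures`
    observations yields a Beta(alpha + successes, beta + failures) posterior.
    """
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution: the point estimate of the rate."""
    return alpha / (alpha + beta)
```

Starting from a uniform Beta(1, 1) prior and observing 7 successes and 3 failures gives a Beta(8, 4) posterior with mean 2/3; more data would pull the posterior mean ever closer to the observed frequency, illustrating how prior beliefs are gradually overridden by the likelihood.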
Milewski, Emil G
2012-01-01
REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, chi-square test, and time series
Search Databases and Statistics
DEFF Research Database (Denmark)
Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J
2016-01-01
having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....
Environmental accounting and statistics
International Nuclear Information System (INIS)
Bartelmus, P.L.P.
1992-01-01
The objective of sustainable development is to integrate environmental concerns with mainstream socio-economic policies. Integrated policies need to be supported by integrated data. Environmental accounting achieves this integration by incorporating environmental costs and benefits into conventional national accounts. Modified accounting aggregates can thus be used in defining and measuring environmentally sound and sustainable economic growth. Further development objectives need to be assessed by more comprehensive, though necessarily less integrative, systems of environmental statistics and indicators. Integrative frameworks for the different statistical systems in the fields of economy, environment and population would facilitate the provision of comparable data for the analysis of integrated development. (author). 19 refs, 2 figs, 2 tabs
International Nuclear Information System (INIS)
Suh, M. Y.; Jee, K. Y.; Park, K. K.; Park, Y. J.; Kim, W. H.
1999-08-01
This report is intended to describe the statistical methods necessary to design and conduct radiation counting experiments and evaluate the data from the experiment. The methods are described for the evaluation of the stability of a counting system and the estimation of the precision of counting data by application of probability distribution models. The methods for the determination of the uncertainty of the results calculated from the number of counts, as well as various statistical methods for the reduction of counting error are also described. (Author). 11 refs., 8 tabs., 8 figs
Statistical process control for residential treated wood
Patricia K. Lebow; Timothy M. Young; Stan Lebow
2017-01-01
This paper is the first stage of a study that attempts to improve the process of manufacturing treated lumber through the use of statistical process control (SPC). Analysis of industrial and auditing agency data sets revealed there are differences between the industry and agency probability density functions (pdf) for normalized retention data. Resampling of batches of...
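SPC of the kind this study applies typically starts from Shewhart-style control limits on batch means. The following sketch shows the generic 3-sigma X-bar chart computation; it is not the paper's analysis, and the within-batch sigma, batch size, and retention values are invented for illustration.

```python
import statistics

def xbar_chart_limits(batch_means, sigma_within, n):
    """3-sigma Shewhart X-bar chart limits for batches of size n.

    Center line is the grand mean of the batch means; the control limits
    sit at +/- 3 * sigma_within / sqrt(n) around it.
    """
    center = statistics.mean(batch_means)
    margin = 3 * sigma_within / n**0.5
    return center - margin, center, center + margin

def out_of_control(batch_means, lcl, ucl):
    """Indices of batches whose mean (e.g. normalized retention) breaches a limit."""
    return [i for i, m in enumerate(batch_means) if m < lcl or m > ucl]
```

In practice the limits would be estimated from historically in-control batches and then used to flag new treatment charges; here a single aberrant batch mean of 14 against a process centered near 10.8 would be flagged for investigation.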
Detection of significant protein coevolution.
Ochoa, David; Juan, David; Valencia, Alfonso; Pazos, Florencio
2015-07-01
The evolution of proteins cannot be fully understood without taking into account the coevolutionary linkages entangling them. From a practical point of view, coevolution between protein families has been used as a way of detecting protein interactions and functional relationships from genomic information. The most common approach to inferring protein coevolution involves the quantification of phylogenetic tree similarity using a family of methodologies termed mirrortree. In spite of their success, a fundamental problem of these approaches is the lack of an adequate statistical framework to assess the significance of a given coevolutionary score (tree similarity). As a consequence, a number of ad hoc filters and arbitrary thresholds are required in an attempt to obtain a final set of confident coevolutionary signals. In this work, we developed a method for associating confidence estimators (P values) to the tree-similarity scores, using a null model specifically designed for the tree comparison problem. We show how this approach largely improves the quality and coverage (number of pairs that can be evaluated) of the detected coevolution in all the stages of the mirrortree workflow, independently of the starting genomic information. This not only leads to a better understanding of protein coevolution and its biological implications, but also to obtain a highly reliable and comprehensive network of predicted interactions, as well as information on the substructure of macromolecular complexes using only genomic information. The software and datasets used in this work are freely available at: http://csbg.cnb.csic.es/pMT/. Contact: pazos@cnb.csic.es. Supplementary data are available at Bioinformatics online.
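The mirrortree score is essentially a correlation between the inter-species distance matrices of two protein families, and its significance can be estimated by comparing it against a null distribution. The sketch below uses a generic species-label permutation null; note this is NOT the paper's null model, which is specifically designed for the tree comparison problem. All function names are illustrative.

```python
import random

def upper_triangle(m):
    """Flatten the strict upper triangle of a symmetric distance matrix."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(i + 1, n)]

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def mirrortree_pvalue(d1, d2, n_perm=999, seed=0):
    """Permutation P-value for a mirrortree-style tree-similarity score.

    Correlates the two inter-species distance matrices, then compares the
    observed score against scores obtained after randomly relabelling the
    species of the second matrix (a simple, generic null model).
    """
    rng = random.Random(seed)
    observed = pearson(upper_triangle(d1), upper_triangle(d2))
    n = len(d2)
    hits = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)
        shuffled = [[d2[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
        if pearson(upper_triangle(d1), upper_triangle(shuffled)) >= observed:
            hits += 1
    # Add-one correction keeps the P value strictly positive.
    return observed, (hits + 1) / (n_perm + 1)
```

Attaching a P value in this way replaces the ad hoc score thresholds the abstract criticizes with a calibrated significance statement, at the cost of the permutation loop's extra computation.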
Mai, Lan-Yin; Li, Yi-Xuan; Chen, Yong; Xie, Zhen; Li, Jie; Zhong, Ming-Yu
2014-05-01
The compatibility of traditional Chinese medicine (TCM) formulae, which contain enormous amounts of information, constitutes a complex component system. Applying mathematical statistics methods to research on the compatibility of TCM formulae has great significance for promoting the modernization of traditional Chinese medicine and for improving the clinical efficacy and optimization of formulae. As a tool for quantitative analysis, data inference and the exploration of the inherent rules of substances, mathematical statistics can reveal, both qualitatively and quantitatively, the working mechanisms underlying the compatibility of TCM formulae. By reviewing studies that apply mathematical statistics methods, this paper summarizes the field from the perspectives of dosage optimization, efficacy, and changes of chemical components, as well as the rules of incompatibility and contraindication in formulae, and provides references for further studying and revealing the working mechanisms and connotations of traditional Chinese medicine formulae.
Statistical inference for template aging
Schuckers, Michael E.
2006-04-01
A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not on the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
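A minimal version of the likelihood-ratio idea can be sketched for the simplest case of two time points. This is a hedged illustration, not the paper's methodology: it tests whether a binomial error rate differs between two periods, whereas the paper works with generalized linear models over many time points. Names and data are invented; rates are assumed strictly between 0 and 1.

```python
import math

def binom_loglik(errors, trials, p):
    """Binomial log-likelihood up to an additive constant; assumes 0 < p < 1."""
    return errors * math.log(p) + (trials - errors) * math.log(1 - p)

def lr_test_two_periods(e1, n1, e2, n2):
    """Likelihood-ratio statistic for template aging between two periods.

    H0: one common error rate at both time points.
    H1: separate error rates at each time point.
    Under H0 the statistic is approximately chi-square with 1 degree
    of freedom, so values above ~3.84 are significant at the 5% level.
    """
    p_pool = (e1 + e2) / (n1 + n2)          # H0 estimate: pooled rate
    p1, p2 = e1 / n1, e2 / n2               # H1 estimates: separate rates
    ll0 = binom_loglik(e1, n1, p_pool) + binom_loglik(e2, n2, p_pool)
    ll1 = binom_loglik(e1, n1, p1) + binom_loglik(e2, n2, p2)
    return 2 * (ll1 - ll0)
```

For example, 10 errors in 100 trials at both time points gives a statistic of exactly 0 (no evidence of aging), while 5/100 rising to 20/100 gives a statistic well above the 3.84 cutoff.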