WorldWideScience

Sample records for models statistically significant

  1. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  2. Does Statistical Significance Help to Evaluate Predictive Performance of Competing Models?

    Directory of Open Access Journals (Sweden)

    Levent Bulut

    2016-04-01

    Full Text Available In a Monte Carlo experiment with simulated data, we show that as a point-forecast criterion, Clark and West's (2006) unconditional test of mean squared prediction errors does not reflect the relative performance of a superior model over a relatively weaker one. The simulation results show that even though the mean squared prediction errors of a constructed superior model are far below those of a weaker alternative, the Clark-West test does not reflect this in its test statistics. Therefore, studies that use this statistic in testing the predictive accuracy of alternative exchange rate models, stock return predictability, inflation forecasting, and unemployment forecasting should not put too much weight on the magnitude of statistically significant Clark-West test statistics.
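The statistic the abstract critiques can be sketched in a few lines. This is a hedged, simplified rendering of the Clark-West adjusted-MSPE test (HAC standard errors and rolling-window estimation are omitted for brevity); all series and forecasts below are synthetic.

```python
import numpy as np

def clark_west_stat(e_small, e_large, f_small, f_large):
    """Simplified Clark-West (2006) adjusted-MSPE t-statistic for nested
    models (HAC corrections omitted). e_*: forecast errors, f_*: point
    forecasts of the parsimonious and the larger model, respectively."""
    adj = e_small**2 - (e_large**2 - (f_small - f_large)**2)
    n = len(adj)
    return np.sqrt(n) * adj.mean() / adj.std(ddof=1)

# Toy data: the larger model nests the small one but truly adds nothing.
rng = np.random.default_rng(0)
y = rng.normal(size=500)                    # target series: white noise
f1 = np.zeros(500)                          # small model: forecast the true mean
f2 = f1 + rng.normal(scale=0.1, size=500)   # larger model: adds estimation noise
t_stat = clark_west_stat(y - f1, y - f2, f1, f2)
print(round(float(t_stat), 2))
```

A large positive t-statistic rejects equal adjusted MSPE in favor of the larger model; the abstract's point is that the magnitude of this statistic is a poor guide to how much better one model actually forecasts.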

  3. Statistically significant relational data mining :

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second are statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  4. Weighted Feature Significance: A Simple, Interpretable Model of Compound Toxicity Based on the Statistical Enrichment of Structural Features

    OpenAIRE

    Huang, Ruili; Southall, Noel; Xia, Menghang; Cho, Ming-Hsuang; Jadhav, Ajit; Nguyen, Dac-Trung; Inglese, James; Tice, Raymond R.; Austin, Christopher P.

    2009-01-01

    In support of the U.S. Tox21 program, we have developed a simple and chemically intuitive model we call weighted feature significance (WFS) to predict the toxicological activity of compounds, based on the statistical enrichment of structural features in toxic compounds. We trained and tested the model on the following: (1) data from quantitative high-throughput screening cytotoxicity and caspase activation assays conducted at the National Institutes of Health Chemical Genomics Center, (2) dat...
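The enrichment-weighting idea can be sketched generically. The scoring below is a hedged stand-in for WFS, not the authors' exact formula: each feature is weighted by the negative log of a one-sided hypergeometric p-value for its enrichment among toxic compounds, and a compound's score is the sum of its features' weights. All compound names and features are made up.

```python
from math import comb, log10

def hypergeom_sf(a, n_tox, n_non, k):
    """P(X >= a) for hypergeometric X: of k feature-bearing compounds,
    how many fall in the toxic set (n_tox toxic, n_non non-toxic)."""
    total = comb(n_tox + n_non, k)
    return sum(comb(n_tox, x) * comb(n_non, k - x)
               for x in range(a, min(k, n_tox) + 1)) / total

def feature_weights(compounds, toxic):
    """compounds: dict name -> set of structural features (hypothetical
    fingerprints); toxic: names labelled toxic. Each feature is weighted
    by -log10 of its enrichment p-value among toxic compounds."""
    n_tox = len(toxic)
    n_non = len(compounds) - n_tox
    weights = {}
    for feat in set().union(*compounds.values()):
        bearers = [c for c, fs in compounds.items() if feat in fs]
        a = sum(1 for c in bearers if c in toxic)
        p = hypergeom_sf(a, n_tox, n_non, len(bearers))
        weights[feat] = -log10(max(p, 1e-300))
    return weights

def wfs_score(features, weights):
    """Score a compound by summing the weights of its features."""
    return sum(weights.get(f, 0.0) for f in features)

compounds = {
    "tox1": {"nitro", "ring"}, "tox2": {"nitro", "ring"},
    "tox3": {"nitro", "ring"}, "tox4": {"nitro", "ring"},
    "ok1": {"ring"}, "ok2": {"ring"}, "ok3": {"ring"}, "ok4": {"ring"},
}
toxic = {"tox1", "tox2", "tox3", "tox4"}
w = feature_weights(compounds, toxic)
print(w["nitro"] > w["ring"])  # "nitro" is enriched in the toxic set; "ring" is not
```

The appeal of such a scheme, as the abstract notes, is interpretability: each feature's contribution to a compound's predicted toxicity is a single, inspectable number.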

  5. Statistical significance versus clinical relevance.

    Science.gov (United States)

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

    In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
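The "5% of the time" interpretation above is easy to verify by simulation. A minimal sketch with synthetic data (a known-variance z-test is used for simplicity): every simulated study is run under a true null hypothesis, yet about 5% of them still come out "significant".

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
alpha, n, reps = 0.05, 50, 10_000

# Every simulated "study" draws n observations from N(0, 1), so the null
# hypothesis (true mean = 0) is TRUE in all of them.
samples = rng.normal(size=(reps, n))
z = samples.mean(axis=1) * np.sqrt(n)   # z-statistic (sigma = 1 known)
pvals = 2 * norm.sf(np.abs(z))          # two-sided P-values

rate = float((pvals < alpha).mean())
print(rate)  # close to alpha: ~5% of true-null studies reach "significance"
```

This is exactly the frequency statement the abstract insists on: the P-value speaks about the data under the null, not about the probability that the null is true.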

  6. Common pitfalls in statistical analysis: Clinical versus statistical significance

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results that are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects their impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754
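The distinction is easiest to see with a large sample: a trivially small effect can be overwhelmingly "significant". A hedged sketch with invented numbers (a hypothetical 0.2 mmHg blood-pressure difference, clinically negligible by any standard):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical trial: the treatment lowers systolic blood pressure by
# 0.2 mmHg on average -- clinically negligible -- but n is enormous.
control = rng.normal(120.0, 10.0, size=1_000_000)
treated = rng.normal(119.8, 10.0, size=1_000_000)

res = ttest_ind(control, treated)
cohens_d = (control.mean() - treated.mean()) / 10.0

print(f"p = {res.pvalue:.2e}, Cohen's d = {cohens_d:.3f}")
```

The P-value is astronomically small while the standardized effect size is tiny: statistical significance certifies that the difference is reliably nonzero, not that it matters to patients.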

  7. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract Background It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome scale scanning. Results We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of statistical significance of cis-regulatory module sites. Conclusion The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM and MODSTORM software.
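The first step the abstract describes, an empirical p-value for a single binding-site match, can be sketched generically. Everything below is a synthetic stand-in: the position weight matrix is invented, and random 4-mers replace the database of known promoters the paper actually uses.

```python
import bisect
import random

random.seed(7)

# Hypothetical 4-bp position weight matrix of log-odds scores:
# PWM[base][position] is the score for seeing `base` at `position`.
PWM = dict(zip("ACGT", [[1.2, -0.5, -1.0, 0.8],
                        [-0.7, 1.1, -0.4, -0.9],
                        [-0.6, -0.8, 1.3, -0.2],
                        [-0.9, -0.3, -0.5, 0.6]]))

def score(site):
    return sum(PWM[b][i] for i, b in enumerate(site))

# Empirical null distribution: scores of random 4-mers standing in for
# background promoter sequence.
background = ["".join(random.choice("ACGT") for _ in range(4))
              for _ in range(100_000)]
null_scores = sorted(score(s) for s in background)

def site_pvalue(site):
    """Fraction of background sites scoring at least as high."""
    k = len(null_scores) - bisect.bisect_left(null_scores, score(site))
    return max(k, 1) / len(null_scores)

print(site_pvalue("ACGT"))  # a strong match to this PWM: small empirical p-value
```

In the paper's pipeline, per-site p-values like this are then combined with the max-gap model over site arrangement to score a whole module; the sketch covers only the single-site step.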

  8. Social significance of community structure: statistical view.

    Science.gov (United States)

    Li, Hui-Jia; Daniels, Jasmine J

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.
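The common-neighbor similarity at the heart of the method is simple to state in code. A minimal sketch on a hypothetical graph (node names and edges invented; adjacency stored as sets so the similarity is a set intersection):

```python
# Toy undirected graph as adjacency sets; "leader" is the designated
# social leader, nodes a-c form its tight community, d-f a loose tail.
graph = {
    "leader": {"a", "b", "c", "d"},
    "a": {"leader", "b", "c"},
    "b": {"leader", "a", "c"},
    "c": {"leader", "a", "b"},
    "d": {"leader", "e"},
    "e": {"d", "f"},
    "f": {"e"},
}

def common_neighbor_similarity(g, u, v):
    """Similarity of u and v = number of neighbors they share."""
    return len(g[u] & g[v])

for node in sorted(graph):
    if node != "leader":
        print(node, common_neighbor_similarity(graph, node, "leader"))
```

Nodes a, b and c each share two neighbors with the leader, while the peripheral nodes share one or none; in the paper this similarity drives membership assignment before the log-likelihood tightness score is computed.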

  9. Social significance of community structure: Statistical view

    Science.gov (United States)

    Li, Hui-Jia; Daniels, Jasmine J.

    2015-01-01

    Community structure analysis is a powerful tool for social networks that can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a framework to analyze the significance of a social community. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of the nodes and their corresponding leaders. Then, using a log-likelihood score, the tightness of the community can be derived. Based on the distribution of community tightness, we establish a connection between p-value theory and network analysis, and then we obtain a significance measure of statistical form. Finally, the framework is applied to both benchmark networks and real social networks. Experimental results show that our work can be used in many fields, such as determining the optimal number of communities, analyzing the social significance of a given community, comparing the performance among various algorithms, etc.

  10. Assessing statistical significance in causal graphs

    Directory of Open Access Journals (Sweden)

    Chindelevitch Leonid

    2012-02-01

    Full Text Available Abstract Background Causal graphs are an increasingly popular tool for the analysis of biological datasets. In particular, signed causal graphs--directed graphs whose edges additionally have a sign denoting upregulation or downregulation--can be used to model regulatory networks within a cell. Such models allow prediction of downstream effects of regulation of biological entities; conversely, they also enable inference of causative agents behind observed expression changes. However, due to their complex nature, signed causal graph models present special challenges with respect to assessing statistical significance. In this paper we frame and solve two fundamental computational problems that arise in practice when computing appropriate null distributions for hypothesis testing. Results First, we show how to compute a p-value for agreement between observed and model-predicted classifications of gene transcripts as upregulated, downregulated, or neither. Specifically, how likely are the classifications to agree to the same extent under the null distribution of the observed classification being randomized? This problem, which we call "Ternary Dot Product Distribution" owing to its mathematical form, can be viewed as a generalization of Fisher's exact test to ternary variables. We present two computationally efficient algorithms for computing the Ternary Dot Product Distribution and investigate its combinatorial structure analytically and numerically to establish computational complexity bounds. Second, we develop an algorithm for efficiently performing random sampling of causal graphs. This enables p-value computation under a different, equally important null distribution obtained by randomizing the graph topology but keeping fixed its basic structure: connectedness and the positive and negative in- and out-degrees of each vertex. We provide an algorithm for sampling a graph from this distribution uniformly at random. We also highlight theoretical…
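The ternary-agreement null distribution can be approximated by permutation, which conveys the idea without the paper's exact algorithms. A hedged sketch (Monte Carlo shuffling in place of the exact Ternary Dot Product Distribution; all +1/0/-1 labels invented):

```python
import random

random.seed(3)

def agreement(x, y):
    """Ternary dot product: +1 per agreeing sign, -1 per opposing sign."""
    return sum(a * b for a, b in zip(x, y))

def ternary_pvalue(observed, predicted, n_perm=20_000):
    """Monte Carlo stand-in for the exact test: how often does a random
    shuffle of the observed +1/0/-1 labels agree with the model's
    predictions at least as well as the real data?"""
    obs = agreement(observed, predicted)
    labels = list(observed)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(labels)
        if agreement(labels, predicted) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)

predicted = [1, 1, 1, -1, -1, 0, 1, -1, 1, -1]   # model-predicted regulation
observed  = [1, 1, 1, -1, -1, 0, 1, -1, 0, -1]   # observed regulation
print(ternary_pvalue(observed, predicted))
```

Because the observed labels agree with the predictions almost everywhere, shuffled labels rarely do as well, and the p-value is small; with all-random labels it would hover near 1.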

  11. The thresholds for statistical and clinical significance

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Gluud, Christian; Winkel, Per

    2014-01-01

    threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to number of outcome comparisons; and (5) assess clinical significance of the trial results. CONCLUSIONS: If the proposed five-step procedure...... not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore......, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. METHODS: Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity...

  12. Social significance of community structure: Statistical view

    CERN Document Server

    Li, Hui-Jia

    2015-01-01

    Community structure analysis is a powerful tool for social networks, which can simplify their topological and functional analysis considerably. However, since community detection methods have random factors and real social networks obtained from complex systems always contain error edges, evaluating the significance of a partitioned community structure is an urgent and important question. In this paper, integrating the specific characteristics of real society, we present a novel framework for analyzing the significance of social communities. The dynamics of social interactions are modeled by identifying social leaders and corresponding hierarchical structures. Instead of a direct comparison with the average outcome of a random model, we compute the similarity of a given node with the leader by the number of common neighbors. To determine the membership vector, an efficient community detection algorithm is proposed based on the position of nodes and their corresponding leaders. Then, using log-likelihood sco...

  13. Significant Statistics: Viewed with a Contextual Lens

    Science.gov (United States)

    Tait-McCutcheon, Sandi

    2010-01-01

    This paper examines the pedagogical and organisational changes three lead teachers made to their statistics teaching and learning programs. The lead teachers posed the research question: What would the effect of contextually integrating statistical investigations and literacies into other curriculum areas be on student achievement? By finding the…

  14. Statistical Model for Content Extraction

    DEFF Research Database (Denmark)

    2011-01-01

    We present a statistical model for content extraction from HTML documents. The model operates on Document Object Model (DOM) tree of the corresponding HTML document. It evaluates each tree node and associated statistical features to predict significance of the node towards overall content...

  15. Diffeomorphic Statistical Deformation Models

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus

    2007-01-01

    In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al. Th...... with ground truth in form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes....

  16. A Statistical Programme Assignment Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Staghøj, Jonas; Svarer, Michael

    assignment mechanism, which is based on the discretionary choice of case workers. This is done in a duration model context, using the timing-of-events framework to identify causal effects. We compare different assignment mechanisms, and the results suggest that a significant reduction in the average...... duration of unemployment spells may result if a statistical programme assignment model is introduced. We discuss several issues regarding the implementation of such a system, especially the interplay between the statistical model and case workers....

  17. Modeling cosmic void statistics

    Science.gov (United States)

    Hamaus, Nico; Sutter, P. M.; Wandelt, Benjamin D.

    2016-10-01

    Understanding the internal structure and spatial distribution of cosmic voids is crucial when considering them as probes of cosmology. We present recent advances in modeling void density- and velocity-profiles in real space, as well as void two-point statistics in redshift space, by examining voids identified via the watershed transform in state-of-the-art ΛCDM n-body simulations and mock galaxy catalogs. The simple and universal characteristics that emerge from these statistics indicate the self-similarity of large-scale structure and suggest cosmic voids to be among the most pristine objects to consider for future studies on the nature of dark energy, dark matter and modified gravity.

  18. Caveats for using statistical significance tests in research assessments

    OpenAIRE

    2011-01-01

    This paper raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators. Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with s...

  19. A Statistical Programme Assignment Model

    DEFF Research Database (Denmark)

    Rosholm, Michael; Staghøj, Jonas; Svarer, Michael

    When treatment effects of active labour market programmes are heterogeneous in an observable way across the population, the allocation of the unemployed into different programmes becomes a particularly important issue. In this paper, we present a statistical model designed to improve the present...... assignment mechanism, which is based on the discretionary choice of case workers. This is done in a duration model context, using the timing-of-events framework to identify causal effects. We compare different assignment mechanisms, and the results suggest that a significant reduction in the average...... duration of unemployment spells may result if a statistical programme assignment model is introduced. We discuss several issues regarding the implementation of such a system, especially the interplay between the statistical model and case workers....

  20. Caveats for using statistical significance tests in research assessments

    CERN Document Server

    Schneider, Jesper W

    2011-01-01

    This paper raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators. Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice of such tests, their dichotomous application in decision making, the difference between statistical and substantive significance, the implausibility of most null hypotheses, the crucial assumption of randomness, as well as the utility of standard errors and confidence intervals for inferential purposes. We argue that applying statistical significance tests and mechanically adhering to their results is highly problematic and detrimental to critical thinki...

  1. Statistical significance of spectral lag transition in GRB 160625B

    Science.gov (United States)

    Ganguly, Shalini; Desai, Shantanu

    2017-09-01

    Recently, Wei et al. [1] have found evidence for a transition from positive time lags to negative time lags in the spectral lag data of GRB 160625B. They have fit these observed lags to a sum of two components: an assumed functional form for intrinsic time lag due to astrophysical mechanisms and an energy-dependent speed of light due to quadratic and linear Lorentz invariance violation (LIV) models. Here, we examine the statistical significance of the evidence for a transition to negative time lags. Such a transition, even if present in GRB 160625B, cannot be due to an energy-dependent speed of light as this would contradict previous limits by some 3-4 orders of magnitude, and must therefore be of intrinsic astrophysical origin. We use three different model comparison techniques: a frequentist test and two information-based criteria (AIC and BIC). From the frequentist model comparison test, we find that the evidence for transition in the spectral lag data is favored at 3.05σ and 3.74σ for the linear and quadratic models, respectively. We find that ΔAIC and ΔBIC have values ≳ 10 for the spectral lag transition that was motivated as being due to the quadratic Lorentz invariance violating model, pointing to "decisive evidence". We note however that none of the three models (including the model of intrinsic astrophysical emission) provide a good fit to the data.

  2. Significance analysis and statistical mechanics: an application to clustering.

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael; Berg, Johannes

    2010-11-26

    This Letter addresses the statistical significance of structures in random data: given a set of vectors and a measure of mutual similarity, how likely is it that a subset of these vectors forms a cluster with enhanced similarity among its elements? The computation of this cluster p value for randomly distributed vectors is mapped onto a well-defined problem of statistical mechanics. We solve this problem analytically, establishing a connection between the physics of quenched disorder and multiple-testing statistics in clustering and related problems. In an application to gene expression data, we find a remarkable link between the statistical significance of a cluster and the functional relationships between its genes.

  3. Mass spectrometry based protein identification with accurate statistical significance assignment

    OpenAIRE

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  4. Significance and importance: some common misapprehensions about statistics

    OpenAIRE

    Currey, John; Paul D Baxter; Pitchford, Jonathan W

    2009-01-01

    Abstract This paper attempts to discuss, in a readily understandable way, some very common misapprehensions that occur in laboratory-based scientists' thinking about statistics. We deal mainly with three issues: 1) P-values are best thought of as merely guides to action: are your experimental data consistent with your null hypothesis, or not? 2) When confronted with statistically non-significant results, you should also think about the power of the statistical test...

  5. The Use of Meta-Analytic Statistical Significance Testing

    Science.gov (United States)

    Polanin, Joshua R.; Pigott, Terri D.

    2015-01-01

    Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
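The multiplicity problem the abstract raises can be made concrete with the simplest correction. This is a generic Bonferroni sketch, chosen here only for illustration (not necessarily the correction the authors propose), with made-up p-values standing in for multiple meta-analytic tests:

```python
def bonferroni_flags(pvalues, alpha=0.05):
    """Which of m simultaneous significance tests survive the simple,
    conservative Bonferroni threshold alpha / m?"""
    m = len(pvalues)
    return [p < alpha / m for p in pvalues]

# Four hypothetical subgroup tests conducted within one review:
print(bonferroni_flags([0.001, 0.02, 0.04, 0.30]))  # only 0.001 clears 0.05/4
```

Three of the four tests would have been declared significant at the uncorrected 0.05 level; after accounting for multiplicity, only one survives, which is exactly the Type I error inflation the authors warn about.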

  6. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly...... controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice...... of such tests, their dichotomous application in decision making, the difference between statistical and substantive significance, the implausibility of most null hypotheses, the crucial assumption of randomness, as well as the utility of standard errors and confidence intervals for inferential purposes. We...

  7. Caveats for using statistical significance tests in research assessments

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg

    2013-01-01

    This article raises concerns about the advantages of using statistical significance tests in research assessments as has recently been suggested in the debate about proper normalization procedures for citation indicators by Opthof and Leydesdorff (2010). Statistical significance tests are highly...... controversial and numerous criticisms have been leveled against their use. Based on examples from articles by proponents of the use of statistical significance tests in research assessments, we address some of the numerous problems with such tests. The issues specifically discussed are the ritual practice...... are important or not. On the contrary their use may be harmful. Like many other critics, we generally believe that statistical significance tests are over- and misused in the empirical sciences including scientometrics and we encourage a reform on these matters....

  8. The questioned p value: clinical, practical and statistical significance.

    Science.gov (United States)

    Jiménez-Paneque, Rosa

    2016-09-09

    The use of the p-value and statistical significance has been questioned from the early 1980s to the present day. Much has been discussed about it in the field of statistics and its applications, especially in Epidemiology and Public Health. As a matter of fact, the p-value and its equivalent, statistical significance, are difficult concepts to grasp for the many health professionals who are in some way involved in research applied to their work areas. However, its meaning should be clear in intuitive terms although it is based on theoretical concepts of the field of Statistics. This paper attempts to present the p-value as a concept that applies to everyday life and is therefore intuitively simple, but whose proper use cannot be separated from theoretical and methodological elements of inherent complexity. The reasons behind the criticism received by the p-value and its isolated use are intuitively explained, mainly the need to demarcate statistical significance from clinical significance, and some of the recommended remedies for these problems are approached as well. It finally refers to the current trend to vindicate the p-value by appealing to the convenience of its use in certain situations and the recent statement of the American Statistical Association in this regard.

  9. Statistical significance test for transition matrices of atmospheric Markov chains

    Science.gov (United States)

    Vautard, Robert; Mo, Kingtse C.; Ghil, Michael

    1990-01-01

    Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
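The Monte Carlo significance test described here can be sketched generically: shuffle the regime sequence (preserving regime frequencies but destroying temporal order) and ask how often a given transition count is matched by chance. The regime labels and sequence below are invented for illustration:

```python
import random
from collections import Counter

random.seed(11)

def transition_counts(seq):
    """Count of each (from_regime, to_regime) pair in the sequence."""
    return Counter(zip(seq, seq[1:]))

def mc_pvalue(seq, src, dst, n_sim=5_000):
    """Monte Carlo significance of the observed src -> dst transition
    count against randomly shuffled regime sequences."""
    obs = transition_counts(seq)[(src, dst)]
    pool = list(seq)
    hits = 0
    for _ in range(n_sim):
        random.shuffle(pool)
        if transition_counts(pool)[(src, dst)] >= obs:
            hits += 1
    return (hits + 1) / (n_sim + 1)

# Hypothetical daily flow-regime labels with A -> B engineered to recur:
seq = list("ABABABABABCABABABCCABABAB")
print(mc_pvalue(seq, "A", "B"))
```

The A → B transition occurs far more often than random ordering would produce, so its Monte Carlo p-value is small; the same machinery flags significantly *avoided* transitions by testing the lower tail instead.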

  10. Algebraic Statistics for Network Models

    Science.gov (United States)

    2014-02-19

    Final report AFRL-OSR-VA-TR-2014-0070 for the DARPA GRAPHS Phase I project "Algebraic Statistics for Network Models" (grant FA9550-12-1-0392), Sonja Petrović (petrovic@psu.edu), Department of Statistics, Pennsylvania State University, 02/19/2014; with collaborators in the Department of Statistics, Heinz College, Machine Learning Department, and CyLab at Carnegie Mellon University. Abstract: This project focused on the family of…

  11. On detection and assessment of statistical significance of Genomic Islands

    Directory of Open Access Journals (Sweden)

    Chaudhuri Probal

    2008-04-01

    Full Text Available Abstract Background Many of the available methods for detecting Genomic Islands (GIs) in prokaryotic genomes use markers such as transposons, proximal tRNAs, flanking repeats, etc., or they use other supervised techniques requiring training datasets. Most of these methods are primarily based on the biases in GC content or codon and amino acid usage of the islands. However, these methods either do not use any formal statistical test of significance or use statistical tests for which the critical values and the P-values are not adequately justified. We propose a method, which is unsupervised in nature and uses Monte-Carlo statistical tests based on randomly selected segments of a chromosome. Such tests are supported by precise statistical distribution theory, and consequently, the resulting P-values are quite reliable for making the decision. Results Our algorithm (named Design-Island, an acronym for Detection of Statistically Significant Genomic Island) runs in two phases. Some 'putative GIs' are identified in the first phase, and those are refined into smaller segments containing horizontally acquired genes in the refinement phase. This method is applied to the Salmonella typhi CT18 genome, leading to the discovery of several new pathogenicity, antibiotic resistance and metabolic islands that were missed by earlier methods. Many of these islands contain mobile genetic elements like phage-mediated genes, transposons, integrase and IS elements, confirming their horizontal acquisition. Conclusion The proposed method is based on statistical tests supported by precise distribution theory and reliable P-values along with a technique for visualizing statistically significant islands. The performance of our method is better than many other well known methods in terms of their sensitivity and accuracy, and in terms of specificity, it is comparable to other methods.
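The core idea, Monte Carlo tests based on randomly selected chromosome segments, can be illustrated on toy data. The synthetic genome, the planted GC-rich "island", and the plain GC-content statistic below are illustrative stand-ins for the actual Design-Island test statistics:

```python
import random

random.seed(11)

# synthetic 20 kb genome with a hypothetical GC-rich island planted at 5000-6000
genome = "".join(random.choice("ACGT") for _ in range(20000))
island = "".join(random.choice("GGCCGCAT") for _ in range(1000))  # ~75% GC
genome = genome[:5000] + island + genome[6000:]

def gc(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def mc_pvalue(genome, start, length, n_sim=1000):
    """Fraction of randomly placed segments whose GC content is at least
    as high as the segment under test (one-sided Monte Carlo p-value)."""
    obs = gc(genome[start:start + length])
    hits = 0
    for _ in range(n_sim):
        s = random.randrange(len(genome) - length)
        if gc(genome[s:s + length]) >= obs:
            hits += 1
    return (hits + 1) / (n_sim + 1)

p_island = mc_pvalue(genome, 5000, 1000)       # should be small
p_background = mc_pvalue(genome, 12000, 1000)  # should be unremarkable
```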

  12. Methods of statistical model estimation

    CERN Document Server

    Hilbe, Joseph

    2013-01-01

    Methods of Statistical Model Estimation examines the most important and popular methods used to estimate parameters for statistical models and provide informative model summary statistics. Designed for R users, the book is also ideal for anyone wanting to better understand the algorithms used for statistical model fitting. The text presents algorithms for the estimation of a variety of regression procedures using maximum likelihood estimation, iteratively reweighted least squares regression, the EM algorithm, and MCMC sampling. Fully developed, working R code is constructed for each method.

  13. LP Approach to Statistical Modeling

    OpenAIRE

    Mukhopadhyay, Subhadeep; Parzen, Emanuel

    2014-01-01

    We present an approach to statistical data modeling and exploratory data analysis called `LP Statistical Data Science.' It aims to generalize and unify traditional and novel statistical measures, methods, and exploratory tools. This article outlines fundamental concepts along with real-data examples to illustrate how the `LP Statistical Algorithm' can systematically tackle different varieties of data types, data patterns, and data structures under a coherent theoretical framework. A fundament...

  14. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  15. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  16. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge on wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: Statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.

  17. Statistical modeling for degradation data

    CERN Document Server

    Lio, Yuhlong; Ng, Hon; Tsai, Tzong-Ru

    2017-01-01

    This book focuses on the statistical aspects of the analysis of degradation data. In recent years, degradation data analysis has come to play an increasingly important role in different disciplines such as reliability, public health sciences, and finance. For example, information on products’ reliability can be obtained by analyzing degradation data. In addition, statistical modeling and inference techniques have been developed on the basis of different degradation measures. The book brings together experts engaged in statistical modeling and inference, presenting and discussing important recent advances in degradation data analysis and related applications. The topics covered are timely and have considerable potential to impact both statistics and reliability engineering.

  18. Foundational Issues in Statistical Modeling: Statistical Model Specification and Validation

    Directory of Open Access Journals (Sweden)

    Aris Spanos

    2011-01-01

    Full Text Available Statistical model specification and validation raise crucial foundational problems whose pertinent resolution holds the key to learning from data by securing the reliability of frequentist inference. The paper questions the judiciousness of several current practices, including the theory-driven approach, and the Akaike-type model selection procedures, arguing that they often lead to unreliable inferences. This is primarily due to the fact that goodness-of-fit/prediction measures and other substantive and pragmatic criteria are of questionable value when the estimated model is statistically misspecified. Foisting one's favorite model on the data often yields estimated models which are both statistically and substantively misspecified, but one has no way to delineate between the two sources of error and apportion blame. The paper argues that the error statistical approach can address this Duhemian ambiguity by distinguishing between statistical and substantive premises and viewing empirical modeling in a piecemeal way with a view to delineate the various issues more effectively. It is also argued that Hendry's general-to-specific procedure does a much better job in model selection than the theory-driven and Akaike-type procedures, primarily because of its error statistical underpinnings.

  19. Statistical modelling with quantile functions

    CERN Document Server

    Gilchrist, Warren

    2000-01-01

    Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 60s and 70s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one put all the ideas together to form what turns out to be a general approach to statistics.Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data.Statistical Modelling with Quantile Functions adds a new dimension to the practice of stati...
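The book's central device, building models by combining quantile functions, can be sketched in a few lines. The component choices and weights below are arbitrary examples: the sum of two non-decreasing quantile functions is again a valid quantile function, and sampling from the resulting model falls out of the inverse-transform method.

```python
import math
import random

random.seed(3)

def q_uniform(p):
    """Quantile function of U(0, 1)."""
    return p

def q_exponential(p):
    """Quantile function of Exp(1)."""
    return -math.log(1 - p)

def q_model(p, a=1.0, b=0.5):
    """Adding (positively weighted) quantile functions yields a new
    distribution: here a uniform 'body' with an exponential right tail."""
    return a * q_uniform(p) + b * q_exponential(p)

# inverse-transform sampling: feed uniform random probabilities to Q(p)
sample = [q_model(random.random()) for _ in range(10000)]
```

The same kit extends to multiplying quantile functions of positive distributions and to transforming them, which is what makes the approach compositional.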

  20. Systematic reviews of anesthesiologic interventions reported as statistically significant

    DEFF Research Database (Denmark)

    Imberger, Georgina; Gluud, Christian; Boylan, John

    2015-01-01

    statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may......: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number...

  1. Your Chi-Square Test Is Statistically Significant: Now What?

    Directory of Open Access Journals (Sweden)

    Donald Sharpe

    2015-04-01

    Full Text Available Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data from two recent journal articles were used to illustrate these approaches. A call is made for greater consideration of foundational techniques such as the chi-square tests.
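Of the four follow-up approaches, calculating residuals is the most direct to sketch. A minimal numpy example (the 2x2 table is invented; the 1-dof p-value shortcut applies only to 2x2 tables): adjusted standardized residuals are approximately N(0, 1) under independence, so cells with |z| > 1.96 are the source of a significant omnibus chi-square.

```python
import math
import numpy as np

table = np.array([[30.0, 10.0],
                  [10.0, 30.0]])   # invented 2x2 contingency table
n = table.sum()
row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row * col / n           # counts expected under independence

chi2 = ((table - expected) ** 2 / expected).sum()
p = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival function, 1 dof

# Adjusted standardized residuals: (obs - exp) scaled to ~N(0, 1)
adj = (table - expected) / np.sqrt(expected * (1 - row / n) * (1 - col / n))
```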

  2. Sensometrics: Thurstonian and Statistical Models

    DEFF Research Database (Denmark)

    Christensen, Rune Haubo Bojesen

    of human senses. Thurstonian models provide a stochastic model for the data-generating mechanism through a psychophysical model for the cognitive processes and in addition provides an independent measure for quantification of sensory differences. In the interest of cost-reduction and health...... of generalized linear mixed models, cumulative link models and cumulative link mixed models. The relation between the Wald, likelihood and score statistics is expanded upon using the shape of the (profile) likelihood function as common reference....

  3. Statistical downscaling rainfall using artificial neural network: significantly wetter Bangkok?

    Science.gov (United States)

    Vu, Minh Tue; Aribarg, Thannob; Supratid, Siriporn; Raghavan, Srivatsan V.; Liong, Shie-Yui

    2016-11-01

    The artificial neural network (ANN) is an established technique with a flexible mathematical structure that is capable of identifying complex nonlinear relationships between input and output data. The present study utilizes ANN as a method of statistically downscaling global climate models (GCMs) during the rainy season at meteorological site locations in Bangkok, Thailand. The study illustrates the application of feed-forward back-propagation networks using large-scale predictor variables derived from both the ERA-Interim reanalysis data and present-day/future GCM data. The predictors are first selected over different grid boxes surrounding the Bangkok region and then screened using principal component analysis (PCA) to filter the best-correlated predictors for ANN training. The reanalysis-downscaled results for the present-day climate show good agreement with station precipitation, with a correlation coefficient of 0.8 and a Nash-Sutcliffe efficiency of 0.65. The final downscaled results for four GCMs show an increasing trend of precipitation for the rainy season over Bangkok by the end of the twenty-first century. The extreme values of precipitation determined using statistical indices show strong increases in wetness. These findings will be useful for policy makers in weighing adaptation measures for flooding, such as whether the current drainage network system is sufficient to meet the changing climate, and in planning a range of related adaptation/mitigation measures.
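Two concrete pieces of that pipeline can be sketched with numpy: PCA screening of candidate predictors before network training, and the Nash-Sutcliffe efficiency used to score the downscaled series. The predictor matrix below is random placeholder data (not the ERA-Interim fields), and the ANN itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# placeholder predictor matrix: 200 days x 12 large-scale variables
X = rng.normal(size=(200, 12))
X[:, 0] = 0.9 * X[:, 1] + rng.normal(scale=0.3, size=200)  # a correlated pair

def pca_reduce(X, var_frac=0.9):
    """Project onto the leading principal components explaining var_frac
    of total variance (predictor screening before ANN training)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(explained, var_frac)) + 1
    return Xc @ Vt[:k].T, k

def nash_sutcliffe(obs, sim):
    """Skill score: 1 is perfect; 0 is no better than the climatological mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

scores, k = pca_reduce(X)
```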

  4. Lexical Co-occurrence, Statistical Significance, and Word Association

    CERN Document Server

    Chaudhari, Dipak; Laxman, Srivatsan

    2010-01-01

    Lexical co-occurrence is an important cue for detecting word associations. We present a theoretical framework for discovering statistically significant lexical co-occurrences from a given corpus. In contrast with the prevalent practice of giving weightage to unigram frequencies, we focus only on the documents containing both the terms (of a candidate bigram). We detect biases in span distributions of associated words, while being agnostic to variations in global unigram frequencies. Our framework has the fidelity to distinguish different classes of lexical co-occurrences, based on strengths of the document and corpuslevel cues of co-occurrence in the data. We perform extensive experiments on benchmark data sets to study the performance of various co-occurrence measures that are currently known in literature. We find that a relatively obscure measure called Ochiai, and a newly introduced measure CSA capture the notion of lexical co-occurrence best, followed next by LLR, Dice, and TTest, while another popular m...
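Several of the measures compared above are simple functions of document counts. A sketch with invented counts (the paper's own CSA measure and the span-distribution machinery are not reproduced here): Ochiai is the cosine of the two terms' document-incidence vectors, and Dice and Jaccard are close relatives.

```python
import math

def cooccurrence_scores(n_a, n_b, n_ab, n_docs):
    """Document-level association scores for a candidate bigram (a, b):
    n_a, n_b = documents containing each term, n_ab = documents with both."""
    return {
        "dice": 2 * n_ab / (n_a + n_b),
        "jaccard": n_ab / (n_a + n_b - n_ab),
        "ochiai": n_ab / math.sqrt(n_a * n_b),  # cosine of incidence vectors
        "lift": n_ab * n_docs / (n_a * n_b),    # observed / expected by chance
    }

scores = cooccurrence_scores(n_a=40, n_b=10, n_ab=8, n_docs=1000)
```

Note that, in line with the abstract, all four scores condition on document membership rather than on global unigram frequencies alone.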

  5. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  6. Statistical vs. Economic Significance in Economics and Econometrics: Further comments on McCloskey & Ziliak

    DEFF Research Database (Denmark)

    Engsted, Tom

    reliable estimates, and I argue that significance tests are useful tools in those cases where a statistical model serves as input in the quantification of an economic model. Finally, I provide a specific example from economics - asset return predictability - where the distinction between statistical......I comment on the controversy between McCloskey & Ziliak and Hoover & Siegler on statistical versus economic significance, in the March 2008 issue of the Journal of Economic Methodology. I argue that while McCloskey & Ziliak are right in emphasizing 'real error', i.e. non-sampling error that cannot...... be eliminated through specification testing, they fail to acknowledge those areas in economics, e.g. rational expectations macroeconomics and asset pricing, where researchers clearly distinguish between statistical and economic significance and where statistical testing plays a relatively minor role in model...

  7. Image quantization: statistics and modeling

    Science.gov (United States)

    Whiting, Bruce R.; Muka, Edward

    1998-07-01

    A method for analyzing the effects of quantization, developed for temporal one-dimensional signals, is extended to two- dimensional radiographic images. By calculating the probability density function for the second order statistics (the differences between nearest neighbor pixels) and utilizing its Fourier transform (the characteristic function), the effect of quantization on image statistics can be studied by the use of standard communication theory. The approach is demonstrated by characterizing the noise properties of a storage phosphor computed radiography system and the image statistics of a simple radiographic object (cylinder) and by comparing the model to experimental measurements. The role of quantization noise and the onset of contouring in image degradation are explained.
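The analysis chain described above, second-order statistics (nearest-neighbour pixel differences), their probability density function, and its Fourier transform (the characteristic function), can be sketched on a synthetic image. A smooth ramp plus noise stands in for the radiograph here:

```python
import numpy as np

rng = np.random.default_rng(7)

# synthetic 8-bit "radiograph": horizontal intensity ramp plus noise, quantized
img = (np.linspace(0, 50, 64)[None, :] + rng.normal(0, 2, (64, 64)))
img = img.round().clip(0, 255).astype(np.uint8)

# second-order statistics: differences between horizontal nearest neighbours
diffs = img[:, 1:].astype(int) - img[:, :-1].astype(int)
vals, counts = np.unique(diffs, return_counts=True)
pdf = counts / counts.sum()

# characteristic function of the difference PDF, evaluated on a frequency grid
omega = np.linspace(-np.pi, np.pi, 101)
charfun = np.array([np.sum(pdf * np.exp(1j * w * vals)) for w in omega])
```

Quantization shows up as structure in `pdf` (mass concentrated on a few integer differences) and correspondingly in the decay of `charfun`.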

  8. Fostering Students' Statistical Literacy through Significant Learning Experience

    Science.gov (United States)

    Krishnan, Saras

    2015-01-01

    A major objective of statistics education is to develop students' statistical literacy that enables them to be educated users of data in context. Teaching statistics in today's educational settings is not an easy feat because teachers have a huge task in keeping up with the demands of the new generation of learners. The present day students have…

  9. A tutorial on hunting statistical significance by chasing N

    Directory of Open Access Journals (Sweden)

    Denes Szucs

    2016-09-01

    Full Text Available There is increasing concern about the replicability of studies in psychology and cognitive neuroscience. Hidden data dredging (also called p-hacking is a major contributor to this crisis because it substantially increases Type I error, resulting in a much larger proportion of false positive findings than the usually expected 5%. In order to build better intuition to avoid, detect and criticise some typical problems, here I systematically illustrate the large impact of some easy-to-implement and, perhaps therefore, frequent data dredging techniques on boosting false positive findings. I illustrate several forms of two special cases of data dredging. First, researchers may violate the data collection stopping rules of null hypothesis significance testing by repeatedly checking for statistical significance with various numbers of participants. Second, researchers may group participants post hoc along potential but unplanned independent grouping variables. The first approach 'hacks' the number of participants in studies; the second approach 'hacks' the number of variables in the analysis. I demonstrate the high amount of false positive findings generated by these techniques with data from true null distributions. I also illustrate that it is extremely easy to introduce strong bias into data by very mild selection and re-testing. Similar, usually undocumented, data dredging steps can easily lead to 20-50% or more false positives.
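The first dredging technique, repeatedly checking for significance while adding participants, is easy to demonstrate with a simulation under a true null. All settings below are invented, and a normal approximation stands in for the t test:

```python
import math
import random
import statistics

random.seed(42)

def two_sided_p(sample):
    """Normal-approximation p-value for 'true mean differs from zero'."""
    z = statistics.mean(sample) / (statistics.stdev(sample) / math.sqrt(len(sample)))
    return math.erfc(abs(z) / math.sqrt(2))

def peeking_trial(start=10, step=10, max_n=50, alpha=0.05):
    """Collect null data, test after every batch, stop at the first p < alpha."""
    data = [random.gauss(0, 1) for _ in range(start)]
    while True:
        if two_sided_p(data) < alpha:
            return True          # 'significant' finding on pure noise
        if len(data) >= max_n:
            return False
        data += [random.gauss(0, 1) for _ in range(step)]

# five looks per study (n = 10, 20, 30, 40, 50) instead of one planned test
false_pos = sum(peeking_trial() for _ in range(2000)) / 2000
```

With five looks the realized Type I error rate lands well above the nominal 5%, which is exactly the inflation the abstract describes.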

  10. Statistical modeling of program performance

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available The task of evaluating program performance often arises in the design of computer systems or during iterative compilation. The traditional way to solve this problem is to emulate program execution on the target system. A modern alternative approach evaluates program performance through statistical modeling of its behaviour on the computer under investigation. This statistical method of modeling program performance, called Velocitas, is introduced in this work, together with its implementation in the Adaptor framework. An investigation of the method's effectiveness showed high adequacy of its program performance predictions.

  11. Uncertainty the soul of modeling, probability & statistics

    CERN Document Server

    Briggs, William

    2016-01-01

    This book presents a philosophical approach to probability and probabilistic thinking, considering the underpinnings of probabilistic reasoning and modeling, which effectively underlie everything in data science. The ultimate goal is to call into question many standard tenets and lay the philosophical and probabilistic groundwork and infrastructure for statistical modeling. It is the first book devoted to the philosophy of data aimed at working scientists and calls for a new consideration in the practice of probability and statistics to eliminate what has been referred to as the "Cult of Statistical Significance". The book explains the philosophy of these ideas and not the mathematics, though there are a handful of mathematical examples. The topics are logically laid out, starting with basic philosophy as related to probability, statistics, and science, and stepping through the key probabilistic ideas and concepts, and ending with statistical models. Its jargon-free approach asserts that standard methods, suc...

  12. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access:- information extraction and retrieval;- text classification and clustering;- opinion mining;- comprehension aids (automatic summarization, machine translation, visualization).In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications

  13. Improved model for statistical alignment

    Energy Technology Data Exchange (ETDEWEB)

    Miklos, I.; Toroczkai, Z. (Zoltan)

    2001-01-01

    The statistical approach to molecular sequence evolution involves the stochastic modeling of the substitution, insertion and deletion processes. Substitution has been modeled in a reliable way for more than three decades by using finite Markov processes. Insertion and deletion, however, seem to be more difficult to model, and the recent approaches cannot acceptably deal with multiple insertions and deletions. A new method based on a generating function approach is introduced to describe the multiple insertion process. The presented algorithm computes the approximate joint probability of two sequences in O(l^3) running time, where l is the geometric mean of the sequence lengths.

  14. Statistical bootstrap model and annihilations

    CERN Document Server

    Möhring, H J

    1974-01-01

    The statistical bootstrap model (SBM) describes the decay of single, high-mass, hadronic states (fireballs, clusters) into stable particles. Coupling constants B, one for each isospin multiplet of stable particles, are the only free parameters of the model. They are related to the maximum temperature parameter T_0. The various versions of the SBM can be classified into two groups: full statistical bootstrap models and linear ones. The main results of the model are the following: i) All momentum spectra are isotropic; in particular, the exclusive ones are described by invariant phase space. The inclusive and semi-inclusive single-particle distributions are asymptotically of pure exponential shape; the slope is governed by T_0 only. ii) The model parameter B for pions has been obtained by fitting the multiplicity distribution in pp and pn at rest, and corresponds to T_0 = 0.167 GeV in the full SBM with exotics. The average pi^- multiplicity for the linear and the full SBM (both with exotics) is c...

  15. Tipping points in the arctic: eyeballing or statistical significance?

    Science.gov (United States)

    Carstensen, Jacob; Weydmann, Agata

    2012-02-01

    Arctic ecosystems have experienced and are projected to experience continued large increases in temperature and declines in sea ice cover. It has been hypothesized that small changes in ecosystem drivers can fundamentally alter ecosystem functioning, and that this might be particularly pronounced for Arctic ecosystems. We present a suite of simple statistical analyses to identify changes in the statistical properties of data, emphasizing that changes in the standard error should be considered in addition to changes in mean properties. The methods are exemplified using sea ice extent, and suggest that the loss rate of sea ice accelerated by a factor of ~5 in 1996, as reported in other studies, but that increases in random fluctuations, an early warning signal, were already observed in 1990. We recommend employing the proposed methods more systematically for analyzing tipping points to document effects of climate change in the Arctic.

  16. Determining sexual dimorphism in frog measurement data: integration of statistical significance, measurement error, effect size and biological significance

    Directory of Open Access Journals (Sweden)

    Hayek Lee-Ann C.

    2005-01-01

    Full Text Available Several analytic techniques have been used to determine sexual dimorphism in vertebrate morphological measurement data with no emergent consensus on which technique is superior. A further confounding problem for frog data is the existence of considerable measurement error. To determine dimorphism, we examine a single hypothesis (H0: equal means) for the two groups (females and males). We demonstrate that frog measurement data meet assumptions for clearly defined statistical hypothesis testing with statistical linear models rather than those of exploratory multivariate techniques such as principal components, correlation or correspondence analysis. In order to distinguish biological from statistical significance of hypotheses, we propose a new protocol that incorporates measurement error and effect size. Measurement error is evaluated with a novel measurement error index. Effect size, widely used in the behavioral sciences and in meta-analysis studies in biology, proves to be the most useful single metric to evaluate whether statistically significant results are biologically meaningful. Definitions for a range of small, medium, and large effect sizes specifically for frog measurement data are provided. Examples with measurement data for species of the frog genus Leptodactylus are presented. The new protocol is recommended not only to evaluate sexual dimorphism for frog data but for any animal measurement data for which the measurement error index and observed or a priori effect sizes can be calculated.
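The protocol's central metric, effect size, is cheap to compute. A sketch using Cohen's d with the conventional small/medium/large cutoffs of 0.2/0.5/0.8 (the paper defines its own frog-specific cutoffs, and the snout-vent lengths below are invented, not data from the paper):

```python
import math
import statistics

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp = math.sqrt(((nx - 1) * statistics.variance(x) +
                    (ny - 1) * statistics.variance(y)) / (nx + ny - 2))
    return (statistics.mean(x) - statistics.mean(y)) / sp

# invented snout-vent lengths (mm) for two hypothetical samples
females = [38.2, 39.1, 40.3, 41.0, 39.8, 40.6]
males = [36.9, 37.4, 38.8, 37.9, 38.5, 36.5]

d = cohens_d(females, males)
label = ("large" if abs(d) >= 0.8 else
         "medium" if abs(d) >= 0.5 else
         "small" if abs(d) >= 0.2 else "negligible")
```

A statistically significant t test with a "negligible" d is exactly the case the protocol is designed to flag as biologically unimportant.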

  17. Statistical models for trisomic phenotypes

    Energy Technology Data Exchange (ETDEWEB)

    Lamb, N.E.; Sherman, S.L.; Feingold, E. [Emory Univ., Atlanta, GA (United States)

    1996-01-01

    Certain genetic disorders are rare in the general population but more common in individuals with specific trisomies, which suggests that the genes involved in the etiology of these disorders may be located on the trisomic chromosome. As with all aneuploid syndromes, however, a considerable degree of variation exists within each phenotype so that any given trait is present only among a subset of the trisomic population. We have previously presented a simple gene-dosage model to explain this phenotypic variation and developed a strategy to map genes for such traits. The mapping strategy does not depend on the simple model but works in theory under any model that predicts that affected individuals have an increased likelihood of disomic homozygosity at the trait locus. This paper explores the robustness of our mapping method by investigating what kinds of models give an expected increase in disomic homozygosity. We describe a number of basic statistical models for trisomic phenotypes. Some of these are logical extensions of standard models for disomic phenotypes, and some are more specific to trisomy. Where possible, we discuss genetic mechanisms applicable to each model. We investigate which models and which parameter values give an expected increase in disomic homozygosity in individuals with the trait. Finally, we determine the sample sizes required to identify the increased disomic homozygosity under each model. Most of the models we explore yield detectable increases in disomic homozygosity for some reasonable range of parameter values, usually corresponding to smaller trait frequencies. It therefore appears that our mapping method should be effective for a wide variety of moderately infrequent traits, even though the exact mode of inheritance is unlikely to be known. 21 refs., 8 figs., 1 tab.

  18. Detecting Statistically Significant Communities of Triangle Motifs in Undirected Networks

    Science.gov (United States)

    2016-04-26

    right of the red line correspond to individuals who became associated with the author through marriage . Essentially there are three main clusters...public release. [8] Zachary, W., 1977. “An information flow model for conflict and fission in small groups,” Journal of Anthropological Research 33, pp

  19. Statistically significant data base of rock properties for geothermal use

    Science.gov (United States)

    Koch, A.; Jorand, R.; Clauser, C.

    2009-04-01

    The high risk of failure due to the unknown properties of the target rocks at depth is a major obstacle for the exploration of geothermal energy. In general, the ranges of thermal and hydraulic properties given in compilations of rock properties are too large to be useful to constrain properties at a specific site. To overcome this problem, we study the thermal and hydraulic rock properties of the main rock types in Germany in a statistical approach. An important aspect is the use of data from exploration wells that are largely untapped for the purpose of geothermal exploration. In the current project stage, we have been analyzing mostly Devonian and Carboniferous drill cores from 20 deep boreholes in the region of the Lower Rhine Embayment and the Ruhr area (western North Rhine Westphalia). In total, we selected 230 core samples with a length of up to 30 cm from the core archive of the State Geological Survey. The use of core scanning technology allowed the rapid measurement of thermal conductivity, sonic velocity, and gamma density under dry and water saturated conditions with high resolution for a large number of samples. In addition, we measured porosity, bulk density, and matrix density based on Archimedes' principle and pycnometer analysis. As first results we present arithmetic means, medians and standard deviations characterizing the petrophysical properties and their variability for specific lithostratigraphic units. Bi- and multimodal frequency distributions correspond to the occurrence of different lithologies such as shale, limestone, dolomite, sandstone, siltstone, marlstone, and quartz-schist. In a next step, the data set will be combined with logging data and complementary mineralogical analyses to derive the variation of thermal conductivity with depth. As a final result, this may be used to infer thermal conductivity for boreholes without appropriate core data which were drilled in similar geological settings.

  20. Statistical significance of seasonal warming/cooling trends

    Science.gov (United States)

    Ludescher, Josef; Bunde, Armin; Schellnhuber, Hans Joachim

    2017-04-01

    The question whether a seasonal climate trend (e.g., the increase of summer temperatures in Antarctica in the last decades) is of anthropogenic or natural origin is of great importance for mitigation and adaptation measures alike. The conventional significance analysis assumes that (i) the seasonal climate trends can be quantified by linear regression, (ii) the different seasonal records can be treated as independent records, and (iii) the persistence in each of these seasonal records can be characterized by short-term memory described by an autoregressive process of first order. Here we show that assumption ii is not valid, due to strong intra-annual correlations between the different seasons. We also show that, even in the absence of correlations, for Gaussian white noise, the conventional analysis leads to a strong overestimation of the significance of the seasonal trends, because multiple testing has not been taken into account. In addition, when the data exhibit long-term memory (which is the case in most climate records), assumption iii leads to a further overestimation of the trend significance. Combining Monte Carlo simulations with the Holm-Bonferroni method, we demonstrate how to obtain reliable estimates of the significance of the seasonal climate trends in long-term correlated records. For an illustration, we apply our method to representative temperature records from West Antarctica, which is one of the fastest-warming places on Earth and belongs to the crucial tipping elements in the Earth system.
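The multiple-testing correction invoked above, the Holm-Bonferroni step-down method, is short enough to sketch. The four p-values are invented placeholders for per-season trend tests (winter, spring, summer, autumn); the Monte Carlo part that generates long-term correlated null records is not reproduced here.

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Return a reject flag per p-value under Holm's step-down method."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject

# hypothetical per-season trend p-values: winter, spring, summer, autumn
flags = holm_bonferroni([0.004, 0.030, 0.020, 0.300])
```

Note that only the winter trend survives: 0.020 would pass an uncorrected 5% test but fails the 0.05/3 step, which is the overestimation the abstract warns about.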

  1. Distinguishing between statistical significance and practical/clinical meaningfulness using statistical inference.

    Science.gov (United States)

    Wilkinson, Michael

    2014-03-01

    Decisions about support for predictions of theories in light of data are made using statistical inference. The dominant approach in sport and exercise science is the Neyman-Pearson (N-P) significance-testing approach. When applied correctly it provides a reliable procedure for making dichotomous decisions for accepting or rejecting zero-effect null hypotheses with known and controlled long-run error rates. Type I and type II error rates must be specified in advance and the latter controlled by conducting an a priori sample size calculation. The N-P approach does not provide the probability of hypotheses or indicate the strength of support for hypotheses in light of data, yet many scientists believe it does. Outcomes of analyses allow conclusions only about the existence of non-zero effects, and provide no information about the likely size of true effects or their practical/clinical value. Bayesian inference can show how much support data provide for different hypotheses, and how personal convictions should be altered in light of data, but the approach is complicated by formulating probability distributions about prior subjective estimates of population effects. A pragmatic solution is magnitude-based inference, which allows scientists to estimate the true magnitude of population effects and how likely they are to exceed an effect magnitude of practical/clinical importance, thereby integrating elements of subjective Bayesian-style thinking. While this approach is gaining acceptance, progress might be hastened if scientists appreciate the shortcomings of traditional N-P null hypothesis significance testing.

  2. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal likelihood...
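
    As a minimal illustration of one of the criteria covered, the BIC trades off maximized log-likelihood against model complexity as BIC = k ln n − 2 ln L̂ (lower is better); the two fits below are hypothetical numbers, not an example from the book:

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: k*ln(n) - 2*ln(L_hat).

    Lower values indicate a better trade-off between goodness of fit
    and model complexity.
    """
    return n_params * math.log(n_obs) - 2.0 * log_likelihood

# Two hypothetical fits to the same 100 observations: the complex model
# fits slightly better but pays a larger complexity penalty.
bic_simple = bic(log_likelihood=-120.0, n_params=2, n_obs=100)
bic_complex = bic(log_likelihood=-118.5, n_params=6, n_obs=100)
print(bic_simple < bic_complex)  # → True: the simpler model is preferred
```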

  3. Changing Statistical Significance with the Amount of Information: The Adaptive α Significance Level.

    Science.gov (United States)

    Pérez, María-Eglée; Pericchi, Luis Raúl

    2014-02-01

    We put forward an adaptive alpha which changes with the amount of sample information. This calibration may be interpreted as a Bayes/non-Bayes compromise, and leads to statistical consistency. The calibration can also be used to produce confidence intervals whose sizes take into consideration the amount of observed information.

  4. Changing Statistical Significance with the Amount of Information: The Adaptive α Significance Level☆

    Science.gov (United States)

    Pérez, María-Eglée; Pericchi, Luis Raúl

    2014-01-01

    We put forward an adaptive alpha which changes with the amount of sample information. This calibration may be interpreted as a Bayes/non-Bayes compromise, and leads to statistical consistency. The calibration can also be used to produce confidence intervals whose sizes take into consideration the amount of observed information. PMID:24511173

  5. Statistical significance of trends in monthly heavy precipitation over the US

    KAUST Repository

    Mahajan, Salil

    2011-05-11

    Trends in monthly heavy precipitation, defined by a return period of one year, are assessed for statistical significance in observations and Global Climate Model (GCM) simulations over the contiguous United States using Monte Carlo non-parametric and parametric bootstrapping techniques. The results from the two Monte Carlo approaches are found to be similar to each other, and also to the traditional non-parametric Kendall's τ test, implying the robustness of the approach. Two different observational data-sets are employed to test for trends in monthly heavy precipitation and are found to exhibit consistent results. Both data-sets demonstrate upward trends, one of which is found to be statistically significant at the 95% confidence level. Upward trends similar to observations are observed in some climate model simulations of the twentieth century, but their statistical significance is marginal. For projections of the twenty-first century, a statistically significant upward trend is observed in most of the climate models analyzed. The change in the simulated precipitation variance appears to be more important in the twenty-first century projections than changes in the mean precipitation. Stochastic fluctuations of the climate system are found to dominate monthly heavy precipitation, as some GCM simulations show a downward trend even in the twenty-first century projections when the greenhouse gas forcings are strong. © 2011 Springer-Verlag.
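
    A much-simplified version of such a Monte Carlo significance test for a trend can be sketched as follows; this uses a permutation null for the least-squares slope rather than the paper's specific bootstrapping of heavy-precipitation counts, and the series is synthetic:

```python
import random

def slope(y):
    """Least-squares slope of y against time indices 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    num = sum((i - xbar) * (yi - ybar) for i, yi in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def trend_p_value(y, n_resamples=2000, seed=0):
    """Monte Carlo p-value for a linear trend: shuffle the series to
    build a no-trend null distribution of slopes (two-sided test)."""
    rng = random.Random(seed)
    observed = abs(slope(y))
    z = list(y)
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(z)  # destroys any temporal ordering
        if abs(slope(z)) >= observed:
            count += 1
    return count / n_resamples

# A synthetic 40-year series with an imposed upward trend plus noise
rng = random.Random(42)
series = [0.05 * t + rng.gauss(0, 0.3) for t in range(40)]
print(trend_p_value(series))  # small p-value: trend unlikely under the null
```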

  6. Statistical Compressed Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen

    2011-01-01

    A novel framework of compressed sensing, namely statistical compressed sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution, and achieving accurate reconstruction on average, is introduced. SCS based on Gaussian models is investigated in depth. For signals that follow a single Gaussian model, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS based on sparse models, where N is the signal dimension, and with an optimal decoder implemented via linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional sparsity-oriented CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is u...

  7. Statistical Analysis by Statistical Physics Model for the STOCK Markets

    Science.gov (United States)

    Wang, Tiansong; Wang, Jun; Fan, Bingli

    A new stochastic stock price model of stock markets based on the contact process of statistical physics systems is presented in this paper; the contact model is a continuous-time Markov process, and one interpretation of this model is as a model for the spread of an infection. Through this model, the statistical properties of the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE) are studied. In the present paper, the data of the SSE Composite Index and the data of the SZSE Component Index are analyzed, and the corresponding simulations are carried out by computer. Further, we investigate the statistical properties, fat-tail phenomena, the power-law distributions, and the long memory of returns for these indices. The techniques of the skewness-kurtosis test, the Kolmogorov-Smirnov test, and R/S analysis are applied to study the fluctuation characteristics of the stock price returns.
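
    One of the fat-tail diagnostics underlying the skewness-kurtosis test, the excess kurtosis, can be illustrated on synthetic return series; the volatility-mixture generator below is an assumption for illustration only, not the contact-process model of the paper:

```python
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: ~0 for a Gaussian, > 0 for fat tails."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / (m2 ** 2) - 3.0

rng = random.Random(1)
gaussian = [rng.gauss(0, 1) for _ in range(20000)]
# A crude fat-tailed "return" series: a mixture of calm and volatile days
fat_tailed = [rng.gauss(0, 3 if rng.random() < 0.1 else 1)
              for _ in range(20000)]
print(excess_kurtosis(gaussian))    # near 0
print(excess_kurtosis(fat_tailed))  # clearly positive
```

    For the 10%/90% mixture above the theoretical excess kurtosis is about 5.3, which is the kind of heavy-tailed signature typically reported for index returns.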

  8. Lies, damned lies and statistics: Clinical importance versus statistical significance in research.

    Science.gov (United States)

    Mellis, Craig

    2017-02-28

    Correctly performed and interpreted statistics play a crucial role for both those who 'produce' clinical research, and for those who 'consume' this research. Unfortunately, however, there are many misunderstandings and misinterpretations of statistics by both groups. In particular, there is a widespread lack of appreciation for the severe limitations with p values. This is a particular problem with small sample sizes and low event rates - common features of many published clinical trials. These issues have resulted in increasing numbers of false positive clinical trials (false 'discoveries'), and the well-publicised inability to replicate many of the findings. While chance clearly plays a role in these errors, many more are due to either poorly performed or badly misinterpreted statistics. Consequently, it is essential that whenever p values appear, these need to be accompanied by both 95% confidence limits and effect sizes. These will enable readers to immediately assess the plausible range of results, and whether or not the effect is clinically meaningful.
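
    The recommendation to report confidence limits and effect sizes alongside p values can be sketched as follows; the two samples are hypothetical, and the interval uses a simple normal (z = 1.96) approximation:

```python
import math

def cohens_d(a, b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp

def mean_diff_ci(a, b, z=1.96):
    """Approximate 95% confidence interval for the difference in means."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    diff = ma - mb
    return diff - z * se, diff + z * se

# Hypothetical outcome scores for two small trial arms
treatment = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.4, 5.2]
control = [4.6, 4.9, 4.4, 4.8, 4.5, 4.7, 4.3, 4.6]
print(round(cohens_d(treatment, control), 2))
lo, hi = mean_diff_ci(treatment, control)
print(f"95% CI for the mean difference: ({lo:.2f}, {hi:.2f})")
```

    A reader can now judge both whether the interval excludes zero and whether its whole range is large enough to matter clinically, rather than relying on a bare p value.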

  9. Visualizing statistical models and concepts

    CERN Document Server

    Farebrother, RW

    2002-01-01

    Examines classic algorithms, geometric diagrams, and mechanical principles for enhancing visualization of statistical estimation procedures and mathematical concepts in physics, engineering, and computer programming.

  10. Identifying potentially induced seismicity and assessing statistical significance in Oklahoma and California

    CERN Document Server

    McClure, Mark; Chiu, Kitkwan; Ranganath, Rajesh

    2016-01-01

    In this study, we develop a statistical method for identifying induced seismicity from large datasets and apply the method to decades of wastewater disposal and seismicity data in California and Oklahoma. The method is robust against a variety of potential pitfalls. The study regions are divided into gridblocks. We use a longitudinal study design, seeking associations between seismicity and wastewater injection along time-series within each gridblock. The longitudinal design helps control for non-random application of wastewater injection. We define a statistical model that is flexible enough to describe the seismicity observations, which have temporal correlation and high kurtosis. In each gridblock, we find the maximum likelihood estimate for a model parameter that relates induced seismicity hazard to total volume of wastewater injected each year. To assess significance, we compute likelihood ratio test statistics in each gridblock and each state, California and Oklahoma. Resampling is used to empirically d...

  11. EasyGene – a prokaryotic gene finder that ranks ORFs by statistical significance

    Directory of Open Access Journals (Sweden)

    Larsen Thomas

    2003-06-01

    Abstract Background Contrary to other areas of sequence analysis, a measure of statistical significance of a putative gene has not been devised to help in discriminating real genes from the masses of random Open Reading Frames (ORFs) in prokaryotic genomes. Therefore, many genomes have too many short ORFs annotated as genes. Results In this paper, we present a new automated gene-finding method, EasyGene, which estimates the statistical significance of a predicted gene. The gene finder is based on a hidden Markov model (HMM) that is automatically estimated for a new genome. Using extensions of similarities in Swiss-Prot, a high quality training set of genes is automatically extracted from the genome and used to estimate the HMM. Putative genes are then scored with the HMM, and based on score and length of an ORF, the statistical significance is calculated. The measure of statistical significance for an ORF is the expected number of ORFs in one megabase of random sequence at the same significance level or better, where the random sequence has the same statistics as the genome in the sense of a third order Markov chain. Conclusions The result is a flexible gene finder whose overall performance matches or exceeds other methods. The entire pipeline of computer processing from the raw input of a genome or set of contigs to a list of putative genes with significance is automated, making it easy to apply EasyGene to newly sequenced organisms. EasyGene with pre-trained models can be accessed at http://www.cbs.dtu.dk/services/EasyGene.

  12. Robust statistical methods for significance evaluation and applications in cancer driver detection and biomarker discovery

    DEFF Research Database (Denmark)

    Madsen, Tobias

    2017-01-01

    are used to scale the aforementioned driver detection methods to a dataset consisting of more than 2,000 cancer genomes. The sizes and dimensionalities of genomic data sets, be it a large number of genes or multiple heterogeneous data sources, pose both great statistical opportunities and challenges....... This distribution can be learned across the entire set of genes and then be used to improve inference on the level of the individual gene. A practical way to implement this insight is using empirical Bayes. This idea is one of the main statistical underpinnings of the present work. The thesis consists of three main...... manuscripts as well as two supplementary manuscripts. In the first manuscript we explore efficient significance evaluation for models defined with factor graphs. Factor graphs are a class of graphical models encompassing both Bayesian networks and Markov models. We specifically develop a saddle...

  13. Fermi breakup and the statistical multifragmentation model

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, B.V., E-mail: brett@ita.br [Departamento de Fisica, Instituto Tecnologico de Aeronautica - CTA, 12228-900 Sao Jose dos Campos (Brazil); Donangelo, R. [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Cidade Universitaria, CP 68528, 21941-972, Rio de Janeiro (Brazil); Instituto de Fisica, Facultad de Ingenieria, Universidad de la Republica, Julio Herrera y Reissig 565, 11.300 Montevideo (Uruguay); Souza, S.R. [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Cidade Universitaria, CP 68528, 21941-972, Rio de Janeiro (Brazil); Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Av. Bento Goncalves 9500, CP 15051, 91501-970, Porto Alegre (Brazil); Lynch, W.G.; Steiner, A.W.; Tsang, M.B. [Joint Institute for Nuclear Astrophysics, National Superconducting Cyclotron Laboratory and the Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States)

    2012-02-15

    We demonstrate the equivalence of a generalized Fermi breakup model, in which densities of excited states are taken into account, to the microcanonical statistical multifragmentation model used to describe the disintegration of highly excited fragments of nuclear reactions. We argue that such a model better fulfills the hypothesis of statistical equilibrium than the Fermi breakup model generally used to describe statistical disintegration of light mass nuclei.

  14. Statistical significance of variables driving systematic variation in high-dimensional data

    Science.gov (United States)

    Chung, Neo Christopher; Storey, John D.

    2015-01-01

    Motivation: There are a number of well-established methods such as principal component analysis (PCA) for automatically capturing systematic variation due to latent variables in large-scale genomic data. PCA and related methods may directly provide a quantitative characterization of a complex biological variable that is otherwise difficult to precisely define or model. An unsolved problem in this context is how to systematically identify the genomic variables that are drivers of systematic variation captured by PCA. Principal components (PCs) (and other estimates of systematic variation) are directly constructed from the genomic variables themselves, making measures of statistical significance artificially inflated when using conventional methods due to over-fitting. Results: We introduce a new approach called the jackstraw that allows one to accurately identify genomic variables that are statistically significantly associated with any subset or linear combination of PCs. The proposed method can greatly simplify complex significance testing problems encountered in genomics and can be used to identify the genomic variables significantly associated with latent variables. Using simulation, we demonstrate that our method attains accurate measures of statistical significance over a range of relevant scenarios. We consider yeast cell-cycle gene expression data, and show that the proposed method can be used to straightforwardly identify genes that are cell-cycle regulated with an accurate measure of statistical significance. We also analyze gene expression data from post-trauma patients, allowing the gene expression data to provide a molecularly driven phenotype. Using our method, we find a greater enrichment for inflammatory-related gene sets compared to the original analysis that uses a clinically defined, although likely imprecise, phenotype. The proposed method provides a useful bridge between large-scale quantifications of systematic variation and gene
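
    A much-simplified sketch of the jackstraw idea on synthetic data: permute a few variables, re-estimate the PCs, and pool the permuted variables' association statistics into an empirical null. The original method's F-statistics are replaced here, for brevity, by squared correlations with PC1, so this is an illustration of the resampling scheme rather than the published algorithm:

```python
import numpy as np

def top_pc(X):
    """First principal component (loadings over columns) of row-centered X."""
    Xc = X - X.mean(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[0]

def jackstraw_pvalues(X, s=10, B=100, seed=0):
    """Simplified jackstraw: empirical p-values for the association of
    each row (variable) with PC1, using permuted rows to build the null."""
    rng = np.random.default_rng(seed)
    pc = top_pc(X)
    Xc = X - X.mean(axis=1, keepdims=True)
    obs = (Xc @ pc) ** 2 / (Xc ** 2).sum(axis=1)  # squared correlation per row
    null = []
    for _ in range(B):
        Y = X.copy()
        rows = rng.choice(X.shape[0], size=s, replace=False)
        for r in rows:
            Y[r] = rng.permutation(Y[r])  # break row r's link to the PCs
        pc_b = top_pc(Y)                  # re-estimate PC1 with nulls inside
        Yc = Y - Y.mean(axis=1, keepdims=True)
        stat = (Yc[rows] @ pc_b) ** 2 / (Yc[rows] ** 2).sum(axis=1)
        null.extend(stat)
    null = np.sort(null)
    # Empirical p-value: fraction of null statistics >= observed statistic
    return 1.0 - np.searchsorted(null, obs, side="left") / len(null)

# Synthetic data: the first 20 of 100 variables follow a latent pattern
rng = np.random.default_rng(1)
latent = np.sin(np.linspace(0, 2 * np.pi, 50))
X = rng.normal(0, 1, size=(100, 50))
X[:20] += 3 * latent
p = jackstraw_pvalues(X)
print(p[:20].mean() < p[20:].mean())  # signal rows get smaller p-values
```

    The key point, as in the abstract, is that the null statistics come from variables that were permuted *before* the PCs were re-estimated, which avoids the over-fitting that inflates significance when PCs are tested against the very variables that built them.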

  15. Understanding the Sampling Distribution and Its Use in Testing Statistical Significance.

    Science.gov (United States)

    Breunig, Nancy A.

    Despite the increasing criticism of statistical significance testing by researchers, particularly in the publication of the 1994 American Psychological Association's style manual, statistical significance test results are still popular in journal articles. For this reason, it remains important to understand the logic of inferential statistics. A…

  16. Statistical assessment of predictive modeling uncertainty

    Science.gov (United States)

    Barzaghi, Riccardo; Marotta, Anna Maria

    2017-04-01

    When the results of geophysical models are compared with data, the uncertainties of the model are typically disregarded. We propose a method for defining the uncertainty of a geophysical model based on a numerical procedure that estimates the empirical auto- and cross-covariances of model-estimated quantities. These empirical values are then fitted by proper covariance functions and used to compute the covariance matrix associated with the model predictions. The method is tested using a geophysical finite element model in the Mediterranean region. Using a novel χ² analysis in which both data and model uncertainties are taken into account, the model's estimated tectonic strain pattern due to the Africa-Eurasia convergence in the area that extends from the Calabrian Arc to the Alpine domain is compared with that estimated from GPS velocities, while taking into account the model uncertainty through its covariance structure and the covariance of the GPS estimates. The results indicate that including the estimated model covariance in the testing procedure leads to lower observed χ² values that have better statistical significance and might help sharpen the identification of the best-fitting geophysical models.

  17. Statistical modelling for falls count data.

    Science.gov (United States)

    Ullah, Shahid; Finch, Caroline F; Day, Lesley

    2010-03-01

    Falls and their injury outcomes have count distributions that are highly skewed toward the right with clumping at zero, posing analytical challenges. Different modelling approaches have been used in the published literature to describe falls count distributions, often without consideration of the underlying statistical and modelling assumptions. This paper compares the use of modified Poisson and negative binomial (NB) models as alternatives to Poisson (P) regression, for the analysis of fall outcome counts. Four different count-based regression models (P, NB, zero-inflated Poisson (ZIP), zero-inflated negative binomial (ZINB)) were each individually fitted to four separate fall count datasets from Australia, New Zealand and the United States. The finite mixtures of P and NB regression models were also compared to the standard NB model. Both analytical (F, Vuong and bootstrap tests) and graphical approaches were used to select and compare models. Simulation studies assessed the size and power of each model fit. This study confirms that falls count distributions are over-dispersed, but not dispersed due to excess zero counts or a heterogeneous population. Accordingly, the P model generally provided the poorest fit to all datasets. The fit improved significantly with the NB and both zero-inflated models. The fit was also improved with the NB model, compared to finite mixtures of both P and NB regression models. Although there was little difference in fit between the NB and ZINB models, in the interests of parsimony it is recommended that future studies involving modelling of falls count data routinely use the NB models in preference to the P, ZINB or finite mixture distributions. The fact that these conclusions apply across four separate datasets from four different samples of older people participating in studies of different methodology adds strength to this general guiding principle.
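
    The over-dispersion that motivates the NB model over the Poisson can be checked with a variance-to-mean ratio, which is approximately 1 under a Poisson model and greater than 1 for over-dispersed counts. The falls counts below are simulated; the two-rate mixture is just a crude way to induce over-dispersion for illustration:

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw one Poisson variate using Knuth's multiplication algorithm."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def dispersion_ratio(counts):
    """Variance-to-mean ratio: ~1 under a Poisson model, > 1 if over-dispersed."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return var / m

rng = random.Random(0)
# Simulated falls counts from a homogeneous Poisson(2) population
poisson_like = [poisson_sample(2.0, rng) for _ in range(5000)]
# Over-dispersed counts: each subject's rate comes from one of two groups
overdispersed = []
for _ in range(5000):
    rate = rng.choice([0.2, 3.8])  # low-risk vs high-risk fallers
    overdispersed.append(poisson_sample(rate, rng))
print(dispersion_ratio(poisson_like))   # close to 1
print(dispersion_ratio(overdispersed))  # well above 1
```

    When the second kind of pattern appears in real falls data, a Poisson fit will understate standard errors, which is exactly why the paper recommends NB-type models.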

  18. Statistical Decision-Tree Models for Parsing

    CERN Document Server

    Magerman, D M

    1995-01-01

    Syntactic natural language parsers have shown themselves to be inadequate for processing highly-ambiguous large-vocabulary text, as is evidenced by their poor performance on domains like the Wall Street Journal, and by the movement away from parsing-based approaches to text-processing in general. In this paper, I describe SPATTER, a statistical parser based on decision-tree learning techniques which constructs a complete parse for every sentence and achieves accuracy rates far better than any published result. This work is based on the following premises: (1) grammars are too complex and detailed to develop manually for most interesting domains; (2) parsing models must rely heavily on lexical and contextual information to analyze sentences accurately; and (3) existing n-gram modeling techniques are inadequate for parsing models. In experiments comparing SPATTER with IBM's computer manuals parser, SPATTER significantly outperforms the grammar-based parser. Evaluating SPATTER against the Penn Treebank Wall ...

  19. ARSENIC CONTAMINATION IN GROUNDWATER: A STATISTICAL MODELING

    Directory of Open Access Journals (Sweden)

    Palas Roy

    2013-01-01

    High arsenic in natural groundwater in most of the tubewells of the Purbasthali-Block II area of Burdwan district (W.B., India) has recently come into focus as a serious environmental concern. This paper intends to illustrate the statistical modeling of the arsenic-contaminated groundwater, to identify the interrelation of the arsenic content with other participating groundwater parameters, so that the arsenic contamination level can easily be predicted by analyzing only such parameters. Multivariate analysis of the groundwater samples collected from the 132 tubewells of this contaminated region shows that three variable parameters are significantly related with the arsenic. Based on these relationships, a multiple linear regression model has been developed that estimates the arsenic contamination by measuring these three predictor parameters of the groundwater variables in the contaminated aquifer. This model could also be a suggestive tool while designing the arsenic removal scheme for any affected groundwater.

  20. Statistical modelling of fish stocks

    DEFF Research Database (Denmark)

    Kvist, Trine

    1999-01-01

    for modelling the dynamics of a fish population is suggested. A new approach is introduced to analyse the sources of variation in age composition data, which is one of the most important sources of information in the cohort-based models for estimation of stock abundances and mortalities. The approach combines...... and it is argued that an approach utilising stochastic differential equations might be advantageous in fish stock assessments....


  2. Statistical modelling for ship propulsion efficiency

    DEFF Research Database (Denmark)

    Petersen, Jóan Petur; Jacobsen, Daniel J.; Winther, Ole

    2012-01-01

    This paper presents a state-of-the-art systems approach to statistical modelling of fuel efficiency in ship propulsion, and also a novel and publicly available data set of high quality sensory data. Two statistical model approaches are investigated and compared: artificial neural networks...

  3. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition: "An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ." (Choice). "This is an important book, which will appeal to statisticians working on survival analysis problems." (Biometrics). "A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook." (Statistics in Medicine). The statistical analysis of lifetime or response time data is a key tool in engineering,

  4. Statistical Modeling of Bivariate Data.

    Science.gov (United States)

    1982-08-01

    Keywords: joint density-quantile function, dependence-density, non-parametric bivariate density estimation, entropy, exponential... Estimation by autoregressive or exponential model estimators with maximum entropy properties is investigated in this thesis. The results provide important and useful procedures for nonparametric bivariate density estimation. The thesis discusses estimators of the entropy H(d) of the dependence-density which seem to me

  5. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…

  6. Statistical tests of simple earthquake cycle models

    Science.gov (United States)

    DeVries, Phoebe M. R.; Evans, Eileen L.

    2016-12-01

    A central goal of observing and modeling the earthquake cycle is to forecast when a particular fault may generate an earthquake: a fault late in its earthquake cycle may be more likely to generate an earthquake than a fault early in its earthquake cycle. Models that can explain geodetic observations throughout the entire earthquake cycle may be required to gain a more complete understanding of relevant physics and phenomenology. Previous efforts to develop unified earthquake models for strike-slip faults have largely focused on explaining both preseismic and postseismic geodetic observations available across a few faults in California, Turkey, and Tibet. An alternative approach leverages the global distribution of geodetic and geologic slip rate estimates on strike-slip faults worldwide. Here we use the Kolmogorov-Smirnov test for similarity of distributions to infer, in a statistically rigorous manner, viscoelastic earthquake cycle models that are inconsistent with 15 sets of observations across major strike-slip faults. We reject a large subset of two-layer models incorporating Burgers rheologies at a significance level of α = 0.05 (those with long-term Maxwell viscosities ηM ≤ 4.6 × 10²⁰ Pa s) but cannot reject models on the basis of transient Kelvin viscosity ηK. Finally, we examine the implications of these results for the predicted earthquake cycle timing of the 15 faults considered and compare these predictions to the geologic and historical record.
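
    The two-sample Kolmogorov-Smirnov statistic used in such model-rejection tests is simply the maximum distance between two empirical CDFs. The "observed" and model-predicted samples below are synthetic stand-ins, not the 15 fault datasets of the paper:

```python
import random
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:  # the max is attained at a sample point
        fa = bisect_right(a, x) / len(a)
        fb = bisect_right(b, x) / len(b)
        d = max(d, abs(fa - fb))
    return d

rng = random.Random(0)
# Hypothetical geodetic/geologic slip-rate ratios and two model predictions
observed = [rng.gauss(1.0, 0.1) for _ in range(15)]
model_a = [rng.gauss(1.0, 0.1) for _ in range(500)]  # consistent with data
model_b = [rng.gauss(0.6, 0.1) for _ in range(500)]  # systematically low
print(ks_statistic(observed, model_a))  # small distance
print(ks_statistic(observed, model_b))  # large distance: candidate for rejection
```

    In practice the statistic would be compared against the KS null distribution (or a resampled null) at the chosen significance level, e.g. α = 0.05, to decide rejection.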

  7. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed by using face shape statistical model approach and pose parameters are represented by trigonometric functions. The face shape statistical model is firstly built by analyzing the face shapes from different people under varying poses. The shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function is constructed between face image and face pose by linearly relating different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  8. Accelerated life models modeling and statistical analysis

    CERN Document Server

    Bagdonavicius, Vilijandas

    2001-01-01

    Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia

  9. Multistructure Statistical Model Applied To Factor Analysis

    Science.gov (United States)

    Bentler, Peter M.

    1976-01-01

    A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)

  10. Topology for statistical modeling of petascale data.

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio (University of Utah, Salt Lake City, UT); Mascarenhas, Ajith Arthur; Rusek, Korben (Texas A& M University, College Station, TX); Bennett, Janine Camille; Levine, Joshua (University of Utah, Salt Lake City, UT); Pebay, Philippe Pierre; Gyulassy, Attila (University of Utah, Salt Lake City, UT); Thompson, David C.; Rojas, Joseph Maurice (Texas A& M University, College Station, TX)

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  11. Semantic Importance Sampling for Statistical Model Checking

    Science.gov (United States)

    2015-01-16

    approach called Statistical Model Checking (SMC) [16], which relies on Monte-Carlo-based simulations to solve this verification task more scalably...Conclusion Statistical model checking (SMC) is a prominent approach for rigorous analysis of stochastic systems using Monte-Carlo simulations. In this...Monte-Carlo simulations, for computing the bounded probability that a specific event occurs during a stochastic system’s execution. Estimating the
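
    The record above describes SMC as Monte-Carlo estimation of a bounded event probability. A minimal sketch of that idea, under assumptions not taken from the paper: the stochastic system is a hypothetical symmetric random walk, and the number of simulation runs is chosen from the Hoeffding bound, one common way SMC tools pick sample sizes.

```python
import math
import random

def hoeffding_samples(eps, delta):
    # Number of i.i.d. runs so that |estimate - p| <= eps
    # holds with probability >= 1 - delta (Hoeffding bound).
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))

def run_once(rng, steps=50, threshold=5):
    # Hypothetical stochastic system: a +/-1 symmetric random walk.
    # Event of interest: the walk reaches `threshold` within `steps` steps.
    x = 0
    for _ in range(steps):
        x += 1 if rng.random() < 0.5 else -1
        if x >= threshold:
            return True
    return False

def smc_estimate(eps=0.01, delta=0.05, seed=1):
    rng = random.Random(seed)
    n = hoeffding_samples(eps, delta)
    hits = sum(run_once(rng) for _ in range(n))
    return hits / n

p_hat = smc_estimate()
print(p_hat)
```

    With eps = 0.01 and delta = 0.05 the bound asks for 18,445 runs, which is the scalability trade-off SMC makes: cost grows with the required precision, not with the size of the state space.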

  12. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

    We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe...

  13. Probability and Statistics in Sensor Performance Modeling

    Science.gov (United States)

    2010-12-01

    transformed Rice-Nakagami distribution...acoustic or electromagnetic waves are scattered by both objects and turbulent wind. A version of the Rice-Nakagami model (specifically with a...Gaussian, lognormal, exponential, gamma, and the transformed Rice-Nakagami, as well as a discrete model. (Other examples of statistical models

  14. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (no Danish abstract available) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data...

  15. Matrix Tricks for Linear Statistical Models

    CERN Document Server

    Puntanen, Simo; Styan, George PH

    2011-01-01

    In teaching linear statistical models to first-year graduate students or to final-year undergraduate students there is no way to proceed smoothly without matrices and related concepts of linear algebra; their use is really essential. Our experience is that making some particular matrix tricks very familiar to students can substantially increase their insight into linear statistical models (and also multivariate statistical analysis). In matrix algebra, there are handy, sometimes even very simple "tricks" which simplify and clarify the treatment of a problem - both for the student and

  16. EasyGene – a prokaryotic gene finder that ranks ORFs by statistical significance

    DEFF Research Database (Denmark)

    Larsen, Thomas Schou; Krogh, Anders Stærmose

    2003-01-01

    in Swiss-Prot, a high quality training set of genes is automatically extracted from the genome and used to estimate the HMM. Putative genes are then scored with the HMM, and based on score and length of an ORF, the statistical significance is calculated. The measure of statistical significance for an ORF...... is the expected number of ORFs in one megabase of random sequence at the same significance level or better, where the random sequence has the same statistics as the genome in the sense of a third order Markov chain.Conclusions: The result is a flexible gene finder whose overall performance matches or exceeds...

  17. No difference found in time to publication by statistical significance of trial results: a methodological review

    Science.gov (United States)

    Jefferson, L; Cooper, E; Hewitt, C; Torgerson, T; Cook, L; Tharmanathan, P; Cockayne, S; Torgerson, D

    2016-01-01

    Objective Time-lag from study completion to publication is a potential source of publication bias in randomised controlled trials. This study sought to update the evidence base by identifying the effect of the statistical significance of research findings on time to publication of trial results. Design Literature searches were carried out in four general medical journals from June 2013 to June 2014 inclusive (BMJ, JAMA, the Lancet and the New England Journal of Medicine). Setting Methodological review of four general medical journals. Participants Original research articles presenting the primary analyses from phase 2, 3 and 4 parallel-group randomised controlled trials were included. Main outcome measures Time from trial completion to publication. Results The median time from trial completion to publication was 431 days (n = 208, interquartile range 278–618). A multivariable adjusted Cox model found no statistically significant difference in time to publication for trials reporting positive or negative results (hazard ratio: 0.86, 95% CI 0.64 to 1.16, p = 0.32). Conclusion In contrast to previous studies, this review did not demonstrate the presence of time-lag bias in time to publication. This may be a result of these articles being published in four high-impact general medical journals that may be more inclined to publish rapidly, whatever the findings. Further research is needed to explore the presence of time-lag bias in lower quality studies and lower impact journals. PMID:27757242

  18. Distributions with given marginals and statistical modelling

    CERN Document Server

    Fortiana, Josep; Rodriguez-Lallena, José

    2002-01-01

    This book contains a selection of the papers presented at the meeting `Distributions with given marginals and statistical modelling', held in Barcelona (Spain), July 17-20, 2000. In 24 chapters, this book covers topics such as the theory of copulas and quasi-copulas, the theory and compatibility of distributions, models for survival distributions and other well-known distributions, time series, categorical models, definition and estimation of measures of dependence, monotonicity and stochastic ordering, shape and separability of distributions, hidden truncation models, diagonal families, orthogonal expansions, tests of independence, and goodness of fit assessment. These topics share the use and properties of distributions with given marginals, this being the fourth specialised text on this theme. The innovative aspect of the book is the inclusion of statistical aspects such as modelling, Bayesian statistics, estimation, and tests.

  19. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    Science.gov (United States)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
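
    The bootstrap procedure described above can be sketched as follows. The data, bin edges, and resample count are illustrative, and only the Euclidean distance, the statistic the study found most suitable, is implemented here.

```python
import random

def euclidean(h1, h2):
    # Euclidean distance between two (normalized) histograms.
    return sum((a - b) ** 2 for a, b in zip(h1, h2)) ** 0.5

def histogram(values, edges):
    h = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                h[i] += 1
                break
    n = len(values)
    return [c / n for c in h]  # normalize so histogram sizes are comparable

def bootstrap_pvalue(x, y, edges, n_boot=2000, seed=0):
    # Null hypothesis: both samples come from the same population.
    # Resample both groups from the pooled data and count how often the
    # resampled distance reaches the observed distance.
    rng = random.Random(seed)
    observed = euclidean(histogram(x, edges), histogram(y, edges))
    pooled = x + y
    count = 0
    for _ in range(n_boot):
        bx = [rng.choice(pooled) for _ in range(len(x))]
        by = [rng.choice(pooled) for _ in range(len(y))]
        if euclidean(histogram(bx, edges), histogram(by, edges)) >= observed:
            count += 1
    return count / n_boot

rng = random.Random(42)
edges = [0, 1, 2, 3, 4, 5]
same_a = [rng.uniform(0, 5) for _ in range(300)]
same_b = [rng.uniform(0, 5) for _ in range(300)]
shifted = [min(rng.uniform(0, 5) + 1.0, 4.99) for _ in range(300)]

p_same = bootstrap_pvalue(same_a, same_b, edges)
p_diff = bootstrap_pvalue(same_a, shifted, edges)
print(p_same, p_diff)
```

    As in the paper, the approach makes few distributional assumptions: only the choice of distance statistic and binning is up to the analyst.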

  20. Performance modeling, loss networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi

    2009-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of understanding the phenomenon of statistical multiplexing. The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the important ideas of Palm distributions associated with traffic models and their role in performance measures. Also presented are recent ideas of large buffer, and many sources asymptotics that play an important role in understanding statistical multiplexing.

  1. Statistical Modeling for Radiation Hardness Assurance

    Science.gov (United States)

    Ladbury, Raymond L.

    2014-01-01

    We cover the models and statistics associated with single event effects (and total ionizing dose), why we need them, and how to use them: what models are used, what errors exist in real test data, and what the models allow us to say about the DUT. In addition, we cover how to use other sources of data, such as historical, heritage, and similar-part data, and how to apply experience, physics, and expert opinion to the analysis. Also included are concepts of Bayesian statistics, data fitting, and bounding rates.

  2. Simple statistical model for branched aggregates

    DEFF Research Database (Denmark)

    Lemarchand, Claire; Hansen, Jesper Schmidt

    2015-01-01

    We propose a statistical model that can reproduce the size distribution of any branched aggregate, including amylopectin, dendrimers, molecular clusters of monoalcohols, and asphaltene nanoaggregates. It is based on the conditional probability for one molecule to form a new bond with a molecule, given that it already has bonds with others. The model is applied here to asphaltene nanoaggregates observed in molecular dynamics simulations of Cooee bitumen. The variation with temperature of the probabilities deduced from this model is discussed in terms of statistical mechanics arguments. The relevance of the statistical model in the case of asphaltene nanoaggregates is checked by comparing the predicted value of the probability for one molecule to have exactly i bonds with the same probability directly measured in the molecular dynamics simulations. The agreement is satisfactory.

  3. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  4. Statistical Model Checking for Stochastic Hybrid Systems

    DEFF Research Database (Denmark)

    David, Alexandre; Du, Dehui; Larsen, Kim Guldstrand

    2012-01-01

    This paper presents novel extensions and applications of the UPPAAL-SMC model checker. The extensions allow for statistical model checking of stochastic hybrid systems. We show how our race-based stochastic semantics extends to networks of hybrid systems, and indicate the integration technique ap...

  5. Dielectronic recombination rate in statistical model

    OpenAIRE

    Demura A.V.; Leontyev D.S.; Lisitsa V.S.; Shurigyn V.A.

    2017-01-01

    The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on an idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates that are compared with the modern quantum mechanical calculations. The results are important for current studies of thermonuclear...

  6. Dielectronic recombination rate in statistical model

    Directory of Open Access Journals (Sweden)

    Demura A.V.

    2017-01-01

    Full Text Available The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on an idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates that are compared with the modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with the tungsten impurities.

  7. Dielectronic recombination rate in statistical model

    Science.gov (United States)

    Demura, A. V.; Leontyev, D. S.; Lisitsa, V. S.; Shurigyn, V. A.

    2016-12-01

    The dielectronic recombination rate of multielectron ions was calculated by means of the statistical approach. It is based on an idea of collective excitations of atomic electrons with the local plasma frequencies. These frequencies are expressed via the Thomas-Fermi model electron density distribution. The statistical approach provides fast computation of DR rates that are compared with the modern quantum mechanical calculations. The results are important for current studies of thermonuclear plasmas with the tungsten impurities.

  8. Application of universal kriging for estimation of earthquake ground motion: Statistical significance of results

    Energy Technology Data Exchange (ETDEWEB)

    Carr, J.R.; Roberts, K.P.

    1989-02-01

    Universal kriging is compared with ordinary kriging for estimation of earthquake ground motion. Ordinary kriging is based on a stationary random function model; universal kriging is based on a nonstationary random function model representing first-order drift. Accuracy of universal kriging is compared with that for ordinary kriging; cross-validation is used as the basis for comparison. Hypothesis testing on these results shows that accuracy obtained using universal kriging is not significantly different from accuracy obtained using ordinary kriging. Tests based on normal distribution assumptions are applied to errors measured in the cross-validation procedure; t and F tests reveal no evidence to suggest universal and ordinary kriging are different for estimation of earthquake ground motion. Nonparametric hypothesis tests applied to these errors and jackknife statistics yield the same conclusion: universal and ordinary kriging are not significantly different for this application as determined by a cross-validation procedure. These results are based on application to four independent data sets (four different seismic events).
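
    The cross-validation-plus-t-test logic of this record can be illustrated without implementing kriging itself. In the sketch below the two "methods" are stand-in predictors (sample mean versus sample median) and the data are synthetic; the paired t statistic on per-point leave-one-out errors is the same kind of comparison the study applies to the two kriging variants.

```python
import random
import statistics

def loo_errors(data, predictor):
    # Leave-one-out cross-validation: predict each point from the rest
    # and record the absolute error.
    errs = []
    for i, y in enumerate(data):
        rest = data[:i] + data[i + 1:]
        errs.append(abs(y - predictor(rest)))
    return errs

rng = random.Random(7)
data = [rng.gauss(10.0, 2.0) for _ in range(60)]

errs_mean = loo_errors(data, statistics.mean)      # stand-in "method A"
errs_median = loo_errors(data, statistics.median)  # stand-in "method B"

# Paired t statistic on the per-point error differences; compare |t|
# against roughly 2 for a 5% two-sided test with this many points.
d = [a - b for a, b in zip(errs_mean, errs_median)]
t = statistics.mean(d) / (statistics.stdev(d) / len(d) ** 0.5)
print(t)
```

    Pairing the errors point-by-point, rather than comparing the two error samples independently, is what makes the test sensitive to small but systematic accuracy differences.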

  9. Mesoscopic full counting statistics and exclusion models

    Science.gov (United States)

    Roche, P.-E.; Derrida, B.; Douçot, B.

    2005-02-01

    We calculate the distribution of current fluctuations in two simple exclusion models. Although these models are classical, we recover even for small systems such as a simple or a double barrier, the same distribution of current as given by traditional formalisms for quantum mesoscopic conductors. Due to their simplicity, the full counting statistics in exclusion models can be reduced to the calculation of the largest eigenvalue of a matrix, the size of which is the number of internal configurations of the system. As examples, we derive the shot noise power and higher order statistics of current fluctuations (skewness, full counting statistics, ...) of various conductors, including multiple barriers, diffusive islands between tunnel barriers and diffusive media. Special attention is dedicated to the third cumulant, whose experimental measurability has recently been demonstrated.
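
    The reduction of full counting statistics to a largest-eigenvalue problem can be shown on the smallest possible case: a single-site exclusion model with two internal configurations (site empty or full), in-rate alpha and counted out-rate beta. The rates below are arbitrary illustrative values, not taken from the paper; current cumulants are derivatives of the largest eigenvalue of the tilted generator at zero counting field.

```python
import math

def scgf(s, alpha=1.0, beta=2.0):
    # Largest eigenvalue of the 2x2 tilted generator
    #   M(s) = [[-alpha, beta*e^s], [alpha, -beta]]
    # for a single-site exclusion model; exits are counted with weight e^s.
    tr = -(alpha + beta)
    det = alpha * beta * (1.0 - math.exp(s))
    return (tr + math.sqrt(tr * tr - 4.0 * det)) / 2.0

def cumulant(n, h=1e-3):
    # n-th current cumulant = n-th derivative of the scaled cumulant
    # generating function at s = 0 (central finite differences).
    if n == 1:
        return (scgf(h) - scgf(-h)) / (2.0 * h)
    if n == 2:
        return (scgf(h) - 2.0 * scgf(0.0) + scgf(-h)) / (h * h)
    raise ValueError("only the first two cumulants are implemented here")

mean_current = cumulant(1)  # analytic value: alpha*beta/(alpha+beta) = 2/3
shot_noise = cumulant(2)    # analytic value: 10/27 for these rates
print(mean_current, shot_noise)
```

    For larger exclusion models the same recipe applies; only the tilted matrix grows, with one row and column per internal configuration.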

  10. The statistical significance of the N-S asymmetry of solar activity revisited

    CERN Document Server

    Carbonell, M; Oliver, R; Ballester, J L

    2007-01-01

    The main aim of this study is to point out the difficulties found when trying to assess the statistical significance of the North-South asymmetry (hereafter SSNSA) of the most usually considered time series of solar activity. First of all, we distinguish between solar activity time series composed by integer or non-integer and dimensionless data, or composed by non-integer and dimensional data. For each of these cases, we discuss the most suitable statistical tests which can be applied and highlight the difficulties to obtain valid information about the statistical significance of solar activity time series. Our results suggest that, apart from the need to apply the suitable statistical tests, other effects such as the data binning, the considered units and the need, in some tests, to consider groups of data, affect substantially the determination of the statistical significance of the asymmetry. Our main conclusion is that the assessment of the statistical significance of the N-S asymmetry of solar activity ...

  11. Growth curve models and statistical diagnostics

    CERN Document Server

    Pan, Jian-Xin

    2002-01-01

    Growth-curve models are generalized multivariate analysis-of-variance models. These models are especially useful for investigating growth problems on short times in economics, biology, medical research, and epidemiology. This book systematically introduces the theory of the GCM with particular emphasis on their multivariate statistical diagnostics, which are based mainly on recent developments made by the authors and their collaborators. The authors provide complete proofs of theorems as well as practical data sets and MATLAB code.

  12. Three Generative, Lexicalised Models for Statistical Parsing

    CERN Document Server

    Collins, M

    1997-01-01

    In this paper we first propose a new statistical parsing model, which is a generative model of lexicalised context-free grammar. We then extend the model to include a probabilistic treatment of both subcategorisation and wh-movement. Results on Wall Street Journal text show that the parser performs at 88.1/87.5% constituent precision/recall, an average improvement of 2.3% over (Collins 96).

  13. Topology for Statistical Modeling of Petascale Data

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Bremer, P. -T. [Univ. of Utah, Salt Lake City, UT (United States)

    2017-03-23

    Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, the approach of the entire team involving all three institutions is based on the complementary techniques of combinatorial topology and statistical modelling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modelling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. The overall technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modelling, and (3) new integrated topological and statistical methods. Roughly speaking, the division of labor between our three groups (Sandia Labs in Livermore, Texas A&M in College Station, and U Utah in Salt Lake City) is as follows: the Sandia group focuses on statistical methods and their formulation in algebraic terms, and finds the application problems (and data sets) most relevant to this project, the Texas A&M Group develops new algebraic geometry algorithms, in particular with fewnomial theory, and the Utah group develops new algorithms in computational topology via Discrete Morse Theory. However, we hasten to point out that our three groups stay in tight contact via videoconference every 2 weeks, so there is much synergy of ideas between the groups. The remainder of this document focuses on the contributions with the greatest direct involvement from the team at the University of Utah in Salt Lake City.

  14. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods-in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  15. An R companion to linear statistical models

    CERN Document Server

    Hay-Jahans, Christopher

    2011-01-01

    Focusing on user-developed programming, An R Companion to Linear Statistical Models serves two audiences: those who are familiar with the theory and applications of linear statistical models and wish to learn or enhance their skills in R; and those who are enrolled in an R-based course on regression and analysis of variance. For those who have never used R, the book begins with a self-contained introduction to R that lays the foundation for later chapters.This book includes extensive and carefully explained examples of how to write programs using the R programming language. These examples cove

  16. Codon Deviation Coefficient: A novel measure for estimating codon usage bias and its statistical significance

    KAUST Repository

    Zhang, Zhang

    2012-03-22

    Background: Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB). Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results: Here we propose a novel measure--Codon Deviation Coefficient (CDC)--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions: As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions. 2012 Zhang et al; licensee BioMed Central Ltd.
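
    In the spirit of the bootstrapping step described above, though with a much simpler bias statistic than the CDC itself, one can assess the significance of codon usage bias against the sequence's own nucleotide composition. The statistic and the test sequence below are invented for illustration.

```python
import random
from collections import Counter

def codon_bias(seq):
    # Simple bias statistic (not the CDC): sum of squared deviations of
    # observed codon frequencies from the frequencies expected under the
    # sequence's own single-nucleotide composition.
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    n = len(codons)
    obs = Counter(codons)
    base = Counter(seq)
    total = len(seq)
    p = {b: base[b] / total for b in "ACGT"}
    bias = 0.0
    for codon, count in obs.items():
        expected = p[codon[0]] * p[codon[1]] * p[codon[2]]
        bias += (count / n - expected) ** 2
    return bias

def bias_pvalue(seq, n_boot=500, seed=3):
    # Bootstrap null model: sequences with the same nucleotide composition
    # but with the letters shuffled, destroying any codon-level structure.
    rng = random.Random(seed)
    observed = codon_bias(seq)
    letters = list(seq)
    count = 0
    for _ in range(n_boot):
        rng.shuffle(letters)
        if codon_bias("".join(letters)) >= observed:
            count += 1
    return count / n_boot

biased = "ATGGCC" * 50  # strongly repetitive codon usage
p = bias_pvalue(biased)
print(p)
```

    Because the null sequences preserve the nucleotide composition, a small p-value points at codon-level bias rather than at mere skew in base content, which is the confound the CDC is designed to control for.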

  17. Codon Deviation Coefficient: a novel measure for estimating codon usage bias and its statistical significance

    Directory of Open Access Journals (Sweden)

    Zhang Zhang

    2012-03-01

    Full Text Available Abstract Background Genetic mutation, selective pressure for translational efficiency and accuracy, level of gene expression, and protein function through natural selection are all believed to lead to codon usage bias (CUB. Therefore, informative measurement of CUB is of fundamental importance to making inferences regarding gene function and genome evolution. However, extant measures of CUB have not fully accounted for the quantitative effect of background nucleotide composition and have not statistically evaluated the significance of CUB in sequence analysis. Results Here we propose a novel measure--Codon Deviation Coefficient (CDC--that provides an informative measurement of CUB and its statistical significance without requiring any prior knowledge. Unlike previous measures, CDC estimates CUB by accounting for background nucleotide compositions tailored to codon positions and adopts bootstrapping to assess the statistical significance of CUB for any given sequence. We evaluate CDC by examining its effectiveness on simulated sequences and empirical data and show that CDC outperforms extant measures by achieving a more informative estimation of CUB and its statistical significance. Conclusions As validated by both simulated and empirical data, CDC provides a highly informative quantification of CUB and its statistical significance, useful for determining comparative magnitudes and patterns of biased codon usage for genes or genomes with diverse sequence compositions.

  18. Statistical transmutation in doped quantum dimer models.

    Science.gov (United States)

    Lamas, C A; Ralko, A; Cabra, D C; Poilblanc, D; Pujol, P

    2012-07-06

    We prove a "statistical transmutation" symmetry of doped quantum dimer models on the square, triangular, and kagome lattices: the energy spectrum is invariant under a simultaneous change of statistics (i.e., bosonic into fermionic or vice versa) of the holes and of the signs of all the dimer resonance loops. This exact transformation enables us to define the duality equivalence between doped quantum dimer Hamiltonians and provides the analytic framework to analyze dynamical statistical transmutations. We investigate numerically the doping of the triangular quantum dimer model with special focus on the topological Z(2) dimer liquid. Doping leads to four (instead of two for the square lattice) inequivalent families of Hamiltonians. Competition between phase separation, superfluidity, supersolidity, and fermionic phases is investigated in the four families.

  19. STATISTICAL MODELS OF REPRESENTING INTELLECTUAL CAPITAL

    Directory of Open Access Journals (Sweden)

    Andreea Feraru

    2016-07-01

    Full Text Available This article entitled Statistical Models of Representing Intellectual Capital approaches and analyses the concept of intellectual capital, as well as the main models which can support entrepreneurs/managers in evaluating and quantifying the advantages of intellectual capital. Most authors examine intellectual capital from a static perspective and focus on the development of its various evaluation models. In this chapter we surveyed the classical static models: Sveiby, Edvinsson, Balanced Scorecard, as well as the canonical model of intellectual capital. Among the group of static models for evaluating organisational intellectual capital the canonical model stands out. This model enables the structuring of organisational intellectual capital in: human capital, structural capital and relational capital. Although the model is widely spread, it is a static one and can thus create a series of errors in the process of evaluation, because all the three entities mentioned above are not independent from the viewpoint of their contents, as any logic of structuring complex entities requires.

  20. Topology for Statistical Modeling of Petascale Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Janine Camille [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pebay, Philippe Pierre [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Pascucci, Valerio [Univ. of Utah, Salt Lake City, UT (United States); Levine, Joshua [Univ. of Utah, Salt Lake City, UT (United States); Gyulassy, Attila [Univ. of Utah, Salt Lake City, UT (United States); Rojas, Maurice [Texas A & M Univ., College Station, TX (United States)

    2014-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled "Topology for Statistical Modeling of Petascale Data", funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program.

  1. Statistical Modeling Efforts for Headspace Gas

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Brian Phillip [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-17

    The purpose of this document is to describe the statistical modeling effort for gas concentrations in WIPP storage containers. The concentration (in ppm) of CO2 in the headspace volume of standard waste box (SWB) 68685 is shown. A Bayesian approach and an adaptive Metropolis-Hastings algorithm were used.
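
    A minimal (non-adaptive) Metropolis-Hastings sketch in the spirit of this record: sampling the posterior of a mean gas concentration under a normal likelihood and normal prior. The likelihood, prior parameters, proposal scale, and ppm readings are all invented for illustration; they are not WIPP data, and the actual report uses an adaptive variant.

```python
import math
import random

def log_posterior(mu, data, sigma=50.0, prior_mu=400.0, prior_sd=200.0):
    # Normal likelihood with known sigma, normal prior on the mean
    # (all numbers are illustrative assumptions).
    ll = sum(-0.5 * ((x - mu) / sigma) ** 2 for x in data)
    lp = -0.5 * ((mu - prior_mu) / prior_sd) ** 2
    return ll + lp

def metropolis(data, n_iter=20000, step=10.0, seed=5):
    # Random-walk Metropolis-Hastings with a fixed Gaussian proposal.
    rng = random.Random(seed)
    mu = 400.0
    samples = []
    for _ in range(n_iter):
        prop = mu + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_posterior(prop, data) - log_posterior(mu, data):
            mu = prop
        samples.append(mu)
    return samples[n_iter // 2:]  # discard the first half as burn-in

data = [512.0, 488.0, 540.0, 505.0, 496.0]  # hypothetical ppm readings
post = metropolis(data)
print(sum(post) / len(post))  # posterior mean estimate
```

    An adaptive variant, as named in the record, would additionally tune the proposal scale `step` on the fly from the observed acceptance rate.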

  2. Nonperturbative approach to the modified statistical model

    Energy Technology Data Exchange (ETDEWEB)

    Magdy, M.A.; Bekmezci, A.; Sever, R. [Middle East Technical Univ., Ankara (Turkey)

    1993-12-01

    The modified form of the statistical model is used without making any perturbation. The mass spectra of the lowest S, P and D levels of the (Q{bar Q}) and the non-self-conjugate (Q{bar q}) mesons are studied with the Song-Lin potential. The authors' results are in good agreement with the experimental and theoretical findings.

  3. Confidence intervals permit, but do not guarantee, better inference than statistical significance testing.

    Science.gov (United States)

    Coulson, Melissa; Healey, Michelle; Fidler, Fiona; Cumming, Geoff

    2010-01-01

    A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST), or confidence intervals (CIs). Authors of articles published in psychology, behavioral neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant, and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs respondents who mentioned NHST were 60% likely to conclude, unjustifiably, the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.
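
    The paper's central point, that one significant and one non-significant result can nonetheless be statistically consistent with each other, can be shown with invented numbers: simple z tests and 95% CIs, with the two study estimates made up for illustration.

```python
def ci95(mean, se):
    # 95% confidence interval under a normal approximation.
    return (mean - 1.96 * se, mean + 1.96 * se)

def significant(mean, se):
    # Two-sided z test of H0: effect = 0, at alpha = 0.05.
    return abs(mean / se) > 1.96

def consistent(m1, se1, m2, se2):
    # z test of H0: the two effects are equal.
    z = (m1 - m2) / (se1 ** 2 + se2 ** 2) ** 0.5
    return abs(z) <= 1.96

# Invented example: study A is "significant", study B is not,
# yet the two estimates do not differ significantly from each other.
a_mean, a_se = 2.1, 1.0
b_mean, b_se = 1.5, 1.0

print(significant(a_mean, a_se), significant(b_mean, b_se))
print(ci95(a_mean, a_se), ci95(b_mean, b_se))
print(consistent(a_mean, a_se, b_mean, b_se))
```

    The heavily overlapping CIs make the consistency visible at a glance, which is the interpretive advantage the study attributes to CIs when NHST framing is set aside.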

  4. Confidence intervals permit, but don't guarantee, better inference than statistical significance testing

    Directory of Open Access Journals (Sweden)

    Melissa Coulson

    2010-07-01

    Full Text Available A statistically significant result and a non-significant result may differ little, although significance status may tempt an interpretation of difference. Two studies are reported that compared interpretation of such results presented using null hypothesis significance testing (NHST) or confidence intervals (CIs). Authors of articles published in psychology, behavioural neuroscience, and medical journals were asked, via email, to interpret two fictitious studies that found similar results, one statistically significant and the other non-significant. Responses from 330 authors varied greatly, but interpretation was generally poor, whether results were presented as CIs or using NHST. However, when interpreting CIs, respondents who mentioned NHST were 60% likely to conclude, unjustifiably, that the two results conflicted, whereas those who interpreted CIs without reference to NHST were 95% likely to conclude, justifiably, that the two results were consistent. Findings were generally similar for all three disciplines. An email survey of academic psychologists confirmed that CIs elicit better interpretations if NHST is not invoked. Improved statistical inference can result from encouragement of meta-analytic thinking and use of CIs but, for full benefit, such highly desirable statistical reform requires also that researchers interpret CIs without recourse to NHST.

  5. Statistical Model Checking for Stochastic Hybrid Systems

    DEFF Research Database (Denmark)

    David, Alexandre; Du, Dehui; Larsen, Kim Guldstrand

    2012-01-01

    This paper presents novel extensions and applications of the UPPAAL-SMC model checker. The extensions allow for statistical model checking of stochastic hybrid systems. We show how our race-based stochastic semantics extends to networks of hybrid systems, and indicate the integration technique...... applied for implementing this semantics in the UPPAAL-SMC simulation engine. We report on two applications of the resulting tool-set coming from systems biology and energy aware buildings....
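
    UPPAAL-SMC itself is a mature tool; the kernel of statistical model checking it implements can nevertheless be sketched in a few lines. The toy model below (an exponential delay standing in for a stochastic hybrid run; the names are illustrative, not UPPAAL's API) estimates the probability that a property holds, with a normal-approximation confidence interval:

```python
import random

def smc_estimate(simulate, prop, runs=4000, seed=7):
    """Statistical model checking core loop: Monte Carlo estimate of
    P(property) over simulated runs, with a normal-approximation 95% CI."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(runs) if prop(simulate(rng)))
    p = hits / runs
    half = 1.96 * (p * (1 - p) / runs) ** 0.5  # CI half-width
    return p, half

# Toy stochastic system (hypothetical, not a UPPAAL model): the time to an
# event is exponentially distributed; the property is "event within 2 units".
def simulate(rng):
    return rng.expovariate(1.0)

p, half = smc_estimate(simulate, lambda t: t <= 2.0)
```

    Real tools add sequential hypothesis testing so that simulation stops as soon as the desired confidence is reached, but the estimator above is the underlying idea.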

  6. Statistical modeling of space shuttle environmental data

    Science.gov (United States)

    Tubbs, J. D.; Brewer, D. W.

    1983-01-01

    Statistical models which use a class of bivariate gamma distributions are examined. Topics discussed include: (1) the ratio of positively correlated gamma variates; (2) a method to determine if unequal shape parameters are necessary in a bivariate gamma distribution; (3) differential equations for the modal location of a family of bivariate gamma distributions; and (4) analysis of some wind gust data using the analytical results developed for modeling applications.

  7. Performance modeling, stochastic networks, and statistical multiplexing

    CERN Document Server

    Mazumdar, Ravi R

    2013-01-01

    This monograph presents a concise mathematical approach for modeling and analyzing the performance of communication networks with the aim of introducing an appropriate mathematical framework for modeling and analysis as well as understanding the phenomenon of statistical multiplexing. The models, techniques, and results presented form the core of traffic engineering methods used to design, control and allocate resources in communication networks.The novelty of the monograph is the fresh approach and insights provided by a sample-path methodology for queueing models that highlights the importan
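
    The statistical multiplexing phenomenon at the heart of the monograph can be illustrated with an exact binomial tail: n independent on-off sources are rarely all active at once, so far less capacity than the sum of their peak rates suffices (the numbers below are illustrative):

```python
from math import comb

def overflow_prob(n, p_on, capacity):
    """P[number of active on-off sources exceeds capacity], X ~ Binomial(n, p_on)."""
    return sum(comb(n, k) * p_on**k * (1 - p_on)**(n - k)
               for k in range(capacity + 1, n + 1))

# 100 sources, each active 10% of the time, peak rate 1 unit each:
loss = overflow_prob(100, 0.1, 20)  # capacity 20 instead of the peak sum 100
```

    Provisioning a fifth of the aggregate peak rate already drives the overflow probability below one percent, which is the multiplexing gain traffic engineering exploits.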

  8. Statistical physical models of cellular motility

    Science.gov (United States)

    Banigan, Edward J.

    Cellular motility is required for a wide range of biological behaviors and functions, and the topic poses a number of interesting physical questions. In this work, we construct and analyze models of various aspects of cellular motility using tools and ideas from statistical physics. We begin with a Brownian dynamics model for actin-polymerization-driven motility, which is responsible for cell crawling and "rocketing" motility of pathogens. Within this model, we explore the robustness of self-diffusiophoresis, which is a general mechanism of motility. Using this mechanism, an object such as a cell catalyzes a reaction that generates a steady-state concentration gradient that propels the object in a particular direction. We then apply these ideas to a model for depolymerization-driven motility during bacterial chromosome segregation. We find that depolymerization and protein-protein binding interactions alone are sufficient to robustly pull a chromosome, even against large loads. Next, we investigate how forces and kinetics interact during eukaryotic mitosis with a many-microtubule model. Microtubules exert forces on chromosomes, but since individual microtubules grow and shrink in a force-dependent way, these forces lead to bistable collective microtubule dynamics, which provides a mechanism for chromosome oscillations and microtubule-based tension sensing. Finally, we explore kinematic aspects of cell motility in the context of the immune system. We develop quantitative methods for analyzing cell migration statistics collected during imaging experiments. We find that during chronic infection in the brain, T cells run and pause stochastically, following the statistics of a generalized Levy walk. These statistics may contribute to immune function by mimicking an evolutionarily conserved efficient search strategy. Additionally, we find that naive T cells migrating in lymph nodes also obey non-Gaussian statistics. Altogether, our work demonstrates how physical

  9. Statistical physics of pairwise probability models

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2009-11-01

    Full Text Available Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying and using pairwise models. We build on our previous work on the subject and study the relation between different methods for fitting these models and evaluating their quality. In particular, using data from simulated cortical networks we study how the quality of various approximate methods for inferring the parameters in a pairwise model depends on the time bin chosen for binning the data. We also study the effect of the size of the time bin on the model quality itself, again using simulated data. We show that using finer time bins increases the quality of the pairwise model. We offer new ways of deriving the expressions reported in our previous work for assessing the quality of pairwise models.
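
    The sufficient statistics mentioned here, means and pairwise correlations of binned binary activity, are cheap to compute. A minimal sketch (toy data layout assumed: one 0/1 list per neuron, one entry per time bin):

```python
from itertools import combinations

def pairwise_stats(trains):
    """Means and pairwise covariances of binary (0/1) spike trains:
    the sufficient statistics for fitting a pairwise model."""
    n_bins = len(trains[0])
    means = [sum(t) / n_bins for t in trains]
    corrs = {}
    for i, j in combinations(range(len(trains)), 2):
        cij = sum(a * b for a, b in zip(trains[i], trains[j])) / n_bins
        corrs[(i, j)] = cij - means[i] * means[j]
    return means, corrs

means, corrs = pairwise_stats([[1, 0, 1, 0], [1, 0, 1, 0], [0, 1, 0, 1]])
```

    Note how the choice of time bin enters before this step, when spikes are discretized into the 0/1 trains; that is the dependence the paper studies.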

  10. Pitfalls in statistical landslide susceptibility modelling

    Science.gov (United States)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" inherent in machine learning methods in order to achieve further explanatory insights into the preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding the processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate interpretation of model response curves. Model quality assessment measures how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples or a possible

  11. Equilibrium statistical mechanics of lattice models

    CERN Document Server

    Lavis, David A

    2015-01-01

    Most interesting and difficult problems in equilibrium statistical mechanics concern models which exhibit phase transitions. For graduate students and more experienced researchers this book provides an invaluable reference source of approximate and exact solutions for a comprehensive range of such models. Part I contains background material on classical thermodynamics and statistical mechanics, together with a classification and survey of lattice models. The geometry of phase transitions is described and scaling theory is used to introduce critical exponents and scaling laws. An introduction is given to finite-size scaling, conformal invariance and Schramm—Loewner evolution. Part II contains accounts of classical mean-field methods. The parallels between Landau expansions and catastrophe theory are discussed and Ginzburg—Landau theory is introduced. The extension of mean-field theory to higher-orders is explored using the Kikuchi—Hijmans—De Boer hierarchy of approximations. In Part III the use of alge...

  12. Statistical shape and appearance models of bones.

    Science.gov (United States)

    Sarkalkan, Nazli; Weinans, Harrie; Zadpoor, Amir A

    2014-03-01

    When applied to bones, statistical shape models (SSM) and statistical appearance models (SAM) respectively describe the mean shape and mean density distribution of bones within a certain population as well as the main modes of variations of shape and density distribution from their mean values. The availability of this quantitative information regarding the detailed anatomy of bones provides new opportunities for diagnosis, evaluation, and treatment of skeletal diseases. The potential of SSM and SAM has been recently recognized within the bone research community. For example, these models have been applied for studying the effects of bone shape on the etiology of osteoarthritis, improving the accuracy of clinical osteoporotic fracture prediction techniques, design of orthopedic implants, and surgery planning. This paper reviews the main concepts, methods, and applications of SSM and SAM as applied to bone.
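
    SSMs of this kind are commonly built by principal component analysis of aligned landmark coordinates: a mean shape plus the leading modes of variation. A self-contained sketch (power iteration for the first mode; toy 2-D landmark vectors flattened as [x1, y1, x2, y2, ...]; this is a generic PCA sketch, not the reviewed authors' pipeline):

```python
def pca_first_mode(shapes, iters=200):
    """Mean shape and first principal mode of variation via power iteration.
    `shapes` is a list of flattened landmark coordinate vectors."""
    n, d = len(shapes), len(shapes[0])
    mu = [sum(s[j] for s in shapes) / n for j in range(d)]
    dev = [[s[j] - mu[j] for j in range(d)] for s in shapes]
    v = [1.0] * d
    for _ in range(iters):
        # multiply the sample covariance matrix by v without forming it
        w = [0.0] * d
        for x in dev:
            c = sum(xj * vj for xj, vj in zip(x, v))
            for j in range(d):
                w[j] += c * x[j] / n
        norm = sum(wj * wj for wj in w) ** 0.5
        v = [wj / norm for wj in w]
    return mu, v
```

    New plausible shapes are then generated as the mean plus a weighted sum of such modes, which is what makes SSMs useful for segmentation and implant design.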

  13. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…
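
    A small numeric demonstration of why "what if" analyses are informative: hold the standardized effect fixed (here d = 0.3, a hypothetical value) and recompute the two-sided p-value for other sample sizes, using a one-sample z approximation:

```python
import math

def p_for(d, n):
    """Two-sided p-value for a fixed standardized effect d at sample size n."""
    z = d * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The same effect at three "what if" sample sizes:
ps = {n: p_for(0.3, n) for n in (20, 43, 100)}
```

    The identical effect is "non-significant" at n = 20, just crosses p < .05 near n = 43, and is emphatically "significant" at n = 100, which is the sample-size dependence the proposed method asks readers to confront.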

  14. Recent Literature on Whether Statistical Significance Tests Should or Should Not Be Banned.

    Science.gov (United States)

    Deegear, James

    This paper summarizes the literature regarding statistical significance testing, with an emphasis on recent literature in various disciplines and literature exploring why researchers have demonstrably failed to be influenced by the American Psychological Association publication manual's encouragement to report effect sizes. Also considered are…

  15. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...

  16. [Tests of statistical significance in three biomedical journals: a critical review].

    Science.gov (United States)

    Sarria Castro, Madelaine; Silva Ayçaguer, Luis Carlos

    2004-05-01

    To describe the use of conventional tests of statistical significance and the current trends shown by their use in three biomedical journals read in Spanish-speaking countries. All descriptive or explanatory original articles published in the five-year period of 1996 through 2000 were reviewed in three journals: Revista Cubana de Medicina General Integral [Cuban Journal of Comprehensive General Medicine], Revista Panamericana de Salud Pública/Pan American Journal of Public Health, and Medicina Clínica [Clinical Medicine] (which is published in Spain). In the three journals that were reviewed various shortcomings were found in their use of hypothesis tests based on P values and in the limited use of new tools that have been suggested for use in their place: confidence intervals (CIs) and Bayesian inference. The basic findings of our research were: minimal use of CIs, as either a complement to significance tests or as the only statistical tool; mentions of a small sample size as a possible explanation for the lack of statistical significance; a predominant use of rigid alpha values; a lack of uniformity in the presentation of results; and improper reference in the research conclusions to the results of hypothesis tests. Our results indicate the lack of compliance by authors and editors with accepted standards for the use of tests of statistical significance. The findings also highlight that the stagnant use of these tests continues to be a common practice in the scientific literature.

  17. Accelerator driven reactors, - the significance of the energy distribution of spallation neutrons on the neutron statistics

    Energy Technology Data Exchange (ETDEWEB)

    Fhager, V

    2000-01-01

    In order to make correct predictions of the second moment of statistical nuclear variables, such as the number of fissions and the number of thermalized neutrons, the dependence of the energy distribution of the source particles on their number should be considered. It has been pointed out recently that neglecting this number dependence in accelerator driven systems might result in bad estimates of the second moment, and this paper contains qualitative and quantitative estimates of the size of these effects. We proceed towards the requested results in two steps. First, models of the number dependent energy distributions of the neutrons that are ejected in the spallation reactions are constructed, both by simple assumptions and by extracting energy distributions of spallation neutrons from a high-energy particle transport code. Then, the second moment of nuclear variables in a sub-critical reactor, into which spallation neutrons are injected, is calculated. The results from second moment calculations using number dependent energy distributions for the source neutrons are compared to those where only the average energy distribution is used. Two physical models are employed to simulate the neutron transport in the reactor. One is analytical, treating only slowing down of neutrons by elastic scattering in the core material. For this model, equations are written down and solved for the second moment of thermalized neutrons that include the distribution of energy of the spallation neutrons. The other model utilizes Monte Carlo methods for tracking the source neutrons as they travel inside the reactor material. Fast and thermal fission reactions are considered, as well as neutron capture and elastic scattering, and the second moment of the number of fissions, the number of neutrons that leaked out of the system, etc. are calculated. Both models use a cylindrical core with a homogeneous mixture of core material. Our results indicate that the number dependence of the energy

  18. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  19. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...... the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement to NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...

  20. Statistical modeling of geopressured geothermal reservoirs

    Science.gov (United States)

    Ansari, Esmail; Hughes, Richard; White, Christopher D.

    2017-06-01

    Identifying attractive candidate reservoirs for producing geothermal energy requires predictive models. In this work, inspectional analysis and statistical modeling are used to create simple predictive models for a line drive design. Inspectional analysis on the partial differential equations governing this design yields a minimum number of fifteen dimensionless groups required to describe the physics of the system. These dimensionless groups are explained and confirmed using models with similar dimensionless groups but different dimensional parameters. This study models dimensionless production temperature and thermal recovery factor as the responses of a numerical model. These responses are obtained by a Box-Behnken experimental design. An uncertainty plot is used to segment the dimensionless time and develop a model for each segment. The important dimensionless numbers for each segment of the dimensionless time are identified using the Boosting method. These selected numbers are used in the regression models. The developed models are reduced to have a minimum number of predictors and interactions. The reduced final models are then presented and assessed using testing runs. Finally, applications of these models are offered. The presented workflow is generic and can be used to translate the output of a numerical simulator into simple predictive models in other research areas involving numerical simulation.

  1. Statistical Compressive Sensing of Gaussian Mixture Models

    CERN Document Server

    Yu, Guoshen

    2010-01-01

    A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements, considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension, and with an optimal decoder implemented with linear filtering, significantly faster than the pursuit decoders applied in conventional CS, the error of SCS is shown tightly upper bounded by a constant times the k-best term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the k-best term approximation with probability one, and the ...

  2. Statistical Language Model for Chinese Text Proofreading

    Institute of Scientific and Technical Information of China (English)

    张仰森; 曹元大

    2003-01-01

    Statistical language modeling techniques are investigated so as to construct a language model for Chinese text proofreading. After the defects of the n-gram model are analyzed, a novel statistical language model for Chinese text proofreading is proposed. This model takes full account of the information located before and after the target word wi, and the relationship between non-neighboring words wi and wj in the linguistic environment (LE). First, the word association degree between wi and wj is defined using a distance-weighted factor, where wj is l words apart from wi in the LE; then the Bayes formula is used to calculate the LE related degree of word wi; and lastly, the LE related degree is taken as the criterion to predict the reasonability of word wi appearing in context. Comparing the proposed model with the traditional n-gram in a Chinese text automatic error detection system, the experimental results show that the error detection recall rate and precision rate of the system have been improved.
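
    The distance-weighted association degree can be sketched as follows (a toy stand-in: the weight 1/l for words l positions apart is an assumed form, and `association_scores` is a hypothetical helper, not the paper's implementation):

```python
from collections import defaultdict

def association_scores(corpus, window=4):
    """Distance-weighted co-occurrence scores: a pair (wi, wj) with wj
    appearing l words after wi contributes weight 1/l, so nearer pairs
    count more toward the association degree."""
    score = defaultdict(float)
    for sent in corpus:
        for i, wi in enumerate(sent):
            for l in range(1, window + 1):
                if i + l < len(sent):
                    score[(wi, sent[i + l])] += 1.0 / l
    return score

scores = association_scores([["a", "b", "c"]])
```

    Scores of this kind, accumulated over a large corpus, would then feed the Bayes-formula step that rates how well a target word fits its context.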

  3. Statistics, Computation, and Modeling in Cosmology

    Science.gov (United States)

    Jewell, Jeff; Guiness, Joe; SAMSI 2016 Working Group in Cosmology

    2017-01-01

    Current and future ground and space based missions are designed to not only detect, but map out with increasing precision, details of the universe in its infancy to the present-day. As a result we are faced with the challenge of analyzing and interpreting observations from a wide variety of instruments to form a coherent view of the universe. Finding solutions to a broad range of challenging inference problems in cosmology is one of the goals of the “Statistics, Computation, and Modeling in Cosmology” working groups, formed as part of the year long program on ‘Statistical, Mathematical, and Computational Methods for Astronomy’, hosted by the Statistical and Applied Mathematical Sciences Institute (SAMSI), a National Science Foundation funded institute. Two application areas have emerged for focused development in the cosmology working group involving advanced algorithmic implementations of exact Bayesian inference for the Cosmic Microwave Background, and statistical modeling of galaxy formation. The former includes study and development of advanced Markov Chain Monte Carlo algorithms designed to confront challenging inference problems including inference for spatial Gaussian random fields in the presence of sources of galactic emission (an example of a source separation problem). Extending these methods to future redshift survey data probing the nonlinear regime of large scale structure formation is also included in the working group activities. In addition, the working group is also focused on the study of ‘Galacticus’, a galaxy formation model applied to dark matter-only cosmological N-body simulations operating on time-dependent halo merger trees. The working group is interested in calibrating the Galacticus model to match statistics of galaxy survey observations; specifically stellar mass functions, luminosity functions, and color-color diagrams. The group will use subsampling approaches and fractional factorial designs to statistically and

  4. Cognitive Constructivism and the Epistemic Significance of Sharp Statistical Hypotheses in Natural Sciences

    CERN Document Server

    Stern, J M

    2010-01-01

    This book presents our case in defense of a constructivist epistemological framework and the use of compatible statistical theory and inference tools. The basic metaphor of decision theory is the maximization of a gambler's expected fortune, according to his own subjective utility, prior beliefs and learned experiences. This metaphor has proven to be very useful, leading to the development of Bayesian statistics since its XX-th century revival, rooted in the work of de Finetti, Savage and others. The basic metaphor presented in this text, as a foundation for cognitive constructivism, is that of an eigen-solution, and the verification of its objective epistemic status. The FBST - Full Bayesian Significance Test - is the cornerstone of a set of statistical tools conceived to assess the epistemic value of such eigen-solutions, according to their four essential attributes, namely, sharpness, stability, separability and composability. We believe that this alternative perspective, complementary to the one offered by dec...

  5. Homeopathy: statistical significance versus the sample size in experiments with Toxoplasma gondii

    Directory of Open Access Journals (Sweden)

    Ana Lúcia Falavigna Guilherme

    2011-09-01

    , examined in its full length. This study was approved by the Ethics Committee for animal experimentation of the UEM - Protocol 036/2009. The data were compared using the Mann-Whitney and bootstrap [7] tests with the statistical software BioStat 5.0. Results and discussion: There was no significant difference when analyzed with the Mann-Whitney test, even multiplying the "n" ten times (p=0.0618). The number of cysts observed in the BIOT 200DH group was 4.5 ± 3.3 and 12.8 ± 9.7 in the CONTROL group. Table 1 shows the results obtained using the bootstrap analysis for each data set changed from 2n until 2n+5, and their respective p-values. With the inclusion of more elements in the different groups, tested one by one, randomly, gradually increasing the samples, we observed the sample size needed to statistically confirm the results seen experimentally. Using 17 mice in the BIOT 200DH group and 19 in the CONTROL group, we already observed statistical significance. This result suggests that experiments involving highly diluted substances and infection of mice with T. gondii should work with experimental groups of at least 17 animals. Despite the current and relevant ethical discussions about the number of animals used for experimental procedures, the number of animals involved in each experiment must meet the characteristics of each item to be studied. In the case of experiments involving highly diluted substances, experimental animal models are still rudimentary and the biological effects observed appear to be individualized, as described in the literature for homeopathy [8]. The fact that statistical significance was achieved by increasing the sample observed in this trial tells us of a rare event, with strong individual behavior, difficult to demonstrate in a result set treated simply with a comparison of means or medians. Conclusion: Bootstrap seems to be an interesting methodology for the analysis of data obtained from experiments with highly diluted
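
    The resampling test underlying such a bootstrap analysis can be sketched as below; the two groups are illustrative values echoing the reported means (4.5 vs. 12.8 cysts), not the original raw data:

```python
import random

def bootstrap_pvalue(a, b, reps=2000, seed=1):
    """Two-sided bootstrap test of a difference in group means: resample both
    groups from the pooled data (the null) and count differences at least as
    large as the one observed."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    hits = 0
    for _ in range(reps):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= abs(observed):
            hits += 1
    return hits / reps

treated = [4, 5, 4, 6, 3]       # illustrative cyst counts, BIOT 200DH-like
control = [13, 12, 14, 11, 15]  # illustrative cyst counts, CONTROL-like
p = bootstrap_pvalue(treated, control)
```

    Repeating this with progressively enlarged groups, as the abstract describes, reveals the sample size at which the p-value first drops below the chosen threshold.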

  6. Statistical physics inspired methods to assign statistical significance in bioinformatics and proteomics: From sequence comparison to mass spectrometry based peptide sequencing

    Science.gov (United States)

    Alves, Gelio

    After the sequencing of many complete genomes, we are in a post-genomic era in which the most important task has changed from gathering genetic information to organizing the mass of data as well as understanding how components interact with each other. The former is usually undertaken using bioinformatics methods, while the latter task is generally termed proteomics. Success in both parts demands correct statistical significance assignments for the results found. In my dissertation, I study two concrete examples: global sequence alignment statistics and peptide sequencing/identification using mass spectrometry. High-performance liquid chromatography coupled to a mass spectrometer (HPLC/MS/MS), enabling peptide identifications and thus protein identifications, has become the tool of choice in large-scale proteomics experiments. Peptide identification is usually done by database search methods. The lack of robust statistical significance assignment among current methods motivated the development of a novel de novo algorithm, RAId, whose score statistics then provide statistical significance for high-scoring peptides found in our custom, enzyme-digested peptide library. The ease of incorporating post-translational modifications is another important feature of RAId. To organize the massive protein/DNA data accumulated, biologists often cluster proteins according to their similarity via tools such as sequence alignment. Homologous proteins share similar domains. To assess the similarity of two domains usually requires alignment from head to toe, i.e., a global alignment. Good alignment score statistics with an appropriate null model enable us to distinguish biologically meaningful similarity from chance similarity. There has been much progress in local alignment statistics, which characterize score statistics when alignments tend to appear as a short segment of the whole sequence.
For global alignment, which is useful in domain alignment, there is still much room for

  7. Statistical Seasonal Sea Surface based Prediction Model

    Science.gov (United States)

    Suarez, Roberto; Rodriguez-Fonseca, Belen; Diouf, Ibrahima

    2014-05-01

    The interannual variability of the sea surface temperature (SST) plays a key role in the strongly seasonal rainfall regime of the West African region. The predictability of the seasonal cycle of rainfall is a field widely discussed by the scientific community, with results that fail to be satisfactory due to the difficulty of dynamical models in reproducing the behavior of the Inter Tropical Convergence Zone (ITCZ). To tackle this problem, a statistical model based on oceanic predictors has been developed at the Universidad Complutense de Madrid (UCM) with the aim of complementing and enhancing the predictability of the West African Monsoon (WAM) as an alternative to the coupled models. The model, called S4CAST (SST-based Statistical Seasonal Forecast), is based on discriminant analysis techniques, specifically Maximum Covariance Analysis (MCA) and Canonical Correlation Analysis (CCA). Beyond the application of the model to the prediction of rainfall in West Africa, its use extends to a range of different oceanic, atmospheric and health-related parameters influenced by the temperature of the sea surface as a defining factor of variability.

  8. Infinite Random Graphs as Statistical Mechanical Models

    DEFF Research Database (Denmark)

    Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria

    2011-01-01

    We discuss two examples of infinite random graphs obtained as limits of finite statistical mechanical systems: a model of two-dimensional discretized quantum gravity defined in terms of causal triangulated surfaces, and the Ising model on generic random trees. For the former model we describe...... a relation to the so-called uniform infinite tree and results on the Hausdorff and spectral dimension of two-dimensional space-time obtained in B. Durhuus, T. Jonsson, J.F. Wheater, J. Stat. Phys. 139, 859 (2010) are briefly outlined. For the latter we discuss results on the absence of spontaneous...... magnetization and argue that, in the generic case, the values of the Hausdorff and spectral dimension of the underlying infinite trees are not influenced by the coupling to an Ising model in a constant magnetic field (B. Durhuus, G.M. Napolitano, in preparation)...

  9. A survey of statistical network models

    CERN Document Server

    Goldenberg, Anna; Fienberg, Stephen E; Airoldi, Edoardo M

    2009-01-01

    Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, and a host of more specialized professional network communities has intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry poin...

  10. Statistical Modelling of the Soil Dielectric Constant

    Science.gov (United States)

    Usowicz, Boguslaw; Marczewski, Wojciech; Bogdan Usowicz, Jerzy; Lipiec, Jerzy

    2010-05-01

    The dielectric constant of soil is a physical property that is very sensitive to water content. It underlies several electrical techniques for determining water content, both direct (TDR, FDR, and others based on electrical conductance and/or capacitance) and indirect RS (Remote Sensing) methods. This work is devoted to a statistical approach to modelling the dielectric constant as a property accounting for a wide range of soil compositions, porosities, and mass densities over the unsaturated water content range. Usually, such models are determined for a few particular soil types, and on changing the soil type one must switch to another model or adjust it by parametrizing the soil compounds; this makes it difficult to compare and transfer results between models. The presented model was developed for a generic representation of soil as a hypothetical mixture of spheres, each representing a soil fraction in its proper phase state. The model generates a serial-parallel mesh of conductive and capacitive paths, which is analysed for its total conductive or capacitive property. The model was first developed to determine thermal conductivity and is now extended to the dielectric constant by analysing the capacitive mesh. The analysis is carried out by statistical means obeying the physical laws governing the serial-parallel branching of the representative electrical mesh. Physical relevance of the analysis is established electrically, but the definition of the electrical mesh is controlled statistically by the parametrization of compound fractions: by the number of representative spheres per unit volume per fraction and by the number of fractions. In this way the model can cover the properties of nearly all soil types and all phase states within the Lorenz and Knudsen conditions. In effect the model allows generating a hypothetical representative of
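    The serial-parallel composition at the heart of such a mesh can be sketched with the ideal combination rules for capacitances. This is only the combinatorial skeleton, with hypothetical function names; it omits the statistical parametrization of fractions that the abstract describes:

```python
def series(elements):
    """Effective capacitance of elementary capacitances chained
    along one path (series combination: reciprocals add)."""
    return 1.0 / sum(1.0 / c for c in elements)

def parallel(branches):
    """Effective capacitance of branches side by side
    (parallel combination: capacitances add)."""
    return sum(branches)

def mesh_capacitance(paths):
    """A serial-parallel mesh: each path is a series chain of
    elementary capacitances; the paths combine in parallel."""
    return parallel(series(path) for path in paths)
```

A statistical version would draw the elementary values and the number of paths from the fraction parametrization instead of fixing them.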

  11. Thresholds for statistical and clinical significance in systematic reviews with meta-analytic methods

    DEFF Research Database (Denmark)

    Jakobsen, Janus Christian; Wetterslev, Jørn; Winkel, Per;

    2014-01-01

    BACKGROUND: Thresholds for statistical significance when assessing meta-analysis results are being insufficiently demonstrated by traditional 95% confidence intervals and P-values. Assessment of intervention effects in systematic reviews with meta-analysis deserves greater rigour. METHODS......: Methodologies for assessing statistical and clinical significance of intervention effects in systematic reviews were considered. Balancing simplicity and comprehensiveness, an operational procedure was developed, based mainly on The Cochrane Collaboration methodology and the Grading of Recommendations...... Assessment, Development, and Evaluation (GRADE) guidelines. RESULTS: We propose an eight-step procedure for better validation of meta-analytic results in systematic reviews (1) Obtain the 95% confidence intervals and the P-values from both fixed-effect and random-effects meta-analyses and report the most...

  12. The orthopaedic trauma literature: an evaluation of statistically significant findings in orthopaedic trauma randomized trials

    Directory of Open Access Journals (Sweden)

    Tornetta Paul

    2008-01-01

    Full Text Available Abstract Background Evidence-based medicine posits that health care research is founded upon clinically important differences in patient-centered outcomes. Statistically significant differences between two treatments may not necessarily reflect a clinically important difference. We aimed to quantify the sample sizes and magnitude of treatment effects in a review of orthopaedic randomized trials with statistically significant findings. Methods We conducted a comprehensive search (PubMed, Cochrane) for all randomized controlled trials between 1/1/95 and 12/31/04. Eligible studies included those that focused upon orthopaedic trauma. Baseline characteristics and treatment effects were abstracted by two reviewers. Briefly, for continuous outcome measures (i.e. functional scores), we calculated effect sizes (mean difference/standard deviation). Dichotomous variables (i.e. infection, nonunion) were summarized as absolute risk differences and relative risk reductions (RRR). Effect sizes >0.80 and RRRs >50% were defined as large effects. Using regression analysis we examined the association between the total number of outcome events and treatment effect (dichotomous outcomes). Results Our search yielded 433 randomized controlled trials (RCTs), of which 76 RCTs with statistically significant findings on 184 outcomes (122 continuous/62 dichotomous) met study eligibility criteria. The mean effect size across studies with continuous outcome variables was 1.7 (95% confidence interval: 1.43–1.97). For dichotomous outcomes, the mean risk difference was 30% (95% confidence interval: 24%–36%) and the mean relative risk reduction was 61% (95% confidence interval: 55%–66%; range: 0%–97%). Fewer total outcome events in studies were strongly correlated with increasing magnitude of the treatment effect (Pearson's R = -0.70, p Conclusion Our review suggests that statistically significant results in orthopaedic trials have the following implications-1 On average
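    The two treatment-effect measures used in the review reduce to one-line formulas (the function names are ours):

```python
def effect_size(mean_treat, mean_control, sd):
    """Standardized mean difference for continuous outcomes
    (mean difference / standard deviation); the review defined
    values > 0.80 as large effects."""
    return (mean_treat - mean_control) / sd

def relative_risk_reduction(risk_control, risk_treat):
    """Relative risk reduction for dichotomous outcomes; the
    review defined RRR > 50% as a large effect."""
    return (risk_control - risk_treat) / risk_control
```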

  13. Efficient statistical significance approximation for local similarity analysis of high-throughput time series data.

    Science.gov (United States)

    Xia, Li C; Ai, Dongmei; Cram, Jacob; Fuhrman, Jed A; Sun, Fengzhu

    2013-01-15

    Local similarity analysis of biological time series data helps elucidate the varying dynamics of biological systems. However, its applications to large scale high-throughput data are limited by slow permutation procedures for statistical significance evaluation. We developed a theoretical approach to approximate the statistical significance of local similarity analysis based on the approximate tail distribution of the maximum partial sum of independent identically distributed (i.i.d.) random variables. Simulations show that the derived formula approximates the tail distribution reasonably well (starting at time points > 10 with no delay and > 20 with delay) and provides P-values comparable with those from permutations. The new approach enables efficient calculation of statistical significance for pairwise local similarity analysis, making possible all-to-all local association studies otherwise prohibitive. As a demonstration, local similarity analysis of human microbiome time series shows that core operational taxonomic units (OTUs) are highly synergetic and some of the associations are body-site specific across samples. The new approach is implemented in our eLSA package, which now provides pipelines for faster local similarity analysis of time series data. The tool is freely available from eLSA's website: http://meta.usc.edu/softs/lsa. Supplementary data are available at Bioinformatics online. fsun@usc.edu.
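    The slow permutation procedure that the derived tail formula replaces can be sketched as follows, using a deliberately simplified similarity score (maximum partial sum of elementwise products, no time delay allowed). This is an illustration of the principle, not the eLSA implementation:

```python
import random

def local_similarity(x, y):
    """Simplified local similarity: maximum partial sum of
    elementwise products, restarting whenever the sum goes
    negative (no time delay allowed)."""
    best = cur = 0.0
    for xi, yi in zip(x, y):
        cur = max(0.0, cur + xi * yi)
        best = max(best, cur)
    return best

def permutation_pvalue(x, y, n_perm=1000, seed=0):
    """Estimate P(score >= observed) by shuffling one series.
    This is the costly step whose cost the approximate tail
    distribution of the maximum partial sum avoids."""
    rng = random.Random(seed)
    observed = local_similarity(x, y)
    y_perm = list(y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        if local_similarity(x, y_perm) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0
```

For all-to-all studies the quadratic number of pairs multiplied by thousands of permutations per pair is exactly what makes the analytical approximation attractive.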

  14. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  15. A Multi-Core Parallelization Strategy for Statistical Significance Testing in Learning Classifier Systems.

    Science.gov (United States)

    Rudd, James; Moore, Jason H; Urbanowicz, Ryan J

    2013-11-01

    Permutation-based statistics for evaluating the significance of class prediction, predictive attributes, and patterns of association have only appeared within the learning classifier system (LCS) literature since 2012. While still not widely utilized by the LCS research community, formal evaluations of test statistic confidence are imperative to large and complex real world applications such as genetic epidemiology where it is standard practice to quantify the likelihood that a seemingly meaningful statistic could have been obtained purely by chance. LCS algorithms are relatively computationally expensive on their own. The compounding requirements for generating permutation-based statistics may be a limiting factor for some researchers interested in applying LCS algorithms to real world problems. Technology has made LCS parallelization strategies more accessible and thus more popular in recent years. In the present study we examine the benefits of externally parallelizing a series of independent LCS runs such that permutation testing with cross validation becomes more feasible to complete on a single multi-core workstation. We test our python implementation of this strategy in the context of a simulated complex genetic epidemiological data mining problem. Our evaluations indicate that as long as the number of concurrent processes does not exceed the number of CPU cores, the speedup achieved is approximately linear.
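    The external parallelization strategy, independent permutation runs farmed out to worker processes whose count is capped at the number of CPU cores, can be sketched as follows. The run body here is a hypothetical stand-in for a full LCS train/evaluate cycle; only the parallelization pattern is the point:

```python
import os
import random
from multiprocessing import Pool

def permutation_run(seed):
    """One independent run on label-permuted data. A real run would
    train and evaluate an LCS; here we just compute a class-mean
    difference on shuffled labels as a placeholder statistic."""
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(100)]
    labels = [0] * 50 + [1] * 50
    rng.shuffle(labels)
    mean1 = sum(s for s, l in zip(scores, labels) if l == 1) / 50
    mean0 = sum(s for s, l in zip(scores, labels) if l == 0) / 50
    return mean1 - mean0

def null_distribution(n_perm, n_workers=4):
    """Farm out independent permutation runs, never exceeding the
    number of CPU cores (beyond which the speedup stops being
    approximately linear)."""
    n_workers = min(n_workers, os.cpu_count() or 1)
    with Pool(n_workers) as pool:
        return pool.map(permutation_run, range(n_perm))

if __name__ == "__main__":
    null = null_distribution(50)
    print(len(null), min(null), max(null))
```

Because each run takes only its seed, the runs are trivially independent and the map scales with the worker count.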

  16. Electronic noise modeling in statistical iterative reconstruction.

    Science.gov (United States)

    Xu, Jingyan; Tsui, Benjamin M W

    2009-06-01

    We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.

  17. Statistical model with a standard Γ distribution

    Science.gov (United States)

    Patriarca, Marco; Chakraborti, Anirban; Kaski, Kimmo

    2004-07-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ . We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ . Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T(λ) , where particles exchange energy in a space with an effective dimension D(λ) .

  18. Statistical model with a standard Gamma distribution

    Science.gov (United States)

    Chakraborti, Anirban; Patriarca, Marco

    2005-03-01

    We study a statistical model consisting of N basic units which interact with each other by exchanging a physical entity, according to a given microscopic random law, depending on a parameter λ. We focus on the equilibrium or stationary distribution of the entity exchanged and verify through numerical fitting of the simulation data that the final form of the equilibrium distribution is that of a standard Gamma distribution. The model can be interpreted as a simple closed economy in which economic agents trade money and a saving criterion is fixed by the saving propensity λ. Alternatively, from the nature of the equilibrium distribution, we show that the model can also be interpreted as a perfect gas at an effective temperature T (λ), where particles exchange energy in a space with an effective dimension D (λ).
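    A minimal simulation of such a model, assuming the standard kinetic-exchange trading rule with saving propensity λ (this concrete rule and all names are our assumption for illustration, not taken verbatim from the paper), conserves total money by construction:

```python
import random

def simulate(n_agents=500, n_trades=100000, lam=0.5, seed=1):
    """Kinetic exchange with saving propensity lam: each agent in a
    trading pair keeps a fraction lam of its money, and a random
    share eps of the pooled remainder changes hands. Every trade
    conserves the pair's total, so total money is conserved."""
    rng = random.Random(seed)
    m = [1.0] * n_agents  # everyone starts with one unit of money
    for _ in range(n_trades):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        if i == j:
            continue
        eps = rng.random()
        pool = (1.0 - lam) * (m[i] + m[j])
        m[i] = lam * m[i] + eps * pool
        m[j] = lam * m[j] + (1.0 - eps) * pool
    return m

money = simulate()
```

For models of this family the equilibrium histogram is commonly reported to fit a Gamma distribution with shape parameter n(λ) = 1 + 3λ/(1 − λ), i.e. n = 4 at λ = 0.5 and unit mean.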

  19. Statistical Model Checking for Product Lines

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2016-01-01

    We report on the suitability of statistical model checking for the analysis of quantitative properties of product line models by an extended treatment of earlier work by the authors. The type of analysis that can be performed includes the likelihood of specific product behaviour, the expected...... average cost of products (in terms of the attributes of the products’ features) and the probability of features to be (un)installed at runtime. The product lines must be modelled in QFLan, which extends the probabilistic feature-oriented language PFLan with novel quantitative constraints among features...... and on behaviour and with advanced feature installation options. QFLan is a rich process-algebraic specification language whose operational behaviour interacts with a store of constraints, neatly separating product configuration from product behaviour. The resulting probabilistic configurations and probabilistic

  20. Challenges in Dental Statistics: Data and Modelling

    Directory of Open Access Journals (Sweden)

    Domenica Matranga

    2013-03-01

    Full Text Available The aim of this work is to present the reflections and proposals derived from the first Workshop of the SISMEC STATDENT working group on statistical methods and applications in dentistry, held in Ancona (Italy) on 28th September 2011. STATDENT began as a forum of comparison and discussion for statisticians working in the field of dental research, in order to suggest new and improve existing biostatistical and clinical epidemiological methods. During the meeting, we dealt with very important topics of statistical methodology for the analysis of dental data, covering the analysis of hierarchically structured and over-dispersed data, the issue of calibration and reproducibility, as well as some problems related to survey methodology, such as the design and construction of unbiased statistical indicators and of well conducted clinical trials. This paper gathers some of the methodological topics discussed during the meeting, concerning multilevel and zero-inflated models for the analysis of caries data and methods for the training and calibration of raters in dental epidemiology.

  1. Statistical Model Checking for Biological Systems

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2014-01-01

    Statistical Model Checking (SMC) is a highly scalable simulation-based verification approach for testing and estimating the probability that a stochastic system satisfies a given linear temporal property. The technique has been applied to (discrete and continuous time) Markov chains, stochastic...... proved very useful for identifying interesting properties of biological systems. Our aim is to offer the best of the two worlds: optimal domain specific interfaces and formalisms suited to biology combined with powerful SMC analysis techniques for stochastic and hybrid systems. This goal is obtained...

  2. Statistical shape and appearance models in osteoporosis.

    Science.gov (United States)

    Castro-Mateos, Isaac; Pozo, Jose M; Cootes, Timothy F; Wilkinson, J Mark; Eastell, Richard; Frangi, Alejandro F

    2014-06-01

    Statistical models (SMs) of shape (SSM) and appearance (SAM) have been acquiring popularity in medical image analysis since they were introduced in the early 1990s. They have been primarily used for segmentation, but they are also a powerful tool for 3D reconstruction and classification. All these tasks may be required in the osteoporosis domain, where fracture detection and risk estimation are key to reducing the mortality and/or morbidity of this bone disease. In this article, we review the different applications of SSMs and SAMs in the context of osteoporosis, and conclude with a discussion of their advantages and disadvantages for this application.

  3. A Statistical Model of Skewed Associativity

    OpenAIRE

    Michaud, Pierre

    2002-01-01

    This paper presents a statistical model of set-associativity, victim caching and skewed-associativity, with an emphasis on skewed-associativity. We show that set-associativity is not efficient when the working-set size is close to the cache size. We refer to this as the unit working-set problem. We show that victim-caching is not a practical solution to the unit working-set problem either, although victim caching emulates full associativity for working-sets much larger than the victim buffe...

  4. Statistical pairwise interaction model of stock market

    Science.gov (United States)

    Bury, Thomas

    2013-03-01

    Financial markets are a classical example of complex systems, as they are composed of many interacting stocks. As such, we can obtain a surprisingly good description of their structure by making the rough simplification of binary daily returns. Spin glass models have been applied and gave some valuable results, but at the price of restrictive assumptions on the market dynamics, or as agent-based models with rules designed to recover particular empirical behaviors. Here we show that the pairwise model is statistically consistent with the observed first and second moments of stock orientations, without making such restrictive assumptions. This is done with an approach based only on empirical data of price returns. Our data analysis of six major indices suggests that the actual interaction structure may be thought of as an Ising model on a complex network with interaction strengths scaling as the inverse of the system size. This has potentially important implications, since many properties of such a model are already known and some techniques of spin glass theory can be applied straightforwardly. Typical behaviors, such as multiple equilibria or metastable states, different characteristic time scales, spatial patterns, and order-disorder transitions, could find an explanation in this picture.
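    The constraints such a pairwise maximum-entropy model is fitted to, the first and second moments of the binarized returns, can be computed directly. This is a sketch with our own function names, not the author's inference code:

```python
def binarize(returns):
    """The rough simplification used in the paper: keep only the
    sign of each daily return (+1 for up, -1 for down)."""
    return [1 if r >= 0 else -1 for r in returns]

def empirical_moments(days):
    """First moments <s_i> and pairwise moments <s_i s_j> across
    trading days -- the constraints a pairwise maximum-entropy
    (Ising-type) model reproduces by construction."""
    n_days = len(days)
    n = len(days[0])
    mean = [sum(day[i] for day in days) / n_days for i in range(n)]
    corr = [[sum(day[i] * day[j] for day in days) / n_days
             for j in range(n)] for i in range(n)]
    return mean, corr
```

Fitting the couplings themselves (e.g. by maximum likelihood or mean-field inversion) is the harder step the paper addresses.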

  5. Projecting Policy Effects with Statistical Models

    Directory of Open Access Journals (Sweden)

    Christopher Sims

    1988-03-01

    Full Text Available This paper attempts to briefly discuss the current frontiers in quantitative modeling for forecasting and policy analysis. It does so by summarizing some recent developments in three areas: reduced-form forecasting models; theoretical models including elements of stochastic optimization; and identification. In the process, the paper tries to provide some remarks on the direction we seem to be headed.

  6. Statistically Non-significant Papers in Environmental Health Studies included more Outcome Variables

    Institute of Scientific and Technical Information of China (English)

    Pentti Nieminen; Khaled Abass; Kirsi Vähäkangas; Arja Rautio

    2015-01-01

    Objective The number of analyzed outcome variables is important in the statistical analysis and interpretation of research findings. This study investigated published papers in the field of environmental health to examine whether papers with non-significant findings report more outcome variables than papers with significant findings. Articles on maternal exposure to mercury and child development were used as examples. Methods Articles published between 1995 and 2013 focusing on the relationships between maternal exposure to mercury and child development were collected from Medline and Scopus. Results Of 87 extracted papers, 73 used statistical significance testing and 38 (43.7%) of these reported ‘non-significant’ (P>0.05) findings. The median number of child development outcome variables in papers reporting ‘significant’ (n=35) and ‘non-significant’ (n=38) results was 4 versus 7, respectively (Mann-Whitney test P-value=0.014). An elevated number of outcome variables was especially found in papers reporting non-significant associations between maternal mercury and outcomes when mercury was the only analyzed exposure variable. Conclusion Authors often report analyzed health outcome variables based on their P-values rather than on stated primary research questions. Such a practice probably skews the research evidence.
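    The median comparison reported above rests on the Mann-Whitney test, whose U statistic reduces to a count over pairs:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample `a`: the number of pairs
    (x, y) with x drawn from `a` and y from `b` such that x > y,
    with ties counted as one half."""
    return float(sum((x > y) + 0.5 * (x == y) for x in a for y in b))
```

Under identical samples U is n1*n2/2; the further the observed U is from that value, the smaller the P-value.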

  7. How to get statistically significant effects in any ERP experiment (and why you shouldn't).

    Science.gov (United States)

    Luck, Steven J; Gaspelin, Nicholas

    2017-01-01

    ERP experiments generate massive datasets, often containing thousands of values for each participant, even after averaging. The richness of these datasets can be very useful in testing sophisticated hypotheses, but this richness also creates many opportunities to obtain effects that are statistically significant but do not reflect true differences among groups or conditions (bogus effects). The purpose of this paper is to demonstrate how common and seemingly innocuous methods for quantifying and analyzing ERP effects can lead to very high rates of significant but bogus effects, with the likelihood of obtaining at least one such bogus effect exceeding 50% in many experiments. We focus on two specific problems: using the grand-averaged data to select the time windows and electrode sites for quantifying component amplitudes and latencies, and using one or more multifactor statistical analyses. Reanalyses of prior data and simulations of typical experimental designs are used to show how these problems can greatly increase the likelihood of significant but bogus results. Several strategies are described for avoiding these problems and for increasing the likelihood that significant effects actually reflect true differences among groups or conditions.
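    The core claim, that uncorrected analysis of many time windows and electrode sites yields bogus effects more than half the time, can be reproduced with a small pure-noise simulation. The parameters below are illustrative, not taken from the paper's reanalyses:

```python
import math
import random

def one_sample_t(xs):
    """One-sample t statistic against a zero population mean."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / math.sqrt(var / n)

def any_significant(rng, n_measures=20, n_subjects=20, t_crit=2.093):
    """Pure-noise 'experiment': test n_measures window/site
    combinations at the uncorrected 0.05 level (t_crit is the
    two-tailed critical value for df = 19) and report whether at
    least one came out 'significant'."""
    for _ in range(n_measures):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n_subjects)]
        if abs(one_sample_t(xs)) > t_crit:
            return True
    return False

rng = random.Random(42)
rate = sum(any_significant(rng) for _ in range(500)) / 500
# with 20 uncorrected tests the familywise error rate is
# 1 - 0.95**20, roughly 64%, even though no effect exists
```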

  8. Statistical Mechanical Models of Integer Factorization Problem

    Science.gov (United States)

    Nakajima, Chihiro H.; Ohzeki, Masayuki

    2017-01-01

    We formulate the integer factorization problem via a formulation of the searching problem for the ground state of a statistical mechanical Hamiltonian. The first passage time required to find a correct divisor of a composite number signifies the exponential computational hardness. The analysis of the density of states of two macroscopic quantities, i.e., the energy and the Hamming distance from the correct solutions, leads to the conclusion that the ground state (correct solution) is completely isolated from the other low-energy states, with the distance being proportional to the system size. In addition, the profile of the microcanonical entropy of the model has two peculiar features that are each related to two marked changes in the energy region sampled via Monte Carlo simulation or simulated annealing. Hence, we find a peculiar first-order phase transition in our model.
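    A toy version of the formulation can be written as a cost function whose zero-energy ground state encodes a nontrivial factorization. Exhaustive search stands in here for the Monte Carlo and simulated-annealing dynamics analyzed in the paper:

```python
def energy(n, x, y):
    """Cost (Hamiltonian) whose ground state E = 0 encodes a
    factorization n = x * y."""
    return (n - x * y) ** 2

def ground_state(n):
    """Exhaustive search over nontrivial divisor candidates,
    returning (energy, x, y). The paper's point is precisely that
    stochastic dynamics struggle to reach this isolated minimum."""
    best = None
    for x in range(2, n):
        for y in range(x, n):
            e = energy(n, x, y)
            if best is None or e < best[0]:
                best = (e, x, y)
            if e == 0:
                return best
    return best
```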

  9. Statistical model semiquantitatively approximates arabinoxylooligosaccharides' structural diversity

    DEFF Research Database (Denmark)

    Dotsenko, Gleb; Nielsen, Michael Krogsgaard; Lange, Lene

    2016-01-01

    A statistical model describing the random distribution of substituted xylopyranosyl residues in arabinoxylooligosaccharides is suggested and compared with existing experimental data. Structural diversity of arabinoxylooligosaccharides of various length, originating from different arabinoxylans...... (wheat flour arabinoxylan (arabinose/xylose, A/X = 0.47); grass arabinoxylan (A/X = 0.24); wheat straw arabinoxylan (A/X = 0.15); and hydrothermally pretreated wheat straw arabinoxylan (A/X = 0.05)), is semiquantitatively approximated using the proposed model. The suggested approach can be applied...... not only for prediction and quantification of arabinoxylooligosaccharides' structural diversity, but also for estimate of yield and selection of the optimal source of arabinoxylan for production of arabinoxylooligosaccharides with desired structural features....
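    A random-substitution model of the kind described can be sketched as a binomial distribution over substituted xylopyranosyl residues. The independence assumption and the names below are ours; the paper's actual model may differ in detail:

```python
from math import comb

def substitution_profile(n_xyl, ax_ratio):
    """Probability that an oligosaccharide backbone of n_xyl xylose
    units carries k arabinose substituents, k = 0..n_xyl, assuming
    each unit is substituted independently with probability
    p = A/X (the arabinose/xylose ratio of the parent arabinoxylan)."""
    p = ax_ratio
    return [comb(n_xyl, k) * p ** k * (1.0 - p) ** (n_xyl - k)
            for k in range(n_xyl + 1)]
```

Lower A/X sources (e.g. pretreated straw at A/X = 0.05) concentrate the profile at k = 0, which is one way such a model ranks sources by expected product diversity.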

  10. Statistical significance estimation of a signal within the GooFit framework on GPUs

    Science.gov (United States)

    Cristella, Leonardo; Di Florio, Adriano; Pompili, Alexis

    2017-03-01

    In order to test the computing capabilities of GPUs with respect to traditional CPU cores, a high-statistics toy Monte Carlo technique has been implemented both in the ROOT/RooFit and GooFit frameworks with the purpose of estimating the statistical significance of the structure observed by CMS close to the kinematical boundary of the J/ψϕ invariant mass in the three-body decay B+ → J/ψϕK+. GooFit is a data analysis open tool under development that interfaces ROOT/RooFit to the CUDA platform on NVIDIA GPUs. The optimized GooFit application running on GPUs hosted by servers in the Bari Tier2 provides striking speed-up performance with respect to the RooFit application parallelised on multiple CPUs by means of the PROOF-Lite tool. The considerable resulting speed-up, evident when comparing concurrent GooFit processes allowed by the CUDA Multi Process Service with a RooFit/PROOF-Lite process with multiple CPU workers, is presented and discussed in detail. By means of GooFit it has also been possible to explore the behaviour of a likelihood ratio test statistic in different situations in which Wilks' theorem may or may not apply because its regularity conditions are not satisfied.
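    In the regime where Wilks' theorem does apply, the one-degree-of-freedom case has a closed form worth keeping at hand. This is a generic sketch, unrelated to GooFit itself:

```python
import math

def wilks_pvalue(q):
    """P-value of the likelihood-ratio test statistic
    q = 2 * (lnL_sig - lnL_null) under Wilks' theorem with one
    degree of freedom, where q ~ chi^2(1) when the regularity
    conditions hold: P(chi^2(1) > q) = erfc(sqrt(q / 2))."""
    return math.erfc(math.sqrt(q / 2.0))

def significance_sigma(q):
    """Equivalent one-sided Gaussian significance for the
    one-degree-of-freedom case: Z = sqrt(q)."""
    return math.sqrt(q)
```

When the regularity conditions fail (e.g. parameters on the boundary, as in the situations the paper explores), this mapping is invalid and the toy Monte Carlo null distribution must be used instead.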

  11. Deriving statistical significance maps for SVM based image classification and group comparisons.

    Science.gov (United States)

    Gaonkar, Bilwaj; Davatzikos, Christos

    2012-01-01

    Population based pattern analysis and classification for quantifying structural and functional differences between diverse groups has been shown to be a powerful tool for the study of a number of diseases, and is quite commonly used especially in neuroimaging. The alternative to these pattern analysis methods, namely mass univariate methods such as voxel based analysis and all related methods, cannot detect multivariate patterns associated with group differences, and are not particularly suitable for developing individual-based diagnostic and prognostic biomarkers. A commonly used pattern analysis tool is the support vector machine (SVM). Unlike univariate statistical frameworks for morphometry, analytical tools for statistical inference are unavailable for the SVM. In this paper, we show that null distributions ordinarily obtained by permutation tests using SVMs can be analytically approximated from the data. The analytical computation takes a small fraction of the time it takes to do an actual permutation test, thereby rendering it possible to quickly create statistical significance maps derived from SVMs. Such maps are critical for understanding imaging patterns of group differences and interpreting which anatomical regions are important in determining the classifier's decision.

  12. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.
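    The recommended permutation test of model performance can be sketched for the AUC criterion used in the paper. The AUC implementation and all names below are ours, not the authors' code:

```python
import random

def auc(scores, labels):
    """Area under the ROC curve computed as the probability that a
    randomly chosen positive case outscores a randomly chosen
    negative case (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def permutation_pvalue(scores, labels, n_perm=500, seed=3):
    """Permutation test of model performance: shuffling the outcome
    labels destroys any real score-outcome association, so the
    shuffled AUCs form a null distribution for the observed AUC."""
    rng = random.Random(seed)
    observed = auc(scores, labels)
    labs = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labs)
        if auc(scores, labs) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

In the full procedure the scores would come from models refitted inside a double cross-validation loop, so that the significance refers to out-of-sample performance.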

  13. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van 't; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.

  14. Integrated statistical modelling of spatial landslide probability

    Science.gov (United States)

    Mergili, M.; Chu, H.-J.

    2015-09-01

    Statistical methods are commonly employed to estimate spatial probabilities of landslide release at the catchment or regional scale. Travel distances and impact areas are often computed by means of conceptual mass point models. The present work introduces a fully automated procedure extending and combining both concepts to compute an integrated spatial landslide probability: (i) the landslide inventory is subset into release and deposition zones. (ii) We employ a simple statistical approach to estimate the pixel-based landslide release probability. (iii) We use the cumulative probability density function of the angle of reach of the observed landslide pixels to assign an impact probability to each pixel. (iv) We introduce the zonal probability, i.e. the spatial probability that at least one landslide pixel occurs within a zone of defined size. We quantify this relationship by a set of empirical curves. (v) The integrated spatial landslide probability is defined as the maximum of the release probability and the product of the impact probability and the zonal release probability relevant for each pixel. We demonstrate the approach with a 637 km² study area in southern Taiwan, using an inventory of 1399 landslides triggered by Typhoon Morakot in 2009. We observe that (i) the average integrated spatial landslide probability over the entire study area corresponds reasonably well to the fraction of the observed landslide area; (ii) the model performs moderately well in predicting the observed spatial landslide distribution; (iii) the size of the release zone (or any other zone of spatial aggregation) influences the integrated spatial landslide probability to a much higher degree than the pixel-based release probability; (iv) removing the largest landslides from the analysis leads to an enhanced model performance.
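    Step (v), the combination of the probability layers, is a simple elementwise operation on raster arrays. The sketch below uses invented probability values purely for illustration:

```python
import numpy as np

# Toy 2x3 probability rasters (values invented for illustration)
p_release = np.array([[0.02, 0.10, 0.00],
                      [0.05, 0.01, 0.30]])
p_impact  = np.array([[0.40, 0.00, 0.80],
                      [0.10, 0.90, 0.20]])
p_zonal   = np.array([[0.50, 0.50, 0.60],
                      [0.50, 0.70, 0.60]])

# Integrated probability per pixel:
# maximum of the release probability and (impact probability x zonal release probability)
p_integrated = np.maximum(p_release, p_impact * p_zonal)
```

    By construction the integrated layer can never fall below the release layer, which matches the definition in the abstract.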

  15. MSMBuilder: Statistical Models for Biomolecular Dynamics.

    Science.gov (United States)

    Harrigan, Matthew P; Sultan, Mohammad M; Hernández, Carlos X; Husic, Brooke E; Eastman, Peter; Schwantes, Christian R; Beauchamp, Kyle A; McGibbon, Robert T; Pande, Vijay S

    2017-01-10

    MSMBuilder is a software package for building statistical models of high-dimensional time-series data. It is designed with a particular focus on the analysis of atomistic simulations of biomolecular dynamics such as protein folding and conformational change. MSMBuilder is named for its ability to construct Markov state models (MSMs), a class of models that has gained favor among computational biophysicists. In addition to both well-established and newer MSM methods, the package includes complementary algorithms for understanding time-series data such as hidden Markov models and time-structure based independent component analysis. MSMBuilder offers an easy-to-use command-line interface, as well as clear and consistent abstractions through its Python application programming interface. MSMBuilder was developed with careful consideration for compatibility with the broader machine learning community by following the design of scikit-learn. The package is used primarily by practitioners of molecular dynamics, but is just as applicable to other computational or experimental time-series measurements. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  16. ZERODUR strength modeling with Weibull statistical distributions

    Science.gov (United States)

    Hartmann, Peter

    2016-07-01

    The decisive influence on the breakage strength of brittle materials such as the low-expansion glass ceramic ZERODUR is the surface condition. For polished or etched surfaces, the essential questions are whether micro cracks are present and how deep they are. Ground surfaces have many micro cracks caused by the generation process; here only the depths of the micro cracks are relevant. In any case, the presence and depths of micro cracks are statistical in nature. The Weibull distribution is the model traditionally used for the representation of such data sets. It is based on the weakest-link ansatz. Whether the two- or three-parameter Weibull distribution should be used for data representation and reliability prediction depends on the underlying crack generation mechanisms. Before choosing the model for a specific evaluation, some checks should be done. Is there only one mechanism present, or is it to be expected that an additional mechanism might contribute deviating results? For ground surfaces the main mechanism is the action of the diamond grains on the surface. However, grains breaking out of their bonding might be dragged by the tool across the surface, introducing a slightly deeper crack. These scratches cannot be expected to follow the same statistical distribution as the grinding process; hence, describing them with the same distribution parameters is not adequate. Before including them, a dedicated discussion should be performed. If additional information is available that influences the selection of the model, for example the existence of a maximum crack depth, this should also be taken into account. Micro cracks introduced by small diamond grains on tools working with limited forces cannot be arbitrarily deep. For data obtained with such surfaces, the existence of a threshold breakage stress should be part of the hypothesis. This leads to the use of the three-parameter Weibull distribution. A differentiation based on the data set alone, without preexisting information, is possible but requires a
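    The two- versus three-parameter choice can be sketched with SciPy's `weibull_min`: fixing the location at zero gives the two-parameter fit, while leaving it free estimates a threshold stress. The simulated stresses and the 50 MPa threshold below are assumptions for illustration, not ZERODUR measurements.

```python
import numpy as np
from scipy import stats

# Simulated breakage stresses (MPa) with an assumed threshold stress of 50 MPa
data = stats.weibull_min.rvs(c=2.0, loc=50.0, scale=30.0, size=500, random_state=0)

# Two-parameter Weibull: threshold (location) fixed at zero
c2, loc2, scale2 = stats.weibull_min.fit(data, floc=0)

# Three-parameter Weibull: threshold stress estimated from the data
c3, loc3, scale3 = stats.weibull_min.fit(data)

# Compare the fits via total log-likelihood; with a genuine threshold in the data,
# the three-parameter model should describe it better
ll2 = stats.weibull_min.logpdf(data, c2, loc2, scale2).sum()
ll3 = stats.weibull_min.logpdf(data, c3, loc3, scale3).sum()
```

    Note that the estimated threshold can never exceed the smallest observed stress, since the likelihood vanishes for observations below the location parameter.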

  17. Fluctuations of offshore wind generation: Statistical modelling

    DEFF Research Database (Denmark)

    Pinson, Pierre; Christensen, Lasse E.A.; Madsen, Henrik

    2007-01-01

    The magnitude of power fluctuations at large offshore wind farms has a significant impact on the control and management strategies of their power output. If focusing on the minute scale, one observes successive periods with smaller and larger power fluctuations. It seems that different regimes...... production averaged at a 1, 5, and 10-minute rate. The exercise consists in one-step ahead forecasting of these time-series with the various regime-switching models. It is shown that the MSAR model, for which the succession of regimes is represented by a hidden Markov chain, significantly outperforms...

  18. RT-PSM, a real-time program for peptide-spectrum matching with statistical significance.

    Science.gov (United States)

    Wu, Fang-Xiang; Gagné, Pierre; Droit, Arnaud; Poirier, Guy G

    2006-01-01

    The analysis of complex biological peptide mixtures by tandem mass spectrometry (MS/MS) produces a huge body of collision-induced dissociation (CID) MS/MS spectra. Several methods have been developed for identifying peptide-spectrum matches (PSMs) by assigning MS/MS spectra to peptides in a database. However, most of these methods either do not give the statistical significance of PSMs (e.g., SEQUEST) or employ time-consuming computational methods to estimate the statistical significance (e.g., PeptideProphet). In this paper, we describe a new algorithm, RT-PSM, which can be used to identify PSMs and estimate their accuracy statistically in real time. RT-PSM first computes PSM scores between an MS/MS spectrum and a set of candidate peptides whose masses are within a preset tolerance of the MS/MS precursor ion mass. Then the computed PSM scores of all candidate peptides are employed to fit the expectation value distribution of the scores into a second-degree polynomial function in PSM score. The statistical significance of the best PSM is estimated by extrapolating the fitting polynomial function to the best PSM score. RT-PSM was tested on two pairs of MS/MS spectrum datasets and protein databases to investigate its performance. The MS/MS spectra were acquired using an ion trap mass spectrometer equipped with a nano-electrospray ionization source. The results show that RT-PSM has good sensitivity and specificity. Using a 55,577-entry protein database and running on a standard Pentium-4, 2.8-GHz CPU personal computer, RT-PSM can process peptide spectra on a sequential, one-by-one basis in 0.047 s on average, compared to more than 7 s per spectrum on average for Sequest and X!Tandem, in their current batch-mode processing implementations. RT-PSM is clearly shown to be fast enough for real-time PSM assignment of MS/MS spectra generated every 3 s or so by a 3D ion trap or by a QqTOF instrument.
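    The score-distribution extrapolation can be illustrated generically: fit the log of the empirical expectation value (the number of candidate peptides scoring at least s) with a second-degree polynomial, then extrapolate to the best PSM score. The data below are synthetic; RT-PSM's actual scoring function is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Scores of all candidate peptides for one spectrum (toy: Gaussian bulk of decoys)
scores = rng.normal(loc=20.0, scale=5.0, size=2000)
best = 55.0  # score of the best PSM, far out in the tail (illustrative)

# Empirical expectation values E(s): number of candidates scoring >= s
xs = np.linspace(scores.mean(), scores.mean() + 3 * scores.std(), 20)
log_e = np.log([max((scores >= s).sum(), 1) for s in xs])

# Fit log E(s) with a second-degree polynomial and extrapolate to the best score
coeffs = np.polyfit(xs, log_e, deg=2)
e_best = np.exp(np.polyval(coeffs, best))
```

    A tiny extrapolated expectation value for the best score indicates a statistically significant PSM.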

  19. Statistically Significant Strings are Related to Regulatory Elements in the Promoter Regions of Saccharomyces cerevisiae

    CERN Document Server

    Hu, Rui; Wang, Bin

    2000-01-01

    Identifying statistically significant words in DNA and protein sequences forms the basis for many genetic studies. By applying the maximal entropy principle, we give one systematic way to study the nonrandom occurrence of words in DNA or protein sequences. Through comparison with experimental results, it was shown that patterns of regulatory binding sites in Saccharomyces cerevisiae (yeast) genomes tend to occur significantly in the promoter regions. We studied two correlated gene families of yeast. The method successfully extracts, in each family, the binding sites verified by experiments. Many putative regulatory sites in the upstream regions are proposed. The study also suggested that some regulatory sites are active in both directions, while others show directional preference.

  20. Statistical model for OCT image denoising

    KAUST Repository

    Li, Muxingzi

    2017-08-01

    Optical coherence tomography (OCT) is a non-invasive technique with a large array of applications in clinical imaging and biological tissue visualization. However, the presence of speckle noise affects the analysis of OCT images and their diagnostic utility. In this article, we introduce a new OCT denoising algorithm. The proposed method is founded on a numerical optimization framework based on a maximum-a-posteriori estimate of the noise-free OCT image. It combines a novel speckle noise model, derived from local statistics of empirical spectral-domain OCT (SD-OCT) data, with a Huber variant of total variation regularization for edge preservation. The proposed approach yields satisfactory results in terms of both speckle noise reduction and edge preservation, at reduced computational cost.
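    The general shape of such a MAP objective, a data-fidelity term plus a Huber-smoothed total variation penalty, can be sketched in one dimension. The noise model here is plain Gaussian rather than the paper's SD-OCT speckle model, and all parameter values are illustrative.

```python
import numpy as np

def huber_grad(d, delta):
    # Derivative of the Huber penalty: quadratic near zero, linear in the tails
    return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

def denoise_huber_tv(y, lam=1.0, delta=0.5, step=0.1, n_iter=2000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i huber(x[i+1] - x[i])."""
    x = y.copy()
    for _ in range(n_iter):
        hd = huber_grad(np.diff(x), delta)
        g = x - y
        g[:-1] -= lam * hd  # d/dx_i of huber(x[i+1] - x[i]) is -huber'(d_i)
        g[1:] += lam * hd   # d/dx_{i+1} of the same term is +huber'(d_i)
        x -= step * g
    return x

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(50), np.ones(50)])  # piecewise-constant "edge" signal
noisy = clean + rng.normal(scale=0.3, size=100)
denoised = denoise_huber_tv(noisy)
```

    The Huber variant smooths small gradients quadratically while penalizing large jumps only linearly, which is what preserves the edge at the centre of the toy signal.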

  1. Physical and Statistical Modeling of Saturn's Troposphere

    Science.gov (United States)

    Yanamandra-Fisher, Padmavati A.; Braverman, Amy J.; Orton, Glenn S.

    2002-12-01

    The 5.2-μm atmospheric window on Saturn is dominated by thermal radiation and weak gaseous absorption, with a 20% contribution from sunlight reflected from clouds. The striking variability displayed by Saturn's clouds at 5.2 μm and the detection of PH3 (an atmospheric tracer) variability near or below the 2-bar level and possibly at lower pressures provide salient constraints on the dynamical organization of Saturn's atmosphere by constraining the strength of vertical motions at two levels across the disk. We analyse the 5.2-μm spectra of Saturn by utilising two independent methods: (a) physical models based on the relevant atmospheric parameters and (b) statistical analysis, based on principal components analysis (PCA), to determine the influence of the variation of phosphine and the opacity of clouds deep within Saturn's atmosphere to understand the dynamics in its atmosphere.

  2. New advances in statistical modeling and applications

    CERN Document Server

    Santos, Rui; Oliveira, Maria; Paulino, Carlos

    2014-01-01

    This volume presents selected papers from the XIXth Congress of the Portuguese Statistical Society, held in the town of Nazaré, Portugal, from September 28 to October 1, 2011. All contributions were selected after a thorough peer-review process. It covers a broad range of papers in the areas of statistical science, probability and stochastic processes, extremes and statistical applications.

  3. Understanding and forecasting polar stratospheric variability with statistical models

    Directory of Open Access Journals (Sweden)

    C. Blume

    2012-02-01

    The variability of the north-polar stratospheric vortex is a prominent aspect of the middle atmosphere. This work investigates a wide class of statistical models with respect to their ability to model geopotential and temperature anomalies, representing variability in the polar stratosphere. Four partly nonstationary, nonlinear models are assessed: linear discriminant analysis (LDA); a cluster method based on finite elements (FEM-VARX); a neural network, namely a multi-layer perceptron (MLP); and support vector regression (SVR). These methods model time series by incorporating all significant external factors simultaneously, including ENSO, QBO, the solar cycle, volcanoes, etc., and then quantify their statistical importance. We show that variability in reanalysis data from 1980 to 2005 is successfully modeled. FEM-VARX and MLP even satisfactorily forecast the period from 2005 to 2011. However, internal variability remains that cannot be statistically forecast, such as the unexpected major warming in January 2009. Finally, the statistical model with the best generalization performance is used to predict a vortex breakdown in late January or early February 2012.

  4. Quantitative statistical assessment of conditional models for synthetic aperture radar.

    Science.gov (United States)

    DeVore, Michael D; O'Sullivan, Joseph A

    2004-02-01

    Many applications of object recognition in the presence of pose uncertainty rely on statistical models for observations, conditioned on pose. The image statistics of three-dimensional (3-D) objects are often assumed to belong to a family of distributions with unknown model parameters that vary with one or more continuous-valued pose parameters. Many methods for statistical model assessment, for example the tests of Kolmogorov-Smirnov and K. Pearson, require that all model parameters be fully specified or that sample sizes be large. Assessing pose-dependent models from a finite number of observations over a variety of poses can violate these requirements. However, a large number of small samples, corresponding to unique combinations of object, pose, and pixel location, are often available. We develop methods for model testing which assume a large number of small samples and apply them to the comparison of three models for synthetic aperture radar images of 3-D objects with varying pose. Each model is directly related to the Gaussian distribution and is assessed both in terms of goodness-of-fit and underlying model assumptions, such as independence, known mean, and homoscedasticity. Test results are presented in terms of the functional relationship between a given significance level and the percentage of samples that would fail a test at that level.

  5. Statistically significant faunal differences among Middle Ordovician age, Chickamauga Group bryozoan bioherms, central Alabama

    Energy Technology Data Exchange (ETDEWEB)

    Crow, C.J.

    1985-01-01

    Middle Ordovician age Chickamauga Group carbonates crop out along the Birmingham and Murphrees Valley anticlines in central Alabama. The macrofossil contents on exposed surfaces of seven bioherms have been counted to determine their various paleontologic characteristics. Twelve groups of organisms are present in these bioherms. Dominant organisms include bryozoans, algae, brachiopods, sponges, pelmatozoans, stromatoporoids and corals. Minor accessory fauna include predators, scavengers and grazers such as gastropods, ostracods, trilobites, cephalopods and pelecypods. Vertical and horizontal niche zonation has been detected for some of the bioherm dwelling fauna. No one bioherm of those studied exhibits all 12 groups of organisms; rather, individual bioherms display various subsets of the total diversity. Statistical treatment (G-test) of the diversity data indicates a lack of statistical homogeneity of the bioherms, both within and between localities. Between-locality population heterogeneity can be ascribed to differences in biologic responses to such gross environmental factors as water depth and clarity, and energy levels. At any one locality, gross aspects of the paleoenvironments are assumed to have been more uniform. Significant differences among bioherms at any one locality may have resulted from patchy distribution of species populations, differential preservation and other factors.

  6. Mining Statistically Significant Substrings Based on the Chi-Square Measure

    CERN Document Server

    Dutta, Sourav; Bhattacharya, Arnab

    2010-01-01

    Given the vast reservoirs of data stored worldwide, efficient mining of data from a large information store has emerged as a great challenge. Many databases, such as those of intrusion detection systems, web-click records, player statistics, texts, or proteins, store strings or sequences. Searching for an unusual pattern within such long strings of data has emerged as a requirement for diverse applications. Given a string, the problem then is to identify the substrings that differ the most from the expected or normal behavior, i.e., the substrings that are statistically significant. In other words, these substrings are less likely to occur due to chance alone and may point to some interesting information or phenomenon that warrants further exploration. To this end, we use the chi-square measure. We propose two heuristics for retrieving the top-k substrings with the largest chi-square measure. We show that the algorithms outperform other competing algorithms in the runtime, while maintaining a high approximation...
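    The chi-square measure for a single substring can be sketched under a simple i.i.d. letter model: compare the observed occurrence count with the count expected from the letter frequencies. The paper's contribution is searching over all substrings efficiently; the sketch below only scores one given substring, on an invented test string.

```python
from collections import Counter

def substring_chi_square(s, w):
    """Chi-square score of substring w in s under an i.i.d. letter model."""
    n, k = len(s), len(w)
    positions = n - k + 1
    freq = Counter(s)
    # Probability of w appearing at a fixed position, assuming independent letters
    p = 1.0
    for ch in w:
        p *= freq[ch] / n
    expected = positions * p
    observed = sum(1 for i in range(positions) if s[i:i + k] == w)
    return (observed - expected) ** 2 / expected

s = "abracadabra" * 20
chi_hit = substring_chi_square(s, "abra")   # occurs far more often than expected
chi_miss = substring_chi_square(s, "adac")  # never occurs
```

    Substrings whose observed counts deviate strongly from expectation in either direction receive large scores, which is exactly the "unusual pattern" criterion of the abstract.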

  7. Scalable detection of statistically significant communities and hierarchies: message-passing for modularity

    CERN Document Server

    Zhang, Pan

    2014-01-01

    Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions with almost the same modularity that are poorly correlated to each other; it can also overfit, producing illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian, and computing the marginals of the resulting Gibbs distribution. If we assign each node to its most-likely community under these marginals, we claim that, unlike the ground state, the resulting partition is a good measure of statistically-significant community structure. We propose an efficient Belief Propagation (BP) algorithm to compute these marginals. In random networks with no true communities, the system has two phases as we vary the temperature: a paramagnetic phase where all marginals are equal, and a spin glass phase where BP fails to converge. In networks with real community structure, there is an additional retrieval phase where BP converges, and ...

  8. Statistical Significance of Non-Reproducibility of Cross Sections in Dissipative Reactions

    Institute of Scientific and Technical Information of China (English)

    王琦; 董玉川; 李松林; 田文栋; 李志常; 路秀琴; 赵葵; 符长波; 刘建成; 姜华; 胡桂青

    2003-01-01

    Two independent excitation function measurements have been performed in the reaction system 19F + 93Nb using two target foils of the same nominal thickness. We measured the dissipative reaction products at incident energies of 102 through 108 MeV with a step of 250 keV. The variance of the energy autocorrelation functions of the reaction products was found to be three times that originating from the randomized counting rates. By analysing the probability distributions of the deviations in the measured cross sections, we found that about 20% of all the deviations exceed three standard deviations. This indicates that the non-reproducibility of the cross sections in the two independent measurements is statistically significant and does not originate from random fluctuations of the counting rates.

  9. Henry Eyring: Statistical Mechanics, Significant Structure Theory, and the Inductive-Deductive Method

    CERN Document Server

    Henderson, Douglas

    2010-01-01

    Henry Eyring was, and still is, a towering figure in science. Some aspects of his life and science, beginning in Mexico and continuing in Arizona, California, Wisconsin, Germany, Princeton, and finally Utah, are reviewed here. Eyring moved gradually from quantum theory toward statistical mechanics and the theory of liquids, motivated in part by his desire to understand reactions in condensed matter. Significant structure theory, while not as successful as Eyring thought, is better than his critics realize. Eyring won many awards. However, most chemists are surprised, if not shocked, that he was never awarded a Nobel Prize. He joined Lise Meitner, Rosalind Franklin, John Slater, and others, in an even more select group, those who should have received a Nobel Prize but did not.

  10. A network-based method to assess the statistical significance of mild co-regulation effects.

    Directory of Open Access Journals (Sweden)

    Emőke-Ágnes Horvát

    Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis.

  11. A Network-Based Method to Assess the Statistical Significance of Mild Co-Regulation Effects

    Science.gov (United States)

    Horvát, Emőke-Ágnes; Zhang, Jitao David; Uhlmann, Stefan; Sahin, Özgür; Zweig, Katharina Anna

    2013-01-01

    Recent development of high-throughput, multiplexing technology has initiated projects that systematically investigate interactions between two types of components in biological networks, for instance transcription factors and promoter sequences, or microRNAs (miRNAs) and mRNAs. In terms of network biology, such screening approaches primarily attempt to elucidate relations between biological components of two distinct types, which can be represented as edges between nodes in a bipartite graph. However, it is often desirable not only to determine regulatory relationships between nodes of different types, but also to understand the connection patterns of nodes of the same type. Especially interesting is the co-occurrence of two nodes of the same type, i.e., the number of their common neighbours, which current high-throughput screening analysis fails to address. The co-occurrence gives the number of circumstances under which both of the biological components are influenced in the same way. Here we present SICORE, a novel network-based method to detect pairs of nodes with a statistically significant co-occurrence. We first show the stability of the proposed method on artificial data sets: when randomly adding and deleting observations we obtain reliable results even with noise exceeding the expected level in large-scale experiments. Subsequently, we illustrate the viability of the method based on the analysis of a proteomic screening data set to reveal regulatory patterns of human microRNAs targeting proteins in the EGFR-driven cell cycle signalling system. Since statistically significant co-occurrence may indicate functional synergy and the mechanisms underlying canalization, and thus hold promise in drug target identification and therapeutic development, we provide a platform-independent implementation of SICORE with a graphical user interface as a novel tool in the arsenal of high-throughput screening analysis. PMID:24039936
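    The notion of significant co-occurrence (the number of common neighbours of two same-type nodes in the bipartite graph) can be sketched with a simple permutation null that preserves each node's degree. This generic sketch is not SICORE's algorithm, and the toy adjacency matrix is invented.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy bipartite adjacency: 2 miRNAs x 30 proteins, with an engineered target overlap
B = np.zeros((2, 30), dtype=int)
B[0, :12] = 1  # miRNA 0 targets 12 proteins
B[1, :10] = 1  # miRNA 1 targets 10 proteins, all shared with miRNA 0

# Observed co-occurrence: number of common neighbours (shared targets)
cooc_obs = int(B[0] @ B[1])

# Null model: shuffle each row independently, preserving both row degrees
n_perm = 2000
null = np.empty(n_perm, dtype=int)
for i in range(n_perm):
    null[i] = rng.permutation(B[0]) @ rng.permutation(B[1])

p = (1 + (null >= cooc_obs).sum()) / (n_perm + 1)
```

    Under this degree-preserving null the co-occurrence follows a hypergeometric distribution, so the full overlap of 10 shared targets is highly unlikely by chance.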

  12. Modelling earthquake interaction and seismicity statistics

    Science.gov (United States)

    Steacy, S.; Hetherington, A.

    2009-04-01

    The effects of earthquake interaction and fault complexity on seismicity statistics are investigated in a 3D model composed of a number of cellular automata (each representing an individual fault) distributed in a volume. Each automaton is assigned a fractal distribution of strength. Failure occurs when the 3D Coulomb stress on any cell exceeds its strength and stress transfer during simulated earthquake rupture is via nearest-neighbor rules formulated to give realistic stress concentrations. An event continues until all neighboring cells whose stresses exceed their strengths have ruptured and the size of the event is determined from its area and stress drop. Long-range stress interactions are computed following the termination of simulated ruptures using a boundary element code. In practice, these stress perturbations are only computed for events above a certain size (e.g. a threshold length of 10 km) and stresses are updated on nearby structures. Events which occur as a result of these stress interactions are considered to be "triggered" earthquakes and they, in turn, can trigger further seismic activity. The threshold length for computing interaction stresses is a free parameter and hence interaction can be "turned off" by setting this to an unrealistically high value. We consider 3 synthetic fault networks of increasing degrees of complexity - modelled on the North Anatolian fault system, the structures in the San Francisco Bay Area, and the Southern California fault network. We find that the effect of interaction is dramatically different in networks of differing complexity. In the North Anatolian analogue, for example, interaction leads to a decreased number of events, increased b-values, and an increase in recurrence intervals. In the Bay Area model, by contrast, we observe that interaction increases the number of events, decreases the b-values, and has little effect on recurrence intervals. For all networks, we find that interaction can activate mis

  13. Methods for Determining the Statistical Significance of Enrichment or Depletion of Gene Ontology Classifications under Weighted Membership

    Directory of Open Access Journals (Sweden)

    Ernesto eIacucci

    2012-02-01

    High-throughput molecular biology studies, such as microarray assays of gene expression, two-hybrid experiments for detecting protein interactions, or ChIP-Seq experiments for transcription factor binding, often result in an interesting set of genes—say, genes that are co-expressed or bound by the same factor. One way of understanding the biological meaning of such a set is to consider what processes or functions, as defined in an ontology, are over-represented (enriched) or under-represented (depleted) among genes in the set. Usually, the significance of enrichment or depletion scores is based on simple statistical models and on the membership of genes in different classifications. We consider the more general problem of computing p-values for arbitrary integer additive statistics, or weighted membership functions. Such membership functions can be used to represent, for example, prior knowledge on the role of certain genes or classifications, differential importance of different classifications or genes to the experimenter, hierarchical relationships between classifications, or different degrees of interestingness or evidence for specific genes. We describe a generic dynamic programming algorithm that can compute exact p-values for arbitrary integer additive statistics. We also describe several optimizations for important special cases, which can provide orders-of-magnitude speed up in the computations. We apply our methods to datasets describing oxidative phosphorylation and parturition and compare p-values based on computations of several different statistics for measuring enrichment. We find major differences between p-values resulting from these statistics, and that some statistics recover gold standard annotations of the data better than others. Our work establishes a theoretical and algorithmic basis for far richer notions of enrichment or depletion of gene sets with respect to gene ontologies than has previously been available.
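    A generic version of such a dynamic program can be written for integer gene weights: count, over all m-gene subsets, how many achieve each possible weight sum, then read off the exact tail probability. Function and variable names here are illustrative, not taken from the paper.

```python
from math import comb

def exact_pvalue(weights, m, s_obs):
    """P(weight sum of a uniform random m-subset >= s_obs), exactly, by DP."""
    n, total = len(weights), sum(weights)
    # ways[j][s] = number of subsets of the genes processed so far
    #              with exactly j members and weight sum s
    ways = [[0] * (total + 1) for _ in range(m + 1)]
    ways[0][0] = 1
    for w in weights:
        for j in range(m, 0, -1):              # descending: each gene used at most once
            for s in range(total, w - 1, -1):
                ways[j][s] += ways[j - 1][s - w]
    tail = sum(ways[m][s] for s in range(s_obs, total + 1))
    return tail / comb(n, m)

# Example: 8 genes with integer weights, sets of size 3, observed statistic 12
p = exact_pvalue([3, 1, 4, 1, 5, 9, 2, 6], m=3, s_obs=12)
```

    The table has O(m * total) entries and each gene updates it once, so the run time is O(n * m * total) with no sampling error, unlike a permutation approximation.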

  14. Pathway Model and Nonextensive Statistical Mechanics

    Science.gov (United States)

    Mathai, A. M.; Haubold, H. J.; Tsallis, C.

    2015-12-01

    The established technique of eliminating upper or lower parameters in a general hypergeometric series is profitably exploited to create pathways among confluent hypergeometric functions, binomial functions, Bessel functions, and exponential series. One such pathway, from the mathematical statistics point of view, results in distributions which naturally emerge within nonextensive statistical mechanics and Beck-Cohen superstatistics, as pursued in generalizations of Boltzmann-Gibbs statistics.

  15. Statistical Ensemble Theory of Gompertz Growth Model

    Directory of Open Access Journals (Sweden)

    Takuya Yamano

    2009-11-01

    An ensemble formulation for the Gompertz growth function within the framework of statistical mechanics is presented, where the two growth parameters are assumed to be statistically distributed. The growth can be viewed as a self-referential process, which enables us to use the Bose-Einstein statistics picture. The analytical entropy expression pertaining to the law can be obtained in terms of the growth velocity distribution as well as the Gompertz function itself for the whole process.
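    The Gompertz function itself is easy to state in code. The parameter values below are arbitrary illustrations: K is the carrying capacity, while b and c are the two growth parameters that the ensemble formulation treats as statistically distributed.

```python
import numpy as np

def gompertz(t, K=100.0, b=5.0, c=0.5):
    """Gompertz growth curve N(t) = K * exp(-b * exp(-c * t))."""
    return K * np.exp(-b * np.exp(-c * t))

t = np.linspace(0.0, 20.0, 200)
N = gompertz(t)
```

    The curve rises monotonically from K * exp(-b) at t = 0 and saturates at the carrying capacity K, with the characteristic asymmetric S-shape that distinguishes it from logistic growth.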

  16. Statistically significant changes in ground thermal conditions of alpine Austria during the last decade

    Science.gov (United States)

    Kellerer-Pirklbauer, Andreas

    2016-04-01

    Longer data series (e.g. >10 a) of ground temperatures in alpine regions are helpful to improve the understanding of the effects of present climate change on the distribution and thermal characteristics of seasonal frost- and permafrost-affected areas. Beginning in 2004 - and more intensively since 2006 - a permafrost and seasonal frost monitoring network was established in Central and Eastern Austria by the University of Graz. This network consists of c. 60 ground temperature (surface and near-surface) monitoring sites located at 1922-3002 m a.s.l., at latitude 46°55'-47°22'N and at longitude 12°44'-14°41'E. These data allow conclusions about general ground thermal conditions, potential permafrost occurrence, the trend during the observation period, and the regional pattern of changes. Calculations and analyses of several different temperature-related parameters were accomplished. At an annual scale, a region-wide statistically significant warming during the observation period was revealed by e.g. an increase in mean annual temperature values (mean, maximum) or the significant lowering of the surface frost number (F+). At a seasonal scale, no significant trend of any temperature-related parameter was revealed in most cases for spring (MAM) and autumn (SON). Winter (DJF) shows only a weak warming. In contrast, the summer (JJA) season in general reveals a significant warming, as confirmed by several different temperature-related parameters such as mean seasonal temperature, number of thawing degree days, number of freezing degree days, or days without night frost. On a monthly basis, August shows the most statistically robust and strongest warming of all months, although regional differences occur. Although the general ground temperature warming during the last decade is confirmed by the field data in the study region, complications in trend analyses arise from temperature anomalies (e.g. the warm winter 2006/07) or substantial variations in the winter

  17. Determining coding CpG islands by identifying regions significant for pattern statistics on Markov chains.

    Science.gov (United States)

    Singer, Meromit; Engström, Alexander; Schönhuth, Alexander; Pachter, Lior

    2011-09-23

    Recent experimental and computational work confirms that CpGs can be unmethylated inside coding exons, thereby showing that codons may be subjected to both genomic and epigenomic constraint. It is therefore of interest to identify coding CpG islands (CCGIs) that are regions inside exons enriched for CpGs. The difficulty in identifying such islands is that coding exons exhibit sequence biases determined by codon usage and constraints that must be taken into account. We present a method for finding CCGIs that showcases a novel approach we have developed for identifying regions of interest that are significant (with respect to a Markov chain) for the counts of any pattern. Our method begins with the exact computation of tail probabilities for the number of CpGs in all regions contained in coding exons, and then applies a greedy algorithm for selecting islands from among the regions. We show that the greedy algorithm provably optimizes a biologically motivated criterion for selecting islands while controlling the false discovery rate. We applied this approach to the human genome (hg18) and annotated CpG islands in coding exons. The statistical criterion we apply to evaluating islands reduces the number of false positives in existing annotations, while our approach to defining islands reveals significant numbers of undiscovered CCGIs in coding exons. Many of these appear to be examples of functional epigenetic specialization in coding exons.

  18. A visitor's guide to effect sizes: statistical significance versus practical (clinical) importance of research findings.

    Science.gov (United States)

    Hojat, Mohammadreza; Xu, Gang

    2004-01-01

    Effect sizes (ES) are an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: 1. The ES is a useful indicator of the practical (clinical) importance of research results that can be operationally defined as ranging from "negligible" to "moderate" to "important". 2. The ES has two advantages over statistical significance testing: (a) it is independent of the size of the sample; (b) it is a scale-free index. Therefore, the ES can be uniformly interpreted in different studies regardless of the sample size and the original scales of the variables. 3. Calculations of the ES are illustrated using examples of comparisons between two means, correlation coefficients, chi-square tests and two proportions, along with the appropriate formulas. 4. Operational definitions for the ESs are given, along with numerical examples for the purpose of illustration.
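A minimal sketch of one widely used ES for comparing two means, Cohen's d, with toy numbers (the "negligible"/"moderate"/"important" cutoffs are interpretive conventions, not computed here). It illustrates the sample-size independence noted in point 2:

```python
from math import sqrt

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    pooled_sd = sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# A 2-point difference on a scale with SD 4 is d = 0.5 whether n is 100 or 10:
d_large = cohens_d(52.0, 50.0, 4.0, 4.0, 100, 100)
d_small = cohens_d(52.0, 50.0, 4.0, 4.0, 10, 10)
```

A p-value for the same difference would change drastically between the two sample sizes; d does not.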

  19. A common misapplication of statistical inference: Nuisance control with null-hypothesis significance tests.

    Science.gov (United States)

    Sassenhagen, Jona; Alday, Phillip M

    2016-11-01

    Experimental research on behavior and cognition frequently rests on stimulus or subject selection where not all characteristics can be fully controlled, even when attempting strict matching. For example, when contrasting patients to controls, variables such as intelligence or socioeconomic status are often correlated with patient status. Similarly, when presenting word stimuli, variables such as word frequency are often correlated with primary variables of interest. One procedure very commonly employed to control for such nuisance effects is conducting inferential tests on confounding stimulus or subject characteristics. For example, if word length is not significantly different for two stimulus sets, they are considered as matched for word length. Such a test has high error rates and is conceptually misguided. It reflects a common misunderstanding of statistical tests: interpreting significance not to refer to inference about a particular population parameter, but about 1. the sample in question, 2. the practical relevance of a sample difference (so that a nonsignificant test is taken to indicate evidence for the absence of relevant differences). We show inferential testing for assessing nuisance effects to be inappropriate both pragmatically and philosophically, present a survey showing its high prevalence, and briefly discuss an alternative in the form of regression including nuisance variables.
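The alternative the authors advocate, entering the nuisance variable into a regression rather than significance-testing the group difference on it, can be sketched on synthetic data (variable names and numbers are purely illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
group = np.repeat([0.0, 1.0], n // 2)                  # e.g. control vs patient
iq = 100.0 + 5.0 * group + rng.normal(0.0, 10.0, n)    # nuisance correlated with group
score = 2.0 * group + 0.3 * iq + rng.normal(0.0, 1.0, n)

# Regress the outcome on group AND the nuisance variable, so the group
# coefficient is estimated conditional on the nuisance, instead of running a
# (misguided) significance test for "matching" on iq.
X = np.column_stack([np.ones(n), group, iq])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
group_effect = beta[1]    # estimates the true group effect of 2.0
```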

  20. A statistical mechanics model of carbon nanotube macro-films

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    Carbon nanotube macro-films are two-dimensional films with micrometer thickness and centimeter by centimeter in-plane dimension. These carbon nanotube macroscopic assemblies have attracted significant attention from the material and mechanics communities recently because they can be easily handled and tailored to meet specific engineering needs. This paper reports the experimental methods on the preparation and characterization of single-walled carbon nanotube macro-films, and a statistical mechanics model on ...

  1. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem) while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  2. Modelling vocal anatomy's significant effect on speech

    NARCIS (Netherlands)

    de Boer, B.

    2010-01-01

    This paper investigates the effect of larynx position on the articulatory abilities of a humanlike vocal tract. Previous work has investigated models that were built to resemble the anatomy of existing species or fossil ancestors. This has led to conflicting conclusions about the relation between

  3. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of data and complex relationships. A rich variety of advanced and recent statistical modelling is mostly available in open source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made an interface in the form of an e-tutorial for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes the statistical modelling easier to apply and the models easier to compare in order to find the most appropriate model for the data.

  4. On Wiener filtering and the physics behind statistical modeling.

    Science.gov (United States)

    Marbach, Ralf

    2002-01-01

    The closed-form solution of the so-called statistical multivariate calibration model is given in terms of the pure component spectral signal, the spectral noise, and the signal and noise of the reference method. The "statistical" calibration model is shown to be as much grounded on the physics of the pure component spectra as any of the "physical" models. There are no fundamental differences between the two approaches since both are merely different attempts to realize the same basic idea, viz., the spectrometric Wiener filter. The concept of the application-specific signal-to-noise ratio (SNR) is introduced, which is a combination of the two SNRs from the reference and the spectral data. Both are defined and the central importance of the latter for the assessment and development of spectroscopic instruments and methods is explained. Other statistics like the correlation coefficient, prediction error, slope deficiency, etc., are functions of the SNR. Spurious correlations and other practically important issues are discussed in quantitative terms. Most important, it is shown how to use a priori information about the pure component spectra and the spectral noise in an optimal way, thereby making the distinction between statistical and physical calibrations obsolete and combining the best of both worlds. Companies and research groups can use this article to realize significant savings in cost and time for development efforts.
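The Wiener-filter view of a statistical calibration can be sketched on synthetic spectra: the calibration vector is the minimum-mean-square-error regression vector built from the spectral covariance and the signal-concentration cross-covariance. This is a toy construction with assumed names and dimensions; the paper's closed form in terms of pure component signal and noise is more general:

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples, n_channels = 300, 20

# Assumed pure-component spectral signal g (nonzero in a band of channels)
g = np.zeros(n_channels)
g[5:10] = 1.0
conc = rng.normal(size=n_samples)                       # reference concentrations
spectra = np.outer(conc, g) + 0.1 * rng.normal(size=(n_samples, n_channels))

# Wiener/regression vector b = S_x^{-1} s_xy, minimizing E[(b'x - y)^2]
Sx = spectra.T @ spectra / n_samples
sxy = spectra.T @ conc / n_samples
b = np.linalg.solve(Sx, sxy)
pred = spectra @ b    # predicted concentrations from spectra alone
```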

  5. The Hall current system revealed as a statistically significant pattern during fast flows

    Directory of Open Access Journals (Sweden)

    K. Snekvik

    2008-11-01

    Full Text Available We have examined the dawn-dusk component of the magnetic field, BY, in the night side current sheet during fast flows in the neutral sheet. 237 h of Cluster data from the plasma sheet between 2 August 2002 and 2 October 2002 have been analysed. The spatial pattern of BY as a function of the distance from the centre of the current sheet has been estimated by using a Harris current sheet model. We have used the average slopes of these patterns to estimate earthward and tailward currents. For earthward fast flows there is a tailward current in the inner central plasma sheet and an earthward current in the outer central plasma sheet on average. For tailward fast flows the currents are oppositely directed. These observations are interpreted as signatures of Hall currents in the reconnection region or as field aligned currents which are connected with these currents. Although fast flows often are associated with a dawn-dusk current wedge, we believe that we have managed to filter out such currents from our statistical patterns.
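The Harris current sheet model used to organize BY against distance from the current-sheet centre has a standard closed form; a sketch with illustrative parameter values (B0 in nT, distance z in units of the sheet half-thickness L):

```python
from math import tanh, cosh

def harris_bx(z, B0=20.0, L=1.0):
    """Harris sheet tangential field: B_x(z) = B0 * tanh(z / L)."""
    return B0 * tanh(z / L)

def harris_jy(z, B0=20.0, L=1.0):
    """Current density profile, proportional to dB_x/dz = (B0 / L) / cosh(z / L)**2."""
    return (B0 / L) / cosh(z / L) ** 2

field_at_centre = harris_bx(0.0)    # 0: the field reverses sign at the sheet centre
peak_current = harris_jy(0.0)       # the current density peaks at the centre
```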

  6. Post hoc pattern matching: assigning significance to statistically defined expression patterns in single channel microarray data

    Directory of Open Access Journals (Sweden)

    Blalock Eric M

    2007-07-01

    Full Text Available Abstract Background Researchers using RNA expression microarrays in experimental designs with more than two treatment groups often identify statistically significant genes with ANOVA approaches. However, the ANOVA test does not discriminate which of the multiple treatment groups differ from one another. Thus, post hoc tests, such as linear contrasts, template correlations, and pairwise comparisons are used. Linear contrasts and template correlations work extremely well, especially when the researcher has a priori information pointing to a particular pattern/template among the different treatment groups. Further, all pairwise comparisons can be used to identify particular, treatment group-dependent patterns of gene expression. However, these approaches are biased by the researcher's assumptions, and some treatment-based patterns may fail to be detected using these approaches. Finally, different patterns may have different probabilities of occurring by chance, importantly influencing researchers' conclusions about a pattern and its constituent genes. Results We developed a four-step, post hoc pattern matching (PPM) algorithm to automate single channel gene expression pattern identification/significance. First, 1-Way Analysis of Variance (ANOVA), coupled with post hoc 'all pairwise' comparisons, is calculated for all genes. Second, for each ANOVA-significant gene, all pairwise contrast results are encoded to create unique pattern ID numbers. The number of genes found in each pattern in the data is identified as that pattern's 'actual' frequency. Third, using Monte Carlo simulations, those patterns' frequencies are estimated in random data (the 'random' gene pattern frequency). Fourth, a Z-score for overrepresentation of the pattern is calculated ('actual' against 'random' gene pattern frequencies).
We wrote a Visual Basic program (StatiGen) that automates the PPM procedure, constructs an Excel workbook with standardized graphs of overrepresented patterns, and lists of
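Steps three and four of the PPM procedure (Monte Carlo estimation of a pattern's chance frequency, then a Z-score) can be sketched in Python under a simplifying assumption that each gene shows the pattern independently with a fixed probability; the actual procedure simulates random data through the full ANOVA/contrast pipeline:

```python
import random

def pattern_zscore(actual_count, n_genes, pattern_prob, n_sims=2000, seed=1):
    """Z-score of a pattern's observed frequency against simulated random data.

    Simulates how many of n_genes would show the pattern by chance (each gene
    independently with probability pattern_prob), then standardizes actual_count
    against the simulated mean and standard deviation.
    """
    rng = random.Random(seed)
    sims = [sum(rng.random() < pattern_prob for _ in range(n_genes))
            for _ in range(n_sims)]
    mean = sum(sims) / n_sims
    var = sum((s - mean) ** 2 for s in sims) / (n_sims - 1)
    return (actual_count - mean) / var ** 0.5

# 40 genes show a pattern expected in only ~10 of 500 by chance: strongly overrepresented.
z = pattern_zscore(actual_count=40, n_genes=500, pattern_prob=0.02)
```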

  7. Statistical Model Checking of Rich Models and Properties

    DEFF Research Database (Denmark)

    Poulsen, Danny Bøgsted

    Software is embedded in increasing fashion within safety- and business-critical processes of society. Errors in these embedded systems can lead to human casualties or severe monetary loss. Model checking technology has proven formal methods capable of finding and correcting errors in software. However, software is approaching the boundary in terms of the complexity and size that model checking can handle. Furthermore, software systems nowadays interact more frequently with their environment, hence accurately modelling such systems requires modelling the environment as well - resulting in undecidability issues for the traditional model checking approaches. Statistical model checking has proven itself a valuable supplement to model checking, and this thesis is concerned with extending this software validation technique to stochastic hybrid systems. The thesis consists of two parts: the first part...

  8. Statistical modelling of transcript profiles of differentially regulated genes

    Directory of Open Access Journals (Sweden)

    Sergeant Martin J

    2008-07-01

    allowed 11% of the Escherichia coli features to be fitted by an exponential function, and 25% of the Rattus norvegicus features could be described by the critical exponential model, all with statistical significance of p. Conclusion: The statistical non-linear regression approaches presented in this study provide detailed biologically oriented descriptions of individual gene expression profiles, using biologically variable data to generate a set of defining parameters. These approaches have application to the modelling and greater interpretation of profiles obtained across a wide range of platforms, such as microarrays. Through careful choice of appropriate model forms, such statistical regression approaches allow an improved comparison of gene expression profiles, and may provide an approach for the greater understanding of common regulatory mechanisms between genes.

  9. Isoscaling in Statistical Sequential Decay Model

    Institute of Scientific and Technical Information of China (English)

    TIAN Wen-Dong; SU Qian-Min; WANG Hong-Wei; WANG Kun; YAN Ting-ZHi; MA Yu-Gang; CAI Xiang-Zhou; FANG De-Qing; GUO Wei; MA Chun-Wang; LIU Gui-Hua; SHEN Wen-Qing; SHI Yu

    2007-01-01

    A sequential decay model is used to study isoscaling, i.e. the factorization of the isotope ratios from sources of different isospins and sizes, over a broad range of excitation energies, into fugacity terms of proton and neutron number, R21(N, Z) = Y2(N, Z)/Y1(N, Z) = C exp(αN + βZ). It is found that the isoscaling parameters α and β have a strong dependence on the isospin difference of the equilibrated source and on the excitation energy; no significant influence of the source size on α and β has been observed. α and β decrease with the excitation energy and are linear functions of 1/T and Δ(Z/A)² or Δ(N/A)² of the sources. The symmetry energy coefficient Csym is constrained from the relationship of α and source Δ(Z/A)², and of β and source Δ(N/A)².
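Because ln R21 = ln C + αN + βZ is linear in N and Z, the isoscaling parameters can be read off from log-ratios of the yield ratios; a sketch on synthetic yields (all numerical values illustrative):

```python
from math import exp, log

# Synthetic yield ratios from two sources obeying R21(N, Z) = C * exp(alpha*N + beta*Z)
C_true, alpha_true, beta_true = 1.2, 0.4, -0.3
R21 = {(N, Z): C_true * exp(alpha_true * N + beta_true * Z)
       for N in range(2, 8) for Z in range(2, 8)}

# Step in N at fixed Z isolates alpha; step in Z at fixed N isolates beta.
alpha = log(R21[(5, 3)] / R21[(4, 3)])    # change in ln R21 per unit N
beta = log(R21[(4, 4)] / R21[(4, 3)])     # change in ln R21 per unit Z
```

With noisy yields one would instead fit the plane ln R21 = ln C + αN + βZ by least squares over all isotopes.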

  10. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics are kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities...

  11. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  12. FES Training in Aging: interim results show statistically significant improvements in mobility and muscle fiber size

    Directory of Open Access Journals (Sweden)

    Helmut Kern

    2012-03-01

    Full Text Available Aging is a multifactorial process that is characterized by decline in muscle mass and performance. Several factors, including reduced exercise, poor nutrition and modified hormonal metabolism, are responsible for changes in the rates of protein synthesis and degradation that drive skeletal muscle mass reduction, with a consequent decline of force generation and mobility functional performance. Seniors with a normal life style were enrolled: two groups in Vienna (n=32) and two groups in Bratislava (n=19). All subjects were healthy and declared not to have any specific physical/disease problems. The two Vienna groups of seniors exercised for 10 weeks with two different types of training (leg press at the hospital or home-based functional electrical stimulation, h-b FES). Demographic data (age, height and weight) were recorded, and before and after the training period the patients were submitted to mobility functional analyses and muscle biopsies. The mobility functional analyses were: 1. gait speed (10 m test at fastest speed, in m/s); 2. the time the subject needed to rise from a chair five times (5x Chair-Rise, in s); 3. Timed-Up-and-Go test, in s; 4. Stair test, in s; 5. isometric measurement of quadriceps force (torque/kg, in Nm/kg); and 6. dynamic balance, in mm. Preliminary analyses of muscle biopsies from the quadriceps in some of the Vienna and Bratislava patients present morphometric results consistent with their functional behaviors. The statistically significant improvements in functional testing reported here demonstrate the effectiveness of h-b FES, and strongly support h-b FES as a safe home-based method to improve contractility and performance of ageing muscles.

  13. Spatio-temporal statistical models with applications to atmospheric processes

    Energy Technology Data Exchange (ETDEWEB)

    Wikle, C.K.

    1996-12-31

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  14. Spatio-temporal statistical models with applications to atmospheric processes

    Energy Technology Data Exchange (ETDEWEB)

    Wikle, Christopher K. [Iowa State Univ., Ames, IA (United States)

    1996-01-01

    This doctoral dissertation is presented as three self-contained papers. An introductory chapter considers traditional spatio-temporal statistical methods used in the atmospheric sciences from a statistical perspective. Although this section is primarily a review, many of the statistical issues considered have not been considered in the context of these methods and several open questions are posed. The first paper attempts to determine a means of characterizing the semiannual oscillation (SAO) spatial variation in the northern hemisphere extratropical height field. It was discovered that the midlatitude SAO in 500 hPa geopotential height could be explained almost entirely as a result of spatial and temporal asymmetries in the annual variation of stationary eddies. It was concluded that the mechanism for the SAO in the northern hemisphere is a result of land-sea contrasts. The second paper examines the seasonal variability of mixed Rossby-gravity waves (MRGW) in the lower stratosphere over the equatorial Pacific. Advanced cyclostationary time series techniques were used for analysis. It was found that there are significant twice-yearly peaks in MRGW activity. Analyses also suggested a convergence of horizontal momentum flux associated with these waves. In the third paper, a new spatio-temporal statistical model is proposed that attempts to consider the influence of both temporal and spatial variability. This method is mainly concerned with prediction in space and time, and provides a spatially descriptive and temporally dynamic model.

  15. Evaluation of significantly modified water bodies in Vojvodina by using multivariate statistical techniques

    Directory of Open Access Journals (Sweden)

    Vujović Svetlana R.

    2013-01-01

    Full Text Available This paper illustrates the utility of multivariate statistical techniques for the analysis and interpretation of water quality data sets and the identification of pollution sources/factors, with a view to getting better information about water quality and the design of monitoring networks for effective management of water resources. Multivariate statistical techniques, such as factor analysis (FA)/principal component analysis (PCA) and cluster analysis (CA), were applied for the evaluation of variations and for the interpretation of a water quality data set of the natural water bodies obtained during a year of monitoring (2010) of 13 parameters at 33 different sites. FA/PCA attempts to explain the correlations between the observations in terms of the underlying factors, which are not directly observable. Factor analysis is applied to physico-chemical parameters of natural water bodies with the aim of classification and data summation, as well as segmentation of heterogeneous data sets into smaller homogeneous subsets. Factor loadings were categorized as strong and moderate, corresponding to absolute loading values of >0.75 and 0.75-0.50, respectively. Four principal factors were obtained with eigenvalues >1, summing to more than 78% of the total variance in the water data sets, which is adequate to give good prior information regarding data structure. Each factor that is significantly related to specific variables represents a different dimension of water quality. The first factor, F1, accounting for 28% of the total variance, represents the hydrochemical dimension of water quality. The second factor, F2, accounting for 18% of the total variance, may be taken as a factor of water eutrophication. The third factor, F3, accounting for 17% of the total variance, represents the influence of point sources of pollution on water quality. The fourth factor, F4, accounting for 13% of the total variance, may be taken as an ecological dimension of water quality.
Cluster analysis (CA) is an
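The PCA workflow described (extraction on the correlation matrix, retention of components with eigenvalue > 1, classification of loadings as strong when |loading| > 0.75) can be sketched with numpy on synthetic data; the dimensions mimic the 33 sites x 13 parameters, but all data are simulated:

```python
import numpy as np

rng = np.random.default_rng(3)
# Simulated "water quality" matrix: 33 sites x 13 parameters driven by 2 latent factors
latent = rng.normal(size=(33, 2))
true_loadings = rng.normal(size=(2, 13))
data = latent @ true_loadings + 0.3 * rng.normal(size=(33, 13))

# PCA on the correlation matrix of the data
corr = np.corrcoef(data, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

retained = eigvals > 1.0                             # Kaiser criterion: eigenvalue > 1
explained = eigvals[retained].sum() / eigvals.sum()  # share of total variance explained

# Loadings of the two leading components; |loading| > 0.75 counts as "strong"
loadings = eigvecs[:, :2] * np.sqrt(eigvals[:2])
strong = np.abs(loadings) > 0.75
```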

  16. Statistical significance of rising and oscillatory trends in global ocean and land temperature in the past 160 years

    CERN Document Server

    Østvand, Lene; Rypdal, Martin

    2013-01-01

    Various interpretations of the notion of a trend in the context of global warming are discussed, contrasting the difference between viewing a trend as the deterministic response to an external forcing and viewing it as a slow variation which can be separated from the background spectral continuum of long-range persistent climate noise. The emphasis in this paper is on the latter notion, and a general scheme is presented for testing a multi-parameter trend model against a null hypothesis which models the observed climate record as an autocorrelated noise. The scheme is applied to the instrumental global sea-surface temperature record and the global land-temperature record. A trend model comprising a linear plus an oscillatory trend with a period of approximately 60 yr, and the statistical significance of the trends, are tested against three different null models: first-order autoregressive process, fractional Gaussian noise, and fractional Brownian motion. The linear trend is significant in all cases, but the o...
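The significance test against the first of the three null models, a first-order autoregressive process, can be sketched as a Monte Carlo test of the fitted linear trend (a simplified, hypothetical setup; the paper also treats an oscillatory trend term and the fractional Gaussian noise and fractional Brownian motion nulls):

```python
import random

def linear_trend(x):
    """OLS slope of x against the time index 0..n-1."""
    n = len(x)
    t_mean = (n - 1) / 2
    x_mean = sum(x) / n
    num = sum((t - t_mean) * (xi - x_mean) for t, xi in enumerate(x))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def ar1_trend_pvalue(record, phi, sigma, n_sims=500, seed=2):
    """Fraction of AR(1) noise realizations whose |trend| exceeds the record's."""
    rng = random.Random(seed)
    observed = abs(linear_trend(record))
    n, count = len(record), 0
    for _ in range(n_sims):
        x, xi = [], 0.0
        for _ in range(n):
            xi = phi * xi + rng.gauss(0.0, sigma)   # AR(1): x_t = phi*x_{t-1} + noise
            x.append(xi)
        if abs(linear_trend(x)) >= observed:
            count += 1
    return count / n_sims

# A 160-step record with a clear warming trend of 0.01 per step:
record = [0.01 * t for t in range(160)]
p = ar1_trend_pvalue(record, phi=0.6, sigma=0.1)
```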

  17. The issue of statistical power for overall model fit in evaluating structural equation models

    Directory of Open Access Journals (Sweden)

    Richard HERMIDA

    2015-06-01

    Full Text Available Statistical power is an important concept for psychological research. However, examining the power of a structural equation model (SEM) is rare in practice. This article provides an accessible review of the concept of statistical power for the Root Mean Square Error of Approximation (RMSEA) index of overall model fit in structural equation modeling. By way of example, we examine the current state of power in the literature by reviewing studies in top Industrial-Organizational (I/O) Psychology journals using SEMs. Results indicate that in many studies, power is very low, which implies acceptance of invalid models. Additionally, we examined methodological situations which may have an influence on the statistical power of SEMs. Results showed that power varies significantly as a function of model type and whether or not the model is the main model for the study. Finally, results indicated that power is significantly related to model fit statistics used in evaluating SEMs. The results from this quantitative review imply that researchers should be more vigilant with respect to power in structural equation modeling. We therefore conclude by offering methodological best practices to increase confidence in the interpretation of structural equation modeling results with respect to statistical power issues.

  18. Comparison of Statistical Models for Regional Crop Trial Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qun-yuan; KONG Fan-ling

    2002-01-01

    Based on a review and comparison of the main statistical analysis models for estimating variety-environment cell means in regional crop trials, a new statistical model, the LR-PCA composite model, was proposed, and the predictive precision of these models was compared by cross validation on example data. Results showed that the order of model precision was LR-PCA model > AMMI model > PCA model > Treatment Means (TM) model > Linear Regression (LR) model > Additive Main Effects ANOVA model. The precision gain factor of the LR-PCA model was 1.55, an increase of 8.4% compared with AMMI.

  19. Enhanced surrogate models for statistical design exploiting space mapping technology

    DEFF Research Database (Denmark)

    Koziel, Slawek; Bandler, John W.; Mohamed, Achmed S.;

    2005-01-01

    We present advances in microwave and RF device modeling exploiting Space Mapping (SM) technology. We propose new SM modeling formulations utilizing input mappings, output mappings, frequency scaling and quadratic approximations. Our aim is to enhance circuit models for statistical analysis...

  20. How to practise Bayesian statistics outside the Bayesian church: What philosophy for Bayesian statistical modelling?

    NARCIS (Netherlands)

    Borsboom, D.; Haig, B.D.

    2013-01-01

    Unlike most other statistical frameworks, Bayesian statistical inference is wedded to a particular approach in the philosophy of science (see Howson & Urbach, 2006); this approach is called Bayesianism. Rather than being concerned with model fitting, this position in the philosophy of science primar

  1. Testing statistical significance scores of sequence comparison methods with structure similarity

    NARCIS (Netherlands)

    Hulsen, T.; Vlieg, J. de; Leunissen, J.A.M.; Groenen, P.M.

    2006-01-01

    BACKGROUND: In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical s

  2. Testing statistical significance scores of sequence comparison methods with structure similarity

    NARCIS (Netherlands)

    Hulsen, T.; Vlieg, de J.; Leunissen, J.A.M.; Groenen, P.

    2006-01-01

    Background - In the past years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search is not only determined by the algorithm but also by the statistical

  3. Statistical models of shape optimisation and evaluation

    CERN Document Server

    Davies, Rhodri; Taylor, Chris

    2014-01-01

    Deformable shape models have wide application in computer vision and biomedical image analysis. This book addresses a key issue in shape modelling: establishment of a meaningful correspondence between a set of shapes. Full implementation details are provided.

  4. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

    Full Text Available Statistical models for integrated circuits (IC) allow us to estimate the percentage of acceptable devices in a batch before fabrication. At present, the Pelgrom model is the statistical model most widely accepted in industry; however, it was derived from a micrometer technology, which does not guarantee reliability in nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each one of the stages and design purposes.

  5. Statistics-based investigation on typhoon transition modeling

    DEFF Research Database (Denmark)

    Zhang, Shuoyun; Nishijima, Kazuyoshi

    The present study revisits the statistical modeling of typhoon transition. The objective of the study is to provide insights on plausible statistical typhoon transition models based on extensive statistical analysis. First, the correlation structures of the typhoon transition are estimated in terms ... and the seasonality are taken into account by developing the models for different spatial grids and seasons separately. An appropriate size of spatial grids is investigated. The statistical characteristics of the random residual terms in the models are also examined. Finally, Monte Carlo simulations are performed ...

  6. Statistical image processing and multidimensional modeling

    CERN Document Server

    Fieguth, Paul

    2010-01-01

    Images are all around us! The proliferation of low-cost, high-quality imaging devices has led to an explosion in acquired images. When these images are acquired from a microscope, telescope, satellite, or medical imaging device, there is a statistical image processing task: the inference of something - an artery, a road, a DNA marker, an oil spill - from imagery, possibly noisy, blurry, or incomplete. A great many textbooks have been written on image processing. However this book does not so much focus on images, per se, but rather on spatial data sets, with one or more measurements taken over

  7. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  8. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  9. Statistical models for nuclear decay from evaporation to vaporization

    CERN Document Server

    Cole, A J

    2000-01-01

    Elements of equilibrium statistical mechanics: Introduction. Microstates and macrostates. Sub-systems and convolution. The Boltzmann distribution. Statistical mechanics and thermodynamics. The grand canonical ensemble. Equations of state for ideal and real gases. Pseudo-equilibrium. Statistical models of nuclear decay. Nuclear physics background: Introduction. Elements of the theory of nuclear reactions. Quantum mechanical description of scattering from a potential. Decay rates and widths. Level and state densities in atomic nuclei. Angular momentum in quantum mechanics. History of statistical

  10. Search for semileptonic decays of photoproduced charmed mesons. [100 to 300 GeV, no statistically significant evidence

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, R. N.

    1977-01-01

    In the broad band neutral beam at Fermilab, a search for photoproduction of charmed D mesons was done using photons of 100 to 300 GeV. The reaction considered was γ + Be → DD̄ + X, D → leptons + ..., K⁰s nπ±. No statistically significant evidence for D production is observed based on the K⁰s nπ± mass spectrum. The sensitivity of the search is commensurate with theoretical estimates of σ(γp → DD̄ + X) ≈ 500 nb; however, this is dependent on branching ratios and photoproduction models. Data are given on a similar search for semileptonic decays of charmed baryons. 48 references.

  11. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  12. The Statistical Significance Test of Regional Climate Change Caused by Land Use and Land Cover Variation in West China

    Institute of Scientific and Technical Information of China (English)

    WANG Hanjie; SHI Weilai; CHEN Xiaohong

    2006-01-01

    The West Development Policy being implemented in China is causing significant land use and land cover (LULC) changes in West China. With the up-to-date satellite database of the Global Land Cover Characteristics Database (GLCCD) that characterizes the lower boundary conditions, the regional climate model RIEMS-TEA is used to simulate possible impacts of the significant LULC variation. The model was run for five continuous three-month periods from 1 June to 1 September of 1993, 1994, 1995, 1996, and 1997, and the results of the five groups are examined by means of a Student t-test to identify the statistical significance of regional climate variation. The main results are: (1) The regional climate is affected by the LULC variation because the equilibrium of water and heat transfer at the air-vegetation interface is changed. (2) The integrated impact of the LULC variation on regional climate is not limited to West China, where the LULC varies, but extends to some areas in the model domain where the LULC does not vary at all. (3) The East Asian monsoon system and its vertical structure are adjusted by the large-scale LULC variation in western China, where the consequences are the enhancement of the westward water vapor transfer from the east coast and the associated increase of wet-hydrostatic energy in the middle-upper atmospheric layers. (4) The ecological engineering in West China affects significantly the regional climate in Northwest China, North China and the middle-lower reaches of the Yangtze River; there are obvious effects in South, Northeast, and Southwest China, but minor effects in Tibet.
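The per-grid-cell significance test described above amounts to a two-sample Student t-test on the seasonal means from the control and perturbed runs; the numbers below are synthetic stand-ins, not model output.

```python
# Minimal sketch of the significance test: five seasonal means from a
# control run vs. five from a land-cover-perturbed run, one grid cell.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
years = 5
control = rng.normal(25.0, 0.5, size=years)   # control-run seasonal means
changed = rng.normal(25.8, 0.5, size=years)   # LULC-perturbed seasonal means

t, p = ttest_ind(changed, control)
significant = p < 0.05   # cell flagged if the climate change is significant
```

Applying the same test independently at every grid cell yields the significance maps the study uses to separate robust LULC-driven signals from internal model variability.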

  13. 12th Workshop on Stochastic Models, Statistics and Their Applications

    CERN Document Server

    Rafajłowicz, Ewaryst; Szajowski, Krzysztof

    2015-01-01

    This volume presents the latest advances and trends in stochastic models and related statistical procedures. Selected peer-reviewed contributions focus on statistical inference, quality control, change-point analysis and detection, empirical processes, time series analysis, survival analysis and reliability, statistics for stochastic processes, big data in technology and the sciences, statistical genetics, experiment design, and stochastic models in engineering. Stochastic models and related statistical procedures play an important part in furthering our understanding of the challenging problems currently arising in areas of application such as the natural sciences, information technology, engineering, image analysis, genetics, energy and finance, to name but a few. This collection arises from the 12th Workshop on Stochastic Models, Statistics and Their Applications, Wroclaw, Poland.

  14. Estimating Predictive Variance for Statistical Gas Distribution Modelling

    Science.gov (United States)

    Lilienthal, Achim J.; Asadi, Sahar; Reggente, Matteo

    2009-05-01

    Recent publications in statistical gas distribution modelling have proposed algorithms that model the mean and variance of a distribution. This paper argues that estimating the predictive concentration variance is not merely a gradual improvement but a significant step forward for the field. This is, first, because such models much better fit the particular structure of gas distributions, which exhibit strong fluctuations with considerable spatial variations as a result of the intermittent character of gas dispersal. Second, because estimating the predictive variance makes it possible to evaluate the model quality in terms of the data likelihood. This offers a solution to the problem of ground truth evaluation, which has always been a critical issue for gas distribution modelling. It also enables solid comparisons of different modelling approaches, and provides the means to learn meta-parameters of the model, to determine when the model should be updated or re-initialised, or to suggest new measurement locations based on the current model. We also point out directions of related ongoing or potential future research work.
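The likelihood-based evaluation idea can be made concrete: score held-out measurements under the model's predictive mean and variance. This is a minimal sketch with invented numbers, and it assumes Gaussian predictive distributions, which the paper's models need not use.

```python
# Scoring a gas-distribution model that predicts both a mean and a
# variance at each location, via the average predictive log-likelihood.
import numpy as np

def gaussian_loglik(y, mu, var):
    """Average log-likelihood of observations y under N(mu, var)."""
    var = np.maximum(var, 1e-12)   # guard against degenerate variances
    return np.mean(-0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var))

y = np.array([0.9, 1.1, 1.0])      # held-out concentration readings

# A model whose predictive variance matches the observed spread scores
# higher than an overconfident model with the same mean:
honest = gaussian_loglik(y, mu=1.0, var=0.01)
overconfident = gaussian_loglik(y, mu=1.0, var=0.0001)
```

This is exactly why variance estimation enables model comparison: two models with identical mean maps can be ranked by how well their uncertainty matches the data.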

  15. Functional summary statistics for the Johnson-Mehl model

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    This paper invents four functional summary statistics adapted to the Johnson-Mehl model, with two of them based on the second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The functional summary statistics' theoretical properties are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitters dataset.

  16. RM-structure alignment based statistical machine translation model

    Institute of Scientific and Technical Information of China (English)

    Sun Jiadong; Zhao Tiejun

    2008-01-01

    A novel model based on structure alignments is proposed for statistical machine translation in this paper. The meta-structure and the sequence of meta-structures for a parse tree are defined. During the translation process, a parse tree is decomposed to deal with structure divergence, and the alignments can be constructed at different levels of recombination of meta-structure (RM). This method can perform structure mapping across the sub-tree structures between languages. As a result, we get not only the translation for the target language, but also the sequence of meta-structures of its parse tree at the same time. Experiments show that the model, in the framework of a log-linear model, has better generative ability and significantly outperforms Pharaoh, a phrase-based system.

  17. Statistical modeling and recognition of surgical workflow.

    Science.gov (United States)

    Padoy, Nicolas; Blum, Tobias; Ahmadi, Seyed-Ahmad; Feussner, Hubertus; Berger, Marie-Odile; Navab, Nassir

    2012-04-01

    In this paper, we contribute to the development of context-aware operating rooms by introducing a novel approach to modeling and monitoring the workflow of surgical interventions. We first propose a new representation of interventions in terms of multidimensional time-series formed by synchronized signals acquired over time. We then introduce methods based on Dynamic Time Warping and Hidden Markov Models to analyze and process this data. This results in workflow models combining low-level signals with high-level information such as predefined phases, which can be used to detect actions and trigger an event. Two methods are presented to train these models, using either fully or partially labeled training surgeries. Results are given based on tool usage recordings from sixteen laparoscopic cholecystectomies performed by several surgeons.
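A minimal version of the Dynamic Time Warping primitive that the approach builds on might look as follows; this is a textbook formulation with a simple absolute-difference cost, not the authors' implementation.

```python
# Classic dynamic-programming DTW distance between two 1-D signals.
import numpy as np

def dtw(a, b):
    """DTW distance with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A time-stretched copy of a signal stays at distance zero under DTW,
# which is why it suits surgeries performed at varying speeds:
sig = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 1, 1, 2, 3, 3, 2, 1, 0]
```

The tolerance to local time stretching is the property that makes DTW (and, probabilistically, HMMs) a natural fit for aligning surgical signal recordings of different durations.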

  18. Statistical modelling of fine red wine production

    OpenAIRE

    María Rosa Castro; Marcelo Eduardo Echegaray; Rosa Ana Rodríguez; Stella Maris Udaquiola

    2010-01-01

    Producing wine is a very important economic activity in the province of San Juan in Argentina; it is therefore most important to predict production regarding the quantity of raw material needed. This work was aimed at obtaining a model relating kilograms of crushed grape to the litres of wine so produced. Such model will be used for predicting precise future values and confidence intervals for determined quantities of crushed grapes. Data from a vineyard in the province of San Juan was ...

  19. On the Logical Development of Statistical Models.

    Science.gov (United States)

    1983-12-01

    parameters t2. Type I models include scalar and vectorial probability distributions. Usually, the noise has an expected value equal to zero, so that ... qualitative variables. As might be expected, the vectorial representation of all these types of models lagged behind the scalar forms. The first ... (1978). "Modelos con parametros variables en el analisis de series temporales" Questiio, 4, 2, 75-87. [25] Seal, H. L. (1967). "The historical

  20. A Statistical Quality Model for Data-Driven Speech Animation.

    Science.gov (United States)

    Ma, Xiaohan; Deng, Zhigang

    2012-11-01

    In recent years, data-driven speech animation approaches have achieved significant successes in terms of animation quality. However, how to automatically evaluate the realism of novel synthesized speech animations has been an important yet unsolved research problem. In this paper, we propose a novel statistical model (called SAQP) to automatically predict the quality of on-the-fly synthesized speech animations by various data-driven techniques. Its essential idea is to construct a phoneme-based, Speech Animation Trajectory Fitting (SATF) metric to describe speech animation synthesis errors and then build a statistical regression model to learn the association between the obtained SATF metric and the objective speech animation synthesis quality. Through delicately designed user studies, we evaluate the effectiveness and robustness of the proposed SAQP model. To the best of our knowledge, this work is the first-of-its-kind, quantitative quality model for data-driven speech animation. We believe it is the important first step to remove a critical technical barrier for applying data-driven speech animation techniques to numerous online or interactive talking avatar applications.

  1. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil-fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares, and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
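An adaptive MOS correction of the kind compared in the paper can be sketched as recursive least squares with a forgetting factor; the regression structure (a linear map from raw NWP wind speed to observed speed) and all numbers below are illustrative assumptions.

```python
# Recursive least squares with a forgetting factor, used as an adaptive
# MOS correction of raw NWP wind-speed forecasts.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.99):
        self.lam = lam                       # forgetting factor (< 1: adaptive)
        self.theta = np.zeros(n_params)      # MOS correction coefficients
        self.P = np.eye(n_params) * 1e3      # inverse information matrix

    def update(self, x, y):
        """One forecast/observation pair: x = regressors, y = observed."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)                   # gain vector
        self.theta = self.theta + k * (y - x @ self.theta)
        self.P = (self.P - np.outer(k, Px)) / self.lam

rng = np.random.default_rng(2)
mos = RLS(2)
for _ in range(500):
    nwp = rng.uniform(0, 20)                           # raw NWP wind speed
    obs = 0.8 * nwp + 1.5 + rng.normal(0, 0.3)         # local observation
    mos.update(np.array([nwp, 1.0]), obs)
# mos.theta should now be close to the true correction (0.8, 1.5)
```

The forgetting factor is what lets the estimator track the time-dependent forecast bias the abstract describes, discounting old data as the NWP model or season changes.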

  2. Book review: Statistical Analysis and Modelling of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Møller, Jesper

    2009-01-01

    Statistical Analysis and Modelling of Spatial Point Patterns by J. Illian, A. Penttinen, H. Stoyan and D. Stoyan. Wiley (2008), ISBN 9780470014912.

  3. Confidence Intervals: From tests of statistical significance to confidence intervals, range hypotheses and substantial effects

    Directory of Open Access Journals (Sweden)

    Dominic Beaulieu-Prévost

    2006-03-01

    Full Text Available For the last 50 years of research in quantitative social sciences, the empirical evaluation of scientific hypotheses has been based on the rejection or not of the null hypothesis. However, more than 300 articles demonstrated that this method was problematic. In summary, null hypothesis testing (NHT) is unfalsifiable, its results depend directly on sample size, and the null hypothesis is both improbable and not plausible. Consequently, alternatives to NHT such as confidence intervals (CI) and measures of effect size are starting to be used in scientific publications. The purpose of this article is, first, to provide the conceptual tools necessary to implement an approach based on confidence intervals, and second, to briefly demonstrate why such an approach is an interesting alternative to an approach based on NHT. As demonstrated in the article, the proposed CI approach avoids most problems related to a NHT approach and can often improve the scientific and contextual relevance of the statistical interpretations by testing range hypotheses instead of a point hypothesis and by defining the minimal value of a substantial effect. The main advantage of such a CI approach is that it replaces the notion of statistical power by an easily interpretable three-value logic (probable presence of a substantial effect, probable absence of a substantial effect, and probabilistic undetermination). The demonstration includes a complete example.
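The three-value logic can be sketched for the simple case of a CI on a mean; the function name, threshold and data below are invented for illustration.

```python
# CI-based three-value verdict: compare a confidence interval on the
# mean against a user-defined minimal substantial effect.
import numpy as np
from scipy import stats

def ci_verdict(sample, minimal_effect, conf=0.95):
    """Classify the evidence for a substantial effect from a CI on the mean."""
    n = len(sample)
    m, se = np.mean(sample), stats.sem(sample)
    half = stats.t.ppf((1 + conf) / 2, n - 1) * se
    lo, hi = m - half, m + half
    if lo > minimal_effect or hi < -minimal_effect:
        return "probable presence of a substantial effect"
    if -minimal_effect < lo and hi < minimal_effect:
        return "probable absence of a substantial effect"
    return "probabilistic undetermination"

rng = np.random.default_rng(3)
print(ci_verdict(rng.normal(2.0, 1.0, 50), minimal_effect=0.5))
```

Unlike a p-value, the verdict distinguishes "no substantial effect" from "not enough data", which is the practical advantage the article argues for.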

  4. A statistical model of facial attractiveness.

    Science.gov (United States)

    Said, Christopher P; Todorov, Alexander

    2011-09-01

    Previous research has identified facial averageness and sexual dimorphism as important factors in facial attractiveness. The averageness and sexual dimorphism accounts provide important first steps in understanding what makes faces attractive, and should be valued for their parsimony. However, we show that they explain relatively little of the variance in facial attractiveness, particularly for male faces. As an alternative to these accounts, we built a regression model that defines attractiveness as a function of a face's position in a multidimensional face space. The model provides much more predictive power than the averageness and sexual dimorphism accounts and reveals previously unreported components of attractiveness. The model shows that averageness is attractive in some dimensions but not in others and resolves previous contradictory reports about the effects of sexual dimorphism on the attractiveness of male faces.

  5. Statistical modelling of traffic safety development

    DEFF Research Database (Denmark)

    Christens, Peter

    2004-01-01

    - Statistical modelling of traffic accidents (Statistisk modellering af trafikuheld), Trafikdage at Aalborg University, 2001. - Social characteristics of traffic victims (Sociale karakteristika hos trafikofre), Danish Transport Research Institute, 2001. - Models for traffic accidents, FERSI Young Researchers' Seminar, 2001. - Evaluation of the Danish Automatic Mobile Speed Camera Project ... In 2000, traffic accidents killed over 40,000 people in the EU and injured over 1.7 million. In Denmark in 2001 there were 6,861 police-reported traffic accidents with personal injury, resulting in 4,519 slightly injured, 3,946 seriously injured, and 431 killed. The general aim of this research work is to improve ...

  6. Exponential order statistic models of software reliability growth

    Science.gov (United States)

    Miller, D. R.

    1986-01-01

    Failure times of a software reliability growth process are modeled as order statistics of independent, non-identically distributed exponential random variables. The Jelinski-Moranda, Goel-Okumoto, Littlewood, Musa-Okumoto Logarithmic, and Power Law models are all special cases of Exponential Order Statistic Models, but there are many additional examples as well. Various characterizations, properties and examples of this class of models are developed and presented.
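The order-statistic view is easy to simulate for the Jelinski-Moranda special case, where the failure epochs are the order statistics of N i.i.d. exponential detection times; the fault count and rate below are arbitrary choices for illustration.

```python
# Jelinski-Moranda as an Exponential Order Statistic model: each of the
# N initial faults has an i.i.d. exponential detection time, and the
# observed failure epochs are those times sorted in increasing order.
import numpy as np

rng = np.random.default_rng(4)
n_faults, rate = 30, 0.1
detection_times = rng.exponential(1.0 / rate, n_faults)
failure_epochs = np.sort(detection_times)   # observed failure times

# Inter-failure gaps tend to stretch out on average as faults are
# removed, which is the reliability-growth effect:
gaps = np.diff(failure_epochs)
```

Other members of the class (Goel-Okumoto, Littlewood, etc.) arise by letting the per-fault exponential rates differ, which is exactly the "non-identically distributed" generalization in the abstract.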

  7. Statistical Modeling of Large-Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
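The univariate mean modeler idea can be sketched in a few lines: store only per-partition means and answer range queries against the model rather than the raw data. The names and the equal-size-partition assumption below are mine, not AQSim's API.

```python
# Tiny sketch of a queriable statistical model: per-partition means
# stand in for the raw data when answering range-based mean queries.
import numpy as np

data = np.arange(1000, dtype=float)            # stand-in simulation field
block = 100
means = data.reshape(-1, block).mean(axis=1)   # the stored model (10 values)

def query_mean(lo_block, hi_block):
    """Approximate mean over blocks [lo_block, hi_block), from the model."""
    return means[lo_block:hi_block].mean()
```

With equal-size partitions the model answers block-aligned mean queries exactly while storing 100× less data; the modeling error the chapter talks about appears only for queries that cut through a partition.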

  8. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  9. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerful...

  10. Statistical modeling and visualization of localized prostate cancer

    Science.gov (United States)

    Wang, Yue J.; Xuan, Jianhua; Sesterhenn, Isabell A.; Hayes, Wendelin S.; Ebert, David S.; Lynch, John H.; Mun, Seong K.

    1997-05-01

    In this paper, a statistically significant master model of localized prostate cancer is developed from pathologically proven surgical specimens to spatially guide specific points in the biopsy technique for a higher rate of prostate cancer detection and the best possible representation of tumor grade and extension. Based on 200 surgical specimens of the prostates, we have developed a surface reconstruction technique to interactively visualize the clinically significant objects of interest such as the prostate capsule, urethra, seminal vesicles, ejaculatory ducts and the different carcinomas, for each of these cases. In order to investigate the complex disease pattern including the tumor distribution, volume, and multicentricity, we created a statistically significant master model of localized prostate cancer by fusing these reconstructed computer models together, followed by a quantitative formulation of the 3D finite mixture distribution. Based on the reconstructed prostate capsule and internal structures, we have developed a technique to align all surgical specimens through elastic matching. By labeling the voxels of localized prostate cancer by '1' and the voxels of other internal structures by '0', we can generate a 3D binary image of the prostate that is simply a mutually exclusive random sampling of the underlying distribution of localized prostate cancer characteristics. In order to quantify the key parameters such as distribution, multicentricity, and volume, we used a finite generalized Gaussian mixture to model the histogram, and estimated the parameter values through information-theoretical criteria and a probabilistic self-organizing mixture. Utilizing minimally-immersive and stereoscopic interactive visualization, an augmented reality can be developed to allow the physician to virtually hold the master model in one hand and use the dominant hand to probe data values and perform a simulated needle biopsy. An adaptive self-organizing ...
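The mixture-modeling step can be illustrated with a plain two-component Gaussian mixture fitted by EM, a simplified stand-in for the paper's generalized Gaussian mixture and information-criterion machinery; the data below are synthetic.

```python
# EM fit of a two-component 1-D Gaussian mixture to synthetic data,
# a simplified analogue of modeling a histogram of tumour features.
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0, 1, 400), rng.normal(5, 1, 400)])

w = np.array([0.5, 0.5])            # mixing weights
mu = np.array([x.min(), x.max()])   # crude initialisation at the extremes
var = np.array([1.0, 1.0])

for _ in range(100):                # EM iterations
    # E-step: responsibility of each component for each point
    dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means and variances
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
```

On well-separated components the estimated means recover the generating centres; selecting the number of components is where information-theoretical criteria, as in the paper, come in.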

  11. Statistical Model of the 3-D Braided Composites Strength

    Institute of Scientific and Technical Information of China (English)

    XIAO Laiyuan; ZUO Weiwei; CAI Ganwei; LIAO Daoxun

    2007-01-01

    Based on the statistical model for the tensile strength of unidirectional composite materials and the stress analysis of 3-D braided composites, a new method is proposed to calculate the tensile statistical strength of 3-D braided composites. With this method, the strength of 3-D braided composites can be calculated with high accuracy, and the statistical parameters of 3-D braided composites can be determined. The numerical results show that the tensile statistical strength of 3-D braided composites can be predicted using this method.

  12. Eigenfunction statistics in the localized Anderson model

    CERN Document Server

    Killip, R

    2006-01-01

    We consider the localized region of the Anderson model and study the distribution of eigenfunctions simultaneously in space and energy. In a natural scaling limit, we prove convergence to a Poisson process. This provides a counterpoint to recent work, which proves repulsion of the localization centres in a subtly different regime.

  13. Statistical modelling of fine red wine production

    Directory of Open Access Journals (Sweden)

    María Rosa Castro

    2010-05-01

    Full Text Available Producing wine is a very important economic activity in the province of San Juan in Argentina; it is therefore most important to predict production regarding the quantity of raw material needed. This work was aimed at obtaining a model relating kilograms of crushed grape to the litres of wine so produced. Such a model will be used for predicting precise future values and confidence intervals for determined quantities of crushed grapes. Data from a vineyard in the province of San Juan was thus used in this work. The sampling coefficient of correlation was calculated and a dispersion diagram was then constructed; this indicated a linear relationship between the litres of wine obtained and the kilograms of crushed grape. Two linear models were then adopted and variance analysis was carried out because the data came from normal populations having the same variance. The most appropriate model was obtained from this analysis; it was validated with experimental values, a good approach being obtained.
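
    The kind of model the abstract describes can be sketched in a few lines: a simple linear regression of litres of wine on kilograms of crushed grape, with a prediction interval for a new batch. All numbers below (the data, the 2200 kg query point, the t-quantile) are invented for illustration, not values from the study.

    ```python
    import math
    from statistics import linear_regression  # Python 3.10+

    # Hypothetical data: kg of crushed grape (x) and litres of wine (y).
    x = [1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500]
    y = [680, 1010, 1340, 1705, 2020, 2380, 2690, 3050]

    slope, intercept = linear_regression(x, y)
    n = len(x)
    residuals = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r * r for r in residuals) / (n - 2))  # residual std. error

    # 95% prediction interval for a new batch of x0 kg
    # (t quantile for n - 2 = 6 d.f. taken from a t-table).
    x0, t = 2200, 2.447
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    half = t * s * math.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / sxx)
    yhat = intercept + slope * x0
    print(f"predicted litres: {yhat:.0f} ± {half:.0f}")
    ```

    The prediction-interval half-width grows as x0 moves away from the mean of the observed crush weights, which is why the abstract stresses prediction for "determined quantities" rather than extrapolation.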

  14. Structured Statistical Models of Inductive Reasoning

    Science.gov (United States)

    Kemp, Charles; Tenenbaum, Joshua B.

    2009-01-01

    Everyday inductive inferences are often guided by rich background knowledge. Formal models of induction should aim to incorporate this knowledge and should explain how different kinds of knowledge lead to the distinctive patterns of reasoning found in different inductive contexts. This article presents a Bayesian framework that attempts to meet…

  15. Probing NWP model deficiencies by statistical postprocessing

    DEFF Research Database (Denmark)

    Rosgaard, Martin Haubjerg; Nielsen, Henrik Aalborg; Nielsen, Torben S.

    2016-01-01

    numerical weather prediction (NWP) model generating global weather forecasts four times daily, with numerous users worldwide. The analysis is based on two years of hourly wind speed time series measured at three locations; offshore, in coastal and flat terrain, and inland in complex topography, respectively...

  16. Network Data: Statistical Theory and New Models

    Science.gov (United States)

    2016-02-17

    Using AERONET DRAGON Campaign Data, IEEE Transactions on Geoscience and Remote Sensing, (08 2015): 0. doi: 10.1109/TGRS.2015.2395722 Geoffrey...are not viable, i.e. the fruit fly dies after the knock-out of the gene. Further examination of the ftz stained embryos indicates that the lack of...our approach for spatial gene expression analysis for early stage fruit fly embryos, we are in a process to extend it to model later stage gene

  17. Plan Recognition using Statistical Relational Models

    Science.gov (United States)

    2014-08-25

    implication has O(nk) literals. This in turn results in cliques of size O(nk) in the ground network. This significantly increases the computational...complexity since probabilistic inference is exponential in the treewidth of the graph, which in turn is at least the size of the maximum clique (Koller...causes, thereby reducing the size of the reverse implication (and hence, the corresponding clique size) to O(n). The need for the pairwise constraints is

  18. Behavioral and Statistical Models of Educational Inequality

    DEFF Research Database (Denmark)

    Holm, Anders; Breen, Richard

    2016-01-01

    This article addresses the question of how students and their families make educational decisions. We describe three types of behavioral model that might underlie decision-making, and we show that they have consequences for what decisions are made. Our study, thus, has policy implications if we wish to encourage students and their families to make better educational choices. We also establish the conditions under which empirical analysis can distinguish between the three sorts of decision-making, and we illustrate our arguments using data from the National Educational Longitudinal Study.

  19. Behavioral and Statistical Models of Educational Inequality

    DEFF Research Database (Denmark)

    Holm, Anders; Breen, Richard

    2016-01-01

    This paper addresses the question of how students and their families make educational decisions. We describe three types of behavioral model that might underlie decision-making and we show that they have consequences for what decisions are made. Our study thus has policy implications if we wish to encourage students and their families to make better educational choices. We also establish the conditions under which empirical analysis can distinguish between the three sorts of decision-making and we illustrate our arguments using data from the National Educational Longitudinal Study.

  20. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  1. Process Model Construction and Optimization Using Statistical Experimental Design,

    Science.gov (United States)

    1988-04-01

    Memo No. 88-442, March 1988. Process Model Construction and Optimization Using Statistical Experimental Design, by Emanuel Sachs, Assistant Professor, and George Prueger. Abstract: A methodology is presented for the construction of process models by the combination of physically based mechanistic...

  2. Daisy Models Semi-Poisson statistics and beyond

    CERN Document Server

    Hernández-Saldaña, H; Seligman, T H

    1999-01-01

    Semi-Poisson statistics are shown to be obtained by removing every other number from a random sequence. Retaining every (r+1)th level we obtain a family of sequences which we call daisy models. Their statistical properties coincide with those of Bogomolny's nearest-neighbour interaction Coulomb gas if the inverse temperature coincides with the integer r. In particular the case r=2 reproduces closely the statistics of quasi-optimal solutions of the traveling salesman problem.
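
    The construction is easy to simulate: generate a Poisson (uncorrelated) level sequence, retain every (r+1)th level, and examine the nearest-neighbour spacings. For r = 1 the rescaled spacing distribution should be semi-Poisson, P(s) = 4s e^(-2s), whose variance is 1/2 (versus 1 for Poisson). A minimal sketch, with the sample size chosen arbitrarily:

    ```python
    import random

    random.seed(1)
    N, r = 200_000, 1

    # Poisson (uncorrelated) levels: unit-mean exponential spacings.
    levels, t = [], 0.0
    for _ in range(N):
        t += random.expovariate(1.0)
        levels.append(t)

    # Daisy model: keep every (r+1)-th level, then rescale to unit mean spacing.
    kept = levels[::r + 1]
    spacings = [b - a for a, b in zip(kept, kept[1:])]
    mean = sum(spacings) / len(spacings)
    s = [x / mean for x in spacings]

    # Semi-Poisson check: the variance of the rescaled spacings should be ~1/2,
    # reflecting the level repulsion absent from the plain Poisson sequence.
    var = sum((x - 1) ** 2 for x in s) / len(s)
    print(f"sample spacing variance: {var:.3f}")  # ≈ 0.5
    ```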

  3. Five Methodology Errors in Educational Research: The Pantheon of Statistical Significance and Other Faux Pas.

    Science.gov (United States)

    Thompson, Bruce

    After presenting a general linear model as a framework for discussion, this paper reviews five methodology errors that occur in educational research: (1) the use of stepwise methods; (2) the failure to consider in result interpretation the context specificity of analytic weights (e.g., regression beta weights, factor pattern coefficients,…

  4. Deriving statistical significance maps for support vector regression using medical imaging data.

    Science.gov (United States)

    Gaonkar, Bilwaj; Sotiras, Aristeidis; Davatzikos, Christos

    2013-01-01

    Regression analysis involves predicting a continuous variable using imaging data. The Support Vector Regression (SVR) algorithm has previously been used in addressing regression analysis in neuroimaging. However, identifying the regions of the image that the SVR uses to model the dependence of a target variable remains an open problem. It is an important issue when one wants to biologically interpret the meaning of a pattern that predicts the variable(s) of interest, and therefore to understand normal or pathological process. One possible approach to the identification of these regions is the use of permutation testing. Permutation testing involves 1) generation of a large set of 'null SVR models' using randomly permuted sets of target variables, and 2) comparison of the SVR model trained using the original labels to the set of null models. These permutation tests often require prohibitively long computational time. Recent work in support vector classification shows that it is possible to analytically approximate the results of permutation testing in medical image analysis. We propose an analogous approach to approximate permutation testing based analysis for support vector regression with medical imaging data. In this paper we present 1) the theory behind our approximation, and 2) experimental results using two real datasets.
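
    The two-step permutation recipe described in the abstract can be sketched generically. The toy below substitutes an ordinary least-squares slope for the SVR weight (SVR itself is not implemented here) and uses synthetic data; what it illustrates is the structure the abstract describes, namely refitting under randomly permuted targets and comparing the observed statistic to the resulting null distribution.

    ```python
    import random
    import statistics

    random.seed(0)

    def fit_weight(x, y):
        """Slope of a simple least-squares fit; a stand-in for an SVR weight."""
        return statistics.linear_regression(x, y).slope  # Python 3.10+

    # Hypothetical data: one "voxel" feature x and a continuous target y.
    x = [random.gauss(0, 1) for _ in range(60)]
    y = [0.8 * xi + random.gauss(0, 1) for xi in x]

    observed = fit_weight(x, y)

    # Null distribution: refit after randomly permuting the targets.
    null = []
    y_perm = y[:]
    for _ in range(2000):
        random.shuffle(y_perm)
        null.append(fit_weight(x, y_perm))

    # Two-sided permutation p-value for the weight.
    p = sum(abs(w) >= abs(observed) for w in null) / len(null)
    print(f"observed weight {observed:.2f}, permutation p = {p:.4f}")
    ```

    The computational burden the authors mention is visible even here: each of the 2000 null models requires a full refit, which is what their analytic approximation avoids.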

  5. Development of statistical models for data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Downham, D.Y.

    2000-07-01

    Incidents that cause, or could cause, injury to personnel, and that satisfy specific criteria, are reported to the Offshore Safety Division (OSD) of the Health and Safety Executive (HSE). The underlying purpose of this report is to improve ways of quantifying risk, a recommendation in Lord Cullen's report into the Piper Alpha disaster. Records of injuries and hydrocarbon releases from 1 January 1991 to 31 March 1996 are analysed, because the reporting of incidents was standardised after 1990. Models are identified for risk assessment and some are applied. The appropriate analyses of one or two factors (or variables) are tests of uniformity or of independence. Radar graphs are used to represent some temporal variables. Cusums are applied for the analysis of incident frequencies over time, and could be applied for regular monitoring. Log-linear models for Poisson-distributed data are identified as being suitable for identifying 'non-random' combinations of more than two factors. Some questions cannot be addressed with the available data: for example, more data are needed to assess the risk of injury per employee in a time interval. If the questions are considered sufficiently important, resources could be assigned to obtain the data. Some of the main results from the analyses are as follows: the cusum analyses identified a change-point at the end of July 1993, when the reported number of injuries reduced by 40%. Injuries were more likely to occur between 8am and 12am or between 2pm and 5pm than at other times: between 2pm and 3pm the number of injuries was almost twice the average and more than threefold the smallest. No seasonal effects in the numbers of injuries were identified. Three-day injuries occurred more frequently on the 5th, 6th and 7th days into a tour of duty than on other days. Three-day injuries occurred less frequently on the 13th and 14th days of a tour of duty. An injury classified as 'lifting or craning' was

  6. Statistical significance of hair analysis of clenbuterol to discriminate therapeutic use from contamination.

    Science.gov (United States)

    Krumbholz, Aniko; Anielski, Patricia; Gfrerer, Lena; Graw, Matthias; Geyer, Hans; Schänzer, Wilhelm; Dvorak, Jiri; Thieme, Detlef

    2014-01-01

    Clenbuterol is a well-established β2-agonist, which is prohibited in sports and strictly regulated for use in the livestock industry. During the last few years clenbuterol-positive results in doping controls and in samples from residents or travellers from a high-risk country were suspected to be related to the illegal use of clenbuterol for fattening. A sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method was developed to detect low clenbuterol residues in hair with a detection limit of 0.02 pg/mg. A sub-therapeutic application study and a field study with volunteers, who have a high risk of contamination, were performed. For the application study, a total dosage of 30 µg clenbuterol was applied to 20 healthy volunteers on 5 subsequent days. One month after the beginning of the application, clenbuterol was detected in the proximal hair segment (0-1 cm) in concentrations between 0.43 and 4.76 pg/mg. For the second part, samples of 66 Mexican soccer players were analyzed. In 89% of these volunteers, clenbuterol was detectable in their hair at concentrations between 0.02 and 1.90 pg/mg. A comparison of both parts showed no statistical difference between sub-therapeutic application and contamination. In contrast, discrimination from a typical abuse of clenbuterol is apparently possible. Due to these findings, results of real doping control samples can be evaluated. Copyright © 2014 John Wiley & Sons, Ltd.

  7. Statistical Significance and Reliability Analyses in Recent "Journal of Counseling & Development" Research Articles.

    Science.gov (United States)

    Thompson, Bruce; Snyder, Patricia A.

    1998-01-01

    Investigates two aspects of research analyses in quantitative research studies reported in the 1996 issues of "Journal of Counseling & Development" (JCD). Acceptable methodological practice regarding significance testing and evaluation of score reliability has evolved considerably. Contemporary thinking on these issues is described; practice as…

  8. A Visitor's Guide to Effect Sizes--Statistical Significance versus Practical (Clinical) Importance of Research Findings

    Science.gov (United States)

    Hojat, Mohammadreza; Xu, Gang

    2004-01-01

    Effect Sizes (ES) are an increasingly important index used to quantify the degree of practical significance of study results. This paper gives an introduction to the computation and interpretation of effect sizes from the perspective of the consumer of the research literature. The key points made are: (1) "ES" is a useful indicator of the…

  9. ELISA: Structure-Function Inferences based on statistically significant and evolutionarily inspired observations

    Directory of Open Access Journals (Sweden)

    DeLisi Charles

    2003-09-01

    Full Text Available Abstract The problem of functional annotation based on homology modeling is primary to current bioinformatics research. Researchers have noted regularities in sequence, structure and even chromosome organization that allow valid functional cross-annotation. However, these methods provide a lot of false negatives due to limited specificity inherent in the system. We want to create an evolutionarily inspired organization of data that would approach the issue of structure-function correlation from a new, probabilistic perspective. Such organization has possible applications in phylogeny, modeling of functional evolution and structural determination. ELISA (Evolutionary Lineage Inferred from Structural Analysis, http://romi.bu.edu/elisa is an online database that combines functional annotation with structure and sequence homology modeling to place proteins into sequence-structure-function "neighborhoods". The atomic unit of the database is a set of sequences and structural templates that those sequences encode. A graph that is built from the structural comparison of these templates is called PDUG (protein domain universe graph. We introduce a method of functional inference through a probabilistic calculation done on an arbitrary set of PDUG nodes. Further, all PDUG structures are mapped onto all fully sequenced proteomes allowing an easy interface for evolutionary analysis and research into comparative proteomics. ELISA is the first database with applicability to evolutionary structural genomics explicitly in mind. Availability: The database is available at http://romi.bu.edu/elisa.

  10. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko Dimitrov

    2011-03-17

    We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution-the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd..

  11. Statistical properties of several models of fractional random point processes

    Science.gov (United States)

    Bendjaballah, C.

    2011-08-01

    Statistical properties of several models of fractional random point processes have been analyzed from the counting and time interval statistics points of view. Based on the criterion of the reduced variance, it is seen that such processes exhibit nonclassical properties. The conditions for these processes to be treated as conditional Poisson processes are examined. Numerical simulations illustrate part of the theoretical calculations.

  12. The Importance of Statistical Modeling in Data Analysis and Inference

    Science.gov (United States)

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  13. Statistical Damage Detection of Civil Engineering Structures using ARMAV Models

    DEFF Research Database (Denmark)

    Andersen, P.; Kirkegaard, Poul Henning

    In this paper a statistically based damage detection of a lattice steel mast is performed. By estimating the modal parameters and their uncertainties it is possible to detect whether some of the modal parameters have changed with statistical significance. The estimation of the uncertainties ...

  14. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
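
    Massey's point can be made concrete with an exact binomial upper confidence bound computed from very few observed errors, found by bisection on the binomial CDF. This is a standard Clopper-Pearson-style construction used for illustration, not Massey's specific extension of the confidence-interval notion:

    ```python
    import math

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(math.comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

    def upper_bound(k, n, conf=0.95):
        """Exact upper confidence bound on the error probability after
        observing k errors in n trials, found by bisection on the CDF."""
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if binom_cdf(k, n, mid) > 1 - conf:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Zero errors in 1000 trials still bounds the error probability only at
    # about 3/n (the "rule of three"); two errors push the bound to ~6.3/n.
    print(f"{upper_bound(0, 1000):.5f}")  # ≈ 0.0030
    print(f"{upper_bound(2, 1000):.5f}")  # ≈ 0.0063
    ```

    This shows the "surprisingly great significance" of a handful of decoding errors: going from zero to two observed errors roughly doubles the defensible upper bound on the true error probability.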

  15. Improving statistical reasoning theoretical models and practical implications

    CERN Document Server

    Sedlmeier, Peter

    1999-01-01

    This book focuses on how statistical reasoning works and on training programs that can exploit people's natural cognitive capabilities to improve their statistical reasoning. Training programs that take into account findings from evolutionary psychology and instructional theory are shown to have substantially larger effects, which are more stable over time, than previous training regimens. The theoretical implications are traced in a neural network model of human performance on statistical reasoning problems. This book appeals to judgment and decision making researchers and other cognitive scientists, as well as to teachers of statistics and probabilistic reasoning.

  16. WISCOD: A Statistical Web-Enabled Tool for the Identification of Significant Protein Coding Regions

    Directory of Open Access Journals (Sweden)

    Mireia Vilardell

    2014-01-01

    Full Text Available Classically, gene prediction programs are based on detecting signals such as boundary sites (splice sites, starts, and stops) and coding regions in the DNA sequence in order to build potential exons and join them into a gene structure. Although nowadays it is possible to improve their performance with additional information from related species or/and cDNA databases, further improvement at any step could help to obtain better predictions. Here, we present WISCOD, a web-enabled tool for the identification of significant protein coding regions, a novel software tool that tackles the exon prediction problem in eukaryotic genomes. WISCOD has the capacity to detect real exons from large lists of potential exons, and it provides an easy-to-use global P value, called the expected probability of being a false exon (EPFE), that is useful for ranking potential exons in a probabilistic framework, without additional computational costs. The advantage of our approach is that it significantly increases the specificity and sensitivity (both between 80% and 90%) in comparison to other ab initio methods (where they are in the range of 70–75%). WISCOD is written in JAVA and R and is available to download and to run in local mode on Linux and Windows platforms.

  17. Powerline Communications Channel Modelling Methodology Based on Statistical Features

    CERN Document Server

    Tan, Bo

    2012-01-01

    This paper proposes a new channel modelling method for powerline communications networks based on the multipath profile in the time domain. The new channel model is developed to be applied in a range of Powerline Communications (PLC) research topics such as impulse noise modelling, deployment and coverage studies, and communications theory analysis. To develop the methodology, channels are categorised according to their propagation distance and power delay profile. The statistical multipath parameters such as path arrival time, magnitude and interval for each category are analyzed to build the model. Each generated channel based on the proposed statistical model represents a different realisation of a PLC network. Simulation results in the time and frequency domains show that the proposed statistical modelling method, which integrates the impact of network topology, presents the same PLC channel features as the underlying transmission line theory model. Furthermore, two potential application scenarios are d...

  18. Isospin dependence of nuclear multifragmentation in statistical model

    Institute of Scientific and Technical Information of China (English)

    张蕾; 谢东珠; 张艳萍; 高远

    2011-01-01

    The evolution of nuclear disintegration mechanisms with increasing excitation energy, from compound nucleus to multifragmentation, has been studied by using the Statistical Multifragmentation Model (SMM) within a micro-canonical ensemble. We discuss the o

  19. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage

  20. Statistical modeling of a considering work-piece

    Directory of Open Access Journals (Sweden)

    Cornelia Victoria Anghel

    2008-10-01

    Full Text Available This article presents stochastic predictive models for properly controlling the independent variables of the drilling operation, using a combined approach of statistical design and Response Surface Methodology (RSM).

  1. A New Method for Assessing the Statistical Significance in the Differential Functioning of Items and Tests (DFIT) Framework

    Science.gov (United States)

    Oshima, T. C.; Raju, Nambury S.; Nanda, Alice O.

    2006-01-01

    A new item parameter replication method is proposed for assessing the statistical significance of the noncompensatory differential item functioning (NCDIF) index associated with the differential functioning of items and tests framework. In this new method, a cutoff score for each item is determined by obtaining a (1-alpha ) percentile rank score…

  2. A nonextensive statistical model for the nucleon structure function

    Science.gov (United States)

    Trevisan, Luis A.; Mirez, Carlos

    2013-03-01

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions were replaced by the equivalent functions of the q-statistics. The parameters of the model are given by an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon.
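
    The substitution the abstract describes rests on the Tsallis q-exponential, e_q(x) = [1 + (1-q)x]^(1/(1-q)), which replaces exp(x) inside the Fermi-Dirac occupation number. The sketch below uses one common illustrative form of the q-generalized distribution; the paper's exact parametrization may differ, and the numerical values are arbitrary:

    ```python
    import math

    def exp_q(x, q):
        """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
        if abs(q - 1.0) < 1e-12:
            return math.exp(x)
        base = 1.0 + (1.0 - q) * x
        # Standard cutoff convention: e_q(x) = 0 where the base is non-positive.
        return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

    def fermi_dirac_q(E, mu, T, q):
        """q-generalized Fermi-Dirac occupation number (illustrative form)."""
        return 1.0 / (exp_q((E - mu) / T, q) + 1.0)

    # As q -> 1 the usual Fermi-Dirac distribution is recovered.
    E, mu, T = 0.5, 0.3, 0.1
    usual = 1.0 / (math.exp((E - mu) / T) + 1.0)
    print(f"q=1.001: {fermi_dirac_q(E, mu, T, 1.001):.4f}  standard: {usual:.4f}")
    ```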

  3. Model of risk assessment under ballistic statistical tests

    Science.gov (United States)

    Gabrovski, Ivan; Karakaneva, Juliana

    The material presents the application of a mathematical method for risk assessment under statistical determination of the ballistic limits of protection equipment. The authors have implemented a mathematical model based on Pierson's criteria. The software implementation of the model allows evaluation of the V50 indicator and assessment of the reliability of the statistical hypothesis. The results supply the specialists with information about the interval valuations of the probability determined during the testing process.

  4. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions.

    Science.gov (United States)

    Saha, Ranajit; Pan, Sudip; Chattaraj, Pratim K

    2016-11-05

    The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.
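
    The closing claim, that a 50% null hypothesis is rejected at the 1% level, can be checked with an exact one-sided binomial test: 82% of 50 reactions obeying the MHP at the B3LYP level corresponds to 41 of 50. The test below is a generic exact binomial computation offered as a plausibility check, not the authors' own statistical procedure:

    ```python
    from math import comb

    def binom_p_upper(k, n, p0=0.5):
        """Exact one-sided p-value: P(X >= k) under X ~ Binomial(n, p0)."""
        return sum(comb(n, i) * p0**i * (1 - p0) ** (n - i) for i in range(k, n + 1))

    # 82% of 50 reactions obeying the MHP -> 41 of 50 against a 50% null.
    p = binom_p_upper(41, 50)
    print(f"P(X >= 41 | n=50, p=0.5) = {p:.2e}")
    ```

    The resulting p-value is far below 0.01, consistent with the rejection reported in the abstract.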

  5. Statistical Significance of the Maximum Hardness Principle Applied to Some Selected Chemical Reactions

    Directory of Open Access Journals (Sweden)

    Ranajit Saha

    2016-11-01

    Full Text Available The validity of the maximum hardness principle (MHP) is tested in the cases of 50 chemical reactions, most of which are organic in nature and exhibit anomeric effect. To explore the effect of the level of theory on the validity of MHP in an exothermic reaction, B3LYP/6-311++G(2df,3pd) and LC-BLYP/6-311++G(2df,3pd) (def2-QZVP for iodine and mercury) levels are employed. Different approximations like the geometric mean of hardness and combined hardness are considered in case there are multiple reactants and/or products. It is observed that, based on the geometric mean of hardness, while 82% of the studied reactions obey the MHP at the B3LYP level, 84% of the reactions follow this rule at the LC-BLYP level. Most of the reactions possess the hardest species on the product side. A 50% null hypothesis is rejected at a 1% level of significance.

  6. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes.

    Science.gov (United States)

    Thiessen, Erik D

    2017-01-05

    Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274:1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105:2745-2750; Thiessen & Yee 2010 Child Development 81:1287-1303; Saffran 2002 Journal of Memory and Language 47:172-196; Misyak & Christiansen 2012 Language Learning 62:302-331). The difference among these tasks raises questions about whether they all depend on the same kinds of underlying processes and computations, or whether they are tapping into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance in multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we will present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39:246-263; Thiessen et al. 2013 Psychological Bulletin 139:792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces, and subsequent integration across those memory traces that emphasize consistent information (Thiessen and Pavlik
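
    The "sequential statistics (such as transitional probability)" mentioned in the abstract can be computed directly: TP(x -> y) = freq(xy) / freq(x). A toy word-segmentation example in the spirit of Saffran et al.'s syllable streams, with a hypothetical three-word mini-lexicon invented for illustration; within-word transitions have TP = 1, while transitions across word boundaries drop to roughly 1/3:

    ```python
    import random
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    random.seed(0)
    # Hypothetical mini-lexicon of three trisyllabic "words".
    lexicon = [("go", "la", "tu"), ("bi", "da", "ku"), ("pa", "do", "ti")]
    # Continuous syllable stream: 300 words in random order, no pauses.
    stream = [s for _ in range(300) for s in random.choice(lexicon)]

    bigrams = Counter(pairwise(stream))
    unigrams = Counter(stream)

    def tp(x, y):
        """Transitional probability P(y | x) = freq(xy) / freq(x)."""
        return bigrams[(x, y)] / unigrams[x]

    print(f"within word  : TP(go -> la) = {tp('go', 'la'):.2f}")  # 1.00
    print(f"word boundary: TP(tu -> bi) = {tp('tu', 'bi'):.2f}")  # ≈ 0.33
    ```

    The dip in TP at word boundaries is exactly the cue that sequential-statistics accounts of segmentation rely on, and, as the abstract argues, it says nothing about regularities among simultaneously presented features.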

  7. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    CERN Document Server

    McMahon, J; Mahon, John Mc

    1995-01-01

    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...
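
    The average class mutual information metric used for clustering can be sketched as MI = Σ P(c1,c2) log2[P(c1,c2) / (P(c1)P(c2))], summed over adjacent class pairs in the corpus. The toy corpus and two-class assignment below are invented for illustration and are far smaller than anything the system would process:

    ```python
    import math
    from collections import Counter
    from itertools import pairwise  # Python 3.10+

    def class_mutual_information(tokens, word_class):
        """Average class mutual information of adjacent class pairs, in bits."""
        classes = [word_class[w] for w in tokens]
        pair_counts = Counter(pairwise(classes))
        total = sum(pair_counts.values())
        left = Counter(a for a, _ in pair_counts.elements())
        right = Counter(b for _, b in pair_counts.elements())
        mi = 0.0
        for (a, b), n in pair_counts.items():
            p_ab = n / total
            mi += p_ab * math.log2(p_ab / ((left[a] / total) * (right[b] / total)))
        return mi

    # Hypothetical toy corpus with a two-class split (determiners vs. nouns);
    # perfect DET/N alternation yields nearly 1 bit of mutual information.
    tokens = "the cat a dog the dog a cat the cat".split()
    classes = {"the": "DET", "a": "DET", "cat": "N", "dog": "N"}
    print(f"{class_mutual_information(tokens, classes):.3f} bits")  # ≈ 0.991
    ```

    A top-down clusterer of the kind described would repeatedly split classes so as to maximize this quantity, which is how the bit-patterns of the structural tags come to encode the class hierarchy.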

  8. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms—A Nonlinear Statistical Analysis with Cross Validation

    Science.gov (United States)

    Adams, James; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    Introduction: A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. Methods: In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. “Leave-one-out” cross-validation was used to ensure statistical independence of results. Results and Discussion: Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate

  9. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms-A Nonlinear Statistical Analysis with Cross Validation.

    Science.gov (United States)

    Adams, James; Howsmon, Daniel P; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. "Leave-one-out" cross-validation was used to ensure statistical independence of results. Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate Speech), but significant associations were found

  10. A hybrid random field model for scalable statistical learning.

    Science.gov (United States)

    Freno, A; Trentin, E; Gori, M

    2009-01-01

    This paper introduces hybrid random fields, which are a class of probabilistic graphical models aimed at allowing for efficient structure learning in high-dimensional domains. Hybrid random fields, along with the learning algorithm we develop for them, are especially useful as a pseudo-likelihood estimation technique (rather than a technique for estimating strict joint probability distributions). In order to assess the generality of the proposed model, we prove that the class of pseudo-likelihood distributions representable by hybrid random fields strictly includes the class of joint probability distributions representable by Bayesian networks. Once we establish this result, we develop a scalable algorithm for learning the structure of hybrid random fields, which we call 'Markov Blanket Merging'. On the one hand, we characterize some complexity properties of Markov Blanket Merging both from a theoretical and from the experimental point of view, using a series of synthetic benchmarks. On the other hand, we evaluate the accuracy of hybrid random fields (as learned via Markov Blanket Merging) by comparing them to various alternative statistical models in a number of pattern classification and link-prediction applications. As the results show, learning hybrid random fields by the Markov Blanket Merging algorithm not only reduces significantly the computational cost of structure learning with respect to several considered alternatives, but it also leads to models that are highly accurate as compared to the alternative ones.

  11. Flashover of a vacuum-insulator interface: A statistical model

    Directory of Open Access Journals (Sweden)

    W. A. Stygar

    2004-07-01

    Full Text Available We have developed a statistical model for the flashover of a 45° vacuum-insulator interface (such as would be found in an accelerator) subject to a pulsed electric field. The model assumes that the initiation of a flashover plasma is a stochastic process, that the characteristic statistical component of the flashover delay time is much greater than the plasma formative time, and that the average rate at which flashovers occur is a power-law function of the instantaneous value of the electric field. Under these conditions, we find that the flashover probability is given by 1-exp(-E_{p}^{β}t_{eff}C/k^{β}), where E_{p} is the peak value in time of the spatially averaged electric field E(t), t_{eff}≡∫[E(t)/E_{p}]^{β}dt is the effective pulse width, C is the insulator circumference, k∝exp(λ/d), and β and λ are constants. We define E(t) as V(t)/d, where V(t) is the voltage across the insulator and d is the insulator thickness. Since the model assumes that flashovers occur at random azimuthal locations along the insulator, it does not apply to systems that have a significant defect, i.e., a location contaminated with debris or compromised by an imperfection at which flashovers repeatedly take place, and which prevents a random spatial distribution. The model is consistent with flashover measurements to within 7% for pulse widths between 0.5 ns and 10 μs, and to within a factor of 2 between 0.5 ns and 90 s (a span of over 11 orders of magnitude). For these measurements, E_{p} ranges from 64 to 651 kV/cm, d from 0.50 to 4.32 cm, and C from 4.96 to 95.74 cm. The model is significantly more accurate, and is valid over a wider range of parameters, than the J. C. Martin flashover relation that has been in use since 1971 [J. C. Martin on Pulsed Power, edited by T. H. Martin, A. H. Guenther, and M. Kristiansen (Plenum, New York, 1996)]. We have generalized the statistical model to estimate the total-flashover probability of an
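The closed-form probability is simple to evaluate numerically for a sampled pulse. The sketch below is illustrative only: the pulse shape and the constants k and β are invented, not fitted values from the paper, and k∝exp(λ/d) is absorbed into a single constant.

```python
import math

def flashover_probability(E, dt, C, k, beta):
    """Total-flashover probability 1 - exp(-Ep^beta * t_eff * C / k^beta)
    for a sampled field history E(t).
    E: field samples (kV/cm); dt: sample spacing (s);
    C: insulator circumference (cm); k, beta: empirical constants."""
    Ep = max(E)
    t_eff = sum((e / Ep) ** beta for e in E) * dt  # effective pulse width
    return 1.0 - math.exp(-(Ep ** beta) * t_eff * C / k ** beta)

# triangular 100-ns pulse peaking at 100 kV/cm (made-up numbers)
pulse = [100.0 * (1 - abs(i - 50) / 50.0) for i in range(101)]
p_low = flashover_probability(pulse, 1e-9, C=30.0, k=140.0, beta=8.0)
p_high = flashover_probability([2 * e for e in pulse], 1e-9, C=30.0, k=140.0, beta=8.0)
print(p_low < p_high)  # True: raising the peak field raises the probability
```

Because β is large, a modest increase in peak field produces a steep increase in flashover probability, which is the practical content of the power-law rate assumption.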

  12. Models for probability and statistical inference theory and applications

    CERN Document Server

    Stapleton, James H

    2007-01-01

    This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage on probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...

  13. Electron impact ionization of tungsten ions in a statistical model

    Science.gov (United States)

    Demura, A. V.; Kadomtsev, M. B.; Lisitsa, V. S.; Shurygin, V. A.

    2015-01-01

    A statistical model for calculating the electron impact ionization cross sections of multielectron ions is developed for the first time. The model is based on the idea of collective excitations of atomic electrons at the local plasma frequency, while the Thomas-Fermi model is used for the atomic electron density distribution. The electron impact ionization cross sections and related ionization rates of tungsten ions from W+ up to W63+ are calculated and then compared with a vast collection of modern experimental and modeling results. The reasonable correspondence between experimental and theoretical data demonstrates the universal nature of the statistical approach to the description of atomic processes in multielectron systems.

  14. A Statistical Model for Regional Tornado Climate Studies.

    Science.gov (United States)

    Jagger, Thomas H; Elsner, James B; Widen, Holly M

    2015-01-01

    Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio.
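The reported effect sizes translate directly into rate multipliers, which is a useful sanity check on such model summaries. A hypothetical back-of-envelope version (the coefficient names and population figures below are ours, not the paper's):

```python
import math

# "13% more reports per doubling of population" corresponds to a
# log2(population) term in the rate model with multiplier exp(beta) = 1.13:
beta = math.log(1.13)
per_doubling = math.exp(beta * (math.log2(2.0e5) - math.log2(1.0e5)))

# The independent 1.9% annual trend compounds over, say, a decade:
decade_factor = 1.019 ** 10

print(round(per_doubling, 2), round(decade_factor, 2))  # 1.13 1.21
```

The same arithmetic applies to the terrain effect: an 18% reduction per ten-metre increase in elevation standard deviation is a multiplier of 0.82 per ten metres.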

  15. A Statistical Model for Regional Tornado Climate Studies.

    Directory of Open Access Journals (Sweden)

    Thomas H Jagger

    Full Text Available Tornado reports are locally rare, often clustered, and of variable quality making it difficult to use them directly to describe regional tornado climatology. Here a statistical model is demonstrated that overcomes some of these difficulties and produces a smoothed regional-scale climatology of tornado occurrences. The model is applied to data aggregated at the level of counties. These data include annual population, annual tornado counts and an index of terrain roughness. The model has a term to capture the smoothed frequency relative to the state average. The model is used to examine whether terrain roughness is related to tornado frequency and whether there are differences in tornado activity by County Warning Area (CWA). A key finding is that tornado reports increase by 13% for a two-fold increase in population across Kansas after accounting for improvements in rating procedures. Independent of this relationship, tornadoes have been increasing at an annual rate of 1.9%. Another finding is the pattern of correlated residuals showing more Kansas tornadoes in a corridor of counties running roughly north to south across the west central part of the state consistent with the dryline climatology. The model is significantly improved by adding terrain roughness. The effect amounts to an 18% reduction in the number of tornadoes for every ten meter increase in elevation standard deviation. The model indicates that tornadoes are 51% more likely to occur in counties served by the CWAs of DDC and GID than elsewhere in the state. Flexibility of the model is illustrated by fitting it to data from Illinois, Mississippi, South Dakota, and Ohio.

  16. An Order Statistics Approach to the Halo Model for Galaxies

    Science.gov (United States)

    Paul, Niladri; Paranjape, Aseem; Sheth, Ravi K.

    2017-01-01

    We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models - one in which this luminosity function p(L) is universal - naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts no luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a halo mass dependent luminosity function p(L|m), is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-predicts central luminosities. This brings into focus the idea that central galaxies constitute a distinct population that is affected by different physical processes than are the satellites. We model this physical difference as a statistical brightening of the central luminosities, over and above the order statistics prediction. The magnitude gap between the brightest and second brightest group galaxy is predicted as a by-product, and is also in good agreement with observations. We propose that this order statistics framework provides a useful language in which to compare the Halo Model for galaxies with more physically motivated galaxy formation models.
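The core order-statistics idea, group luminosities drawn i.i.d. from an underlying p(L) with the brightest draw playing the role of the central, is easy to simulate. Below is a sketch under an invented luminosity function (a lognormal, chosen purely for illustration; the paper works with observationally motivated p(L) and p(L|m)):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_groups(n_gal, n_groups=50000):
    """Draw n_gal luminosities per group from a universal toy p(L) and
    return the brightest draw (the 'central') and the log10 magnitude-gap
    analogue to the second brightest, as in the order-statistics model."""
    L = rng.lognormal(mean=0.0, sigma=0.5, size=(n_groups, n_gal))
    L.sort(axis=1)
    central, second = L[:, -1], L[:, -2]
    return central, np.log10(central / second)

c_poor, gap_poor = simulate_groups(5)
c_rich, gap_rich = simulate_groups(20)
# richer groups (a halo-mass proxy) host brighter centrals, smaller gaps
print(c_poor.mean() < c_rich.mean(), gap_poor.mean() > gap_rich.mean())
```

Even this minimal version reproduces the qualitative features named in the abstract: mean central luminosity rises monotonically with group richness, and the gap between the two brightest members falls out as a by-product.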

  17. Equilibrium Statistical-Thermal Models in High-Energy Physics

    CERN Document Server

    Tawfik, Abdel Nasser

    2014-01-01

    We review some recent highlights from the applications of statistical-thermal models to different experimental measurements and lattice QCD thermodynamics that have been made during the last decade. We start with a short review of the historical milestones on the path of constructing statistical-thermal models for heavy-ion physics. We discovered that Heinz Koppe formulated in 1948 an almost complete recipe for the statistical-thermal models. In 1950, Enrico Fermi generalized this statistical approach, in which he started with a general cross-section formula and inserted into it simplifying assumptions about the matrix element of the interaction process that likely reflects many features of the high-energy reactions dominated by density in the phase space of final states. In 1964, Hagedorn systematically analysed the high-energy phenomena using all tools of statistical physics and introduced the concept of limiting temperature based on the statistical bootstrap model. It turns out quite often that many-par...

  18. Statistical Model and the mesonic-baryonic transition region

    CERN Document Server

    Oeschler, H.; Redlich, K.; Wheaton, S.

    2009-01-01

    The statistical model assuming chemical equilibrium and local strangeness conservation describes most of the observed features of strange particle production from SIS up to RHIC. Deviations are found as the maximum in the measured K+/pi+ ratio is much sharper than in the model calculations. At the incident energy of the maximum, the statistical model shows that freeze-out changes regime from one dominated by baryons at the lower energies toward one dominated by mesons. It will be shown how deviations from the usual freeze-out curve influence the various particle ratios. Furthermore, other observables also exhibit changes in just this energy regime.

  19. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  20. A statistical model for the excitation of cavities through apertures

    CERN Document Server

    Gradoni, Gabriele; Anlage, Steven M; Ott, Edward

    2015-01-01

    In this paper, a statistical model for the coupling of electromagnetic radiation into enclosures through apertures is presented. The model gives a unified picture bridging deterministic theories of aperture radiation, and statistical models necessary for capturing the properties of irregular shaped enclosures. A Monte Carlo technique based on random matrix theory is used to predict and study the power transmitted through the aperture into the enclosure. Universal behavior of the net power entering the aperture is found. Results are of interest for predicting the coupling of external radiation through openings in irregular enclosures and reverberation chambers.

  1. Multiple commodities in statistical microeconomics: Model and market

    Science.gov (United States)

    Baaquie, Belal E.; Yu, Miao; Du, Xin

    2016-11-01

    A statistical generalization of microeconomics has been made in Baaquie (2013). In Baaquie et al. (2015), the market behavior of single commodities was analyzed and it was shown that market data provides strong support for the statistical microeconomic description of commodity prices. The case of multiple commodities is studied and a parsimonious generalization of the single commodity model is made for the multiple commodities case. Market data shows that the generalization can accurately model the simultaneous correlation functions of up to four commodities. To accurately model five or more commodities, further terms have to be included in the model. This study shows that the statistical microeconomics approach is a comprehensive and complete formulation of microeconomics, one that is independent of the mainstream formulation of microeconomics.

  2. Probabilistic Quantitative Precipitation Forecasting Using Ensemble Model Output Statistics

    CERN Document Server

    Scheuerer, Michael

    2013-01-01

    Statistical post-processing of dynamical forecast ensembles is an essential component of weather forecasting. In this article, we present a post-processing method that generates full predictive probability distributions for precipitation accumulations based on ensemble model output statistics (EMOS). We model precipitation amounts by a generalized extreme value distribution that is left-censored at zero. This distribution permits modelling precipitation on the original scale without prior transformation of the data. A closed form expression for its continuous rank probability score can be derived and permits computationally efficient model fitting. We discuss an extension of our approach that incorporates further statistics characterizing the spatial variability of precipitation amounts in the vicinity of the location of interest. The proposed EMOS method is applied to daily 18-h forecasts of 6-h accumulated precipitation over Germany in 2011 using the COSMO-DE ensemble prediction system operated by the Germa...
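The left-censored GEV fit can be sketched directly: dry cases contribute the probability mass at or below zero, wet cases contribute the density. The snippet below is a minimal illustration on synthetic data (variable names, starting values and the toy data are ours; the full EMOS method additionally links the GEV parameters to ensemble statistics and fits by minimizing the CRPS):

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def censored_gev_nll(params, y):
    """Negative log-likelihood of a GEV left-censored at zero.
    Note: scipy's shape parameter c is the negative of the usual GEV xi."""
    loc, log_scale, c = params
    dist = genextreme(c=c, loc=loc, scale=np.exp(log_scale))
    nll = -(np.count_nonzero(y <= 0) * dist.logcdf(0.0)
            + dist.logpdf(y[y > 0]).sum())
    return nll if np.isfinite(nll) else 1e10  # guard infeasible parameters

# synthetic 6-h accumulations: clipped noise, so a sizeable dry fraction
y = np.maximum(rng.normal(loc=0.5, scale=2.0, size=300), 0.0)
start = np.array([y.mean(), 0.0, 0.1])
fit = minimize(censored_gev_nll, start, args=(y,), method="Nelder-Mead")
print(fit.fun <= censored_gev_nll(start, y))  # True
```

Modelling on the original scale this way avoids any prior transformation of the precipitation amounts, which is the point stressed in the abstract.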

  3. Speech emotion recognition based on statistical pitch model

    Institute of Scientific and Technical Information of China (English)

    WANG Zhiping; ZHAO Li; ZOU Cairong

    2006-01-01

    A modified Parzen-window method, which keeps high resolution at low frequencies and smoothness at high frequencies, is proposed to obtain the statistical model. A gender classification method utilizing this statistical model is then proposed, which achieves 98% accuracy when long sentences are processed. After separating male and female voices, the mean and standard deviation of pitch in speech training samples with different emotions are used to create the corresponding emotion models. The Bhattacharyya distance between the test sample and the statistical pitch models is then used for emotion recognition in speech. Normalization of pitch for male and female voices is also considered, in order to map them into a uniform space. Finally, a speech emotion recognition experiment based on K Nearest Neighbor shows that a correct rate of 81% is achieved, versus only 73.85% if the traditional parameters are used.
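For two univariate Gaussian pitch models the Bhattacharyya distance has a closed form, so classification reduces to a nearest-model search. A sketch with invented pitch statistics (the emotion labels and numbers are hypothetical, not the paper's data):

```python
import math

def bhattacharyya_gauss(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two univariate Gaussians."""
    return (0.25 * (mu1 - mu2) ** 2 / (var1 + var2)
            + 0.5 * math.log((var1 + var2) / (2.0 * math.sqrt(var1 * var2))))

# hypothetical emotion models as (pitch mean in Hz, pitch variance)
models = {"neutral": (200.0, 400.0), "angry": (260.0, 900.0)}
test_sample = (250.0, 800.0)
nearest = min(models, key=lambda m: bhattacharyya_gauss(*test_sample, *models[m]))
print(nearest)  # angry
```

The first term penalizes mean mismatch, the second variance mismatch; identical distributions give distance zero, so the smallest distance identifies the best-matching emotion model.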

  4. What is the meaning of the statistical hadronization model?

    CERN Document Server

    Becattini, F

    2005-01-01

    The statistical model of hadronization succeeds in reproducing particle abundances and transverse momentum spectra in high energy collisions of elementary particles as well as of heavy ions. Despite its apparent success, the interpretation of these results is controversial and the validity of the approach very often questioned. In this paper, we would like to summarize the whole issue by first outlining a basic formulation of the model and then comment on the main criticisms and different kinds of interpretations, with special emphasis on the so-called "phase space dominance". While the ultimate answer to the question why the statistical model works should certainly be pursued, we stress that it is a priority to confirm or disprove the fundamental scheme of the statistical model by performing some detailed tests on the rates of exclusive channels at lower energy.

  5. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici

  6. In all likelihood statistical modelling and inference using likelihood

    CERN Document Server

    Pawitan, Yudi

    2001-01-01

    Based on a course in the theory of statistics this text concentrates on what can be achieved using the likelihood/Fisherian method of taking account of uncertainty when studying a statistical problem. It takes the concept of the likelihood as providing the best methods for unifying the demands of statistical modelling and the theory of inference. Every likelihood concept is illustrated by realistic examples, which are not compromised by computational problems. Examples range from a simple comparison of two accident rates, to complex studies that require generalised linear or semiparametric models

  7. Binary and Ternary Fission Within the Statistical Model

    Science.gov (United States)

    Adamian, Gurgen G.; Andreev, Alexander V.; Antonenko, Nikolai V.; Scheid, Werner

    Binary and ternary nuclear fission are treated within the statistical model. At the scission point we calculate the potentials as functions of the deformations of the fragments in the dinuclear model. The potentials give the mass and charge distributions of the fission fragments. Ternary fission is assumed to occur during binary fission.

  8. Modelling diversity in building occupant behaviour: a novel statistical approach

    DEFF Research Database (Denmark)

    Haldi, Frédéric; Calì, Davide; Andersen, Rune Korsholm

    2016-01-01

    We propose an advanced modelling framework to predict the scope and effects of behavioural diversity regarding building occupant actions on window openings, shading devices and lighting. We develop a statistical approach based on generalised linear mixed models to account for the longitudinal nat...

  9. Statistical model of the classification of shale in a hydrocyclone

    Energy Technology Data Exchange (ETDEWEB)

    Lopachenok, L.V.; Punin, A.E.; Belyanin, Yu.I.; Proskuryakov, V.A.

    1977-10-01

    The mathematical model obtained by experimental and statistical methods for the classification of shale in a hydrocyclone is adequate for a real industrial-scale process, as indicated by the statistical analysis carried out for it, and together with the material-balance relationships it permits the calculation of the engineering parameters for any classification conditions within the region of the factor space investigated, as well as the search for the optimum conditions for the industrial realization of the process.

  10. General Linear Models: An Integrated Approach to Statistics

    OpenAIRE

    Andrew Faulkner; Sylvain Chartier

    2008-01-01

    Generally, in psychology, the various statistical analyses are taught independently from each other. As a consequence, students struggle to learn new statistical analyses in contexts that differ from their textbooks. This paper gives a short introduction to the general linear model (GLM), in which it is shown that ANOVA (one-way, factorial, repeated measure and analysis of covariance) is simply a multiple correlation/regression analysis (MCRA). Generalizations to other cases, such as multiv...

  11. Thermodynamic stability and statistical significance of potential stem-loop structures situated at the frameshift sites of retroviruses.

    OpenAIRE

    Le, S.Y.; Chen, J H; Maizel, J. V.

    1989-01-01

    RNA stem-loop structures situated just 3' to the frameshift sites of the retroviral gag-pol or gag-pro and pro-pol regions may make important contributions to frame-shifting in retroviruses. In this study, the thermodynamic stability and statistical significance of such secondary structural features relative to others in the sequence have been assessed using a newly developed method that combines calculations of the lowest free energy of formation of RNA secondary structures and the Monte Car...
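The Monte Carlo scheme — compare the real sequence's folding energy with the energies of shuffled versions — can be sketched with any scorer. Below, a toy order-dependent stand-in energy (counting adjacent G·C steps) replaces the full RNA secondary-structure free-energy calculation the study uses; only the z-score machinery is the point:

```python
import random

random.seed(1)

def toy_energy(seq):
    """Toy stand-in for a folding free-energy calculator: more adjacent
    'GC' steps -> lower (more stable) energy. Order-dependent, so it
    distinguishes the real sequence from its shuffles."""
    return -sum(seq[i:i + 2] == "GC" for i in range(len(seq) - 1))

def shuffle_zscore(seq, score, n=1000):
    """Significance of score(seq) against mononucleotide shuffles."""
    real = score(seq)
    chars = list(seq)
    samples = []
    for _ in range(n):
        random.shuffle(chars)
        samples.append(score("".join(chars)))
    mean = sum(samples) / n
    sd = (sum((s - mean) ** 2 for s in samples) / (n - 1)) ** 0.5
    return (real - mean) / sd

z = shuffle_zscore("GCGCGCAUAUAU", toy_energy)
print(z < 0)  # True: the clustered G/C run is more stable than chance
```

A strongly negative z-score marks a structure more stable than expected from base composition alone, which is the sense of "statistical significance" assessed for the frameshift-site stem-loops.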

  12. Assessing Statistically Significant Heavy-Metal Concentrations in Abandoned Mine Areas via Hot Spot Analysis of Portable XRF Data

    Directory of Open Access Journals (Sweden)

    Sung-Min Kim

    2017-06-01

    Full Text Available To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH, high content with a low z-score (HL, low content with a high z-score (LH, and low content with a low z-score (LL. The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1–4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required.
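The Getis-Ord Gi* statistic behind the hot spot analysis is straightforward to compute. A minimal numpy sketch on an invented one-dimensional transect of concentration values (binary contiguity weights with the focal site included in its own neighbourhood, as Gi* requires):

```python
import numpy as np

def getis_ord_gi_star(x, W):
    """Getis-Ord Gi* z-scores. x: values at n sites; W: n x n spatial
    weights with w_ii = 1. High z means a high value surrounded by other
    high values -- a statistically significant hot spot candidate."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xbar = x.mean()
    S = np.sqrt((x ** 2).mean() - xbar ** 2)
    wsum = W.sum(axis=1)
    w2sum = (W ** 2).sum(axis=1)
    num = W @ x - xbar * wsum
    den = S * np.sqrt((n * w2sum - wsum ** 2) / (n - 1))
    return num / den

# toy transect: a cluster of high readings among low background values
x = [1, 1, 1, 9, 9, 9, 1, 1, 1]
n = len(x)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if abs(i - j) <= 1:
            W[i, j] = 1.0  # neighbours within one step, self included
z = getis_ord_gi_star(x, W)
print(z.argmax())  # 4, the centre of the hot spot
```

Crossed with the raw contents, these z-scores give exactly the HH/HL/LH/LL grouping described above: a high reading with a low z-score (HL) is a lone spike, a candidate measurement error rather than a hot spot.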

  13. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  14. Structural Characterization and Statistical-Mechanical Model of Epidermal Patterns.

    Science.gov (United States)

    Chen, Duyu; Aw, Wen Yih; Devenport, Danelle; Torquato, Salvatore

    2016-12-06

    In proliferating epithelia of mammalian skin, cells of irregular polygon-like shapes pack into complex, nearly flat two-dimensional structures that are pliable to deformations. In this work, we employ various sensitive correlation functions to quantitatively characterize structural features of evolving packings of epithelial cells across length scales in mouse skin. We find that the pair statistics in direct space (correlation function) and Fourier space (structure factor) of the cell centroids in the early stages of embryonic development show structural directional dependence (statistical anisotropy), which is a reflection of the fact that cells are stretched, which promotes uniaxial growth along the epithelial plane. In the late stages, the patterns tend toward statistically isotropic states, as cells attain global polarization and epidermal growth shifts to produce the skin's outer stratified layers. We construct a minimalist four-component statistical-mechanical model involving effective isotropic pair interactions consisting of hard-core repulsion and extra short-range soft-core repulsion beyond the hard core, whose length scale is roughly the same as the hard core. The model parameters are optimized to match the sample pair statistics in both direct and Fourier spaces. By doing this, the parameters are biologically constrained. In contrast with many vertex-based models, our statistical-mechanical model does not explicitly incorporate information about the cell shapes and interfacial energy between cells; nonetheless, our model predicts essentially the same polygonal shape distribution and size disparity of cells found in experiments, as measured by Voronoi statistics. Moreover, our simulated equilibrium liquid-like configurations are able to match other nontrivial unconstrained statistics, which is a testament to the power and novelty of the model. The array of structural descriptors that we deploy enable us to distinguish between normal, mechanically

  15. Statistical Design Model (SDM) of satellite thermal control subsystem

    Science.gov (United States)

    Mirshams, Mehran; Zabihian, Ehsan; Aarabi Chamalishahi, Mahdi

    2016-07-01

    Thermal control is the satellite subsystem whose main task is to keep the satellite components within their survival and operating temperature ranges. The capability of the thermal control subsystem plays a key role in satisfying a satellite's operational requirements, and its design is an integral part of satellite design. However, owing to the limited information disclosed by companies and designers, this fundamental subsystem still lacks a well-defined design process. The aim of this paper is to identify and extract statistical design models of the spacecraft thermal control subsystem using the SDM design method, which analyses statistical data according to a particular procedure. Implementing the SDM method requires a complete database; we therefore first collect spacecraft data to create one, then extract statistical graphs using Microsoft Excel, from which we further extract mathematical models. The input parameters of the method are the mass, mission, and lifetime of the satellite. To this end, the thermal control subsystem is first introduced, and the hardware used in this subsystem and its variants is surveyed. Next, different statistical models are presented and briefly compared. Finally, a particular statistical model is extracted from the collected statistical data. The accuracy of the method is tested and verified through a case study: comparisons between the specifications of the thermal control subsystem of a fabricated satellite and the analysis results prove the methodology to be effective. Key Words: Thermal control subsystem design, Statistical design model (SDM), Satellite conceptual design, Thermal hardware

  16. Statistical Inference of Biometrical Genetic Model With Cultural Transmission.

    Science.gov (United States)

    Guo, Xiaobo; Ji, Tian; Wang, Xueqin; Zhang, Heping; Zhong, Shouqiang

    2013-01-01

    Twin and family studies establish the foundation for studying the genetic, environmental and cultural transmission effects for phenotypes. In this work, we make use of the well established statistical methods and theory for mixed models to assess cultural transmission in twin and family studies. Specifically, we address two critical yet poorly understood issues: the model identifiability in assessing cultural transmission for twin and family data and the biases in the estimates when sub-models are used. We apply our models and theory to two real data sets. A simulation is conducted to verify the bias in the estimates of genetic effects when the working model is a sub-model.

  17. Analyzing sickness absence with statistical models for survival data

    DEFF Research Database (Denmark)

    Christensen, Karl Bang; Andersen, Per Kragh; Smith-Hansen, Lars;

    2007-01-01

    absence data deal with events occurring over time, the use of statistical models for survival data has been reviewed, and the use of frailty models has been proposed for the analysis of such data. METHODS: Three methods for analyzing data on sickness absences were compared using a simulation study...... involving the following: (i) Poisson regression using a single outcome variable (number of sickness absences), (ii) analysis of time to first event using the Cox proportional hazards model, and (iii) frailty models, which are random effects proportional hazards models. Data from a study of the relation...... between the psychosocial work environment and sickness absence were used to illustrate the results. RESULTS: Standard methods were found to underestimate true effect sizes by approximately one-tenth [method i] and one-third [method ii] and to have lower statistical power than frailty models. CONCLUSIONS...

  18. Modern statistical models for forensic fingerprint examinations: a critical review.

    Science.gov (United States)

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

    Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source.

  19. Growth Curve Models and Applications : Indian Statistical Institute

    CERN Document Server

    2017-01-01

    Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas.   There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of app lied work, and these contributions have been externally r...

  20. Statistical Modeling for Wind-Temperature Meteorological Elements in Troposphere

    CERN Document Server

    Virtser, A; Golbraikh, E

    2010-01-01

    A comprehensive statistical model for vertical profiles of the horizontal wind and temperature throughout the troposphere is presented. The model is based on radiosonde measurements of wind and temperature over several years. The profiles measured under quite different atmospheric conditions exhibit qualitative similarity, and a proper choice of the reference scales for the wind, temperature and altitude levels allows the measurement data to be considered as realizations of a random process with universal characteristics: means, basic functions and parameters of standard distributions for the transform coefficients of the Principal Component Analysis. The features of the atmospheric conditions are described by statistical characteristics of the wind-temperature ensemble of dimensional reference scales. The high effectiveness of the proposed approach stems from the similarity of wind-temperature vertical profiles, which allows the statistical modeling to be carried out in the low-dimension space of the dimensional ...

  1. Sensitivity Analysis and Statistical Convergence of a Saltating Particle Model

    CERN Document Server

    Maldonado, S

    2016-01-01

    Saltation models provide considerable insight into near-bed sediment transport. This paper outlines a simple, efficient numerical model of stochastic saltation, which is validated against previously published experimental data on saltation in a channel of nearly horizontal bed. Convergence tests are systematically applied to ensure the model is free from statistical errors emanating from the number of particle hops considered. Two criteria for statistical convergence are derived; according to the first criterion, at least $10^3$ hops appear to be necessary for convergent results, whereas $10^4$ saltations seem to be the minimum required in order to achieve statistical convergence in accordance with the second criterion. Two empirical formulae for lift force are considered: one dependent on the slip (relative) velocity of the particle multiplied by the vertical gradient of the horizontal flow velocity component; the other dependent on the difference between the squares of the slip velocity components at the to...
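    The kind of convergence criterion described above can be illustrated with a short stand-alone check (a hedged sketch: the tolerance-on-running-mean rule and the log-normal stand-in for hop lengths are illustrative assumptions, not the paper's actual criteria or model output):

    ```python
    import random

    def hops_to_converge(samples, window=1000, tol=0.01):
        """Declare the running mean of hop length statistically converged once
        a further block of `window` hops changes it by less than the relative
        tolerance `tol`; return the number of hops needed, or None."""
        mean_prev = sum(samples[:window]) / window
        for n in range(2 * window, len(samples) + 1, window):
            mean_now = sum(samples[:n]) / n
            if abs(mean_now - mean_prev) < tol * abs(mean_prev):
                return n
            mean_prev = mean_now
        return None

    # stand-in hop-length distribution (log-normal); a real run would feed in
    # hop lengths generated by the stochastic saltation model itself
    random.seed(42)
    hops = [random.lognormvariate(0.0, 0.5) for _ in range(20000)]
    print(hops_to_converge(hops))
    ```

    Applying two such criteria to different statistics (e.g. mean hop length and mean hop height) is one way to arrive at distinct minimum hop counts, as the abstract reports for its two criteria.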

  2. Computationally efficient statistical differential equation modeling using homogenization

    Science.gov (United States)

    Hooten, Mevin B.; Garlick, Martha J.; Powell, James A.

    2013-01-01

    Statistical models using partial differential equations (PDEs) to describe dynamically evolving natural systems are appearing in the scientific literature with some regularity in recent years. Often such studies seek to characterize the dynamics of temporal or spatio-temporal phenomena such as invasive species, consumer-resource interactions, community evolution, and resource selection. Specifically, in the spatial setting, data are often available at varying spatial and temporal scales. Additionally, the necessary numerical integration of a PDE may be computationally infeasible over the spatial support of interest. We present an approach to impose computationally advantageous changes of support in statistical implementations of PDE models and demonstrate its utility through simulation using a form of PDE known as “ecological diffusion.” We also apply a statistical ecological diffusion model to a data set involving the spread of mountain pine beetle (Dendroctonus ponderosae) in Idaho, USA.

  3. LETTER: Statistical physics of the Schelling model of segregation

    Science.gov (United States)

    Dall'Asta, L.; Castellano, C.; Marsili, M.

    2008-07-01

    We investigate the static and dynamic properties of a celebrated model of social segregation, providing a complete explanation of the mechanisms leading to segregation both in one- and two-dimensional systems. Standard statistical physics methods shed light on the rich phenomenology of this simple model, exhibiting static phase transitions typical of kinetic constrained models, non-trivial coarsening like in driven-particle systems and percolation-related phenomena.

  4. PROBABILISTIC-STATISTICAL MODELING THE INTERFERENCES FROM ELECTRIC LOCOMOTIVES

    Directory of Open Access Journals (Sweden)

    Orlov A. I.

    2015-02-01

    Full Text Available The movements of electric locomotives create interferences affecting wired communication links. Developing protection against the wireline interferences generated by traction networks that is both technically effective and cost-effective requires, as a preparatory phase, mathematical models of the interference caused by electric locomotives. We have developed such a probabilistic-statistical model. The asymptotic distribution of the total interference is the distribution of the length of a two-dimensional random vector whose coordinates are independent normally distributed random variables with mean 0 and variance 1. A limit theorem is proved for the expectation of the total amplitude of the interference. The Monte-Carlo method is used to study the rate of convergence of the expectation of the total amplitude to the limiting value, employing the mixing algorithm developed by MacLaren and Marsaglia (M-algorithm). Five sets of amplitudes are analyzed, selected in accordance with the recommendations of experts in the field of AC traction networks. The most rapid convergence to the limit takes place in the case of equal amplitudes. It was found that the maximum possible average value of the amplitude of the random noise is 7.4% less than the previously used value, which promises a significant economic impact
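    The limiting law stated in the abstract is the Rayleigh distribution: the length of a two-dimensional vector with independent standard normal coordinates has expectation sqrt(pi/2). A minimal Monte-Carlo check of that limit (an illustration only, not the authors' M-algorithm study):

    ```python
    import math
    import random

    # The limiting total-interference amplitude is the length of a 2-D vector
    # with independent N(0, 1) coordinates, i.e. Rayleigh-distributed with
    # exact mean sqrt(pi/2) ~= 1.2533.
    random.seed(0)
    n = 200_000
    total = sum(math.hypot(random.gauss(0, 1), random.gauss(0, 1))
                for _ in range(n))
    estimate = total / n
    print(estimate, math.sqrt(math.pi / 2))  # Monte-Carlo mean vs exact limit
    ```

    With 200,000 draws the Monte-Carlo mean lands within a few thousandths of the exact value, which is the kind of convergence-rate behaviour the abstract studies for its five amplitude sets.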

  5. Mathematical-statistical models of generated hazardous hospital solid waste.

    Science.gov (United States)

    Awad, A R; Obeidat, M; Al-Shareef, M

    2004-01-01

    This research work was carried out under the assumption that wastes generated from hospitals in Irbid, Jordan were hazardous. The hazardous and non-hazardous wastes generated from the different divisions in the three hospitals under consideration were not separated during the collection process. Three hospitals, Princess Basma hospital (public), Princess Bade'ah hospital (teaching), and Ibn Al-Nafis hospital (private) in Irbid were selected for this study. The research work took into account the amounts of solid waste accumulated from each division and also determined the total amount generated from each hospital. The generation rates (kilogram per patient per day; kilogram per bed per day) were determined for the three hospitals and compared with those of similar hospitals in Europe. The evaluation suggested that the current management of these wastes in the three studied hospitals needs revision, as these hospitals do not follow the waste-disposal methods practiced in developed countries that would reduce risk to human health and the environment. Statistical analysis was carried out to develop models for the prediction of the quantity of waste generated at each hospital (public, teaching, private). In these models, the number of patients, the number of beds, and the type of hospital were revealed to be significant factors affecting the quantity of waste generated. Multiple regressions were also used to estimate the quantities of wastes generated from similar divisions in the three hospitals (surgery, internal diseases, and maternity).

  6. A statistical shape model of the human second cervical vertebra.

    Science.gov (United States)

    Clogenson, Marine; Duff, John M; Luethi, Marcel; Levivier, Marc; Meuli, Reto; Baur, Charles; Henein, Simon

    2015-07-01

    Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the different steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. The input dataset is composed of manually segmented anonymized patient computerized tomography (CT) scans. The alignment of the different datasets is done with the procrustes alignment on surface models, and then, the registration is cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which includes the variability of the C2. The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open source software for image analysis and scientific visualization) with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
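    The alignment step mentioned in the abstract can be illustrated with ordinary two-dimensional Procrustes analysis (a simplified stdlib sketch on planar landmarks; the paper aligns 3D surface models and then casts registration as Gaussian-process model fitting):

    ```python
    import math

    def procrustes_2d(ref, shape):
        """Align `shape` to `ref` (lists of (x, y) landmarks) by removing
        translation, scale and rotation -- ordinary 2-D Procrustes alignment,
        the first step in building a statistical shape model."""
        def centre(pts):
            cx = sum(p[0] for p in pts) / len(pts)
            cy = sum(p[1] for p in pts) / len(pts)
            return [(x - cx, y - cy) for x, y in pts]

        def unit(pts):
            s = math.sqrt(sum(x * x + y * y for x, y in pts))
            return [(x / s, y / s) for x, y in pts]

        a, b = unit(centre(ref)), unit(centre(shape))
        # optimal rotation angle has a closed form in 2-D
        num = sum(ya * xb - xa * yb for (xa, ya), (xb, yb) in zip(a, b))
        den = sum(xa * xb + ya * yb for (xa, ya), (xb, yb) in zip(a, b))
        t = math.atan2(num, den)
        return [(x * math.cos(t) - y * math.sin(t),
                 x * math.sin(t) + y * math.cos(t)) for x, y in b]

    ref = [(0, 0), (2, 0), (2, 1), (0, 1)]
    # the same shape translated, scaled by 2 and rotated by 90 degrees
    shape = [(5 - 2 * y, 1 + 2 * x) for x, y in ref]
    aligned = procrustes_2d(ref, shape)
    back = procrustes_2d(ref, ref)  # reference in its own normalised frame
    err = max(math.dist(p, q) for p, q in zip(aligned, back))
    print(err)  # ~0: pose differences are removed before any PCA
    ```

    Removing pose in this way is what makes the subsequent PCA capture anatomical shape variability rather than differences in patient positioning within the CT scanner.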

  7. Calculation of precise firing statistics in a neural network model

    Science.gov (United States)

    Cho, Myoung Won

    2017-08-01

    A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network which works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by a summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.

  8. The Statistical Modeling of the Trends Concerning the Romanian Population

    Directory of Open Access Journals (Sweden)

    Gabriela OPAIT

    2014-11-01

    Full Text Available This paper reflects the statistical modeling of the resident population of Romania, that is, the total Romanian population, by means of the „Least Squares Method”. Any country develops by increasing its population, and hence its workforce, which is a factor influencing the growth of the Gross Domestic Product (G.D.P.). The „Least Squares Method” is a statistical technique for determining the best-fit trend line of a model.
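    The Least Squares trend fit described above reduces to two closed-form coefficients. The sketch below fits a linear trend to hypothetical resident-population figures (illustrative numbers, not Romania's official statistics):

    ```python
    def least_squares_trend(t, y):
        """Coefficients (a, b) of the best-fit trend line y = a + b*t obtained
        by minimising the sum of squared residuals (the Least Squares Method)."""
        n = len(t)
        t_mean = sum(t) / n
        y_mean = sum(y) / n
        b = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
             / sum((ti - t_mean) ** 2 for ti in t))
        a = y_mean - b * t_mean
        return a, b

    # hypothetical resident-population figures in millions (not official data)
    years = [2008, 2009, 2010, 2011, 2012, 2013]
    population = [20.5, 20.4, 20.2, 20.1, 20.0, 19.9]
    a, b = least_squares_trend(years, population)
    print(b)  # a negative slope indicates a declining population trend
    ```

    The sign and magnitude of the slope `b` is what such a model reads off as the demographic trend feeding into G.D.P. projections.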

  9. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  10. Schedulability of Herschel revisited using statistical model checking

    DEFF Research Database (Denmark)

    David, Alexandre; Larsen, Kim Guldstrand; Legay, Axel

    2015-01-01

    Schedulability analysis is a main concern for several embedded applications due to their safety-critical nature. The classical method of response time analysis provides an efficient technique used in industrial practice. However, the method is based on conservative assumptions related to execution...... to obtain some guarantee on the (un)schedulability of the model even in the presence of undecidability. Two methods are considered: symbolic model checking and statistical model checking. Since the model uses stop-watches, the reachability problem becomes undecidable so we are using an over......-approximation technique. We can safely conclude that the system is schedulable for varying values of BCET. For the cases where deadlines are violated, we use polyhedra to try to confirm the witnesses. Our alternative method to confirm non-schedulability uses statistical model-checking (SMC) to generate counter...

  11. Statistical Modelling of Cardiovascular Data. An Introduction to Linear Mixed Models

    OpenAIRE

    Gonçalves, Paulo; Lenoir, Christophe; Heymes, Christophe; Swynghedauw, Bernard; Lavergne, Christian

    2005-01-01

    Most statistical approaches in cardiovascular research have been based on analysis of variance (ANOVA). However, most of the time the assumption that data are independent is violated, since several measures are performed on the same subject (repeated measures). In addition, the presence of intra- and inter-observer variability can potentially obscure significant differences. The linear mixed model (LMM) is an extended multivariate linear regression method of analysis that accounts for both fixed ...

  12. A Review of Modeling Bioelectrochemical Systems: Engineering and Statistical Aspects

    Directory of Open Access Journals (Sweden)

    Shuai Luo

    2016-02-01

    Full Text Available Bioelectrochemical systems (BES) are promising technologies to convert organic compounds in wastewater to electrical energy through a series of complex physical-chemical, biological and electrochemical processes. Representative BES such as microbial fuel cells (MFCs) have been studied and advanced for energy recovery. Substantial experimental and modeling efforts have been made to investigate the processes involved in electricity generation, toward the improvement of BES performance for practical applications. However, many parameters potentially affect these processes, making the optimization of system performance hard to achieve. Mathematical models, including engineering models and statistical models, are powerful tools to help understand the interactions among the parameters in BES and to optimize BES configuration and operation. This review paper aims to introduce and discuss the recent developments of BES modeling from engineering and statistical aspects, including analysis of model structure, description of application cases and sensitivity analysis of various parameters. It is expected to serve as a compass for integrating the engineering and statistical modeling strategies to improve model accuracy for BES development.

  13. Development of 3D statistical mandible models for cephalometric measurements

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung Goo; Yi, Won Jin; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Huh, Kyung Hoe; Kim, Tae Il [School of Dentistry, Seoul National University, Seoul (Korea, Republic of); Hong, Helen; Yoo, Ji Hyun [Division of Multimedia Engineering, Seoul Women' s University, Seoul (Korea, Republic of)

    2012-09-15

    The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a 3D surface model, which was parameterized to establish correspondence between different individual surfaces. The principal component analysis (PCA) applied to all mandible shapes produced a mean model and characteristic models of variation. The cephalometric parameters were measured directly from the mean models to evaluate the 3D shape models. The means of the measured parameters were compared with those from other conventional studies. The male and female 3D statistical mean models were developed from 23 individual mandibles, respectively. The male and female characteristic shapes of variation produced by PCA showed a large variability included in the individual mandibles. The cephalometric measurements from the developed models were very close to those from some conventional studies. We described the construction of 3D mandibular shape models and presented the application of the 3D mandibular template in cephalometric measurements. Optimal reference models determined from variations produced by PCA could be used for craniofacial patients with various types of skeletal shape.
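    The PCA step that produces a mean model and characteristic modes of variation can be sketched as follows (a stdlib-only illustration on synthetic 2-D landmark vectors, not the paper's mandible surface models; the planted "widening" deformation is a made-up stand-in for anatomical variability):

    ```python
    import math
    import random

    def pca_shape_model(shapes):
        """Mean shape plus the dominant mode of variation of aligned landmark
        vectors, via power iteration on the sample covariance matrix."""
        n, d = len(shapes), len(shapes[0])
        mean = [sum(s[j] for s in shapes) / n for j in range(d)]
        centred = [[s[j] - mean[j] for j in range(d)] for s in shapes]
        cov = [[sum(c[i] * c[j] for c in centred) / (n - 1) for j in range(d)]
               for i in range(d)]
        v = [random.gauss(0, 1) for _ in range(d)]  # random start vector
        for _ in range(300):  # power iteration toward the top eigenvector
            w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
            s = math.sqrt(sum(x * x for x in w))
            v = [x / s for x in w]
        lam = sum(v[i] * sum(cov[i][j] * v[j] for j in range(d))
                  for i in range(d))
        return mean, v, lam

    random.seed(1)
    base = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]   # 4 landmarks, (x, y)
    mode = [0.5, 0.0, 0.5, 0.0, -0.5, 0.0, -0.5, 0.0]  # "widening" deformation
    shapes = []
    for _ in range(60):
        c = random.gauss(0, 1)  # per-shape coefficient of the deformation mode
        shapes.append([b + c * m + random.gauss(0, 0.01)
                       for b, m in zip(base, mode)])
    mean_shape, v1, lam1 = pca_shape_model(shapes)
    ```

    The recovered first eigenvector aligns with the planted deformation, which is exactly how the characteristic shapes of variation in the paper summarise the variability contained in the individual mandibles.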

  14. Modelling geographical graduate job search using circular statistics

    NARCIS (Netherlands)

    Faggian, Alessandra; Corcoran, Jonathan; McCann, Philip

    2013-01-01

    Theory suggests that the spatial patterns of migration flows are contingent both on individual human capital and underlying geographical structures. Here we demonstrate these features by using circular statistics in an econometric modelling framework applied to the flows of UK university graduates.

  15. Interactive comparison of hypothesis tests for statistical model checking

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Reijsbergen, D.P.; Scheinhardt, Willem R.W.

    2015-01-01

    We present a web-based interactive comparison of hypothesis tests as are used in statistical model checking, providing users and tool developers with more insight into their characteristics. Parameters can be modified easily and their influence is visualized in real time; an integrated simulation

  16. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  17. Statistical Modeling for Radiation Hardness Assurance: Toward Bigger Data

    Science.gov (United States)

    Ladbury, R.; Campola, M. J.

    2015-01-01

    New approaches to statistical modeling in radiation hardness assurance are discussed. These approaches yield quantitative bounds on flight-part radiation performance even in the absence of conventional data sources. This allows the analyst to bound radiation risk at all stages and for all decisions in the RHA process. It also allows optimization of RHA procedures for the project's risk tolerance.

  18. Nowcasting GDP Growth: statistical models versus professional analysts

    NARCIS (Netherlands)

    J.M. de Winter (Jasper)

    2016-01-01

    This thesis contains four chapters that cast new light on the ability of professional analysts and statistical models to assess economic growth in the current quarter (nowcast) and its development in the near future. This is not a trivial issue. An accurate assessment of the current

  19. Hypersonic Vehicle Tracking Based on Improved Current Statistical Model

    Directory of Open Access Journals (Sweden)

    He Guangjun

    2013-11-01

    Full Text Available A new method of tracking near-space hypersonic vehicles is put forward. According to the characteristics of hypersonic vehicles, we improve the current statistical model through online identification of the maneuvering frequency. A Monte Carlo simulation is used to analyze the performance of the method. The results show that the improved method exhibits very good tracking performance in comparison with the old method.

  20. Hierarchical modelling for the environmental sciences statistical methods and applications

    CERN Document Server

    Clark, James S

    2006-01-01

    New statistical tools are changing the way in which scientists analyze and interpret data and models. Hierarchical Bayes and Markov Chain Monte Carlo methods for analysis provide a consistent framework for inference and prediction where information is heterogeneous and uncertain, processes are complicated, and responses depend on scale. Nowhere are these methods more promising than in the environmental sciences.

  1. Octet magnetic Moments and their sum rules in statistical model

    CERN Document Server

    Batra, M

    2013-01-01

    The statistical model is implemented to find the magnetic moments of all octet baryons. Well-known sum rules such as the GMO and CG sum rules have been checked in order to verify the consistency of our approach. The small discrepancy between the results suggests the importance of SU(3) symmetry breaking.

  2. Environmental Concern and Sociodemographic Variables: A Study of Statistical Models

    Science.gov (United States)

    Xiao, Chenyang; McCright, Aaron M.

    2007-01-01

    Studies of the social bases of environmental concern over the past 30 years have produced somewhat inconsistent results regarding the effects of sociodemographic variables, such as gender, income, and place of residence. The authors argue that model specification errors resulting from violation of two statistical assumptions (interval-level…

  3. Statistical sampling and modelling for cork oak and eucalyptus stands

    NARCIS (Netherlands)

    Paulo, M.J.

    2002-01-01

    This thesis focuses on the use of modern statistical methods to solve problems on sampling, optimal cutting time and agricultural modelling in Portuguese cork oak and eucalyptus stands. The results are contained in five chapters that have been submitted for publication as scientific manuscripts. The

  4. Monte-Carlo simulation-based statistical modeling

    CERN Document Server

    Chen, John

    2017-01-01

    This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.

  5. Workshop on Model Uncertainty and its Statistical Implications

    CERN Document Server

    1988-01-01

    In this book problems related to the choice of models in such diverse fields as regression, covariance structure, time series analysis and multinomial experiments are discussed. The emphasis is on the statistical implications for model assessment when the assessment is done with the same data that generated the model. This is a problem of long standing, notorious for its difficulty. Some contributors discuss this problem in an illuminating way. Others, and this is a truly novel feature, investigate systematically whether sample re-use methods like the bootstrap can be used to assess the quality of estimators or predictors in a reliable way given the initial model uncertainty. The book should prove to be valuable for advanced practitioners and statistical methodologists alike.

  6. Statistical mechanics models for motion and force planning

    Science.gov (United States)

    Rodriguez, G.

    1990-01-01

    The models of statistical mechanics provide an alternative to the methods of classical mechanics more traditionally used in robotics. They have the potential to improve the analysis of object collisions, to handle kinematic and dynamic contact interactions within the same framework, and to reduce the need for perfect deterministic world-model information. The statistical mechanics models characterize the state of the system as a probability density function (p.d.f.) whose time evolution is governed by a partial differential equation subject to boundary and initial conditions. The boundary conditions that apply when rigid objects collide reflect the conservation of momentum. The models are being developed for embedding in remote semi-autonomous systems that need to reason about and interact with a multi-object environment.

  7. An Order Statistics Approach to the Halo Model for Galaxies

    CERN Document Server

    Paul, Niladri; Sheth, Ravi K

    2016-01-01

    We use the Halo Model to explore the implications of assuming that galaxy luminosities in groups are randomly drawn from an underlying luminosity function. We show that even the simplest of such order statistics models -- one in which this luminosity function $p(L)$ is universal -- naturally produces a number of features associated with previous analyses based on the `central plus Poisson satellites' hypothesis. These include the monotonic relation of mean central luminosity with halo mass, the Lognormal distribution around this mean, and the tight relation between the central and satellite mass scales. In stark contrast to observations of galaxy clustering, however, this model predicts $\\textit{no}$ luminosity dependence of large scale clustering. We then show that an extended version of this model, based on the order statistics of a $\\textit{halo mass dependent}$ luminosity function $p(L|m)$, is in much better agreement with the clustering data as well as satellite luminosities, but systematically under-pre...

  8. Statistical models describing the energy signature of buildings

    DEFF Research Database (Denmark)

    Bacher, Peder; Madsen, Henrik; Thavlov, Anders

    2010-01-01

    Approximately one third of the primary energy production in Denmark is used for heating in buildings. Therefore, efforts to accurately describe and improve the energy performance of the building mass are very important. For this purpose, statistical models describing the energy signature of a building, i.e. the heat dynamics of the building, have been developed. The models can be used to obtain rather detailed knowledge of the energy performance of the building and to optimize the control of the energy consumption for heating, which will be vital in conditions with increasing fluctuation of the energy supply or varying energy prices. The paper will give an overview of statistical methods and applied models based on experiments carried out in FlexHouse, which is an experimental building in SYSLAB, Risø DTU. The models are of different complexity and can provide estimates of physical quantities such as UA…

  9. Applications of spatial statistical network models to stream data

    Science.gov (United States)

    Daniel J. Isaak; Erin E. Peterson; Jay M. Ver Hoef; Seth J. Wenger; Jeffrey A. Falke; Christian E. Torgersen; Colin Sowder; E. Ashley Steel; Marie-Josee Fortin; Chris E. Jordan; Aaron S. Ruesch; Nicholas Som; Pascal. Monestiez

    2014-01-01

    Streams and rivers host a significant portion of Earth's biodiversity and provide important ecosystem services for human populations. Accurate information regarding the status and trends of stream resources is vital for their effective conservation and management. Most statistical techniques applied to data measured on stream networks were developed for...

  10. Applying the luminosity function statistics in the fireshell model

    Science.gov (United States)

    Rangel Lemos, L. J.; Bianco, C. L.; Ruffini, R.

    2015-12-01

    The luminosity function (LF) statistics applied to the data of BATSE, GBM/Fermi and BAT/Swift are the theme of this work. The LF is a powerful statistical tool for extracting useful information from astrophysical samples, and the key point of this statistical analysis is the detector sensitivity, which we have analyzed carefully. We applied the LF statistics to the three GRB classes predicted by the fireshell model. Using LF statistics and three GRB rates, we produced predicted distributions of peak flux N(Fpk), redshift N(z) and peak luminosity N(Lpk) for the three GRB classes, and we found differences among the distributions. We compared the predicted distributions with the observed ones (with and without redshifts), for which we compiled a list of 217 GRBs with known redshifts. Our goal is to turn GRBs into standard candles; one alternative is to find a correlation between the isotropic luminosity Liso and the Band peak spectral energy Epk.

  11. Statistical multiscale image segmentation via Alpha-stable modeling

    OpenAIRE

    Wan, Tao; Canagarajah, CN; Achim, AM

    2007-01-01

    This paper presents a new statistical image segmentation algorithm, in which the texture features are modeled by symmetric alpha-stable (SalphaS) distributions. These features are efficiently combined with the dominant color feature to perform automatic segmentation. First, the image is roughly segmented into textured and nontextured regions using the dual-tree complex wavelet transform (DT-CWT) with the sub-band coefficients modeled as SalphaS random variables. A multiscale segmentation is ...

  12. Generalized statistical model for multicomponent adsorption equilibria on zeolites

    Energy Technology Data Exchange (ETDEWEB)

    Rota, R.; Gamba, G.; Paludetto, R.; Carra, S.; Morbidelli, M. (Dipartimento di Chimica Fisica Applicata, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (IT))

    1988-05-01

    The statistical thermodynamic approach to multicomponent adsorption equilibria on zeolites has been extended to nonideal systems, through the correction of cross coefficients characterizing the interaction between unlike molecules. Estimation of the model parameters requires experimental binary equilibrium data. Comparisons with the classical model based on adsorbed solution theory are reported for three nonideal ternary systems. The two approaches provide comparable results in the simulation of binary and ternary adsorption equilibrium data at constant temperature and pressure.

  13. Development of 3D statistical mandible models for cephalometric measurements

    OpenAIRE

    2012-01-01

    Purpose The aim of this study was to provide sex-matched three-dimensional (3D) statistical shape models of the mandible, which would provide cephalometric parameters for 3D treatment planning and cephalometric measurements in orthognathic surgery. Materials and Methods The subjects used to create the 3D shape models of the mandible included 23 males and 23 females. The mandibles were segmented semi-automatically from 3D facial CT images. Each individual mandible shape was reconstructed as a ...

  14. Bregman divergence as general framework to estimate unnormalized statistical models

    CERN Document Server

    Gutmann, Michael

    2012-01-01

    We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively. We prove that recent estimation methods such as noise-contrastive estimation, ratio matching, and score matching belong to the proposed framework, and explain their interconnection based on supervised learning. Further, we discuss the role of boosting in unsupervised learning.

  15. Advances on statistical/thermodynamical models for unpolarized structure functions

    Science.gov (United States)

    Trevisan, Luis A.; Mirez, Carlos; Tomio, Lauro

    2013-03-01

    During the eighties and nineties, many statistical/thermodynamical models were proposed to describe the nucleon structure functions and the distribution of quarks in hadrons. Most of these models describe the constituent quarks and gluons inside the nucleon as a Fermi gas and a Bose gas, respectively, confined in an MIT bag [1] with continuous energy levels; other models consider a discrete spectrum. Some interesting features of the nucleons, such as the sea asymmetries d̄/ū and d̄ - ū, are obtained by these models.

  16. STATISTICAL MODELS FOR SEMI-RIGID NEMATIC POLYMERS

    Institute of Scientific and Technical Information of China (English)

    WANG Xinjiu

    1995-01-01

    Semi-rigid liquid crystal polymers are a class of liquid crystal polymers distinct from the long rigid-rod liquid crystal polymers to which the well-known Onsager and Flory theories apply. In this paper, three statistical models for the semi-rigid nematic polymer are addressed: the elastically jointed rod model, the worm-like chain model, and the non-homogeneous chain model. The nematic-isotropic transition temperature is examined, and the pseudo-second transition temperature is expressed analytically. Comparisons with experiments were made and good agreement was found.

  17. The estimation of yearly probability gain for seismic statistical model

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Based on the calculation method of information gain in stochastic processes presented by Vere-Jones, the relation between information gain and probability gain, which arises very commonly in earthquake prediction, is studied, and the yearly probability gain for seismic statistical models is proposed. The method is applied to the non-stationary Poisson model with whole-process exponential increase and to the stress release model. In addition, a prediction method for the stress release model is obtained, based on the inverse-function simulation method for stochastic variables.

  18. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  19. Stochastic geometry, spatial statistics and random fields models and algorithms

    CERN Document Server

    2015-01-01

    Providing a graduate level introduction to various aspects of stochastic geometry, spatial statistics and random fields, this volume places a special emphasis on fundamental classes of models and algorithms as well as on their applications, for example in materials science, biology and genetics. This book has a strong focus on simulations and includes extensive codes in Matlab and R, which are widely used in the mathematical community. It can be regarded as a continuation of the recent volume 2068 of Lecture Notes in Mathematics, where other issues of stochastic geometry, spatial statistics and random fields were considered, with a focus on asymptotic methods.

  20. Level statistics of a pseudo-Hermitian Dicke model.

    Science.gov (United States)

    Deguchi, Tetsuo; Ghosh, Pijush K; Kudo, Kazue

    2009-08-01

    A non-Hermitian operator that is related to its adjoint through a similarity transformation is defined as a pseudo-Hermitian operator. We study the level statistics of a pseudo-Hermitian Dicke Hamiltonian that undergoes quantum phase transition (QPT). We find that the level-spacing distribution of this Hamiltonian near the integrable limit is close to Poisson distribution, while it is Wigner distribution for the ranges of the parameters for which the Hamiltonian is nonintegrable. We show that the assertion in the context of the standard Dicke model that QPT is a precursor to a change in the level statistics is not valid in general.

  1. Convex Combination of Multiple Statistical Models with Application to VAD

    DEFF Research Database (Denmark)

    Petsatodis, Theodoros; Boukis, Christos; Talantzis, Fotios

    2011-01-01

    This paper proposes a robust Voice Activity Detector (VAD) based on the observation that the distribution of speech captured with far-field microphones is highly varying, depending on the noise and reverberation conditions. The proposed VAD employs a convex combination scheme comprising three statistical distributions - a Gaussian, a Laplacian, and a two-sided Gamma - to effectively model captured speech. This scheme shows increased ability to adapt to dynamic acoustic environments. The contribution of each distribution to this convex combination is automatically adjusted based on the statistical…

  2. General Linear Models: An Integrated Approach to Statistics

    Directory of Open Access Journals (Sweden)

    Andrew Faulkner

    2008-09-01

    Full Text Available Generally, in psychology, the various statistical analyses are taught independently of each other. As a consequence, students struggle to learn new statistical analyses in contexts that differ from their textbooks. This paper gives a short introduction to the general linear model (GLM), in which it is shown that ANOVA (one-way, factorial, repeated measures) and the analysis of covariance are simply special cases of multiple correlation/regression analysis (MCRA). Generalizations to other cases, such as multivariate and nonlinear analyses, are also discussed. It can easily be shown that every popular linear analysis can be derived from an understanding of MCRA.
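    The record's central claim, that one-way ANOVA is a special case of regression, can be checked numerically. The sketch below (with illustrative group means and sizes, not data from the paper) computes the F statistic once from between/within sums of squares and once from a dummy-coded regression:

    ```python
    import numpy as np

    # Illustrative data: three groups of 20 observations each.
    rng = np.random.default_rng(0)
    groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.5, 1.5)]
    y = np.concatenate(groups)
    k, n = len(groups), len(y)

    # Classical one-way ANOVA F statistic.
    grand = y.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    f_anova = (ss_between / (k - 1)) / (ss_within / (n - k))

    # The same F statistic from a regression on dummy-coded group labels.
    labels = np.repeat(np.arange(k), [len(g) for g in groups])
    X = np.column_stack([np.ones(n)] + [(labels == j).astype(float) for j in range(1, k)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = ((y - X @ beta) ** 2).sum()
    ss_tot = ((y - grand) ** 2).sum()
    f_reg = ((ss_tot - ss_res) / (k - 1)) / (ss_res / (n - k))

    print(np.isclose(f_anova, f_reg))
    ```

    Both routes give the same F because the dummy-coded regression fits exactly the group means, so its explained sum of squares equals the between-group sum of squares.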

  3. Statistical skull models from 3D X-ray images

    CERN Document Server

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as such static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, the patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance, defined on the vertices, between the high-density mesh and a shared low-density mesh, in a multi-resolution approach. A principal component analysis is performed on the normalized registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
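    The final modeling step described in the record, PCA on registered mesh coordinates, can be sketched as follows. The shapes below are random placeholders (30 subjects, 100 vertices in 3D), not real skull data; the weights `b` for the synthesized shape are arbitrary:

    ```python
    import numpy as np

    # Placeholder "registered meshes": 30 subjects, 100 vertices x 3 coords,
    # flattened to one row per subject.
    rng = np.random.default_rng(2)
    shapes = rng.normal(size=(30, 100 * 3))

    # PCA via SVD of the centered data matrix.
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    var = S ** 2 / (len(shapes) - 1)   # variance captured by each mode

    # A new shape instance: mean plus a weighted sum of the first two modes,
    # scaled by one standard deviation per mode.
    b = np.array([1.0, -0.5])
    new_shape = mean_shape + b @ (np.sqrt(var[:2])[:, None] * Vt[:2])
    print(new_shape.shape)
    ```

    The rows of `Vt` are the modes of shape variation; truncating to the leading modes gives the compact linear shape model the record refers to.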

  4. Statistical traffic modeling of MPEG frame size: Experiments and Analysis

    Directory of Open Access Journals (Sweden)

    Haniph A. Latchman

    2009-12-01

    Full Text Available For guaranteed quality of service (QoS) and sufficient bandwidth in a communication network which provides an integrated multimedia service, it is important to obtain an analytical and tractable model of the compressed MPEG data. This paper presents a statistical group-of-pictures (GOP) MPEG frame size model to increase network traffic performance in a communication network. We extract MPEG frame data from commercial DVD movies and build probability histograms to analyze the statistical characteristics of the MPEG frame data. Six candidate probability distributions are considered, and their parameters are obtained from the empirical data using maximum likelihood estimation (MLE). This paper shows that the lognormal distribution is the best-fitting model for MPEG-2 total frame data.
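    The MLE step for the lognormal candidate is particularly simple, since it has a closed form. A minimal sketch, using synthetic frame sizes in place of the paper's DVD data:

    ```python
    import numpy as np

    # Synthetic stand-in for MPEG frame sizes (bytes); the true data in the
    # paper come from commercial DVD movies.
    rng = np.random.default_rng(1)
    frame_bytes = rng.lognormal(mean=9.0, sigma=0.4, size=5000)

    # For a lognormal, the MLE is closed-form: mu and sigma are the sample
    # mean and standard deviation of log(data).
    logs = np.log(frame_bytes)
    mu_hat, sigma_hat = logs.mean(), logs.std()
    print(f"mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
    ```

    The fitted parameters can then be compared across the six candidate distributions via their maximized log-likelihoods to pick the best-fitting model.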

  5. Statistical 3D damage accumulation model for ion implant simulators

    CERN Document Server

    Hernandez-Mangas, J M; Enriquez, L E; Bailon, L; Barbolla, J; Jaraiz, M

    2003-01-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  6. Statistical 3D damage accumulation model for ion implant simulators

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Mangas, J.M. E-mail: jesman@ele.uva.es; Lazaro, J.; Enriquez, L.; Bailon, L.; Barbolla, J.; Jaraiz, M

    2003-04-01

    A statistical 3D damage accumulation model, based on the modified Kinchin-Pease formula, for ion implant simulation has been included in our physically based ion implantation code. It has only one fitting parameter for electronic stopping and uses 3D electron density distributions for different types of targets including compound semiconductors. Also, a statistical noise reduction mechanism based on the dose division is used. The model has been adapted to be run under parallel execution in order to speed up the calculation in 3D structures. Sequential ion implantation has been modelled including previous damage profiles. It can also simulate the implantation of molecular and cluster projectiles. Comparisons of simulated doping profiles with experimental SIMS profiles are presented. Also comparisons between simulated amorphization and experimental RBS profiles are shown. An analysis of sequential versus parallel processing is provided.

  7. Experimental, statistical, and biological models of radon carcinogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig.

  8. A statistical model for characterization of histopathology images

    Science.gov (United States)

    Álvarez, Pablo; Castro, Guatizalema; Corredor, Germán.; Romero, Eduardo

    2015-01-01

    Accessing information of interest in collections of histopathology images is a challenging task. To address this issue, previous works have designed search strategies based on the use of keywords and low-level features. However, those methods have proven insufficient or impractical for this purpose. Alternative low-level features such as cell area, distance among cells, and cell density are directly associated with simple histological concepts and could serve as good descriptors. In this paper, a statistical model is adapted to represent the distribution of the areas occupied by cells, for use in whole histopathology image characterization. This novel descriptor facilitates the design of metrics based on distribution parameters and also provides new elements for better image understanding. The proposed model was validated using image processing and statistical techniques. Results showed low error rates, demonstrating the accuracy of the model.

  9. SoS contract verification using statistical model checking

    Directory of Open Access Journals (Sweden)

    Alessandro Mignogna

    2013-11-01

    Full Text Available Exhaustive formal verification for systems of systems (SoS) is impractical and cannot be applied on a large scale. In this paper we propose to use statistical model checking for efficient verification of SoS. We address three relevant aspects of systems of systems: (1) the model of the SoS, which includes stochastic aspects; (2) the formalization of the SoS requirements in the form of contracts; (3) the tool-chain to support statistical model checking for SoS. We adapt the SMC technique for application to heterogeneous SoS. We extend the UPDM/SysML specification language to express the SoS requirements that the implemented strategies over the SoS must satisfy. The requirements are specified with a new contract language specifically designed for SoS, targeting a high-level English-pattern language but relying on an accurate semantics given by the standard temporal logics. The contracts are verified against the UPDM/SysML specification using the statistical model checker (SMC) PLASMA combined with the simulation engine DESYRE, which integrates heterogeneous behavioral models through the functional mock-up interface (FMI) standard. The tool-chain allows computing an estimation of the satisfiability of the contracts by the SoS. The results help the system architect to trade off different solutions to guide the evolution of the SoS.

  10. Improved head-driven statistical models for natural language parsing

    Institute of Scientific and Technical Information of China (English)

    袁里驰

    2013-01-01

    Head-driven statistical models for natural language parsing are the most representative lexicalized syntactic parsing models, but they only utilize semantic dependency between words and do not incorporate other semantic information such as semantic collocation and semantic category. Several improvements on this distinctive parser are presented. Firstly, "valency" is an essential semantic feature of words: once the valency of a word is determined, the collocation of the word is clear and the sentence structure can be directly derived. Thus, a syntactic parsing model combining valence structure with semantic dependency is proposed on the basis of head-driven statistical syntactic parsing models. Secondly, semantic role labeling (SRL) is very necessary for deep natural language processing, so an integrated parsing approach is proposed that incorporates semantic parsing into the syntactic parsing process. Experiments are conducted for the refined statistical parser. The results show that 87.12% precision and 85.04% recall are obtained, and the F measure is improved by 5.68% compared with the head-driven parsing model introduced by Collins.

  11. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    Science.gov (United States)

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, the t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically accounted for the non-normal distribution of, and dependence between, data points in the daily predicted and observed data. Of the tested methods, median objective functions, the sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression R² of 0.70, a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the hypothesis of equal data means. The Nash-Sutcliffe coefficient and the R² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a set ideal value of one.
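    Two of the monthly evaluation statistics named in this record are easy to state concretely. A minimal sketch with illustrative observed/predicted monthly runoff depths (not data from the study):

    ```python
    import numpy as np

    # Illustrative monthly runoff depths (mm); not the study's data.
    obs = np.array([12.0, 30.5, 22.1, 8.4, 15.9, 27.3])
    pred = np.array([10.8, 28.0, 25.0, 9.9, 14.2, 24.7])

    def nash_sutcliffe(obs, pred):
        # 1 minus squared model error relative to the variance of the
        # observations around their mean; 1 is a perfect fit.
        return 1.0 - ((obs - pred) ** 2).sum() / ((obs - obs.mean()) ** 2).sum()

    def r_squared(obs, pred):
        # Squared Pearson correlation between observed and predicted.
        return np.corrcoef(obs, pred)[0, 1] ** 2

    print(f"NSE={nash_sutcliffe(obs, pred):.2f}, R2={r_squared(obs, pred):.2f}")
    ```

    Both statistics share the fixed ideal value of one, which is why the study prefers them for monthly comparisons; unlike R², the Nash-Sutcliffe efficiency also penalizes systematic bias and can go negative for models worse than the observed mean.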

  12. Editorial to: Six papers on Dynamic Statistical Models

    DEFF Research Database (Denmark)

    2014-01-01

    The following six papers are based on invited lectures at the satellite meeting held at the University of Copenhagen before the 58th World Statistics Congress of the International Statistical Institute in Dublin in 2011. At the invitation of the Bernoulli Society, the satellite meeting brought together researchers from different areas working with frontier research topics in statistics for dynamic models. This issue of SJS contains a quite diverse collection of six papers from the conference: Spectral Estimation of Covolatility from Noisy Observations Using Local Weights by Markus Bibinger and Markus Reiß, One-Way Anova… of Copenhagen Program of Excellence and Elsevier. We would also like to thank the authors for contributing interesting papers, the referees for their helpful reports, and the present and previous editors of SJS for their support of the publication of the papers from the satellite meeting.

  13. Physics-based statistical learning approach to mesoscopic model selection

    Science.gov (United States)

    Taverniers, Søren; Haut, Terry S.; Barros, Kipton; Alexander, Francis J.; Lookman, Turab

    2015-11-01

    In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various model complexities is tested on GD "test" data independent of the training data. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.

  14. Nuclear EMC effect in non-extensive statistical model

    Science.gov (United States)

    Trevisan, Luis A.; Mirez, Carlos

    2013-05-01

    In the present work, we attempt to describe the nuclear EMC effect using the proton structure functions obtained from the non-extensive statistical quark model. We recall that this model has three fundamental variables: the temperature T, the radius, and the Tsallis parameter q. By combining different small changes, good agreement with the experimental data may be obtained. Another interesting point of the model is that it allows phenomenological interpretation, for instance with q constant while changing the radius and the temperature, or changing the radius and q while keeping the temperature fixed.

  15. New statistical lattice model with double honeycomb symmetry

    Science.gov (United States)

    Naji, S.; Belhaj, A.; Labrim, H.; Bhihi, M.; Benyoussef, A.; El Kenz, A.

    2014-04-01

    Inspired by the connection between Lie symmetries and two-dimensional materials, we propose a new statistical lattice model based on the double hexagonal structure appearing in the G2 symmetry. We first construct an Ising-1/2 model, with spin values σ = ±1, exhibiting this symmetry. The corresponding ground state shows ferromagnetic, antiferromagnetic, partial ferrimagnetic and topological ferrimagnetic phases depending on the exchange couplings. We then examine the phase diagrams and the magnetization using the mean field approximation (MFA). Among other results, we suggest that the present model lies between systems involving the triangular and the single hexagonal lattice geometries.

  16. Statistical shape model with random walks for inner ear segmentation

    DEFF Research Database (Denmark)

    Pujadas, Esmeralda Ruiz; Kjer, Hans Martin; Piella, Gemma

    2016-01-01

    Cochlear implants can restore hearing to completely or partially deaf patients. The intervention planning can be aided by providing a patient-specific model of the inner ear. Such a model has to be built from high-resolution images with accurate segmentations; thus, a precise segmentation is required. We propose a new framework for segmentation of micro-CT cochlear images using random walks combined with a statistical shape model (SSM). The SSM allows us to constrain the less contrasted areas and ensures valid inner ear shape outputs. Additionally, a topology preservation method is proposed...

  17. Think continuous: Markovian Gaussian models in spatial statistics

    CERN Document Server

    Simpson, Daniel; Rue, Håvard

    2011-01-01

    Gaussian Markov random fields (GMRFs) are frequently used as computationally efficient models in spatial statistics. Unfortunately, it has traditionally been difficult to link GMRFs with the more traditional Gaussian random field models, as the Markov property is difficult to deploy in continuous space. Following the pioneering work of Lindgren et al. (2011), we expound on the link between Markovian Gaussian random fields and GMRFs. In particular, we discuss the theoretical and practical aspects of fast computation with continuously specified Markovian Gaussian random fields, as well as the advantages they offer in terms of clear, parsimonious and interpretable models of anisotropy and non-stationarity.

  18. Statistics of a neuron model driven by asymmetric colored noise.

    Science.gov (United States)

    Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin

    2015-02-01

    Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
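    A minimal simulation sketch of such a model, with made-up parameter values rather than those of the paper: a perfect integrate-and-fire neuron integrates a drive that switches between two states with unequal rates, and the resulting interspike intervals are summarized by their coefficient of variation and lag-1 serial correlation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler simulation of a perfect integrate-and-fire neuron driven by
# asymmetric dichotomous noise.  All parameter values are illustrative.
mu, s_plus, s_minus = 1.0, 0.8, -0.5   # mean drive and the two noise states
k_plus, k_minus = 2.0, 5.0             # asymmetric switching rates
v_thresh, dt, n_spikes = 1.0, 1e-3, 500

v, state, t, t_last, isis = 0.0, +1, 0.0, 0.0, []
while len(isis) < n_spikes:
    rate = k_plus if state > 0 else k_minus
    if rng.random() < rate * dt:       # Markov jump between the two states
        state = -state
    v += (mu + (s_plus if state > 0 else s_minus)) * dt
    t += dt
    if v >= v_thresh:                  # threshold crossing: spike and reset
        isis.append(t - t_last)
        t_last, v = t, 0.0

isis = np.array(isis)
cv = isis.std() / isis.mean()                   # coefficient of variation
rho1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]   # lag-1 serial correlation
print(round(isis.mean(), 3), round(cv, 3), round(rho1, 3))
```

    Because the noise is colored, successive intervals are correlated, which is what the exact first-passage-time analysis in the paper quantifies.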

  19. Random systems of hard particles: models and statistics

    Institute of Scientific and Technical Information of China (English)

    Dietrich Stoyan

    2002-01-01

    This paper surveys models and statistical properties of random systems of hard particles. Such systems appear frequently in materials science, biology and elsewhere. In mathematical-statistical investigations, simulations of such structures play an important role. In these simulations various methods and models are applied, namely the RSA model, sedimentation and collective rearrangement algorithms, molecular dynamics, and Monte Carlo methods such as the Metropolis-Hastings algorithm. The statistical description of real and simulated particle systems uses ideas of the mathematical theories of random sets and point processes. This leads to characteristics such as volume fraction or porosity, covariance, contact distribution functions, and specific connectivity number from the random set approach, and intensity, pair correlation function and mark correlation functions from the point process approach. Some of them can be determined stereologically using planar sections, while others can only be obtained using three-dimensional data and 3D image analysis. They are valuable tools for fitting models to empirical data and, consequently, for understanding various materials, biological structures, porous media and other practically important spatial structures.
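    Of the simulation methods listed, random sequential adsorption (RSA) is the simplest to sketch: candidate disks are placed at random and kept only if they cause no overlap. The radius and attempt count below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random sequential adsorption (RSA) of hard disks in the unit square:
# each candidate centre is accepted only if the new disk overlaps no
# previously placed disk.
r, n_attempts = 0.03, 10000
centres = []
for _ in range(n_attempts):
    p = rng.uniform(r, 1.0 - r, 2)
    if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= (2.0 * r)**2 for q in centres):
        centres.append(p)

# Volume (here: area) fraction, a basic random-set characteristic
area_fraction = len(centres) * np.pi * r**2
print(len(centres), round(area_fraction, 3))
```

    As attempts accumulate, the area fraction approaches the well-known RSA jamming limit for disks (about 0.547), well below the densities reachable by collective rearrangement.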

  20. Bayesian statistical methods and their application in probabilistic simulation models

    Directory of Open Access Journals (Sweden)

    Sergio Iannazzo

    2007-03-01

    Full Text Available Bayesian statistical methods are facing a rapidly growing level of interest and acceptance in the field of health economics. The reasons for this success are probably to be found in the theoretical foundations of the discipline, which make these techniques more appealing to decision analysis. To this should be added the modern progress in IT, which has produced several flexible and powerful statistical software frameworks. Among them, probably one of the most notable is the BUGS language project and its standalone application for MS Windows, WinBUGS. The scope of this paper is to introduce the subject and to show some interesting applications of WinBUGS in developing complex economic models based on Markov chains. The advantages of this approach reside in the elegance of the code produced and in its capability to easily develop probabilistic simulations. Moreover, an example of the integration of Bayesian inference models in a Markov model is shown. This last feature lets the analyst conduct statistical analyses on the available sources of evidence and exploit them directly as inputs to the economic model.
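    The kind of probabilistic Markov model discussed can be sketched without WinBUGS itself. The Python sketch below propagates a three-state cohort (Well, Sick, Dead) with transition probabilities drawn from Beta distributions standing in for posterior draws of a Bayesian inference step; all state names, rates, and cycle counts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Probabilistic three-state Markov cohort model (Well, Sick, Dead).
n_draws, n_cycles = 2000, 40
life_years = []
for _ in range(n_draws):
    p_ws = rng.beta(20, 180)      # Well -> Sick (posterior stand-in)
    p_wd = rng.beta(5, 495)       # Well -> Dead
    p_sd = rng.beta(30, 170)      # Sick -> Dead
    P = np.array([
        [1 - p_ws - p_wd, p_ws,     p_wd],
        [0.0,             1 - p_sd, p_sd],
        [0.0,             0.0,      1.0],   # Dead is absorbing
    ])
    state = np.array([1.0, 0.0, 0.0])       # whole cohort starts Well
    ly = 0.0
    for _ in range(n_cycles):
        state = state @ P
        ly += state[0] + state[1]           # alive states accrue life-years
    life_years.append(ly)

life_years = np.array(life_years)
print(round(life_years.mean(), 2), round(np.percentile(life_years, 2.5), 2))
```

    Repeating the cohort run for each parameter draw propagates the parameter uncertainty through the economic model, exactly the probabilistic-simulation pattern the article implements in BUGS.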

  1. Statistical model of clutter suppression in tissue harmonic imaging

    Science.gov (United States)

    Yan, Xiang; Hamilton, Mark F.

    2011-01-01

    A statistical model is developed for the suppression of clutter in tissue harmonic imaging (THI). Tissue heterogeneity is modeled as a random phase screen that is characterized by its correlation length and variance. With the autocorrelation function taken to be Gaussian and for small variance, statistical solutions are derived for the mean intensities at the fundamental and second-harmonic frequencies in the field of a focused sound beam that propagates through the phase screen. The statistical solutions are verified by comparison with ensemble averaging of direct numerical simulations. The model demonstrates that THI reduces the aberration clutter appearing in the focal region regardless of the depth of the aberrating layer, with suppression of the clutter most effective when the layer is close to the source. The model is also applied to the reverberation clutter that is transmitted forward along the axis of the beam. As with aberration clutter, suppression of such reverberation clutter by THI is most pronounced when the tissue heterogeneity is located close to the source. PMID:21428483

  2. A statistical model for porous structure of rocks

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The geometric features and the distribution properties of pores in rocks were investigated by means of CT scanning tests of sandstones. The centroidal coordinates of pores, the statistical characteristics of pore distance, quantity and size, and their probability density functions were formulated in this paper. The Monte Carlo method and a random number generating algorithm were employed to generate two series of random numbers with the desired statistical characteristics and probability density functions, upon which the random distributions of pore position, distance and quantity were determined. A three-dimensional porous structural model of sandstone was constructed based on the FLAC3D program and the information on pore position and distribution that the series of random numbers defined. On the basis of this model, Brazil split tests of rock discs were carried out to examine the stress distribution, the pattern of element failure and the inosculation of failed elements. The simulation indicated that the proposed model was consistent with the realistic porous structure of rock in terms of the statistical properties of pores and geometric similarity. The built-up model disclosed the influence of pores on the stress distribution, the failure mode of material elements and the inosculation of failed elements.
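    A minimal version of the described Monte Carlo construction, with assumed (not measured) distributions: pore centroids uniform in a cube and radii lognormal, from which a porosity estimate and nearest-neighbour distance statistics are computed.

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo generation of a synthetic porous structure.  The distribution
# choices below are illustrative, not taken from the CT measurements.
n_pores, box = 500, 100.0
centres = rng.uniform(0.0, box, (n_pores, 3))
radii = rng.lognormal(mean=0.0, sigma=0.4, size=n_pores)

# Porosity estimate (ignoring overlaps, valid for dilute pore systems)
porosity = (4.0 / 3.0) * np.pi * np.sum(radii**3) / box**3

# Nearest-neighbour distance statistics, a typical descriptor to match
# against the empirical pore-distance distribution
d = np.linalg.norm(centres[:, None, :] - centres[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn = d.min(axis=1)
print(round(porosity, 4), round(nn.mean(), 2))
```

    Matching the generated moments and density functions to the CT-derived ones is what ties such a synthetic structure to the real rock before it is meshed for the split-test simulation.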

  3. Olive mill wastewater characteristics: modelling and statistical analysis

    Directory of Open Access Journals (Sweden)

    Martins-Dias, Susete

    2004-09-01

    Full Text Available A synthesis of the work carried out on Olive Mill Wastewater (OMW) characterisation is given, covering articles published over the last 50 years. Data on OMW characterisation found in the literature are summarised, and correlations between them and with phenolic compounds content are sought. This permits the characteristics of an OMW to be estimated from one simple measurement: the phenolic compounds concentration. A model based on OMW characterisations covering six countries was developed, along with a model for Portuguese OMW. The statistical analysis of the correlations obtained indicates that the Chemical Oxygen Demand of a given OMW is a second-degree polynomial function of its phenolic compounds concentration. Tests to evaluate the significance of the regressions were carried out, based on multivariable ANOVA analysis and on visual inspection of the standardised residuals distribution and their means, for confidence levels of 95 and 99%, clearly validating these models. This modelling work will help in the future planning, operation and monitoring of an OMW treatment plant.

  4. Real-Time Statistical Modeling of Blood Sugar.

    Science.gov (United States)

    Otoom, Mwaffaq; Alshraideh, Hussam; Almasaeid, Hisham M; López-de-Ipiña, Diego; Bravo, José

    2015-10-01

    Diabetes is considered a chronic disease that incurs various types of cost worldwide. One major challenge in the control of diabetes is the real-time determination of the proper insulin dose. In this paper, we develop a prototype for real-time blood sugar control, integrated with the cloud. Our system controls blood sugar by observing the blood sugar level and accordingly determining the appropriate insulin dose based on the patient's historical data, all in real time and automatically. To determine the appropriate insulin dose, we propose two statistical models for modeling blood sugar profiles, namely an ARIMA model and a Markov-based model. Our experiment evaluating the performance of the two models shows that the ARIMA model outperforms the Markov-based model in terms of prediction accuracy.
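    As a stand-in for the paper's ARIMA model, the sketch below fits a plain AR(2) model by least squares to a synthetic glucose-like series and makes a one-step-ahead prediction; the series, the AR order, and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic glucose-like series: slow oscillation around 100 mg/dL + noise
t = np.arange(300)
glucose = 100 + 20 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 3, t.size)

# AR(2) model fitted by ordinary least squares:
#   y_t = a*y_{t-1} + b*y_{t-2} + c + noise
p = 2
X = np.column_stack([glucose[p - 1:-1], glucose[p - 2:-2], np.ones(t.size - p)])
y = glucose[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

pred = X @ coef
rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
next_value = float(coef @ [glucose[-1], glucose[-2], 1.0])  # one-step forecast
print(round(rmse, 2), round(next_value, 1))
```

    A full ARIMA adds differencing and moving-average terms, but the fit-then-forecast loop is the same shape as the real-time dosing loop described in the paper.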

  5. Prognostic Significance of Perineural Invasion in Patients with Rectal Cancer using R Environment for Statistical Computing and Graphics

    Directory of Open Access Journals (Sweden)

    Ioan Catalin VLAD

    2012-11-01

    Full Text Available Purpose: In recent studies, perineural invasion (PNI) has been associated with poor survival rates in rectal cancer, but the impact of PNI is still controversial. We assessed PNI as a potential prognostic factor in rectal cancer. Patients and Methods: We analyzed 317 patients with rectal cancer resected at The Oncology Institute "Prof. Dr. Ion Chiricuţă" Cluj-Napoca between January 2000 and December 2008. Tumors were reviewed for PNI by a pathologist. Patient data were reviewed and entered into a comprehensive database. The statistical analysis in our study was carried out in the R environment for statistical computing and graphics, version 1.15.1. Overall and disease-free survival were determined using the Kaplan-Meier method, and multivariate analysis was performed using the Cox proportional hazards model. Results were compared using the log-rank test. Results: In our study, PNI was identified in 19% of tumors. The 5-year disease-free survival rate was higher for patients with PNI-negative tumors versus those with PNI-positive tumors (57.31% vs. 36.99%, p=0.009). The 5-year overall survival rate was 59.15% for PNI-negative tumors versus 39.19% for PNI-positive tumors (p=0.014). On multivariate analysis, PNI was an independent prognostic factor for overall survival (Hazard Ratio = 0.6; 95% CI = 0.41 to 0.87; p = 0.0082). Conclusions: PNI can be considered an independent prognostic factor of outcomes in patients with rectal cancer. PNI should be taken into account when selecting patients for adjuvant treatment. The R environment for statistical computing and graphics is complex yet easy-to-use software that has proven to be efficient in our clinical study.
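    The study's survival analysis was done in R; as a language-neutral illustration of the Kaplan-Meier method it uses, here is a minimal product-limit estimator on made-up censored data (times and event flags are invented, not the study's).

```python
import numpy as np

def kaplan_meier(times, events):
    """Return distinct event times and the Kaplan-Meier survival curve S(t).

    events: 1 = event observed (e.g. death/recurrence), 0 = censored.
    """
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    uniq = np.unique(times[events == 1])
    s, surv = 1.0, []
    for t in uniq:
        at_risk = np.sum(times >= t)              # still under observation
        d = np.sum((times == t) & (events == 1))  # events at this time
        s *= 1.0 - d / at_risk                    # product-limit update
        surv.append(s)
    return uniq, np.array(surv)

t_ev, s_ev = kaplan_meier([5, 8, 12, 12, 20, 25, 30], [1, 0, 1, 1, 0, 1, 0])
print(list(t_ev), [round(x, 3) for x in s_ev])
```

    Censored subjects leave the risk set without forcing the curve down, which is exactly why the method is standard for follow-up data such as the 5-year survival rates reported above.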

  6. The distribution of P-values in medical research articles suggested selective reporting associated with statistical significance.

    Science.gov (United States)

    Perneger, Thomas V; Combescure, Christophe

    2017-07-01

    Published P-values provide a window into the global enterprise of medical research. The aim of this study was to use the distribution of published P-values to estimate the relative frequencies of null and alternative hypotheses and to seek irregularities suggestive of publication bias. This cross-sectional study included P-values published in 120 medical research articles in 2016 (30 each from the BMJ, JAMA, Lancet, and New England Journal of Medicine). The observed distribution of P-values was compared with expected distributions under the null hypothesis (i.e., uniform between 0 and 1) and the alternative hypothesis (strictly decreasing from 0 to 1). P-values were categorized according to conventional levels of statistical significance and in one-percent intervals. Among 4,158 recorded P-values, 26.1% were highly significant (P < 0.001), with the remainder falling in the conventional intervals from P ≥ 0.001 to < 0.01, from P ≥ 0.01 to < 0.05, and P ≥ 0.05. We noted three irregularities: (1) a high proportion of P-values below 0.001, (2) an excess of P-values equal to 1, and (3) about twice as many P-values less than 0.05 as more than 0.05. The latter finding was seen in both randomized trials and observational studies, and in most types of analyses, excepting heterogeneity tests and interaction tests. Under plausible assumptions, we estimate that about half of the tested hypotheses were null and the other half were alternative. This analysis suggests that statistical tests published in medical journals are not a random sample of null and alternative hypotheses but that selective reporting is prevalent. In particular, significant results are about twice as likely to be reported as nonsignificant results. Copyright © 2017 Elsevier Inc. All rights reserved.
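    The null/alternative mixture interpretation can be illustrated numerically: simulate half the tests under the null (which yields uniform P-values) and half under an alternative with an arbitrary effect size, and observe the resulting share of P < 0.05. The 50/50 split and the effect size are illustrative assumptions, not the study's estimates.

```python
import math
import numpy as np

rng = np.random.default_rng(6)

def p_value(z):
    """Two-sided P-value of a z statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Mixture of hypotheses: half null (z ~ N(0,1), uniform P-values) and half
# alternative (z shifted by an arbitrary effect, decreasing P-value density).
n = 20000
null_z = rng.normal(0.0, 1.0, n // 2)
alt_z = rng.normal(2.8, 1.0, n // 2)
p = np.array([p_value(z) for z in np.concatenate([null_z, alt_z])])

frac_sig = float(np.mean(p < 0.05))
print(round(frac_sig, 3))
```

    Comparing such simulated mixtures with the published distribution is the basic idea behind estimating the null/alternative split; selective reporting then shows up as a surplus of significant values beyond what any mixture predicts.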

  7. Statistically significant dependence of the Xaa-Pro peptide bond conformation on secondary structure and amino acid sequence

    Directory of Open Access Journals (Sweden)

    Leitner Dietmar

    2005-04-01

    Full Text Available Background: A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has a predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets, different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore, we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results: One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion: In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformation in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.

  8. Can spatial statistical river temperature models be transferred between catchments?

    Science.gov (United States)

    Jackson, Faye L.; Fryer, Robert J.; Hannah, David M.; Malcolm, Iain A.

    2017-09-01

    There has been increasing use of spatial statistical models to understand and predict river temperature (Tw) from landscape covariates. However, it is not financially or logistically feasible to monitor all rivers and the transferability of such models has not been explored. This paper uses Tw data from four river catchments collected in August 2015 to assess how well spatial regression models predict the maximum 7-day rolling mean of daily maximum Tw (Twmax) within and between catchments. Models were fitted for each catchment separately using (1) landscape covariates only (LS models) and (2) landscape covariates and an air temperature (Ta) metric (LS_Ta models). All the LS models included upstream catchment area and three included a river network smoother (RNS) that accounted for unexplained spatial structure. The LS models transferred reasonably to other catchments, at least when predicting relative levels of Twmax. However, the predictions were biased when mean Twmax differed between catchments. The RNS was needed to characterise and predict finer-scale spatially correlated variation. Because the RNS was unique to each catchment and thus non-transferable, predictions were better within catchments than between catchments. A single model fitted to all catchments found no interactions between the landscape covariates and catchment, suggesting that the landscape relationships were transferable. The LS_Ta models transferred less well, with particularly poor performance when the relationship with the Ta metric was physically implausible or required extrapolation outside the range of the data. A single model fitted to all catchments found catchment-specific relationships between Twmax and the Ta metric, indicating that the Ta metric was not transferable. 
These findings improve our understanding of the transferability of spatial statistical river temperature models and provide a foundation for developing new approaches for predicting Tw at unmonitored locations across

  9. Statistical mechanics models for multimode lasers and random lasers

    CERN Document Server

    Antenucci, F; Berganza, M Ibáñez; Marruzzo, A; Leuzzi, L

    2015-01-01

    We review recent statistical mechanical approaches to multimode laser theory. The theory has proved very effective in describing standard lasers. We refer to the mean field theory for passive mode locking and to developments based on Monte Carlo simulations and the cavity method to study the role of the frequency matching condition. The status of a complete theory of multimode lasing in open and disordered cavities is discussed and the derivation of the general statistical models in this framework is presented. When light is propagating in a disordered medium, the system can be analyzed via the replica method. For high degrees of disorder and nonlinearity, a glassy behavior is expected at the lasing threshold, providing a suggestive link between glasses and photonics. We describe in detail the results for the general Hamiltonian model in the mean field approximation and mention an available test for replica symmetry breaking from intensity spectra measurements. Finally, we summarize some perspectives still open for such...

  10. Passive Target Tracking Based on Current Statistical Model

    Institute of Scientific and Technical Information of China (English)

    DENG Xiao-long; XIE Jian-ying; YANG Yu-pu

    2005-01-01

    Bearing-only passive tracking is regarded as a hard nonlinear tracking problem, for which no completely satisfactory solution yet exists. Based on the current statistical model, a novel solution to this problem utilizing the particle filter (PF) and the unscented Kalman filter (UKF) is proposed. The new solution adopts data fusion from two observers to increase the observability of passive tracking. It applies a residual resampling step to reduce the degeneracy of the PF, and it introduces Markov Chain Monte Carlo (MCMC) methods to reduce the effect of sample impoverishment. Based on the current statistical model, the EKF, the UKF and particle filters with various proposal distributions are compared in passive tracking experiments with two observers. The simulation results demonstrate the good performance of the proposed filtering methods.
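    The residual resampling step mentioned above can be sketched directly: each particle is deterministically copied floor(N·w) times, and the remaining slots are filled by multinomial draws on the fractional residuals. The weight vector below is an arbitrary example.

```python
import numpy as np

rng = np.random.default_rng(7)

def residual_resample(weights):
    """Residual resampling for a particle filter.

    Deterministic copies come from the integer part of N*w; the remaining
    slots are filled by a multinomial draw on the normalized residuals,
    which lowers resampling variance compared to pure multinomial sampling.
    """
    w = np.asarray(weights, dtype=float)
    n = w.size
    counts = np.floor(n * w).astype(int)      # guaranteed copies
    residual = n * w - counts
    n_left = n - counts.sum()
    if n_left > 0:
        residual /= residual.sum()
        counts += rng.multinomial(n_left, residual)
    return np.repeat(np.arange(n), counts)    # indices of surviving particles

idx = residual_resample([0.5, 0.25, 0.125, 0.125])
print(sorted(idx.tolist()))
```

    Here particle 0 (weight 0.5) is always kept twice and particle 1 once, so only one slot is left to chance, which is why degeneracy grows more slowly than with plain resampling.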

  11. Statistical detection of structural damage based on model reduction

    Institute of Scientific and Technical Information of China (English)

    Tao YIN; Heung-fai LAM; Hong-ping ZHU

    2009-01-01

    This paper proposes a statistical method for damage detection based on the finite element (FE) model reduction technique that utilizes measured modal data with a limited number of sensors. A deterministic damage detection process is formulated based on the model reduction technique. The probabilistic process is integrated into the deterministic damage detection process using a perturbation technique, resulting in a statistical structural damage detection method. This is achieved by deriving the first- and second-order partial derivatives of uncertain parameters, such as the elasticity of the damaged member, with respect to the measurement noise, which allows the expectation and covariance matrix of the uncertain parameters to be calculated. Besides the theoretical development, this paper reports numerical verification of the proposed method using a portal frame example and Monte Carlo simulation.

  12. Statistical inference to advance network models in epidemiology.

    Science.gov (United States)

    Welch, David; Bansal, Shweta; Hunter, David R

    2011-03-01

    Contact networks are playing an increasingly important role in the study of epidemiology. Most of the existing work in this area has focused on considering the effect of underlying network structure on epidemic dynamics by using tools from probability theory and computer simulation. This work has provided much insight on the role that heterogeneity in host contact patterns plays on infectious disease dynamics. Despite the important understanding afforded by the probability and simulation paradigm, this approach does not directly address important questions about the structure of contact networks such as what is the best network model for a particular mode of disease transmission, how parameter values of a given model should be estimated, or how precisely the data allow us to estimate these parameter values. We argue that these questions are best answered within a statistical framework and discuss the role of statistical inference in estimating contact networks from epidemiological data.

  13. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.

    Directory of Open Access Journals (Sweden)

    John K Hillier

    Full Text Available Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
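    The core intuition, that random multiplicative growth and shrinking produces right-skewed size distributions, can be demonstrated in a few lines; the growth rates and step counts below are illustrative, not fitted to bedform data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy stochastic-growth experiment: bedform heights grow exponentially on
# average, but each step multiplies the height by a random factor (growing
# or shrinking), so log-heights perform a biased random walk.
n_forms, n_steps = 5000, 200
log_h = np.zeros(n_forms)
for _ in range(n_steps):
    log_h += rng.normal(0.01, 0.05, n_forms)   # random growth/shrinkage

heights = np.exp(log_h)
# Sample skewness: strongly positive for this multiplicative process
skew = float(np.mean(((heights - heights.mean()) / heights.std()) ** 3))
print(round(heights.mean(), 2), round(skew, 2))
```

    The resulting heights are approximately lognormal, a family of right-skewed distributions of the kind compared against drumlin size-frequency data in the paper.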

  14. Statistical Quark Model for the Nucleon Structure Function

    Science.gov (United States)

    Mirez, Carlos; Tomio, Lauro; Trevisan, Luis A.; Frederico, Tobias

    2009-06-01

    A statistical quark model, with quark energy levels given by a central linear confining potential is used to obtain the light sea-quark asymmetry, d¯/ū, and also for the ratio d/u, inside the nucleon. After adjusting a temperature parameter by the Gottfried sum rule violation, and chemical potentials by the valence up and down quark normalizations, the results are compared with experimental data available.

  15. Physical-Statistical Model of Thermal Conductivity of Nanofluids

    Directory of Open Access Journals (Sweden)

    B. Usowicz

    2014-01-01

    Full Text Available A physical-statistical model for predicting the effective thermal conductivity of nanofluids is proposed. The volumetric unit of nanofluids in the model consists of solid, liquid, and gas particles and is treated as a system made up of regular geometric figures, spheres, filling the volumetric unit by layers. The model assumes that connections between layers of the spheres and between neighbouring spheres in the layer are represented by serial and parallel connections of thermal resistors, respectively. This model is expressed in terms of thermal resistance of nanoparticles and fluids and the multinomial distribution of particles in the nanofluids. The results for predicted and measured effective thermal conductivity of several nanofluids (Al2O3/ethylene glycol-based and Al2O3/water-based; CuO/ethylene glycol-based and CuO/water-based; and TiO2/ethylene glycol-based are presented. The physical-statistical model shows a reasonably good agreement with the experimental results and gives more accurate predictions for the effective thermal conductivity of nanofluids compared to existing classical models.

  16. The Ising model in physics and statistical genetics.

    Science.gov (United States)

    Majewski, J; Li, H; Ott, J

    2001-10-01

    Interdisciplinary communication is becoming a crucial component of the present scientific environment. Theoretical models developed in diverse disciplines often may be successfully employed in solving seemingly unrelated problems that can be reduced to similar mathematical formulation. The Ising model has been proposed in statistical physics as a simplified model for analysis of magnetic interactions and structures of ferromagnetic substances. Here, we present an application of the one-dimensional, linear Ising model to affected-sib-pair (ASP) analysis in genetics. By analyzing simulated genetics data, we show that the simplified Ising model with only nearest-neighbor interactions between genetic markers has statistical properties comparable to much more complex algorithms from genetics analysis, such as those implemented in the Allegro and Mapmaker-Sibs programs. We also adapt the model to include epistatic interactions and to demonstrate its usefulness in detecting modifier loci with weak individual genetic contributions. A reanalysis of data on type 1 diabetes detects several susceptibility loci not previously found by other methods of analysis.
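    A minimal Metropolis simulation of the one-dimensional nearest-neighbour Ising model described above, whose bond correlation can be checked against the exact 1D result tanh(βJ); the coupling, temperature, and chain length are arbitrary illustrative choices, not tied to the genetic application.

```python
import numpy as np

rng = np.random.default_rng(9)

# 1D nearest-neighbour Ising chain with periodic boundaries,
# sampled with the Metropolis algorithm.
n, J, beta, sweeps = 100, 1.0, 0.5, 1500
spins = rng.choice([-1, 1], n)

corr_samples = []
for sweep in range(sweeps):
    for _ in range(n):
        i = rng.integers(n)
        # Energy change of flipping spin i under H = -J * sum(s_i * s_{i+1})
        dE = 2.0 * J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i] = -spins[i]
    if sweep >= sweeps // 2:                    # measure after equilibration
        corr_samples.append(np.mean(spins * np.roll(spins, 1)))

corr = float(np.mean(corr_samples))
print(round(corr, 3), round(np.tanh(beta * J), 3))
```

    In the affected-sib-pair analogy, the nearest-neighbour coupling plays the role of correlation between adjacent genetic markers, which is what makes this drastically simplified model statistically competitive.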

  17. Statistical mechanics of the Huxley-Simmons model.

    Science.gov (United States)

    Caruel, M; Truskinovsky, L

    2016-06-01

    The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971)NATUAS0028-083610.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from muscles power stroke and hair cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.
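    The equilibrium statistics at the core of the HS model reduce, for independent elements, to Boltzmann occupation of two conformations (pre- and post-power-stroke), as in a paramagnet. The sketch below uses illustrative energy biases, not the paper's parameter values.

```python
import numpy as np

def mean_fraction_post(delta_e, beta=1.0):
    """Mean fraction of elements in the post-power-stroke state.

    delta_e is the energy of the post-stroke state relative to the
    pre-stroke state (in units of kT when beta = 1); two-state
    Boltzmann statistics give a logistic dependence on delta_e.
    """
    return 1.0 / (1.0 + np.exp(beta * delta_e))

# A mechanical load that raises the post-stroke energy shifts the population
# back towards the pre-stroke state -- the essence of the HS tension response.
for de in (-2.0, 0.0, 2.0):
    print(de, round(float(mean_fraction_post(de)), 3))
```

    With N independent elements this mean fraction fixes the equilibrium tension, and the paramagnetic-Ising analogy in the paper adds the finite-N fluctuations and kinetics around it.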

  18. Statistical mechanics of the Huxley-Simmons model

    CERN Document Server

    Caruel, M

    2016-01-01

    The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971)] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from muscles power-stroke and hair cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.

  19. Statistical mechanics of the Huxley-Simmons model

    Science.gov (United States)

    Caruel, M.; Truskinovsky, L.

    2016-06-01

    The chemomechanical model of Huxley and Simmons (HS) [A. F. Huxley and R. M. Simmons, Nature 233, 533 (1971), 10.1038/233533a0] provides a paradigmatic description of mechanically induced collective conformational changes relevant in a variety of biological contexts, from muscle power stroke and hair cell gating to integrin binding and hairpin unzipping. We develop a statistical mechanical perspective on the HS model by exploiting a formal analogy with a paramagnetic Ising model. We first study the equilibrium HS model with a finite number of elements and compute explicitly its mechanical and thermal properties. To model kinetics, we derive a master equation and solve it for several loading protocols. The developed formalism is applicable to a broad range of allosteric systems with mean-field interactions.

  20. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find an excellent agreement with the data, superior to that of any other model known in the literature.
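
    For readers unfamiliar with κ-generalized statistics, the following sketch evaluates the κ-exponential and a survival function of the form exp_κ(-βx^α); the parameterization follows our reading of the cited 2009 paper and should be treated as an assumption:

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def survival(x, alpha, beta, kappa):
    """Complementary CDF of a kappa-generalized distribution (assumed form):
    exponential-like body, power-law tail for large x."""
    return exp_kappa(-beta * x**alpha, kappa)
```

    The power-law tail, exp_κ(-βx^α) ~ x^(-α/κ) for large x, is what lets one density cover both the bulk and the Pareto tail of wealth data.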

  1. A Statistical Model for In Vivo Neuronal Dynamics.

    Directory of Open Access Journals (Sweden)

    Simone Carlo Surace

    Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. These types of models have been extensively fitted to in vitro data, where the input current is controlled. They are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process in which the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as on the spiking history. We first show that the model has a rich dynamical repertoire, since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptation, as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. Finally, we show that this model can be used to characterize and therefore precisely compare various intracellular in vivo recordings from different animals and experimental conditions.
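
    A toy version of such a model is easy to simulate: an Ornstein-Uhlenbeck process (a Gaussian process) for the subthreshold voltage, with a spike intensity depending exponentially on the voltage. All parameter values below are illustrative, not those fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=1.0, dt=1e-3, tau=0.02, mu=-60.0, sigma=3.0,
             lam0=10.0, theta=-50.0, dV=2.0):
    """Euler simulation of an OU subthreshold voltage (mV) plus a
    conditional spike intensity lam(t) = lam0 * exp((V - theta) / dV).
    Returns the voltage trace and the spike times."""
    n = int(T / dt)
    V = np.empty(n)
    V[0] = mu
    spikes = []
    for i in range(1, n):
        V[i] = (V[i-1] + dt * (mu - V[i-1]) / tau
                + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal())
        lam = lam0 * np.exp((V[i] - theta) / dV)   # conditional intensity
        if rng.random() < 1.0 - np.exp(-lam * dt):  # Bernoulli approximation
            spikes.append(i * dt)
    return V, spikes
```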

  2. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Johannesson, G.; Stewart, J.; Barr, C.; Brady Sabeff, L.; George, R.; Heimiller, D.; Milbrandt, A.

    2006-01-01

    Energy modeling and analysis often rely on data collected for other purposes, such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling are reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.
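
    Point (1), aggregation and disaggregation, can be illustrated with a minimal mass-preserving sketch (hypothetical helper names, not the paper's software):

```python
import numpy as np

def disaggregate(total, weights):
    """Disaggregate a regional total (e.g. county-level energy demand)
    onto grid cells proportionally to an auxiliary weight layer
    (e.g. population or land area). A minimal stand-in for dasymetric
    disaggregation; preserves the regional total by construction."""
    w = np.asarray(weights, dtype=float)
    return total * w / w.sum()

def aggregate(cell_values, labels):
    """Aggregate cell values up to regions identified by integer labels."""
    cell_values = np.asarray(cell_values, dtype=float)
    labels = np.asarray(labels)
    return {r: cell_values[labels == r].sum() for r in np.unique(labels)}
```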

  3. Spatial Statistical Procedures to Validate Input Data in Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Lawrence Livermore National Laboratory

    2006-01-27

    Energy modeling and analysis often rely on data collected for other purposes, such as census counts, atmospheric and air quality observations, economic trends, and other primarily non-energy-related uses. Systematic collection of empirical data solely for regional, national, and global energy modeling has not been established as in the above-mentioned fields. Empirical and modeled data relevant to energy modeling are reported and available at various spatial and temporal scales that might or might not be those needed and used by the energy modeling community. The incorrect representation of spatial and temporal components of these data sets can result in energy models producing misleading conclusions, especially in cases of newly evolving technologies with spatial and temporal operating characteristics different from the dominant fossil and nuclear technologies that powered the energy economy over the last two hundred years. Increased private and government research and development and public interest in alternative technologies that have a benign effect on the climate and the environment have spurred interest in wind, solar, hydrogen, and other alternative energy sources and energy carriers. Many of these technologies require much finer spatial and temporal detail to determine optimal engineering designs, resource availability, and market potential. This paper presents exploratory and modeling techniques in spatial statistics that can improve the usefulness of empirical and modeled data sets that do not initially meet the spatial and/or temporal requirements of energy models. In particular, we focus on (1) aggregation and disaggregation of spatial data, (2) predicting missing data, and (3) merging spatial data sets. In addition, we introduce relevant statistical software models commonly used in the field for various sizes and types of data sets.

  4. Statistics-Related Self-Efficacy: A Confirmatory Factor Analysis Demonstrating a Significant Link to Prior Mathematics Experiences for Graduate-Level Students

    Directory of Open Access Journals (Sweden)

    Karen Larwin

    2014-02-01

    The present study examined students' statistics-related self-efficacy, as measured with the Current Statistics Self-Efficacy (CSSE) inventory developed by Finney and Schraw (2003). Structural equation modeling was used to perform a confirmatory factor analysis of the one-dimensional factor structure of the CSSE. Once confirmed, this factor was used to test whether a significant link to prior mathematics experiences exists. Additionally, a new post-structural-equation-modeling (SEM) application was employed to compute error-free latent variable scores for CSSE, in an effort to examine the ancillary effects of gender, age, ethnicity, department, degree level, hours completed, expected course grade, number of college-level math classes, and current GPA on students' CSSE scores. Results support the one-dimensional construct and, as expected, the model demonstrated a significant link between prior mathematics experiences and CSSE scores. Additionally, the students' department, expected grade, and number of prior math classes were found to have a significant effect on students' CSSE scores.

  5. Calculation of statistical entropic measures in a model of solids

    CERN Document Server

    Sanudo, Jaime

    2012-01-01

    In this work, a one-dimensional model of crystalline solids based on the Dirac comb limit of the Kronig-Penney model is considered. From the wave functions of the valence electrons, we calculate a statistical measure of complexity and the Fisher-Shannon information for the lower-energy electronic bands appearing in the system. All these magnitudes present an extremal value for the case of solids having half-filled bands, a configuration where, in general, a high conductivity is attained in real solids, as happens with the monovalent metals.
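
    The "statistical measure of complexity" referred to is, as we read it, the LMC (López-Ruiz-Mancini-Calbet) product of entropy and disequilibrium; a minimal sketch for a discrete distribution:

```python
import numpy as np

def lmc_complexity(p):
    """LMC statistical complexity C = H * D of a discrete distribution:
    H is the normalized Shannon entropy, D the disequilibrium, i.e. the
    squared distance to the uniform distribution. C vanishes both for
    perfect order (a delta) and for perfect disorder (uniform)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    n = p.size
    nz = p[p > 0]
    H = -np.sum(nz * np.log(nz)) / np.log(n)   # in [0, 1]
    D = np.sum((p - 1.0 / n) ** 2)             # 0 for uniform
    return H * D
```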

  6. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  7. Efficient Parallel Statistical Model Checking of Biochemical Networks

    Directory of Open Access Journals (Sweden)

    Paolo Ballarini

    2009-12-01

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand that rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions from the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the proposed methodology uses a stochastic simulation algorithm to generate execution samples; however, three key aspects improve its efficiency: first, sample generation is driven by on-the-fly verification of P, which results in optimal overall simulation time. Second, the confidence interval estimate for the probability that P holds is based on an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel over an HPC architecture.
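
    The Wilson score interval mentioned in the second point can be sketched directly (standard formula, not the paper's variant):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.
    Unlike the naive Wald interval, it stays inside [0, 1] and behaves
    well for small n or extreme proportions, which is what makes it
    attractive for sequential statistical model checking."""
    if n == 0:
        return 0.0, 1.0
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

    In a sequential scheme one keeps drawing simulation samples until the interval's width falls below a target precision.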

  8. Non-gaussianity and Statistical Anisotropy in Cosmological Inflationary Models

    CERN Document Server

    Valenzuela-Toledo, Cesar A

    2010-01-01

    We study the statistical descriptors for some cosmological inflationary models that allow us to obtain large levels of non-gaussianity and violations of statistical isotropy. Basically, we study two different classes of models: a model that includes only scalar field perturbations, specifically a subclass of small-field slow-roll models of inflation with canonical kinetic terms, and models that admit both vector and scalar field perturbations. We study the former to show that it is possible to attain very high, including observable, values for the levels of non-gaussianity f_{NL} and \tau_{NL} in the bispectrum B_\zeta and trispectrum T_\zeta of the primordial curvature perturbation \zeta, respectively. Such a result is obtained by taking care of loop corrections in the spectrum P_\zeta, the bispectrum B_\zeta and the trispectrum T_\zeta. Sizeable values for f_{NL} and \tau_{NL} arise even if \zeta is generated during inflation. For the latter we study the spectrum P_\zeta, bispectrum B_\zeta and trispectrum $T_\ze...

  9. Anyonic behavior of an intermediate-statistics fermion gas model.

    Science.gov (United States)

    Algin, Abdullah; Irk, Dursun; Topcu, Gozde

    2015-06-01

    We study the high-temperature behavior of an intermediate-statistics fermionic gas model whose quantum statistical properties enable us to effectively deduce details about both the interaction among deformed (quasi)particles and their anyonic behavior. Starting with a deformed fermionic grand partition function, we calculate, in the thermodynamical limit, several thermostatistical functions of the model, such as the internal energy and the entropy, by means of a formalism of the fermionic q calculus. For high temperatures, a virial expansion of the equation of state for the system is obtained in two and three dimensions, and the first five virial coefficients are derived in terms of the model deformation parameter q. From the results, it is found that, through the effect of fermionic deformation, the model parameter q interpolates completely between bosonlike and fermionic systems via the behaviors of the third and fifth virial coefficients in both two and three spatial dimensions, and that it effectively characterizes the interaction among quasifermions. Our results reveal that the present deformed (quasi)fermion model could be very efficient and effective in accounting for nonlinear behaviors in interacting composite particle systems.

  10. A statistical permafrost distribution model for the European Alps

    Directory of Open Access Journals (Sweden)

    L. Boeckli

    2011-05-01

    Permafrost distribution modeling in densely populated mountain regions is an important task to support the construction of infrastructure and for the assessment of climate change effects on permafrost and related natural systems. In order to analyze permafrost distribution and evolution on an Alpine-wide scale, one consistent model for the entire domain is needed.

    We present a statistical permafrost model for the entire Alps based on rock glacier inventories and rock surface temperatures. Starting from an integrated model framework, two different sub-models were developed, one for debris-covered areas (debris model) and one for steep rock faces (rock model). For the debris model a generalized linear mixed-effects model (GLMM) was used to predict the probability of a rock glacier being intact as opposed to relict. The model is based on the explanatory variables mean annual air temperature (MAAT), potential incoming solar radiation (PISR) and the mean annual sum of precipitation (PRECIP), and achieves an excellent discrimination (area under the receiver-operating characteristic curve, AUROC = 0.91). Surprisingly, the probability of a rock glacier being intact is positively associated with increasing PRECIP for given MAAT and PISR conditions. The rock model was calibrated with mean annual rock surface temperatures (MARST) and is based on MAAT and PISR. The linear regression achieves a root mean square error (RMSE) of 1.6 °C. The final model combines the two sub-models and accounts for the different scales used for model calibration. Further steps to transfer this model into a map-based product are outlined.
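
    The fixed-effects core of such a debris model is a logistic link from MAAT, PISR and PRECIP to the probability of being intact. The coefficients below are invented for illustration; the paper's GLMM additionally carries random effects per inventory region:

```python
import math

def p_intact(maat, pisr, precip,
             b0=0.0, b_maat=-1.5, b_pisr=-0.002, b_precip=0.001):
    """Probability that a rock glacier is intact, via a logistic link on
    MAAT (deg C), PISR (W/m^2) and PRECIP (mm/yr). All coefficients are
    hypothetical placeholders, not the fitted GLMM values."""
    eta = b0 + b_maat * maat + b_pisr * pisr + b_precip * precip
    return 1.0 / (1.0 + math.exp(-eta))
```

    With a negative MAAT coefficient, colder sites get a higher intact probability, and a positive PRECIP coefficient reproduces the paper's surprising precipitation effect.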

  11. The Impact of Statistical Leakage Models on Design Yield Estimation

    Directory of Open Access Journals (Sweden)

    Rouwaida Kanj

    2011-01-01

    Device mismatch and process variation models play a key role in determining the functionality and yield of sub-100 nm designs. Average characteristics, such as the average leakage current or the average read delay, are often of interest. However, detecting rare functional fails is critical for memory design, and designers often seek techniques that enable such events to be modeled accurately. Extremely leaky devices can inflict functionality fails, and the plurality of leaky devices on a bitline increases the dimensionality of the yield estimation problem. Simplified models are possible by adopting approximations to the underlying sum of lognormals. The implications of such approximations for tail probabilities may in turn bias the yield estimate. We review different closed-form approximations and compare them against the CDF matching method, which is shown to be the most effective method for accurate statistical leakage modeling.
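
    One of the classical closed-form approximations alluded to is Fenton-Wilkinson moment matching, which replaces a sum of independent lognormals by a single lognormal with the same first two moments:

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Fenton-Wilkinson approximation for S = sum of independent
    lognormals LN(mu_i, sigma_i): return (mu, sigma) of the single
    lognormal matching E[S] and E[S^2]. Accurate in the body of the
    distribution; tail accuracy is exactly what the paper questions."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    m1 = np.sum(np.exp(mus + sigmas**2 / 2))                    # E[S]
    var = np.sum(np.exp(2*mus + sigmas**2) * (np.exp(sigmas**2) - 1))
    m2 = var + m1**2                                            # E[S^2]
    sigma2 = np.log(m2 / m1**2)
    mu = np.log(m1) - sigma2 / 2
    return mu, np.sqrt(sigma2)
```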

  12. Statistical models of video structure for content analysis and characterization.

    Science.gov (United States)

    Vasconcelos, N; Lippman, A

    2000-01-01

    Content structure plays an important role in the understanding of video. In this paper, we argue that knowledge about structure can be used both as a means to improve the performance of content analysis and to extract features that convey semantic information about the content. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models with two practical applications. First, we develop a Bayesian formulation for the shot segmentation problem that is shown to extend the standard thresholding model in an adaptive and intuitive way, leading to improved segmentation accuracy. Second, by applying the transformation into the shot duration/activity feature space to a database of movie clips, we also illustrate how the Bayesian model captures semantic properties of the content. We suggest ways in which these properties can be used as a basis for intuitive content-based access to movie libraries.

  13. Liver recognition based on statistical shape model in CT images

    Science.gov (United States)

    Xiang, Dehui; Jiang, Xueqing; Shi, Fei; Zhu, Weifang; Chen, Xinjian

    2016-03-01

    In this paper, an automatic method is proposed to recognize the liver in clinical 3D CT images. The proposed method makes effective use of a statistical shape model of the liver. Our approach consists of three main parts: (1) model training, in which shape variability is captured using principal component analysis applied to manual annotations; (2) model localization, in which a fast Euclidean-distance-transformation-based method localizes the liver in CT images; (3) liver recognition, in which the initial mesh is locally and iteratively adapted to the liver boundary, constrained by the trained shape model. We validate our algorithm on a dataset consisting of 20 3D CT images obtained from different patients. The average ARVD was 8.99%, the average ASSD was 2.69 mm, the average RMSD was 4.92 mm, the average MSD was 28.841 mm, and the average MSD was 13.31%.
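
    Step (1) of such a pipeline, building the statistical shape model itself, reduces to PCA on aligned landmark vectors. A minimal numpy sketch (assuming shapes are already rigidly aligned, e.g. by Procrustes analysis; not the authors' code):

```python
import numpy as np

def train_ssm(shapes, keep=0.95):
    """Statistical shape model via PCA. `shapes` is an
    (n_samples, n_points * dim) array of corresponding landmarks.
    Returns the mean shape, the principal modes covering `keep` of the
    variance, and the per-mode variances."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (len(shapes) - 1)            # eigenvalues of covariance
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), keep)) + 1
    return mean, Vt[:k], var[:k]

def synthesize(mean, modes, b):
    """Generate a shape from mode coefficients b (new shapes are
    constrained by limiting |b_i| to a few standard deviations)."""
    return mean + np.asarray(b) @ modes
```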

  14. Uncertainty analysis of statistical downscaling models using general circulation model over an international wetland

    Science.gov (United States)

    Etemadi, H.; Samadi, S.; Sharifikia, M.

    2014-06-01

    The regression-based statistical downscaling model (SDSM) is an appropriate method that is broadly used to resolve the coarse spatial resolution of general circulation models (GCMs). Nevertheless, the assessment of uncertainty propagation linked with climatic variables is essential to any climate change impact study. This study presents a procedure to characterize the uncertainty of two GCMs linked with the Long Ashton Research Station Weather Generator (LARS-WG) and SDSM in one of the most vulnerable international wetlands, namely "Shadegan", in an arid region of southwest Iran. In the case of daily temperature, uncertainty is estimated by comparing the monthly mean and variance of downscaled and observed daily data at a 95 % confidence level. Uncertainties were then evaluated by comparing monthly mean dry and wet spell lengths and their 95 % CI in daily precipitation downscaling over the 1987-2005 interval. The uncertainty results indicated that LARS-WG is the most proficient model at reproducing various statistical characteristics of the observed data within the 95 % uncertainty bounds, while the SDSM model is the least capable in this respect. The results also indicated that uncertainty analyses at the three climate stations produce significantly different climate change responses at the 95 % CI. Finally, the range of plausible climate change projections suggests a need for decision makers to augment their long-term wetland management plans to reduce the wetland's vulnerability to climate change impacts.

  15. Statistical Inference for Partially Linear Regression Models with Measurement Errors

    Institute of Scientific and Technical Information of China (English)

    Jinhong YOU; Qinfeng XU; Bin ZHOU

    2008-01-01

    In this paper, the authors investigate three aspects of statistical inference for partially linear regression models where some covariates are measured with errors. Firstly, a bandwidth selection procedure is proposed, which is a combination of the difference-based technique and the GCV method. Secondly, a goodness-of-fit test procedure is proposed, which is an extension of the generalized likelihood technique. Thirdly, a variable selection procedure for the parametric part is provided based on nonconcave penalization and corrected profile least squares. As in "Variable selection via nonconcave penalized likelihood and its oracle properties" (J. Amer. Statist. Assoc., 96, 2001, 1348-1360), it is shown that the resulting estimator has an oracle property with a proper choice of regularization parameters and penalty function. Simulation studies are conducted to illustrate the finite-sample performance of the proposed procedures.

  16. The Statistical Multifragmentation Model with Skyrme Effective Interactions

    CERN Document Server

    Souza, S R; Donangelo, R; Lynch, W G; Steiner, A W; Tsang, M B

    2009-01-01

    The Statistical Multifragmentation Model is modified to incorporate the Helmholtz free energies calculated in the finite temperature Thomas-Fermi approximation using Skyrme effective interactions. In this formulation, the density of the fragments at the freeze-out configuration corresponds to the equilibrium value obtained in the Thomas-Fermi approximation at the given temperature. The behavior of the nuclear caloric curve at constant volume is investigated in the micro-canonical ensemble and a plateau is observed for excitation energies between 8 and 10 MeV per nucleon. A kink in the caloric curve is found at the onset of this gas transition, indicating the existence of a small excitation energy region with negative heat capacity. In contrast to previous statistical calculations, this situation takes place even in this case, in which the system is constrained to a fixed volume. The observed phase transition takes place at approximately constant entropy. The charge distribution and other observables also turn ou...

  17. The Statistical Multifragmentation Model with Skyrme Effective Interactions

    CERN Document Server

    Carlson, B V; Donangelo, R; Lynch, W G; Steiner, A W; Tsang, M B

    2010-01-01

    The Statistical Multifragmentation Model is modified to incorporate Helmholtz free energies calculated in the finite temperature Thomas-Fermi approximation using Skyrme effective interactions. In this formulation, the density of the fragments at the freeze-out configuration corresponds to the equilibrium value obtained in the Thomas-Fermi approximation at the given temperature. The behavior of the nuclear caloric curve, at constant volume, is investigated in the micro-canonical ensemble and a plateau is observed for excitation energies between 8 and 10 MeV per nucleon. A small kink in the caloric curve is found at the onset of this gas transition, indicating the existence of negative heat capacity, even in this case in which the system is constrained to a fixed volume, in contrast to former statistical calculations.

  18. WE-A-201-02: Modern Statistical Modeling.

    Science.gov (United States)

    Niemierko, A

    2016-06-01

    Chris Marshall: Memorial Introduction Donald Edmonds Herbert Jr., or Don to his colleagues and friends, exemplified the "big tent" vision of medical physics, specializing in Applied Statistics and Dynamical Systems theory. He saw, more clearly than most, that "Making models is the difference between doing science and just fooling around [ref Woodworth, 2004]". Don developed an interest in chemistry at school by "reading a book" - a recurring theme in his story. He was awarded a Westinghouse Science scholarship and attended the Carnegie Institute of Technology (later Carnegie Mellon University) where his interest turned to physics and led to a BS in Physics after transfer to Northwestern University. After (voluntary) service in the Navy he earned his MS in Physics from the University of Oklahoma, which led him to Johns Hopkins University in Baltimore to pursue a PhD. The early death of his wife led him to take a salaried position in the Physics Department of Colorado College in Colorado Springs so as to better care for their young daughter. There, a chance invitation from Dr. Juan del Regato to teach physics to residents at the Penrose Cancer Hospital introduced him to Medical Physics, and he decided to enter the field. He received his PhD from the University of London (UK) under Prof. Joseph Rotblat, where I first met him, and where he taught himself statistics. He returned to Penrose as a clinical medical physicist, also largely self-taught. In 1975 he formalized an evolving interest in statistical analysis as Professor of Radiology and Head of the Division of Physics and Statistics at the College of Medicine of the University of South Alabama in Mobile, AL where he remained for the rest of his career. He also served as the first Director of their Bio-Statistics and Epidemiology Core Unit working in part on a sickle-cell disease. After retirement he remained active as Professor Emeritus. Don served for several years as a consultant to the Nuclear Regulatory

  19. Modeling phenotypic plasticity in growth trajectories: a statistical framework.

    Science.gov (United States)

    Wang, Zhong; Pang, Xiaoming; Wu, Weimiao; Wang, Jianxin; Wang, Zuoheng; Wu, Rongling

    2014-01-01

    Phenotypic plasticity, that is, multiple phenotypes produced by a single genotype in response to environmental change, is thought to play an important role in evolution and speciation. Historically, knowledge about phenotypic plasticity has resulted from the analysis of static traits measured at a single time point. New insight into the adaptive nature of plasticity can be gained from an understanding of how organisms alter their developmental processes in a range of environments. Recent advances in the statistical modeling of functional data and in developmental genetics allow us to construct a dynamic framework of plastic response in developmental form and pattern. Under this framework, development, genetics, and evolution can be synthesized through statistical bridges to better address how evolution results from phenotypic variation in the process of development via genetic alterations.
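
    In practice, the functional-data view amounts to fitting a growth curve per genotype-by-environment combination and comparing the fitted parameters across environments. A sketch with a logistic growth law and synthetic data (all parameter values illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, a, b, r):
    """Logistic growth curve: asymptote a, shape b, growth rate r.
    A standard choice for modeling growth trajectories as functions
    rather than single time points."""
    return a / (1.0 + b * np.exp(-r * t))

# Synthetic trajectory for one genotype in one environment.
t = np.linspace(0, 10, 30)
rng = np.random.default_rng(1)
y = logistic(t, 100.0, 20.0, 1.0) + rng.normal(0.0, 2.0, t.size)

# Fit one curve per genotype-by-environment cell; comparing the fitted
# (a, b, r) across environments quantifies plasticity of the trajectory.
params, _ = curve_fit(logistic, t, y, p0=[80.0, 10.0, 0.5])
```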

  20. Hybrid perturbation methods based on statistical time series models

    Science.gov (United States)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by the combination of three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
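
    The additive Holt-Winters recursion used as the prediction technique can be sketched in a few lines (crude initialization from the first two seasons; not the authors' implementation):

```python
def holt_winters_additive(y, m, alpha=0.3, beta=0.05, gamma=0.1, h=1):
    """Additive Holt-Winters smoothing of a series y with season length m
    (requires len(y) >= 2*m). In the hybrid propagators this is applied
    to the epoch-wise error of the analytical theory. Returns the
    h-step-ahead forecast level + h*trend + matching seasonal term."""
    level = sum(y[:m]) / m
    trend = (sum(y[m:2*m]) - sum(y[:m])) / (m * m)
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        last_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    return level + h * trend + season[(len(y) + h - 1) % m]
```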

  1. Statistical Process Control of a Kalman Filter Model

    Science.gov (United States)

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A.

    2014-01-01

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on the input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on the output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, and the properties of the a posteriori system state covariance matrix and of the Kalman gain matrix. The statistical tests include the convergence of the standard deviations of the system state components and the normal distribution of residuals, besides the standard tests. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations. PMID:25264959
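
    A scalar sketch of the filtering-plus-residual-testing idea: under a correct model, the normalized innovations should be i.i.d. standard normal, which is what the statistical tests on the output results check (noise levels below are illustrative):

```python
import numpy as np

def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar random-walk Kalman filter (state transition = 1).
    Returns the state estimates and the normalized innovations
    nu / sqrt(S); under a correct model the latter are i.i.d. N(0, 1),
    which is the basis for residual testing."""
    x, p = x0, p0
    xs, nis = [], []
    for z in zs:
        p = p + q                  # predict: add process noise
        nu = z - x                 # innovation
        S = p + r                  # innovation variance
        k = p / S                  # Kalman gain
        x = x + k * nu             # update state
        p = (1.0 - k) * p          # update covariance
        xs.append(x)
        nis.append(nu / np.sqrt(S))
    return np.array(xs), np.array(nis)
```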

  2. Statistical Process Control of a Kalman Filter Model

    Directory of Open Access Journals (Sweden)

    Sonja Gamse

    2014-09-01

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on the input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on the output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of the state transition matrix, and the properties of the a posteriori system state covariance matrix and of the Kalman gain matrix. The statistical tests include the convergence of the standard deviations of the system state components and the normal distribution of residuals, besides the standard tests. In particular, computing the controllability and observability matrices and checking the normal distribution of residuals are not standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.

  3. Statistical process control of a Kalman filter model.

    Science.gov (United States)

    Gamse, Sonja; Nobakht-Ersi, Fereydoun; Sharifi, Mohammad A

    2014-09-26

    For the evaluation of measurement data, different functional and stochastic models can be used. In the case of time series, a Kalman filtering (KF) algorithm can be implemented. In this case, a very well-known stochastic model, which includes statistical tests in the domain of measurements and in the system state domain, is used. Because the output results depend strongly on input model parameters and the normal distribution of residuals is not always fulfilled, it is very important to perform all possible tests on output results. In this contribution, we give a detailed description of the evaluation of the Kalman filter model. We describe indicators of inner confidence, such as controllability and observability, the determinant of state transition matrix and observing the properties of the a posteriori system state covariance matrix and the properties of the Kalman gain matrix. The statistical tests include the convergence of standard deviations of the system state components and normal distribution beside standard tests. Especially, computing controllability and observability matrices and controlling the normal distribution of residuals are not the standard procedures in the implementation of KF. Practical implementation is done on geodetic kinematic observations.
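As a toy illustration of the kind of output monitoring described above, the sketch below runs a scalar Kalman filter (random-walk state; the noise parameters q and r are chosen arbitrarily, not taken from the paper) and checks one of the "inner confidence" indicators the abstract mentions: the convergence of the a posteriori state variance.

```python
# Minimal scalar Kalman filter sketch with a convergence check on the
# a posteriori state variance. All parameter names and values (q, r,
# the measurement list) are illustrative assumptions, not the authors'.

def kalman_step(x, p, z, q=0.01, r=1.0):
    # Predict: random-walk state transition (F = 1), process noise q.
    x_pred, p_pred = x, p + q
    # Update: innovation v, Kalman gain k, a posteriori state and variance.
    v = z - x_pred
    k = p_pred / (p_pred + r)
    return x_pred + k * v, (1 - k) * p_pred

x, p = 0.0, 100.0
variances = []
for z in [1.1, 0.9, 1.05, 0.98, 1.02, 1.0, 0.97, 1.03]:
    x, p = kalman_step(x, p, z)
    variances.append(p)

# The a posteriori variance should decrease monotonically toward a
# steady state: one of the inner-confidence indicators discussed.
assert all(a > b for a, b in zip(variances, variances[1:]))
print(round(x, 3), round(p, 3))
```

A full implementation in the spirit of the paper would add the residual-normality test and the controllability/observability checks on top of this loop.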

  4. A statistical model for interpreting computerized dynamic posturography data

    Science.gov (United States)

    Feiveson, Alan H.; Metter, E. Jeffrey; Paloski, William H.

    2002-01-01

    Computerized dynamic posturography (CDP) is widely used for assessment of altered balance control. CDP trials are quantified using the equilibrium score (ES), which ranges from zero to 100, as a decreasing function of peak sway angle. The problem of how best to model and analyze ESs from a controlled study is considered. The ES often exhibits a skewed distribution in repeated trials, which can lead to incorrect inference when applying standard regression or analysis of variance models. Furthermore, CDP trials are terminated when a patient loses balance. In these situations, the ES is not observable, but is assigned the lowest possible score--zero. As a result, the response variable has a mixed discrete-continuous distribution, further compromising inference obtained by standard statistical methods. Here, we develop alternative methodology for analyzing ESs under a stochastic model extending the ES to a continuous latent random variable that always exists, but is unobserved in the event of a fall. Loss of balance occurs conditionally, with probability depending on the realized latent ES. After fitting the model by a form of quasi-maximum-likelihood, one may perform statistical inference to assess the effects of explanatory variables. An example is provided, using data from the NIH/NIA Baltimore Longitudinal Study on Aging.
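The mixed discrete-continuous response the abstract describes can be simulated in a few lines: a latent score always exists, but a fall, whose probability grows as the latent score drops, censors the record to zero. The latent distribution and the logistic fall rule below are invented for illustration; they are not the model fitted in the paper.

```python
# Simulate the mixed discrete-continuous ES distribution: a continuous
# latent score, censored to the recorded value 0 whenever a fall occurs.
# The Gaussian latent distribution and the logistic fall rule are ours.
import random

random.seed(3)

def observed_es(latent):
    # Fall probability rises as the latent score drops (logistic rule, ours).
    p_fall = 1.0 / (1.0 + 2.718281828 ** ((latent - 20.0) / 5.0))
    return 0.0 if random.random() < p_fall else max(0.0, min(100.0, latent))

scores = [observed_es(random.gauss(50, 25)) for _ in range(1000)]
falls = sum(1 for s in scores if s == 0.0)
# Standard regression/ANOVA on `scores` mishandles this spike at zero;
# the paper's latent-variable model is built around it.
assert 0 < falls < len(scores)
print(falls, "falls out of", len(scores))
```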

  5. Monthly to seasonal low flow prediction: statistical versus dynamical models

    Science.gov (United States)

    Ionita-Scholz, Monica; Klein, Bastian; Meissner, Dennis; Rademacher, Silke

    2016-04-01

the Alfred Wegener Institute a purely statistical scheme to generate streamflow forecasts for several months ahead. Instead of directly using teleconnection indices (e.g. NAO, AO) the idea is to identify regions with stable teleconnections between different global climate information (e.g. sea surface temperature, geopotential height etc.) and streamflow at different gauges relevant for inland waterway transport. So-called stability (correlation) maps are generated showing regions where streamflow and the climate variables from previous months are significantly correlated in a 21 (31) years moving window. Finally, the optimal forecast model is established based on a multiple regression analysis of the stable predictors. We will present current results of the aforementioned approaches with focus on the River Rhine (being one of the world's most frequented waterways and the backbone of the European inland waterway network) and the Elbe River. Overall, our analysis reveals the existence of a valuable predictability of the low flows at monthly and seasonal time scales, a result that may be useful to water resources management. Given that all predictors used in the models are available at the end of each month, the forecast scheme can be used operationally to predict extreme events and to provide early warnings for upcoming low flows.
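The stability-map idea, correlating a candidate climate predictor with streamflow in a 21-year moving window and keeping only windows where the link is strong, can be sketched as follows. The data are synthetic, and the 0.5 correlation threshold is ours (the authors use significance tests on the windowed correlations).

```python
# Sketch of the "stability (correlation) map" idea: Pearson correlation
# between a predictor and streamflow in a 21-year moving window. The
# synthetic data and the 0.5 threshold are illustrative assumptions.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

random.seed(1)
years = 40
predictor = [random.gauss(0, 1) for _ in range(years)]
# Synthetic streamflow strongly driven by the predictor plus noise.
streamflow = [2.0 * p + random.gauss(0, 0.5) for p in predictor]

window = 21  # window length follows the abstract
stable = [pearson(predictor[i:i + window], streamflow[i:i + window]) > 0.5
          for i in range(years - window + 1)]
# With such a strong synthetic link, every window should pass.
assert all(stable)
print(sum(stable), "stable windows out of", len(stable))
```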

  6. THE STATISTIC ANALYSIS OF THREE MATHEMATICAL MODELS REFFERING TO THE EVOLUTION OF BROILERS CORPORAL MASS

    Directory of Open Access Journals (Sweden)

    ANDREEA POPESCU

    2013-12-01

Full Text Available The evolution of the broilers' corporal mass has been studied depending on the controllable variables L (lysine) and M (methionine + cystine), obtaining the mathematical models G1, G2 and G3. This work analyses statistically whether the values of these models differ significantly from those of the control lot G0 (NRC 1994). By calculating the correlation ratios, we specify which model is considered decisional in broiler nutrition.

  7. Hybrid Perturbation methods based on Statistical Time Series models

    CERN Document Server

    San-Juan, Juan Félix; Pérez, Iván; López, Rosario

    2016-01-01

In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of a...

  8. STATISTICAL ANALYSIS OF THE TM- MODEL VIA BAYESIAN APPROACH

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2012-11-01

Full Text Available The method of paired comparisons calls for the comparison of treatments presented in pairs to judges who prefer the better one based on their sensory evaluations. Thurstone (1927) and Mosteller (1951) employ the method of maximum likelihood to estimate the parameters of the Thurstone-Mosteller model for the paired comparisons. A Bayesian analysis of the said model using the non-informative reference (Jeffreys) prior is presented in this study. The posterior estimates (means and joint modes) of the parameters and the posterior probabilities comparing the two parameters are obtained for the analysis. The predictive probabilities that one treatment (Ti) is preferred to any other treatment (Tj) in a future single comparison are also computed. In addition, the graphs of the marginal posterior distributions of the individual parameters are drawn. The appropriateness of the model is also tested using the Chi-Square test statistic.

  9. Dynamic statistical models of biological cognition: insights from communications theory

    Science.gov (United States)

    Wallace, Rodrick

    2014-10-01

    Maturana's cognitive perspective on the living state, Dretske's insight on how information theory constrains cognition, the Atlan/Cohen cognitive paradigm, and models of intelligence without representation, permit construction of a spectrum of dynamic necessary conditions statistical models of signal transduction, regulation, and metabolism at and across the many scales and levels of organisation of an organism and its context. Nonequilibrium critical phenomena analogous to physical phase transitions, driven by crosstalk, will be ubiquitous, representing not only signal switching, but the recruitment of underlying cognitive modules into tunable dynamic coalitions that address changing patterns of need and opportunity at all scales and levels of organisation. The models proposed here, while certainly providing much conceptual insight, should be most useful in the analysis of empirical data, much as are fitted regression equations.

  10. Exploiting linkage disequilibrium in statistical modelling in quantitative genomics

    DEFF Research Database (Denmark)

    Wang, Lei

Alleles at two loci are said to be in linkage disequilibrium (LD) when they are correlated or statistically dependent. Genomic prediction and gene mapping rely on the existence of LD between genetic markers and causal variants of complex traits. In the first part of the thesis, a novel method...... to quantify and visualize local variation in LD along chromosomes is described, and applied to characterize LD patterns at the local and genome-wide scale in three Danish pig breeds. In the second part, different ways of taking LD into account in genomic prediction models are studied. One approach is to use...... the recently proposed antedependence models, which treat neighbouring marker effects as correlated; another approach involves use of haplotype block information derived using the program Beagle. The overall conclusion is that taking LD information into account in genomic prediction models potentially improves...
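As background to the LD quantification discussed in the first part, the standard pairwise measure r² can be computed directly from haplotype counts at two biallelic loci; the counts below are invented for illustration, and the thesis itself concerns local LD patterns rather than this toy example.

```python
# A minimal sketch of the LD measure r^2 between two biallelic loci,
# computed from haplotype counts. The counts are made-up toy data.

def ld_r2(n_ab, n_aB, n_Ab, n_AB):
    """r^2 from counts of the four haplotypes (a/A at locus 1, b/B at locus 2)."""
    n = n_ab + n_aB + n_Ab + n_AB
    p_a = (n_ab + n_aB) / n           # frequency of allele a at locus 1
    p_b = (n_ab + n_Ab) / n           # frequency of allele b at locus 2
    d = n_ab / n - p_a * p_b          # disequilibrium coefficient D
    return d * d / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfect association: only ab and AB haplotypes observed, so r^2 = 1.
assert abs(ld_r2(50, 0, 0, 50) - 1.0) < 1e-12
# Independent loci at equal frequencies: r^2 = 0.
assert abs(ld_r2(25, 25, 25, 25)) < 1e-12
print(round(ld_r2(40, 10, 10, 40), 3))
```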

  11. Modeling, dependence, classification, united statistical science, many cultures

    CERN Document Server

    Parzen, Emanuel

    2012-01-01

Breiman (2001) proposed to statisticians awareness of two cultures: 1. Parametric modeling culture, pioneered by R.A.Fisher and Jerzy Neyman; 2. Algorithmic predictive culture, pioneered by machine learning research. Parzen (2001), as a part of discussing Breiman (2001), proposed that researchers be aware of many cultures, including the focus of our research: 3. Nonparametric, quantile based, information theoretic modeling. Our research seeks to unify statistical problem solving in terms of comparison density, copula density, measure of dependence, correlation, information, new measures (called LP score comoments) that apply to long-tailed distributions without finite second-order moments. A very important goal is to unify methods for discrete and continuous random variables. We are actively developing these ideas, which have a history of many decades, since Parzen (1979, 1983) and Eubank et al. (1987). Our research extends these methods to modern high dimensional data modeling.

  12. Discrete dynamical models: combinatorics, statistics and continuum approximations

    CERN Document Server

    Kornyak, Vladimir V

    2015-01-01

    This essay advocates the view that any problem that has a meaningful empirical content, can be formulated in constructive, more definitely, finite terms. We consider combinatorial models of dynamical systems and approaches to statistical description of such models. We demonstrate that many concepts of continuous physics --- such as continuous symmetries, the principle of least action, Lagrangians, deterministic evolution equations --- can be obtained from combinatorial structures as a result of the large number approximation. We propose a constructive description of quantum behavior that provides, in particular, a natural explanation of appearance of complex numbers in the formalism of quantum mechanics. Some approaches to construction of discrete models of quantum evolution that involve gauge connections are discussed.

  13. Statistical mechanics of Monod-Wyman-Changeux (MWC) models.

    Science.gov (United States)

    Marzen, Sarah; Garcia, Hernan G; Phillips, Rob

    2013-05-13

    The 50th anniversary of the classic Monod-Wyman-Changeux (MWC) model provides an opportunity to survey the broader conceptual and quantitative implications of this quintessential biophysical model. With the use of statistical mechanics, the mathematical implementation of the MWC concept links problems that seem otherwise to have no ostensible biological connection including ligand-receptor binding, ligand-gated ion channels, chemotaxis, chromatin structure and gene regulation. Hence, a thorough mathematical analysis of the MWC model can illuminate the performance limits of a number of unrelated biological systems in one stroke. The goal of our review is twofold. First, we describe in detail the general physical principles that are used to derive the activity of MWC molecules as a function of their regulatory ligands. Second, we illustrate the power of ideas from information theory and dynamical systems for quantifying how well the output of MWC molecules tracks their sensory input, giving a sense of the "design" constraints faced by these receptors.
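The activity curve derived in the review's first part can be sketched directly: in the MWC model, the probability that an n-site receptor is active at ligand concentration c is (1 + c/K_A)^n / [(1 + c/K_A)^n + L(1 + c/K_I)^n], where L is the allosteric constant and K_A, K_I are the dissociation constants in the active and inactive states. The parameter values below are illustrative only.

```python
# The MWC activity curve: probability that a receptor with n binding
# sites is in the active state at ligand concentration c. L, K_A, K_I
# and n are illustrative values, not fitted to any system in the review.

def p_active(c, n=4, L=1000.0, K_A=1.0, K_I=100.0):
    active = (1 + c / K_A) ** n
    inactive = L * (1 + c / K_I) ** n
    return active / (active + inactive)

# With these parameters the receptor is mostly inactive without ligand
# and switches on as ligand accumulates (K_A << K_I favors binding in
# the active state).
assert p_active(0.0) < 0.01
assert p_active(1000.0) > 0.9
print(round(p_active(10.0), 3))
```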

  14. Symmetry Energy Effects in a Statistical Multifragmentation Model

    Institute of Scientific and Technical Information of China (English)

ZHANG Lei; GAO Yuan; ZHANG Hong-Fei; CHEN Xi-Meng; YU Mei-Ling; LI Jun-Qing

    2011-01-01

The symmetry energy effects on the nuclear disintegration mechanisms of the neutron-rich system (A0 = 200, Z0 = 78) are studied in the framework of the statistical multifragmentation model (SMM) within its micro-canonical ensemble. A modified symmetry energy term with consideration of the volume and surface asymmetry is adopted instead of the original invariable value in the standard SMM model. The results indicate that as the volume and surface asymmetries are considered, the neutron-rich system transitions from evaporation to a fission-like process earlier than in the original standard SMM model at lower excitation energies, and its mass distribution has larger probabilities in the medium-heavy nuclei range, so that the system breaks up more evenly. When the excitation energy becomes higher, the volume and surface asymmetry lead to a smaller average multiplicity.

  15. Masked areas in shear peak statistics. A forward modeling approach

    Energy Technology Data Exchange (ETDEWEB)

    Bard, D.; Kratochvil, J. M.; Dawson, W.

    2016-03-09

    The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint of models of cosmology in forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels etc., which must be accounted for in producing constraints on cosmology from shear maps. We advocate a forward-modeling approach, where the impacts of masking and other survey artifacts are accounted for in the theoretical prediction of cosmological parameters, rather than correcting survey data to remove them. We use masks based on the Deep Lens Survey, and explore the impact of up to 37% of the survey area being masked on LSST and DES-scale surveys. By reconstructing maps of aperture mass the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked area. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced in the forward-modeling process is ≈1%, dominated by bias caused by limited simulation volume. We also explore how this potential bias scales with survey area and evaluate how much small survey areas are impacted by the differences in cosmological structure in the data and simulated volumes, due to cosmic variance.

  16. Error statistics of hidden Markov model and hidden Boltzmann model results

    Directory of Open Access Journals (Sweden)

    Newberg Lee A

    2009-07-01

    Full Text Available Abstract Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results.

  17. Error statistics of hidden Markov model and hidden Boltzmann model results

    Science.gov (United States)

    Newberg, Lee A

    2009-01-01

    Background Hidden Markov models and hidden Boltzmann models are employed in computational biology and a variety of other scientific fields for a variety of analyses of sequential data. Whether the associated algorithms are used to compute an actual probability or, more generally, an odds ratio or some other score, a frequent requirement is that the error statistics of a given score be known. What is the chance that random data would achieve that score or better? What is the chance that a real signal would achieve a given score threshold? Results Here we present a novel general approach to estimating these false positive and true positive rates that is significantly more efficient than are existing general approaches. We validate the technique via an implementation within the HMMER 3.0 package, which scans DNA or protein sequence databases for patterns of interest, using a profile-HMM. Conclusion The new approach is faster than general naïve sampling approaches, and more general than other current approaches. It provides an efficient mechanism by which to estimate error statistics for hidden Markov model and hidden Boltzmann model results. PMID:19589158
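The "naïve sampling approach" the conclusion compares against can be sketched in a few lines: draw random sequences, score them, and count how often they beat a threshold, giving a Monte Carlo estimate of the false positive rate. The toy log-odds scorer below is ours, not HMMER's profile-HMM.

```python
# Naive Monte Carlo estimate of the false-positive rate P(score >= t)
# for random sequences: the baseline this paper's approach improves on.
# The "model" is a toy log-odds scorer for GC-rich segments (ours).
import math
import random

LOG_ODDS = {"G": math.log(0.4 / 0.25), "C": math.log(0.4 / 0.25),
            "A": math.log(0.1 / 0.25), "T": math.log(0.1 / 0.25)}

def score(seq):
    return sum(LOG_ODDS[b] for b in seq)

random.seed(7)
threshold = 0.0
trials = 2000
hits = 0
for _ in range(trials):
    seq = "".join(random.choice("ACGT") for _ in range(30))
    if score(seq) >= threshold:
        hits += 1
false_positive_rate = hits / trials

# Uniform random sequences have negative expected score here, so only
# a small minority should cross the threshold.
assert false_positive_rate < 0.5
print(false_positive_rate)
```

The inefficiency is plain: estimating very small tail probabilities this way needs enormous numbers of trials, which is exactly the regime where the paper's method pays off.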

  18. Statistical Inference for Point Process Models of Rainfall

    Science.gov (United States)

    Smith, James A.; Karr, Alan F.

    1985-01-01

In this paper we develop maximum likelihood procedures for parameter estimation and model selection that apply to a large class of point process models that have been used to model rainfall occurrences, including Cox processes, Neyman-Scott processes, and renewal processes. The statistical inference procedures are based on the stochastic intensity λ(t) = lim_{s→0, s>0} (1/s) E[N(t + s) - N(t) | N(u), u ≤ t]. The likelihood function of a point process is shown to have a simple expression in terms of the stochastic intensity. The main result of this paper is a recursive procedure for computing stochastic intensities; the procedure is applicable to a broad class of point process models, including renewal Cox processes with Markovian intensity processes and an important class of Neyman-Scott processes. The model selection procedure we propose, which is based on likelihood ratios, allows direct comparison of two classes of point processes to determine which provides a better model for a given data set. The estimation and model selection procedures are applied to two data sets of simulated Cox process arrivals and a data set of daily rainfall occurrences in the Potomac River basin.
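For the simplest member of this model class, an inhomogeneous Poisson process, the likelihood has the closed form the abstract alludes to: log L = Σᵢ log λ(tᵢ) − ∫₀ᵀ λ(t) dt. The sketch below evaluates it numerically for an invented rate function; it illustrates the formula, not the paper's recursive procedure for general stochastic intensities.

```python
# Log-likelihood of an inhomogeneous Poisson process, the simplest case
# where the stochastic intensity is a deterministic rate function.
# The periodic rate and the event times are invented toy data.
import math

def log_likelihood(event_times, lam, T, n_grid=10000):
    # Trapezoidal approximation of the compensator integral over [0, T].
    h = T / n_grid
    integral = sum(lam(i * h) for i in range(n_grid + 1)) * h
    integral -= 0.5 * h * (lam(0.0) + lam(T))
    return sum(math.log(lam(t)) for t in event_times) - integral

lam = lambda t: 2.0 + math.sin(t)          # a simple periodic "rainfall rate"
events = [0.5, 1.2, 2.8, 4.0, 5.9]
ll = log_likelihood(events, lam, T=2 * math.pi)
print(round(ll, 3))
```

Maximizing this expression over the parameters of λ gives the maximum likelihood estimates, and differences of maximized log-likelihoods give the likelihood ratios used for model selection.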

  19. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    Science.gov (United States)

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564

  20. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    Science.gov (United States)

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.
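A minimal version of the regression idea: fit an additive (single-mutation) model to a toy landscape that contains a pairwise epistatic term, and measure how much of the fitness variance the additive fit explains. The three-locus landscape and its coefficients are invented; the paper works with a quasi-empirical RNA landscape instead.

```python
# Approximate a toy fitness landscape with an additive regression.
# With a full factorial design, the least-squares additive fit is just
# the mean plus centered main effects; the epistatic term is the part
# the additive model cannot see. All coefficients are invented.
from itertools import product

effects = [0.5, -0.3, 0.2]
def fitness(g):
    # Additive effects plus one pairwise epistatic interaction (loci 0, 1).
    return sum(e * x for e, x in zip(effects, g)) + 0.4 * g[0] * g[1]

genotypes = list(product([0, 1], repeat=3))
f = [fitness(g) for g in genotypes]
mean_f = sum(f) / len(f)

# Main effect of locus i = mean fitness difference between its alleles.
beta = []
for i in range(3):
    with_1 = [fitness(g) for g in genotypes if g[i] == 1]
    with_0 = [fitness(g) for g in genotypes if g[i] == 0]
    beta.append(sum(with_1) / len(with_1) - sum(with_0) / len(with_0))

pred = [mean_f + sum(b * (x - 0.5) for b, x in zip(beta, g)) for g in genotypes]
ss_res = sum((a - b) ** 2 for a, b in zip(f, pred))
ss_tot = sum((a - mean_f) ** 2 for a in f)
r2 = 1 - ss_res / ss_tot
# The additive model captures most, but not all, of the variance:
# the residual is exactly the epistatic term.
assert 0.5 < r2 < 1.0
print(round(r2, 3))
```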

  1. Random matrices as models for the statistics of quantum mechanics

    Science.gov (United States)

    Casati, Giulio; Guarneri, Italo; Mantica, Giorgio

    1986-05-01

    Random matrices from the Gaussian unitary ensemble generate in a natural way unitary groups of evolution in finite-dimensional spaces. The statistical properties of this time evolution can be investigated by studying the time autocorrelation functions of dynamical variables. We prove general results on the decay properties of such autocorrelation functions in the limit of infinite-dimensional matrices. We discuss the relevance of random matrices as models for the dynamics of quantum systems that are chaotic in the classical limit. Permanent address: Dipartimento di Fisica, Via Celoria 16, 20133 Milano, Italy.

  2. Stochastical modeling for Viral Disease: Statistical Mechanics and Network Theory

    Science.gov (United States)

    Zhou, Hao; Deem, Michael

    2007-04-01

Theoretical methods of statistical mechanics are developed and applied to study the immunological response against viral diseases such as dengue. We use this theory to show how the immune response to four different dengue serotypes may be sculpted. It is the ability of avian influenza to change and to mix that has given rise to the fear of a new human flu pandemic. Here we propose to utilize a scale-free network based stochastic model to investigate mitigation strategies and analyze the risk.

  3. Efficiency of a statistical transport model for turbulent particle dispersion

    Science.gov (United States)

    Litchford, Ron J.; Jeng, San-Mou

    1992-01-01

In developing its theory for turbulent dispersion transport, the Litchford and Jeng (1991) statistical transport model for turbulent particle dispersion took a generalized approach in which the perturbing influence of each turbulent eddy on consequent interactions was transported through all subsequent eddies. Examination of this transport relation, however, shows that it can decay rapidly, which implies that additional computational efficiency may be obtained by truncating unnecessary transport terms. Attention is here given to the criterion for truncation, as well as to the expected efficiency gains.

  4. Social inequality: from data to statistical physics modeling

    Science.gov (United States)

    Chatterjee, Arnab; Ghosh, Asim; Inoue, Jun-ichi; Chakrabarti, Bikas K.

    2015-09-01

Social inequality is a topic of interest since ages, and has attracted researchers across disciplines to ponder over its origin, manifestation, characteristics, consequences, and finally, the question of how to cope with it. It is manifested across different strata of human existence, and is quantified in several ways. In this review we discuss the origins of social inequality, the historical and commonly used non-entropic measures such as Lorenz curve, Gini index and the recently introduced k index. We also discuss some analytical tools that aid in understanding and characterizing them. Finally, we argue how statistical physics modeling helps in reproducing the results and interpreting them.
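Of the measures listed, the Gini index is the easiest to sketch from its definition as the normalized mean absolute difference of incomes; the income vectors below are toy data, not from the review.

```python
# Gini index from its mean-absolute-difference definition. O(n^2) and
# pure stdlib, fine for a sketch; the incomes are invented toy data.

def gini(incomes):
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(a - b) for a in incomes for b in incomes)
    return diff_sum / (2 * n * n * mean)

# Perfect equality gives 0; one person holding everything gives (n-1)/n.
assert gini([1, 1, 1, 1]) == 0.0
assert abs(gini([0, 0, 0, 100]) - 0.75) < 1e-12
print(round(gini([1, 2, 3, 4, 10]), 3))
```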

  5. Social inequality: from data to statistical physics modeling

    CERN Document Server

    Chatterjee, Arnab; Inoue, Jun-ichi; Chakrabarti, Bikas K

    2015-01-01

Social inequality is a topic of interest since ages, and has attracted researchers across disciplines to ponder over its origin, manifestation, characteristics, consequences, and finally, the question of how to cope with it. It is manifested across different strata of human existence, and is quantified in several ways. In this review we discuss the origins of social inequality, the historical and commonly used non-entropic measures such as Lorenz curve, Gini index and the recently introduced $k$ index. We also discuss some analytical tools that aid in understanding and characterizing them. Finally, we argue how statistical physics modeling helps in reproducing the results and interpreting them.

  6. A Probabilistic Rain Diagnostic Model Based on Cyclone Statistical Analysis

    OpenAIRE

    Iordanidou, V.; A. G. Koutroulis; I. K. Tsanis

    2014-01-01

    Data from a dense network of 69 daily precipitation gauges over the island of Crete and cyclone climatological analysis over middle-eastern Mediterranean are combined in a statistical approach to develop a rain diagnostic model. Regarding the dataset, 0.5 × 0.5, 33-year (1979–2011) European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) is used. The cyclone tracks and their characteristics are identified with the aid of Melbourne University algorithm (MS scheme). T...

  7. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    DEFF Research Database (Denmark)

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel

    2011-01-01

    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem.Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number...

  8. Macro-indicators of citation impacts of six prolific countries: InCites data and the statistical significance of trends.

    Directory of Open Access Journals (Sweden)

    Lutz Bornmann

Full Text Available Using the InCites tool of Thomson Reuters, this study compares normalized citation impact values calculated for China, Japan, France, Germany, United States, and the UK throughout the time period from 1981 to 2010. InCites offers a unique opportunity to study the normalized citation impacts of countries using (i) a long publication window (1981 to 2010), (ii) a differentiation in (broad or more narrow) subject areas, and (iii) allowing for the use of statistical procedures in order to obtain an insightful investigation of national citation trends across the years. Using four broad categories, our results show significantly increasing trends in citation impact values for France, the UK, and especially Germany across the last thirty years in all areas. The citation impact of papers from China is still at a relatively low level (mostly below the world average), but the country follows an increasing trend line. The USA exhibits a stable pattern of high citation impact values across the years. With small impact differences between the publication years, the US trend is increasing in engineering and technology but decreasing in medical and health sciences as well as in agricultural sciences. Similar to the USA, Japan follows increasing as well as decreasing trends in different subject areas, but the variability across the years is small. In most of the years, papers from Japan perform below or approximately at the world average in each subject area.
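A simple way to test for a "significantly increasing trend" in yearly impact values is an ordinary least-squares slope with its t-statistic, sketched below. The yearly impact values are invented, and the paper's own trend statistics may differ from this sketch.

```python
# OLS trend test sketch: slope of impact vs. year and its t-statistic.
# The yearly normalized-impact values are invented toy data, not from
# InCites; |t| well above ~2 indicates a significant linear trend.

def trend_t(y):
    n = len(y)
    x = list(range(n))
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
    resid = [b - my - slope * (a - mx) for a, b in zip(x, y)]
    se = (sum(r * r for r in resid) / ((n - 2) * sxx)) ** 0.5
    return slope, slope / se

impact = [0.92, 0.95, 0.97, 1.01, 1.00, 1.04, 1.08, 1.07, 1.12, 1.15]
slope, t = trend_t(impact)
assert slope > 0 and t > 2
print(round(slope, 4), round(t, 2))
```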

  9. An Extended Clustering Algorithm for Statistical Language Models

    CERN Document Server

    Ueberla, J P

    1994-01-01

    Statistical language models frequently suffer from a lack of training data. This problem can be alleviated by clustering, because it reduces the number of free parameters that need to be trained. However, clustered models have the following drawback: if there is ``enough'' data to train an unclustered model, then the clustered variant may perform worse. On currently used language modeling corpora, e.g. the Wall Street Journal corpus, how do the performances of a clustered and an unclustered model compare? While trying to address this question, we develop the following two ideas. First, to get a clustering algorithm with potentially high performance, an existing algorithm is extended to deal with higher order N-grams. Second, to make it possible to cluster large amounts of training data more efficiently, a heuristic to speed up the algorithm is presented. The resulting clustering algorithm can be used to cluster trigrams on the Wall Street Journal corpus and the language models it produces can compete with exi...

  10. Parameter and Process Significance in Mechanistic Modeling of Cellulose Hydrolysis

    Science.gov (United States)

    Rotter, B.; Barry, A.; Gerhard, J.; Small, J.; Tahar, B.

    2005-12-01

    The rate of cellulose hydrolysis, and of associated microbial processes, is important in determining the stability of landfills and their potential impact on the environment, as well as associated time scales. To permit further exploration in this field, a process-based model of cellulose hydrolysis was developed. The model, which is relevant to both landfill and anaerobic digesters, includes a novel approach to biomass transfer between a cellulose-bound biofilm and biomass in the surrounding liquid. Model results highlight the significance of the bacterial colonization of cellulose particles by attachment through contact in solution. Simulations revealed that enhanced colonization, and therefore cellulose degradation, was associated with reduced cellulose particle size, higher biomass populations in solution, and increased cellulose-binding ability of the biomass. A sensitivity analysis of the system parameters revealed different sensitivities to model parameters for a typical landfill scenario versus that for an anaerobic digester. The results indicate that relative surface area of cellulose and proximity of hydrolyzing bacteria are key factors determining the cellulose degradation rate.

  11. An efficient simulator of 454 data using configurable statistical models

    Directory of Open Access Journals (Sweden)

    Persson Bengt

    2011-10-01

    Full Text Available Abstract Background Roche 454 is one of the major 2nd generation sequencing platforms. The particular characteristics of 454 sequence data pose new challenges for bioinformatic analyses, e.g. assembly and alignment search algorithms. Simulation of these data is therefore useful, in order to further assess how bioinformatic applications and algorithms handle 454 data. Findings We developed a new application named 454sim for simulation of 454 data at high speed and accuracy. The program is multi-thread capable and is available as C++ source code or pre-compiled binaries. Sequence reads are simulated by 454sim using a set of statistical models for each chemistry. 454sim simulates recorded peak intensities, peak quality deterioration and it calculates quality values. All three generations of the Roche 454 chemistry ('GS20', 'GS FLX' and 'Titanium') are supported and defined in external text files for easy access and tweaking. Conclusions We present a new platform independent application named 454sim. 454sim is generally 200 times faster compared to previous programs and it allows for simple adjustments of the statistical models. These improvements make it possible to carry out more complex and rigorous algorithm evaluations in a reasonable time scale.

  12. Energy Level Statistics in Particle—Rotor Model

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xian-Rong; MENG Jie; et al.

    2002-01-01

    Energy level statistics of a system consisting of six particles interacting by a delta force in a two-j model coupled with a deformed core is studied in the particle-rotor model. For the single-j shell (i13/2) and the two-j shell (g7/2 + d5/2) the exact energies for our statistical analysis are obtained from a full diagonalization of the Hamiltonian, while in the two-j case (i13/2 + g9/2) a configuration truncation is used. The nearest-neighbor distribution of energy levels and the spectral rigidity are studied as functions of spin. The results for the single-j shell are compared with those for the two-j case. It is shown that the system becomes more regular when the single-j space (i13/2) is replaced by the two-j shell (g7/2 + d5/2), although the basis size of the configuration space is unchanged. The degree of chaoticity of the system, however, changes only slightly when the configuration space is enlarged by extending the single-j shell (i13/2) to the two-j shell (i13/2 + g9/2).
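    The nearest-neighbor spacing analysis used in this record contrasts regular (Poisson) and chaotic (Wigner/GOE) level statistics. The following sketch is not the authors' particle-rotor code: it samples 2×2 GOE matrices, whose spacing distribution is exactly the Wigner surmise, alongside independent uniform levels, and compares the fraction of small spacings, which exposes level repulsion.

    ```python
    import math
    import random

    def goe_spacings(n, seed=1):
        """Normalized eigenvalue spacings of 2x2 GOE matrices [[a, b], [b, c]];
        the spacing sqrt((a - c)^2 + 4 b^2) follows the Wigner surmise."""
        rng = random.Random(seed)
        s = []
        for _ in range(n):
            a, c = rng.gauss(0, 1), rng.gauss(0, 1)
            b = rng.gauss(0, math.sqrt(0.5))  # GOE off-diagonal variance 1/2
            s.append(math.sqrt((a - c) ** 2 + 4 * b * b))
        mean = sum(s) / n
        return [x / mean for x in s]

    def poisson_spacings(n, seed=2):
        """Normalized spacings of independent uniform levels (regular spectra)."""
        rng = random.Random(seed)
        levels = sorted(rng.random() for _ in range(n + 1))
        s = [hi - lo for lo, hi in zip(levels, levels[1:])]
        mean = sum(s) / n
        return [x / mean for x in s]

    def small_spacing_fraction(spacings, cut=0.1):
        """Fraction of spacings below `cut` (in units of the mean spacing);
        level repulsion strongly suppresses this fraction for GOE spectra."""
        return sum(1 for x in spacings if x < cut) / len(spacings)

    f_goe = small_spacing_fraction(goe_spacings(20000))
    f_poisson = small_spacing_fraction(poisson_spacings(20000))
    ```

    For Poisson statistics P(s < 0.1) ≈ 1 − e^(−0.1) ≈ 0.095, while the Wigner surmise gives roughly 0.008, an order of magnitude less.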

  13. Statistic Model Based Dynamic Channel Compensation for Telephony Speech Recognition

    Institute of Scientific and Technical Information of China (English)

    ZHANG Huayun; HAN Zhaobing; XU Bo

    2004-01-01

    The degradation of speech recognition performance in real-life environments and over transmission channels is a major obstacle for many speech-based applications around the world, especially when nonstationary noise and changing channels exist. Previous works have shown that the main reason for this performance degradation is the variational mismatch caused by different telephone channels between the testing and training sets. In this paper, we propose a statistical-model-based implementation to dynamically compensate for this mismatch. Firstly, we focus on a Maximum-likelihood (ML) estimation algorithm for telephone channels. In experiments on Mandarin Large vocabulary continuous speech recognition (LVCSR) over telephone lines, the Character error rate (CER) decreases by more than 20%. The average delay is about 300-400 ms. Secondly, we extend it by introducing a phone-conditioned prior statistical model for the channels and applying a Maximum a posteriori (MAP) estimation technique. Compared to the ML-based method, the MAP-based algorithm tracks variations within channels more effectively. The average delay of the algorithm decreases to 200 ms. An additional 7-8% relative CER reduction is observed in LVCSR.

  14. Isospin dependence of nuclear multifragmentation in statistical model

    Institute of Scientific and Technical Information of China (English)

    ZHANG Lei; XIE Dong-Zhu; ZHANG Yan-Ping; GAO Yuan

    2011-01-01

    The evolution of nuclear disintegration mechanisms with increasing excitation energy, from compound nucleus to multifragmentation, has been studied by using the Statistical Multifragmentation Model (SMM) within a micro-canonical ensemble. We discuss the observable characteristics as functions of excitation energy in multifragmentation, concentrating on the isospin dependence of the model in its decay mechanism and break-up fragment configuration by comparing the A = 200, Z = 78 and A = 200, Z = 100 systems. The calculations indicate that the neutron-rich system (Z = 78) transitions from evaporation to a fission-like process later than the symmetric nucleus at lower excitation energies, but attains a larger average multiplicity as the excitation energy increases above 1.0 MeV/u.

  15. Image Watermarking Using Visual Perception Model and Statistical Features

    Directory of Open Access Journals (Sweden)

    Mrs. C. Akila

    2010-06-01

    Full Text Available This paper presents an effective method for image watermarking using a visual perception model based on statistical features in the low-frequency domain. In the image watermarking community, watermark resistance to geometric attacks is an important issue. Most countermeasures proposed in the literature usually focus on the problem of global affine transforms such as rotation, scaling and translation (RST), but few are resistant to challenging cropping and random bending attacks (RBAs). Watermark embedding can also introduce distortion in the form of artifacts. A visual perception model is proposed to quantify the localized tolerance to noise for arbitrary imagery, which achieves a reduction of artifacts. As a result, the watermarking system provides satisfactory performance for content-preserving geometric deformations and image processing operations, including JPEG compression, low-pass filtering, cropping and RBAs.

  16. Role of scaling in the statistical modelling of finance

    Indian Academy of Sciences (India)

    Attilio L Stella; Fulvio Baldovin

    2008-08-01

    Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model. The resulting stochastic process is a heteroskedastic, non-Markovian martingale, which can be used to simulate index evolution on the basis of an autoregressive strategy. Results are fully consistent with volatility clustering and with the multiscaling properties of the return distribution. The idea of basing the process construction on scaling, and the construction itself, are closely inspired by the probabilistic renormalization group approach of statistical mechanics and by a recent formulation of the central limit theorem for sums of strongly correlated random variables.

  17. Helicity statistics in homogeneous and isotropic turbulence and turbulence models

    CERN Document Server

    Sahoo, Ganapati; Biferale, Luca

    2016-01-01

    We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small-scales, i.e. chiral terms are always subleading and they are well captured by a dimensional argument plus a small anomalous correction. We confirm these findings with numerical study of helical shell models at high Reynolds numbers.

  18. Helicity statistics in homogeneous and isotropic turbulence and turbulence models

    Science.gov (United States)

    Sahoo, Ganapati; De Pietro, Massimo; Biferale, Luca

    2017-02-01

    We study the statistical properties of helicity in direct numerical simulations of fully developed homogeneous and isotropic turbulence and in a class of turbulence shell models. We consider correlation functions based on combinations of vorticity and velocity increments that are not invariant under mirror symmetry. We also study the scaling properties of high-order structure functions based on the moments of the velocity increments projected on a subset of modes with either positive or negative helicity (chirality). We show that mirror symmetry is recovered at small scales, i.e., chiral terms are subleading and they are well captured by a dimensional argument plus anomalous corrections. These findings are also supported by a high Reynolds numbers study of helical shell models with the same chiral symmetry of Navier-Stokes equations.

  19. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    Science.gov (United States)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are of great importance in starting drug use, for example the rare events in personal experience that permit one to occasionally overcome the barrier to drug use. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.

  20. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    CERN Document Server

    Di Clemente, Riccardo; 10.1038/srep00532

    2012-01-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are of great importance in starting drug use, for example the rare events in personal experience that permit one to occasionally overcome the barrier to drug use. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step of a realistic description of this phenomenon and can be easily generalized in various directions.

  1. Statistical properties of cloud lifecycles in cloud-resolving models

    Directory of Open Access Journals (Sweden)

    R. S. Plant

    2008-12-01

    Full Text Available A new technique is described for the analysis of cloud-resolving model simulations, which allows one to investigate the statistics of the lifecycles of cumulus clouds. Clouds are tracked from timestep to timestep within the model run. This allows for a very simple method of tracking, but one which is both comprehensive and robust. An approach for handling cloud splits and mergers is described which allows clouds with simple and complicated time histories to be compared within a single framework. This is found to be important for the analysis of an idealized simulation of radiative-convective equilibrium, in which the moist, buoyant updrafts (i.e., the convective cores) were tracked. Around half of all such cores were subject to splits and mergers during their lifecycles. For cores without any such events, the average lifetime is 30 min, but events can lengthen the typical lifetime considerably.
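    Timestep-to-timestep tracking with split/merger detection can be sketched as overlap linking between labelled cloud fields. This is a minimal illustration, not the paper's tracker; the grids and labels are made up.

    ```python
    def link_clouds(labels_prev, labels_next):
        """Link labelled clouds between consecutive timesteps by grid-cell
        overlap. labels_*: 2-D lists of int cloud labels (0 = clear sky).
        A previous label appearing in several pairs marks a split; a next
        label appearing in several pairs marks a merger."""
        links = set()
        for row_prev, row_next in zip(labels_prev, labels_next):
            for p, q in zip(row_prev, row_next):
                if p and q:  # both cells cloudy: the clouds overlap
                    links.add((p, q))
        return links

    # cloud 1 splits into clouds 1 and 2 between two timesteps
    prev_step = [[1, 1, 1, 1],
                 [1, 1, 1, 1]]
    next_step = [[1, 1, 0, 2],
                 [1, 0, 0, 2]]
    links = link_clouds(prev_step, next_step)  # {(1, 1), (1, 2)} -> a split
    ```

    Counting how many links share a previous (or next) label gives the split and merger statistics the abstract reports.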

  2. Statistical analysis and model of spread F occurrence in China

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The spread F data obtained over Lanzhou (36.1°N, 103.9°E), Chongqing (29.5°N, 106.4°E) and Haikou (20.0°N, 110.3°E) in China during the period from 1978 to 1997 are used to analyze the occurrence characteristics. The statistical results show that post-midnight spread F occurrence peaks during the summer solstice months of the lower solar activity period, while post-sunset spread F is dominant in the equinoxes of the higher solar activity period over the Haikou station. Over the Chongqing and Lanzhou stations, spread F mostly occurs post midnight and correlates negatively with solar activity. Using a regression method and Fourier expansion, a preliminary single-station model of spread F occurrence is established and the accuracy of the model is evaluated.

  3. Statistical Modeling of Robotic Random Walks on Different Terrain

    Science.gov (United States)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
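    The RW/CRW distinction and the mean-square-displacement (MSD) diagnostic can be reproduced in a few lines. This is a generic simulation sketch, not the robot experiment; the step and turning-angle parameters are illustrative.

    ```python
    import math
    import random

    def random_walk_msd(n_steps, n_walks, step=1.0, turn_sigma=None, seed=0):
        """Mean square displacement of 2-D walks with uniform step size.

        turn_sigma=None gives an uncorrelated RW (fresh uniform heading
        each step); a finite turn_sigma gives a correlated RW (CRW) whose
        heading changes by a Gaussian turning angle, so successive steps
        are directionally correlated."""
        rng = random.Random(seed)
        msd = 0.0
        for _ in range(n_walks):
            x = y = 0.0
            theta = rng.uniform(0, 2 * math.pi)
            for _ in range(n_steps):
                if turn_sigma is None:
                    theta = rng.uniform(0, 2 * math.pi)
                else:
                    theta += rng.gauss(0, turn_sigma)
                x += step * math.cos(theta)
                y += step * math.sin(theta)
            msd += x * x + y * y
        return msd / n_walks

    # For an uncorrelated RW, MSD ~ n_steps * step^2; persistence (small
    # turn_sigma) inflates the MSD because the heading decorrelates slowly.
    rw = random_walk_msd(100, 2000)
    crw = random_walk_msd(100, 2000, turn_sigma=0.3)
    ```

    Terrain effects such as a constant slope would enter as a biased turning-angle or step-size distribution in the same loop.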

  4. Comparison of Statistical Multifragmentation Model simulations with Canonical Thermodynamical Model results: a few representative cases

    CERN Document Server

    Botvina, A; Gupta, S Das; Mishustin, I

    2008-01-01

    The statistical multifragmentation model (SMM) has been widely used to explain experimental data of intermediate energy heavy ion collisions. A later entrant in the field is the canonical thermodynamic model (CTM), which is also being used to fit experimental data. The basic physics of both models is the same, namely that fragments are produced according to their statistical weights in the available phase space. However, they are based on different statistical ensembles, and the methods of calculation are different: while the SMM uses Monte-Carlo simulations, the CTM solves recursion relations. In this paper we compare the predictions of the two models for a few representative cases.
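    The recursion relations the CTM solves can be sketched for a one-component system: the canonical partition function obeys Z_A = (1/A) Σ_k k ω_k Z_{A−k}, from which average fragment multiplicities follow. The ω_k weights below are toy values, not a nuclear parametrization; this is an illustration of the recursion, not the authors' code.

    ```python
    def canonical_partition_functions(A, omega):
        """Canonical partition functions Z_0..Z_A via the recursion
        Z_A = (1/A) * sum_k k * omega[k] * Z_{A-k},
        where omega[k] is the partition function of one fragment of size k."""
        Z = [1.0] + [0.0] * A
        for a in range(1, A + 1):
            Z[a] = sum(k * omega[k] * Z[a - k] for k in range(1, a + 1)) / a
        return Z

    def mean_multiplicities(A, omega):
        """Average multiplicity of size-k fragments: <n_k> = omega[k] * Z_{A-k} / Z_A."""
        Z = canonical_partition_functions(A, omega)
        return {k: omega[k] * Z[A - k] / Z[A] for k in range(1, A + 1)}

    # toy single-fragment weights (illustrative only)
    A = 20
    omega = {k: 1.0 / k for k in range(1, A + 1)}
    n_mean = mean_multiplicities(A, omega)
    # mass conservation holds exactly by construction: sum_k k * <n_k> == A
    ```

    Unlike a Monte-Carlo sampling of fragmentation events (the SMM route), the recursion yields ensemble averages directly and exactly.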

  5. A statistically predictive model for future monsoon failure in India

    Science.gov (United States)

    Schewe, Jacob; Levermann, Anders

    2012-12-01

    Indian monsoon rainfall is vital for a large share of the world’s population. Both reliably projecting India’s future precipitation and unraveling abrupt cessations of monsoon rainfall found in paleorecords require improved understanding of its stability properties. While details of monsoon circulations and the associated rainfall are complex, full-season failure is dominated by large-scale positive feedbacks within the region. Here we find that in a comprehensive climate model, monsoon failure is possible but very rare under pre-industrial conditions, while under future warming it becomes much more frequent. We identify the fundamental intraseasonal feedbacks that are responsible for monsoon failure in the climate model, relate these to observational data, and build a statistically predictive model for such failure. This model provides a simple dynamical explanation for future changes in the frequency distribution of seasonal mean all-Indian rainfall. Forced only by global mean temperature and the strength of the Pacific Walker circulation in spring, it reproduces the trend as well as the multidecadal variability in the mean and skewness of the distribution, as found in the climate model. The approach offers an alternative perspective on large-scale monsoon variability as the result of internal instabilities modulated by pre-seasonal ambient climate conditions.

  6. phenix.model_vs_data: a high-level tool for the calculation of crystallographic model and data statistics.

    Science.gov (United States)

    Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Chen, Vincent B; Headd, Jeffrey J; Moriarty, Nigel W; Richardson, Jane S; Richardson, David C; Urzhumtsev, Alexandre; Zwart, Peter H; Adams, Paul D

    2010-08-01

    phenix.model_vs_data is a high-level command-line tool for the computation of crystallographic model and data statistics, and the evaluation of the fit of the model to data. Analysis of all Protein Data Bank structures that have experimental data available shows that in most cases the reported statistics, in particular R factors, can be reproduced within a few percentage points. However, there are a number of outliers where the recomputed R values are significantly different from those originally reported. The reasons for these discrepancies are discussed.
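    The R factor whose reproducibility the tool checks is a simple agreement statistic between observed and model structure-factor amplitudes. A minimal sketch with made-up amplitudes (not phenix code):

    ```python
    def r_factor(f_obs, f_calc):
        """Crystallographic R factor:
        sum over reflections of | |F_obs| - |F_calc| |, divided by sum |F_obs|.
        Smaller values mean a better fit of the model to the data."""
        num = sum(abs(abs(o) - abs(c)) for o, c in zip(f_obs, f_calc))
        return num / sum(abs(o) for o in f_obs)

    # toy structure-factor amplitudes (illustrative values only)
    f_obs = [100.0, 50.0, 25.0, 10.0]
    f_calc = [95.0, 52.0, 24.0, 12.0]
    r = r_factor(f_obs, f_calc)  # (5 + 2 + 1 + 2) / 185 ~= 0.054
    ```

    A discrepancy of "a few percentage points" in this quantity is what separates reproducible depositions from the outliers the abstract discusses.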

  7. Is flow velocity a significant parameter in flood damage modelling?

    Directory of Open Access Journals (Sweden)

    H. Kreibich

    2009-10-01

    Full Text Available Flow velocity is generally presumed to influence flood damage. However, this influence is hardly quantified and virtually no damage models take it into account. Therefore, the influences of flow velocity, water depth and combinations of these two impact parameters on various types of flood damage were investigated in five communities affected by the Elbe catchment flood in Germany in 2002. 2-D hydraulic models with high to medium spatial resolutions were used to calculate the impact parameters at the sites in which damage occurred. A significant influence of flow velocity on structural damage, particularly on roads, could be shown in contrast to a minor influence on monetary losses and business interruption. Forecasts of structural damage to road infrastructure should be based on flow velocity alone. The energy head is suggested as a suitable flood impact parameter for reliable forecasting of structural damage to residential buildings above a critical impact level of 2 m of energy head or water depth. However, general consideration of flow velocity in flood damage modelling, particularly for estimating monetary loss, cannot be recommended.

  8. A Generalized Statistical Uncertainty Model for Satellite Precipitation Products

    Science.gov (United States)

    Sarachi, S.

    2013-12-01

    A mixture model of the Generalized Normal Distribution and the Gamma distribution (GND-G) is used to model the joint probability distribution of satellite-based and stage IV radar rainfall under a given spatial and temporal resolution (e.g. 1°×1° and daily rainfall). The distribution parameters of GND-G are extended across various rainfall rates and spatial and temporal resolutions. In this study, GND-G is used to describe the uncertainty of the estimates from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) algorithm. The stage IV-based multi-sensor precipitation estimates (MPE) are used as reference measurements. The study area for constructing the uncertainty model covers a 15°×15° box of 0.25°×0.25° cells over the eastern United States for the summers of 2004 to 2009. Cells are aggregated in space and time to obtain data with different resolutions for the construction of the model's parameter space. Results show that GND-G fits the reference precipitation data better than other statistical uncertainty models, such as the Gaussian and Gamma distributions. The impact of precipitation uncertainty on stream flow is further demonstrated by Monte Carlo simulation of precipitation forcing in a hydrologic model. The NWS DMIP2 Illinois River basin south of Siloam is selected for this case study, with data covering the time period 2006 to 2008. The uncertainty range of stream flow resulting from the GND-G precipitation distributions is calculated and discussed.

  9. Emerging Trends and Statistical Analysis in Computational Modeling in Agriculture

    Directory of Open Access Journals (Sweden)

    Sunil Kumar

    2015-03-01

    Full Text Available In this paper the authors describe emerging trends in computational modelling used in the sphere of agriculture. Agricultural computational modelling, which uses intelligence techniques to compute agricultural output from minimal input data, is gaining momentum because it cuts down on multi-locational field trials and thereby saves time, labour, and other inputs. Development of locally suitable integrated farming systems (IFS) is the utmost need of the day, particularly in India, where about 95% of farms are of small and marginal holding size. Optimization of the size and number of the various enterprises in the desired IFS model for a particular agro-climate is an essential component of research to sustain agricultural productivity, not only to feed the burgeoning population of the country but also to enhance nutritional security and farm returns for a quality life. Literature pertaining to emerging trends in computational modelling applied in the field of agriculture is reviewed and described below for the purpose of understanding its mechanisms, behaviour, and applications. Computational modelling is increasingly effective for the design and analysis of systems, and it is an important tool for analysing the effects of different climate scenarios and management options on farming systems and their interactions. Further, the authors highlight applications of computational modelling in integrated farming systems, crops, weather, soil, climate, horticulture, and statistical techniques used in agriculture, which can show the path for agricultural researchers and the rural farming community to replace some traditional techniques.

  10. Testing the DGP model with gravitational lensing statistics

    Science.gov (United States)

    Zhu, Zong-Hong; Sereno, M.

    2008-09-01

    Aims: The self-accelerating braneworld model (DGP) appears to provide a simple alternative to the standard ΛCDM cosmology to explain the current cosmic acceleration, which is strongly indicated by measurements of type Ia supernovae, as well as other concordant observations. Methods: We investigate observational constraints on this scenario provided by gravitational-lensing statistics using the Cosmic Lens All-Sky Survey (CLASS) lensing sample. Results: We show that a substantial part of the parameter space of the DGP model agrees well with that of the radio source gravitational lensing sample. Conclusions: In the flat case, Ω_K = 0, the likelihood is maximized, L = L_max, for Ω_M = 0.30 (+0.19, -0.11). If we relax the prior on Ω_K, the likelihood peaks at (Ω_M, Ω_rc) ≃ (0.29, 0.12), slightly in the region of open models. The confidence contours are, however, elongated such that we are unable to discard any of the closed, flat or open models.

  11. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  12. Glass viscosity calculation based on a global statistical modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    Fluegel, Alex

    2007-02-01

    A global statistical glass viscosity model was developed for predicting the complete viscosity curve, based on more than 2200 composition-property data of silicate glasses from the scientific literature, including soda-lime-silica container and float glasses, TV panel glasses, borosilicate fiber wool and E type glasses, low expansion borosilicate glasses, glasses for nuclear waste vitrification, lead crystal glasses, binary alkali silicates, and various further compositions from over half a century. It is shown that within a measurement series from a specific laboratory the reported viscosity values are often over-estimated at higher temperatures due to alkali and boron oxide evaporation during the measurement and glass preparation, including data by Lakatos et al. (1972) and the recently published High temperature glass melt property database for process modeling by Seward et al. (2005). Similarly, in the glass transition range many experimental data of borosilicate glasses are reported too high due to phase separation effects. The developed global model corrects those errors. The model standard error was 9-17°C, with R^2 = 0.985-0.989. The prediction 95% confidence interval for glass in mass production largely depends on the glass composition of interest, the composition uncertainty, and the viscosity level. New insights in the mixed-alkali effect are provided.

  13. Improving statistical forecasts of seasonal streamflows using hydrological model output

    Directory of Open Access Journals (Sweden)

    D. E. Robertson

    2013-02-01

    Full Text Available Statistical methods traditionally applied for seasonal streamflow forecasting use predictors that represent the initial catchment condition and future climate influences on future streamflows. Observations of antecedent streamflows or rainfall commonly used to represent the initial catchment conditions are surrogates for the true source of predictability and can potentially have limitations. This study investigates a hybrid seasonal forecasting system that uses the simulations from a dynamic hydrological model as a predictor to represent the initial catchment condition in a statistical seasonal forecasting method. We compare the skill and reliability of forecasts made using the hybrid forecasting approach to those made using the existing operational practice of the Australian Bureau of Meteorology for 21 catchments in eastern Australia. We investigate the reasons for differences. In general, the hybrid forecasting system produces forecasts that are more skilful than the existing operational practice and as reliable. The greatest increases in forecast skill tend to be (1) when the catchment is wetting up but antecedent streamflows have not responded to antecedent rainfall, (2) when the catchment is drying and the dominant source of antecedent streamflow is in transition between surface runoff and base flow, and (3) when the initial catchment condition is near saturation intermittently throughout the historical record.
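    Forecast skill comparisons of this kind are commonly summarized with a skill score against a reference forecast such as climatology. The sketch below is a generic MSE-based skill score, not the Bureau's verification code; the streamflow numbers are made up.

    ```python
    def mse_skill_score(forecasts, observations, reference):
        """MSE-based forecast skill score: 1 - MSE(forecast) / MSE(reference).
        Positive values mean the forecast beats the reference (e.g. a
        climatology forecast); 1 is perfect, 0 is no improvement."""
        def mse(pred):
            return sum((p - o) ** 2 for p, o in zip(pred, observations)) / len(observations)
        return 1.0 - mse(forecasts) / mse(reference)

    obs = [120.0, 80.0, 150.0, 60.0]      # seasonal streamflows (made-up units)
    climatology = [102.5] * 4             # reference: long-term mean
    hybrid = [115.0, 85.0, 140.0, 70.0]   # hypothetical hybrid-system forecasts
    ss = mse_skill_score(hybrid, obs, climatology)  # > 0: skilful vs climatology
    ```

    Swapping the reference for the existing operational forecasts gives the head-to-head comparison the abstract reports.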

  14. Improving statistical forecasts of seasonal streamflows using hydrological model output

    Science.gov (United States)

    Robertson, D. E.; Pokhrel, P.; Wang, Q. J.

    2013-02-01

    Statistical methods traditionally applied for seasonal streamflow forecasting use predictors that represent the initial catchment condition and future climate influences on future streamflows. Observations of antecedent streamflows or rainfall commonly used to represent the initial catchment conditions are surrogates for the true source of predictability and can potentially have limitations. This study investigates a hybrid seasonal forecasting system that uses the simulations from a dynamic hydrological model as a predictor to represent the initial catchment condition in a statistical seasonal forecasting method. We compare the skill and reliability of forecasts made using the hybrid forecasting approach to those made using the existing operational practice of the Australian Bureau of Meteorology for 21 catchments in eastern Australia. We investigate the reasons for differences. In general, the hybrid forecasting system produces forecasts that are more skilful than the existing operational practice and as reliable. The greatest increases in forecast skill tend to be (1) when the catchment is wetting up but antecedent streamflows have not responded to antecedent rainfall, (2) when the catchment is drying and the dominant source of antecedent streamflow is in transition between surface runoff and base flow, and (3) when the initial catchment condition is near saturation intermittently throughout the historical record.

  15. A Statistical Toolbox For Mining And Modeling Spatial Data

    Directory of Open Access Journals (Sweden)

    D’Aubigny Gérard

    2016-12-01

    Full Text Available Most data mining projects in spatial economics start with an evaluation of a set of attribute variables on a sample of spatial entities, looking for the existence and strength of spatial autocorrelation based on Moran's and Geary's coefficients, the adequacy of which is rarely challenged, despite the fact that, when reporting on their properties, many users seem likely to make mistakes and to foster confusion. My paper begins with a critical appraisal of the classical definition and rationale of these indices. I argue that while intuitively founded, they are plagued by an inconsistency in their conception. Then, I propose a principled small change leading to corrected spatial autocorrelation coefficients, which strongly simplifies their relationship and opens the way to an augmented toolbox of statistical methods of dimension reduction and data visualization, also useful for modeling purposes. A second section presents a formal framework, adapted from recent work in statistical learning, which gives theoretical support to our definition of corrected spatial autocorrelation coefficients. More specifically, the multivariate data mining methods presented here are easily implementable on existing (free) software and yield methods useful for exploiting the proposed corrections in spatial data analysis practice; from a mathematical point of view, their asymptotic behavior, already studied in a series of papers by Belkin & Niyogi, suggests that they possess qualities of robustness and a limited sensitivity to the Modifiable Areal Unit Problem (MAUP), valuable in exploratory spatial data analysis.
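    For reference, the classical (uncorrected) Moran's I that the abstract critiques is I = (n / W) · Σ_ij w_ij (x_i − x̄)(x_j − x̄) / Σ_i (x_i − x̄)². A minimal sketch of that definition, not the author's corrected coefficient; the chain adjacency and values are illustrative.

    ```python
    def morans_i(values, weights):
        """Classical Moran's I spatial autocorrelation coefficient.
        values: one attribute value per spatial unit; weights: {(i, j): w_ij}
        over neighbouring pairs (directed; include both (i, j) and (j, i))."""
        n = len(values)
        mean = sum(values) / n
        dev = [v - mean for v in values]
        w_total = sum(weights.values())
        num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())
        den = sum(d * d for d in dev)
        return (n / w_total) * (num / den)

    # chain adjacency on 10 sites (both directions weighted 1)
    w = {}
    for i in range(9):
        w[(i, i + 1)] = 1.0
        w[(i + 1, i)] = 1.0

    trend = morans_i(list(range(10)), w)                 # smooth gradient: strongly positive
    checker = morans_i([(-1) ** i for i in range(10)], w)  # alternating pattern: I = -1
    ```

    Positive I signals clustering of similar values among neighbours; the alternating pattern gives the maximally negative value for this weight matrix.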

  16. Energy Level Statistics in Particle-Rotor Model

    Institute of Scientific and Technical Information of China (English)

    ZHOU Xian-Rong; GUO Lu; MENG Jie; ZHAO En-Guang

    2002-01-01

    Energy level statistics of a system consisting of six particles interacting through a delta force in a two-j model coupled with a deformed core is studied in the particle-rotor model. For the single-j shell (i13/2) and the two-j shell (g7/2 + d5/2), the exact energies for our statistical analysis are obtained from a full diagonalization of the Hamiltonian, while in the two-j case (i13/2 + g9/2) a configuration truncation is used. The nearest-neighbor distribution of energy levels and the spectral rigidity are studied as functions of spin. The results for the single-j shell are compared with those for the two-j case. It is shown that the system becomes more regular when the single-j space (i13/2) is replaced by the two-j shell (g7/2 + d5/2), although the basis size of the configuration space is unchanged. The degree of chaoticity of the system, however, changes only slightly when the configuration space is enlarged by extending the single-j shell (i13/2) to the two-j shell (i13/2 + g9/2).

  17. Assessing the relative effectiveness of statistical downscaling and distribution mapping in reproducing rainfall statistics based on climate model results

    Science.gov (United States)

    Langousis, Andreas; Mamalakis, Antonios; Deidda, Roberto; Marrocu, Marino

    2016-01-01

    To improve the skill of climate models (CMs) in reproducing the statistics of daily rainfall at a basin level, two types of statistical approaches have been suggested. One is statistical correction of CM rainfall outputs based on historical series of precipitation. The other, usually referred to as statistical rainfall downscaling, uses stochastic models to conditionally simulate rainfall series based on large-scale atmospheric forcing from CMs. While promising, the latter approach has attracted less attention in recent years, since the developed downscaling schemes involved complex weather identification procedures while demonstrating limited success in reproducing several statistical features of rainfall. In a recent effort, Langousis and Kaleris () developed a statistical framework for simulating daily rainfall intensities conditional on upper-air variables, which is simpler to implement and more accurately reproduces several statistical properties of actual rainfall records. Here we study the relative performance of (a) direct statistical correction of CM rainfall outputs using nonparametric distribution mapping, and (b) the statistical downscaling scheme of Langousis and Kaleris (), in reproducing historical rainfall statistics, including rainfall extremes, at a regional level. This is done for an intermediate-sized catchment in Italy, the Flumendosa catchment, using rainfall and atmospheric data from four CMs of the ENSEMBLES project. The obtained results are promising, since the proposed downscaling scheme is more accurate and robust in reproducing a number of historical rainfall statistics, independent of the CM used and the characteristics of the calibration period. This is particularly the case for yearly rainfall maxima.
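    A minimal sketch of the nonparametric distribution mapping used in approach (a), with synthetic gamma-distributed "observed" and biased "model" rainfall (the distributions and the dry bias are invented for illustration; the paper's implementation details may differ):

```python
import numpy as np

def quantile_map(model_vals, obs_ref, model_ref):
    """Empirical quantile mapping: give model values the observed distribution."""
    # empirical non-exceedance probability of each value in the model reference
    probs = np.searchsorted(np.sort(model_ref), model_vals, side='right') / len(model_ref)
    # read off the corresponding quantiles of the observed reference
    return np.quantile(obs_ref, np.clip(probs, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, size=2000)         # synthetic "observed" daily rainfall
mod = rng.gamma(2.0, 5.0, size=2000) * 0.5   # CM output with an invented dry bias
corrected = quantile_map(mod, obs, mod)
print(mod.mean(), corrected.mean(), obs.mean())   # bias largely removed
```

After mapping, the corrected series carries the observed distribution, which is exactly what such bias-correction schemes can guarantee for the calibration period but not necessarily for extremes outside it.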

  18. STATISTICAL MECHANICS MODELING OF MESOSCALE DEFORMATION IN METALS

    Energy Technology Data Exchange (ETDEWEB)

    Anter El-Azab

    2013-04-08

    The research under this project focused on theoretical and computational modeling of dislocation dynamics of mesoscale deformation of metal single crystals. Specifically, the work aimed to implement a continuum statistical theory of dislocations to understand strain hardening and cell structure formation under monotonic loading. These aspects of crystal deformation are manifestations of the evolution of the underlying dislocation system under mechanical loading. The project had three research tasks: 1) investigating the statistical characteristics of dislocation systems in deformed crystals; 2) formulating kinetic equations of dislocations and coupling these kinetic equations with crystal mechanics; 3) computational solution of the coupled crystal mechanics and dislocation kinetics. Comparison of dislocation dynamics predictions with experimental results on the statistical properties of dislocations and their fields was also part of the proposed effort. The first research task used the dislocation dynamics simulation method to investigate the spatial, orientation, velocity, and temporal statistics of dynamical dislocation systems, and used the results of this investigation to complete the kinetic description of dislocations. The second task focused on completing the formulation of a kinetic theory of dislocations that respects the discrete nature of crystallographic slip and the physics of dislocation motion and interaction in the crystal. Part of this effort also targeted the theoretical basis for establishing the connection between discrete and continuum representations of dislocations and the analysis of discrete dislocation simulation results within the continuum framework. This part of the research enables the enrichment of the kinetic description with information representing the behavior of discrete dislocation systems.
The third task focused on the development of physics-inspired numerical methods of solution of the coupled

  19. A statistical downscaling model for summer rainfall over Pakistan

    Science.gov (United States)

    Kazmi, Dildar Hussain; Li, Jianping; Ruan, Chengqing; Zhao, Sen; Li, Yanjie

    2016-10-01

    A statistical approach is used to construct an interannual model for summer (July-August) rainfall over the western parts of the South Asian Monsoon region. Observed monthly rainfall data for selected stations of Pakistan for the last 55 years (1960-2014) are taken as the predictand. Recommended climate indices, along with oceanic and atmospheric data on global scales for the period April-June, are employed as predictors. The first 40 years of data are taken as the training period and the rest as the validation period. A cross-validated stepwise regression approach is adopted to select robust predictors. Upper-tropospheric zonal wind at 200 hPa over the northeastern Atlantic is finally selected as the best predictor for the interannual model. In addition, the next candidate, geopotential height in the upper troposphere, is taken as an indirect predictor, being a source of energy transport from the core region (northeast Atlantic/western Europe) to the study area. The model performs well for both the training and validation periods, with a correlation coefficient of 0.71 and tolerable root mean square errors. The model is further cross-validated by incorporating JRA-55 data for the potential predictors, in addition to NCEP data, and by splitting the study period into five non-overlapping test samples. Subsequently, to verify the outcome of the model on physical grounds, observational analyses as well as model simulations are incorporated. It is revealed that, originating from the jet exit region through large vorticity gradients, zonally propagating waves may transport energy and momentum to the downstream areas of west-central Asia, which ultimately affect the interannual variability of the specific rainfall. Both the circumglobal teleconnection and Rossby wave propagation play vital roles in modulating the proposed mechanism.
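    A rough sketch of cross-validated stepwise predictor selection of the kind described, on synthetic data (the data, fold count, and greedy forward strategy are assumptions for illustration; the paper's exact procedure may differ):

```python
import numpy as np

def cv_rmse(X, y, k=5):
    """k-fold cross-validated RMSE of an ordinary least squares fit."""
    idx = np.arange(len(y))
    sq_err = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        sq_err.append((y[fold] - X[fold] @ beta) ** 2)
    return float(np.sqrt(np.concatenate(sq_err).mean()))

def forward_stepwise(X, y):
    """Greedily add the predictor that most lowers cross-validated RMSE."""
    n, p = X.shape
    chosen, best = [], np.inf
    while len(chosen) < p:
        scores = {j: cv_rmse(np.column_stack([np.ones(n), X[:, chosen + [j]]]), y)
                  for j in range(p) if j not in chosen}
        j, s = min(scores.items(), key=lambda kv: kv[1])
        if s >= best:
            break                     # no candidate improves the CV score
        chosen.append(j)
        best = s
    return chosen, best

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)   # only predictor 3 is real
sel, score = forward_stepwise(X, y)
print(sel, score)
```

The cross-validation guard is what stops the stepwise search from accumulating noise predictors, the same concern that motivates the five non-overlapping test samples in the abstract.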

  20. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.
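    A minimal fixed-effect, contrast-based network meta-analysis can be written as inverse-variance weighted least squares on a treatment-contrast design matrix. The sketch below uses hypothetical two-arm studies of three treatments (all numbers invented; the paper fits frequentist mixed models in SAS, which this does not reproduce):

```python
import numpy as np

# hypothetical two-arm studies: (treatment 1, treatment 2, effect, variance),
# where the effect is treatment 2 minus treatment 1
studies = [("A", "B", 1.0, 0.10),
           ("A", "B", 1.2, 0.15),
           ("A", "C", 2.1, 0.12),
           ("B", "C", 1.1, 0.20)]

basic = {"B": 0, "C": 1}        # basic parameters d_AB, d_AC (A is the reference)
X, y, w = [], [], []
for t1, t2, eff, var in studies:
    row = np.zeros(len(basic))
    if t2 != "A":
        row[basic[t2]] += 1.0
    if t1 != "A":
        row[basic[t1]] -= 1.0
    X.append(row); y.append(eff); w.append(1.0 / var)

X, y, W = np.array(X), np.array(y), np.diag(w)
# fixed-effect network estimate: inverse-variance weighted least squares
d_AB, d_AC = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(d_AB, d_AC, d_AC - d_AB)  # the B-vs-C contrast follows by consistency
```

Note how the B-vs-C study enters the design matrix as the difference of two basic parameters: that row is what lets indirect evidence tighten the direct comparisons, the key advantage the abstract describes.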

  1. Statistical characteristics of irreversible predictability time in regional ocean models

    Directory of Open Access Journals (Sweden)

    P. C. Chu

    2005-01-01

    Full Text Available Probabilistic aspects of regional ocean model predictability are analyzed using the probability density function (PDF) of the irreversible predictability time (IPT), called the τ-PDF, computed from an unconstrained ensemble of stochastic perturbations in initial conditions, winds, and open boundary conditions. Two attractors (a chaotic attractor and a small-amplitude stable limit cycle) are found in the wind-driven circulation. The relationship between the attractors' residence times and IPT determines the τ-PDF for short (up to several weeks) and intermediate (up to two months) predictions. The τ-PDF is usually non-Gaussian but not multi-modal for red-noise perturbations in initial conditions and perturbations in the wind and open boundary conditions. Bifurcation of the τ-PDF occurs as the tolerance level varies. Generally, extremely successful predictions (corresponding to the tail of the τ-PDF toward the large-IPT domain) are not outliers and share the same statistics as the whole ensemble of predictions.

  2. A context dependent pair hidden Markov model for statistical alignment

    CERN Document Server

    Arribas-Gil, Ana

    2011-01-01

    This article proposes a novel approach to statistical alignment of nucleotide sequences by introducing a context-dependent structure on the substitution process in the underlying evolutionary model. We propose to estimate alignments and context-dependent mutation rates from the observation of two homologous sequences. The procedure is based on a generalized pair hidden Markov structure, where, conditional on the alignment path, the nucleotide sequences follow a Markov distribution. We use a stochastic approximation expectation maximization (SAEM) algorithm to obtain accurate estimators of parameters and alignments. We provide results both on simulated data and on vertebrate genomes, which are known to have a high mutation rate at CG dinucleotides. In particular, we establish that the method improves the accuracy of the alignment of a human pseudogene and its functional gene.

  3. Quantum statistics of Raman scattering model with Stokes mode generation

    Science.gov (United States)

    Tanatar, Bilal; Shumovsky, Alexander S.

    1994-01-01

    The model describing three coupled quantum oscillators with decay of the Rayleigh mode into the Stokes and vibration (phonon) modes is examined. Due to the Manley-Rowe relations, the problem of exact eigenvalues and eigenstates is reduced to the calculation of new orthogonal polynomials defined by both difference and differential equations. The quantum statistical properties are examined in the case when, initially, the Stokes mode is in the vacuum state, the Rayleigh mode is in a number state, and the vibration mode is in a number or squeezed state. Collapses and revivals are obtained for different initial conditions, as well as the change over time from sub-Poisson to super-Poisson statistics and vice versa.

  4. Modelling the influence of photospheric turbulence on solar flare statistics

    Science.gov (United States)

    Mendoza, M.; Kaydul, A.; de Arcangelis, L.; Andrade, J. S., Jr.; Herrmann, H. J.

    2014-09-01

    Solar flares stem from the reconnection of twisted magnetic field lines in the solar photosphere. The energy and waiting time distributions of these events follow complex patterns that have been carefully considered in the past and that bear some resemblance to earthquakes and stock markets. Here we explore in detail the tangling motion of interacting flux tubes anchored in the plasma and the energy ejections that result when they recombine. The mechanism for energy accumulation and release in the flow is reminiscent of self-organized criticality. From this model, we suggest the origin of two important and widely studied properties of solar flare statistics, including the time-energy correlations. We first propose that the scale-free energy distribution of solar flares is largely due to the twist exerted by the vorticity of the turbulent photosphere. Second, the long-range temporal and time-energy correlations appear to arise from the tube-tube interactions. The agreement with satellite measurements is encouraging.

  5. Statistical model on the surface elevation of waves with breaking

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In the surface wind drift layer with constant momentum flux, two consistent sets of surface elevation expressions with breaking, together with occurrence conditions for breaking, are deduced from the first integrals of the energy and vortex variations and from the kinetic and mathematical breaking criteria; the expression for surface elevation with wave breaking is then established using the Heaviside function. On the basis of this form of the sea surface elevation with wave breaking and the understanding of small-slope sea waves, a triple composite function of real sea waves is presented, including the functions for the breaking, weakly nonlinear, and basic waves. The expression of the triple composite function and the normal distribution of the basic waves constitute the expected theoretical model for surface elevation statistics.

  6. Population stratification using a statistical model on hypergraphs

    CERN Document Server

    Vazquez, Alexei

    2007-01-01

    Population stratification is a problem encountered in several areas of biology and public health. We tackle this problem by mapping a population and its elements' attributes onto a hypergraph, a natural extension of the concept of a graph or network that encodes associations among any number of elements. On this hypergraph, we construct a statistical model reflecting our intuition about how the elements' attributes can emerge from a postulated population structure. Finally, we introduce the concept of stratification representativeness as a means of identifying the simplest stratification that already contains most of the information about the population structure. We demonstrate the power of this framework by stratifying an animal and a human population based on phenotypic and genotypic properties, respectively.

  7. Smooth extrapolation of unknown anatomy via statistical shape models

    Science.gov (United States)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
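    The Thin Plate Spline interpolation underlying the third technique can be sketched in two dimensions as follows (the anatomical application trains on 3-D displacement fields; this toy version with invented points only shows the TPS linear system):

```python
import numpy as np

def tps_fit(pts, vals):
    """Solve the thin plate spline system so that f(pts[i]) = vals[i] (2-D)."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = d * d * np.log(np.where(d > 0, d, 1.0))   # phi(r) = r^2 log r, phi(0) = 0
    P = np.hstack([np.ones((n, 1)), pts])         # affine part: 1, x, y
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([vals, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]                     # kernel weights, affine coefficients

def tps_eval(x, pts, w, a):
    d = np.linalg.norm(x[None, :] - pts, axis=-1)
    phi = d * d * np.log(np.where(d > 0, d, 1.0))
    return w @ phi + a[0] + a[1] * x[0] + a[2] * x[1]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]    # a plane: TPS recovers it exactly
w, a = tps_fit(pts, vals)
print(tps_eval(np.array([0.5, 0.5]), pts, w, a))
```

Because the known vertices enter as exact interpolation constraints, a TPS trained on displacements leaves the true surface untouched while bending the estimate smoothly toward it, which is why it avoids the corruption seen with feathering.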

  8. Critical, statistical, and thermodynamical properties of lattice models

    Energy Technology Data Exchange (ETDEWEB)

    Varma, Vipin Kerala

    2013-10-15

    In this thesis we investigate zero temperature and low temperature properties - critical, statistical and thermodynamical - of lattice models in the contexts of bosonic cold atom systems, magnetic materials, and non-interacting particles on various lattice geometries. We study quantum phase transitions in the Bose-Hubbard model with higher body interactions, as relevant for optical lattice experiments of strongly interacting bosons, in one and two dimensions; the universality of the Mott insulator to superfluid transition is found to remain unchanged for even large three body interaction strengths. A systematic renormalization procedure is formulated to fully re-sum these higher (three and four) body interactions into the two body terms. In the strongly repulsive limit, we analyse the zero and low temperature physics of interacting hard-core bosons on the kagome lattice at various fillings. Evidence for a disordered phase in the Ising limit of the model is presented; in the strong coupling limit, the transition between the valence bond solid and the superfluid is argued to be first order at the tip of the solid lobe.

  9. System models for PET statistical iterative reconstruction: A review.

    Science.gov (United States)

    Iriarte, A; Marabini, R; Matej, S; Sorzano, C O S; Lewitt, R M

    2016-03-01

    Positron emission tomography (PET) is a nuclear imaging modality that provides in vivo quantitative measurements of the spatial and temporal distribution of compounds labeled with a positron emitting radionuclide. In the last decades, a tremendous effort has been put into the field of mathematical tomographic image reconstruction algorithms that transform the data registered by a PET camera into an image that represents slices through the scanned object. Iterative image reconstruction methods often provide higher quality images than conventional direct analytical methods. Aside from taking into account the statistical nature of the data, the key advantage of iterative reconstruction techniques is their ability to incorporate detailed models of the data acquisition process. This is mainly realized through the use of the so-called system matrix, that defines the mapping from the object space to the measurement space. The quality of the reconstructed images relies to a great extent on the accuracy with which the system matrix is estimated. Unfortunately, an accurate system matrix is often associated with high reconstruction times and huge storage requirements. Many attempts have been made to achieve realistic models without incurring excessive computational costs. As a result, a wide range of alternatives to the calculation of the system matrix exists. In this article we present a review of the different approaches used to address the problem of how to model, calculate and store the system matrix.
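    To illustrate the role the system matrix plays in statistical iterative reconstruction, here is a toy MLEM loop with a hypothetical 3x3 system matrix (real PET system matrices are vastly larger and are estimated from the acquisition physics the review surveys):

```python
import numpy as np

# Hypothetical system matrix A (detector bins x voxels): A[i, j] is the
# probability that an emission from voxel j is recorded in bin i.
A = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.2],
              [0.0, 0.2, 0.8]])
x_true = np.array([10.0, 5.0, 20.0])   # invented activity distribution
y = A @ x_true                          # noiseless measured counts

x = np.ones(3)                          # flat initial image
sens = A.sum(axis=0)                    # sensitivity image, A^T 1
for _ in range(500):
    ratio = y / (A @ x)                 # measured counts over current projection
    x = x / sens * (A.T @ ratio)        # multiplicative MLEM update
print(x)                                # approaches x_true as iterations grow
```

Every iteration applies the system matrix twice (forward and back projection), which is why the accuracy, storage, and computation cost of the system matrix dominate the trade-offs discussed in the review.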

  10. The statistical multifragmentation model: Origins and recent advances

    Science.gov (United States)

    Donangelo, R.; Souza, S. R.

    2016-07-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  11. The statistical multifragmentation model: Origins and recent advances

    Energy Technology Data Exchange (ETDEWEB)

    Donangelo, R., E-mail: donangel@fing.edu.uy [Instituto de Física, Facultad de Ingeniería, Universidad de la República, Julio Herrera y Reissig 565, 11300, Montevideo (Uruguay); Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ (Brazil); Souza, S. R., E-mail: srsouza@if.ufrj.br [Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ (Brazil); Instituto de Física, Universidade Federal do Rio Grande do Sul, C.P. 15051, 91501-970 Porto Alegre - RS (Brazil)

    2016-07-07

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  12. Terminal-Dependent Statistical Inference for the FBSDEs Models

    Directory of Open Access Journals (Sweden)

    Yunquan Song

    2014-01-01

    Full Text Available The original stochastic differential equations (OSDEs) and forward-backward stochastic differential equations (FBSDEs) are often used to model complex dynamic processes that arise in financial, ecological, and many other areas. The main difference between OSDEs and FBSDEs is that the latter are designed to depend on a terminal condition, which is a key factor in some financial and ecological circumstances. It is interesting but challenging to estimate FBSDE parameters from noisy data and the terminal condition. To the best of our knowledge, however, terminal-dependent statistical inference for such models has not been explored in the existing literature. We propose a nonparametric terminal control variables estimation method to address this problem; the terminal control variables are used so that the newly proposed inference procedures inherit the terminal-dependent characteristic. Through this method, estimators of the functional coefficients of the FBSDE model are obtained, and their asymptotic properties are discussed. Simulation studies show that the proposed method gives satisfactory estimates of the FBSDE parameters from noisy data and the terminal condition, and demonstrate the feasibility of our method.

  13. Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification

    CERN Document Server

    Sarri, A; Dias, F

    2012-01-01

    Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics corresponding to the uncertain trigger features (e.g. either position, shape and speed of a landslide, or sea floor deformation associated with an earthquake). Unfortunately, computer models are expensive to run. This leads to significant delays in predictions and makes the uncertainty quantification impractical. Statistical emulators run almost instantaneously and may represent well the outputs of the computer model. In this paper, we use the Outer Product Emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the Leave-One-O...

  14. Statistical emulation of a tsunami model for sensitivity analysis and uncertainty quantification

    Directory of Open Access Journals (Sweden)

    A. Sarri

    2012-06-01

    Full Text Available Due to the catastrophic consequences of tsunamis, early warnings need to be issued quickly in order to mitigate the hazard. Additionally, there is a need to represent the uncertainty in the predictions of tsunami characteristics corresponding to the uncertain trigger features (e.g. either position, shape and speed of a landslide, or sea floor deformation associated with an earthquake). Unfortunately, computer models are expensive to run. This leads to significant delays in predictions and makes the uncertainty quantification impractical. Statistical emulators run almost instantaneously and may represent well the outputs of the computer model. In this paper, we use the outer product emulator to build a fast statistical surrogate of a landslide-generated tsunami computer model. This Bayesian framework enables us to build the emulator by combining prior knowledge of the computer model properties with a few carefully chosen model evaluations. The good performance of the emulator is validated using the leave-one-out method.
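    A minimal Gaussian-process emulator sketch, with a cheap analytic function standing in for the tsunami computer model and a leave-one-out check as in the paper (the kernel, length scale, and design points are all assumptions; the Outer Product Emulator itself is not reproduced here):

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, nugget=1e-6):
    """Posterior mean of a zero-mean GP with a squared-exponential kernel."""
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = kern(X, X) + nugget * np.eye(len(X))
    return kern(Xs, X) @ np.linalg.solve(K, y)

simulator = lambda x: np.sin(3.0 * x[:, 0])   # cheap stand-in for the tsunami model
X = np.linspace(0.0, 2.0, 9)[:, None]         # a few carefully chosen design runs
y = simulator(X)

# leave-one-out validation of the emulator
loo_err = []
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    m = gp_predict(X[keep], y[keep], X[i:i + 1])
    loo_err.append(abs(m[0] - y[i]))
print(max(loo_err))   # small: the emulator reproduces held-out runs well
```

Once fitted, each prediction costs one small linear solve instead of a full simulator run, which is the speed-up that makes uncertainty quantification practical for early warning.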

  15. Multivariate Statistical Modelling of Drought and Heat Wave Events

    Science.gov (United States)

    Manning, Colin; Widmann, Martin; Vrac, Mathieu; Maraun, Douglas; Bevaqua, Emanuele

    2016-04-01

    Compound extreme events are a combination of two or more contributing events which in themselves may not be extreme but through their joint occurrence produce an extreme impact. Compound events are noted in the latest IPCC report as an important type of extreme event that has been given little attention so far. As part of the CE:LLO project (Compound Events: muLtivariate statisticaL mOdelling) we are developing a multivariate statistical model to gain an understanding of the dependence structure of certain compound events. One focus of this project is the interaction between drought and heat wave events. Soil moisture has both a local and a non-local effect on the occurrence of heat waves, as it strongly controls the latent heat flux and thereby the transfer of sensible heat to the atmosphere. These processes can create a feedback whereby a heat wave may be amplified or suppressed by the soil moisture preconditioning and, vice versa, the heat wave may in turn affect soil conditions. An aim of this project is to capture this dependence in order to correctly describe the joint probabilities of these conditions and the resulting probability of their compound impact. We will show an application of Pair Copula Constructions (PCCs) to the aforementioned compound event. PCCs allow, in theory, for the formulation of multivariate dependence structures in any dimension; a PCC is a decomposition of a multivariate distribution into a product of bivariate components modelled using copulas.
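    As a much-simplified stand-in for the pair copula constructions described, the following samples a bivariate Gaussian copula for two hazard indicators and compares their joint exceedance probability with the independence assumption (the dependence strength and thresholds are invented for illustration):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
rho, n = 0.7, 200_000            # invented heat-drought dependence strength
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
# Gaussian copula: map correlated normals to uniform margins via the normal CDF
norm_cdf = np.vectorize(lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0))))
u = norm_cdf(z)

# chance that both hazards exceed their 90th percentiles in the same event
p_joint = np.mean((u[:, 0] > 0.9) & (u[:, 1] > 0.9))
p_indep = 0.1 * 0.1              # what ignoring the dependence would predict
print(p_joint, p_indep)          # joint occurrence is far more likely than 0.01
```

The gap between the two probabilities is precisely what a model that treats drought and heat independently would miss, which is the motivation for modelling the dependence structure explicitly.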

  16. Uncertainty analysis in statistical modeling of extreme hydrological events

    NARCIS (Netherlands)

    Xu, Yue-Ping; Booij, Martijn J.; Tong, Yang-Bin

    2010-01-01

    With the increase of both magnitude and frequency of hydrological extreme events such as drought and flooding, the significance of adequately modeling hydrological extreme events is fully recognized. Estimation of extreme rainfall/flood for various return periods is of prime importance for hydrologi

  17. Robust model selection and the statistical classification of languages

    Science.gov (United States)

    García, J. E.; González-López, V. A.; Viola, M. L. L.

    2012-10-01

    In this paper we address the problem of model selection for the set of finite memory stochastic processes with finite alphabet when the data are contaminated. We consider m independent samples, more than half of which are realizations of the same stochastic process with law Q, which is the one we want to retrieve. We devise a model selection procedure such that, for a sample size large enough, the selected process is the one with law Q. Our model selection strategy is based on estimating relative entropies to select a subset of samples that are realizations of the same law. Although the procedure is valid for any family of finite order Markov models, we focus on the family of variable length Markov chain models, which includes the family of fixed order Markov chain models. We define the asymptotic breakdown point (ABDP) of a model selection procedure, and we derive the ABDP of our procedure. This means that if the proportion of contaminated samples is smaller than the ABDP, then, as the sample size grows, our procedure selects a model for the process with law Q. We also use our procedure in a setting where one sample is formed by the concatenation of subsamples of two or more stochastic processes, with most of the subsamples having law Q. We conducted a simulation study. In the application section we address the question of the statistical classification of languages according to their rhythmic features using speech samples. This is an important open problem in phonology. A persistent difficulty with this problem is that the speech samples correspond to several sentences produced by diverse speakers, corresponding to a mixture of distributions. The usual procedure to deal with this problem has been to choose a subset of the original sample which seems to best represent each language, the selection being made by listening to the samples. In our application we use the full dataset without any preselection of samples.
We apply our robust methodology estimating
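
    The selection step described above can be sketched for the simplest possible case. The paper works with variable length Markov chains; the sketch below instead compares zero-memory (i.i.d.) empirical distributions, so `empirical_dist`, `sym_kl`, and `select_majority` are illustrative names for a much-simplified version of the idea, not the authors' procedure:

    ```python
    import numpy as np
    from collections import Counter

    def empirical_dist(sample, alphabet):
        """Smoothed empirical distribution of symbols in one sample."""
        counts = Counter(sample)
        n = len(sample)
        # add-one smoothing keeps every probability positive, so the
        # relative entropy below is always finite
        return np.array([(counts[a] + 1) / (n + len(alphabet)) for a in alphabet])

    def sym_kl(p, q):
        """Symmetrized relative entropy (KL divergence) between p and q."""
        return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

    def select_majority(samples, alphabet):
        """Indices of a majority subset: the samples whose empirical
        distributions are closest, in median divergence, to the rest."""
        dists = [empirical_dist(s, alphabet) for s in samples]
        m = len(samples)
        D = np.array([[sym_kl(dists[i], dists[j]) for j in range(m)]
                      for i in range(m)])
        # a sample drawn from the majority law is close to most of the
        # other samples, so its median divergence is small
        scores = np.median(D, axis=1)
        k = m // 2 + 1  # a strict majority
        return sorted(int(i) for i in np.argsort(scores)[:k])

    rng = np.random.default_rng(42)
    alphabet = [0, 1]
    # five samples from the "true" law Q (P(1) = 0.9), two contaminated ones
    samples = [rng.binomial(1, 0.9, size=2000) for _ in range(5)]
    samples += [rng.binomial(1, 0.1, size=2000) for _ in range(2)]
    chosen = select_majority(samples, alphabet)  # indices from the Q group
    ```

    With two contaminated samples out of seven, the contamination proportion is below one half, which is the regime in which the selected subset consists of realizations of Q.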

  18. A randomized trial in a massive online open course shows people don’t know what a statistically significant relationship looks like, but they can learn

    Directory of Open Access Journals (Sweden)

    Aaron Fisher

    2014-10-01

    Full Text Available Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%–49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%–76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
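
    The finding that small effects are hardest to judge by eye can be sketched numerically. The code below is not the study's analysis code: it uses a Fisher z-transform approximation to the Pearson correlation test rather than the exact t-test, and a plain Wald interval of the kind quoted in the abstract. It generates a weak but real relationship whose scatterplot would look close to noise, yet whose p-value is well below 0.05 at this sample size:

    ```python
    import math
    import numpy as np

    def pearson_p(x, y):
        """Two-sided p-value for a Pearson correlation via the Fisher
        z-transform (normal approximation, adequate for large n)."""
        r = float(np.corrcoef(x, y)[0, 1])
        z = math.atanh(r) * math.sqrt(len(x) - 3)
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
        return r, p

    def wald_ci(k, n, z=1.96):
        """95% normal-approximation CI for a classification accuracy k/n."""
        phat = k / n
        half = z * math.sqrt(phat * (1 - phat) / n)
        return phat - half, phat + half

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)
    y = 0.15 * x + rng.normal(size=n)  # weak but genuine relationship
    r, p = pearson_p(x, y)
    # small effect, large sample: |r| is modest but p is far below 0.05,
    # even though the scatterplot would look unconvincing to the eye
    ```

    The `wald_ci` helper reproduces the style of interval reported above; for example, an accuracy of 948 correct out of 2,000 classifications gives a 95% CI of roughly 45%–50% (the study's exact denominators are not stated here, so these counts are illustrative).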
