WorldWideScience

Sample records for similar statistical analysis

  1. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...
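
    The record stops short of detail, but the core comparison it describes can be sketched compactly: a Jensen-Shannon divergence between Laplacian spectral densities of two undirected networks. The graph generators, bin count, and use of the normalized Laplacian below are illustrative assumptions of mine, not details taken from the paper.

    ```python
    import numpy as np
    import networkx as nx

    def spectral_density(g, bins=32):
        """Eigenvalue histogram of the normalized Laplacian, as a probability vector."""
        eig = np.linalg.eigvalsh(nx.normalized_laplacian_matrix(g).toarray())
        hist, _ = np.histogram(eig, bins=bins, range=(0.0, 2.0))
        p = hist.astype(float) + 1e-12          # smooth to avoid log(0)
        return p / p.sum()

    def js_divergence(p, q):
        """Jensen-Shannon divergence (base 2) between two probability vectors."""
        m = 0.5 * (p + q)
        kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Two toy graphs standing in for earthquake networks of different regions.
    g1 = nx.barabasi_albert_graph(200, 3, seed=1)
    g2 = nx.erdos_renyi_graph(200, 0.03, seed=2)
    print(f"JS divergence: {js_divergence(spectral_density(g1), spectral_density(g2)):.4f}")
    ```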

  2. Western classical music development: a statistical analysis of composers similarity, differentiation and evolution.

    Science.gov (United States)

    Georges, Patrick

    2017-01-01

    This paper proposes a statistical analysis that captures similarities and differences between classical music composers with the eventual aim to understand why particular composers 'sound' different even if their 'lineages' (influences network) are similar or why they 'sound' alike if their 'lineages' are different. In order to do this we use statistical methods and measures of association or similarity (based on presence/absence of traits such as specific 'ecological' characteristics and personal musical influences) that have been developed in biosystematics, scientometrics, and bibliographic coupling. This paper also represents a first step towards a more ambitious goal of developing an evolutionary model of Western classical music.

  3. Testing statistical self-similarity in the topology of river networks

    Science.gov (United States)

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2010-01-01

    Recent work has demonstrated that the topological properties of real river networks deviate significantly from predictions of Shreve's random model. At the same time the property of mean self-similarity postulated by Tokunaga's model is well supported by data. Recently, a new class of network model called random self-similar networks (RSN), which combines self-similarity and randomness, has been introduced to replicate important topological features observed in real river networks. We investigate whether the hypothesis of statistical self-similarity in the RSN model is supported by data on a set of 30 basins located across the continental United States that encompass a wide range of hydroclimatic variability. We demonstrate that the generators of the RSN model obey a geometric distribution, and that self-similarity holds in a statistical sense in 26 of these 30 basins. The parameters describing the distributions of interior and exterior generators are shown to be statistically different, and the difference is shown to produce the well-known Hack's law. The inter-basin variability of RSN parameters is found to be statistically significant. We also test generator dependence on two climatic indices, mean annual precipitation and radiative index of dryness. Some indication of climatic influence on the generators is detected, but this influence is not statistically significant with the sample size available. Finally, two key applications of the RSN model to hydrology and geomorphology are briefly discussed.
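
    As a rough illustration of the kind of test the abstract mentions for the generators, the sketch below runs a chi-square goodness-of-fit of integer counts against a geometric law with its parameter estimated from the sample mean; the binning, tail lumping, and synthetic data are my assumptions, not the authors' procedure.

    ```python
    import numpy as np
    from scipy import stats

    def geometric_gof(counts, kmax=8):
        """Chi-square GOF test of nonnegative integer data against a geometric law
        on {0, 1, 2, ...}, with success probability estimated from the sample mean."""
        counts = np.asarray(counts)
        n = len(counts)
        p = 1.0 / (1.0 + counts.mean())                  # moment/ML estimate
        pmf = p * (1.0 - p) ** np.arange(kmax)
        probs = np.append(pmf, 1.0 - pmf.sum())          # lump the tail k >= kmax
        observed = np.bincount(np.minimum(counts, kmax), minlength=kmax + 1)
        return stats.chisquare(observed, probs * n, ddof=1)  # ddof=1: one fitted parameter

    rng = np.random.default_rng(0)
    generators = rng.geometric(0.4, size=500) - 1        # NumPy's geometric lives on {1, 2, ...}
    print(geometric_gof(generators))
    ```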

  4. Similar tests and the standardized log likelihood ratio statistic

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1986-01-01

    When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n^-3...

  5. Similar words analysis based on POS-CBOW language model

    Directory of Open Access Journals (Sweden)

    Dongru RUAN

    2015-10-01

    Full Text Available Similar words analysis is one of the important aspects of natural language processing, with important research and application value in text classification, machine translation and information recommendation. Focusing on the features of Sina Weibo's short texts, this paper presents a language model named POS-CBOW, a continuous bag-of-words language model augmented with a filtering layer and a part-of-speech tagging layer. The proposed approach can adjust the word vectors' similarity according to the cosine similarity and the word vectors' part-of-speech metrics. It can also filter the resulting sets of similar words on the basis of the statistical analysis model. Experimental results show that the similar words analysis algorithm based on the proposed POS-CBOW language model outperforms one based on the traditional CBOW language model.
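
    A minimal sketch of the similarity adjustment the abstract describes, under the assumption that part-of-speech agreement rescales a plain cosine similarity; the `penalty` weight and the random vectors are hypothetical, since the record does not give the exact formula.

    ```python
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def pos_adjusted_similarity(u, v, pos_u, pos_v, penalty=0.8):
        """Cosine similarity of two word vectors, down-weighted when their
        part-of-speech tags disagree (the weighting scheme is an assumption)."""
        sim = cosine(u, v)
        return sim if pos_u == pos_v else penalty * sim

    rng = np.random.default_rng(42)
    u, v = rng.normal(size=100), rng.normal(size=100)
    print(pos_adjusted_similarity(u, v, "NOUN", "VERB"))
    ```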

  6. EVALUATION OF SEMANTIC SIMILARITY FOR SENTENCES IN NATURAL LANGUAGE BY MATHEMATICAL STATISTICS METHODS

    Directory of Open Access Journals (Sweden)

    A. E. Pismak

    2016-03-01

    Full Text Available Subject of Research. The paper focuses on the structural organization of Wiktionary articles in the aspect of their usage as the base for a semantic network. Wiktionary community references, article templates and article markup features are analyzed. The problem of numerical estimation for semantic similarity of structural elements in Wiktionary articles is considered. Analysis of existing software for semantic similarity estimation of such elements is carried out; algorithms of their functioning are studied; their advantages and disadvantages are shown. Methods. Mathematical statistics methods were used to analyze Wiktionary article markup features. A method of semantic similarity computation based on statistical data for the compared structural elements was proposed. Main Results. We have concluded that there is no possibility for direct use of Wiktionary articles as the source for a semantic network. We have proposed to find hidden similarity between article elements, and for that purpose we have developed an algorithm that calculates confidence coefficients indicating that a given pair of sentences is semantically close. The study of quantitative and qualitative characteristics of the developed algorithm has shown a major performance advantage over the other existing solutions, at the cost of an insignificantly higher error rate. Practical Relevance. The resulting algorithm may be useful in developing tools for automatic parsing of Wiktionary articles. The developed method could be used for computing semantic similarity of short text fragments in natural language in cases where performance requirements outweigh accuracy requirements.

  7. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.
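
    A minimal sketch of the ideas quoted above: a linear (genomic-relationship-style) kernel built from a genotype matrix, a check of the positive semidefiniteness requirement, and use of the resulting kernel in ridge regression. The 0/1/2 marker coding and the regularization constant are illustrative assumptions, not prescriptions from the review.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # Toy genotype matrix: n subjects x p markers, coded 0/1/2 minor-allele copies.
    Z = rng.integers(0, 3, size=(50, 500)).astype(float)
    Zc = (Z - Z.mean(axis=0)) / (Z.std(axis=0) + 1e-12)    # column-standardize

    K = Zc @ Zc.T / Zc.shape[1]     # linear kernel: n x n matrix of pairwise similarities

    # The defining requirement mentioned above: K must be positive semidefinite.
    eigmin = np.linalg.eigvalsh(K).min()
    print(f"smallest eigenvalue: {eigmin:.2e} (PSD up to round-off: {eigmin > -1e-8})")

    # A PSD kernel can then feed kernel-based models, e.g. kernel ridge regression:
    y = rng.normal(size=50)                                 # toy phenotype
    alpha = np.linalg.solve(K + 1.0 * np.eye(50), y)        # (K + lambda I)^{-1} y
    y_hat = K @ alpha                                       # fitted values
    ```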

  8. Statistical similarities of pre-earthquake electromagnetic emissions to biological and economic extreme events

    Science.gov (United States)

    Potirakis, Stelios M.; Contoyiannis, Yiannis; Kopanas, John; Kalimeris, Anastasios; Antonopoulos, George; Peratzakis, Athanasios; Eftaxias, Konstantinos; Nomicos, Costantinos

    2014-05-01

    A phenomenon is considered "complex" when it refers to a system whose phenomenological laws, which describe the global behavior of the system, are not necessarily directly related to the "microscopic" laws that regulate the evolution of its elementary parts. The field of study of complex systems considers that the dynamics of complex systems are founded on universal principles that may be used to describe disparate problems ranging from particle physics to the economies of societies. Several authors have suggested that earthquake (EQ) dynamics can be analyzed within mathematical frameworks similar to those of economic dynamics and neurodynamics. A central property of the EQ preparation process is the occurrence of coherent large-scale collective behavior with a very rich structure, resulting from repeated nonlinear interactions among the constituents of the system. As a result, nonextensive statistics is an appropriate, physically meaningful, tool for the study of EQ dynamics. Since fracture-induced electromagnetic (EM) precursors are observable manifestations of the underlying EQ preparation process, the analysis of a fracture-induced EM precursor observed prior to the occurrence of a large EQ can also be conducted within the nonextensive statistics framework. Within the frame of the investigation for universal principles that may hold for different dynamical systems related to the genesis of extreme events, we present here statistical similarities of the pre-earthquake EM emissions related to an EQ with the pre-ictal electrical brain activity related to an epileptic seizure, and with the pre-crisis economic observables related to the collapse of a share. It is demonstrated that all three dynamical systems' observables can be analyzed in the frame of nonextensive statistical mechanics, while the frequency-size relations of appropriately defined "events" that precede the extreme event related to each one of these different systems present striking quantitative

  9. PHOG analysis of self-similarity in aesthetic images

    Science.gov (United States)

    Amirshahi, Seyed Ali; Koch, Michael; Denzler, Joachim; Redies, Christoph

    2012-03-01

    In recent years, there have been efforts to define the statistical properties of aesthetic photographs and artworks using computer vision techniques. However, it is still an open question how to distinguish aesthetic from non-aesthetic images with a high recognition rate. This is possibly because aesthetic perception is also influenced by a large number of cultural variables. Nevertheless, the search for statistical properties of aesthetic images has not been futile. For example, we have shown that the radially averaged power spectrum of monochrome artworks of Western and Eastern provenance falls off according to a power law with increasing spatial frequency (1/f² characteristics). This finding implies that this particular subset of artworks possesses a Fourier power spectrum that is self-similar across different scales of spatial resolution. Other types of aesthetic images, such as cartoons, comics and mangas, also display this type of self-similarity, as do photographs of complex natural scenes. Since the human visual system is adapted to encode images of natural scenes in a particularly efficient way, we have argued that artists imitate these statistics in their artworks. In support of this notion, we presented results that artists portray human faces with the self-similar Fourier statistics of complex natural scenes although real-world photographs of faces are not self-similar. In view of these previous findings, we investigated other statistical measures of self-similarity to characterize aesthetic and non-aesthetic images. In the present work, we propose a novel measure of self-similarity that is based on the Pyramid Histogram of Oriented Gradients (PHOG). For every image, we first calculate PHOG up to pyramid level 3. The similarity between the histogram of each section at a particular level and that of its parent section at the previous level (or the histogram at the ground level) is then calculated. The proposed approach is tested on datasets of aesthetic and
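
    The following sketch illustrates the flavor of the proposed measure under stated assumptions: gradient-orientation histograms are computed over a quad-tree pyramid and each section's histogram is compared with its parent's by histogram intersection. The bin count, the intersection kernel, and the random test image are my choices, not the paper's exact PHOG pipeline.

    ```python
    import numpy as np

    def orientation_histogram(patch, bins=16):
        gy, gx = np.gradient(patch.astype(float))
        mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi   # unsigned orientation
        hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
        return hist / (hist.sum() + 1e-12)

    def self_similarity(img, levels=3):
        """Mean histogram intersection between each section and its parent section
        over a quad-tree pyramid (a simplification of the PHOG-based measure)."""
        sims = []
        for level in range(1, levels + 1):
            n, m = 2 ** level, 2 ** (level - 1)
            h, w = img.shape[0] // n, img.shape[1] // n
            ph, pw = img.shape[0] // m, img.shape[1] // m
            for i in range(n):
                for j in range(n):
                    child = img[i * h:(i + 1) * h, j * w:(j + 1) * w]
                    parent = img[(i // 2) * ph:(i // 2 + 1) * ph,
                                 (j // 2) * pw:(j // 2 + 1) * pw]
                    sims.append(np.minimum(orientation_histogram(child),
                                           orientation_histogram(parent)).sum())
        return float(np.mean(sims))

    img = np.random.default_rng(0).random((256, 256))   # stand-in for a test image
    print(f"self-similarity: {self_similarity(img):.3f}")
    ```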

  10. Image-guided radiotherapy quality control: Statistical process control using image similarity metrics.

    Science.gov (United States)

    Shiraishi, Satomi; Grams, Michael P; Fong de Los Santos, Luis E

    2018-05-01

    The purpose of this study was to demonstrate an objective quality control framework for the image review process. A total of 927 cone-beam computed tomography (CBCT) registrations were retrospectively analyzed for 33 bilateral head and neck cancer patients who received definitive radiotherapy. Two registration tracking volumes (RTVs) - cervical spine (C-spine) and mandible - were defined, within which a similarity metric was calculated and used as a registration quality tracking metric over the course of treatment. First, sensitivity to large misregistrations was analyzed for normalized cross-correlation (NCC) and mutual information (MI) in the context of statistical analysis. The distribution of metrics was obtained for displacements that varied according to a normal distribution with a standard deviation of σ = 2 mm, and the detectability of displacements greater than 5 mm was investigated. Then, similarity metric control charts were created using a statistical process control (SPC) framework to objectively monitor the image registration and review process. Patient-specific control charts were created using NCC values from the first five fractions to set a patient-specific process capability limit. Population control charts were created using the average of the first five NCC values for all patients in the study. For each patient, the similarity metrics were calculated as a function of unidirectional translation, referred to as the effective displacement. Patient-specific action limits corresponding to 5 mm effective displacements were defined. Furthermore, effective displacements of the ten registrations with the lowest similarity metrics were compared with the three degrees-of-freedom (3DoF) couch displacement required to align the anatomical landmarks. Normalized cross-correlation identified suboptimal registrations more effectively than MI within the framework of SPC. Deviations greater than 5 mm were detected at 2.8σ and 2.1σ from the mean for NCC and MI
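
    A minimal sketch of the two ingredients named in the abstract, normalized cross-correlation as the similarity metric and an SPC-style control limit derived from the first five fractions; the 3-sigma limit and the synthetic volumes are illustrative assumptions rather than the study's patient-specific action limits.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation between two same-shaped arrays."""
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    rng = np.random.default_rng(3)
    reference = rng.random((64, 64, 32))                 # toy CBCT-like volume

    # Patient-specific limit from the first five fractions (well-registered here).
    first_five = [ncc(reference, reference + rng.normal(0, 0.05, reference.shape))
                  for _ in range(5)]
    mean, sd = np.mean(first_five), np.std(first_five, ddof=1)
    lower_limit = mean - 3 * sd                          # flag NCC values below this

    new_fraction = ncc(reference, reference + rng.normal(0, 0.3, reference.shape))
    print(f"NCC = {new_fraction:.3f}, flagged: {new_fraction < lower_limit}")
    ```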

  11. [Establishment of the mathematic model of total quantum statistical moment standard similarity for application to medical theoretical research].

    Science.gov (United States)

    He, Fu-yuan; Deng, Kai-wen; Huang, Sheng; Liu, Wen-long; Shi, Ji-lian

    2013-09-01

    The paper aims to elucidate and establish a new mathematical model, the total quantum statistical moment standard similarity (TQSMSS), on the basis of the original total quantum statistical moment model, and to illustrate the application of the model to medical theoretical research. The model was established by combining the statistical moment principle with the properties of the normal distribution probability density function, then validated and illustrated by the pharmacokinetics of three ingredients in Buyanghuanwu decoction and three data-analytical methods for them, and by analysis of chromatographic fingerprints for various extracts obtained by dissolving the Buyanghuanwu-decoction extract in solvents of different solubility parameters. The established model consists of five main parameters: (1) total quantum statistical moment similarity ST, the area of overlap between two normal distribution probability density curves obtained by conversion of the two TQSM parameters; (2) total variability DT, a confidence limit of standard normal accumulation probability equal to the absolute difference between the two normal accumulation probabilities integrated up to the intersection of their curves; (3) total variable probability 1-Ss, the standard normal distribution probability within the interval DT; (4) total variable probability (1-beta)alpha; and (5) stable confidence probability beta(1-alpha): the correct probability for making positive and negative conclusions under confidence coefficient alpha. With the model, the TQSMSS similarities of the pharmacokinetics of the three ingredients in Buyanghuanwu decoction and of the three data-analytical methods for them were in the range 0.3852-0.9875, illuminating their differing pharmacokinetic behaviors; and the TQSMSS similarities (ST) of the chromatographic fingerprints for the various extracts were in the range 0.6842-0.9992, which showed different constituents
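
    The first parameter, ST, is defined above as the area of overlap between two normal probability density curves; that quantity is straightforward to compute numerically, as in this sketch (the integration grid and the example parameters are arbitrary):

    ```python
    import numpy as np
    from scipy import stats

    def normal_overlap(mu1, sd1, mu2, sd2, num=20001):
        """Area shared by two normal densities: the integral of min(f, g)."""
        lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
        hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
        x = np.linspace(lo, hi, num)
        f, g = stats.norm.pdf(x, mu1, sd1), stats.norm.pdf(x, mu2, sd2)
        return float(np.trapz(np.minimum(f, g), x))

    print(normal_overlap(0.0, 1.0, 0.0, 1.0))   # identical curves: ~1.0
    print(normal_overlap(0.0, 1.0, 2.0, 1.5))   # separated curves: < 1
    ```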

  12. Statistical analysis of metallicity in spiral galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Galeotti, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale)

    1981-04-01

    A principal component analysis of metallicity and other integral properties of 33 spiral galaxies is presented; the parameters involved are: morphological type, diameter, luminosity and metallicity. From the statistical analysis it is concluded that the sample has only two significant dimensions, and additional tests, involving different parameters, show similar results. Thus it seems that only type and luminosity are independent variables, with the other integral properties of spiral galaxies correlated with them.

  13. Statistical Power Analysis with Missing Data A Structural Equation Modeling Approach

    CERN Document Server

    Davey, Adam

    2009-01-01

    Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread application. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions, such as: How does missing data affect the statistical power of a study? How much power is likely with different amounts and types

  14. Selected papers on analysis, probability, and statistics

    CERN Document Server

    Nomizu, Katsumi

    1994-01-01

    This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.

  15. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  16. Expanding the boundaries of local similarity analysis.

    Science.gov (United States)

    Durno, W Evan; Hanson, Niels W; Konwar, Kishori M; Hallam, Steven J

    2013-01-01

    Pairwise comparison of time series data for both local and time-lagged relationships is a computationally challenging problem relevant to many fields of inquiry. The Local Similarity Analysis (LSA) statistic identifies the existence of local and lagged relationships, but determining significance through a p-value has been algorithmically cumbersome due to an intensive permutation test, shuffling rows and columns and repeatedly calculating the statistic. Furthermore, this p-value is calculated with the assumption of normality -- a statistical luxury dissociated from most real world datasets. To improve the performance of LSA on big datasets, an asymptotic upper bound on the p-value calculation was derived without the assumption of normality. This change in the bound calculation markedly improved computational speed from O(pm²n) to O(m²n), where p is the number of permutations in a permutation test, m is the number of time series, and n is the length of each time series. The bounding process is implemented as a computationally efficient software package, FASTLSA, written in C and optimized for threading on multi-core computers, improving its practical computation time. We computationally compare our approach to previous implementations of LSA, demonstrate broad applicability by analyzing time series data from public health, microbial ecology, and social media, and visualize resulting networks using the Cytoscape software. The FASTLSA software package expands the boundaries of LSA allowing analysis on datasets with millions of co-varying time series. Mapping metadata onto force-directed graphs derived from FASTLSA allows investigators to view correlated cliques and explore previously unrecognized network relationships. The software is freely available for download at: http://www.cmde.science.ubc.ca/hallam/fastLSA/.
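
    For readers unfamiliar with the LSA statistic itself, a compact sketch of its core dynamic program follows: after a rank-based normal-score transform, it finds the strongest positive or negative local association over a set of allowed time lags, Smith-Waterman style. The permutation test and the asymptotic bound derived in the paper are not reproduced here; the transform and normalization details are common LSA conventions, assumed rather than quoted.

    ```python
    import numpy as np
    from scipy import stats

    def lsa_score(x, y, max_delay=3):
        """Local Similarity score: the strongest positive or negative local,
        possibly time-lagged association between two series."""
        n = len(x)
        xs_all = stats.norm.ppf(stats.rankdata(x) / (n + 1))   # normal-score transform
        ys_all = stats.norm.ppf(stats.rankdata(y) / (n + 1))
        best = 0.0
        for d in range(-max_delay, max_delay + 1):
            xs = xs_all[max(0, d):n + min(0, d)]
            ys = ys_all[max(0, -d):n + min(0, -d)]
            pos = neg = 0.0
            for xi, yi in zip(xs, ys):
                pos = max(0.0, pos + xi * yi)   # running sums that reset at zero,
                neg = max(0.0, neg - xi * yi)   # as in Smith-Waterman local alignment
                best = max(best, pos, neg)
        return best / n

    rng = np.random.default_rng(1)
    x = rng.normal(size=100)
    y = np.roll(x, 2) + rng.normal(0, 0.5, size=100)   # y trails x by two steps
    print(f"LS score: {lsa_score(x, y):.3f}")
    ```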

  17. Fundamental statistical features and self-similar properties of tagged networks

    International Nuclear Information System (INIS)

    Palla, Gergely; Farkas, Illes J; Pollner, Peter; Vicsek, Tamas; Derenyi, Imre

    2008-01-01

    We investigate the fundamental statistical features of tagged (or annotated) networks having a rich variety of attributes associated with their nodes. Tags (attributes, annotations, properties, features, etc) provide essential information about the entity represented by a given node, thus, taking them into account represents a significant step towards a more complete description of the structure of large complex systems. Our main goal here is to uncover the relations between the statistical properties of the node tags and those of the graph topology. In order to better characterize the networks with tagged nodes, we introduce a number of new notions, including tag-assortativity (relating link probability to node similarity), and new quantities, such as node uniqueness (measuring how rarely the tags of a node occur in the network) and tag-assortativity exponent. We apply our approach to three large networks representing very different domains of complex systems. A number of the tag related quantities display analogous behaviour (e.g. the networks we studied are tag-assortative, indicating possible universal aspects of tags versus topology), while some other features, such as the distribution of the node uniqueness, show variability from network to network allowing for pin-pointing large scale specific features of real-world complex networks. We also find that for each network the topology and the tag distribution are scale invariant, and this self-similar property of the networks can be well characterized by the tag-assortativity exponent, which is specific to each system.

  18. Testing statistical significance scores of sequence comparison methods with structure similarity

    Directory of Open Access Journals (Sweden)

    Leunissen Jack AM

    2006-10-01

    Full Text Available Abstract Background In recent years the Smith-Waterman sequence comparison algorithm has gained popularity due to improved implementations and rapidly increasing computing power. However, the quality and sensitivity of a database search are not only determined by the algorithm but also by the statistical significance testing for an alignment. The e-value is the most commonly used statistical validation method for sequence database searching. The CluSTr database and the Protein World database have been created using an alternative statistical significance test: a Z-score based on Monte-Carlo statistics. Several papers have described the superiority of the Z-score as compared to the e-value, using simulated data. We were interested in whether this could be validated when applied to existing, evolutionarily related protein sequences. Results All experiments are performed on the ASTRAL SCOP database. The Smith-Waterman sequence comparison algorithm with both e-value and Z-score statistics is evaluated, using ROC, CVE and AP measures. The BLAST and FASTA algorithms are used as reference. We find that two out of three Smith-Waterman implementations with e-value are better at predicting structural similarities between proteins than the Smith-Waterman implementation with Z-score. SSEARCH especially has very high scores. Conclusion The compute-intensive Z-score does not have a clear advantage over the e-value. The Smith-Waterman implementations give generally better results than their heuristic counterparts. We recommend using the SSEARCH algorithm combined with e-values for pairwise sequence comparisons.

  19. On the Statistical Validation of Technical Analysis

    Directory of Open Access Journals (Sweden)

    Rosane Riera Freire

    2007-06-01

    Full Text Available Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without taking care of (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data - various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities' returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely: triangle, rectangle and head-and-shoulders.

  20. K2 and K2*: efficient alignment-free sequence similarity measurement based on Kendall statistics.

    Science.gov (United States)

    Lin, Jie; Adjeroh, Donald A; Jiang, Bing-Hua; Jiang, Yue

    2018-05-15

    Alignment-free sequence comparison methods can compute the pairwise similarity between a huge number of sequences much faster than sequence-alignment based methods. We propose a new non-parametric alignment-free sequence comparison method, called K2, based on the Kendall statistics. Compared with other state-of-the-art alignment-free comparison methods, K2 demonstrates competitive performance in generating the phylogenetic tree, in evaluating functionally related regulatory sequences, and in computing the edit distance (similarity/dissimilarity) between sequences. Furthermore, the K2 approach is much faster than the other methods. An improved method, K2*, is also proposed, which is able to determine the appropriate algorithmic parameter (length) automatically, without first considering different values. Comparative analysis with the state-of-the-art alignment-free sequence similarity methods demonstrates the superiority of the proposed approaches, especially with increasing sequence length or increasing dataset sizes. The K2 and K2* approaches are implemented in the R language as a package and are freely available for open access (http://community.wvu.edu/daadjeroh/projects/K2/K2_1.0.tar.gz). yueljiang@163.com. Supplementary data are available at Bioinformatics online.
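
    The published construction is more involved, but one plausible reading of a Kendall-statistic-based, alignment-free similarity can be sketched as the rank correlation between k-mer frequency profiles of two sequences; the k-mer featurization and the toy sequences are assumptions of mine, not the K2 algorithm itself.

    ```python
    from itertools import product
    from scipy.stats import kendalltau

    def kmer_profile(seq, k=3):
        """Counts over all 4**k DNA k-mers, in a fixed order."""
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
        return list(counts.values())

    def kendall_similarity(seq1, seq2, k=3):
        tau, _ = kendalltau(kmer_profile(seq1, k), kmer_profile(seq2, k))
        return tau                      # rank correlation of k-mer usage, in [-1, 1]

    a = "ACGTACGTTGCAACGTACGGTTAACGTACGT"
    b = "ACGTACGATGCAACGTACGGTTAACGAACGT"
    print(kendall_similarity(a, b))
    ```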

  1. Using SQL Databases for Sequence Similarity Searching and Analysis.

    Science.gov (United States)

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 John Wiley & Sons, Inc.

  2. A new similarity index for nonlinear signal analysis based on local extrema patterns

    Science.gov (United States)

    Niknazar, Hamid; Motie Nasrabadi, Ali; Shamsollahi, Mohammad Bagher

    2018-02-01

    Common similarity measures for time-domain signals, such as cross-correlation and Symbolic Aggregate approximation (SAX), are not appropriate for nonlinear signal analysis. This is because of the high sensitivity of nonlinear systems to initial points. Therefore, a similarity measure for nonlinear signal analysis must be invariant to initial points and quantify the similarity by considering the main dynamics of signals. The statistical behavior of local extrema (SBLE) method was previously proposed to address this problem. The SBLE similarity index uses quantized amplitudes of local extrema to quantify the dynamical similarity of signals by considering patterns of sequential local extrema. By adding time information of local extrema as well as fuzzifying the quantized values, this work proposes a new similarity index for nonlinear and long-term signal analysis, which extends the SBLE method. These new features provide more information about signals and reduce noise sensitivity through fuzzification. A number of practical tests were performed to demonstrate the ability of the method in nonlinear signal clustering and classification on synthetic data. In addition, epileptic seizure detection based on electroencephalography (EEG) signal processing was performed using the proposed similarity index, demonstrating the method's potential as a real-world application tool.

  3. Gender similarities and differences.

    Science.gov (United States)

    Hyde, Janet Shibley

    2014-01-01

    Whether men and women are fundamentally different or similar has been debated for more than a century. This review summarizes major theories designed to explain gender differences: evolutionary theories, cognitive social learning theory, sociocultural theory, and expectancy-value theory. The gender similarities hypothesis raises the possibility of theorizing gender similarities. Statistical methods for the analysis of gender differences and similarities are reviewed, including effect sizes, meta-analysis, taxometric analysis, and equivalence testing. Then, relying mainly on evidence from meta-analyses, gender differences are reviewed in cognitive performance (e.g., math performance), personality and social behaviors (e.g., temperament, emotions, aggression, and leadership), and psychological well-being. The evidence on gender differences in variance is summarized. The final sections explore applications of intersectionality and directions for future research.

  4. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  5. Statistical Analysis of Zebrafish Locomotor Response.

    Science.gov (United States)

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
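
    The two-sample Hotelling's T-squared test used in the study is standard and easy to state; a minimal sketch for comparing mean locomotor profiles of two groups follows, with synthetic data standing in for the larval activity measurements.

    ```python
    import numpy as np
    from scipy import stats

    def hotelling_two_sample(X, Y):
        """Two-sample Hotelling's T-squared test for equal mean vectors.
        X: (n1, p) and Y: (n2, p), one locomotor profile per larva."""
        n1, n2, p = len(X), len(Y), X.shape[1]
        d = X.mean(axis=0) - Y.mean(axis=0)
        S = ((n1 - 1) * np.cov(X, rowvar=False) +
             (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)   # pooled covariance
        t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)
        f_stat = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
        return t2, stats.f.sf(f_stat, p, n1 + n2 - p - 1)

    rng = np.random.default_rng(0)
    group_a = rng.normal(0.0, 1.0, size=(30, 5))    # 30 larvae x 5 time bins
    group_b = rng.normal(0.4, 1.0, size=(30, 5))    # strain with shifted activity
    print(hotelling_two_sample(group_a, group_b))   # (T^2, p-value)
    ```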

  6. Universal self-similarity of propagating populations.

    Science.gov (United States)

    Eliazar, Iddo; Klafter, Joseph

    2010-07-01

    This paper explores the universal self-similarity of propagating populations. The following general propagation model is considered: particles are randomly emitted from the origin of a d-dimensional Euclidean space and propagate randomly and independently of each other in space; all particles share a statistically common--yet arbitrary--motion pattern; each particle has its own random propagation parameters--emission epoch, motion frequency, and motion amplitude. The universally self-similar statistics of the particles' displacements and first passage times (FPTs) are analyzed: statistics which are invariant with respect to the details of the displacement and FPT measurements and with respect to the particles' underlying motion pattern. Analysis concludes that the universally self-similar statistics are governed by Poisson processes with power-law intensities and by the Fréchet and Weibull extreme-value laws.

  7. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations

  8. Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis

    DEFF Research Database (Denmark)

    Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow one to i) determine natural groups or clusters of control strategies with a similar behaviour, ii) find and interpret hidden, complex and causal relation features in the data set and iii) identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation

  9. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a textbook on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  10. Extended local similarity analysis (eLSA) of microbial community and other time series data with replicates.

    Science.gov (United States)

    Xia, Li C; Steele, Joshua A; Cram, Jacob A; Cardon, Zoe G; Simmons, Sheri L; Vallino, Joseph J; Fuhrman, Jed A; Sun, Fengzhu

    2011-01-01

    The increasing availability of time series microbial community data from metagenomics and other molecular biological studies has enabled the analysis of large-scale microbial co-occurrence and association networks. Among the many analytical techniques available, the Local Similarity Analysis (LSA) method is unique in that it captures local and potentially time-delayed co-occurrence and association patterns in time series data that cannot otherwise be identified by ordinary correlation analysis. However, LSA, as originally developed, does not consider time series data with replicates, which hinders the full exploitation of available information. With replicates, it is possible to understand the variability of the local similarity (LS) score and to obtain its confidence interval. We extended our LSA technique to time series data with replicates and termed it extended LSA, or eLSA. Simulations showed the capability of eLSA to capture subinterval and time-delayed associations. We implemented the eLSA technique as an easy-to-use analytic software package. The software pipeline integrates data normalization, statistical correlation calculation, statistical significance evaluation, and association network construction steps. We applied the eLSA technique to microbial community and gene expression datasets, where unique time-dependent associations were identified. The extended LSA analysis technique was demonstrated to reveal statistically significant local and potentially time-delayed association patterns in replicated time series data beyond those of ordinary correlation analysis. These statistically significant associations can provide insights into the real dynamics of biological systems. The newly designed eLSA software efficiently streamlines the analysis and is freely available from the eLSA homepage, which can be accessed at http://meta.usc.edu/softs/lsa.

  11. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  12. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
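
    One way to realize the inferential, Thurstone-like analysis the abstract alludes to is a Bradley-Terry model fitted by binary logistic regression, which yields scale values with standard errors and p-values. The preference counts below and the choice of reference level are hypothetical; the paper's exact model may differ.

    ```python
    import numpy as np
    import statsmodels.api as sm

    # Each trial: (item_a, item_b, 1 if item_a was preferred, else 0); counts made up.
    trials = ([(1, 0, 1)] * 18 + [(1, 0, 0)] * 2 +
              [(2, 1, 1)] * 15 + [(2, 1, 0)] * 5 +
              [(3, 2, 1)] * 11 + [(3, 2, 0)] * 9)
    n_items = 4

    # Bradley-Terry design: +1 for item_a, -1 for item_b. Item 0 is the reference
    # (its column is dropped), so coefficients are scale values relative to it.
    X = np.zeros((len(trials), n_items))
    y = np.empty(len(trials))
    for r, (a, b, pref) in enumerate(trials):
        X[r, a], X[r, b], y[r] = 1.0, -1.0, pref
    fit = sm.Logit(y, X[:, 1:]).fit(disp=0)
    print(fit.params)    # perceptual scale values, cf. a Thurstone scale
    print(fit.pvalues)   # significance of each level's difference from level 0
    ```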

  13. A Self-Organizing Map-Based Approach to Generating Reduced-Size, Statistically Similar Climate Datasets

    Science.gov (United States)

    Cabell, R.; Delle Monache, L.; Alessandrini, S.; Rodriguez, L.

    2015-12-01

    Climate-based studies require large amounts of data in order to produce accurate and reliable results. Many of these studies have used 30-plus year data sets in order to produce stable and high-quality results, and as a result, many such data sets are available, generally in the form of global reanalyses. While the analysis of these data leads to high-fidelity results, processing them can be very computationally expensive. This computational burden prevents the utilization of these data sets for certain applications, e.g., when rapid response is needed in crisis management and disaster planning scenarios resulting from a release of toxic material into the atmosphere. We have developed a methodology to reduce large climate datasets to more manageable sizes while retaining statistically similar results when used to produce ensembles of possible outcomes. We do this by employing a Self-Organizing Map (SOM) algorithm to analyze general patterns of meteorological fields over a regional domain of interest and produce a small set of "typical days" with which to generate the model ensemble. The SOM algorithm takes as input a set of vectors and generates a 2D map of representative vectors deemed most similar to the input set and to each other. Input predictors are selected that are correlated with the model output, which in our case is an Atmospheric Transport and Dispersion (T&D) model that is highly dependent on surface winds and boundary layer depth. To choose a subset of "typical days," each input day is assigned to its closest SOM map node vector and then ranked by distance. Each node vector is treated as a distribution and days are sampled from them by percentile. Using a 30-node SOM, with sampling every 20th percentile, we have been able to reduce 30 years of the Climate Forecast System Reanalysis (CFSR) data for the month of October to 150 "typical days." To estimate the skill of this approach, the "Measure of Effectiveness" (MOE) metric is used to compare area and overlap
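
    A minimal sketch of the reduction step described above, assuming a toy SOM trained with NumPy: each day is assigned to its best-matching node, ranked by distance to the node vector, and sampled at every 20th percentile, echoing the 30-node, 150-day example. The grid size, learning-rate decay, and random data are illustrative assumptions.

    ```python
    import numpy as np

    def train_som(data, rows=5, cols=6, iters=5000, lr0=0.5, sigma0=2.0, seed=0):
        """Minimal 2-D SOM; returns (rows*cols, n_features) node vectors."""
        rng = np.random.default_rng(seed)
        nodes = rng.normal(size=(rows * cols, data.shape[1]))
        grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
        for t in range(iters):
            x = data[rng.integers(len(data))]
            bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))       # best-matching unit
            frac = t / iters
            lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
            h = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
            nodes += lr * h[:, None] * (x - nodes)                # pull neighborhood in
        return nodes

    rng = np.random.default_rng(1)
    days = rng.normal(size=(900, 40))       # one feature vector per day (toy stand-in)
    nodes = train_som(days)                 # 30 nodes, as in the study

    # Assign days to nodes, then sample every 20th percentile of within-node distance.
    assign = np.argmin(((days[:, None, :] - nodes[None]) ** 2).sum(-1), axis=1)
    typical = []
    for k in range(len(nodes)):
        members = np.where(assign == k)[0]
        if members.size:
            order = members[np.argsort(np.linalg.norm(days[members] - nodes[k], axis=1))]
            typical += [order[round(q / 100 * (order.size - 1))] for q in (0, 20, 40, 60, 80)]
    print(f"reduced {len(days)} days to {len(set(typical))} typical days")
    ```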

  14. Rweb: Web-based Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Jeff Banfield

    1999-03-01

    Full Text Available Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW accessible data sets, so you may run Rweb on your own data. Rweb can provide a four window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support Javascript. There is also a set of point and click modules under development for use in introductory statistics courses.

  15. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer's disease and shape changes of the corpus ... and mind. Statistics represents a quintessential part of such investigations as they are preceded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge ... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications...

  16. Statistical Analysis of Sport Movement Observations: the Case of Orienteering

    Science.gov (United States)

    Amouzandeh, K.; Karimipour, F.

    2017-09-01

    Study of movement observations is becoming more popular in several applications. In particular, analyzing sport movement time series has been considered a demanding area. However, most of the attempts at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.

  17. Neutron activation and statistical analysis of pottery from Thera, Greece

    International Nuclear Information System (INIS)

    Kilikoglou, V.; Grimanis, A.P.; Karayannis, M.I.

    1990-01-01

    Neutron activation analysis, in combination with multivariate analysis of the generated data, was used for the chemical characterization of prehistoric pottery from the Greek islands of Thera, Melos (islands with similar geology) and Crete. The statistical procedure which proved that Theran pottery could be distinguished from Melian is described. This discrimination, attained for the first time, was mainly based on the concentrations of the trace elements Sm, Yb, Lu and Cr. Also, Cretan imports to both Thera and Melos were clearly separable from local products. (author) 22 refs.; 1 fig.; 4 tabs

  18. BSSF: a fingerprint based ultrafast binding site similarity search and function analysis server

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-01-01

    Full Text Available Abstract Background Genome sequencing and post-genomics projects such as structural genomics are extending the frontier of the study of the sequence-structure-function relationship of genes and their products. Although many sequence/structure-based methods have been devised with the aim of deciphering this delicate relationship, there still remain large gaps in this fundamental problem, which continuously drives researchers to develop novel methods to extract relevant information from sequences and structures and to infer the functions of newly identified genes by genomics technology. Results Here we present an ultrafast method, named BSSF (Binding Site Similarity & Function), which enables researchers to conduct similarity searches in a comprehensive three-dimensional binding site database extracted from PDB structures. This method utilizes a fingerprint representation of the binding site and a validated statistical Z-score function scheme to judge the similarity between the query and database items, even if their similarities are only constrained in a sub-pocket. This fingerprint based similarity measurement was also validated on a known binding site dataset by comparing with geometric hashing, which is a standard 3D similarity method. The comparison clearly demonstrated the utility of this ultrafast method. After conducting the database search, the hit list is further analyzed to provide basic statistical information about the occurrences of Gene Ontology terms and Enzyme Commission numbers, which may benefit researchers by helping them to design further experiments to study the query proteins. Conclusions This ultrafast web-based system will not only help researchers interested in drug design and structural genomics to identify similar binding sites, but also assist them by providing further analysis of the hit list from the database search.

  19. Chromatographic fingerprint similarity analysis for pollutant source identification

    International Nuclear Information System (INIS)

    Xie, Juan-Ping; Ni, Hong-Gang

    2015-01-01

    In the present study, a similarity analysis method was proposed to evaluate the source-sink relationships among environmental media for polybrominated diphenyl ethers (PBDEs), which were taken as the representative contaminants. Chromatographic fingerprint analysis has been widely used in the fields of natural products chemistry and forensic chemistry, but its application to environmental science has been limited. We established a library of various sources of media containing contaminants (e.g., plastics), recognizing that the establishment of a more comprehensive library allows for a better understanding of the sources of contamination. We then compared an environmental complex mixture (e.g., sediment, soil) with the profiles in the library. These comparisons could be used as the first step in source tracking. The cosine similarities between plastic and soil or sediment ranged from 0.53 to 0.68, suggesting that plastic in electronic waste is an important source of PBDEs in the environment, but it is not the only source. A similarity analysis between soil and sediment indicated that they have a source-sink relationship. Generally, the similarity analysis method can encompass more relevant information of complex mixtures in the environment than a profile-based approach that only focuses on target pollutants. There is an inherent advantage to creating a data matrix containing all peaks and their relative levels after matching the peaks based on retention times and peak areas. This data matrix can be used for source identification via a similarity analysis without quantitative or qualitative analysis of all chemicals in a sample. - Highlights: • Chromatographic fingerprint analysis can be used as the first step in source tracking. • Similarity analysis method can encompass more relevant information of pollution. • The fingerprints strongly depend on the chromatographic conditions. • A more effective and robust method for identifying similarities is required
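
    The data-matrix idea in the highlights can be sketched simply: match peaks across chromatograms on a common retention-time grid, accumulate peak areas, and score the profiles by cosine similarity. The tolerance, the greedy merging, and the toy peak lists are assumptions, not the study's exact matching procedure.

    ```python
    import numpy as np

    def fingerprint_similarity(peaks_a, peaks_b, rt_tol=0.1):
        """Match peaks across two chromatograms on a shared retention-time grid,
        then score the peak-area profiles by cosine similarity."""
        rts = sorted({rt for rt, _ in peaks_a} | {rt for rt, _ in peaks_b})
        grid = []
        for rt in rts:                       # greedy merge of close retention times
            if not grid or rt - grid[-1] > rt_tol:
                grid.append(rt)
        grid = np.array(grid)

        def profile(peaks):
            v = np.zeros(len(grid))
            for rt, area in peaks:
                v[np.argmin(np.abs(grid - rt))] += area
            return v

        a, b = profile(peaks_a), profile(peaks_b)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # (retention time, peak area) pairs: a plastic extract vs. a sediment sample.
    plastic = [(1.20, 5.0), (2.45, 9.1), (3.90, 2.2), (5.05, 7.4)]
    sediment = [(1.22, 3.1), (2.47, 6.0), (4.60, 4.5), (5.02, 5.9)]
    print(f"cosine similarity: {fingerprint_similarity(plastic, sediment):.2f}")
    ```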

  1. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    Science.gov (United States)

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between the outcomes. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analysis with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistics and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML due to non-normal data distribution is marginal and that the estimates from MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  2. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  3. STATISTICAL ANALYSIS OF SPORT MOVEMENT OBSERVATIONS: THE CASE OF ORIENTEERING

    Directory of Open Access Journals (Sweden)

    K. Amouzandeh

    2017-09-01

    Full Text Available Study of movement observations is becoming more popular in several applications. In particular, analyzing sport movement time series has been considered a demanding area. However, most of the attempts made at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes a statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and the non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the “orienteering” sport is presented and statistically analyzed.

  4. Self-similar analysis of the spherical implosion process

    International Nuclear Information System (INIS)

    Ishiguro, Yukio; Katsuragi, Satoru.

    1976-07-01

    The implosion processes caused by laser-heating ablation have been studied by self-similarity analysis. Attention is paid to the possible existence of a self-similar solution which reproduces the implosion process of high compression. Details of the self-similar analysis are reproduced, and conclusions are drawn quantitatively on the gas compression by a single shock. The compression process by a sequence of shocks is discussed within the self-similar framework. The gas motion followed by a homogeneous isentropic compression is represented by a self-similar motion. (auth.)

  5. Log-Normality and Multifractal Analysis of Flame Surface Statistics

    Science.gov (United States)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2013-11-01

    The turbulent flame surface is typically highly wrinkled and folded at a multitude of scales controlled by various flame properties. It is useful if the information contained in this complex geometry can be projected onto a simpler regular geometry for the use of spectral, wavelet or multifractal analyses. Here we investigate the local flame surface statistics of a turbulent flame expanding under constant pressure. First, the statistics of the local length ratio are obtained experimentally from high-speed Mie scattering images. For a spherically expanding flame, the length ratio on the measurement plane, at predefined equiangular sectors, is defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments, we convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. Both pdfs are found to be nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame length ratio suggest similarity with dissipation-rate quantities, which motivates multifractal analysis. Currently at Indian Institute of Science, India.
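
    A minimal sketch of the kind of log-normality check described above, using synthetic length-ratio samples in place of the Mie-scattering measurements:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical length-ratio samples standing in for measurements extracted
    # from high-speed images of an expanding flame front.
    ratios = rng.lognormal(mean=0.1, sigma=0.25, size=500)

    # Fit a log-normal with location fixed at zero and test the fit.
    shape, loc, scale = stats.lognorm.fit(ratios, floc=0)
    ks = stats.kstest(ratios, "lognorm", args=(shape, loc, scale))
    print(f"sigma={shape:.3f}, median={scale:.3f}, KS p-value={ks.pvalue:.3f}")
    ```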

  6. Similarity-based pattern analysis and recognition

    CERN Document Server

    Pelillo, Marcello

    2013-01-01

    This accessible text/reference presents a coherent overview of the emerging field of non-Euclidean similarity learning. The book presents a broad range of perspectives on similarity-based pattern analysis and recognition methods, from purely theoretical challenges to practical, real-world applications. The coverage includes both supervised and unsupervised learning paradigms, as well as generative and discriminative models. Topics and features: explores the origination and causes of non-Euclidean (dis)similarity measures, and how they influence the performance of traditional classification alg

  7. FTree query construction for virtual screening: a statistical analysis.

    Science.gov (United States)

    Gerlach, Christof; Broughton, Howard; Zaliani, Andrea

    2008-02-01

    FTrees (FT) is a known chemoinformatic tool able to condense molecular descriptions into a graph object and to search for actives in large databases using graph similarity. The query graph is classically derived from a known active molecule, or a set of actives, for which a similar compound has to be found. Recently, FT similarity has been extended to fragment space, widening its capabilities. If a user were able to build a knowledge-based FT query from information other than a known active structure, the similarity search could be combined with other, normally separate, fields like de-novo design or pharmacophore searches. With this aim in mind, we performed a comprehensive analysis of several databases in terms of FT description and provide a basic statistical analysis of the FT spaces so far at hand. Vendors' catalogue collections and the MDDR were used as sources of potential and known "actives", respectively. With the results reported herein, a set of ranges, mean values and standard deviations for several query parameters is presented in order to set a reference guide for users. Applications showing how to use this information in FT query building are also provided, using a newly built 3D pharmacophore from 57 5HT-1F agonists and a published one which was used in virtual screening for tRNA-guanine transglycosylase (TGT) inhibitors.

  8. The effective thermal conductivity of porous media based on statistical self-similarity

    International Nuclear Information System (INIS)

    Kou Jianlong; Wu Fengmin; Lu Hangjun; Xu Yousheng; Song Fuquan

    2009-01-01

    A fractal model is presented based on the thermal-electrical analogy technique and the statistical self-similarity of fractal saturated porous media. A dimensionless effective thermal conductivity of saturated fractal porous media is studied via the relationship between the dimensionless effective thermal conductivity and the geometrical parameters of the porous media, with no empirical constant. Through this study, it is shown that the dimensionless effective thermal conductivity decreases with increasing porosity (φ) and pore-area fractal dimension (D_f) when k_s/k_g > 1; the opposite trend is observed when k_s/k_g < 1. The model predictions are compared with existing experimental data and the results show that they are in good agreement.

  9. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared and Kolmogorov-Smirnov to lesser-known, but generally much more powerful, tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper and Tiku. Thanks to the component-based design and the usage of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated into experimental software frameworks. The Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms and the computational features of the Toolkit, and describe the code validation.
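
    The Toolkit itself is not shown here; as a stand-in, the sketch below runs three of the named goodness-of-fit tests on synthetic data using SciPy (the Goodman and Tiku tests have no SciPy counterpart):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    sample = rng.normal(loc=0.0, scale=1.0, size=200)

    # Kolmogorov-Smirnov against a fully specified normal distribution
    print(stats.kstest(sample, "norm", args=(0, 1)))

    # Anderson-Darling (returns the statistic plus critical values)
    print(stats.anderson(sample, dist="norm"))

    # Cramer-von Mises (available in SciPy >= 1.6)
    print(stats.cramervonmises(sample, "norm", args=(0, 1)))
    ```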

  10. [Confidence interval or p-value--similarities and differences between two important methods of statistical inference of quantitative studies].

    Science.gov (United States)

    Harari, Gil

    2014-01-01

    Statistical significance, also known as the p-value, and the CI (confidence interval) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about statistical probability and about the conclusions regarding the clinical significance of study findings. This article is intended to describe the two methodologies, compare the methods, assess their suitability for the different needs of study-results analysis, and explain the situations in which each method should be used.
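
    A small illustration of how the two measures complement each other, on a hypothetical two-sample comparison; the t test yields the p-value, and a 95% confidence interval for the mean difference is computed from the same pooled-variance quantities.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    a = rng.normal(5.0, 1.0, 40)   # hypothetical measurements, group A
    b = rng.normal(5.6, 1.0, 40)   # hypothetical measurements, group B

    t, p = stats.ttest_ind(a, b)   # two-sample t test (pooled variance)

    # 95% CI for the mean difference, matching the pooled-variance t test
    n1, n2 = len(a), len(b)
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    diff = a.mean() - b.mean()
    tcrit = stats.t.ppf(0.975, n1 + n2 - 2)
    print(f"p = {p:.4f}, 95% CI = [{diff - tcrit * se:.2f}, {diff + tcrit * se:.2f}]")
    ```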

  11. Statistical considerations on safety analysis

    International Nuclear Information System (INIS)

    Pal, L.; Makai, M.

    2004-01-01

    The authors have investigated the statistical methods applied to the safety analysis of nuclear reactors and arrived at alarming conclusions: a series of calculations with the generally appreciated safety code ATHLET was carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, we came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Another formerly unnoticed conclusion of the previous ATHLET calculations is that certain innocent-looking parameters (like the wall roughness factor, the number of bubbles per unit volume, or the number of droplets per unit volume) can considerably influence such output parameters as water levels. The authors are concerned about the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, the authors suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal-hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined because measured data have errors and calculated data are often obtained from a more or less accurate model. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which the

  12. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis. Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly regarded ‘Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  13. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  14. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass appearances. In fact, statistics present a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  15. Multivariate Statistical Analysis of Water Quality data in Indian River Lagoon, Florida

    Science.gov (United States)

    Sayemuzzaman, M.; Ye, M.

    2015-12-01

    The Indian River Lagoon, part of the longest barrier island complex in the United States, is a region of particular concern to environmental scientists because of the rapid rate of human development throughout the region and its geographical position between the colder temperate zone and the warmer sub-tropical zone. Surface water quality analysis in this region therefore continually yields new information. In the present study, multivariate statistical procedures were applied to analyze the spatial and temporal water quality in the Indian River Lagoon over the period 1998-2013. Twelve parameters were analyzed at twelve key water monitoring stations in and beside the lagoon on monthly datasets (a total of 27,648 observations). The dataset was treated using cluster analysis (CA), principal component analysis (PCA) and non-parametric trend analysis. The CA was used to group the twelve monitoring stations into four clusters, with stations having similar surrounding characteristics falling in the same group. The PCA was then applied to each group to find the important water quality parameters. The principal components (PCs) PC1 to PC5 were retained based on explained cumulative variances of 75% to 85% in each cluster group. Nutrient species (phosphorus and nitrogen), salinity, specific conductivity and erosion factors (TSS, turbidity) were the major variables involved in the construction of the PCs. Statistically significant positive or negative trends and abrupt trend shifts were detected by applying the Mann-Kendall trend test and the Sequential Mann-Kendall (SQMK) test to each individual station for the important water quality parameters. Land use and land cover change patterns, local anthropogenic activities and extreme climate events such as drought might be associated with these trends. This study presents a multivariate statistical assessment in order to obtain better information about the quality of surface water. Thus, effective pollution control/management of the surface
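
    A compact sketch of the CA-plus-PCA workflow described above, run on a random stand-in for the stations-by-parameters matrix (the Mann-Kendall trend tests are omitted; scikit-learn is used here for convenience, not because the study used it):

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(3)
    # Hypothetical matrix: 12 stations x 12 monthly-averaged quality parameters.
    X = rng.normal(size=(12, 12))

    Z = StandardScaler().fit_transform(X)                 # standardize parameters
    groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)

    pca = PCA(n_components=5).fit(Z)                      # first five PCs
    print("station groups:", groups)
    print("cumulative variance:", pca.explained_variance_ratio_.cumsum().round(2))
    ```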

  16. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.

  17. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro

  18. Application of instrumental neutron activation analysis and multivariate statistical methods to archaeological Syrian ceramics

    International Nuclear Information System (INIS)

    Bakraji, E. H.; Othman, I.; Sarhil, A.; Al-Somel, N.

    2002-01-01

    Instrumental neutron activation analysis (INAA) has been utilized in the analysis of thirty-seven archaeological ceramic fragment samples collected from the Tal Al-Wardiate site, Missiaf town, Hama city, Syria. 36 chemical elements were determined. These elemental concentrations were processed using two multivariate statistical methods, cluster analysis and factor analysis, in order to determine similarities and correlations between the various samples. Factor analysis confirms that the samples were correctly classified by cluster analysis. The results showed that the samples can be considered to be manufactured using three different sources of raw material. (author)
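
    A minimal sketch of the cluster-analysis step on hypothetical elemental-concentration data (the confirmatory factor analysis is omitted; Ward linkage and the three-group cut are illustrative choices, not taken from the paper):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(5)
    # Hypothetical matrix: 37 sherds x 36 standardized (log) element concentrations.
    X = rng.normal(size=(37, 36))

    Z = linkage(X, method="ward")                      # agglomerative clustering
    groups = fcluster(Z, t=3, criterion="maxclust")    # cut into 3 raw-material groups
    print("group sizes:", np.bincount(groups)[1:])
    ```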

  19. Statistical analysis of dynamic parameters of the core

    International Nuclear Information System (INIS)

    Ionov, V.S.

    2007-01-01

    The transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored with statistical analysis tools. The transients have sufficient duration, a few channels for chamber currents and reactivity, and also some channels for technological parameters. From these values the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and from them the values of the measured dynamic parameters were restored as a result of the analysis. The mathematical means of statistical analysis used were: approximation (A), filtration (F), rejection (R), estimation of the parameters of descriptive statistics (DSP), correlation characteristics (kk), regression analysis (KP), prognosis (P) and statistical criteria (SC). The calculation procedures were implemented in the MATLAB language. The sources of methodical and statistical errors are presented: inadequacy of the model, the precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the technique for processing the registered data, etc. Examples of the results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered (Authors)

  20. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Estimating parameters with confidence intervals and testing statistical hypotheses are used in statistical analysis to draw conclusions about a population from a sample extracted from it. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained when using confidence intervals and statistical hypothesis testing. Whereas statistical hypothesis testing gives only a "yes" or "no" answer to some questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).

  1. Collecting operational event data for statistical analysis

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-09-01

    This report gives guidance for collecting operational data to be used for statistical analysis, especially analysis of event counts. It discusses how to define the purpose of the study, the unit (system, component, etc.) to be studied, events to be counted, and demand or exposure time. Examples are given of classification systems for events in the data sources. A checklist summarizes the essential steps in data collection for statistical analysis

  2. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2013-01-01

    Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...

  3. Method for statistical data analysis of multivariate observations

    CERN Document Server

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of inte

  4. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  5. Statistical analysis of anomalous transport in resistive interchange turbulence

    International Nuclear Information System (INIS)

    Sugama, Hideo; Wakatani, Masahiro.

    1992-01-01

    A new anomalous transport model for resistive interchange turbulence is derived from a statistical analysis applying the two-scale direct-interaction approximation to resistive magnetohydrodynamic equations with a gravity term. Our model is similar to the K-ε model for the eddy viscosity of turbulent shear flows in that the anomalous transport coefficients are expressed in terms of the turbulent kinetic energy K and its dissipation rate ε, while K and ε are determined by transport equations. This anomalous transport model can describe nonlocal effects, such as those from boundary conditions, which cannot be treated by conventional models based on transport coefficients represented by locally determined plasma parameters. (author)

  6. Consolidity analysis for fully fuzzy functions, matrices, probability and statistics

    Directory of Open Access Journals (Sweden)

    Walaa Ibrahim Gabr

    2015-03-01

    Full Text Available The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in a fully fuzzy environment. The development of systems consolidity theory included its extension to handle new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fractions of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and the solution of least-squares fuzzy linear equations. The approach was also demonstrated to be applicable in a systematic way to new fuzzy probabilistic and statistical problems, including extending conventional probabilistic and statistical analysis to handle fuzzy random data. Applications also covered the consolidity of fuzzy optimization problems. The various numerical examples solved demonstrate that the new consolidity concept is highly effective for solving, in a compact form, the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexity. Finally, it is demonstrated that the suggested fuzzy mathematics can easily be embedded within normal mathematics by building a special fuzzy-function library inside the computational MATLAB toolbox or other similar software languages.

  7. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
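
    The UCL95% used here is a standard one-sided t-based limit on the mean; a minimal sketch with invented concentrations (six values, echoing the six pooled scrape samples):

    ```python
    import numpy as np
    from scipy import stats

    def ucl95(x: np.ndarray) -> float:
        """One-sided upper 95% confidence limit on the mean concentration."""
        n = len(x)
        return x.mean() + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

    # Hypothetical analyte concentrations from six scrape samples (each value
    # already averaged across three analytical determinations).
    conc = np.array([12.1, 10.8, 13.5, 11.9, 12.7, 11.2])
    print(f"UCL95% = {ucl95(conc):.2f}")
    ```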

  8. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  9. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  10. Statistical hot spot analysis of reactor cores

    International Nuclear Information System (INIS)

    Schaefer, H.

    1974-05-01

    This report is an introduction to statistical hot spot analysis. After the definition of the term 'hot spot', a statistical analysis is outlined. The mathematical method is presented; in particular, the formula for the probability of no hot spots in a reactor core is evaluated. A discussion of the boundary conditions of a statistical hot spot analysis is given (technological limits, nominal situation, uncertainties). The application of hot spot analysis to the linear power of pellets and the temperature rise in cooling channels is demonstrated with respect to the test zone of KNK II. Basic quantities, such as the probability of no hot spots, the hot spot potential, the expected hot spot diagram and the cumulative distribution function of hot spots, are discussed. It is shown that the risk of hot channels can be dispersed equally over all subassemblies by an adequate choice of the nominal temperature distribution in the core
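
    A minimal sketch of the central formula under simplifying assumptions not stated in the report (independent channels with identical normal deviations from nominal), in which the probability of no hot spots reduces to Φ(z)^N:

    ```python
    from scipy import stats

    def p_no_hot_spot(limit: float, nominal: float, sigma: float, n_channels: int) -> float:
        """P(no hot spot) = Phi(z)**N with z = (limit - nominal) / sigma,
        assuming independent channels and normal deviations from nominal."""
        z = (limit - nominal) / sigma
        return stats.norm.cdf(z) ** n_channels

    # Invented numbers: a 50-unit margin, 15-unit spread, 169 channels.
    print(p_no_hot_spot(limit=650.0, nominal=600.0, sigma=15.0, n_channels=169))
    ```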

  11. The statistical analysis of anisotropies

    International Nuclear Information System (INIS)

    Webster, A.

    1977-01-01

    One of the many uses to which a radio survey may be put is an analysis of the distribution of the radio sources on the celestial sphere, to find out whether they are bunched into clusters or lie in preferred regions of space. There are many methods of testing for clustering in point processes, and since they are not all equally good, this contribution is presented as a brief guide to what seems to be the best of them. The radio sources certainly do not show very strong clustering and may well be entirely unclustered, so if a statistical method is to be useful it must be both powerful and flexible. A statistic is powerful in this context if it can efficiently distinguish a weakly clustered distribution of sources from an unclustered one, and it is flexible if it can be applied in a way which avoids mistaking defects in the survey for true peculiarities in the distribution of sources. The paper divides clustering statistics into two classes: number density statistics and log N/log S statistics. (Auth.)

  12. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretation and reporting of the research findings. The statistical analysis gives meaning to the meaningless numbers, thereby breathing life into a lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  13. Protein structure similarity from principle component correlation analysis

    Directory of Open Access Journals (Sweden)

    Chou James

    2006-01-01

    Full Text Available Abstract Background Owing to the rapid expansion of protein structure databases in recent years, methods of structure comparison are becoming increasingly effective and important in revealing novel information on functional properties of proteins and their roles in the grand scheme of evolutionary biology. Currently, the structural similarity between two proteins is measured by the root-mean-square deviation (RMSD) in their best-superimposed atomic coordinates. RMSD is the golden rule for measuring structural similarity when the structures are nearly identical; it, however, fails to detect the higher-order topological similarities in proteins that have evolved into different shapes. We propose new algorithms for extracting geometrical invariants of proteins that can be effectively used to identify homologous protein structures or topologies in order to quantify both close and remote structural similarities. Results We measure structural similarity between proteins by correlating the principle components of their secondary structure interaction matrices. In our approach, the Principle Component Correlation (PCC) analysis, a symmetric interaction matrix for a protein structure is constructed with relationship parameters between secondary elements that can take the form of distance, orientation, or other relevant structural invariants. When using a distance-based construction in the presence or absence of an encoded N-to-C terminal sense, there are strong correlations between the principle components of the interaction matrices of structurally or topologically similar proteins. Conclusion The PCC method is extensively tested for protein structures that belong to the same topological class but are significantly different by the RMSD measure. The PCC analysis can also differentiate proteins having similar shapes but different topological arrangements. Additionally, we demonstrate that when using two independently defined interaction matrices, comparison of their maximum
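
    A rough sketch of the PCC idea under simplifying assumptions: build a distance-based interaction matrix for each structure, extract its leading eigenvectors, and correlate matched components. The coordinates are synthetic stand-ins for secondary structure elements, and the paper's handling of component matching and the N-to-C sense is omitted.

    ```python
    import numpy as np

    def principal_components(coords: np.ndarray, k: int = 3) -> np.ndarray:
        """Leading eigenvectors of a pairwise-distance interaction matrix."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        w, v = np.linalg.eigh(d)                  # symmetric matrix -> eigh
        return v[:, np.argsort(-np.abs(w))[:k]]   # components by |eigenvalue|

    rng = np.random.default_rng(2)
    a = rng.normal(size=(40, 3))                  # hypothetical element coordinates, protein A
    b = a + rng.normal(scale=0.1, size=a.shape)   # a perturbed copy, protein B

    pa, pb = principal_components(a), principal_components(b)
    # Correlate matched components; the sign of an eigenvector is arbitrary.
    scores = [abs(np.corrcoef(pa[:, i], pb[:, i])[0, 1]) for i in range(3)]
    print("component correlations:", np.round(scores, 3))
    ```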

  14. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. The system differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or LaTeX. The main part of this paper is an example showing how these components are used together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  15. A Study of Wavelet Analysis and Data Extraction from Second-Order Self-Similar Time Series

    Directory of Open Access Journals (Sweden)

    Leopoldo Estrada Vargas

    2013-01-01

    Full Text Available Statistical analysis and synthesis of self-similar discrete time signals are presented. The analysis equation is formally defined through a special family of basis functions, of which the simplest case matches the Haar wavelet. The original discrete time series is synthesized without loss by a linear combination of the basis functions after some scaling, displacement, and phase shift. The decomposition is then used to synthesize a new second-order self-similar signal with a different Hurst index than the original. The components are also used to describe the behavior of the estimated mean and variance of self-similar discrete time series. It is shown that the sample mean, although unbiased, provides less information about the process mean the higher its Hurst index is. It is also demonstrated that the classical variance estimator is biased, and that the widely accepted aggregated-variance-based estimator of the Hurst index is biased not due to its nature (it is unbiased and has minimal variance) but due to flaws in its implementation. Using the proposed decomposition, the correct estimation of the variance plot is described, as well as its close association with the popular logscale diagram.
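
    The aggregated-variance estimator discussed above fits in a few lines; this naive sketch deliberately uses the classical (biased) variance estimator that the paper critiques, with arbitrary scales and white-noise input:

    ```python
    import numpy as np

    def hurst_aggregated_variance(x: np.ndarray, scales=(1, 2, 4, 8, 16, 32)) -> float:
        """Estimate H from the slope of log Var(aggregated series) vs log m:
        for a second-order self-similar series, Var(X^(m)) ~ m^(2H-2)."""
        variances = []
        for m in scales:
            n = len(x) // m
            agg = x[: n * m].reshape(n, m).mean(axis=1)  # block means at scale m
            variances.append(agg.var(ddof=0))            # classical (biased) estimator
        slope = np.polyfit(np.log(scales), np.log(variances), 1)[0]
        return 1 + slope / 2

    rng = np.random.default_rng(9)
    print(f"H = {hurst_aggregated_variance(rng.normal(size=4096)):.2f}")  # ~0.5 for white noise
    ```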

  16. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  17. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2017-01-01

    The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked, to improve the readabili...

  18. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...
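
    As a small concrete instance of the book's subject, the sketch below computes an empirical ROC curve and AUC for hypothetical diagnostic scores (scikit-learn is used for convenience; it is not part of the book):

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(6)
    # Hypothetical diagnostic scores: diseased cases shifted upward by one unit.
    y_true = np.concatenate([np.zeros(100), np.ones(100)]).astype(int)
    scores = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])

    fpr, tpr, thresholds = roc_curve(y_true, scores)
    print(f"AUC = {roc_auc_score(y_true, scores):.2f}")  # about 0.76 for a unit shift
    ```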

  19. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob

  20. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  1. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    Science.gov (United States)

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
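
    For readers unfamiliar with these indices, the sketch below computes a basic Tau nonoverlap between hypothetical baseline and treatment phases; Tau-U as evaluated in the article additionally corrects for baseline trend, which this sketch omits.

    ```python
    import numpy as np

    def tau_nonoverlap(baseline: np.ndarray, treatment: np.ndarray) -> float:
        """Tau (A vs. B nonoverlap): (#improving pairs - #deteriorating pairs)
        over all baseline-treatment pairs. Positive values favor treatment."""
        diffs = treatment[None, :] - baseline[:, None]   # all pairwise contrasts
        return float(np.sign(diffs).sum() / diffs.size)

    # Hypothetical single-case data: 5 baseline and 7 treatment sessions.
    A = np.array([2, 3, 2, 4, 3])
    B = np.array([5, 6, 4, 7, 6, 8, 7])
    print(f"Tau = {tau_nonoverlap(A, B):.2f}")
    ```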

  2. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  3. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  4. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of data and complex relationships. Rich varieties of advanced and recent statistical modelling are mostly available in open source software (one of them being R). However, these advanced statistical modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical modelling techniques in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and easier to compare in order to find the most appropriate model for the data.

  5. Application of Ontology Technology in Health Statistic Data Analysis.

    Science.gov (United States)

    Guo, Minjiang; Hu, Hongpu; Lei, Xingyun

    2017-01-01

    Research Purpose: to establish a health management ontology for the analysis of health statistics data. Proposed Methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook, and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and the object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistics data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source and heterogeneous health system management data and the enhancement of management efficiency.

  6. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  7. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  8. TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION

    Directory of Open Access Journals (Sweden)

    А. А. Vershinina

    2014-01-01

    Full Text Available This scientific article presents a technique for the statistical analysis of the investment appeal of a region with respect to foreign direct investment. A definition of the statistical analysis technique is given, the stages of the analysis are set out, and the mathematical-statistical tools are considered.

  9. Musical structure analysis using similarity matrix and dynamic programming

    Science.gov (United States)

    Shiu, Yu; Jeong, Hong; Kuo, C.-C. Jay

    2005-10-01

    Automatic music segmentation and structure analysis from audio waveforms based on a three-level hierarchy is examined in this research, where the three-level hierarchy includes notes, measures and parts. The pitch class profile (PCP) feature is first extracted at the note level. Then, a similarity matrix is constructed at the measure level, where a dynamic time warping (DTW) technique is used to enhance the similarity computation by taking the temporal distortion of similar audio segments into account. By processing the similarity matrix, we can obtain a coarse-grain music segmentation result. Finally, dynamic programming is applied to the coarse-grain segments so that a song can be decomposed into several major parts such as intro, verse, chorus, bridge and outro. The performance of the proposed music structure analysis system is demonstrated for pop and rock music.
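
    A minimal sketch of the measure-level similarity matrix, using plain cosine similarity on synthetic PCP-like features; the DTW enhancement and the dynamic-programming segmentation described above are omitted.

    ```python
    import numpy as np

    def self_similarity_matrix(features: np.ndarray) -> np.ndarray:
        """Cosine self-similarity between per-measure feature vectors
        (e.g., pitch class profiles averaged over each measure)."""
        f = features / np.linalg.norm(features, axis=1, keepdims=True)
        return f @ f.T

    rng = np.random.default_rng(4)
    # Hypothetical 12-dimensional PCP features for 32 measures, with measures
    # 0-7 repeated (slightly perturbed) as 16-23 to mimic a recurring chorus.
    pcp = rng.random((32, 12))
    pcp[16:24] = pcp[0:8] + 0.05 * rng.random((8, 12))

    S = self_similarity_matrix(pcp)
    # Bright off-diagonal stripes in S mark repeated sections.
    print("mean similarity of repeated block:", S[16:24, 0:8].mean().round(2))
    ```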

  10. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  11. Semiclassical analysis, Witten Laplacians, and statistical mechanics

    CERN Document Server

    Helffer, Bernard

    2002-01-01

    This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S

  12. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

    Full Text Available Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of the heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR<0.001 and 0.001. Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.
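
    The classical benchmark mentioned above, logistic regression with a SNP-by-SNP interaction term, can be sketched on simulated genotypes as follows (this is the comparison method, not the paper's novel statistic):

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    n = 2000
    g1 = rng.integers(0, 3, n)          # genotype at SNP 1, coded 0/1/2
    g2 = rng.integers(0, 3, n)          # genotype at SNP 2
    logit = -1.0 + 0.2 * g1 + 0.1 * g2 + 0.4 * g1 * g2   # built-in interaction
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # case/control status

    # Fit logistic regression with main effects plus the interaction term.
    X = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))
    fit = sm.Logit(y, X).fit(disp=0)
    print(f"interaction beta = {fit.params[3]:.2f}, p = {fit.pvalues[3]:.2e}")
    ```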

  13. Statistical analysis of human maintenance failures of a nuclear power plant

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-01-01

    In this paper, a statistical study of faults caused by maintenance activities is presented. The objective of the study was to draw conclusions on the unplanned effects of maintenance on nuclear power plant safety and system availability. More than 4400 maintenance history reports from the years 1992-1994 of the Olkiluoto BWR nuclear power plant (NPP) were analysed together with the maintenance personnel. The faults induced by human action were classified, e.g., according to their multiplicity and effects. This paper presents and discusses the results of a statistical analysis of the data. Instrumentation and electrical components are especially prone to human failures. Many human failures were found in safety-related systems. Similarly, several failures remained latent from outages to power operation. The safety significance was generally small. Modifications are an important source of multiple human failures. Plant maintenance data is a good source of human reliability data and should be used more in the future. (orig.)

  14. Appropriate Similarity Measures for Author Cocitation Analysis

    NARCIS (Netherlands)

    N.J.P. van Eck (Nees Jan); L. Waltman (Ludo)

    2007-01-01

    textabstractWe provide a number of new insights into the methodological discussion about author cocitation analysis. We first argue that the use of the Pearson correlation for measuring the similarity between authors’ cocitation profiles is not very satisfactory. We then discuss what kind of

  15. A statistical approach to plasma profile analysis

    International Nuclear Information System (INIS)

    Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.

    1990-05-01

    A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous approach to the empirical testing of plasma profile invariance. (orig.)

  16. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji; Genton, Marc G.; Jun, Mikyoung

    2016-01-01

    extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi

  17. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of the study designs used, statistical tests applied, and statistical analysis programmes employed helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. The results indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo the previously published assessment of these two journals for the year 2014.

  18. Statistical analysis of brake squeal noise

    Science.gov (United States)

    Oberst, S.; Lai, J. C. S.

    2011-06-01

    Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.
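
    The joint recurrence analysis used in the study can be sketched compactly: build a recurrence matrix for each signal, multiply them elementwise, and summarise with the joint recurrence rate. The version below is a simplification (scalar series, no embedding, fixed thresholds), not the authors' exact procedure.

```python
import numpy as np

def recurrence_matrix(x, eps):
    # Pairwise distances of a scalar series; 1 where states recur within eps.
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def joint_recurrence_rate(x, y, eps_x, eps_y):
    # Fraction of time pairs at which both signals recur simultaneously.
    jr = recurrence_matrix(x, eps_x) * recurrence_matrix(y, eps_y)
    return jr.mean()

# e.g. x = time-averaged friction coefficient, y = peak SPL, one value per stop
```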

  19. Research Update: Spatially resolved mapping of electronic structure on atomic level by multivariate statistical analysis

    International Nuclear Information System (INIS)

    Belianinov, Alex; Ganesh, Panchapakesan; Lin, Wenzhi; Jesse, Stephen; Pan, Minghu; Kalinin, Sergei V.; Sales, Brian C.; Sefat, Athena S.

    2014-01-01

    Atomic level spatial variability of electronic structure in the Fe-based superconductor FeTe0.55Se0.45 (Tc = 15 K) is explored using current-imaging tunneling-spectroscopy. Multivariate statistical analysis of the data differentiates regions of dissimilar electronic behavior that can be identified with the segregation of chalcogen atoms, as well as boundaries between terminations and near neighbor interactions. Subsequent clustering analysis allows identification of the spatial localization of these dissimilar regions. Similar statistical analysis of modeled calculated density of states of chemically inhomogeneous FeTe1−xSex structures further confirms that the two types of chalcogens, i.e., Te and Se, can be identified by their electronic signature and differentiated by their local chemical environment. This approach allows detailed chemical discrimination of the scanning tunneling microscopy data, including separation of atomic identities, proximity, and local configuration effects, and can be universally applicable to chemically and electronically inhomogeneous surfaces.
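
    A generic sketch of this multivariate workflow for a current-imaging tunneling-spectroscopy data cube is shown below: PCA to extract spectral features per pixel, then clustering to map regions of dissimilar electronic behavior. Array shapes, component counts, and cluster counts are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def map_electronic_regions(cube, n_components=4, n_clusters=2):
    """cube: (nx, ny, n_bias) spectroscopy stack -> (nx, ny) cluster map."""
    nx, ny, nb = cube.shape
    spectra = cube.reshape(nx * ny, nb)          # one spectrum per pixel
    scores = PCA(n_components=n_components).fit_transform(spectra)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(scores)
    return labels.reshape(nx, ny)                # spatial map of regions
```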

  20. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson, The Statistical Analysis of Time Series; T. S. Arthanari & Yadolah Dodge, Mathematical Programming in Statistics; Emil Artin, Geometric Algebra; Norman T. J. Bailey, The Elements of Stochastic Processes with Applications to the Natural Sciences; George...

  1. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional octave smoothing, as with 1/3 octave analysis, may lead to RTF simplifications that can be useful for several audio applications, like room compensation, room modeling and auralisation purposes. The aim of this work is to identify the relationship of the optimal response and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical...

  2. Binary similarity measures for fingerprint analysis of qualitative metabolomic profiles.

    Science.gov (United States)

    Rácz, Anita; Andrić, Filip; Bajusz, Dávid; Héberger, Károly

    2018-01-01

    Contemporary metabolomic fingerprinting is based on multiple spectrometric and chromatographic signals, used either alone or combined with structural and chemical information of metabolic markers at the qualitative and semiquantitative level. However, signal shifting, convolution, and matrix effects may compromise metabolomic patterns. Recent increase in the use of qualitative metabolomic data, described by the presence (1) or absence (0) of particular metabolites, demonstrates great potential in the field of metabolomic profiling and fingerprint analysis. The aim of this study is a comprehensive evaluation of binary similarity measures for the elucidation of patterns among samples of different botanical origin and various metabolomic profiles. Nine qualitative metabolomic data sets covering a wide range of natural products and metabolomic profiles were applied to assess 44 binary similarity measures for the fingerprinting of plant extracts and natural products. The measures were analyzed by the novel sum of ranking differences method (SRD), searching for the most promising candidates. Baroni-Urbani-Buser (BUB) and Hawkins-Dotson (HD) similarity coefficients were selected as the best measures by SRD and analysis of variance (ANOVA), while Dice (Di1), Yule, Russel-Rao, and Consonni-Todeschini 3 ranked the worst. ANOVA revealed that concordantly and intermediately symmetric similarity coefficients are better candidates for metabolomic fingerprinting than the asymmetric and correlation based ones. The fingerprint analysis based on the BUB and HD coefficients and qualitative metabolomic data performed equally well as the quantitative metabolomic profile analysis. Fingerprint analysis based on the qualitative metabolomic profiles and binary similarity measures proved to be a reliable way in finding the same/similar patterns in metabolomic data as that extracted from quantitative data.
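
    The two coefficients the study ranks best are straightforward to compute from the 2x2 contingency counts of a pair of binary fingerprints. The sketch below follows the standard textbook definitions (a = shared presences, b and c = mismatches, d = shared absences); verify against the paper before relying on them.

```python
import numpy as np

def contingency(u, v):
    u, v = np.asarray(u, bool), np.asarray(v, bool)
    a = np.sum(u & v)      # metabolite present in both samples
    b = np.sum(u & ~v)     # present only in the first
    c = np.sum(~u & v)     # present only in the second
    d = np.sum(~u & ~v)    # absent from both
    return a, b, c, d

def baroni_urbani_buser(u, v):
    a, b, c, d = contingency(u, v)
    s = np.sqrt(a * d)
    return (s + a) / (s + a + b + c)

def hawkins_dotson(u, v):
    a, b, c, d = contingency(u, v)
    return 0.5 * (a / (a + b + c) + d / (d + b + c))

# Two qualitative metabolomic fingerprints (1 = metabolite detected):
x = [1, 1, 0, 1, 0, 0, 1]
y = [1, 0, 0, 1, 0, 1, 1]
print(baroni_urbani_buser(x, y), hawkins_dotson(x, y))
```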

  3. Statistical Inference for Porous Materials using Persistent Homology.

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Chul [Univ. of Georgia, Athens, GA (United States); Heath, Jason E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-12-01

    We propose a porous materials analysis pipeline using persistent homology. We first compute persistent homology of binarized 3D images of sampled material subvolumes. For each image we compute sets of homology intervals, which are represented as summary graphics called persistence diagrams. We convert persistence diagrams into image vectors in order to analyze the similarity of the homology of the material images using the mature tools for image analysis. Each image is treated as a vector and we compute its principal components to extract features. We fit a statistical model using the loadings of principal components to estimate material porosity, permeability, anisotropy, and tortuosity. We also propose an adaptive version of the structural similarity index (SSIM), a similarity metric for images, as a measure to determine the statistical representative elementary volumes (sREV) for persistent homology. Thus we provide a capability for making a statistical inference of the fluid flow and transport properties of porous materials based on their geometry and connectivity.

  4. Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)

    Science.gov (United States)

    2004-12-01

    The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  5. Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)

    Science.gov (United States)

    2005-12-01

    The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  6. Data exploration, quality control and statistical analysis of ChIP-exo/nexus experiments.

    Science.gov (United States)

    Welch, Rene; Chung, Dongjun; Grass, Jeffrey; Landick, Robert; Keles, Sündüz

    2017-09-06

    ChIP-exo/nexus experiments rely on innovative modifications of the commonly used ChIP-seq protocol for high resolution mapping of transcription factor binding sites. Although many aspects of the ChIP-exo data analysis are similar to those of ChIP-seq, these high throughput experiments pose a number of unique quality control and analysis challenges. We develop a novel statistical quality control pipeline and accompanying R/Bioconductor package, ChIPexoQual, to enable exploration and analysis of ChIP-exo and related experiments. ChIPexoQual evaluates a number of key issues including strand imbalance, library complexity, and signal enrichment of data. Assessment of these features is facilitated through diagnostic plots and summary statistics computed over regions of the genome with varying levels of coverage. We evaluated our QC pipeline with both large collections of public ChIP-exo/nexus data and multiple, new ChIP-exo datasets from Escherichia coli. ChIPexoQual analysis of these datasets resulted in guidelines for using these QC metrics across a wide range of sequencing depths and provided further insights for modelling ChIP-exo data. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Similarity Analysis of Cable Insulations by Chemical Test

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Seog [Central Research Institute of Korea Hydro and Nuclear Power Co., Daejeon (Korea, Republic of)

    2013-10-15

    As a result of this experiment, it was found that the FT-IR test for material composition and the TGA test for aging trend are applicable for similarity analysis of cable materials. OIT is recommended as an option if TGA does not show a good trend. Qualification of a new insulation by the EQ report of an old insulation should be based on a higher activation energy of the new insulation than that of the old one, in the interest of conservatism. In old nuclear power plants, it is easy to find black cables which carry no marking of cable information such as manufacturer, material name and voltage. If a type test is required for qualification of these cables, how could one select a representative cable? How could one determine the similarity of these cables? If a manufacturer qualified a cable for nuclear power plants more than a decade ago and the composition of the cable material has since been changed to a similar one, is it acceptable to use the old EQ report for the recently manufactured cable? The FT-IR method is well known as a way to determine the similarity of cable materials. Infrared spectroscopy is an easy tool for comparing the compositions of materials, but it is not suitable for comparing their aging trends. A study of similarity analysis of cable insulation by chemical test is described herein. To study a similarity evaluation method for polymer materials, FT-IR, TGA and OIT tests were performed on two cable insulations (old and new) supplied by the same manufacturer. FT-IR gives good results for comparing material compositions, while TGA and OIT give good results for comparing the aging characteristics of the materials.

  8. Similarity Analysis of Cable Insulations by Chemical Test

    International Nuclear Information System (INIS)

    Kim, Jong Seog

    2013-01-01

    As a result of this experiment, it was found that the FT-IR test for material composition and the TGA test for aging trend are applicable for similarity analysis of cable materials. OIT is recommended as an option if TGA does not show a good trend. Qualification of a new insulation by the EQ report of an old insulation should be based on a higher activation energy of the new insulation than that of the old one, in the interest of conservatism. In old nuclear power plants, it is easy to find black cables which carry no marking of cable information such as manufacturer, material name and voltage. If a type test is required for qualification of these cables, how could one select a representative cable? How could one determine the similarity of these cables? If a manufacturer qualified a cable for nuclear power plants more than a decade ago and the composition of the cable material has since been changed to a similar one, is it acceptable to use the old EQ report for the recently manufactured cable? The FT-IR method is well known as a way to determine the similarity of cable materials. Infrared spectroscopy is an easy tool for comparing the compositions of materials, but it is not suitable for comparing their aging trends. A study of similarity analysis of cable insulation by chemical test is described herein. To study a similarity evaluation method for polymer materials, FT-IR, TGA and OIT tests were performed on two cable insulations (old and new) supplied by the same manufacturer. FT-IR gives good results for comparing material compositions, while TGA and OIT give good results for comparing the aging characteristics of the materials.

  9. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  10. Statistical analysis of long term spatial and temporal trends of ...

    Indian Academy of Sciences (India)

    Statistical analysis of long term spatial and temporal trends of temperature ... CGCM3; HadCM3; modified Mann–Kendall test; statistical analysis; Sutlej basin. ... Water Resources Systems Division, National Institute of Hydrology, Roorkee 247 ...

  11. Box-Cox transformation of firm size data in statistical analysis

    Science.gov (United States)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. These data deviate considerably from a normal distribution. To improve the normality of the data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformations are approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
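
    The approach can be reproduced in outline with SciPy's Box-Cox transform on simulated, strictly positive firm-size-like data. Note one deliberate difference: scipy.stats.boxcox chooses lambda by maximum likelihood, whereas the study tunes lambda to match the kurtosis of a normal distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
employees = rng.lognormal(mean=4.0, sigma=1.2, size=5000)  # positive values only

transformed, lam = stats.boxcox(employees)  # lambda picked by max likelihood
print(f"lambda = {lam:.3f}")                # near 0 => close to a log-transform
print(f"excess kurtosis after transform = {stats.kurtosis(transformed):.3f}")
```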

  12. Box-Cox transformation of firm size data in statistical analysis

    International Nuclear Information System (INIS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-01-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. These data deviate considerably from a normal distribution. To improve the normality of the data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformations are approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.

  13. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    Science.gov (United States)

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
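
    The leave-one-out idea behind the validation statistic can be sketched with a plain inverse-variance (fixed-effect) summary: each study is compared against the pooled estimate computed without it. This is a simplified stand-in for Vn and its distribution theory, and the effect sizes below are made up.

```python
import numpy as np

def loo_validation_z(effects, variances):
    """Z-score of each study against the summary estimate excluding it."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    z = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        theta = np.sum(w[mask] * y[mask]) / np.sum(w[mask])  # pooled, minus i
        var_theta = 1.0 / np.sum(w[mask])
        z[i] = (y[i] - theta) / np.sqrt(v[i] + var_theta)
    return z  # large |z| values cast doubt on validity in new settings

print(loo_validation_z([0.30, 0.25, 0.41, 0.05], [0.010, 0.020, 0.015, 0.020]))
```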

  14. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  15. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...

  16. Statistical assessment on a combined analysis of GRYN-ROMN-UCBN upland vegetation vital signs

    Science.gov (United States)

    Irvine, Kathryn M.; Rodhouse, Thomas J.

    2014-01-01

    As of 2013, Rocky Mountain and Upper Columbia Basin Inventory and Monitoring Networks have multiple years of vegetation data and Greater Yellowstone Network has three years of vegetation data and monitoring is ongoing in all three networks. Our primary objective is to assess whether a combined analysis of these data aimed at exploring correlations with climate and weather data is feasible. We summarize the core survey design elements across protocols and point out the major statistical challenges for a combined analysis at present. The dissimilarity in response designs between ROMN and UCBN-GRYN network protocols presents a statistical challenge that has not been resolved yet. However, the UCBN and GRYN data are compatible as they implement a similar response design; therefore, a combined analysis is feasible and will be pursued in future. When data collected by different networks are combined, the survey design describing the merged dataset is (likely) a complex survey design. A complex survey design is the result of combining datasets from different sampling designs. A complex survey design is characterized by unequal probability sampling, varying stratification, and clustering (see Lohr 2010 Chapter 7 for general overview). Statistical analysis of complex survey data requires modifications to standard methods, one of which is to include survey design weights within a statistical model. We focus on this issue for a combined analysis of upland vegetation from these networks, leaving other topics for future research. We conduct a simulation study on the possible effects of equal versus unequal probability selection of points on parameter estimates of temporal trend using available packages within the R statistical computing package. We find that, as written, using lmer or lm for trend detection in a continuous response and clm and clmm for visually estimated cover classes with “raw” GRTS design weights specified for the weight argument leads to substantially

  17. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  18. Statistical evaluation of vibration analysis techniques

    Science.gov (United States)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.

  19. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  20. Statistical analysis on extreme wave height

    Digital Repository Service at National Institute of Oceanography (India)

    Teena, N.V.; SanilKumar, V.; Sudheesh, K.; Sajeev, R.

    WAFO (2000) – A MATLAB toolbox for analysis of random waves and loads, Lund University, Sweden, homepage http://www.maths.lth.se/matstat/wafo/. Table 1: Statistical results of data and fitted distribution for cumulative distribution...

  1. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
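
    A sketch of the running-window construction is given below, using the usual large-sample normal approximation to convert U statistics to Z statistics (the article instead normalizes via Monte-Carlo); the window length and reference sample are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def mann_whitney_u(sample, reference):
    """U statistic of `sample` versus `reference` via midranks (ties allowed)."""
    ranks = rankdata(np.concatenate([sample, reference]))
    r1 = ranks[: len(sample)].sum()
    return r1 - len(sample) * (len(sample) + 1) / 2.0

def running_mw_z(series, reference, window=30):
    n1, n2 = window, len(reference)
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)  # no-ties approximation
    return np.array([
        (mann_whitney_u(series[i:i + window], reference) - mu) / sigma
        for i in range(len(series) - window + 1)
    ])
```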

  2. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  3. Phishing Detection: Analysis of Visual Similarity Based Approaches

    Directory of Open Access Journals (Sweden)

    Ankit Kumar Jain

    2017-01-01

    Full Text Available Phishing is one of the major problems faced by the cyber-world and leads to financial losses for both industries and individuals. Detection of phishing attacks with high accuracy has always been a challenging issue. At present, visual similarity based techniques are very useful for detecting phishing websites efficiently. A phishing website looks very similar in appearance to its corresponding legitimate website to deceive users into believing that they are browsing the correct website. Visual similarity based phishing detection techniques utilise features like text content, text format, HTML tags, Cascading Style Sheets (CSS), images, and so forth, to make the decision. These approaches compare the suspicious website with the corresponding legitimate website by using various features, and if the similarity is greater than the predefined threshold value then it is declared phishing. This paper presents a comprehensive analysis of phishing attacks, their exploitation, some of the recent visual similarity based approaches for phishing detection, and a comparative study. Our survey provides a better understanding of the problem, the current solution space, and the scope of future research to deal with phishing attacks efficiently using visual similarity based approaches.

  4. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  5. Statistical learning methods in high-energy and astrophysics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2004-11-21

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  6. Statistical learning methods in high-energy and astrophysics analysis

    International Nuclear Information System (INIS)

    Zimmermann, J.; Kiesling, C.

    2004-01-01

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application

  7. The fuzzy approach to statistical analysis

    NARCIS (Netherlands)

    Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.

    2006-01-01

    For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes. These namely are: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;

  8. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)

  9. Foundation of statistical energy analysis in vibroacoustics

    CERN Document Server

    Le Bot, A

    2015-01-01

    This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.

  10. Machine Fault Detection Based on Filter Bank Similarity Features Using Acoustic and Vibration Analysis

    Directory of Open Access Journals (Sweden)

    Mauricio Holguín-Londoño

    2016-01-01

    Full Text Available Vibration and acoustic analysis actively support the nondestructive and noninvasive fault diagnostics of rotating machines at early stages. Nonetheless, the acoustic signal is less used because of its vulnerability to external interferences, hindering an efficient and robust analysis for condition monitoring (CM. This paper presents a novel methodology to characterize different failure signatures from rotating machines using either acoustic or vibration signals. Firstly, the signal is decomposed into several narrow-band spectral components applying different filter bank methods such as empirical mode decomposition, wavelet packet transform, and Fourier-based filtering. Secondly, a feature set is built using a proposed similarity measure termed cumulative spectral density index and used to estimate the mutual statistical dependence between each bandwidth-limited component and the raw signal. Finally, a classification scheme is carried out to distinguish the different types of faults. The methodology is tested in two laboratory experiments, including turbine blade degradation and rolling element bearing faults. The robustness of our approach is validated contaminating the signal with several levels of additive white Gaussian noise, obtaining high-performance outcomes that make the usage of vibration, acoustic, and vibroacoustic measurements in different applications comparable. As a result, the proposed fault detection based on filter bank similarity features is a promising methodology to implement in CM of rotating machinery, even using measurements with low signal-to-noise ratio.
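
    The feature-extraction stage can be sketched with the Fourier-based variant of the filter bank. The "cumulative spectral density index" is the authors' term; the cumulative-spectrum correlation below is a generic stand-in for it, not their exact definition.

```python
import numpy as np

def band_components(x, n_bands=8):
    # Split the signal into narrow-band components by masking the rFFT.
    X = np.fft.rfft(x)
    edges = np.linspace(0, len(X), n_bands + 1, dtype=int)
    comps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(X)
        mask[lo:hi] = X[lo:hi]
        comps.append(np.fft.irfft(mask, n=len(x)))
    return comps

def cumulative_spectrum_similarity(component, raw):
    # Correlation of cumulative power spectra: one similarity value per band.
    cum = lambda s: np.cumsum(np.abs(np.fft.rfft(s)) ** 2)
    return np.corrcoef(cum(component), cum(raw))[0, 1]

def feature_vector(signal, n_bands=8):
    return [cumulative_spectrum_similarity(c, signal)
            for c in band_components(signal, n_bands)]
```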

  11. Inverse Statistics in the Foreign Exchange Market

    OpenAIRE

    Jensen, M. H.; Johansen, A.; Petroni, F.; Simonsen, I.

    2004-01-01

    We investigate intra-day foreign exchange (FX) time series using the inverse statistic analysis developed in [1,2]. Specifically, we study the time-averaged distributions of waiting times needed to obtain a certain increase (decrease) ρ in the price of an investment. The analysis is performed for the Deutsche Mark (DM) against the US$ for the full year of 1998, but similar results are obtained for the Japanese Yen against the US$. With high statistical significance, the presence of "reson...
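
    The inverse statistic itself is simple to compute: for each starting time, record how long the log-price takes to first gain at least rho. In the sketch below a simulated random walk stands in for the DM/US$ series.

```python
import numpy as np

def waiting_times(log_price, rho):
    times = []
    for t in range(len(log_price) - 1):
        gains = log_price[t + 1:] - log_price[t]
        hit = np.argmax(gains >= rho)   # first index reaching the level
        if gains[hit] >= rho:           # argmax returns 0 when never reached
            times.append(hit + 1)
    return np.array(times)              # histogram these for the distribution

rng = np.random.default_rng(2)
lp = np.cumsum(rng.normal(0.0, 0.005, 10000))  # stand-in for an FX series
print(np.median(waiting_times(lp, rho=0.02)))
```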

  12. Statistical Analysis of Big Data on Pharmacogenomics

    Science.gov (United States)

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905

  13. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  14. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  15. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    The advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.

  16. Using Pre-Statistical Analysis to Streamline Monitoring Assessments

    International Nuclear Information System (INIS)

    Reed, J.K.

    1999-01-01

    A variety of statistical methods exist to aid evaluation of groundwater quality and subsequent decision making in regulatory programs. These methods are applied because of the large temporal and spatial extrapolations commonly applied to these data. In short, statistical conclusions often serve as a surrogate for knowledge. However, facilities with mature monitoring programs that have generated abundant data have inherently less uncertainty because of the sheer quantity of analytical results. In these cases, statistical tests can be less important, and 'expert' data analysis should assume an important screening role. The WSRC Environmental Protection Department, working with the General Separations Area BSRI Environmental Restoration project team, has developed a method for an Integrated Hydrogeological Analysis (IHA) of historical water quality data from the F and H Seepage Basins groundwater remediation project. The IHA combines common-sense analytical techniques and a GIS presentation that force direct interactive evaluation of the data. The IHA can perform multiple data analysis tasks required by the RCRA permit. These include: (1) Development of a groundwater quality baseline prior to remediation startup, (2) Targeting of constituents for removal from the RCRA GWPS, (3) Targeting of constituents for removal from the UIC permit, (4) Targeting of constituents for reduced monitoring, (5) Targeting of monitoring wells not producing representative samples, (6) Reduction in statistical evaluation, and (7) Identification of contamination from other facilities.

  17. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrast. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined to a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment and the methods are compared to conjunction analysis approaches for test-statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test-statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.

  18. Recognizing stationary and locomotion activities using combinational of spectral analysis with statistical descriptors features

    Science.gov (United States)

    Zainudin, M. N. Shah; Sulaiman, Md Nasir; Mustapha, Norwati; Perumal, Thinagaran

    2017-10-01

    Pervasive computing has recently garnered a lot of attention due to its high demand in various application domains. Human activity recognition (HAR) is one of the most widely explored applications, providing valuable information to humans. Accelerometer sensor-based approaches are commonly utilized in HAR research since the sensors are small in size and already built into various types of smartphones. However, the existence of high inter-class similarities among the classes tends to degrade recognition performance. Hence, this work presents a method for activity recognition using our proposed features, a combination of spectral analysis with statistical descriptors, that is able to tackle the issue of differentiating stationary and locomotion activities. The noisy signal is filtered using the Fourier Transform before features are extracted using two different groups of features: spectral frequency analysis and statistical descriptors. The extracted signals are then classified using random forest ensemble classifier models. The recognition results show good accuracy performance for stationary and locomotion activities based on the USC-HAD dataset.
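
    A compact sketch of such a pipeline is given below: statistical descriptors plus spectral features per accelerometer window, classified with a random forest. The window features chosen here are plausible assumptions, not the paper's exact set.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def window_features(w):
    """w: 1-D accelerometer window -> combined feature list."""
    spectrum = np.abs(np.fft.rfft(w - w.mean()))
    return [
        w.mean(), w.std(), stats.skew(w), stats.kurtosis(w),  # descriptors
        spectrum.argmax(),                                    # dominant bin
        spectrum.sum(),                                       # spectral energy
    ]

def fit_har_classifier(windows, labels):
    """windows: (n_samples, n_points) array; labels: activity per window."""
    X = np.array([window_features(w) for w in windows])
    return RandomForestClassifier(n_estimators=200).fit(X, labels)
```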

  19. Statistical analysis of field data for aircraft warranties

    Science.gov (United States)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fails. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit test, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.

  20. Monitoring of bread cooling by statistical analysis of laser speckle patterns

    Science.gov (United States)

    Lyubenova, Tanya; Stoykova, Elena; Nacheva, Elena; Ivanov, Branimir; Panchev, Ivan; Sainov, Ventseslav

    2013-03-01

    The phenomenon of laser speckle can be used for detection and visualization of physical or biological activity in various objects (e.g. fruits, seeds, coatings) through statistical description of speckle dynamics. The paper presents the results of non-destructive monitoring of bread cooling by co-occurrence matrix and temporal structure function analysis of speckle patterns which have been recorded continuously within a few days. In total, 72960 and 39680 images were recorded and processed for two similar bread samples respectively. The experiments proved the expected steep decrease of activity related to the processes in the bread samples during the first several hours and revealed its oscillating character within the next few days. Characterization of activity over the bread sample surface was also obtained.

  1. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  2. Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business

    Czech Academy of Sciences Publication Activity Database

    Novák, J.; Sůvová, H.; Vondráček, Jiří

    2002-01-01

    Roč. 48, č. 1 (2002), s. 9-12 ISSN 0139-570X Institutional research plan: AV0Z1030915 Keywords : financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research

  3. Robust Multiscale Modelling Of Two-Phase Steels On Heterogeneous Hardware Infrastructures By Using Statistically Similar Representative Volume Element

    Directory of Open Access Journals (Sweden)

    Rauch Ł.

    2015-09-01

    Full Text Available The coupled finite element multiscale simulations (FE2) require costly numerical procedures in both macro and micro scales. Attempts to improve numerical efficiency are focused mainly on two areas of development, i.e. parallelization/distribution of numerical procedures and simplification of the virtual material representation. One approach representative of both mentioned areas is the idea of the Statistically Similar Representative Volume Element (SSRVE). It aims at the reduction of the number of finite elements in the micro scale as well as at parallelization of the calculations in the micro scale, which can be performed without barriers. The simplification of the computational domain is realized by transformation of sophisticated images of the material microstructure into artificially created simple objects characterized by features similar to those of their original equivalents. In existing solutions for two-phase steels, the SSRVE is created on the basis of the analysis of shape coefficients of the hard phase in the real microstructure and a search for a representative simple structure with similar shape coefficients. Optimization techniques were used to solve this task. In the present paper local strains and stresses are added to the cost function in optimization. Various forms of the objective function composed of different elements were investigated and used in the optimization procedure for the creation of the final SSRVE. The results are compared with respect to the efficiency of the procedure and the uniqueness of the solution. The best objective function, composed of shape coefficients as well as of strains and stresses, was proposed. Examples of SSRVEs determined for the investigated two-phase steel using that objective function are demonstrated in the paper. Each step of SSRVE creation is investigated from the computational efficiency point of view. The proposition of implementation of the whole computational procedure on modern High Performance Computing (HPC

  4. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  5. Statistical analysis of environmental data

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Bowman, K.O.; Miller, F.L. Jr.

    1975-10-01

    This report summarizes the analyses of data obtained by the Radiological Hygiene Branch of the Tennessee Valley Authority from samples taken around the Browns Ferry Nuclear Plant located in Northern Alabama. The data collection was begun in 1968 and a wide variety of types of samples have been gathered on a regular basis. The statistical analysis of environmental data involving very low-levels of radioactivity is discussed. Applications of computer calculations for data processing are described

  6. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  7. Effective Results Analysis for the Similar Software Products’ Orthogonality

    OpenAIRE

    Ion Ivan; Daniel Milodin

    2009-01-01

    The concept of similar software is defined. Conditions for archiving the software components are established. The orthogonality evaluation is carried out, and the correlation between the orthogonality and the complexity of the homogeneous software components is analyzed. Groups of similar software products are then built, belonging to the orthogonality intervals. The results of the analysis are presented in graphical form. There are detailed aspects of the functioning o...

  8. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...... fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors...... for timber are investigated....

  9. Statistical analysis and planning of multihundred-watt impact tests

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waterman, M.S.

    1977-10-01

    Modular multihundred-watt (MHW) radioisotope thermoelectric generators (RTGs) are used as a power source for spacecraft. Due to possible environmental contamination by radioactive materials, numerous tests are required to determine and verify the safety of the RTG. There are results available from 27 fueled MHW impact tests regarding hoop failure, fingerprint failure, and fuel failure. Data from the 27 tests are statistically analyzed for relationships that exist between the test design variables and the failure types. Next, these relationships are used to develop a statistical procedure for planning and conducting either future MHW impact tests or similar tests on other RTG fuel sources. Finally, some conclusions are given

  10. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  11. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
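    The record does not reproduce the engine's API, so as a language-neutral illustration of the underlying idea, the sketch below computes a Kullback-Leibler divergence between an observed histogram and a theoretical "ideal" distribution; the binning and the choice of KL divergence are assumptions for the example.

```python
import numpy as np
from scipy.stats import norm

def kl_divergence(observed, expected):
    """D(P || Q) between two discrete distributions given as bin counts."""
    p = observed / observed.sum()
    q = expected / expected.sum()
    mask = p > 0                          # convention: 0 * log(0) = 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

samples = np.random.default_rng(1).normal(0.0, 1.0, 10_000)
edges = np.linspace(-4, 4, 41)
observed, _ = np.histogram(samples, bins=edges)
expected = np.diff(norm.cdf(edges)) * samples.size   # "ideal" bin counts
print(kl_divergence(observed.astype(float), expected))
```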

  12. Similarity in Bilateral Isolated Internal Orbital Fractures.

    Science.gov (United States)

    Chen, Hung-Chang; Cox, Jacob T; Sanyal, Abanti; Mahoney, Nicholas R

    2018-04-13

    In evaluating patients sustaining bilateral isolated internal orbital fractures, the authors have observed both similar fracture locations and similar expansion of orbital volumes. In this study, we aim to investigate if there is a propensity for the 2 orbits to fracture in symmetrically similar patterns when sustaining similar trauma. A retrospective chart review was performed studying all cases at our institution of bilateral isolated internal orbital fractures involving the medial wall and/or the floor at the time of presentation. The similarity of the bilateral fracture locations was evaluated using Fisher's exact test. The bilateral expanded orbital volumes were analyzed using the Wilcoxon signed-rank test to assess for orbital volume similarity. Twenty-four patients with bilateral internal orbital fractures were analyzed for fracture location similarity. Seventeen patients (70.8%) had 100% concordance in the orbital subregion fractured, and the association between the right and the left orbital fracture subregion locations was statistically significant (P < 0.0001). Fifteen patients were analyzed for orbital volume similarity. The average orbital cavity volume was 31.2 ± 3.8 cm³ on the right and 32.0 ± 3.7 cm³ on the left. There was a statistically significant difference between right and left orbital cavity volumes (P = 0.0026). The data from this study suggest that an individual who suffers isolated bilateral internal orbital fractures has a statistically significant similarity in the location of their orbital fractures. However, there does not appear to be statistically significant similarity in the expansion of the orbital volumes in these patients.

  13. Developments in statistical analysis in quantitative genetics

    DEFF Research Database (Denmark)

    Sorensen, Daniel

    2009-01-01

    A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and by Markov chain Monte Carlo (McMC). In this overview a number of specific areas are chosen to illustrate the enormous flexibility that McMC has provided for fitting models and exploring features of data that were previously inaccessible. The selected areas are inferences of the trajectories over time of genetic means and variances, models for the analysis of categorical and count data, the statistical genetics of a model postulating that environmental variance is partly under genetic control, and a short discussion of models that incorporate massive genetic marker information. We provide an overview...

  14. A method for rapid similarity analysis of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Liu Na

    2006-11-01

    Full Text Available Abstract Background Owing to the rapid expansion of RNA structure databases in recent years, efficient methods for structure comparison are in demand for function prediction and evolutionary analysis. Usually, the similarity of RNA secondary structures is evaluated based on tree models and dynamic programming algorithms. We present here a new method for the similarity analysis of RNA secondary structures. Results Three sets of real data have been used as input for the example applications. Set I includes the structures from 5S rRNAs. Set II includes the secondary structures from RNase P and RNase MRP. Set III includes the structures from 16S rRNAs. Reasonable phylogenetic trees are derived for these three sets of data by using our method. Moreover, our program runs faster as compared to some existing ones. Conclusion The famous Lempel-Ziv algorithm can efficiently extract the information on repeated patterns encoded in RNA secondary structures and makes our method an alternative to analyze the similarity of RNA secondary structures. This method will also be useful to researchers who are interested in evolutionary analysis.
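    The record names the Lempel-Ziv algorithm as the engine for extracting repeated-pattern information from secondary structures. A minimal sketch of that idea, assuming structures encoded as dot-bracket strings, counts the phrases of an LZ78-style parse and turns joint complexities into a normalized distance; this is one of several possible formulations, not necessarily the paper's exact measure.

```python
def lz_complexity(s: str) -> int:
    """Number of distinct phrases in an LZ78-style parse of s."""
    phrases, current = set(), ""
    for ch in s:
        current += ch
        if current not in phrases:
            phrases.add(current)
            current = ""
    return len(phrases) + (1 if current else 0)

def lz_distance(a: str, b: str) -> float:
    """Normalized compression-style distance between two structures."""
    ca, cb, cab = lz_complexity(a), lz_complexity(b), lz_complexity(a + b)
    return (cab - min(ca, cb)) / max(ca, cb)

# Dot-bracket encodings of two toy hairpins:
print(lz_distance("((((....))))", "(((..((...))..)))"))
```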

  15. Statistical Analysis of Data with Non-Detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.

    2004-08-26

    Environmental exposure measurements are, in general, positive and may be subject to left censoring, i.e. the measured value is less than a "limit of detection". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. A basic problem of interest in environmental risk assessment is to determine if the mean concentration of an analyte is less than a prescribed action level. Parametric methods, used to determine acceptable levels of exposure, are often based on a two-parameter lognormal distribution. The mean exposure level and/or an upper percentile (e.g. the 95th percentile) are used to characterize exposure levels, and upper confidence limits are needed to describe the uncertainty in these estimates. In certain situations it is of interest to estimate the probability of observing a future (or "missed") value of a lognormal variable. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left-censored lognormal data are described and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left-censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on the 95th percentile (i.e. the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known but computational complexity has limited their use in routine data analysis with left-censored data. The recent development of the R environment for statistical
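    To make the maximum-likelihood approach for randomly left-censored lognormal data concrete, here is a minimal sketch (not the report's code): detected values contribute the log-density, while each non-detect contributes the log-probability of falling below its detection limit.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_loglik(params, detects, limits):
    """-log L for lognormal data with left-censoring at the given limits."""
    mu, sigma = params[0], abs(params[1])
    ll = norm.logpdf(np.log(detects), mu, sigma).sum()      # observed values
    ll += norm.logcdf((np.log(limits) - mu) / sigma).sum()  # P(X < LOD) terms
    return -ll

detects = np.array([0.8, 1.2, 2.5, 3.1, 0.9, 1.7])  # measured concentrations
limits = np.array([0.5, 0.5, 0.7])                  # limits of the non-detects
res = minimize(neg_loglik, x0=[0.0, 1.0], args=(detects, limits))
mu_hat, sigma_hat = res.x[0], abs(res.x[1])
print("estimated lognormal mean:", np.exp(mu_hat + sigma_hat**2 / 2))
```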

  16. Data management and statistical analysis for environmental assessment

    International Nuclear Information System (INIS)

    Wendelberger, J.R.; McVittie, T.I.

    1995-01-01

    Data management and statistical analysis for environmental assessment are important issues on the interface of computer science and statistics. Data collection for environmental decision making can generate large quantities of various types of data. A database/GIS system developed is described which provides efficient data storage as well as visualization tools which may be integrated into the data analysis process. FIMAD is a living database and GIS system. The system has changed and developed over time to meet the needs of the Los Alamos National Laboratory Restoration Program. The system provides a repository for data which may be accessed by different individuals for different purposes. The database structure is driven by the large amount and varied types of data required for environmental assessment. The integration of the database with the GIS system provides the foundation for powerful visualization and analysis capabilities

  17. Compliance strategy for statistically based neutron overpower protection safety analysis methodology

    International Nuclear Information System (INIS)

    Holliday, E.; Phan, B.; Nainer, O.

    2009-01-01

    The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)
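    The record mentions Extreme Value Statistics without giving formulas; as a generic illustration (not the actual NOP procedure), a generalized extreme value distribution can be fitted to a sample of block maxima and used to read off a high quantile.

```python
import numpy as np
from scipy.stats import genextreme

# Toy block maxima, e.g. per-simulation peak power ratios (invented numbers).
block_maxima = np.random.default_rng(2).gumbel(loc=1.05, scale=0.02, size=500)

shape, loc, scale = genextreme.fit(block_maxima)
print("99.9th percentile:", genextreme.ppf(0.999, shape, loc=loc, scale=scale))
```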

  18. Similarity and uncertainty analysis of the ALLEGRO MOX core

    International Nuclear Information System (INIS)

    Vrban, B.; Hascik, J.; Necas, V.; Slugen, V.

    2015-01-01

    The similarity and uncertainty analysis of the ESNII+ ALLEGRO MOX core has identified specific problems and challenges in the field of neutronic calculations. The similarity assessment identified 9 partly comparable experiments, of which only one reached c_k and E values over 0.9. However, the global integral index G remains low (0.75) and cannot be judged as sufficient. The total uncertainty of the calculated k-eff induced by cross-section (XS) data is, according to our calculation, 1.04%. The main contributors to this uncertainty are the 239Pu nubar and 238U inelastic scattering. The additional margin from uncovered sensitivities was determined to be 0.28%. The identified low number of similar experiments prevents the use of advanced XS adjustment and bias estimation methods. More experimental data are needed, and the presented results may serve as a basic step in the development of the necessary critical assemblies. Although exact data are not presented in the paper, the faster 44-energy-group calculation gives almost the same results in the similarity analysis as the more complex 238-group calculation. Finally, it was demonstrated that the TSUNAMI-IP utility can play a significant role in future fast reactor development in Slovakia and in the Visegrad region. Clearly, further research and development and a strong effort should be carried out in order to arrive at a more complete methodology consisting of more plausible covariance data and related quantities. (authors)

  19. Diagnosis checking of statistical analysis in RCTs indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-11-01

    Statistical analysis is essential for reporting of the results of randomized controlled trials (RCTs), as well as evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether the assumptions of that analysis are valid. To review all RCTs published in journals indexed in PubMed during December 2014 to provide a complete picture of how RCTs handle assumptions of statistical analysis. We reviewed all RCTs published in December 2014 that appeared in journals indexed in PubMed using the Cochrane highly sensitive search strategy. The 2014 impact factors of the journals were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were reviewed. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (27.2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted, with only 20%, 8.6% and 7% checked for the generalized linear model, Cox proportional hazard model and multilevel model, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of CONSORT guidelines) were not associated with the reporting of diagnosis checking. The diagnosis of statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of a diagnosis of assumptions. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.

  20. A κ-generalized statistical mechanics approach to income analysis

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2009-02-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.

  1. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful
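    Neither record reproduces the distribution's formulas; for orientation, the κ-exponential that underlies the framework can be sketched as below, together with a survival function in one commonly used parameterization of the κ-generalized income distribution (the parameter names and values are assumptions for the example).

```python
import numpy as np

def exp_kappa(x, kappa):
    """kappa-exponential; reduces to np.exp(x) as kappa -> 0."""
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

def survival(x, alpha, beta, kappa):
    """P(X > x) for a kappa-generalized income distribution (assumed form)."""
    return exp_kappa(-beta * x**alpha, kappa)

incomes = np.array([0.5, 1.0, 2.0, 5.0])
print(survival(incomes, alpha=2.0, beta=1.0, kappa=0.7))  # Pareto-like tail
```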

  2. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808
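    The commentary walks through normality checking in SPSS; the snippet below is an equivalent check in Python rather than part of the article, combining a formal Shapiro-Wilk test with a quantile-quantile summary.

```python
import numpy as np
from scipy import stats

data = np.random.default_rng(3).normal(50, 10, 80)

w, p = stats.shapiro(data)           # Shapiro-Wilk test of normality
print(f"W = {w:.3f}, p = {p:.3f}")   # p > 0.05: no evidence against normality

# Numerical analogue of a Q-Q plot: correlation near 1 supports normality.
(osm, osr), (slope, intercept, r) = stats.probplot(data, dist="norm")
print("Q-Q correlation:", r)
```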

  3. Effective Results Analysis for the Similar Software Products’ Orthogonality

    Directory of Open Access Journals (Sweden)

    Ion Ivan

    2009-10-01

    Full Text Available The concept of similar software is defined. Conditions for archiving the software components are established. The orthogonality evaluation is carried out, and the correlation between the orthogonality and the complexity of the homogeneous software components is analyzed. Groups of similar software products are then built, belonging to the orthogonality intervals. The results of the analysis are presented in graphical form. Aspects of the functioning of the software product allocated for the orthogonality are detailed.

  4. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    Science.gov (United States)

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; and the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https
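    As a sketch of the statistic's general shape, a Hotelling-style T² for a pathway's mean expression-ratio vector is shown below; in the paper the covariance comes from STRING or HitPredict confidence scores, whereas here it is simply a supplied matrix, and all numbers are invented.

```python
import numpy as np

def t2_statistic(X, sigma):
    """Hotelling-style T^2 for testing mean vector == 0, with a
    knowledge-based covariance sigma (X: n samples x p proteins)."""
    n = X.shape[0]
    xbar = X.mean(axis=0)
    return float(n * xbar @ np.linalg.solve(sigma, xbar))

rng = np.random.default_rng(4)
X = rng.normal(0.3, 1.0, size=(6, 4))       # 6 replicates, 4 pathway members
sigma = np.array([[1.0, 0.6, 0.2, 0.1],     # assumed interaction-derived
                  [0.6, 1.0, 0.4, 0.2],     # covariance structure
                  [0.2, 0.4, 1.0, 0.5],
                  [0.1, 0.2, 0.5, 1.0]])
print(t2_statistic(X, sigma))               # refer to a chi-square/F reference
```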

  5. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    Science.gov (United States)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android Statistics Data Analysis application that can be accessed through mobile devices, making it easier for users to access statistical tools. The Statistics Data Analysis application covers various topics of basic statistics along with parametric statistical data analysis. The output of this application system is parametric statistical data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language. The server side uses PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology used is the Waterfall methodology, with the stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make it easier for students to understand statistical analysis on mobile devices.

  6. Spatially pooled contrast responses predict neural and perceptual similarity of naturalistic image categories.

    Directory of Open Access Journals (Sweden)

    Iris I A Groen

    Full Text Available The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task.

  7. Spatially Pooled Contrast Responses Predict Neural and Perceptual Similarity of Naturalistic Image Categories

    Science.gov (United States)

    Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven

    2012-01-01

    The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other, well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
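    The two records compare pooled contrast statistics against Fourier power spectra and contrast-distribution moments; a toy version of such measurements for a grayscale image array might look like the sketch below (the filter choices here are assumptions, not the study's model).

```python
import numpy as np
from scipy import ndimage, stats

def radial_log_spectrum(img):
    """Log-log radially averaged Fourier power spectrum."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    k = min(cy, cx)
    return np.log(np.arange(1, k)), np.log(radial[1:k])   # skip the DC term

def contrast_statistics(img):
    contrast = ndimage.gaussian_laplace(img.astype(float), sigma=2.0)
    pooled = contrast.ravel()
    log_f, log_p = radial_log_spectrum(img)
    return {
        "mean_abs_contrast": float(np.mean(np.abs(pooled))),
        "skewness": float(stats.skew(pooled)),       # higher-order moments of
        "kurtosis": float(stats.kurtosis(pooled)),   # the contrast distribution
        "fourier_slope": float(np.polyfit(log_f, log_p, 1)[0]),
    }

print(contrast_statistics(np.random.default_rng(5).random((128, 128))))
```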

  8. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
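    For readers more familiar with hypothesis testing than with SPC, the core computation is small. Below is a minimal sketch of Shewhart-style X-bar control limits using the mean ± 3 standard errors rule; production charts refine the spread estimate with bias-correction constants (c4, A2), which are omitted here for brevity.

```python
import numpy as np

def xbar_control_limits(subgroups):
    """Center line and simplified 3-sigma limits for an X-bar chart.
    subgroups: 2-D array, one row per sampled subgroup."""
    means = subgroups.mean(axis=1)
    center = means.mean()
    sigma_hat = subgroups.std(axis=1, ddof=1).mean()   # simplified pooled spread
    se = sigma_hat / np.sqrt(subgroups.shape[1])
    return center - 3 * se, center, center + 3 * se

data = np.random.default_rng(6).normal(10.0, 0.2, size=(25, 5))  # 25 subgroups of 5
lcl, cl, ucl = xbar_control_limits(data)
signals = (data.mean(axis=1) < lcl) | (data.mean(axis=1) > ucl)
print(lcl, cl, ucl, int(signals.sum()))
```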

  9. Statistical Analysis of Protein Ensembles

    Science.gov (United States)

    Máté, Gabriell; Heermann, Dieter

    2014-04-01

    As 3D protein-configuration data is piling up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially that the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
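    The record does not include code; as one concrete, simplified instance of the barcode idea, the zero-dimensional persistence barcode of a point cloud can be read directly off a single-linkage clustering, because connected components merge at exactly the single-linkage heights.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def zero_dim_barcode(points):
    """Death times of 0-dimensional features (connected components) of a
    Euclidean point cloud; every bar is born at filtration value 0."""
    merges = linkage(points, method="single")   # single linkage = 0-dim persistence
    return np.sort(merges[:, 2])                # merge heights = bar deaths

cloud_a = np.random.default_rng(7).normal(0, 1, size=(50, 3))
cloud_b = np.random.default_rng(8).normal(0, 2, size=(50, 3))
# Compare two conformational ensembles through a barcode summary:
print(zero_dim_barcode(cloud_a).mean(), zero_dim_barcode(cloud_b).mean())
```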

  10. State analysis of BOP using statistical and heuristic methods

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Chang, Soon Heung

    2003-01-01

    Under the deregulation environment, the performance enhancement of BOP in nuclear power plants is being highlighted. To analyze the performance level of BOP, we use the performance test procedures provided by an authorized institution such as ASME. However, plant investigation proved that the requirements of the performance test procedures regarding the reliability and quantity of sensors were difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommended a statistical linear regression model, derived by analyzing correlations among BOP parameters, as the reference state analysis method. Its advantages are that its derivation is not heuristic, that model uncertainty can be calculated, and that it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under normal as well as abnormal system states. Additionally, a neural network model was recommended, since the statistical model cannot be applied to the validation of all of the sensors and is sensitive to outliers, i.e. signals located outside a statistical distribution. Because many sensors need to be validated in BOP, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and to enhance training accuracy. The outlier localization capability of WA enhanced the robustness of the neural network. The trained neural network restored the degraded signals to values within ±3% of the true signals
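    A minimal sketch of the reference idea, regression-based signal validation, is shown below (an assumed form, not the plant model): predict one sensor from correlated ones and flag readings whose residual exceeds a tolerance band, cf. the 3% figure in the record.

```python
import numpy as np

def validate_sensor(target, predictors, tolerance=0.03):
    """Flag samples where a sensor deviates from its regression estimate
    by more than `tolerance` (fractional)."""
    X = np.column_stack([np.ones(len(target)), predictors])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    estimate = X @ beta
    return np.abs(target - estimate) / np.abs(estimate) > tolerance

rng = np.random.default_rng(9)
flow = rng.normal(100, 5, 500)                 # a correlated BOP parameter (toy)
power = 3.2 * flow + rng.normal(0, 1, 500)     # sensor modeled from flow
power[100] += 25                               # injected sensor fault
print(np.flatnonzero(validate_sensor(power, flow.reshape(-1, 1))))  # -> [100]
```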

  11. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available Studying the content of images is an important topic through which reasonable and accurate analyses of images are generated. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life; these media, crowded with images, have made image analysis a highlighted research area. In this paper, the implemented system passes through many steps to compute the statistical measures of standard deviation and mean for both color and grey images, and the last step of the proposed method compares the obtained results across the different cases of the test phase. The statistical parameters are implemented to characterize the content of an image and its texture. Standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean value via the implementation of the system. The major issue addressed in the work concentrates on brightness distribution via statistical measures applied to different types of lighting.

  12. Assessment of statistical methods used in library-based approaches to microbial source tracking.

    Science.gov (United States)

    Ritter, Kerry J; Carruthers, Ethan; Carson, C Andrew; Ellender, R D; Harwood, Valerie J; Kingsley, Kyle; Nakatsu, Cindy; Sadowsky, Michael; Shear, Brian; West, Brian; Whitlock, John E; Wiggins, Bruce A; Wilbur, Jayson D

    2003-12-01

    Several commonly used statistical methods for fingerprint identification in microbial source tracking (MST) were examined to assess the effectiveness of pattern-matching algorithms to correctly identify sources. Although numerous statistical methods have been employed for source identification, no widespread consensus exists as to which is most appropriate. A large-scale comparison of several MST methods, using identical fecal sources, presented a unique opportunity to assess the utility of several popular statistical methods. These included discriminant analysis, nearest neighbour analysis, maximum similarity and average similarity, along with several measures of distance or similarity. Threshold criteria for excluding uncertain or poorly matched isolates from final analysis were also examined for their ability to reduce false positives and increase prediction success. Six independent libraries used in the study were constructed from indicator bacteria isolated from fecal materials of humans, seagulls, cows and dogs. Three of these libraries were constructed using the rep-PCR technique and three relied on antibiotic resistance analysis (ARA). Five of the libraries were constructed using Escherichia coli and one using Enterococcus spp. (ARA). Overall, the outcome of this study suggests a high degree of variability across statistical methods. Despite large differences in correct classification rates among the statistical methods, no single statistical approach emerged as superior. Thresholds failed to consistently increase rates of correct classification and improvement was often associated with substantial effective sample size reduction. Recommendations are provided to aid in selecting appropriate analyses for these types of data.

  13. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and associated p-value giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; in the hilus, however, significant directional differences were detected, demonstrating the utility of this framework for the statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
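    The descriptive quantities named above have compact closed forms; below is a sketch for unit direction vectors using standard Fisher-statistics formulas (not code from the paper), including the usual large-concentration approximation for kappa.

```python
import numpy as np

def fisher_summary(vectors):
    """Mean direction, mean resultant length, and concentration kappa
    for unit vectors on the sphere."""
    v = np.asarray(vectors, dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    resultant = v.sum(axis=0)
    R = np.linalg.norm(resultant)
    n = len(v)
    kappa = (n - 1) / (n - R)        # standard large-kappa approximation
    return resultant / R, R / n, kappa

dirs = np.random.default_rng(10).normal([0, 0, 1], 0.1, size=(30, 3))
print(fisher_summary(dirs))          # tightly clustered around +z, large kappa
```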

  14. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.

  15. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
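    A toy version of the SVD-based identification described in both records is sketched below (the thresholding rule is an assumption, not the RHIC code): arrange turn-by-turn readings into a matrix, keep the few dominant singular vectors that describe coherent beam motion, and flag monitors with anomalous residual noise.

```python
import numpy as np

def flag_bad_bpms(readings, n_modes=4, z_cut=3.0):
    """readings: turns x BPMs matrix of beam positions."""
    centered = readings - readings.mean(axis=0)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    beam = (U[:, :n_modes] * s[:n_modes]) @ Vt[:n_modes]   # coherent motion
    noise = np.std(centered - beam, axis=0)                # per-BPM residual
    z = (noise - noise.mean()) / noise.std()
    return np.flatnonzero(z > z_cut)

rng = np.random.default_rng(11)
turns, bpms = 1000, 60
phase = np.arange(turns) * 0.22                            # toy betatron phase
readings = np.outer(np.sin(2 * np.pi * phase), rng.random(bpms))
readings += rng.normal(0, 0.01, (turns, bpms))
readings[:, 17] += rng.normal(0, 0.2, turns)               # one noisy monitor
print(flag_bad_bpms(readings))                             # expect [17]
```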

  16. Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis

    Science.gov (United States)

    Reston, Enriqueta; Krishnan, Saras; Idris, Noraini

    2014-01-01

    This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…

  17. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  18. Analysis of filament statistics in fast camera data on MAST

    Science.gov (United States)

    Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James

    2017-10-01

    Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape off layer (SOL) density profiles and thus control first wall erosion, impurity flushing and coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.

  19. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone to face the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. Developing this skill can be done through various levels of education. However, the skill remains low because many people, students included, assume that statistics is just the ability to count and use formulas. Students also still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course with respect to statistical reasoning skill. The observation was done by analyzing the misconception test results and the statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken a descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is taken as the minimal value for achieving the standard competence of a course, the students' mean values are lower than the standard competence. The results of the misconception study emphasize which sub-topics should be considered. Based on the assessment results, it was found that students' misconceptions occur in: 1) writing mathematical sentences and symbols well, 2) understanding basic definitions, 3) determining the concept to be used in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  20. Comparative analysis of positive and negative attitudes toward statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during the statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content. Students who have negative attitudes toward statistics may feel depressed, especially in group assignments, are at risk of failure, are often highly emotional, and may not move forward. Therefore, this study investigates students' attitudes toward learning statistics. Six latent constructs measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated instrument Survey of Attitudes Toward Statistics (SATS). This study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP). The respondents were students taking the applied statistics course from different faculties. From the analysis, it was found that the questionnaire is acceptable, and the relationships among the constructs were proposed and investigated. Students show full effort to master the statistics course, find the course enjoyable, have confidence in their intellectual capacity, and hold more positive than negative attitudes toward statistics learning. In conclusion, positive attitudes were mostly exhibited in the affect, cognitive competence, value, interest and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.

  1. Gene flow analysis method, the D-statistic, is robust in a wide parameter space.

    Science.gov (United States)

    Zheng, Yichen; Janke, Axel

    2018-01-08

    We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space and thus its applicability to a wide taxonomic range has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting by diluting the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, size and number of loci. In addition, we examined the ability of the f-statistics, [Formula: see text] and [Formula: see text], to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to implement to practical questions in biology due to lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic background. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times) but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
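    The D-statistic itself is compact; for biallelic site patterns on the tree (((P1,P2),P3),O), a minimal sketch using standard ABBA-BABA counting on derived-allele frequencies (not the authors' pipeline) is:

```python
import numpy as np

def d_statistic(p1, p2, p3, out):
    """D = (ABBA - BABA) / (ABBA + BABA), with pX the derived-allele
    frequency arrays (0..1) of the three populations and the outgroup."""
    abba = (1 - p1) * p2 * p3 * (1 - out)
    baba = p1 * (1 - p2) * p3 * (1 - out)
    return (abba.sum() - baba.sum()) / (abba.sum() + baba.sum())

rng = np.random.default_rng(12)
n = 100_000
p1, p2, p3 = (rng.random(n) for _ in range(3))
out = np.zeros(n)                     # outgroup fixed for the ancestral allele
print(d_statistic(p1, p2, p3, out))   # ~0 when there is no gene flow signal
```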

  2. Approach to analysis of inter-regional similarity of investment activity support measures in legislation of regions (on the example of Krasnoyarsk region

    Directory of Open Access Journals (Sweden)

    Valentina F. Lapo

    2017-01-01

    Full Text Available Most investment stimulation measures in Russia are concentrated in the legal documents of the regions of the Russian Federation. They are directed at intensifying investment activity in the regions. How similar are these investment stimulation conceptions? The literature does not address the methodical questions of quantitative analysis and inter-regional comparison; there are no results of statistical research into inter-regional correlations of stimulation methods, no analysis of the dynamics of this process, and no special measuring instruments. The presented work aims to fill these gaps. An approach to the spatial correlation analysis of legislative norms is offered in the research. A classification of investment stimulation methods is developed, a way of preparing and coding data for research is offered, and an approach with a system of coefficients for the analysis of inter-regional interrelations of legislative systems of investment stimulation is proposed: a proximity coefficient of regional legislation, a coefficient of structure similarity, and a dynamic coincidence index. A space-time base of investment stimulation methods covering the regions of the Russian Federation over 12 years, 758 documents in total, was collected and statistically processed for the research. The source of the texts is the site of the Ministry of Justice of the Russian Federation. The study of the documents has revealed a number of regularities in the formation of regional investment stimulation systems. The regions that are most similar in terms of the structure of stimulation methods are identified. We have found a group of regions for which similarity of the legislation increases and a group for which similarity decreases. Overall, a general trend toward reduced similarity of the legislation is observed between Krasnoyarsk territory and the other regions of the Russian Federation. Calculations have

  3. Vapor Pressure Data Analysis and Statistics

    Science.gov (United States)

    2016-12-01

    Fragments recovered from the report indicate correlation coefficients typically near 8, 2000, and 200, respectively (consistent with an Antoine-type vapor pressure equation); the A (or a) value is directly related to vapor pressure and will be greater for high vapor pressure materials. An equation numbered (10) in the report defines a fit quantity in which n is the number of data points, Yi is the natural logarithm of the ith experimental vapor pressure value, and Xi is the corresponding independent variable. [Vapor Pressure Data Analysis and Statistics, ECBC-TR-1422, Ann Brozena, Research and Technology Directorate.]

  4. Statistical analysis of planktic foraminifera of the surface Continental ...

    African Journals Online (AJOL)

    Planktic foraminiferal assemblages recorded from selected samples of shallow continental shelf sediments off southwestern Nigeria were subjected to statistical analysis. Principal Component Analysis (PCA) was used to determine variants of planktic parameters. Values obtained for these parameters were ...

  5. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the reporting of many data analysis routines, combined with inadequate training and a shortage of accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Applied Behavior Analysis and Statistical Process Control?

    Science.gov (United States)

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  7. Statistical analysis and data mining of digital reconstructions of dendritic morphologies

    Directory of Open Access Journals (Sweden)

    Sridevi Polavaram

    2014-12-01

    Full Text Available Neuronal morphology is diverse among animal species, developmental stages, brain regions, and cell types. The geometry of individual neurons also varies substantially even within the same cell class. Moreover, specific histological, imaging, and reconstruction methodologies can differentially affect morphometric measures. The quantitative characterization of neuronal arbors is necessary for in-depth understanding of the structure-function relationship in nervous systems. The large collection of community-contributed digitally reconstructed neurons available at NeuroMorpho.Org constitutes a big data research opportunity for neuroscience discovery beyond the approaches typically pursued in single laboratories. To illustrate this potential and the related challenges, we present a database-wide statistical analysis of dendritic arbors enabling the quantification of major morphological similarities and differences across broadly adopted metadata categories. Furthermore, we adopt a complementary unsupervised approach based on clustering and dimensionality reduction to identify the main morphological parameters leading to the most statistically informative structural classification. We find that specific combinations of measures related to branching density, overall size, tortuosity, bifurcation angles, arbor flatness, and topological asymmetry can capture anatomically and functionally relevant features of dendritic trees. The reported results only represent a small fraction of the relationships available for data exploration and hypothesis testing enabled by digital sharing of morphological reconstructions.

  8. Statistical analysis and data mining of digital reconstructions of dendritic morphologies.

    Science.gov (United States)

    Polavaram, Sridevi; Gillette, Todd A; Parekh, Ruchi; Ascoli, Giorgio A

    2014-01-01

    Neuronal morphology is diverse among animal species, developmental stages, brain regions, and cell types. The geometry of individual neurons also varies substantially even within the same cell class. Moreover, specific histological, imaging, and reconstruction methodologies can differentially affect morphometric measures. The quantitative characterization of neuronal arbors is necessary for in-depth understanding of the structure-function relationship in nervous systems. The large collection of community-contributed digitally reconstructed neurons available at NeuroMorpho.Org constitutes a "big data" research opportunity for neuroscience discovery beyond the approaches typically pursued in single laboratories. To illustrate this potential and the related challenges, we present a database-wide statistical analysis of dendritic arbors enabling the quantification of major morphological similarities and differences across broadly adopted metadata categories. Furthermore, we adopt a complementary unsupervised approach based on clustering and dimensionality reduction to identify the main morphological parameters leading to the most statistically informative structural classification. We find that specific combinations of measures related to branching density, overall size, tortuosity, bifurcation angles, arbor flatness, and topological asymmetry can capture anatomically and functionally relevant features of dendritic trees. The reported results only represent a small fraction of the relationships available for data exploration and hypothesis testing enabled by sharing of digital morphological reconstructions.

  9. Characterization of self-similarity properties of turbulence in magnetized plasmas

    International Nuclear Information System (INIS)

    Scipioni, A.; Rischette, P.; Bonhomme, G.; Devynck, P.

    2008-01-01

    The understanding of turbulence in magnetized plasmas and its role in cross-field transport is still greatly incomplete. Several previous works reported evidence of long-time correlations compatible with an avalanche type of radial transport. Persistence properties in time records have been deduced from high values of the Hurst exponent obtained with the rescaled range R/S analysis applied to experimental probe data acquired in the edge of tokamaks. In this paper the limitations of this R/S method, in particular when applied to signals having mixed statistics, are investigated, and the great advantages of wavelet decomposition as a tool to characterize the self-similarity properties of experimental signals are highlighted. Furthermore, the analysis of modified simulated fractional Brownian motions (fBm) and fractional Gaussian noises (fGn) allows us to discuss the relationship between high values of the Hurst exponent and long-range correlations. It is shown that for such simulated signals with mixed statistics, persistence at large time scales can still reflect the self-similarity properties of the original fBm and does not imply the existence of long-range correlations, which are destroyed. It is thus questionable to assert the existence of long-range correlations for experimental signals with non-Gaussian and mixed statistics just from high values of the Hurst exponent.
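
    The following minimal sketch shows the rescaled-range (R/S) estimate of the Hurst exponent discussed above: the slope of log(R/S) against log(window length). It is a generic illustration on synthetic white noise, not the authors' analysis code; the window grid and the noise stand-in are assumptions.

    import numpy as np

    def rs_hurst(x):
        """Estimate the Hurst exponent of a 1-D signal by R/S analysis."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        log_w, log_rs = [], []
        for w in 2 ** np.arange(3, int(np.log2(n))):
            rs = []
            for start in range(0, n - w + 1, w):
                seg = x[start:start + w]
                dev = np.cumsum(seg - seg.mean())    # cumulative deviation from the mean
                s = seg.std(ddof=1)
                if s > 0:
                    rs.append((dev.max() - dev.min()) / s)   # rescaled range R/S
            log_w.append(np.log(w))
            log_rs.append(np.log(np.mean(rs)))
        # Slope of log(R/S) versus log(window length) estimates the Hurst exponent.
        return np.polyfit(log_w, log_rs, 1)[0]

    rng = np.random.default_rng(0)
    print(rs_hurst(rng.standard_normal(4096)))   # close to 0.5 for uncorrelated noise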

  10. Finite element modeling of the human kidney for probabilistic occupant models: Statistical shape analysis and mesh morphing.

    Science.gov (United States)

    Yates, Keegan M; Untaroiu, Costin D

    2018-04-16

    Statistical shape analysis was conducted on 15 pairs (left and right) of human kidneys. It was shown that the left and right kidneys were significantly different in size and shape. In addition, several common modes of kidney variation were identified using statistical shape analysis. Semi-automatic mesh morphing techniques have been developed to efficiently create subject-specific meshes from a template mesh with a similar geometry. Subject-specific meshes as well as probabilistic kidney meshes were created from a template mesh. Mesh quality remained about the same as that of the template mesh, while mesh creation took only a fraction of the time needed to build a mesh from scratch or to morph with manually identified landmarks. This technique can help enhance the quality of information gathered from experimental testing with subject-specific meshes, and can help predict injury more efficiently by creating models with the mean shape as well as models at the extremes of each principal component. Copyright © 2018 Elsevier Ltd. All rights reserved.
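
    As a hedged illustration of the statistical shape analysis step described above, the sketch below extracts principal modes of shape variation by PCA over corresponding landmark coordinates. The random landmark matrix is a placeholder, not kidney data.

    import numpy as np

    rng = np.random.default_rng(1)
    n_subjects, n_landmarks = 15, 200
    # Each row is one subject's shape, flattened to (x1, y1, z1, x2, y2, z2, ...).
    shapes = rng.normal(size=(n_subjects, n_landmarks * 3))

    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # SVD of the centered data yields the principal modes of shape variation.
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    variance = s**2 / (n_subjects - 1)
    explained = variance / variance.sum()
    print("variance explained by first 3 modes:", explained[:3].round(3))

    # A new shape at +2 standard deviations along the first mode of variation:
    extreme = mean_shape + 2 * np.sqrt(variance[0]) * vt[0]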

  11. Statistical analysis of JET disruptions

    International Nuclear Information System (INIS)

    Tanga, A.; Johnson, M.F.

    1991-07-01

    In the operation of JET and of any tokamak, many discharges are terminated by a major disruption. The disruptive termination of a discharge is usually an unwanted event which may cause damage to the structure of the vessel. In a reactor, disruptions are potentially a very serious problem, hence the importance of studying them and devising methods to avoid them. Statistical information has been collected about the disruptions which have occurred at JET over a long span of operations. The analysis is focused on the operational aspects of the disruptions rather than on the underlying physics. (Author)

  12. A statistically harmonized alignment-classification in image space enables accurate and robust alignment of noisy images in single particle analysis.

    Science.gov (United States)

    Kawata, Masaaki; Sato, Chikara

    2007-06-01

    In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key to high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 channel and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.

  13. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independently distributed.

  14. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Patel, Bimal; Heising, C.D.

    1997-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are within control specification limits and four parameters are not in a state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (Author)
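
    The Shewhart X-bar and R charts named above reduce to simple control-limit arithmetic; a minimal sketch for subgroups of size 5 follows. The SPC constants A2, D3 and D4 are standard tabulated values, and the pump readings are synthetic stand-ins.

    import numpy as np

    A2, D3, D4 = 0.577, 0.0, 2.114        # standard SPC constants for subgroup size 5

    rng = np.random.default_rng(2)
    subgroups = rng.normal(loc=100.0, scale=2.0, size=(30, 5))  # stand-in pump data

    xbar = subgroups.mean(axis=1)                          # subgroup means
    r = subgroups.max(axis=1) - subgroups.min(axis=1)      # subgroup ranges
    xbar_bar, r_bar = xbar.mean(), r.mean()

    # Control limits for the X-bar chart and the R chart.
    ucl_x, lcl_x = xbar_bar + A2 * r_bar, xbar_bar - A2 * r_bar
    ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

    print(f"X-bar limits: [{lcl_x:.2f}, {ucl_x:.2f}], R limits: [{lcl_r:.2f}, {ucl_r:.2f}]")
    print("out-of-control subgroups:", np.where((xbar < lcl_x) | (xbar > ucl_x))[0])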

  15. Research and Development of Statistical Analysis Software System of Maize Seedling Experiment

    OpenAIRE

    Hui Cao

    2014-01-01

    In this study, software engineer measures were used to develop a set of software system for maize seedling experiments statistics and analysis works. During development works, B/S structure software design method was used and a set of statistics indicators for maize seedling evaluation were established. The experiments results indicated that this set of software system could finish quality statistics and analysis for maize seedling very well. The development of this software system explored a...

  16. Statistical trend analysis methods for temporal phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
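
    As one concrete example of the non-parametric tests surveyed above, a minimal sketch of the Cox-Stuart trend test follows: paired first-half/second-half differences are sign-tested against a Binomial(m, 0.5) null. The illustrative data are synthetic, not operational experience.

    import numpy as np
    from scipy import stats

    def cox_stuart(x):
        """Two-sided Cox-Stuart test for trend; returns a p-value."""
        x = np.asarray(x, dtype=float)
        c = len(x) // 2
        first, last = x[:c], x[-c:]       # the middle value is dropped if n is odd
        diff = last - first
        diff = diff[diff != 0]            # ties carry no sign information
        n_pos = int((diff > 0).sum())
        # Under the no-trend null, positive signs follow Binomial(len(diff), 0.5).
        return stats.binomtest(n_pos, len(diff), 0.5).pvalue

    rng = np.random.default_rng(3)
    trended = np.arange(20) * 0.5 + rng.normal(size=20)
    print("p-value with trend:", cox_stuart(trended))   # small, trend detected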

  17. Statistical trend analysis methods for temporal phenomena

    International Nuclear Information System (INIS)

    Lehtinen, E.; Pulkkinen, U.; Poern, K.

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods.

  18. Combined data preprocessing and multivariate statistical analysis characterizes fed-batch culture of mouse hybridoma cells for rational medium design.

    Science.gov (United States)

    Selvarasu, Suresh; Kim, Do Yun; Karimi, Iftekhar A; Lee, Dong-Yup

    2010-10-01

    We present an integrated framework for characterizing fed-batch cultures of mouse hybridoma cells producing monoclonal antibody (mAb). This framework systematically combines data preprocessing, elemental balancing and statistical analysis techniques. Initially, specific rates of cell growth, glucose/amino acid consumption and mAb/metabolite production were calculated via curve fitting using logistic equations, with subsequent elemental balancing of the preprocessed data indicating the presence of experimental measurement errors. Multivariate statistical analysis was then employed to understand physiological characteristics of the cellular system. The results from principal component analysis (PCA) revealed three major clusters of amino acids with similar trends in their consumption profiles: (i) arginine, threonine and serine, (ii) glycine, tyrosine, phenylalanine, methionine, histidine and asparagine, and (iii) lysine, valine and isoleucine. Further analysis using partial least square (PLS) regression identified key amino acids which were positively or negatively correlated with the cell growth, mAb production and the generation of lactate and ammonia. Based on these results, the optimal concentrations of key amino acids in the feed medium can be inferred, potentially leading to an increase in cell viability and productivity, as well as a decrease in toxic waste production. The study demonstrated how the current methodological framework using multivariate statistical analysis techniques can serve as a potential tool for deriving rational medium design strategies. Copyright © 2010 Elsevier B.V. All rights reserved.
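
    A hedged sketch of the preprocessing step described above: fitting a logistic equation to culture data so that smooth specific rates can be derived from the fit. The function form, parameter values and data are illustrative assumptions, not the study's actual pipeline.

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, xmax, x0, mu):
        """Logistic growth: x(t) = xmax / (1 + (xmax/x0 - 1) * exp(-mu * t))."""
        return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

    t = np.linspace(0, 10, 25)                         # culture time, days
    rng = np.random.default_rng(4)
    x_obs = logistic(t, 8.0, 0.3, 0.9) * rng.normal(1.0, 0.03, t.size)  # noisy density

    popt, _ = curve_fit(logistic, t, x_obs, p0=(x_obs.max(), x_obs[0], 0.5))
    x_fit = logistic(t, *popt)
    # Specific growth rate mu_spec(t) = (1/x) dx/dt, taken from the smooth fit.
    mu_spec = np.gradient(x_fit, t) / x_fit
    print("fitted mu:", round(popt[2], 3), "peak specific rate:", round(mu_spec.max(), 3))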

  19. Testing Self-Similarity Through Lamperti Transformations

    KAUST Repository

    Lee, Myoungji

    2016-07-14

    Self-similar processes have been widely used in modeling real-world phenomena occurring in environmetrics, network traffic, image processing, and stock pricing, to name but a few. The estimation of the degree of self-similarity has been studied extensively, while statistical tests for self-similarity are scarce and limited to processes indexed in one dimension. This paper proposes a statistical hypothesis test procedure for self-similarity of a stochastic process indexed in one dimension and multi-self-similarity for a random field indexed in higher dimensions. If self-similarity is not rejected, our test provides a set of estimated self-similarity indexes. The key is to test stationarity of the inverse Lamperti transformations of the process. The inverse Lamperti transformation of a self-similar process is a strongly stationary process, revealing a theoretical connection between the two processes. To demonstrate the capability of our test, we test self-similarity of fractional Brownian motions and sheets, their time deformations and mixtures with Gaussian white noise, and the generalized Cauchy family. We also apply the self-similarity test to real data: annual minimum water levels of the Nile River, network traffic records, and surface heights of food wrappings. © 2016, International Biometric Society.
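
    The core idea above can be shown in a few lines: for an H-self-similar process X, the inverse Lamperti transform Y(t) = exp(-Ht) X(exp(t)) is stationary. The sketch below checks this for Brownian motion (H = 0.5); it illustrates the principle only and is not the authors' test statistic.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 2**16
    bm = np.cumsum(rng.standard_normal(n)) / np.sqrt(n)   # Brownian path on (0, 1]
    time = np.arange(1, n + 1) / n

    H = 0.5
    # Sample X at exponentially spaced times exp(u), with u on a uniform grid.
    u = np.linspace(np.log(time[0]), 0.0, 512)
    x_at_exp_u = np.interp(np.exp(u), time, bm)
    y = np.exp(-H * u) * x_at_exp_u                        # inverse Lamperti transform

    # Stationarity shows up as roughly constant variability across blocks of Y.
    blocks = y.reshape(8, -1)
    print("block standard deviations:", blocks.std(axis=1).round(2))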

  20. StOCNET : Software for the statistical analysis of social networks

    NARCIS (Netherlands)

    Huisman, M.; van Duijn, M.A.J.

    2003-01-01

    StOCNET is an open software system in a Windows environment for the advanced statistical analysis of social networks. It provides a platform to make a number of recently developed and therefore not (yet) standard statistical methods available to a wider audience. A flexible user interface utilizing

  1. AutoBayes: A System for Generating Data Analysis Programs from Statistical Models

    OpenAIRE

    Fischer, Bernd; Schumann, Johann

    2003-01-01

    Data analysis is an important scientific task which is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis...

  2. Equipment Maintenance management support system based on statistical analysis of maintenance history data

    International Nuclear Information System (INIS)

    Shimizu, S.; Ando, Y.; Morioka, T.

    1990-01-01

    Plant maintenance is becoming increasingly important with the increase in the number of nuclear power stations and in plant operating time. Various kinds of requirements for plant maintenance, such as countermeasures for equipment degradation and saving maintenance costs while keeping up plant reliability and productivity, have been proposed. For this purpose, plant maintenance programs should be improved based on equipment reliability estimated from field data. In order to meet these requirements, it is planned to develop an equipment maintenance management support system for nuclear power plants based on statistical analysis of equipment maintenance history data. The main difference between this proposed new method and current similar methods is that it evaluates not only failure data but also maintenance data, which include normal termination data and data on some degree of degradation or functional disorder of equipment and parts. It is therefore possible to utilize these field data for improving maintenance schedules and to evaluate actual equipment and parts reliability under the current maintenance schedule. In the present paper, the authors show the objectives of this system, an outline of this system and its functions, and the basic technique for collecting and managing maintenance history data for statistical analysis. It is shown, from the results of feasibility tests using simulated maintenance history data, that this system has the ability to provide useful information for maintenance and design enhancement.

  3. Observations in the statistical analysis of NBG-18 nuclear graphite strength tests

    International Nuclear Information System (INIS)

    Hindley, Michael P.; Mitchell, Mark N.; Blaine, Deborah C.; Groenwold, Albert A.

    2012-01-01

    Highlights: ► Statistical analysis of NBG-18 nuclear graphite strength tests. ► A Weibull distribution and a normal distribution are tested for all data. ► A bimodal distribution in the CS data is confirmed. ► The CS data set has the lowest variance. ► A combined data set is formed and has a Weibull distribution. - Abstract: The purpose of this paper is to report on the selection of a statistical distribution to represent the experimental material strength of NBG-18 nuclear graphite. Three large sets of samples were tested during the material characterisation of the Pebble Bed Modular Reactor and Core Structure Ceramics materials. These sets of samples are tensile strength, flexural strength and compressive strength (CS) measurements. A relevant statistical fit is determined and the goodness of fit is also evaluated for each data set. The data sets are also normalised for ease of comparison, and combined into one representative data set. The validity of this approach is demonstrated. A second failure mode distribution is found in the CS test data. Identifying this failure mode supports similar observations made in the past. The success of fitting the Weibull distribution to the normalised data sets allows us to improve the basis for the estimates of the variability. This could also imply that the variability in graphite strength for the different strength measures is based on the same flaw distribution and is thus a property of the material.
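
    A hedged sketch of the kind of Weibull fitting and goodness-of-fit checking reported above: a two-parameter fit (location fixed at zero) followed by a Kolmogorov-Smirnov check. The strength values are synthetic, not NBG-18 data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    strength = stats.weibull_min.rvs(c=8.0, scale=25.0, size=300, random_state=rng)

    # Fix the location parameter at zero for a two-parameter Weibull fit.
    shape, loc, scale = stats.weibull_min.fit(strength, floc=0.0)
    print(f"Weibull modulus m = {shape:.2f}, characteristic strength = {scale:.2f}")

    # Goodness of fit via the Kolmogorov-Smirnov test against the fitted distribution.
    ks = stats.kstest(strength, "weibull_min", args=(shape, loc, scale))
    print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")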

  4. Statistical analysis of fatigue crack growth behavior for grade B cast steel

    International Nuclear Information System (INIS)

    Li, W.; Sakai, T.; Li, Q.; Wang, P.

    2011-01-01

    Tests for fatigue crack growth rate (FCGR) and crack-tip opening displacement (CTOD) were performed to clarify the fatigue crack growth behavior of a railway grade B cast steel. The threshold values of this steel with specific survival probabilities are evaluated, in which the mean value is 8.3516 MPa·m^1/2, very similar to the experimental value of about 8.7279 MPa·m^1/2. Under the conditions of plane strain and small-scale yielding, the values of fracture toughness for this steel with specific survival probabilities are converted from the corresponding critical CTOD values, in which the mean value is about 138.4256 MPa·m^1/2. In consideration of the inherent variability of crack growth rates, six statistical models are proposed to represent the probabilistic FCGR curves of this steel in the entire crack propagation region, from the viewpoints of statistical evaluation of the number of cycles at a given crack size and the crack growth rate at a given stress intensity factor range, the stochastic character of crack growth, and statistical analysis of the coefficient and exponent in the FCGR power law equation. Based on model adequacy checking, the results show that all models are basically in good agreement with the test data. Although probabilistic damage-tolerant design based on some models may involve a certain amount of risk in the stable crack propagation region, this accords with the fact that the dispersion of the test data in this region is relatively small.
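
    The FCGR power law referred to above is the Paris law, da/dN = C (ΔK)^m, and the statistical treatment of its coefficient and exponent amounts to least squares on the log-transformed data. The sketch below is illustrative only; the numbers are not the paper's measurements.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    delta_k = np.linspace(10, 40, 60)                    # stress intensity range, MPa·m^1/2
    true_c, true_m = 1e-11, 3.0
    # Synthetic growth-rate data with lognormal scatter around the Paris law.
    dadn = true_c * delta_k**true_m * rng.lognormal(0.0, 0.15, delta_k.size)

    # Log-log linear regression recovers the exponent m and log10(C) with their scatter.
    res = stats.linregress(np.log10(delta_k), np.log10(dadn))
    print(f"m = {res.slope:.2f} +/- {res.stderr:.2f}")
    print(f"log10(C) = {res.intercept:.2f} +/- {res.intercept_stderr:.2f}")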

  5. Statistics that learn: can logistic discriminant analysis improve diagnosis in brain SPECT?

    International Nuclear Information System (INIS)

    Behin-Ain, S.; Barnden, L.; Kwiatek, R.; Del Fante, P.; Casse, R.; Burnet, R.; Chew, G.; Kitchener, M.; Boundy, K.; Unger, S.

    2002-01-01

    Logistic discriminant analysis (LDA) is a statistical technique capable of discriminating individuals within a diseased group against normals. It also enables classification of various diseases within a group of patients. This technique provides a quantitative, automated and non-subjective clinical diagnostic tool. Based on a population known to have the disease and a normal control group, an algorithm was developed and trained to identify regions in the human brain responsible for the disease in question. The algorithm outputs a statistical map representing diseased or normal probability on a voxel or cluster basis, from which an index is generated for each subject. The algorithm also generates a set of coefficients which is used to compute an index for the classification of new subjects. The results are comparable to, and complement, those of Statistical Parametric Mapping (SPM), which employs a more common linear discriminant technique. The results are presented for brain SPECT studies of two diseases: chronic fatigue syndrome (CFS) and fibromyalgia (FM). Specificity of 100% and sensitivity of 94% are achieved for the CFS study (similar to the SPM results); for the FM study, 82% specificity and 94% sensitivity are achieved, with the corresponding SPM results showing 90% specificity and 82% sensitivity. The results encourage the application of LDA for discrimination of new single subjects as well as of diseased and normal groups. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  6. Statistical analysis and interpolation of compositional data in materials science.

    Science.gov (United States)

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
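
    One of the specialized tools alluded to above is the centered log-ratio (clr) transform, which maps compositions from the simplex into ordinary Euclidean space where standard statistics are valid. A minimal sketch, with purely illustrative compositions:

    import numpy as np

    def clr(compositions):
        """Centered log-ratio transform of rows that are positive and sum to 1."""
        logx = np.log(compositions)
        return logx - logx.mean(axis=1, keepdims=True)

    # Three illustrative atomic compositions of a ternary system (rows sum to 1).
    comps = np.array([
        [0.60, 0.30, 0.10],
        [0.55, 0.35, 0.10],
        [0.50, 0.30, 0.20],
    ])
    z = clr(comps)
    print("clr coordinates:\n", z.round(3))
    # Means, distances, PCA, or interpolation are now safe in clr space; results
    # map back to the simplex via the inverse (closure of exponentials) transform.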

  7. An Application of Multivariate Statistical Analysis for Query-Driven Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Gosink, Luke J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Garth, Christoph [Univ. of California, Davis, CA (United States); Anderson, John C. [Univ. of California, Davis, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2011-03-01

    Driven by the ability to generate ever-larger, increasingly complex data, there is an urgent need in the scientific community for scalable analysis methods that can rapidly identify salient trends in scientific data. Query-Driven Visualization (QDV) strategies are among the small subset of techniques that can address both large and highly complex datasets. This paper extends the utility of QDV strategies with a statistics-based framework that integrates non-parametric distribution estimation techniques with a new segmentation strategy to visually identify statistically significant trends and features within the solution space of a query. In this framework, query distribution estimates help users to interactively explore their query's solution and visually identify the regions where the combined behavior of constrained variables is most important, statistically, to their inquiry. Our new segmentation strategy extends the distribution estimation analysis by visually conveying the individual importance of each variable to these regions of high statistical significance. We demonstrate the analysis benefits these two strategies provide and show how they may be used to facilitate the refinement of constraints over variables expressed in a user's query. We apply our method to datasets from two different scientific domains to demonstrate its broad applicability.

  8. Design of nuclear fuel cells by means of a statistical analysis and a sensibility study

    International Nuclear Information System (INIS)

    Jauregui C, V.; Castillo M, J. A.; Ortiz S, J. J.; Montes T, J. L.; Perusquia del C, R.

    2013-10-01

    This work presents the results of a statistical analysis performed to study the performance of nuclear fuel cells, considering the frequencies with which fuel rods are selected in the cell design. Rods were selected in three ways: in the first, the plot of the respective frequencies resembles a normal distribution; in the second, the frequency graph is of inverted chi-squared type; and in the last, the rods are chosen at random. The heuristic techniques used for the cell design were neural networks, ant colony optimization, and a hybrid of scatter search and path relinking. The statistical analysis of the cell designs considered the local power peaking factor and the infinite neutron multiplication factor (k∞) of the cell. In addition, the performance of the designed cells was analyzed by verifying the positions of the rods containing gadolinium. The results show that it is possible to design nuclear fuel cells with good performance when the selection frequencies of the rods used in the design are taken into account. (Author)

  9. Comprehensive analysis of tornado statistics in comparison to earthquakes: intensity and temporal behaviour

    Directory of Open Access Journals (Sweden)

    L. Schielicke

    2013-01-01

    Tornadoes and earthquakes are characterised by high variability in their properties concerning intensity, geometry and temporal behaviour. Earthquakes are known for power-law behaviour in their intensity (Gutenberg–Richter law) and temporal statistics (e.g. Omori law and interevent waiting times). The observed similarity in the high variability of these two phenomena motivated us to compare the statistical behaviour of tornadoes using seismological methods and to search for power-law behaviour. In general, the statistics of tornadoes show power-law behaviour partly coextensive with characteristic scales when the temporal resolution is high (10 to 60 min). These characteristic scales match the typical diurnal behaviour of tornadoes, which is characterised by a maximum of tornado occurrences in the late afternoon hours. Furthermore, the distributions support the observation that tornadoes cluster in time. Finally, we briefly discuss a possibly similar underlying structure composed of heterogeneous, coupled, interactive threshold oscillators that could explain the observed behaviour.

  10. Color Image Segmentation Based on Statistics of Location and Feature Similarity

    Science.gov (United States)

    Mori, Fumihiko; Yamada, Hiromitsu; Mizuno, Makoto; Sugano, Naotoshi

    The process of image segmentation and extraction of remarkable regions is an important research subject for image understanding, yet algorithms based on global features are rarely found. The requirement for such an image segmentation algorithm is to reduce over-segmentation and over-unification as much as possible. We developed an algorithm using the multidimensional convex hull based on density as the global feature. Concretely, we propose a new algorithm in which regions are expanded according to region statistics such as the mean value, standard deviation, maximum value and minimum value of pixel location, brightness and color elements, with the statistics updated as the regions grow. We also introduced a new concept of conspicuity degree and applied the method to 21 different images to examine its effectiveness. The remarkable object regions extracted by the presented system coincided closely with those pointed out by the sixty-four subjects who participated in the psychological experiment.

  11. Explorations in Statistics: The Analysis of Ratios and Normalized Data

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized--or standardized--data. As researchers, we compute a ratio--a numerator divided by a denominator--to compute a…

  12. Statistical Energy Analysis (SEA) and Energy Finite Element Analysis (EFEA) Predictions for a Floor-Equipped Composite Cylinder

    Science.gov (United States)

    Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.

    2011-01-01

    Comet Enflow is a commercially available, high frequency vibroacoustic analysis software founded on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). Energy Finite Element Analysis (EFEA) was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) and experimental results. Statistical Energy Analysis (SEA) predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.

  13. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis.

  14. 15N/14N isotopic ratio and statistical analysis: an efficient way of linking seized Ecstasy tablets

    International Nuclear Information System (INIS)

    Palhol, Fabien; Lamoureux, Catherine; Chabrillat, Martine; Naulet, Norbert

    2004-01-01

    In this study, the 15N/14N isotopic ratios of 106 samples of 3,4-methylenedioxymethamphetamine (MDMA) extracted from Ecstasy tablets are presented. These ratios, measured using gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS), show a large discrimination between samples, with a range of δ15N values between -17 and +19‰, depending on the precursors and the method used in clandestine laboratories. Thus, δ15N values can be used in a statistical analysis carried out in order to link Ecstasy tablets prepared with the same precursors and synthetic pathway. The similarity index obtained after principal component analysis and hierarchical cluster analysis appears to be an efficient way to group tablets seized in different places.
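
    A hedged sketch of a PCA-plus-hierarchical-clustering workflow of the kind described above, run on synthetic isotopic profiles; it illustrates the grouping idea only and is not the authors' pipeline.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(8)
    # Two synthetic "production batches" with distinct delta-15N signatures.
    batch_a = rng.normal(-10.0, 1.0, size=(8, 3))
    batch_b = rng.normal(12.0, 1.0, size=(8, 3))
    samples = np.vstack([batch_a, batch_b])

    # PCA via eigendecomposition of the covariance matrix.
    centered = samples - samples.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(centered, rowvar=False))
    scores = centered @ eigvec[:, ::-1][:, :2]     # scores on the top two components

    # Hierarchical clustering (Ward linkage) on the component scores.
    labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
    print("cluster labels:", labels)               # the two batches should separate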

  15. Statistical trend analysis methodology for rare failures in changing technical systems

    International Nuclear Information System (INIS)

    Ott, K.O.; Hoffmann, H.J.

    1983-07-01

    A methodology for a statistical trend analysis (STA) of failure rates is presented. It applies primarily to relatively rare events in changing technologies or components. The formulation is more general and the assumptions are less restrictive than in a previously published version. Relations between the statistical analysis and probabilistic risk assessment (PRA) are discussed in terms of the categorization of decisions for action following particular failure events. The significance of tentatively identified trends is explored. In addition to statistical tests for trend significance, a combination of STA and PRA results quantifying the trend complement is proposed. The STA approach is compared with other concepts for trend characterization. (orig.)

  16. Analysis of thrips distribution: application of spatial statistics and Kriging

    Science.gov (United States)

    John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard

    1991-01-01

    Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...

  17. Statistical wind analysis for near-space applications

    Science.gov (United States)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18–30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20–22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
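
    The percentile predictions mentioned in the last sentence follow directly from the inverse CDF of a fitted Weibull model; a minimal sketch with placeholder shape and scale values (not the paper's fits):

    from scipy import stats

    shape, scale = 2.0, 12.0      # illustrative Weibull k and c (wind speed in m/s)
    wind = stats.weibull_min(c=shape, scale=scale)

    # The q-quantile wind speed comes from the percent-point function (inverse CDF).
    for q in (0.50, 0.95, 0.99):
        print(f"{q:.0%} wind speed: {wind.ppf(q):.1f} m/s")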

  18. Analysis of photon statistics with Silicon Photomultiplier

    International Nuclear Information System (INIS)

    D'Ascenzo, N.; Saveliev, V.; Wang, L.; Xie, Q.

    2015-01-01

    The Silicon Photomultiplier (SiPM) is a novel silicon-based photodetector that represents the modern approach to low photon flux detection. The aim of this paper is to provide an introduction to the statistical analysis methods needed to understand and quantitatively estimate the features of the response of the SiPM to a coherent source of light.

  19. Development of statistical analysis code for meteorological data (W-View)

    International Nuclear Information System (INIS)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori

    2003-03-01

    A computer code (W-View: Weather View) was developed to analyze meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactor' (Nuclear Safety Commission, January 28, 1982; revised March 29, 2001). The code provides the statistical meteorological data needed to assess the public dose in cases of normal operation and severe accident, as required for a nuclear reactor operating license. The code was revised from the original version, which ran on a large mainframe computer, to enable a personal computer user to analyze meteorological data simply and conveniently and to produce the statistical tables and figures of meteorology. (author)

  20. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1998-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach to plant maintenance and control, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R-charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are within control specification limits and four parameters are not in a state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (author)

  1. Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers

    Science.gov (United States)

    Keiffer, Greggory L.; Lane, Forrest C.

    2016-01-01

    Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…

  2. UNCOVERING THE FORMATION OF ULTRACOMPACT DWARF GALAXIES BY MULTIVARIATE STATISTICAL ANALYSIS

    International Nuclear Information System (INIS)

    Chattopadhyay, Tanuka; Sharina, Margarita; Davoust, Emmanuel; De, Tuli; Chattopadhyay, Asis Kumar

    2012-01-01

    We present a statistical analysis of the properties of a large sample of dynamically hot old stellar systems, from globular clusters (GCs) to giant ellipticals, which was performed in order to investigate the origin of ultracompact dwarf galaxies (UCDs). The data were mostly drawn from Forbes et al. We recalculated some of the effective radii, computed mean surface brightnesses and mass-to-light ratios, and estimated ages and metallicities. We completed the sample with GCs of M31. We used a multivariate statistical technique (K-Means clustering), together with a new algorithm (Gap Statistics) for finding the optimum number of homogeneous sub-groups in the sample, using a total of six parameters (absolute magnitude, effective radius, virial mass-to-light ratio, stellar mass-to-light ratio, and metallicity). We found six groups. FK1 and FK5 are composed of high- and low-mass elliptical galaxies, respectively. FK3 and FK6 are composed of high-metallicity and low-metallicity objects, respectively, and both include GCs and UCDs. Two very small groups, FK2 and FK4, are composed of Local Group dwarf spheroidals. Our groups differ in their mean masses and virial mass-to-light ratios. The relations between these two parameters are also different for the various groups. The probability density distributions of metallicity for the four groups of galaxies are similar to those of the GCs and UCDs. The brightest low-metallicity GCs and UCDs tend to follow the mass-metallicity relation like elliptical galaxies. The objects of FK3 are more metal-rich per unit effective luminosity density than high-mass ellipticals.
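
    A hedged sketch of K-Means combined with a simple Gap-statistic criterion for choosing the number of groups, as used above; the 2-D synthetic blobs stand in for the six-parameter stellar-system sample, and nothing here comes from the paper's data.

    import numpy as np
    from sklearn.cluster import KMeans

    def log_within_dispersion(data, k):
        # Within-cluster sum of squares (inertia) of a fitted K-Means model.
        return np.log(KMeans(n_clusters=k, n_init=10, random_state=0)
                      .fit(data).inertia_)

    def gap(data, k, n_ref=20, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = data.min(axis=0), data.max(axis=0)
        # Reference dispersion under a uniform null over the data's bounding box.
        ref = [log_within_dispersion(rng.uniform(lo, hi, data.shape), k)
               for _ in range(n_ref)]
        return np.mean(ref) - log_within_dispersion(data, k)

    rng = np.random.default_rng(9)
    blobs = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0, 3, 6)])
    for k in range(1, 6):
        print(k, round(gap(blobs, k), 3))   # the gap should peak near k = 3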

  3. UNCOVERING THE FORMATION OF ULTRACOMPACT DWARF GALAXIES BY MULTIVARIATE STATISTICAL ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Chattopadhyay, Tanuka [Department of Applied Mathematics, Calcutta University, 92 A.P.C. Road, Calcutta 700009 (India); Sharina, Margarita [Special Astrophysical Observatory, Russian Academy of Sciences, N. Arkhyz, KCh R 369167 (Russian Federation); Davoust, Emmanuel [IRAP, Universite de Toulouse, CNRS, 14 Avenue Edouard Belin, 31400 Toulouse (France); De, Tuli; Chattopadhyay, Asis Kumar, E-mail: tanuka@iucaa.ernet.in, E-mail: sme@sao.ru, E-mail: davoust@ast.obs-mip.fr, E-mail: akcstat@caluniv.ac.in [Department of Statistics, Calcutta University, 35 B.C. Road, Calcutta 700019 (India)

    2012-05-10

    We present a statistical analysis of the properties of a large sample of dynamically hot old stellar systems, from globular clusters (GCs) to giant ellipticals, which was performed in order to investigate the origin of ultracompact dwarf galaxies (UCDs). The data were mostly drawn from Forbes et al. We recalculated some of the effective radii, computed mean surface brightnesses and mass-to-light ratios, and estimated ages and metallicities. We completed the sample with GCs of M31. We used a multivariate statistical technique (K-Means clustering), together with a new algorithm (Gap Statistics) for finding the optimum number of homogeneous sub-groups in the sample, using a total of six parameters (absolute magnitude, effective radius, virial mass-to-light ratio, stellar mass-to-light ratio, and metallicity). We found six groups. FK1 and FK5 are composed of high- and low-mass elliptical galaxies, respectively. FK3 and FK6 are composed of high-metallicity and low-metallicity objects, respectively, and both include GCs and UCDs. Two very small groups, FK2 and FK4, are composed of Local Group dwarf spheroidals. Our groups differ in their mean masses and virial mass-to-light ratios. The relations between these two parameters are also different for the various groups. The probability density distributions of metallicity for the four groups of galaxies are similar to those of the GCs and UCDs. The brightest low-metallicity GCs and UCDs tend to follow the mass-metallicity relation like elliptical galaxies. The objects of FK3 are more metal-rich per unit effective luminosity density than high-mass ellipticals.

  4. Simulation Experiments in Practice: Statistical Design and Regression Analysis

    OpenAIRE

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independently distributed.

  5. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via a two-factor completely randomized design, and comparison of data means was carried out with Duncan's multiple-range test. Statistical analysis of the experimental data shows that temperature and weight fraction have a reasonable impact on the thermal ...

  6. Longitudinal data analysis a handbook of modern statistical methods

    CERN Document Server

    Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert

    2008-01-01

    Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint

  7. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  8. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  9. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering.  Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  10. Quantitative analysis and IBM SPSS statistics a guide for business and finance

    CERN Document Server

    Aljandali, Abdulkader

    2016-01-01

    This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting while Part II will present multivariate statistical methods, more advanced forecasting methods, and multivariate methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. Just a small subset of users of the package include the major clearing banks, the BBC, British Gas, British Airway...

  11. What type of statistical model to choose for the analysis of radioimmunoassays

    International Nuclear Information System (INIS)

    Huet, S.

    1984-01-01

    The current techniques used for the statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to make the response curve linear so as to avoid complicated computations. The present article shows that this practice has considerable effects (often neglected) on the statistical assumptions which must be formulated. A stricter analysis is proposed, based on the four-parameter logistic model. The advantages of this method are: the statistical assumptions formulated are based on observed data, and the model can be applied to almost all radioimmunoassays [fr]
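
    A minimal sketch of the four-parameter logistic (4PL) model advocated above, fitted by non-linear least squares; the standard concentrations and counts are invented for illustration and are not from the article.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, a, b, c, d):
        """4PL: response = d + (a - d) / (1 + (x / c)**b)."""
        return d + (a - d) / (1.0 + (x / c)**b)

    conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])   # standard concentrations
    rng = np.random.default_rng(10)
    counts = four_pl(conc, 2000.0, 1.2, 5.0, 150.0) * rng.normal(1.0, 0.02, conc.size)

    popt, _ = curve_fit(four_pl, conc, counts,
                        p0=(counts.max(), 1.0, np.median(conc), counts.min()))
    print(f"EC50 (parameter c) = {popt[2]:.2f}")
    # Unknown samples are then quantified by inverting the fitted calibration curve.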

  12. Self-similar continued root approximants

    International Nuclear Information System (INIS)

    Gluzman, S.; Yukalov, V.I.

    2012-01-01

    A novel method of summing asymptotic series is advanced. Such series repeatedly arise when employing perturbation theory in powers of a small parameter for complicated problems of condensed matter physics, statistical physics, and various applied problems. The method is based on the self-similar approximation theory involving self-similar root approximants. The constructed self-similar continued roots extrapolate asymptotic series to finite values of the expansion parameter. The self-similar continued roots contain, as a particular case, continued fractions and Padé approximants. A theorem on the convergence of the self-similar continued roots is proved. The method is illustrated by several examples from condensed-matter physics.

  13. Similar speaker recognition using nonlinear analysis

    International Nuclear Information System (INIS)

    Seo, J.P.; Kim, M.S.; Baek, I.C.; Kwon, Y.H.; Lee, K.S.; Chang, S.W.; Yang, S.I.

    2004-01-01

    Speech features in conventional speaker identification systems are usually obtained by linear methods in spectral space. However, these methods have the drawback that speakers with similar voices cannot be distinguished, because the characteristics of their voices are also similar in spectral space. To overcome this difficulty of linear methods, we propose to use the correlation exponent in nonlinear space as a new feature vector for speaker identification among persons with similar voices. We show that our proposed method markedly reduces the error rate of a speaker identification system for speakers with similar voices
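
    The correlation exponent proposed above can be estimated with the Grassberger-Procaccia correlation sum on a delay-embedded signal; in the sketch below a noisy sine stands in for speech frames, and the embedding parameters are assumptions.

    import numpy as np
    from scipy.spatial.distance import pdist

    def correlation_exponent(signal, dim=4, delay=2):
        """Slope of log C(r) versus log r for a delay-embedded scalar signal."""
        n = len(signal) - (dim - 1) * delay
        emb = np.column_stack([signal[i * delay:i * delay + n] for i in range(dim)])
        dists = pdist(emb)                      # all pairwise distances in embed space
        radii = np.logspace(np.log10(np.percentile(dists, 5)),
                            np.log10(np.percentile(dists, 50)), 10)
        corr = [(dists < r).mean() for r in radii]   # correlation sum C(r)
        return np.polyfit(np.log(radii), np.log(corr), 1)[0]

    t = np.linspace(0, 20 * np.pi, 2000)
    rng = np.random.default_rng(11)
    voice = np.sin(t) + 0.05 * rng.standard_normal(t.size)
    print("correlation exponent:", round(correlation_exponent(voice), 2))  # near 1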

  14. MODEL APPLICATION MULTIVARIATE ANALYSIS OF STATISTICAL TECHNIQUES PCA AND HCA ASSESSMENT QUESTIONNAIRE ON CUSTOMER SATISFACTION: CASE STUDY IN A METALLURGICAL COMPANY OF METAL CONTAINERS

    Directory of Open Access Journals (Sweden)

    Cláudio Roberto Rosário

    2012-07-01

    The purpose of this research is to improve the practice of customer satisfaction analysis. The article presents an analysis model for analyzing the answers to a customer satisfaction evaluation in a systematic way, with the aid of multivariate statistical techniques: specifically, exploratory analysis with PCA (Principal Component Analysis) and HCA (Hierarchical Cluster Analysis). The applicability of the model was evaluated as a tool to assist the company in identifying the value chain perceived by the customer when the customer satisfaction questionnaire is applied. With the assistance of multivariate statistical analysis, similar behavior was observed among customers. The model also allowed the company to review the questions of the questionnaire using analysis of the degree of correlation between questions, which was not the company’s practice before this research.

  15. Computerized statistical analysis with bootstrap method in nuclear medicine

    International Nuclear Information System (INIS)

    Zoccarato, O.; Sardina, M.; Zatta, G.; De Agostini, A.; Barbesti, S.; Mana, O.; Tarolo, G.L.

    1988-01-01

    Statistical analysis of data samples involves some hypotheses about the features of the data themselves. The accuracy of these hypotheses can influence the results of statistical inference. Among the new methods of computer-aided statistical analysis, the bootstrap method appears to be one of the most powerful, thanks to its ability to reproduce many artificial samples starting from a single original sample, and because it works without hypotheses about the data distribution. The authors applied the bootstrap method to two typical situations in a Nuclear Medicine Department. The determination of the normal range of serum ferritin, as assessed by radioimmunoassay and defined by the mean value ±2 standard deviations, starting from an experimental sample of small size, gives an unacceptable lower limit (plasma ferritin levels below zero). On the contrary, the results obtained by generating 5000 bootstrap samples give an interval of values (10.95 ng/ml - 72.87 ng/ml) corresponding to the normal ranges commonly reported. Moreover, the authors applied the bootstrap method in evaluating the possible error associated with the correlation coefficient determined between left ventricular ejection fraction (LVEF) values obtained by first-pass radionuclide angiocardiography with 99mTc and 195mAu. The results obtained indicate a high degree of statistical correlation and give the range of r² values to be considered acceptable for this type of study.
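
    A hedged sketch of the ferritin example above: a percentile-based bootstrap normal range avoids the negative lower limit that mean ± 2 SD can produce on a small, skewed sample. The synthetic data only mimic the situation described, and the parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(12)
    ferritin = rng.lognormal(mean=3.4, sigma=0.6, size=25)   # synthetic sample, ng/ml

    # Naive normal range: mean +/- 2 SD can dip below zero on skewed data.
    print("mean +/- 2 SD:", round(ferritin.mean() - 2 * ferritin.std(), 1),
          "-", round(ferritin.mean() + 2 * ferritin.std(), 1))

    # 5000 bootstrap resamples of the 2.5th and 97.5th percentiles.
    boot = rng.choice(ferritin, size=(5000, ferritin.size), replace=True)
    lo = np.percentile(boot, 2.5, axis=1).mean()
    hi = np.percentile(boot, 97.5, axis=1).mean()
    print("bootstrap normal range:", round(lo, 1), "-", round(hi, 1))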

  16. Software for statistical data analysis used in Higgs searches

    International Nuclear Information System (INIS)

    Gumpert, Christian; Moneta, Lorenzo; Cranmer, Kyle; Kreiss, Sven; Verkerke, Wouter

    2014-01-01

    The analysis and interpretation of data collected by the Large Hadron Collider (LHC) requires advanced statistical tools in order to quantify the agreement between observation and theoretical models. RooStats is a project providing a statistical framework for data analysis with a focus on discoveries, confidence intervals and the combination of different measurements in both Bayesian and frequentist approaches. It employs the RooFit data modelling language, where mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. RooStats and RooFit rely on the persistency technology of the ROOT framework. The usage of a common data format enables the concept of digital publishing of complicated likelihood functions. The statistical tools have been developed in close collaboration with the LHC experiments to ensure their applicability to real-life use cases. Numerous physics results have been produced using the RooStats tools, the discovery of the Higgs boson by the ATLAS and CMS experiments certainly being the most prominent among them. We will discuss tools currently used by LHC experiments to set exclusion limits, to derive confidence intervals and to estimate discovery significances based on frequentist statistics and the asymptotic behaviour of likelihood functions. Furthermore, new developments in RooStats and the performance optimisation necessary to cope with complex models depending on more than 1000 variables will be reviewed.
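
    The asymptotic likelihood-ratio machinery mentioned above can be illustrated for the simplest case, a single-bin counting experiment with known expected background. The closed-form significance below follows the standard asymptotic result (Wilks' theorem, as popularised by Cowan et al., arXiv:1007.1727); it is a generic sketch, not the RooStats API itself.

```python
import numpy as np

def discovery_significance(n_obs: float, b: float) -> float:
    """Asymptotic significance Z for a single-bin counting experiment with
    known expected background b, from the profile likelihood ratio q0."""
    if n_obs <= b:
        return 0.0  # no upward fluctuation, no discovery evidence
    q0 = 2.0 * (n_obs * np.log(n_obs / b) - (n_obs - b))
    return float(np.sqrt(q0))  # Z = sqrt(q0) in the asymptotic regime

print(discovery_significance(130, 100.0))  # roughly 2.9 sigma
```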

  17. PRECISE - pregabalin in addition to usual care: Statistical analysis plan

    NARCIS (Netherlands)

    S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)

    2016-01-01

    Background: Sciatica is a severe, disabling condition that lacks high-quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double-blind, randomised, placebo-controlled trial.

  18. Statistical margin to DNB safety analysis approach for LOFT

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1982-01-01

    A method was developed and used for LOFT thermal safety analysis to estimate the statistical margin to DNB for the hot rod, and to base the safety analysis on desired DNB probability limits. This advanced approach uses response surface analysis methods, a very efficient experimental design, and a second-order response surface equation with a second-order error propagation analysis to define the MDNBR probability density function. Calculations for limiting transients were used in the response surface analysis, thereby including transient interactions and trip uncertainties in the MDNBR probability density.
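
    A minimal sketch of the response-surface idea: fit a second-order surface to hypothetical code-calculated MDNBR values over a small experimental design, then propagate input uncertainties through it. Monte Carlo sampling is used here in place of the paper's second-order analytic error propagation, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical code-calculated MDNBR on a 3x3 design over two
# normalized inputs x1 (e.g., power) and x2 (e.g., flow).
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = np.array([1.45, 1.52, 1.28, 1.36, 1.40, 1.48, 1.32, 1.38, 1.44])

def design(X):
    """Second-order response surface terms: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

# Monte Carlo propagation: sample input uncertainties, evaluate the
# fitted surface, and estimate the DNB probability below a limit.
inputs = rng.normal(0.0, [0.1, 0.15], size=(100_000, 2))
mdnbr = design(inputs) @ beta
print("P(MDNBR < 1.30) ~", (mdnbr < 1.30).mean())
```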

  19. Multivariate statistical analysis of atom probe tomography data

    International Nuclear Information System (INIS)

    Parish, Chad M.; Miller, Michael K.

    2010-01-01

    The application of spectrum imaging multivariate statistical analysis methods, specifically principal component analysis (PCA), to atom probe tomography (APT) data has been investigated. The mathematical method of analysis is described and the results for two example datasets are analyzed and presented. The first dataset is from the analysis of a PM 2000 Fe-Cr-Al-Ti steel containing two different ultrafine precipitate populations. PCA properly describes the matrix and precipitate phases in a simple and intuitive manner. A second APT example is from the analysis of an irradiated reactor pressure vessel steel. Fine, nm-scale Cu-enriched precipitates having a core-shell structure were identified and qualitatively described by PCA. Advantages, disadvantages, and future prospects for implementing these data analysis methodologies for APT datasets, particularly with regard to quantitative analysis, are also discussed.

  20. Development of statistical analysis code for meteorological data (W-View)

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    A computer code (W-View: Weather View) was developed to analyze meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactors' (Nuclear Safety Commission, January 28, 1982; revised March 29, 2001). The code provides the statistical meteorological data needed to assess the public dose in cases of normal operation and severe accident for the licensing of nuclear reactor operation. The code is a revision of the original program, which ran on a large office computer, and now enables a personal computer user to analyze meteorological data simply and conveniently and to produce statistical tables and figures of meteorology. (author)

  1. Structural similarities between brain and linguistic data provide evidence of semantic relations in the brain.

    Directory of Open Access Journals (Sweden)

    Colleen E Crangle

    Full Text Available This paper presents a new method of analysis by which structural similarities between brain data and linguistic data can be assessed at the semantic level. It shows how to measure the strength of these structural similarities and so determine the relatively better fit of the brain data with one semantic model over another. The first model is derived from WordNet, a lexical database of English compiled by language experts. The second is given by the corpus-based statistical technique of latent semantic analysis (LSA, which detects relations between words that are latent or hidden in text. The brain data are drawn from experiments in which statements about the geography of Europe were presented auditorily to participants who were asked to determine their truth or falsity while electroencephalographic (EEG recordings were made. The theoretical framework for the analysis of the brain and semantic data derives from axiomatizations of theories such as the theory of differences in utility preference. Using brain-data samples from individual trials time-locked to the presentation of each word, ordinal relations of similarity differences are computed for the brain data and for the linguistic data. In each case those relations that are invariant with respect to the brain and linguistic data, and are correlated with sufficient statistical strength, amount to structural similarities between the brain and linguistic data. Results show that many more statistically significant structural similarities can be found between the brain data and the WordNet-derived data than the LSA-derived data. The work reported here is placed within the context of other recent studies of semantics and the brain. The main contribution of this paper is the new method it presents for the study of semantics and the brain and the focus it permits on networks of relations detected in brain data and represented by a semantic model.
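
    The counting of invariant ordinal relations of similarity differences described above can be sketched as follows, on small random stand-ins for the brain and model dissimilarity matrices. The matrices, the number of items and the absence of a statistical-strength criterion are all simplifications; this is not the study's procedure, only its core comparison.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(29)
n = 6  # e.g., six stimulus words

# Symmetric dissimilarity matrices standing in for brain and model data.
brain = rng.random((n, n))
brain = (brain + brain.T) / 2
model = brain + 0.15 * rng.standard_normal((n, n))
model = (model + model.T) / 2

# For every pair of word pairs, check whether the ordinal relation
# d(a,b) < d(c,d) is invariant across the brain and the model data.
pairs = list(combinations(range(n), 2))
agree = total = 0
for (a, b), (c, d) in combinations(pairs, 2):
    total += 1
    agree += (brain[a, b] < brain[c, d]) == (model[a, b] < model[c, d])
print(f"invariant ordinal relations: {agree}/{total}")
```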

  2. CORSSA: Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.

    2011-12-01

    Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology, especially to those aspects with great impact on public policy, statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal of enhancing the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined, a governing structure was organized, and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles, with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology, understanding seismicity catalogs and their problems, basic techniques for modeling seismicity, and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.

  3. Recent advances in statistical energy analysis

    Science.gov (United States)

    Heron, K. H.

    1992-01-01

    Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary part of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-to-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.

  4. Application of multivariate statistical methods to classify archaeological pottery from Tel-Alramad site, Syria, based on x-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Bakraji, E. H.

    2007-01-01

    Radioisotopic X-ray fluorescence (XRF) analysis was used to determine the elemental composition of 55 archaeological pottery samples through the determination of 17 chemical elements. Fifty-four of them came from the Tel-Alramad site in Katana town, near Damascus, Syria, and one sample came from Brazil. The XRF results were processed using two multivariate statistical methods, cluster analysis and factor analysis, in order to determine similarities and correlations between the selected samples based on their elemental composition. The methodology successfully separated the samples into four distinct chemical groups. (author)
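
    A hedged sketch of the clustering step: hierarchical (Ward) clustering of standardized element concentrations, cut into a fixed number of groups. The compositions below are invented stand-ins for the XRF measurements of the sherds, and Ward linkage is one common choice rather than the method the author necessarily used.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(3)
# Invented concentrations (rows: sherds; columns: e.g. Fe, Rb, Sr in ppm).
group_a = rng.normal([50_000, 120, 300], [2_000, 10, 20], size=(10, 3))
group_b = rng.normal([40_000, 180, 150], [2_000, 10, 20], size=(10, 3))
data = np.vstack([group_a, group_b])

# Ward clustering on standardized concentrations, cut into two clusters.
Z = linkage(zscore(data, axis=0), method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # sherds 1-10 and 11-20 separate cleanly
```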

  5. Statistical analysis of tourism destination competitiveness

    Directory of Open Access Journals (Sweden)

    Attilio Gardini

    2013-05-01

    Full Text Available The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds, and we develop a statistical test of the model on sample data based on Italian tourist destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October to December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to its share of inclusion, because the share of inclusion is determined by the brand image, while the selection of the actual holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points to a risk of market share reduction for the Italian tourism system in the global market. We also find a snowball effect which helps the most popular destinations, mainly in the northern Italian regions.

  6. Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia

    International Nuclear Information System (INIS)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge; Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis

    2015-01-01

    Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA, as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. Ten raters analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA were high, averaging 87.8% and 89.9% for visual analysis and 96.9% and 90.9% for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9% and specificity 100%. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)

  7. Australasian Resuscitation In Sepsis Evaluation trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-10-01

    The Australasian Resuscitation In Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the ED with severe sepsis. In keeping with current practice, and taking into consideration aspects of trial design and reporting specific to non-pharmacologic interventions, this document outlines the principles and methods for analysing and reporting the trial results. The document was prepared prior to completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee, and prior to completion of the two related international studies. The statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. The data collected by the research team as specified in the study protocol, and detailed in the study case report form, were reviewed. Information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation and other related therapies, and other relevant data are described, with appropriate comparisons between groups. The primary, secondary and tertiary outcomes for the study are defined, with a description of the planned statistical analyses. A statistical analysis plan was developed, along with a trial profile, mock-up tables and figures. A plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies, along with adverse events, is described. The primary, secondary and tertiary outcomes are described, along with identification of subgroups to be analysed. A statistical analysis plan for the ARISE study has been developed, and is available in the public domain, prior to the completion of recruitment into the study.

  8. Measuring the Success of an Academic Development Programme: A Statistical Analysis

    Science.gov (United States)

    Smith, L. C.

    2009-01-01

    This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…

  9. Measuring user similarity using electric circuit analysis: application to collaborative filtering.

    Science.gov (United States)

    Yang, Joonhyuk; Kim, Jinwook; Kim, Wonjoon; Kim, Young Hwan

    2012-01-01

    We propose a new technique for measuring user similarity in collaborative filtering using electric circuit analysis. Electric circuit analysis is used to measure the potential differences between nodes on an electric circuit. In this paper, by applying this method to transaction networks comprising users and items, i.e., the user-item matrix, and by using the full information about the relationship structure of users from the perspective of item adoption, we overcome the limitations of one-to-one similarity calculation approaches, such as the Pearson correlation, Tanimoto coefficient, and Hamming distance, in collaborative filtering. We found that electric circuit analysis can be successfully incorporated into recommender systems and has the potential to significantly enhance predictability, especially when combined with user-based collaborative filtering. We also propose four types of hybrid algorithms that combine the Pearson correlation method and electric circuit analysis. One of the algorithms exceeds the performance of traditional collaborative filtering by up to 37.5%. This work opens new opportunities for interdisciplinary research between physics and computer science and the development of new recommendation systems.
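
    One standard way to realize the circuit idea, assuming the paper's formulation is close to effective resistance on the user-item graph: build the bipartite adjacency matrix, form the graph Laplacian, and read potential differences from its pseudoinverse. The adoption matrix below is invented, and this is a generic sketch rather than the authors' exact algorithm.

```python
import numpy as np

# Hypothetical binary user-item adoption matrix (4 users x 5 items).
M = np.array([[1, 1, 0, 0, 1],
              [1, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 1, 0, 1, 1]], dtype=float)

n_u, n_i = M.shape
# Adjacency matrix of the bipartite user-item graph, users first.
A = np.zeros((n_u + n_i, n_u + n_i))
A[:n_u, n_u:] = M
A[n_u:, :n_u] = M.T
L = np.diag(A.sum(axis=1)) - A  # graph Laplacian
Lp = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse

def resistance(u, v):
    """Effective resistance: potential difference for a unit current."""
    return Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]

# Smaller resistance = more tightly connected = more similar users.
print([[round(resistance(u, v), 3) for v in range(n_u)] for u in range(n_u)])
```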

  11. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  12. Study of relationship between MUF correlation and detection sensitivity of statistical analysis

    International Nuclear Information System (INIS)

    Tamura, Toshiaki; Ihara, Hitoshi; Yamamoto, Yoichi; Ikawa, Koji

    1989-11-01

    Various kinds of statistical analysis have been proposed for NRTA (Near Real Time Materials Accountancy), which was devised to satisfy the timeliness goal among the detection goals of the IAEA. Different statistical analysis results can be expected depending on whether rigorous error propagation (with MUF correlation) or simplified error propagation (without MUF correlation) is considered. Therefore, measurement simulation and decision analysis were performed using a flow simulation of an 800 MTHM/Y model reprocessing plant, and the relationship between MUF correlation and the detection sensitivity and false-alarm rate of the statistical analysis was studied. The specific characteristics of material accountancy for the 800 MTHM/Y model reprocessing plant were captured by this simulation. It also became clear that MUF correlation decreases not only the false-alarm rate but also the detection probability for a protracted loss in the case of the CUMUF test and Page's test applied to NRTA. (author)
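
    Page's test mentioned above is a one-sided CUSUM. The sketch below applies it to a hypothetical standardized MUF sequence with a protracted loss injected halfway through; the reference value k and threshold h are textbook choices, not the study's tuned parameters.

```python
import numpy as np

def pages_test(muf, k=0.5, h=4.0):
    """One-sided Page (CUSUM) test on standardized MUF values. k is the
    reference value (allowance) and h the decision threshold, both in
    units of the MUF standard deviation."""
    s, alarms = 0.0, []
    for x in muf:
        s = max(0.0, s + x - k)  # accumulate evidence of a loss
        alarms.append(s > h)
    return np.array(alarms)

rng = np.random.default_rng(7)
muf = rng.normal(0.0, 1.0, size=30)  # standardized MUF per period
muf[15:] += 1.5                      # protracted loss from period 16 onward
alarms = pages_test(muf)
print("first alarm at period",
      int(np.argmax(alarms)) + 1 if alarms.any() else None)
```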

  13. Optimum welding condition of 2017 aluminum similar alloy friction welded joints

    Energy Technology Data Exchange (ETDEWEB)

    Tsujino R.; Ochi, H. [Osaka Inst. of Tech., Osaka (Japan); Morikawa, K. [Osaka Sangyo Univ., Osaka (Japan); Yamaguchi, H.; Ogawa, K. [Osaka Prefecture Univ., Osaka (Japan); Fujishiro, Y.; Yoshida, M. [Sumitomo Metal Technology Ltd., Hyogo (Japan)

    2002-07-01

    The usefulness of statistical analysis for judging the optimization of friction welding conditions was investigated using the similar-alloy combination of 2017 aluminum, where many samples were friction welded under fixed conditions and analyzed statistically. In general, selection of the optimum friction welding conditions for similar materials is easy. However, this was not always the case for 2017 aluminum alloy. For optimum friction welding of this material, it is necessary to apply a relatively large upset pressure to obtain high friction heating. Joint efficiencies obtained under the optimum friction welding conditions showed a large shape parameter (m value) of the Weibull distribution, as was also reported previously for dissimilar materials. The m value calculated from a small number of data points can substitute for the m value based on 30 data points. Therefore, the m value is useful in practice on the factory floor for assessing the suitability of the friction welding conditions. (orig.)
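
    A brief sketch of the Weibull analysis: fit the shape parameter m to synthetic joint-efficiency data and compare the estimate from a small subsample with the estimate from all 30 points, mirroring the abstract's observation. The data, the true shape and scale, and the subsample size are all invented.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(11)
# Hypothetical joint efficiencies (%) for welds made under one condition.
eff = weibull_min.rvs(c=20.0, scale=95.0, size=30, random_state=rng)

# Fit with location fixed at zero; c is the Weibull shape parameter m.
m, loc, scale = weibull_min.fit(eff, floc=0)
print(f"shape m from 30 data ~ {m:.1f}, scale ~ {scale:.1f}")

# The abstract's point: m from a small subsample tracks m from all 30 data.
m_small, _, _ = weibull_min.fit(eff[:10], floc=0)
print(f"shape m from 10 data ~ {m_small:.1f}")
```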

  14. Statistical considerations in the development of injury risk functions.

    Science.gov (United States)

    McMurry, Timothy L; Poplin, Gerald S

    2015-01-01

    We address 4 frequently misunderstood and important statistical ideas in the construction of injury risk functions. These include the similarities of survival analysis and logistic regression, the correct scale on which to construct pointwise confidence intervals for injury risk, the ability to discern which form of injury risk function is optimal, and the handling of repeated tests on the same subject. The statistical models are explored through simulation and examination of the underlying mathematics. We provide recommendations for the statistically valid construction and correct interpretation of single-predictor injury risk functions. This article aims to provide useful and understandable statistical guidance to improve the practice in constructing injury risk functions.

  15. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    Science.gov (United States)

    2017-09-01

    application of statistical inference. Even when human forecasters leverage their professional experience, which is often gained through long periods of... application throughout statistics and Bayesian data analysis. The multivariate form of the normal distribution N₂(μ, Σ) (e.g., Figure 12) is similarly analytically... data (i.e., no systematic manipulations with analytical functions), it is common in the statistical literature to apply mathematical transformations

  16. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  17. Point defect characterization in HAADF-STEM images using multivariate statistical analysis

    International Nuclear Information System (INIS)

    Sarahan, Michael C.; Chi, Miaofang; Masiel, Daniel J.; Browning, Nigel D.

    2011-01-01

    Quantitative analysis of point defects is demonstrated through the use of multivariate statistical analysis. The analysis consists of principal component analysis for dimensionality estimation and reduction, followed by independent component analysis to obtain physically meaningful, statistically independent factor images. Results from these analyses are presented in the form of factor images and scores. Factor images show characteristic intensity variations corresponding to physical structure changes, while scores relate how much of those variations is present in the original data. The application of the technique is demonstrated on a set of experimental images of dislocation cores along a low-angle tilt grain boundary in strontium titanate. A relationship between chemical composition and lattice strain is highlighted in the analysis results, with picometer-scale shifts in several columns measurable from compositional changes in a separate column. Research highlights: multivariate analysis of HAADF-STEM images; distinct structural variations among SrTiO₃ dislocation cores; picometer atomic column shifts correlated with atomic column population changes.
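
    The PCA-then-ICA pipeline can be sketched on a toy image stack. The latent "structure maps" below are synthetic stand-ins for HAADF-STEM contrast, and the scikit-learn calls are a generic implementation of the same idea, not the authors' code.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(5)
h = w = 64
# Two synthetic latent structure maps (e.g., matrix vs. defect contrast).
s1 = np.outer(np.hanning(h), np.hanning(w)).ravel()
s2 = np.roll(s1, 700)
# 40 noisy frames, each a random mixture of the two maps.
frames = (rng.random((40, 2)) @ np.vstack([s1, s2])
          + 0.05 * rng.standard_normal((40, h * w)))

# PCA scree: estimate how many components carry signal (dimensionality).
print(PCA(n_components=5).fit(frames).explained_variance_ratio_.round(3))

# ICA (whitened internally) recovers statistically independent factor
# images (columns of mixing_) and per-frame scores (rows of the sources).
ica = FastICA(n_components=2, random_state=0)
scores = ica.fit_transform(frames)              # (40, 2)
factor_images = ica.mixing_.T.reshape(2, h, w)  # two 64x64 factor images
print(scores.shape, factor_images.shape)
```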

  18. Observations and analysis of self-similar branching topology in glacier networks

    Science.gov (United States)

    Bahr, D.B.; Peckham, S.D.

    1996-01-01

    Glaciers, like rivers, have a branching structure which can be characterized by topological trees or networks. Probability distributions of various topological quantities in the networks are shown to satisfy the criterion for self-similarity, a symmetry structure which might be used to simplify future models of glacier dynamics. Two analytical methods of describing river networks, Shreve's random topology model and deterministic self-similar trees, are applied to the six glaciers of south central Alaska studied in this analysis. Self-similar trees capture the topological behavior observed for all of the glaciers, and most of the networks are also reasonably approximated by Shreve's theory. Copyright 1996 by the American Geophysical Union.

  19. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    International Nuclear Information System (INIS)

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required

  20. FADTTS: functional analysis of diffusion tensor tract statistics.

    Science.gov (United States)

    Zhu, Hongtu; Kong, Linglong; Li, Runze; Styner, Martin; Gerig, Guido; Lin, Weili; Gilmore, John H

    2011-06-01

    The aim of this paper is to present a functional analysis of a diffusion tensor tract statistics (FADTTS) pipeline for delineating the association between multiple diffusion properties along major white matter fiber bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these white matter tract properties in various diffusion tensor imaging studies. The FADTTS integrates five statistical tools: (i) a multivariate varying coefficient model for allowing the varying coefficient functions in terms of arc length to characterize the varying associations between fiber bundle diffusion properties and a set of covariates, (ii) a weighted least squares estimation of the varying coefficient functions, (iii) a functional principal component analysis to delineate the structure of the variability in fiber bundle diffusion properties, (iv) a global test statistic to test hypotheses of interest, and (v) a simultaneous confidence band to quantify the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FADTTS. We apply FADTTS to investigate the development of white matter diffusivities along the splenium of the corpus callosum tract and the right internal capsule tract in a clinical study of neurodevelopment. FADTTS can be used to facilitate the understanding of normal brain development, the neural bases of neuropsychiatric disorders, and the joint effects of environmental and genetic factors on white matter fiber bundles. The advantages of FADTTS compared with the other existing approaches are that they are capable of modeling the structured inter-subject variability, testing the joint effects, and constructing their simultaneous confidence bands. However, FADTTS is not crucial for estimation and reduces to the functional analysis method for the single measure. Copyright © 2011 Elsevier Inc. All rights reserved.
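
    The functional principal component analysis ingredient of such pipelines (item (iii) above) can be sketched with plain SVD on synthetic along-tract profiles. This shows only the FPCA step, not the varying-coefficient model or the inference machinery, and the profiles are invented.

```python
import numpy as np

rng = np.random.default_rng(23)
s = np.linspace(0, 1, 100)  # arc length along the tract

# Synthetic diffusion profiles: 30 subjects x 100 arc-length points,
# with one dominant mode of variation plus measurement noise.
curves = (1.0
          + 0.1 * rng.standard_normal((30, 1)) * np.sin(np.pi * s)
          + 0.02 * rng.standard_normal((30, 100)))

# Functional PCA via SVD of the centred curves: rows of Vt are the
# principal modes of variation, S**2 their (unnormalised) variances.
centred = curves - curves.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)
print((S**2 / (S**2).sum())[:3].round(3))  # the leading mode dominates
```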

  1. Statistical process control methods allow the analysis and improvement of anesthesia care.

    Science.gov (United States)

    Fasting, Sigurd; Gisvold, Sven E

    2003-10-01

    Quality aspects of the anesthetic process are reflected in the rate of intraoperative adverse events. The purpose of this report is to illustrate how the quality of the anesthesia process can be analyzed using statistical process control methods, and exemplify how this analysis can be used for quality improvement. We prospectively recorded anesthesia-related data from all anesthetics for five years. The data included intraoperative adverse events, which were graded into four levels, according to severity. We selected four adverse events, representing important quality and safety aspects, for statistical process control analysis. These were: inadequate regional anesthesia, difficult emergence from general anesthesia, intubation difficulties and drug errors. We analyzed the underlying process using 'p-charts' for statistical process control. In 65,170 anesthetics we recorded adverse events in 18.3%; mostly of lesser severity. Control charts were used to define statistically the predictable normal variation in problem rate, and then used as a basis for analysis of the selected problems with the following results: Inadequate plexus anesthesia: stable process, but unacceptably high failure rate; Difficult emergence: unstable process, because of quality improvement efforts; Intubation difficulties: stable process, rate acceptable; Medication errors: methodology not suited because of low rate of errors. By applying statistical process control methods to the analysis of adverse events, we have exemplified how this allows us to determine if a process is stable, whether an intervention is required, and if quality improvement efforts have the desired effect.
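
    A p-chart of the kind described reduces to a few lines: the centre line is the pooled event proportion and the control limits are three binomial standard errors around it. The counts below are invented, and real charts must also handle unequal subgroup sizes.

```python
import numpy as np

def p_chart_limits(defects, n):
    """Three-sigma control limits for a p-chart with equal subgroup size n.
    defects: counts of anesthetics with the adverse event per period."""
    p = np.asarray(defects, dtype=float) / n
    pbar = p.mean()
    sigma = np.sqrt(pbar * (1 - pbar) / n)
    return p, pbar, max(0.0, pbar - 3 * sigma), pbar + 3 * sigma

# Hypothetical monthly counts of difficult emergence per 1000 anesthetics.
counts = [14, 11, 17, 9, 13, 15, 10, 26, 12, 16]
p, pbar, lcl, ucl = p_chart_limits(counts, n=1000)
signals = np.where((p < lcl) | (p > ucl))[0] + 1
print(f"centre={pbar:.4f}, limits=({lcl:.4f}, {ucl:.4f}), "
      f"signals in months {signals}")
```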

  2. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Science.gov (United States)

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
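
    The gene-sampling test with and without the absolute statistic can be sketched directly: score a gene set by the mean (signed or absolute) gene statistic and compare it against randomly sampled gene sets of the same size. The planted bidirectional signal below shows why the absolute version can detect what the signed version misses; all numbers are invented, and this is an illustration rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(13)
n_genes, set_size = 5000, 50
t = rng.standard_normal(n_genes)  # per-gene statistics (toy data)
gene_set = rng.choice(n_genes, set_size, replace=False)
t[gene_set] += rng.choice([-2.0, 2.0], set_size)  # bidirectional change

def gene_sampling_p(stat, set_idx, score, n_perm=5000):
    """Gene-sampling test: compare the set's mean score with the scores
    of randomly sampled gene sets of the same size."""
    observed = score(stat[set_idx]).mean()
    null = np.array([
        score(stat[rng.choice(stat.size, set_idx.size, replace=False)]).mean()
        for _ in range(n_perm)])
    return (np.sum(null >= observed) + 1) / (n_perm + 1)

print("signed:  ", gene_sampling_p(t, gene_set, score=lambda x: x))
print("absolute:", gene_sampling_p(t, gene_set, score=np.abs))
```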

  3. An improved method for statistical analysis of raw accelerator mass spectrometry data

    International Nuclear Information System (INIS)

    Gutjahr, A.; Phillips, F.; Kubik, P.W.; Elmore, D.

    1987-01-01

    Hierarchical statistical analysis is an appropriate method for statistical treatment of raw accelerator mass spectrometry (AMS) data. Using Monte Carlo simulations we show that this method yields more accurate estimates of isotope ratios and analytical uncertainty than the generally used propagation of errors approach. The hierarchical analysis is also useful in design of experiments because it can be used to identify sources of variability. 8 refs., 2 figs

  4. Statistical Image Analysis of Tomograms with Application to Fibre Geometry Characterisation

    DEFF Research Database (Denmark)

    Emerson, Monica Jane

    The goal of this thesis is to develop statistical image analysis tools to characterise the micro-structure of complex materials used in energy technologies, with a strong focus on fibre composites. These quantification tools are based on extracting geometrical parameters defining structures from 2D...... with high resolution both in space and time to observe fast micro-structural changes. This thesis demonstrates that statistical image analysis combined with X-ray CT opens up numerous possibilities for understanding the behaviour of fibre composites under real life conditions. Besides enabling...

  5. The art of data analysis how to answer almost any question using basic statistics

    CERN Document Server

    Jarman, Kristin H

    2013-01-01

    A friendly and accessible approach to applying statistics in the real world. With an emphasis on critical thinking, The Art of Data Analysis: How to Answer Almost Any Question Using Basic Statistics presents fun and unique examples, guides readers through the entire data collection and analysis process, and introduces basic statistical concepts along the way. Leaving proofs and complicated mathematics behind, the author portrays the more engaging side of statistics and emphasizes its role as a problem-solving tool. In addition, light-hearted case studies

  6. Markov model plus k-word distributions: a synergy that produces novel statistical measures for sequence comparison.

    Science.gov (United States)

    Dai, Qi; Yang, Yanchun; Wang, Tianming

    2008-10-15

    Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all of these ideas for sequence comparison use information on the k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those based on alignment and alignment-free methods. We grouped our experiments into two sets. The first, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second aims at assessing how well our statistical measures serve for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.
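
    The general idea of combining k-word counts with Markov-model expectations can be sketched as follows: deviations of observed k-word counts from the maximal-order Markov estimate are compared across sequences by cosine distance. This is a generic construction in the same spirit, not the definitions of wre.k.r or S2.k.r, and the test sequences are invented.

```python
from itertools import product
import numpy as np

def kmer_counts(seq, k):
    counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return counts

def markov_deviation(seq, k):
    """Observed k-word counts minus expectations under an order-(k-2)
    Markov model: E[w] = count(prefix) * count(suffix) / count(overlap)."""
    ck, ck1, ck2 = (kmer_counts(seq, j) for j in (k, k - 1, k - 2))
    dev = []
    for w, obs in sorted(ck.items()):
        denom = ck2[w[1:-1]]
        exp = ck1[w[:-1]] * ck1[w[1:]] / denom if denom else 0.0
        dev.append(obs - exp)
    return np.array(dev)

def distance(s1, s2, k=3):
    d1, d2 = markov_deviation(s1, k), markov_deviation(s2, k)
    return 1 - d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12)

print(distance("ACGTACGTAGCTAGCTA", "ACGTACGTAGCTAGGTA"))  # similar pair
print(distance("ACGTACGTAGCTAGCTA", "TTTTGGGGCCCCAAAA"))   # dissimilar pair
```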

  7. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, the focus is on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, and methods for data preprocessing are covered.

  8. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    International Nuclear Information System (INIS)

    Pham, Bihn T.; Einerson, Jeffrey J.

    2010-01-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.

  9. The statistical analysis techniques to support the NGNP fuel performance experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.

    2013-10-15

    This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.

  10. Statistical Challenges of Big Data Analysis in Medicine

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2015-01-01

    Roč. 3, č. 1 (2015), s. 24-27 ISSN 1805-8698 R&D Projects: GA ČR GA13-23940S Grant - others:CESNET Development Fund(CZ) 494/2013 Institutional support: RVO:67985807 Keywords : big data * variable selection * classification * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research http://www.ijbh.org/ijbh2015-1.pdf

  11. Statistical Analysis of Hypercalcaemia Data related to Transferability

    DEFF Research Database (Denmark)

    Frølich, Anne; Nielsen, Bo Friis

    2005-01-01

    In this report we describe statistical analyses related to a study of hypercalcaemia carried out in the Copenhagen area in the ten-year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters at conferences during the late eighties and early nineties. In this report we give a more detailed description of many of the analyses and provide some new results, primarily from simultaneous studies of several databases.

  12. Statistical analysis of questionnaires a unified approach based on R and Stata

    CERN Document Server

    Bartolucci, Francesco; Gnaldi, Michela

    2015-01-01

    Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing.The book covers the foundations of classical test theory (CTT), test reliability, va

  13. Reducing bias in the analysis of counting statistics data

    International Nuclear Information System (INIS)

    Hammersley, A.P.; Antoniadis, A.

    1997-01-01

    In the analysis of counting statistics data it is common practice to estimate the variance of the measured data points as the data points themselves. This practice introduces a bias into the results of further analysis which may be significant, and under certain circumstances lead to false conclusions. In the case of normal weighted least squares fitting this bias is quantified and methods to avoid it are proposed. (orig.)
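
    The bias the abstract refers to is easy to reproduce: weighting a least-squares fit by the reciprocal of the measured counts (rather than of the model prediction) pulls the estimate low, because downward fluctuations receive larger weights. The simulation below fits a constant to Poisson data; all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(17)
mu, n_bins, n_rep = 25.0, 100, 2000
naive, unweighted = [], []

for _ in range(n_rep):
    y = rng.poisson(mu, n_bins).astype(float)
    w = 1.0 / np.clip(y, 1.0, None)  # variance estimated by the data itself
    naive.append((w * y).sum() / w.sum())  # weighted mean: biased low
    unweighted.append(y.mean())            # unbiased for a constant model

# The data-weighted estimate sits about one count below the true mean.
print(f"data-weighted: {np.mean(naive):.2f}   "
      f"unweighted: {np.mean(unweighted):.2f}   true: {mu}")
```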

  14. Dimensional analysis and self-similarity methods for engineers and scientists

    CERN Document Server

    Zohuri, Bahman

    2015-01-01

    This ground-breaking reference provides an overview of key concepts in dimensional analysis, and then pushes well beyond traditional applications in fluid mechanics to demonstrate how powerful this tool can be in solving complex problems across many diverse fields. Of particular interest is the book's coverage of  dimensional analysis and self-similarity methods in nuclear and energy engineering. Numerous practical examples of dimensional problems are presented throughout, allowing readers to link the book's theoretical explanations and step-by-step mathematical solutions to practical impleme

  15. Immunoinformatics and Similarity Analysis of House Dust Mite Tropomyosin

    Directory of Open Access Journals (Sweden)

    Mohammad Mehdi Ranjbar

    2015-10-01

    Full Text Available Background: Dermatophagoides farinae and Dermatophagoides pteronyssinus are house dust mites (HDM) that cause severe asthma and allergic symptoms. The tropomyosin protein plays an important role in these immune and allergic reactions to HDMs. Here, tropomyosin from Dermatophagoides spp. was comprehensively screened in silico for its allergenicity, antigenicity and similarity/conservation. Materials and Methods: The amino acid sequences of tropomyosin from D. farinae, D. pteronyssinus and other mites were retrieved. We performed alignments, evaluated conserved/variable regions along the sequences, constructed their phylogenetic tree and estimated overall mean distances. This was followed by prediction of linear B-cell epitopes based on different approaches, and by in silico evaluation of the allergenicity of IgE epitopes (by SVMc, IgE epitope, ARPs BLAST, MAST and a hybrid method). Finally, a comparative analysis of the results of the different approaches was made. Results: Alignment results revealed near-complete identity among D. farinae and D. pteronyssinus members, and close similarity among Dermatophagoides spp. Most of the variation among the mites' tropomyosins was located at approximately amino acids 23 to 80, 108 to 120, 142 to 153 and 220 to 230. The topology of the tree showed close relationships among the mites in tropomyosin protein sequence, with the sequences of D. farinae, D. pteronyssinus and Psoroptes ovis being more similar to each other and clustered together. Dermanyssus gallinae (AC: Q2WBI0) is less closely related to the other mites, being located on a separate branch. Hydrophilicity and flexibility plots revealed that many parts of this protein have the potential to be hydrophilic and flexible. Surface accessibility analysis identified 7 different epitopes. Beta-turns occur with high probability in the middle part of this protein and at its two terminals. Analysis by the Kolaskar and Tongaonkar method identified 11 immunogenic epitopes between amino acids 7-16. From

  16. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation, (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect, are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with the established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and as a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs (SCDs) that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments, and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between-case variance to total variance (between-case plus within-case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed- and random-effects average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  18. Bayesian statistics applied to neutron activation data for reactor flux spectrum analysis

    International Nuclear Information System (INIS)

    Chiesa, Davide; Previtali, Ezio; Sisti, Monica

    2014-01-01

    Highlights: • Bayesian statistics to analyze the neutron flux spectrum from activation data. • Rigorous statistical approach for accurate evaluation of the neutron flux groups. • Cross section and activation data uncertainties included for the problem solution. • Flexible methodology applied to analyze different nuclear reactor flux spectra. • The results are in good agreement with the MCNP simulations of neutron fluxes. - Abstract: In this paper, we present a statistical method, based on Bayesian statistics, to analyze the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation experiment performed at the TRIGA Mark II reactor of Pavia University (Italy) in four irradiation positions characterized by different neutron spectra. In order to evaluate the neutron flux spectrum, subdivided into energy groups, a system of linear equations, containing the group effective cross sections and the activation rate data, has to be solved. However, since the system's coefficients are experimental data affected by uncertainties, a rigorous statistical approach is fundamental for an accurate evaluation of the neutron flux groups. For this purpose, we applied Bayesian statistical analysis, which allows one to include the uncertainties of the coefficients and the a priori information about the neutron flux. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, was used to define the problem's statistical model and solve it. The first analysis involved the determination of the thermal, resonance-intermediate and fast flux components, and the dependence of the results on the choice of prior distribution was investigated to confirm the reliability of the Bayesian analysis. After that, the main resonances of the activation cross sections were analyzed to implement multi-group models with finer energy subdivisions that would allow one to determine the
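
    A minimal sketch of the Bayesian unfolding idea: a two-group flux, a linear activation model with a positivity prior, and a random-walk Metropolis sampler in place of the hierarchical MCMC machinery the paper used. All cross sections, rates and uncertainties below are invented.

```python
import numpy as np

rng = np.random.default_rng(19)
# Hypothetical 2-group problem: activation rates r = A @ phi + noise,
# where A holds group effective cross sections (arbitrary units).
A = np.array([[5.0, 0.2],
              [0.8, 1.5],
              [0.1, 2.5]])
phi_true = np.array([3.0, 1.0])
sigma = 0.2
r = A @ phi_true + rng.normal(0, sigma, 3)

def log_post(phi):
    if np.any(phi <= 0):  # positivity prior on the flux groups
        return -np.inf
    resid = r - A @ phi
    return -0.5 * np.sum(resid**2) / sigma**2  # Gaussian likelihood

# Random-walk Metropolis sampler.
chain, phi, lp = [], np.array([1.0, 1.0]), None
lp = log_post(phi)
for _ in range(20_000):
    prop = phi + rng.normal(0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        phi, lp = prop, lp_prop
    chain.append(phi)

post = np.array(chain[5000:])  # discard burn-in
print("posterior mean:", post.mean(axis=0), " std:", post.std(axis=0))
```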

  19. Reactor noise analysis by statistical pattern recognition methods

    International Nuclear Information System (INIS)

    Howington, L.C.; Gonzalez, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis is presented. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, updating, and data compacting capabilities. System design emphasizes control of the false-alarm rate. Its abilities to learn normal patterns, to recognize deviations from these patterns, and to reduce the dimensionality of data with minimum error were evaluated by experiments at the Oak Ridge National Laboratory (ORNL) High-Flux Isotope Reactor. Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the pattern recognition system

  20. Comparative Analysis of Networks of Phonologically Similar Words in English and Spanish

    Directory of Open Access Journals (Sweden)

    Michael S. Vitevitch

    2010-03-01

    Full Text Available Previous network analyses of several languages revealed a unique set of structural characteristics. One of these characteristics—the presence of many smaller components (referred to as islands—was further examined with a comparative analysis of the island constituents. The results showed that Spanish words in the islands tended to be phonologically and semantically similar to each other, but English words in the islands tended only to be phonologically similar to each other. The results of this analysis yielded hypotheses about language processing that can be tested with psycholinguistic experiments, and offer insight into cross-language differences in processing that have been previously observed.

  1. Data analysis using the Gnu R system for statistical computation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, James; /Fermilab

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.

  2. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    International Nuclear Information System (INIS)

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and for the reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by the statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search for a realistic value of the SDL is the objective of statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)
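
    The proprietary STDP itself is not reproduced here, but a common way to obtain a 95/95 limit in a full statistical approach is the nonparametric tolerance bound of Wilks. The sketch below, with hypothetical DNBR samples, shows why 59 Monte Carlo runs suffice for a first-order 95/95 statement; it is a generic illustration, not TEE's method.

```python
import numpy as np

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest N such that the sample minimum is a one-sided lower
    tolerance bound: 1 - coverage**N >= confidence (first-order Wilks)."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n  # 59 for 95/95

rng = np.random.default_rng(1)
n = wilks_sample_size()                  # -> 59
# Hypothetical Monte Carlo runs: each propagates input uncertainties
# through the sub-channel code and returns the core minimum DNBR.
min_dnbr_samples = rng.normal(loc=2.0, scale=0.15, size=n)

# The lowest of the 59 runs is a 95/95 lower bound on the minimum DNBR.
sdl_estimate = min_dnbr_samples.min()
print(n, round(sdl_estimate, 3))
```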

  3. Analytical and statistical analysis of elemental composition of lichens

    International Nuclear Information System (INIS)

    Calvelo, S.; Baccala, N.; Bubach, D.; Arribere, M.A.; Riberio Guevara, S.

    1997-01-01

    The elemental composition of lichens from remote southern South America regions has been studied with analytical and statistical techniques to determine if the values obtained reflect species, growth forms or habitat characteristics. The enrichment factors are calculated, discriminated by species and collection site, and compared with data available in the literature. The elemental concentrations are standardized and compared for different species. The information was statistically processed: a cluster analysis was performed using the first three principal axes of the PCA, and the three groups formed are presented. Their relationships with the species, collection sites and lichen growth forms are interpreted. (author)
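
    As a generic illustration of the processing chain described above (standardization, PCA, then clustering on the first three principal axes), the following sketch uses synthetic concentration data in place of the lichen measurements:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
# Hypothetical element concentrations (rows: lichen samples, columns:
# elements) standing in for the standardized lichen data.
X = rng.lognormal(mean=0.0, sigma=1.0, size=(30, 8))

# Standardize the concentrations, then keep the first three principal axes.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))

# Agglomerative clustering on the PCA scores; cut the tree at 3 groups.
groups = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(groups)
```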

  4. The Fusion of Financial Analysis and Seismology: Statistical Methods from Financial Market Analysis Applied to Earthquake Data

    Science.gov (United States)

    Ohyanagi, S.; Dileonardo, C.

    2013-12-01

    As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located offshore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spreading of the candlesticks prior to the occurrence of events greater than Mw 6.0. A second pattern is a convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and agree with the characteristics of both the candlestick chart and Bollinger Band analysis. These results show there is a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods. The results of this study support the appropriateness of applying these financial analysis methods to the analysis of earthquake occurrence.
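
    Bollinger Bands are straightforward to compute: a moving mean plus and minus twice a moving standard deviation. The sketch below applies them to a hypothetical event series; the 20-sample window and the synthetic data are assumptions for illustration, not the catalogue used in the study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# Hypothetical monthly event counts standing in for the Sanriku
# catalogue; a real analysis would aggregate the catalogue itself.
series = pd.Series(rng.poisson(lam=5, size=120).astype(float))

window = 20                       # 20-period moving window, as in finance
middle = series.rolling(window).mean()
sd = series.rolling(window).std()
upper, lower = middle + 2 * sd, middle - 2 * sd

# Band convergence ("squeeze") signals a change in trend; the bandwidth
# is a convenient scalar to monitor for it.
bandwidth = (upper - lower) / middle
print(bandwidth.dropna().tail())
```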

  5. {sup 15}N/{sup 14}N isotopic ratio and statistical analysis: an efficient way of linking seized Ecstasy tablets

    Energy Technology Data Exchange (ETDEWEB)

    Palhol, Fabien; Lamoureux, Catherine; Chabrillat, Martine; Naulet, Norbert

    2004-05-10

    In this study, the {sup 15}N/{sup 14}N isotopic ratios of 106 samples of 3,4-methylenedioxymethamphetamine (MDMA) extracted from Ecstasy tablets are presented. These ratios, measured using gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS), show a large discrimination between samples with a range of {delta}{sup 15}N values between -17 and +19‰, depending on the precursors and the method used in clandestine laboratories. Thus, {delta}{sup 15}N values can be used in a statistical analysis carried out in order to link Ecstasy tablets prepared with the same precursors and synthetic pathway. The similarity index obtained after principal component analysis and hierarchical cluster analysis appears to be an efficient way to group tablets seized in different places.

  6. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-frictional force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces in the system stochastic stick-slip oscillations. The system response is composed of a random sequence alternating stick- and slip-modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables, such as the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to make a parametric analysis of the statistical model of the stick-slip process.

  7. Introduction to applied statistical signal analysis guide to biomedical and electrical engineering applications

    CERN Document Server

    Shiavi, Richard

    2007-01-01

    Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this prerequisite knowledge, the reader will coast through the practical introduction and move on to signal analysis techniques, commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical

  8. Visual and statistical analysis of {sup 18}F-FDG PET in primary progressive aphasia

    Energy Technology Data Exchange (ETDEWEB)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge [Hospital Clinico San Carlos, Department of Neurology, Madrid (Spain); Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis [San Carlos Health Research Institute (IdISSC) Complutense University of Madrid, Department of Nuclear Medicine, Hospital Clinico San Carlos, Madrid (Spain)

    2015-05-01

    Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. There were 10 raters who analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA were high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
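
    Fleiss' kappa, the inter-rater statistic quoted above, can be computed directly from a subject-by-rater table; the sketch below uses randomly generated ratings purely as a placeholder for the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(4)
# Hypothetical ratings: 44 scans x 5 raters, five-category confidence scale.
ratings = rng.integers(low=0, high=5, size=(44, 5))

# aggregate_raters converts subject-by-rater codes into the subject-by-
# category count table that fleiss_kappa expects.
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa:", round(fleiss_kappa(table, method="fleiss"), 3))
```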

  9. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  10. Statistical analysis of extreme values from insurance, finance, hydrology and other fields

    CERN Document Server

    Reiss, Rolf-Dieter

    1997-01-01

    The statistical analysis of extreme data is important for various disciplines, including hydrology, insurance, finance, engineering and environmental sciences. This book provides a self-contained introduction to the parametric modeling, exploratory analysis and statistical inference for extreme values. The entire text of this third edition has been thoroughly updated and rearranged to meet the new requirements. Additional sections and chapters, elaborated on more than 100 pages, are particularly concerned with topics like dependencies, the conditional analysis and the multivariate modeling of extreme data. Parts I–III about the basic extreme value methodology remain largely unchanged, yet notable are, e.g., the new sections about "An Overview of Reduced-Bias Estimation" (co-authored by M.I. Gomes), "The Spectral Decomposition Methodology", and "About Tail Independence" (co-authored by M. Frick), and the new chapter about "Extreme Value Statistics of Dependent Random Variables" (co-authored ...
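
    As a small taste of the parametric modeling the book covers, the following sketch fits a generalized extreme value distribution to hypothetical annual maxima and derives a 100-year return level; the data and parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)
# Hypothetical annual maxima (e.g. yearly peak losses or flood levels).
annual_maxima = rng.gumbel(loc=10.0, scale=2.0, size=60)

# Fit the generalized extreme value distribution by maximum likelihood.
shape, loc, scale = genextreme.fit(annual_maxima)

# 100-year return level: the quantile exceeded with probability 1/100.
return_level = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(round(return_level, 2))
```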

  11. Power flow as a complement to statistical energy analysis and finite element analysis

    Science.gov (United States)

    Cuschieri, J. M.

    1987-01-01

    Present methods of analysis of the structural response and the structure-borne transmission of vibrational energy use either finite element (FE) techniques or statistical energy analysis (SEA) methods. The FE methods are a very useful tool at low frequencies where the number of resonances involved in the analysis is rather small. On the other hand, SEA methods can predict with acceptable accuracy the response and energy transmission between coupled structures at relatively high frequencies where the structural modal density is high and a statistical approach is the appropriate solution. In the mid-frequency range, a relatively large number of resonances exist, which makes the finite element method too costly; on the other hand, SEA methods can only predict the response in an averaged form. In this mid-frequency range a possible alternative is to use power flow techniques, where the input and flow of vibrational energy to excited and coupled structural components can be expressed in terms of input and transfer mobilities. The power flow technique can be extended from low to high frequencies, and it can be integrated with established FE models at low frequencies and SEA models at high frequencies to provide a verification of the method. This method of structural analysis using power flow and mobility methods, and its integration with SEA and FE analysis, is applied to the case of two thin beams joined at right angles.

  12. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  13. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  14. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
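
    The shift from testing to estimation with quantified uncertainty is easy to demonstrate with a conjugate model. The sketch below computes a posterior mean and a 95% credible interval for a proportion under a uniform prior; the counts are hypothetical, and real analyses in the article's spirit would typically use MCMC software rather than a closed-form posterior.

```python
from scipy.stats import beta

# Hypothetical data: 27 successes in 40 trials, uniform Beta(1, 1) prior.
successes, trials = 27, 40
posterior = beta(1 + successes, 1 + (trials - successes))

# Estimation with quantified uncertainty: posterior mean and a 95%
# credible interval, rather than a point decision from a p value.
print("posterior mean:", round(posterior.mean(), 3))
print("95% credible interval:", [round(q, 3) for q in posterior.interval(0.95)])
```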

  15. UNSOLVED AND LATENT CRIME: DIFFERENCES AND SIMILARITIES

    Directory of Open Access Journals (Sweden)

    Mikhail Kleymenov

    2017-01-01

    Full Text Available Purpose of the article: to study the specific legal and informational nature of unsolved crime in comparison with the phenomenon of crime as a whole, through special study and analysis aimed at improving the efficiency of law enforcement. Research methods: abstract-logical, systematic, statistical, and the study of documents. Main results of the research: unsolved crime has a specific legal, statistical and informational nature as a criminal phenomenon, expressed in the cumulative statistical population of unsolved crimes. The array of unsolved crimes is the sum of acts for which proceedings have been suspended but not terminated; the guilt of the perpetrator in these cases is not proven, the cases are not considered by a court, and there is no conviction. Unsolved crime must be registered. Latent crime has a different informational nature: its main characteristic is that it remains unknown to the subjects of law enforcement delegated the functions of identification, registration and accounting, and it is therefore not recorded. At the same time, there is a "border" area between latent and unsolved crime, which includes crimes concealed from the official record; in modern Russia the majority of crimes are concealed from the record by decisions refusing to initiate a criminal case. In terms of its criminogenic consequences, unsolved crime represents a significantly greater danger to the public than latent crime. The article offers, for the first time in the criminological literature, a special analysis of the differences and similarities between unsolved and latent crime. The analysis proves the need for radical changes in the current Russian assessment of the state of crime and of law enforcement performance in solving crimes. The article argues that unsolved crime is a separate and, in contrast to latent crime, poorly studied phenomenon; nevertheless, unsolved and latent crime share common features and areas of interaction.

  16. Statistical analysis of solar proton events

    Directory of Open Access Journals (Sweden)

    V. Kurt

    2004-06-01

    Full Text Available A new catalogue of 253 solar proton events (SPEs) with energy >10 MeV and peak intensity >10 protons/(cm2 s sr) (pfu) at the Earth's orbit for three complete 11-year solar cycles (1970-2002) is given. A statistical analysis of this data set of SPEs and their associated flares that occurred during this time period is presented. It is outlined that 231 of these proton events are flare related and only 22 of them are not associated with Hα flares. It is also noteworthy that 42 of these events are registered as Ground Level Enhancements (GLEs) in neutron monitors. The longitudinal distribution of the associated flares shows that a great number of these events are connected with western flares. This analysis enables one to understand the long-term dependence of the SPEs and the related flare characteristics on the solar cycle, which is useful for space weather prediction.

  17. STATISTICAL ANALYSIS OF THE HEAVY NEUTRAL ATOMS MEASURED BY IBEX

    International Nuclear Information System (INIS)

    Park, Jeewoo; Kucharek, Harald; Möbius, Eberhard; Galli, André; Livadiotis, George; Fuselier, Steve A.; McComas, David J.

    2015-01-01

    We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years from 2009 to 2011. The interstellar neutral (ISN) O and Ne gas flow was found in the first-year heavy neutral map at 601 keV and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background from areas where the heavy neutral signal is statistically significant. These methods also allow the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O and Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath

  18. A Model for Comparative Analysis of the Similarity between Android and iOS Operating Systems

    Directory of Open Access Journals (Sweden)

    Lixandroiu R.

    2014-12-01

    Full Text Available Due to the recent expansion of mobile devices, this article attempts an analysis of two of the most widely used mobile operating systems (OSs). The analysis is based on the calculation of Jaccard's similarity coefficient. To complete the analysis, we developed a hierarchy of factors for evaluating the OSs. The analysis has shown that the two OSs are similar in terms of functionality, but there are a number of factors that, when weighted, make a difference.
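
    Jaccard's similarity coefficient is the size of the intersection of two sets divided by the size of their union. The sketch below computes it for two hypothetical feature inventories; the feature lists are invented for illustration and are not the factor hierarchy developed in the article.

```python
def jaccard(a, b):
    """Jaccard similarity: |A ∩ B| / |A ∪ B| for two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical feature inventories for the two operating systems.
android = {"multitasking", "widgets", "nfc", "voice assistant", "app store"}
ios = {"multitasking", "nfc", "voice assistant", "app store", "facetime"}

print(round(jaccard(android, ios), 2))  # 4 shared of 6 total -> 0.67
```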

  19. Explorations in statistics: the analysis of ratios and normalized data.

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-09-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques, which include analysis of covariance, are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
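
    The pitfall is easy to reproduce numerically. In the sketch below the true relationship has a nonzero intercept, so the ratio drifts systematically with the denominator while a simple regression recovers the correct slope and intercept; all data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical data where the true relationship has a nonzero intercept:
# numerator = 2.0 + 0.5 * denominator + noise.
den = rng.uniform(5, 50, size=100)
num = 2.0 + 0.5 * den + rng.normal(0, 1, size=100)

# The ratio drifts with the denominator, so it misrepresents the relation.
ratios = num / den
small, large = den < np.median(den), den >= np.median(den)
print("mean ratio, small denominators:", round(ratios[small].mean(), 3))
print("mean ratio, large denominators:", round(ratios[large].mean(), 3))

# Regression recovers both slope and intercept and shows the line does
# not pass through the origin, which is what invalidates the ratio.
slope, intercept = np.polyfit(den, num, deg=1)
print("slope:", round(slope, 3), "intercept:", round(intercept, 3))
```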

  20. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

    This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts, and different methodologies are used, namely the likelihood ratio criterion and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.

  1. Perceptual and statistical analysis of cardiac phase and amplitude images

    International Nuclear Information System (INIS)

    Houston, A.; Craig, A.

    1991-01-01

    A perceptual experiment was conducted using cardiac phase and amplitude images. Estimates of statistical parameters were derived from the images and the diagnostic potential of human and statistical decisions compared. Five methods were used to generate the images from 75 gated cardiac studies, 39 of which were classified as pathological. The images were presented to 12 observers experienced in nuclear medicine. The observers rated the images using a five-category scale based on their confidence that an abnormality was present. Circular and linear statistics were used to analyse phase and amplitude image data, respectively. Estimates of mean, standard deviation (SD), skewness, kurtosis and the first term of the spatial correlation function were evaluated in the region of the left ventricle. A receiver operating characteristic analysis was performed on both sets of data and the human and statistical decisions compared. For phase images, circular SD was shown to discriminate better between normal and abnormal than experienced observers, but no single statistic discriminated as well as the human observer for amplitude images. (orig.)
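
    Circular statistics such as the circular SD used above differ from their linear counterparts because phase wraps around at 360 degrees. A minimal sketch with hypothetical phase values:

```python
import numpy as np
from scipy.stats import circmean, circstd

rng = np.random.default_rng(7)
# Hypothetical phase values (degrees) from left-ventricle pixels.
phases = rng.normal(loc=140.0, scale=12.0, size=200) % 360

# Phase is an angle, so the ordinary mean/SD are wrong near the 0/360
# wrap; circular statistics handle the wrap-around correctly.
print("circular mean:", round(circmean(phases, high=360, low=0), 1))
print("circular SD:  ", round(circstd(phases, high=360, low=0), 1))
```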

  2. Statistical analysis of the count and profitability of air conditioners.

    Science.gov (United States)

    Rady, El Houssainy A; Mohamed, Salah M; Abd Elmegaly, Alaa A

    2018-08-01

    This article presents a statistical analysis of the number and profitability of air conditioners in an Egyptian company. The hypothesis that the distribution is the same across the categories of each categorical variable was checked using the Kruskal-Wallis test.
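
    The Kruskal-Wallis test referred to above is a nonparametric comparison of several independent groups. A minimal sketch with invented profitability figures:

```python
from scipy.stats import kruskal

# Hypothetical profitability figures grouped by air-conditioner category.
group_a = [12.1, 14.3, 11.8, 15.0, 13.2]
group_b = [10.4, 9.8, 11.1, 12.0, 10.9]
group_c = [13.5, 14.8, 12.9, 15.4, 14.1]

# Kruskal-Wallis tests whether the groups share the same distribution
# without assuming normality.
stat, p = kruskal(group_a, group_b, group_c)
print(f"H = {stat:.2f}, p = {p:.4f}")
```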

  3. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we try to build a CBIR system based on learning a distance/similarity function using Linear Discriminant Analysis (LDA) and the Histogram of Oriented Gradients (HoG) feature. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments happens because we did not perform a sliding-window search and because of the low number of negative samples of natural-world images.

  4. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    Science.gov (United States)

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts, limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
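
    metaCCA itself operates on summary statistics, but the underlying technique is classical canonical correlation analysis. The sketch below illustrates plain CCA on simulated individual-level genotypes and phenotypes; it is a conceptual illustration of the base method, not the metaCCA algorithm.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(8)
# Hypothetical individual-level data: X = genotypes (n x SNPs),
# Y = phenotypes (n x traits). metaCCA works from summary statistics
# instead; this only illustrates the underlying technique.
n = 500
X = rng.binomial(2, 0.3, size=(n, 5)).astype(float)
Y = 0.4 * X[:, [0]] + rng.normal(size=(n, 3))

cca = CCA(n_components=1).fit(X, Y)
u, v = cca.transform(X, Y)
# The first canonical correlation measures the strongest linear
# association between any weighted SNP combination and trait combination.
print("canonical correlation:", round(np.corrcoef(u[:, 0], v[:, 0])[0, 1], 3))
```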

  5. Statistical Analysis of the Exchange Rate of Bitcoin.

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
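
    The fitting-and-comparison exercise described above is easy to reproduce in outline: fit several candidate distributions by maximum likelihood and rank them by a criterion such as AIC. The sketch below uses simulated heavy-tailed returns and a reduced candidate list (the generalized hyperbolic family is omitted for brevity), so it illustrates the procedure rather than the paper's results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Hypothetical daily log-returns standing in for the Bitcoin/USD series.
returns = stats.t.rvs(df=3, scale=0.03, size=1000, random_state=rng)

# Fit several candidate distributions by maximum likelihood and rank
# them by AIC, mirroring the paper's fit comparison.
candidates = {"normal": stats.norm, "student-t": stats.t,
              "laplace": stats.laplace, "cauchy": stats.cauchy}
for name, dist in candidates.items():
    params = dist.fit(returns)
    aic = 2 * len(params) - 2 * dist.logpdf(returns, *params).sum()
    print(f"{name:10s} AIC = {aic:.1f}")
```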

  6. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)

    2005-09-26

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained expressions for the mean square displacement and the effective exponent as functions of time and percolation probability by statistical analysis, and compared them with simulations. We introduced a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.

  7. General specifications for the development of a USL NASA PC R and D statistical analysis support package

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros

    1984-01-01

    The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed to be a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.

  8. Composites Similarity Analysis Method Based on Knowledge Set in Composites Quality Control

    OpenAIRE

    Li Haifeng

    2016-01-01

    Composites similarity analysis is an important link in composites review: it not only supports rechecking during the review of submissions, but also helps applicants promptly follow progress on related research content and avoid duplication. This paper mainly studies the composites similarity model used in composites review. Drawing on practical experience of composites management and based on the author's knowledge set theory, the paper analyzes in depth the knowledge set representation of composites knowledge, impr...

  9. Uncovering highly obfuscated plagiarism cases using fuzzy semantic-based similarity model

    Directory of Open Access Journals (Sweden)

    Salha M. Alzahrani

    2015-07-01

    Full Text Available Highly obfuscated plagiarism cases contain unseen and obfuscated texts, which pose difficulties when using existing plagiarism detection methods. A fuzzy semantic-based similarity model for uncovering obfuscated plagiarism is presented and compared with five state-of-the-art baselines. Semantic relatedness between words is studied based on part-of-speech (POS) tags and WordNet-based similarity measures. Fuzzy-based rules are introduced to assess the semantic distance between source and suspicious texts of short lengths, which implement the semantic relatedness between words as a membership function to a fuzzy set. In order to minimize the number of false positives and false negatives, a learning method that combines a permission threshold and a variation threshold is used to decide true plagiarism cases. The proposed model and the baselines are evaluated on 99,033 ground-truth annotated cases extracted from different datasets, including 11,621 (11.7%) handmade paraphrases, 54,815 (55.4%) artificial plagiarism cases, and 32,578 (32.9%) plagiarism-free cases. We conduct extensive experimental verifications, including the study of the effects of different segmentation schemes and parameter settings. Results are assessed using precision, recall, F-measure and granularity on stratified 10-fold cross-validation data. The statistical analysis using paired t-tests shows that the proposed approach is statistically significant in comparison with the baselines, which demonstrates the competence of the fuzzy semantic-based model to detect plagiarism cases beyond literal plagiarism. Additionally, the analysis of variance (ANOVA) statistical test shows the effectiveness of different segmentation schemes used with the proposed approach.

  10. Privacy-Preserving Patient Similarity Learning in a Federated Environment: Development and Analysis.

    Science.gov (United States)

    Lee, Junghye; Sun, Jimeng; Wang, Fei; Wang, Shuang; Jun, Chi-Hyuck; Jiang, Xiaoqian

    2018-04-13

    There is an urgent need for the development of global analytic frameworks that can perform analyses in a privacy-preserving federated environment across multiple institutions without privacy leakage. A few studies on the topic of federated medical analysis have been conducted recently with the focus on several algorithms. However, none of them have solved similar patient matching, which is useful for applications such as cohort construction for cross-institution observational studies, disease surveillance, and clinical trials recruitment. The aim of this study was to present a privacy-preserving platform in a federated setting for patient similarity learning across institutions. Without sharing patient-level information, our model can find similar patients from one hospital to another. We proposed a federated patient hashing framework and developed a novel algorithm to learn context-specific hash codes to represent patients across institutions. The similarities between patients can be efficiently computed using the resulting hash codes of corresponding patients. To avoid security attack from reverse engineering on the model, we applied homomorphic encryption to patient similarity search in a federated setting. We used sequential medical events extracted from the Multiparameter Intelligent Monitoring in Intensive Care-III database to evaluate the proposed algorithm in predicting the incidence of five diseases independently. Our algorithm achieved average areas under the curve of 0.9154 and 0.8012 with balanced and imbalanced data, respectively, in k-nearest neighbor search with k=3. We also confirmed privacy preservation in similarity search by using homomorphic encryption. The proposed algorithm can help search similar patients across institutions effectively to support federated data analysis in a privacy-preserving manner. ©Junghye Lee, Jimeng Sun, Fei Wang, Shuang Wang, Chi-Hyuck Jun, Xiaoqian Jiang. Originally published in JMIR Medical Informatics.

  11. A method for statistical steady state thermal analysis of reactor cores

    International Nuclear Information System (INIS)

    Whetton, P.A.

    1981-01-01

    In a previous publication the author presented a method for undertaking statistical steady state thermal analyses of reactor cores. The present paper extends the technique to an assessment of confidence limits for the resulting probability functions which define the probability that a given thermal response value will be exceeded in a reactor core. Establishing such confidence limits is considered an integral part of any statistical thermal analysis and essential if such analyses are to be considered in any regulatory process. In certain applications the use of a best estimate probability function may be justifiable, but it is recognised that a demonstrably conservative probability function is required for any regulatory considerations. (orig.)

  12. A statistical test for outlier identification in data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-09-01

    Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these "deterministic" frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.

  13. Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis

    Science.gov (United States)

    Konrad, T. G.; Kropfli, R. A.

    1975-01-01

    Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.

  14. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Science.gov (United States)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-06-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  15. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    International Nuclear Information System (INIS)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-01-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  16. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Energy Technology Data Exchange (ETDEWEB)

    Glascock, M. D.; Neff, H. [University of Missouri, Research Reactor Center (United States); Vaughn, K. J. [Pacific Lutheran University, Department of Anthropology (United States)

    2004-06-15

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  17. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship between variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine was found

  18. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...

  19. Statistical analysis of the BOIL program in RSYST-III

    International Nuclear Information System (INIS)

    Beck, W.; Hausch, H.J.

    1978-11-01

    The paper describes a statistical analysis in the RSYST-III program system. Using the example of the BOIL program, it is shown how the effects of inaccurate input data on the output data can be discovered. The existing possibilities of data generation, data handling, and data evaluation are outlined. (orig.) [de

  20. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    International Nuclear Information System (INIS)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T.

    1993-01-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs

  1. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    Energy Technology Data Exchange (ETDEWEB)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T. (University of Santiago, Santiago (Spain). Faculty of Mathematics, Dept. of Statistics and Operations Research)

    1993-07-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs.

  2. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    Science.gov (United States)

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
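
    The 7Q10 statistic mentioned above is the 7-day low flow with a 10-year recurrence interval. A minimal empirical version of the computation is sketched below on simulated daily flows; SWToolbox additionally supports fitted-distribution estimates and proper water-year handling, which this illustration ignores.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(10)
# Hypothetical daily streamflow record (20 years), a stand-in for data
# retrieved from the USGS National Water Information System.
days = pd.date_range("1990-10-01", periods=20 * 365, freq="D")
flow = pd.Series(rng.lognormal(mean=3.0, sigma=0.6, size=len(days)), index=days)

# Annual series of 7-day minimum flows: 7-day rolling mean, then the
# yearly minimum (calendar years approximate water years here).
min7 = flow.rolling(7).mean().groupby(flow.index.year).min().dropna()

# 7Q10: the 10% non-exceedance quantile of the annual minima, i.e. the
# 7-day low flow exceeded in 9 years out of 10 (empirical estimate).
q7_10 = np.quantile(min7, 0.10)
print(round(q7_10, 2))
```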

  3. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Science.gov (United States)

    2011-01-01

    This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practise with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids. PMID:21711932

  4. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Science.gov (United States)

    Sergis, Antonis; Hardalupas, Yannis

    2011-05-01

    This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practise with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids.

  5. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Directory of Open Access Journals (Sweden)

    Sergis Antonis

    2011-01-01

    Full Text Available Abstract This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practise with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids.

  6. Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks

    Directory of Open Access Journals (Sweden)

    Luciano Pivoto Specht

    2007-03-01

    Full Text Available It is of great importance to know a binder's viscosity in order to perform handling, mixing and application processes and asphalt mix compaction in highway surfacing. This paper presents the results of viscosity measurements on asphalt-rubber binders prepared in the laboratory. The binders were prepared varying the rubber content, rubber particle size, and duration and temperature of mixing, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for prediction of the binders' viscosity. The comparison between experimental data and results simulated with the generated models showed better performance of the neural network models than the statistical models. The results indicated that the rubber content and the duration of mixing have the greatest influence on the observed viscosity over the considered interval of parameter variation.

  7. Common pitfalls in statistical analysis: Odds versus risk

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh; Pramesh, C. S.

    2015-01-01

    In biomedical research, we are often interested in quantifying the relationship between an exposure and an outcome. “Odds” and “Risk” are the most common terms which are used as measures of association between variables. In this article, which is the fourth in the series of common pitfalls in statistical analysis, we explain the meaning of risk and odds and the difference between the two. PMID:26623395
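
    The distinction is clearest on a 2x2 table. In the hypothetical example below the relative risk is 3.0 while the odds ratio is 3.86; the two measures diverge whenever the outcome is common.

```python
# Hypothetical 2x2 table: [with outcome, without outcome] per group.
exposed = [30, 70]
unexposed = [10, 90]

risk_exposed = exposed[0] / sum(exposed)        # 0.30
risk_unexposed = unexposed[0] / sum(unexposed)  # 0.10
odds_exposed = exposed[0] / exposed[1]          # 0.4286
odds_unexposed = unexposed[0] / unexposed[1]    # 0.1111

print("relative risk:", round(risk_exposed / risk_unexposed, 2))  # 3.0
print("odds ratio:   ", round(odds_exposed / odds_unexposed, 2))  # 3.86
```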

  8. Statistical Analysis of the Exchange Rate of Bitcoin.

    Directory of Open Access Journals (Sweden)

    Jeffrey Chu

    Full Text Available Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.

  9. Statistical Analysis of the Exchange Rate of Bitcoin

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate. PMID:26222702

  10. Statistical analysis of incidents reported in the Greek Petrochemical Industry for the period 1997-2003

    International Nuclear Information System (INIS)

    Konstandinidou, Myrto; Nivolianitou, Zoe; Markatos, Nikolaos; Kiranoudis, Chris

    2006-01-01

    This paper presents an analysis of all reported accidents and incidents in the Greek Petrochemical Industry for the period spanning from 1997 to 2003. The work performed covers the analysis of important parameters of the incidents, their inclusion in a database adequately designed for the purposes of this analysis, and an assessment of the importance of this reporting scheme. Indeed, various stakeholders have highlighted the importance of a reporting system for industrial accidents and incidents. The European Union has established for this purpose the Major Accident Reporting System (MARS) for the reporting of major accidents in the Member States. However, major accidents are not the only measure that can characterize the safety status of an establishment; neither are they the only events from which important lessons can be learned. Near misses, industrial incidents without major consequences, as well as occupational accidents could equally supply the interested analyst with important findings, while statistical analysis of these incidents could give significant insight into the understanding and the prevention of similar incidents or major accidents in the future. This analysis could be more significant if each industrial sector were separately analyzed, as the authors do for the petrochemical sector in the present article

  11. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
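
    One-way ANOVA needs only the per-group sample sizes, means and standard deviations, which is why summary statistics suffice. A minimal Python equivalent of the computation (the summary figures are invented):

```python
from scipy.stats import f as f_dist

def anova_from_summary(ns, means, sds):
    """One-way ANOVA F test from per-group n, mean and standard deviation."""
    k, n_total = len(ns), sum(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / n_total
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_b, df_w = k - 1, n_total - k
    F = (ss_between / df_b) / (ss_within / df_w)
    p = f_dist.sf(F, df_b, df_w)
    return F, p

# Hypothetical summary statistics for three categories.
F, p = anova_from_summary(ns=[12, 15, 11],
                          means=[5.1, 6.3, 4.8],
                          sds=[1.2, 1.4, 1.1])
print(f"F = {F:.2f}, p = {p:.4f}")
```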

  12. The Australasian Resuscitation in Sepsis Evaluation (ARISE) trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony P; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-09-01

    The Australasian Resuscitation in Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the emergency department with severe sepsis. In keeping with current practice, and considering aspects of trial design and reporting specific to non-pharmacological interventions, our plan outlines the principles and methods for analysing and reporting the trial results. The document is prepared before completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and before completion of the two related international studies. Our statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. We reviewed the data collected by the research team as specified in the study protocol and detailed in the study case report form. We describe information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation, other related therapies and other relevant data with appropriate comparisons between groups. We define the primary, secondary and tertiary outcomes for the study, with description of the planned statistical analyses. We have developed a statistical analysis plan with a trial profile, mock-up tables and figures. We describe a plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies and adverse events. We describe the primary, secondary and tertiary outcomes with identification of subgroups to be analysed. We have developed a statistical analysis plan for the ARISE study, available in the public domain, before the completion of recruitment into the study. This will minimise analytical bias and

  13. Vector-field statistics for the analysis of time varying clinical gait data.

    Science.gov (United States)

    Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A

    2017-01-01

    In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analysis of these data is commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures in the event statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration of practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. Tester 1 and tester 2 were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed, respectively. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that, in principle, vector-field statistics corroborate what a team of clinical gait experts would classify as practically meaningful pre- versus post time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established. Future research is recommended to assess the usefulness of vector-field analyses
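
    As a rough Python illustration of the waveform-level (1D) testing idea, the sketch below compares two sets of synthetic gait curves with a pointwise t statistic and a permutation distribution of its maximum, which controls the family-wise error across the whole gait cycle; dedicated packages such as spm1d implement the full vector-field method, and all data and group sizes here are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical waveforms: trials x 101 time nodes (0-100% gait cycle).
        pre = rng.normal(0.0, 1.0, size=(10, 101))
        post = rng.normal(0.4, 1.0, size=(10, 101))  # shifted mean mimics a treatment effect

        def t_curve(a, b):
            # Pointwise two-sample t statistic along the waveform.
            se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
            return (a.mean(axis=0) - b.mean(axis=0)) / se

        t_obs = t_curve(pre, post)

        # Permutation distribution of max |t| over the cycle gives a
        # whole-waveform significance threshold.
        pooled = np.vstack([pre, post])
        max_t = []
        for _ in range(2000):
            idx = rng.permutation(len(pooled))
            max_t.append(np.abs(t_curve(pooled[idx[:10]], pooled[idx[10:]])).max())
        threshold = np.quantile(max_t, 0.95)
        print("supra-threshold time nodes:", np.where(np.abs(t_obs) > threshold)[0])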

  14. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    Science.gov (United States)

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

    To implement the online statistical analysis function in an information system for air pollution and health impact monitoring, and to obtain data analysis information in real time. Descriptive statistics, time-series analysis and multivariate regression analysis were implemented on top of the database software using SQL and visual tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each data section online, with interactive connection to the database; and generates export sheets that can be loaded directly into R, SAS and SPSS. The information system for air pollution and health impact monitoring implements the statistical analysis function online and can provide real-time analysis results to its users.

  15. Introduction to statistics and data analysis with exercises, solutions and applications in R

    CERN Document Server

    Heumann, Christian; Shalabh

    2016-01-01

    This introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. In the experimental sciences and interdisciplinary research, data analysis has become an integral part of any scientific study. Issues such as judging the credibility of data, analyzing the data, evaluating the reliability of the obtained results and finally drawing the correct and appropriate conclusions from the results are vital. The text is primarily intended for undergraduate students in disciplines like business administration, the social sciences, medicine, politics, macroeconomics, etc. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R as well as supplementary material that will enable the reader to quickly adapt all methods to their own applications.

  16. Methodology of comparative statistical analysis of Russian industry based on cluster analysis

    Directory of Open Access Journals (Sweden)

    Sergey S. Shishulin

    2017-01-01

    Full Text Available The article is devoted to researching the possibilities of applying multidimensional statistical analysis in the study of industrial production, on the basis of comparing its growth rates and structure with other developed and developing countries of the world. The purpose of this article is to determine the optimal set of statistical methods and the results of their application to industrial production data that would best support the analysis of the results. The data include indicators such as output, gross value added, the number of employed persons and other indicators of the system of national accounts and operational business statistics. The objects of observation are the industries of the countries of the Customs Union, the United States, Japan and Europe in 2005-2015. The research tools range from simple transformation methods and graphical and tabular visualization of data to methods of multivariate statistical analysis. In particular, based on a specialized software package (SPSS), the principal components method, discriminant analysis, and hierarchical methods of cluster analysis, Ward's method and k-means, were applied. The application of the principal components method to the initial data makes it possible to substantially and effectively reduce the initial space of industrial production data. Thus, for example, in analyzing the structure of industrial production, the reduction was from fifteen industries to three basic, well-interpreted factors: relatively extractive industries (with a low degree of processing), high-tech industries, and consumer goods (medium-technology sectors). At the same time, a comparison of the results of applying cluster analysis to the initial data and to the data obtained with the principal components method established that clustering industrial production data on the basis of the new factors significantly improves the clustering results. As a result of analyzing the parameters of
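
    The pipeline described, principal components to compress the industry indicators followed by clustering on the component scores, can be sketched in a few lines of Python with scikit-learn (the authors used SPSS). The country-by-industry matrix below is a random placeholder, and the component and cluster counts are illustrative choices rather than the study's.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.random((20, 15))  # 20 hypothetical countries x 15 industry indicators

        # Step 1: reduce the fifteen indicators to a few principal components.
        pca = PCA(n_components=3)
        scores = pca.fit_transform(StandardScaler().fit_transform(X))
        print("explained variance ratios:", pca.explained_variance_ratio_)

        # Step 2: cluster countries on the component scores; the article reports
        # that clustering on such factors improves on clustering the raw data.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
        print(labels)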

  17. Generalized sample entropy analysis for traffic signals based on similarity measure

    Science.gov (United States)

    Shang, Du; Xu, Mengjia; Shang, Pengjian

    2017-05-01

    Sample entropy is a prevailing method used to quantify the complexity of a time series. In this paper a modified method of generalized sample entropy and surrogate data analysis is proposed as a new measure to assess the complexity of a complex dynamical system such as traffic signals. The method, based on similarity distance, presents a different way of matching signal patterns that reveals distinct complexity behaviors. Simulations are conducted on synthetic data and traffic signals to provide a comparative study demonstrating the power of the new method. Compared with previous sample entropy and surrogate data analysis, the new method has two main advantages. The first is that it overcomes the limitation concerning the relationship between the dimension parameter and the length of the series. The second is that the modified sample entropy functions can be used to quantitatively distinguish time series from different complex systems by the similarity measure.
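
    For reference, the classical sample entropy that the paper generalizes can be written compactly in Python. This is a sketch of the standard Chebyshev-distance formulation, not the authors' similarity-distance variant; m is the template length and r the tolerance as a fraction of the standard deviation.

        import numpy as np

        def sample_entropy(x, m=2, r=0.2):
            # SampEn = -ln(A/B), where B counts template matches of length m and
            # A counts matches of length m+1 (Chebyshev distance, no self-matches).
            x = np.asarray(x, dtype=float)
            tol = r * x.std()

            def matches(length):
                t = np.array([x[i:i + length] for i in range(len(x) - length)])
                d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
                return (d <= tol).sum() - len(t)  # drop self-matches on the diagonal

            return -np.log(matches(m + 1) / matches(m))

        rng = np.random.default_rng(2)
        print(sample_entropy(rng.normal(size=500)))             # irregular: higher entropy
        print(sample_entropy(np.sin(np.linspace(0, 20, 500))))  # regular: lower entropy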

  18. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and final survey. At each stage, collecting relevant data to be able to draw the conclusions needed is quite a big challenge. In particular two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorisation and optimisation of the materials to be removed and the final survey to demonstrate compliance with clearance levels. On the one hand the latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. On the other hand a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements, providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of a spatial continuity for radiological contamination. Thus geostatistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). This way, the radiological characterisation of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. Combined with

  19. Statistical Modeling Approach to Quantitative Analysis of Interobserver Variability in Breast Contouring

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Jinzhong, E-mail: jyang4@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Zhang, Lifei; Balter, Peter; Court, Laurence E. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Li, X. Allen [Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, Wisconsin (United States); Dong, Lei [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Scripps Proton Therapy Center, San Diego, California (United States)

    2014-05-01

    Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template that was generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group (RTOG) consensus contouring atlas for breast cancer. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, standard deviation (SD) of ±5.9%, a skewness of −0.7, and excess kurtosis of 0.55, exemplifying broad interobserver variability. The 3 RTOG-trained physicians had higher agreement scores than average, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour a structure larger than those of the others. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour a structure smaller than the others. With this information, they could adjust their contouring practice to be more consistent with others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, SD ± 3.4%, skewness of −0.79, and excess kurtosis of 0.83, which indicated a much better consistency among individual contours. Similar results were obtained for the analysis of 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively
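
    The central quantitative step, fitting a beta distribution to the per-observer agreement (Jaccard) scores, is easy to reproduce in Python. The eight scores below are invented placeholders, not the study's data; pinning the support to [0, 1] leaves only the two shape parameters to estimate, from which the summary moments follow.

        import numpy as np
        from scipy import stats

        # Hypothetical Jaccard scores of 8 observers against the consensus contour.
        scores = np.array([0.91, 0.88, 0.84, 0.90, 0.79, 0.86, 0.83, 0.89])

        a, b, loc, scale = stats.beta.fit(scores, floc=0, fscale=1)
        mean, var, skew, kurt = stats.beta.stats(a, b, moments="mvsk")
        print(f"mean={mean:.3f}  sd={np.sqrt(var):.3f}  "
              f"skewness={skew:.2f}  excess kurtosis={kurt:.2f}")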

  20. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system

  1. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1975-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system. 19 references

  2. RESEARCH OF THE DATA BANK OF STATISTICAL ANALYSIS OF THE ADVERTISING MARKET

    Directory of Open Access Journals (Sweden)

    Ekaterina F. Devochkina

    2014-01-01

    Full Text Available The article describes the process of statistical accounting in the Russian advertising market. The author examines the forms of state statistical accounting used in different years, noting their distinctive features and shortcomings. The article also contains an analysis of alternative sources of numerical information on the Russian advertising market.

  3. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  4. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    Full Text Available In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. Visualization of the simulation is portrayed as well. At redshifts z = 0-7 halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 - 10^14 M⊙/h.

  5. Comparison between statistical and optimization methods in accessing unmixing of spectrally similar materials

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-11-01

    Full Text Available This paper reports on the results from ordinary least squares and ridge regression as statistical methods, and compares them to numerical optimization methods such as the stochastic method for global optimization, simulated annealing, particle swarm...

  6. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  7. Multisample adjusted U-statistics that account for confounding covariates.

    Science.gov (United States)

    Satten, Glen A; Kong, Maiying; Datta, Somnath

    2018-06-19

    Multisample U-statistics encompass a wide class of test statistics that allow the comparison of 2 or more distributions. U-statistics are especially powerful because they can be applied to both numeric and nonnumeric data, eg, ordinal and categorical data where a pairwise similarity or distance-like measure between categories is available. However, when comparing the distribution of a variable across 2 or more groups, observed differences may be due to confounding covariates. For example, in a case-control study, the distribution of exposure in cases may differ from that in controls entirely because of variables that are related to both exposure and case status and are distributed differently among case and control participants. We propose to use individually reweighted data (ie, using the stratification score for retrospective data or the propensity score for prospective data) to construct adjusted U-statistics that can test the equality of distributions across 2 (or more) groups in the presence of confounding covariates. Asymptotic normality of our adjusted U-statistics is established and a closed form expression of their asymptotic variance is presented. The utility of our approach is demonstrated through simulation studies, as well as in an analysis of data from a case-control study conducted among African-Americans, comparing whether the similarity in haplotypes (ie, sets of adjacent genetic loci inherited from the same parent) occurring in a case and a control participant differs from the similarity in haplotypes occurring in 2 control participants. Copyright © 2018 John Wiley & Sons, Ltd.

  8. Multicriteria Similarity-Based Anomaly Detection Using Pareto Depth Analysis.

    Science.gov (United States)

    Hsiao, Ko-Jen; Xu, Kevin S; Calder, Jeff; Hero, Alfred O

    2016-06-01

    We consider the problem of identifying patterns in a data set that exhibits anomalous behavior, often referred to as anomaly detection. Similarity-based anomaly detection algorithms detect abnormally large amounts of similarity or dissimilarity, e.g., as measured by the nearest neighbor Euclidean distances between a test sample and the training samples. In many application domains, there may not exist a single dissimilarity measure that captures all possible anomalous patterns. In such cases, multiple dissimilarity measures can be defined, including nonmetric measures, and one can test for anomalies by scalarizing using a nonnegative linear combination of them. If the relative importance of the different dissimilarity measures is not known in advance, as in many anomaly detection applications, the anomaly detection algorithm may need to be executed multiple times with different choices of weights in the linear combination. In this paper, we propose a method for similarity-based anomaly detection using a novel multicriteria dissimilarity measure, the Pareto depth. The proposed Pareto depth analysis (PDA) anomaly detection algorithm uses the concept of Pareto optimality to detect anomalies under multiple criteria without having to run an algorithm multiple times with different choices of weights. The proposed PDA approach is provably better than using linear combinations of the criteria, and shows superior performance on experiments with synthetic and real data sets.
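
    The core construct, Pareto fronts over pairs of dissimilarity values, can be illustrated with a small front-peeling routine in Python. The two criteria below are random placeholders; in PDA proper, fronts are built from dyads of training samples, and a test sample whose dyads concentrate on deep fronts receives a high anomaly score.

        import numpy as np

        def pareto_depths(points):
            # Peel successive Pareto fronts of 2-D criterion points
            # (smaller is better); depth 1 marks the non-dominated front.
            pts = np.asarray(points, dtype=float)
            depth = np.zeros(len(pts), dtype=int)
            remaining = np.arange(len(pts))
            d = 1
            while remaining.size:
                sub = pts[remaining]
                dominated = np.array([
                    np.any(np.all(sub <= p, axis=1) & np.any(sub < p, axis=1))
                    for p in sub
                ])
                depth[remaining[~dominated]] = d
                remaining = remaining[dominated]
                d += 1
            return depth

        rng = np.random.default_rng(3)
        dyads = rng.random((200, 2))  # two dissimilarity measures per dyad
        depths = pareto_depths(dyads)
        print("front depths range:", depths.min(), "to", depths.max())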

  9. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    Science.gov (United States)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  10. Similarity Analysis for Reactor Flow Distribution Test and Its Validation

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Soon Joon; Ha, Jung Hui [Heungdeok IT Valley, Yongin (Korea, Republic of); Lee, Taehoo; Han, Ji Woong [KAERI, Daejeon (Korea, Republic of)]

    2015-05-15

    facility. It was clearly found in Hong et al. In this study the feasibility of the similarity analysis of Hong et al. was examined. The similarity analysis was applied to SFR which has been designed in KAERI (Korea Atomic Energy Research Institute) in order to design the reactor flow distribution test. The length scale was assumed to be 1/5, and the velocity scale 1/2, which bounds the square root of the length scale (1/√5). The CFX calculations for both prototype and model were carried out and the flow field was compared.

  11. The Statistical Properties of Host Load

    Directory of Open Access Journals (Sweden)

    Peter A. Dinda

    1999-01-01

    Full Text Available Understanding how host load changes over time is instrumental in predicting the execution time of tasks or jobs, such as in dynamic load balancing and distributed soft real‐time systems. To improve this understanding, we collected week‐long, 1 Hz resolution traces of the Digital Unix 5 second exponential load average on over 35 different machines including production and research cluster machines, compute servers, and desktop workstations. Separate sets of traces were collected at two different times of the year. The traces capture all of the dynamic load information available to user‐level programs on these machines. We present a detailed statistical analysis of these traces here, including summary statistics, distributions, and time series analysis results. Two significant new results are that load is self‐similar and that it displays epochal behavior. All of the traces exhibit a high degree of self‐similarity with Hurst parameters ranging from 0.73 to 0.99, strongly biased toward the top of that range. The traces also display epochal behavior in that the local frequency content of the load signal remains quite stable for long periods of time (150–450 s mean) and changes abruptly at epoch boundaries. Despite these complex behaviors, we have found that relatively simple linear models are sufficient for short‐range host load prediction.
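
    The degree of self-similarity reported here is summarized by the Hurst parameter (0.73-0.99 for these traces). A minimal rescaled-range (R/S) estimator in Python follows; the input is synthetic white noise, for which H should come out near 0.5, standing in for a real load trace.

        import numpy as np

        def hurst_rs(x, min_chunk=8):
            # Rescaled-range estimate: slope of log(R/S) against log(chunk size).
            x = np.asarray(x, dtype=float)
            sizes, rs = [], []
            n = min_chunk
            while n <= len(x) // 2:
                chunks = x[: len(x) // n * n].reshape(-1, n)
                dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
                r = dev.max(axis=1) - dev.min(axis=1)  # range of cumulative deviations
                s = chunks.std(axis=1)                 # chunk standard deviation
                keep = s > 0
                sizes.append(n)
                rs.append((r[keep] / s[keep]).mean())
                n *= 2
            slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
            return slope

        rng = np.random.default_rng(4)
        print(hurst_rs(rng.normal(size=4096)))  # uncorrelated noise: H near 0.5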

  12. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect and is recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  13. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-03-01

    Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect and is recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  14. Statistical analysis of proteomics, metabolomics, and lipidomics data using mass spectrometry

    CERN Document Server

    Mertens, Bart

    2017-01-01

    This book presents an overview of computational and statistical design and analysis of mass spectrometry-based proteomics, metabolomics, and lipidomics data. This contributed volume provides an introduction to the special aspects of statistical design and analysis with mass spectrometry data for the new omic sciences. The text discusses common aspects of design and analysis between and across all (or most) forms of mass spectrometry, while also providing special examples of application with the most common forms of mass spectrometry. Also covered are applications of computational mass spectrometry not only in clinical study but also in the interpretation of omics data in plant biology studies. Omics research fields are expected to revolutionize biomolecular research by the ability to simultaneously profile many compounds within either patient blood, urine, tissue, or other biological samples. Mass spectrometry is one of the key analytical techniques used in these new omic sciences. Liquid chromatography mass ...

  15. Three-Dimensional Assembly Tolerance Analysis Based on the Jacobian-Torsor Statistical Model

    Directory of Open Access Journals (Sweden)

    Peng Heping

    2017-01-01

    Full Text Available The unified Jacobian-Torsor model has been developed for deterministic (worst case) tolerance analysis. This paper presents a comprehensive model for performing statistical tolerance analysis by integrating the unified Jacobian-Torsor model and Monte Carlo simulation. In this model, an assembly is sub-divided into surfaces, and the Small Displacements Torsor (SDT) parameters are used to express the relative position between any two surfaces of the assembly. Then, a 3D dimension-chain can be created by using a surface graph of the assembly, and the unified Jacobian-Torsor model is developed based on the effect of each functional element on the whole functional requirements of the product. Finally, Monte Carlo simulation is implemented for the statistical tolerance analysis. A numerical example is given to demonstrate the capability of the proposed method in handling three-dimensional assembly tolerance analysis.
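
    The Monte Carlo half of the method is straightforward to sketch in Python. The example below replaces the Jacobian-Torsor machinery with a hypothetical one-dimensional gap chain, three toleranced dimensions whose ±0.05 mm tolerances are read as 3-sigma limits, purely to show how sampled part variation propagates to a defect rate on the functional requirement.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100_000

        # Hypothetical dimension chain: gap = a + b - c, each +/-0.05 mm at 3 sigma.
        a = rng.normal(10.0, 0.05 / 3, n)
        b = rng.normal(25.0, 0.05 / 3, n)
        c = rng.normal(34.9, 0.05 / 3, n)
        gap = a + b - c

        lo, hi = 0.0, 0.2  # functional requirement on the gap (mm)
        defect_rate = np.mean((gap < lo) | (gap > hi))
        print(f"gap mean={gap.mean():.4f} mm  sd={gap.std():.4f} mm  "
              f"out-of-spec fraction={defect_rate:.2e}")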

  16. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2009-01-01

    An All-in-One Resource for Using SAS and R to Carry out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applications.

  17. Statistical methods for data analysis in particle physics

    CERN Document Server

    AUTHOR|(CDS)2070643

    2015-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as a reminder of advanced undergraduate studies, yet also with a view to clearly distinguishing the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits, as many applications in HEP concern hypothesis testing, where often the main goal is to provide better and better limits so as to be able to distinguish eventually between competing hypotheses or to rule out some of them altogether. Many worked examples will help newcomers to the field and graduate students to understand the pitfalls in applying theoretical concepts to actual data

  18. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    Science.gov (United States)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. The daily rainfall data for a period of 30 years are used to understand the normal, deficit, excess and seasonal rainfall of the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return periods of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and plan storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best fit probability distribution was identified based on the minimum deviation between actual and estimated values. The scientific results and the analysis paved the way to determining the proper onset and withdrawal of the monsoon, information which is used for land preparation and sowing.
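
    The return-period step is easy to reproduce. In the Python sketch below the 30 annual totals are invented placeholders; the Weibull plotting position P = m/(n + 1) is one of the several formulae such a study compares, and T = 1/P gives the return period of the m-th largest year.

        import numpy as np

        # Hypothetical 30-year record of annual rainfall totals (mm).
        annual = np.array([820, 940, 610, 1105, 760, 980, 870, 690, 1010, 905,
                           730, 850, 990, 560, 1180, 800, 915, 640, 1075, 885,
                           710, 955, 830, 1020, 775, 895, 665, 1135, 745, 960])

        cv = annual.std(ddof=1) / annual.mean()
        print(f"mean={annual.mean():.0f} mm  sd={annual.std(ddof=1):.0f} mm  CV={cv:.2f}")

        # Weibull plotting position: exceedance probability P = m/(n+1), T = 1/P.
        ranked = np.sort(annual)[::-1]  # rank 1 = wettest year
        T = (len(ranked) + 1) / np.arange(1, len(ranked) + 1)
        for rain, t in list(zip(ranked, T))[:3]:
            print(f"{rain} mm  ->  {t:.1f}-year return period")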

  19. Statistical strategies to reveal potential vibrational markers for in vivo analysis by confocal Raman spectroscopy

    Science.gov (United States)

    Oliveira Mendes, Thiago de; Pinto, Liliane Pereira; Santos, Laurita dos; Tippavajhala, Vamshi Krishna; Téllez Soto, Claudio Alberto; Martin, Airton Abrahão

    2016-07-01

    The analysis of biological systems by spectroscopic techniques involves the evaluation of hundreds to thousands of variables. Hence, different statistical approaches are used to elucidate regions that discriminate classes of samples and to propose new vibrational markers for explaining various phenomena such as disease monitoring, mechanisms of action of drugs, food, and so on. However, the underlying statistical techniques are not always widely discussed in the applied sciences. In this context, this work presents a detailed discussion of the various steps necessary for proper statistical analysis. It includes univariate parametric and nonparametric tests, as well as multivariate unsupervised and supervised approaches. The main objective of this study is to promote proper understanding of the application of various statistical tools in the spectroscopic methods used for the analysis of biological samples. The discussion of these methods is carried out on a set of in vivo confocal Raman spectra of human skin, analysed with the aim of identifying skin aging markers. In the Appendix, a complete data analysis routine is executed in free software that can be used by the scientific community involved in these studies.

  20. A method for statistical steady state thermal analysis of reactor cores

    International Nuclear Information System (INIS)

    Whetton, P.A.

    1980-01-01

    This paper presents a method for performing a statistical steady state thermal analysis of a reactor core. The technique is only outlined here since detailed thermal equations are dependent on the core geometry. The method has been applied to a pressurised water reactor core and the results are presented for illustration purposes. Random hypothetical cores are generated using the Monte-Carlo method. The technique shows that by splitting the parameters into two types, denoted core-wise and in-core, the Monte Carlo method may be used inexpensively. The idea of using extremal statistics to characterise the low probability events (i.e. the tails of a distribution) is introduced together with a method of forming the final probability distribution. After establishing an acceptable probability of exceeding a thermal design criterion, the final probability distribution may be used to determine the corresponding thermal response value. If statistical and deterministic (i.e. conservative) thermal response values are compared, information on the degree of pessimism in the deterministic method of analysis may be inferred and the restrictive performance limitations imposed by this method relieved. (orig.)

  1. VGC analyzer: a software for statistical analysis of fully crossed multiple-reader multiple-case visual grading characteristics studies

    International Nuclear Information System (INIS)

    Baath, Magnus; Hansson, Jonny

    2016-01-01

    Visual grading characteristics (VGC) analysis is a non-parametric rank-invariant method for the analysis of visual grading data. In VGC analysis, image quality ratings for two different conditions are compared by producing a VGC curve, similar to how the ratings for normal and abnormal cases in receiver operating characteristic (ROC) analysis are used to create an ROC curve. The use of established ROC software for the analysis of VGC data has therefore previously been proposed. However, ROC analysis is based on the assumption of independence between normal and abnormal cases. In VGC analysis, this independence cannot always be assumed, e.g. if the ratings are based on the same patients imaged under both conditions. A dedicated software package intended for the analysis of VGC studies, which takes possible dependencies between ratings into account in the statistical analysis of a VGC study, has therefore been developed. The software, VGC Analyzer, determines the area under the VGC curve and its uncertainty using non-parametric re-sampling techniques. This article gives an introduction to VGC Analyzer, describes the types of analyses that can be performed and instructs the user about the input and output data. (authors)

  2. Statistical analysis of first period of operation of FTU Tokamak

    International Nuclear Information System (INIS)

    Crisanti, F.; Apruzzese, G.; Frigione, D.; Kroegler, H.; Lovisetto, L.; Mazzitelli, G.; Podda, S.

    1996-09-01

    On the FTU Tokamak plasma physics operations started on 20/4/90. The first plasma had a plasma current Ip=0.75 MA for about a second. The experimental phase lasted until 7/7/94, when a long shut-down began for installing the toroidal limiter on the inner side of the vacuum vessel. In these four years of operation plasma experiments were successfully carried out, e.g. experiments on single and multiple pellet injection; full current drive up to Ip=300 kA was obtained by using waves at the Lower Hybrid frequency; and analysis of ohmic plasma parameters with different materials (from low-Z silicon to high-Z tungsten) as the plasma-facing element was performed. In this work a statistical analysis of the full period of operation is presented. Moreover, a comparison with statistical data from other Tokamaks is attempted.

  3. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  4. Statistical analysis of the surface figure of the James Webb Space Telescope

    Science.gov (United States)

    Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John

    2012-09-01

    The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory level requirements were expressed in the metrics of Strehl Ratio and Encircled Energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), tertiary mirror (TM), and Fine Steering Mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The surface figure error of the resulting mirrors has been statistically analyzed. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into system budgeting processes for similar optical systems.

  5. Using R and RStudio for data management, statistical analysis and graphics

    CERN Document Server

    Horton, Nicholas J

    2015-01-01

    This is the second edition of the popular book on using R for statistical analysis and graphics. The authors, who run a popular blog supplementing their books, have focused on adding many new examples to this new edition. These examples are presented primarily in new chapters based on the following themes: simulation, probability, statistics, mathematics/computing, and graphics. The authors have also added many other updates, including a discussion of RStudio, a very popular development environment for R.

  6. Statistical analysis of absorptive laser damage in dielectric thin films

    International Nuclear Information System (INIS)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-01-01

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in the DC breakdown of materials. This distribution is employed to analyze the time-to-damage and intensity-to-damage statistics obtained when irradiating thin film coated samples of SiO2, ZrO2, and Al2O3 with tightly focused laser beams. The data used are furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained
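
    A Weibull fit of this kind takes only a few lines of Python. The time-to-damage values below are invented placeholders; fixing the location at zero gives the usual two-parameter lifetime fit, and a Kolmogorov-Smirnov test stands in for the quoted correlation-coefficient check.

        import numpy as np
        from scipy import stats

        # Hypothetical time-to-damage data (ns) for one coating at fixed fluence.
        t_damage = np.array([12.1, 15.4, 9.8, 18.2, 13.7, 11.0, 16.9, 14.3, 10.5, 17.6])

        shape, loc, scale = stats.weibull_min.fit(t_damage, floc=0)
        print(f"Weibull shape k={shape:.2f}, scale lambda={scale:.2f}")

        # Quick goodness-of-fit check against the fitted distribution.
        print(stats.kstest(t_damage, "weibull_min", args=(shape, loc, scale)))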

  7. Statistical analysis of failure time in stress corrosion cracking of fuel tube in light water reactor

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi

    1991-01-01

    This report shows how the stress corrosion cracking life of fuel cladding tubes is evaluated by applying statistical techniques to data obtained by several testing methods. The statistical distribution of the limiting values of constant load stress corrosion cracking life, the statistical analysis based on a probabilistic interpretation of constant load stress corrosion cracking life, and the statistical analysis of stress corrosion cracking life obtained by the slow strain rate test (SSRT) method are described. (K.I.)

  8. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results have a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)

  9. Rapid differentiation of Chinese hop varieties (Humulus lupulus) using volatile fingerprinting by HS-SPME-GC-MS combined with multivariate statistical analysis.

    Science.gov (United States)

    Liu, Zechang; Wang, Liping; Liu, Yumei

    2018-01-18

    Hops impart flavor to beer, with the volatile components characterizing the various hop varieties and qualities. Fingerprinting, especially flavor fingerprinting, is often used to identify 'flavor products' because inconsistencies in the description of flavor may lead to an incorrect definition of beer quality. Compared to flavor fingerprinting, volatile fingerprinting is simpler and easier. We performed volatile fingerprinting using headspace solid-phase microextraction gas chromatography-mass spectrometry (HS-SPME-GC-MS) combined with similarity analysis and principal component analysis (PCA) for evaluating and distinguishing between three major Chinese hops. Eighty-four volatiles were identified, which were classified into seven categories. Volatile fingerprinting based on similarity analysis did not yield any obvious result. By contrast, hop varieties and qualities were identified using volatile fingerprinting based on PCA. The potential variables explained the variance in the three hop varieties. In addition, the dendrogram and principal component score plot described the differences and classifications of the hops. Volatile fingerprinting plus multivariate statistical analysis can rapidly differentiate between the different varieties and qualities of the three major Chinese hops. Furthermore, this method can be used as a reference in other fields. © 2018 Society of Chemical Industry.

  10. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    Science.gov (United States)

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. This unit describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
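
    The library-subsetting idea is easy to demonstrate with any relational engine. The Python sketch below uses the standard sqlite3 module and an invented three-row protein table in place of seqdb_demo; the query carves out a focused search library (one taxon, one length band) of the kind that sharpens similarity-search statistics.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("""CREATE TABLE protein (
                           acc TEXT PRIMARY KEY, taxon TEXT, length INTEGER, seq TEXT)""")
        con.executemany("INSERT INTO protein VALUES (?, ?, ?, ?)", [
            ("P00001", "9606", 147, "MKT..."),   # hypothetical records
            ("P00002", "10090", 152, "MKA..."),
            ("P00003", "9606", 310, "MGS..."),
        ])

        # Subset library: human proteins (taxon 9606) in a given length range.
        rows = con.execute("""SELECT acc, seq FROM protein
                              WHERE taxon = '9606' AND length BETWEEN 100 AND 200""")
        for acc, seq in rows:
            print(f">{acc}\n{seq}")  # FASTA-style output for the search program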

  11. Characterizing multi-scale self-similar behavior and non-statistical properties of fluctuations in financial time series

    Science.gov (United States)

    Ghosh, Sayantan; Manimaran, P.; Panigrahi, Prasanta K.

    2011-11-01

    We make use of the wavelet transform to study the multi-scale, self-similar behavior, and deviations thereof, in the stock prices of large companies belonging to different economic sectors. The stock market returns exhibit multi-fractal characteristics, with some of the companies showing deviations at small and large scales. The fact that wavelets belonging to the Daubechies (Db) basis enable one to isolate local polynomial trends of different degrees plays the key role in isolating fluctuations at different scales. One of the primary motivations of this work is to study the emergence of the k^-3 behavior [X. Gabaix, P. Gopikrishnan, V. Plerou, H. Stanley, A theory of power law distributions in financial market fluctuations, Nature 423 (2003) 267-270] of the fluctuations starting with high frequency fluctuations. We make use of the Db4 and Db6 basis sets to isolate local linear and quadratic trends, respectively, at different scales in order to study the statistical characteristics of these financial time series. The fluctuations reveal fat-tailed non-Gaussian behavior and unstable periodic modulations at finer scales, from which the characteristic k^-3 power law behavior emerges at sufficiently large scales. We further identify stable periodic behavior through the continuous Morlet wavelet.
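
    The scale separation described here can be sketched with PyWavelets. Db4 filters out local linear trends (Db6 would remove quadratic ones), so the detail coefficients carry the fluctuations at each scale; the heavy-tailed synthetic series below stands in for real returns, and positive excess kurtosis at fine scales mirrors the fat-tailed behavior the paper reports.

        import numpy as np
        import pywt  # PyWavelets
        from scipy.stats import kurtosis

        rng = np.random.default_rng(6)
        returns = rng.standard_t(df=3, size=4096)  # heavy-tailed stand-in for returns

        # Multi-level Db4 decomposition: coeffs = [cA6, cD6, ..., cD1].
        coeffs = pywt.wavedec(returns, "db4", level=6)
        for j, d in zip(range(6, 0, -1), coeffs[1:]):
            print(f"detail cD{j}: sd={d.std():.3f}  excess kurtosis={kurtosis(d):.2f}")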

  12. Low-wave-number statistics of randomly advected passive scalars

    International Nuclear Information System (INIS)

    Kerstein, A.R.; McMurtry, P.A.

    1994-01-01

    A heuristic analysis of the decay of a passive scalar field subject to statistically steady random advection predicts two low-wave-number spectral scaling regimes analogous to the similarity states previously identified by Chasnov [Phys. Fluids 6, 1036 (1994)]. Consequences of their predicted coexistence in a single flow are examined. The analysis is limited to the idealized case of narrow-band advection. To complement the analysis, and to extend the predictions to physically more realistic advection processes, advection-diffusion is simulated using a one-dimensional stochastic model. An experimental test of the predictions is proposed

  13. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    EI-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of several statistical distributions used in reliability in order to reach the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate

  14. Statistical mechanical analysis of LMFBR fuel cladding tubes

    International Nuclear Information System (INIS)

    Poncelet, J.-P.; Pay, A.

    1977-01-01

    The most important design requirement on fuel pin cladding for LMFBRs is its mechanical integrity. Disruptive factors include internal pressure from mixed oxide fuel fission gas release, thermal stresses and high temperature creep, neutron-induced differential void-swelling as a source of stress in the cladding, irradiation creep of the stainless steel material, and corrosion by fission products. Under irradiation these load-restraining mechanisms are accentuated by stainless steel embrittlement and strength alterations. To account for the numerous uncertainties involved in the analysis by theoretical models and computer codes, statistical tools, i.e. Monte Carlo simulation methods, are unavoidably required. Thanks to these techniques, uncertainties in nominal characteristics, material properties and environmental conditions can be linked up in a correct way and used for a more accurate conceptual design. First, a thermal creep damage index is set up through a sufficiently sophisticated physical analysis of the cladding, including arbitrary time dependence of power and neutron flux as well as the effects of sodium temperature, burnup and steel mechanical behavior. Although this strain limit approach implies a more general but time-consuming model, in return the net output is improved and e.g. clad temperature, stress and strain maxima may be easily assessed. A full spectrum of variables is statistically treated to account for their probability distributions. The creep damage probability may be obtained and can contribute to a quantitative fuel failure probability estimation

  15. A robust statistical method for association-based eQTL analysis.

    Directory of Open Access Journals (Sweden)

    Ning Jiang

    Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association studies (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for this influence, either by predicting the structure parameters or by correcting the inflation in the test statistic due to stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS arising from population structure by incorporating a control marker into the test for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for the varying effect of population stratification on different regions of the genome under study. The utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers improved statistical power for detecting genuine genetic association in subpopulations and effective control of spurious associations stemming from population structure, when compared with two other popularly implemented methods in the GWAS literature.

  16. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  17. Activation analysis. A basis for chemical similarity and classification

    Energy Technology Data Exchange (ETDEWEB)

    Beeck, J OP de [Ghent Rijksuniversiteit (Belgium). Instituut voor Kernwetenschappen

    1977-01-01

    It is shown that activation analysis is especially suited to serve as a basis for determining the chemical similarity between samples defined by their trace-element concentration patterns. The general problem of classification and identification is discussed. The nature of possible classification structures and their appropriate clustering strategies is considered. A practical computer method is suggested and its application as well as the graphical representation of classification results are given. The possibility for classification using information theory is mentioned. Classification of chemical elements is discussed and practically realized after Hadamard transformation of the concentration variation patterns in a series of samples.
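
    A minimal sketch of hierarchical clustering on trace-element concentration patterns, using synthetic data and Ward linkage (the article's exact clustering strategy may differ):

    ```python
    # Hypothetical sketch: hierarchical clustering of samples by their
    # trace-element concentration patterns.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(3)
    # 12 samples x 8 trace elements; two assumed groups with distinct patterns.
    group_a = rng.lognormal(0.0, 0.2, (6, 8))
    group_b = rng.lognormal(1.0, 0.2, (6, 8))
    concentrations = np.vstack([group_a, group_b])

    # Standardize each element, then cluster with Ward's method.
    z = (concentrations - concentrations.mean(0)) / concentrations.std(0)
    tree = linkage(z, method="ward")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print("cluster labels:", labels)
    ```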

  18. Constitution of an incident database suited to statistical analysis and examples

    International Nuclear Information System (INIS)

    Verpeaux, J.L.

    1990-01-01

    The Nuclear Protection and Safety Institute (IPSN) has set up and is developing an incident database, which is used for the management and analysis of incidents encountered in French PWR plants. IPSN has already carried out several statistical analyses of incidents and safety-significant events, and is improving its database on the basis of the experience gained from these various studies. A description of the analysis method and of the developed database is presented.

  19. A new statistic for the analysis of circular data in gamma-ray astronomy

    Science.gov (United States)

    Protheroe, R. J.

    1985-01-01

    A new statistic is proposed for the analysis of circular data. The statistic is designed specifically for situations where a test of uniformity is required which is powerful against alternatives in which a small fraction of the observations is grouped in a small range of directions, or phases.
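
    The record does not reproduce the proposed statistic itself; as a point of reference, the classical Rayleigh test of circular uniformity (which is known to be weak against exactly the narrow-peak alternatives the paper targets) can be sketched as follows, with synthetic phases:

    ```python
    # Baseline sketch: Rayleigh test of uniformity for circular (phase) data.
    import numpy as np

    def rayleigh_test(phases):
        """phases in radians; returns (mean resultant length, approx p-value)."""
        n = len(phases)
        c, s = np.cos(phases).sum(), np.sin(phases).sum()
        r_bar = np.hypot(c, s) / n
        z = n * r_bar**2
        p = np.exp(-z) * (1 + (2*z - z**2) / (4*n))  # standard approximation
        return r_bar, min(max(p, 0.0), 1.0)

    rng = np.random.default_rng(4)
    print(rayleigh_test(rng.uniform(0, 2*np.pi, 200)))  # ~uniform: large p
    ```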

  20. Statistical Compilation of the ICT Sector and Policy Analysis | CRDI ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  2. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypotheses...

  3. Molecular similarity measures.

    Science.gov (United States)

    Maggiora, Gerald M; Shanmugasundaram, Veerabahu

    2011-01-01

    Molecular similarity is a pervasive concept in chemistry. It is essential to many aspects of chemical reasoning and analysis and is perhaps the fundamental assumption underlying medicinal chemistry. Dissimilarity, the complement of similarity, also plays a major role in a growing number of applications of molecular diversity in combinatorial chemistry, high-throughput screening, and related fields. How molecular information is represented, called the representation problem, is important to the type of molecular similarity analysis (MSA) that can be carried out in any given situation. In this work, four types of mathematical structure are used to represent molecular information: sets, graphs, vectors, and functions. Molecular similarity is a pairwise relationship that induces structure into sets of molecules, giving rise to the concept of chemical space. Although all three concepts - molecular similarity, molecular representation, and chemical space - are treated in this chapter, the emphasis is on molecular similarity measures. Similarity measures, also called similarity coefficients or indices, are functions that map pairs of compatible molecular representations that are of the same mathematical form into real numbers usually, but not always, lying on the unit interval. This chapter presents a somewhat pedagogical discussion of many types of molecular similarity measures, their strengths and limitations, and their relationship to one another. An expanded account of the material on chemical spaces presented in the first edition of this book is also provided. It includes a discussion of the topography of activity landscapes and the role that activity cliffs in these landscapes play in structure-activity studies.
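
    As a concrete illustration of a similarity coefficient mapping pairs of representations into the unit interval, here is the widely used Tanimoto (Jaccard) measure on binary fingerprints; the fingerprints below are invented for the example:

    ```python
    # Sketch: Tanimoto similarity between binary molecular fingerprints.
    def tanimoto(fp_a: set, fp_b: set) -> float:
        """Similarity of two fingerprints given as sets of 'on' bit positions."""
        if not fp_a and not fp_b:
            return 1.0
        shared = len(fp_a & fp_b)
        return shared / (len(fp_a) + len(fp_b) - shared)

    # Hypothetical fingerprints (sets of set-bit indices):
    mol1 = {1, 4, 7, 9, 12}
    mol2 = {1, 4, 8, 9, 13}
    print(f"Tanimoto similarity = {tanimoto(mol1, mol2):.3f}")  # 0.429
    ```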

  4. Dataset on statistical analysis of editorial board composition of Hindawi journals indexed in Emerging sources citation index

    Directory of Open Access Journals (Sweden)

    Hilary I. Okagbue

    2018-04-01

    Full Text Available This data article contains the statistical analysis of the total, percentage and distribution of editorial board composition of 111 Hindawi journals indexed in Emerging Sources Citation Index (ESCI across the continents. The reliability of the data was shown using correlation, goodness-of-fit test, analysis of variance and statistical variability tests. Keywords: Hindawi, Bibliometrics, Data analysis, ESCI, Random, Smart campus, Web of science, Ranking analytics, Statistics

  5. Statistical analysis of the determinations of the Sun's Galactocentric distance

    Science.gov (United States)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
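
    A minimal sketch of the kind of metrological averaging discussed, computing a weighted mean with its internal and external uncertainties; the values below are invented placeholders, not the 53 actual measurements:

    ```python
    # Sketch: weighted mean of independent estimates with internal/external errors.
    import numpy as np

    r0 = np.array([8.0, 8.3, 7.9, 8.5, 8.2])     # kpc, placeholder estimates
    sigma = np.array([0.4, 0.3, 0.5, 0.4, 0.3])  # published uncertainties

    w = 1.0 / sigma**2
    mean = np.sum(w * r0) / np.sum(w)
    internal = np.sqrt(1.0 / np.sum(w))          # from quoted errors only
    chi2 = np.sum(w * (r0 - mean) ** 2) / (len(r0) - 1)
    external = internal * np.sqrt(chi2)          # scaled by observed scatter
    print(f"R0 = {mean:.2f} +/- {internal:.2f} (int) / {external:.2f} (ext) kpc")
    ```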

  6. Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco

    Science.gov (United States)

    Bounoua, Z.; Mechaqrane, A.

    2018-05-01

    An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements, and then eliminated the erroneous values, which are generally due to measurement or cosine-effect errors. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.

  7. Statistical and machine learning approaches for network analysis

    CERN Document Server

    Dehmer, Matthias

    2012-01-01

    Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation

  8. Analysis of spectral data with rare events statistics

    International Nuclear Information System (INIS)

    Ilyushchenko, V.I.; Chernov, N.I.

    1990-01-01

    The case of analyzing experimental data in which the results of individual experimental runs cannot be summed due to large systematic errors is considered. A statistical analysis of the hypothesis of persistent peaks in the spectra has been performed by means of the Neyman-Pearson test. The computations demonstrate that the confidence level for the hypothesis of a persistent peak in the spectrum is proportional to the square root of the number of independent experimental runs, K.

  9. Indoor Soiling Method and Outdoor Statistical Risk Analysis of Photovoltaic Power Plants

    Science.gov (United States)

    Rajasekar, Vidyashree

    This is a two-part thesis. Part 1 presents an approach for working towards the development of a standardized artificial soiling method for laminated photovoltaic (PV) cells or mini-modules. The construction of an artificial chamber to maintain controlled environmental conditions and the components/chemicals used in artificial soil formulation are briefly explained. Both poly-Si mini-modules and single-cell mono-Si coupons were soiled, and characterization tests such as I-V, reflectance and quantum efficiency (QE) were carried out on both soiled and cleaned coupons. From the results obtained, poly-Si mini-modules proved to be a good measure of soil uniformity, as any non-uniformity present would not result in a smooth curve during I-V measurements. The challenges faced while executing reflectance and QE characterization tests on poly-Si due to the smaller cell size were eliminated on the mono-Si coupons with large cells, yielding highly repeatable measurements. This study indicates that reflectance measurements between 600-700 nm wavelengths can be used as a direct measure of soil density on the modules. Part 2 determines the most dominant failure modes of field-aged PV modules using experimental data obtained in the field and a statistical analysis, FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 744 poly-Si glass/polymer frameless modules fielded for 18 years in the cold-dry climate of New York were evaluated. A defect chart, degradation rates (at both string and module levels) and a safety map were generated using the field-measured data. A statistical reliability tool, FMECA, which uses the Risk Priority Number (RPN), is used to determine the dominant failure or degradation modes in the strings and modules by ranking and prioritizing the modes. This study of PV power plants considers all the failure and degradation modes from both safety and performance perspectives. The indoor and outdoor soiling studies were jointly

  10. Determination of geographic provenance of cotton fibres using multi-isotope profiles and multivariate statistical analysis

    Science.gov (United States)

    Daeid, N. Nic; Meier-Augenstein, W.; Kemp, H. F.

    2012-04-01

    The analysis of cotton fibres can be particularly challenging within a forensic science context where discrimination of one fibre from another is of importance. Normally cotton fibre analysis examines the morphological structure of the recovered material and compares this with that of a known fibre from a particular source of interest. However, the conventional microscopic and chemical analysis of fibres and any associated dyes is generally unsuccessful because of the similar morphology of the fibres. Analysis of the dyes which may have been applied to the cotton fibre can also be undertaken though this can be difficult and unproductive in terms of discriminating one fibre from another. In the study presented here we have explored the potential for Isotope Ratio Mass Spectrometry (IRMS) to be utilised as an additional tool for cotton fibre analysis in an attempt to reveal further discriminatory information. This work has concentrated on un-dyed cotton fibres of known origin in order to expose the potential of the analytical technique. We report the results of a pilot study aimed at testing the hypothesis that multi-element stable isotope analysis of cotton fibres in conjunction with multivariate statistical analysis of the resulting isotopic abundance data using well established chemometric techniques permits sample provenancing based on the determination of where the cotton was grown and as such will facilitate sample discrimination. To date there is no recorded literature of this type of application of IRMS to cotton samples, which may be of forensic science relevance.

  11. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    Science.gov (United States)

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will be supported by MS Excel 2016 soon, would eliminate some limitations of using statistical formulas within MS Excel.

  12. A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.

    Science.gov (United States)

    Lin, Johnny; Bentler, Peter M

    2012-01-01

    Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra and Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of the Satorra-Bentler statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic, in order to improve its robustness in small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods, and at very small sample sizes offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas, either open or closed book, are used to illustrate the real-world performance of this statistic.

  13. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g

  14. Evaluation of errors for mass-spectrometric analysis with surface-ionization type mass-spectrometer (statistical evaluation of mass-discrimination effect)

    International Nuclear Information System (INIS)

    Wada, Y.

    1981-01-01

    The surface-ionization type mass spectrometer is widely used as an apparatus for quality assurance, accountability and safeguarding of nuclear materials, and for this analysis it has become important to statistically evaluate the analytical error, which consists of a random error and a systematic error. The major component of this systematic error is the mass-discrimination effect. In this paper, various assays for evaluating the sources of variation of the mass-discrimination effect were studied and the data obtained were statistically evaluated. As a result of these analyses, it was shown that the variation of the mass-discrimination effect is not attributable to the acid concentration of the sample, the sample size on the filament, or the voltage supplied to the multiplier, but mainly to the filament temperature during the mass-spectrometric analysis. The mass-discrimination effect values β, usually calculated from the measured data of uranium, plutonium or boron isotopic standard samples, did not depend significantly on the differences in U-235, Pu-239 or B-10 isotopic abundance. Furthermore, in the case of U and Pu, the measurement conditions and the mass range of the isotopes were almost the same, and the values β were not statistically different between U and Pu. On the other hand, the value β for boron was about a third of the value β for U or Pu; however, when compared via the coefficient of the mass-discrimination correction for a difference of mass number, ΔM, these coefficient values were almost the same among U, Pu and B. As for the isotopic analysis error of U, Pu, Nd and B, it was shown that the isotopic abundance of these elements and the isotopic analysis error follow quadratic curves on a log-log scale.

  15. Statistical Analysis of CFD Solutions from the Fourth AIAA Drag Prediction Workshop

    Science.gov (United States)

    Morrison, Joseph H.

    2010-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from the U.S., Europe, Asia, and Russia using a variety of grid systems and turbulence models for the June 2009 4th Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was a new subsonic transport model, the Common Research Model, designed using a modern approach for the wing and included a horizontal tail. The fourth workshop focused on the prediction of both absolute and incremental drag levels for wing-body and wing-body-horizontal tail configurations. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with earlier workshops using the statistical framework.

  16. Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.

    Science.gov (United States)

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured with a digitizer of submillimetre accuracy. The positional accuracy was analyzed both by the conventional method (modular error analysis) and by the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy: the angular error components, the uniformity of the vector distribution and the error isotropy, in addition to the modular error component obtained by linear statistics.
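
    A small sketch of the proposed decomposition, splitting 3D error vectors into a module and two angles and summarizing the angular part with the mean resultant length used in spherical statistics (the error vectors below are synthetic):

    ```python
    # Sketch: modular and angular (spherical) statistics of 3D error vectors.
    import numpy as np

    rng = np.random.default_rng(5)
    err = rng.normal([0.002, 0.001, -0.001], 0.003, (53, 3))  # metres, synthetic

    module = np.linalg.norm(err, axis=1)
    theta = np.arccos(err[:, 2] / module)        # colatitude
    phi = np.arctan2(err[:, 1], err[:, 0])       # longitude

    unit = err / module[:, None]
    r_bar = np.linalg.norm(unit.mean(axis=0))    # 0 = isotropic, 1 = concentrated
    print(f"mean module = {module.mean()*1000:.2f} mm, "
          f"mean colatitude = {np.degrees(theta.mean()):.1f} deg, "
          f"angular concentration R_bar = {r_bar:.3f}")
    ```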

  17. Self-similar distribution of oil spills in European coastal waters

    International Nuclear Information System (INIS)

    Redondo, Jose M; Platonov, Alexei K

    2009-01-01

    Marine pollution has been highlighted thanks to advances in detection techniques as well as increasing coverage of catastrophes (e.g. the oil tankers Amoco Cadiz, Exxon Valdez, Erika, and Prestige) and of smaller oil spills from ships. The new satellite-based sensors SAR and ASAR and new methods of oil spill detection and analysis, coupled with self-similar statistical techniques, allow surveys of environmental pollution over large areas of the ocean. We present a statistical analysis of more than 700 SAR images obtained during 1996-2000, also comparing the detected small pollution events with the historical databases of great marine accidents during 1966-2004 in European coastal waters. We show that the statistical distribution of the number of oil spills as a function of their size corresponds to Zipf's law, and that the common small spills are comparable to the large accidents due to the high frequency of the smaller pollution events. Marine pollution from tankers and ships, detected as oil spills between 0.01 and 100 km², follows the marine transit routes. Multi-fractal methods are used to distinguish between natural slicks and spills, in order to estimate the oil spill index in European coastal waters, and in particular the north-western Mediterranean Sea, which, due to the influence of local winds, shows optimal conditions for oil spill detection.
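
    A minimal sketch of checking a Zipf-type (power-law) size distribution by a log-log rank-size regression, on synthetic spill areas rather than the SAR catalogue:

    ```python
    # Sketch: rank-size (Zipf) check of a spill-area distribution.
    import numpy as np

    rng = np.random.default_rng(6)
    # Synthetic Pareto-distributed spill areas starting at 0.01 km^2.
    areas = 0.01 * (1 + rng.pareto(1.0, 700))

    sizes = np.sort(areas)[::-1]              # descending
    ranks = np.arange(1, len(sizes) + 1)
    slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
    print(f"rank-size slope = {slope:.2f} (Zipf's law corresponds to ~ -1)")
    ```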

  18. CFAssay: statistical analysis of the colony formation assay

    International Nuclear Information System (INIS)

    Braselmann, Herbert; Michna, Agata; Heß, Julia; Unger, Kristian

    2015-01-01

    The colony formation assay is the gold standard to determine cell reproductive death after treatment with ionizing radiation, applied for different cell lines or in combination with other treatment modalities. Associated linear-quadratic cell survival curves can be calculated with different methods. For easy code exchange and methodological standardisation among collaborating laboratories, a software package CFAssay for R (R Core Team, R: A Language and Environment for Statistical Computing, 2014) was established to perform thorough statistical analysis of linear-quadratic cell survival curves after treatment with ionizing radiation and of two-way designs of experiments with chemical treatments only. CFAssay offers maximum likelihood and related methods by default, and the least squares or weighted least squares method can optionally be chosen. A test for comparison of cell survival curves and an ANOVA test for experimental two-way designs are provided. For the two presented examples the estimated parameters do not differ much between maximum likelihood and least squares. However, the dispersion parameter of the quasi-likelihood method is much more sensitive to statistical variation in the data than the multiple R² coefficient of determination from the least squares method. The dispersion parameter for goodness of fit and different plot functions in CFAssay help to evaluate experimental data quality. As open-source software, it facilitates interlaboratory code sharing between users.
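
    CFAssay itself is an R package; as a language-neutral illustration, the linear-quadratic survival model S(D) = exp(-(alpha*D + beta*D^2)) can be fitted by least squares on log-survival as sketched below, with synthetic data:

    ```python
    # Sketch: least-squares fit of the linear-quadratic cell survival model.
    import numpy as np

    dose = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0])                  # Gy
    surviving_fraction = np.array([1.0, 0.76, 0.54, 0.23, 0.076, 0.020])  # synthetic

    # log S = -(alpha*D + beta*D^2): linear in (D, D^2) with zero intercept.
    X = np.column_stack([dose, dose**2])
    y = -np.log(surviving_fraction)
    (alpha, beta), *_ = np.linalg.lstsq(X, y, rcond=None)
    print(f"alpha = {alpha:.3f} 1/Gy, beta = {beta:.4f} 1/Gy^2")
    ```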

  19. Procedure for statistical analysis of one-parameter discrepant experimental data

    International Nuclear Information System (INIS)

    Badikov, Sergey A.; Chechev, Valery P.

    2012-01-01

    A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty and to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work substantially exceed those of the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data is presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to processing half-life discrepant experimental data. ► The results of the calculations are compared to the ENSDF and DDEP evaluations.
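
    A compact sketch of the classical Mandel-Paule iteration on which such procedures build: it estimates the between-experiment (unrecognized-error) variance y so that the weighted residual sum matches its expectation N - 1 (the measurements below are placeholders):

    ```python
    # Sketch: Mandel-Paule estimate of the unrecognized (between-experiment)
    # variance for one-parameter discrepant data; placeholder measurements.
    import numpy as np

    x = np.array([10.0, 10.4, 9.7, 10.9, 10.1])   # measured values
    u = np.array([0.20, 0.15, 0.25, 0.20, 0.30])  # reported uncertainties

    def excess(y):
        """Weighted residual sum minus its expectation N - 1."""
        w = 1.0 / (u**2 + y)
        xbar = np.sum(w * x) / np.sum(w)
        return np.sum(w * (x - xbar) ** 2) - (len(x) - 1)

    y = 0.0                          # consistent data: no extra variance needed
    if excess(0.0) > 0:              # discrepant data: solve excess(y) = 0
        lo, hi = 0.0, x.var()
        while excess(hi) > 0:
            hi *= 2.0
        for _ in range(60):          # bisection; excess() decreases in y
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
        y = 0.5 * (lo + hi)

    w = 1.0 / (u**2 + y)
    xbar = np.sum(w * x) / np.sum(w)
    print(f"weighted mean = {xbar:.3f}, between-experiment variance = {y:.4f}")
    ```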

  20. Noise removing in encrypted color images by statistical analysis

    Science.gov (United States)

    Islam, N.; Puech, W.

    2012-03-01

    Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise. A single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted with the AES algorithm in CBC mode. In order to remove the noise, a method is proposed that is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal from visual data in the encrypted domain.

  1. Statistical Analysis of Radio Propagation Channel in Ruins Environment

    Directory of Open Access Journals (Sweden)

    Jiao He

    2015-01-01

    Full Text Available Cellphone-based localization systems for search and rescue in complex high-density ruins have attracted great interest in recent years, and radio channel characteristics are critical for the design and development of such systems. This paper presents a spatial smoothing estimation via rotational invariance technique (SS-ESPRIT) for radio channel characterization of high-density ruins. The radio propagation at three typical mobile communication bands (0.9, 1.8, and 2 GHz) is investigated in two different scenarios. Channel parameters, such as arrival time, delays, and complex amplitudes, are statistically analyzed. Furthermore, a channel simulator is built based on these statistics. Comparison of the average excess delay and delay spread shows a good agreement between the measurements and the channel modeling results.

  2. SeDA: A software package for the statistical analysis of the instrument drift

    International Nuclear Information System (INIS)

    Lee, H. J.; Jang, S. C.; Lim, T. J.

    2006-01-01

    The setpoints for safety-related equipment are affected by many sources of uncertainty. ANSI/ISA-S67.04.01-2000 [1] and ISA-RP67.04.02-2000 [2] suggested statistical approaches for ensuring that safety-related instrument setpoints are established and maintained within the technical specification limits [3]. However, Jang et al. [4] indicated that the preceding methodologies for setpoint drift analysis might be insufficient to manage a setpoint drift on an instrumentation device and proposed new statistical analysis procedures for the management of setpoint drift, based on plant-specific as-found/as-left data. Although IHPA (Instrument History Performance Analysis) is a widely known commercial software package to analyze instrument setpoint drift, several steps of the new procedure cannot be performed with it because it is based on the statistical approaches suggested in ANSI/ISA-S67.04.01-2000 [1] and ISA-RP67.04.02-2000 [2]. In this paper we present a software package (SeDA: Setpoint Drift Analysis) that implements the new methodologies and is easy to use, as it is accompanied by powerful graphical tools. (authors)

  3. Statistical analysis in MSW collection performance assessment.

    Science.gov (United States)

    Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel

    2014-09-01

    The increase of Municipal Solid Waste (MSW) generated over the last years forces waste managers pursuing more effective collection schemes, technically viable, environmentally effective and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role for improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core-set of MSW collection performance indicators (effective collection distance, effective collection time and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in mixed collection system of Oporto Municipality, Portugal, during one year, a week per month. This analysis provides collection circuits' operational assessment and supports effective short-term municipality collection strategies at the level of, e.g., collection frequency and timetables, and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Implementation of statistical analysis methods for medical physics data

    International Nuclear Information System (INIS)

    Teixeira, Marilia S.; Pinto, Nivia G.P.; Barroso, Regina C.; Oliveira, Luis F.

    2009-01-01

    The objective of biomedical research with different radiation natures is to contribute to the understanding of the basic physics and biochemistry of biological systems, disease diagnostics and the development of therapeutic techniques. The main benefits are the cure of tumors through therapy, the early detection of diseases through diagnostics, prophylactic irradiation for blood transfusion, etc. Therefore, for a better understanding of the biological interactions occurring after exposure to radiation, it is necessary to optimize therapeutic procedures and strategies for the reduction of radioinduced effects. The applied physics group of the Physics Institute of UERJ has been working on the characterization of biological samples (human tissues, teeth, saliva, soil, plants, sediments, air, water, organic matrices, ceramics, fossil material, among others) using X-ray diffraction and X-ray fluorescence. The application of these techniques for the measurement, analysis and interpretation of the characteristics of biological tissues is attracting considerable interest in medical and environmental physics. All quantitative data analysis must start with the calculation of descriptive statistics (means and standard deviations) in order to obtain a preliminary notion of what the analysis will reveal. It is well known that the high values of standard deviation found in experimental measurements of biological samples can be attributed to biological factors, due to the specific characteristics of each individual (age, gender, environment, alimentary habits, etc.). The main objective of this work is the development of a program implementing specific statistical methods for the optimization of experimental data analysis. Since the specialized programs for this analysis are proprietary, another objective of this work is the implementation of a code that is free and can be shared with other research groups. As the program developed since the

  5. Statistical analysis of AFM topographic images of self-assembled quantum dots

    Energy Technology Data Exchange (ETDEWEB)

    Sevriuk, V. A.; Brunkov, P. N., E-mail: brunkov@mail.ioffe.ru; Shalnev, I. V.; Gutkin, A. A.; Klimko, G. V.; Gronin, S. V.; Sorokin, S. V.; Konnikov, S. G. [Russian Academy of Sciences, Ioffe Physical-Technical Institute (Russian Federation)

    2013-07-15

    To obtain statistical data on quantum-dot sizes, AFM topographic images of the substrate on which the dots under study are grown are analyzed. Due to the non-ideality of the substrate, which contains height differences on the order of the size of the nanoparticles at distances of 1-10 µm, and the insufficient resolution of closely spaced dots due to the finite curvature radius of the AFM probe, automating the statistical analysis of large dot arrays requires special techniques for processing the topographic images, so as to avoid losing a fraction of the particles during conventional processing. As such a technique, convolution of the initial matrix of the AFM image with a specially selected matrix is used. This makes it possible to determine the position of each nanoparticle and, using the initial matrix, to measure their geometrical parameters. The results of statistical analysis by this method of self-assembled InAs quantum dots formed on the surface of an AlGaAs epitaxial layer are presented. It is shown that their concentration, average size, and the half-width of the height distribution depend strongly on the In flux and the total amount of deposited InAs, which are varied within narrow limits.
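
    A minimal sketch of the convolution idea: correlate the height map with a dot-shaped kernel and take local maxima as candidate particle positions. The image is synthetic and the Gaussian kernel is an assumption; the paper's specially selected matrix may differ:

    ```python
    # Hypothetical sketch: locate dot-like features in an AFM height map by
    # convolution with a Gaussian kernel, then pick thresholded local maxima.
    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    rng = np.random.default_rng(7)
    image = rng.normal(0.0, 0.1, (256, 256))      # synthetic noisy background
    for r, c in rng.integers(10, 246, (40, 2)):   # add 40 dot-like spikes
        image[r, c] += 30.0
    image = gaussian_filter(image, 1.5)           # smooth spikes into dots

    response = gaussian_filter(image, 1.5)        # matched-filter-like response
    peaks = (response == maximum_filter(response, size=7)) & (response > 0.5)
    rows, cols = np.nonzero(peaks)
    print(f"detected {len(rows)} candidate dots")
    ```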

  6. Similarity between neonatal profile and socioeconomic index: a spatial approach

    Directory of Open Access Journals (Sweden)

    d'Orsi Eleonora

    2005-01-01

    Full Text Available This study aims to compare neonatal characteristics and socioeconomic conditions in Rio de Janeiro city neighborhoods in order to identify priority areas for intervention. The study design was ecological. Two databases were used: the Brazilian Population Census and the Live Birth Information System, aggregated by neighborhoods. Spatial analysis, multivariate cluster classification, and Moran's I statistics for detection of spatial clustering were used. A similarity index was created to compare socioeconomic clusters with the neonatal profile in each neighborhood. The proportions of Apgar score above 8 and cesarean sections showed positive spatial correlation and high similarity with the socioeconomic index. The proportion of low birth weight infants showed a random spatial distribution, indicating that at this scale of analysis, birth weight is not sufficiently sensitive to discriminate subtler differences among population groups. The observed relationship between the neighborhoods' neonatal profile (particularly Apgar score and mode of delivery and socioeconomic conditions shows evidence of a change in infant health profile, where the possibility for intervention shifts to medical services and the Apgar score assumes growing significance as a risk indicator.

  7. Statistical Analysis of Environmental Tritium around Wolsong Site

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ju Youl [FNC Technology Co., Yongin (Korea, Republic of)

    2010-04-15

    To find the relationship among airborne tritium, tritium in rainwater, TFWT (Tissue Free Water Tritium) and TBT (Tissue Bound Tritium), a statistical analysis is conducted based on tritium data measured at KHNP employees' houses around the Wolsong nuclear power plants during the 10 years from 1999 to 2008. The results show that tritium in these media exhibits a strong seasonal and annual periodicity. The tritium concentration in rainwater is observed to be highly correlated with TFWT and to be transmitted directly to TFWT without delay. The response of the environmental radioactivity of tritium around the Wolsong site is analyzed using time-series techniques and non-parametric trend analysis. Tritium in the atmosphere and rainwater is strongly autocorrelated, with seasonal and annual periodicity. The TFWT concentration in pine needles proves to be more sensitive to rainfall than to other weather variables. Non-parametric trend analysis of the TFWT concentration in pine needles shows an increasing trend at the 95% confidence level. This study demonstrates the usefulness of time-series and trend analysis for interpreting the relationship of environmental radioactivity with various environmental media.
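
    A compact sketch of the non-parametric Mann-Kendall trend test used in such analyses, applied to a synthetic monthly series (normal approximation, tie correction omitted):

    ```python
    # Sketch: Mann-Kendall trend test (normal approximation, no tie correction).
    import numpy as np
    from scipy import stats

    def mann_kendall(series):
        x = np.asarray(series)
        n = len(x)
        s = sum(np.sign(x[j] - x[i])
                for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        p = 2 * (1 - stats.norm.cdf(abs(z)))     # two-sided p-value
        return z, p

    rng = np.random.default_rng(8)
    tfwt = 5.0 + 0.05 * np.arange(120) + rng.normal(0, 1.0, 120)  # synthetic
    print("z, p =", mann_kendall(tfwt))
    ```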

  8. Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects

    Science.gov (United States)

    Ekness, Jamie Lynn

    The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, which is due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) to determine differences in storm duration, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in the average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double-ratio statistic, comparing the radar-derived rain amount of the last 40 minutes of a case (seeded/non-seeded) to that of the first 20 minutes (seeded/non-seeded), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST data set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set show that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
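
    A sketch of the double-ratio and Mann-Whitney evaluation described above; the per-case rain amounts below are invented placeholders, not POLCAST data:

    ```python
    # Sketch: double-ratio seeding statistic with a Mann-Whitney test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    # Placeholder per-case radar rain amounts (mm): first 20 and last 40 minutes.
    seed_early, seed_late = rng.gamma(2, 0.05, 17), rng.gamma(2, 0.09, 17)
    ctrl_early, ctrl_late = rng.gamma(2, 0.05, 16), rng.gamma(2, 0.05, 16)

    double_ratio = (seed_late.mean() / ctrl_late.mean()) / \
                   (seed_early.mean() / ctrl_early.mean())

    # Per-case late/early growth ratios, compared seeded vs non-seeded.
    u, p = stats.mannwhitneyu(seed_late / seed_early, ctrl_late / ctrl_early,
                              alternative="greater")
    print(f"double ratio = {double_ratio:.2f}, Mann-Whitney p = {p:.3f}")
    ```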

  9. Statistical Analysis of the Impacts of Regional Transportation on the Air Quality in Beijing

    Science.gov (United States)

    Huang, Zhongwen; Zhang, Huiling; Tong, Lei; Xiao, Hang

    2016-04-01

    From October to December 2015, the Beijing-Tianjin-Hebei (BTH) region experienced several severe haze events. In order to assess the effects of regional transportation on the air quality in Beijing, the air monitoring data (PM2.5, SO2, NO2 and CO) from that period published by the Chinese National Environmental Monitoring Center (CNEMC) were collected and analyzed with various statistical models. The cities within the BTH area were clustered into three groups according to the geographical conditions, with the air pollutant concentrations of cities within a group sharing similar variation trends. The Granger causality test results indicate that significant causal relationships exist between the air pollutant data of Beijing and its surrounding cities (Baoding, Chengde, Tianjin and Zhangjiakou) for the reference period. Linear regression models were then constructed to capture the interdependency among the multiple time series. The observed air pollutant concentrations in Beijing were well consistent with the model-fitted results. More importantly, further analysis suggests that the air pollutants in Beijing were strongly affected by regional transportation, as local sources contributed only 17.88%, 27.12%, 14.63% and 31.36% of the PM2.5, SO2, NO2 and CO concentrations, respectively. The major foreign source for Beijing was the southwest (Baoding) direction, accounting for more than 42% of all these air pollutants. Thus, by combining various statistical models, it may be possible not only to quickly predict the air quality of cities on a regional scale, but also to evaluate the local and regional source contributions for a particular city. Key words: regional transportation, air pollution, Granger causality test, statistical models
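
    A minimal sketch of a Granger causality check between two city-level pollutant series, using statsmodels on synthetic data (the study's actual model specification is more involved):

    ```python
    # Sketch: does city B's PM2.5 series Granger-cause city A's?
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(10)
    n = 500
    pm_b = np.zeros(n)
    pm_a = np.zeros(n)
    for t in range(1, n):                 # synthetic: A lags B by one step
        pm_b[t] = 0.7 * pm_b[t - 1] + rng.normal()
        pm_a[t] = 0.5 * pm_a[t - 1] + 0.4 * pm_b[t - 1] + rng.normal()

    # Column order matters: test whether the 2nd column Granger-causes the 1st.
    data = np.column_stack([pm_a, pm_b])
    results = grangercausalitytests(data, maxlag=3)  # prints F tests per lag
    ```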

  10. Statistical Analysis of Designed Experiments Theory and Applications

    CERN Document Server

    Tamhane, Ajit C

    2012-01-01

    An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the

  11. Multivariate statistical analysis of major and trace element data for ...

    African Journals Online (AJOL)

    Multivariate statistical analysis of major and trace element data for niobium exploration in the peralkaline granites of the anorogenic ring-complex province of Nigeria. PO Ogunleye, EC Ike, I Garba. No abstract available. Journal of Mining and Geology Vol. 40(2) 2004: 107-117.

  12. Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.

    Science.gov (United States)

    Gao, Yi; Bouix, Sylvain

    2016-05-01

    Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated and, as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    Science.gov (United States)

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics for environmental and biological sciences students through computer programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  14. Statistical characterisation of COSMO Sky-Med X-SAR retrieved precipitation fields by scale-invariance analysis

    Science.gov (United States)

    Deidda, Roberto; Mascaro, Giuseppe; Hellies, Matteo; Baldini, Luca; Roberto, Nicoletta

    2013-04-01

    COSMO Sky-Med (CSK) is an important programme of the Italian Space Agency aimed at supporting environmental monitoring and the management of exogenous, endogenous and anthropogenic risks through X-band Synthetic Aperture Radar (X-SAR) instruments on board a constellation of four satellites. Most typical SAR applications focus on land or ocean observation. However, X-band SAR can detect precipitation, which produces a specific signature caused by the combination of the attenuation of surface returns induced by precipitation and the enhancement of backscattering by the hydrometeors in the SAR resolution volume. Within the CSK programme, we conducted an intercomparison between the statistical properties of precipitation fields derived from the CSK SARs and those derived from the CNR Polar 55C (C-band) ground-based weather radar located in Rome (Italy). This contribution presents the main results of this research, which was aimed at a robust characterisation of rainfall statistical properties across different scales by means of scale-invariance analysis and multifractal theory. The analysis was performed on a dataset of more than two years of precipitation observations collected by the CNR Polar 55C radar and on rainfall fields derived from the images collected by the CSK satellites during intense rainfall events. Scale-invariance laws and multifractal properties were detected in the most intense rainfall events observed by the CNR Polar 55C radar for spatial scales from 4 km to 64 km. The analysis of the X-SAR retrieved rainfall fields, although based on few images, led to similar results and confirmed the existence of scale invariance and multifractal properties at scales larger than 4 km. These outcomes encourage investigating SAR methodologies for the future development of meteo-hydrological forecasting models based on multifractal theory.

  15. Statistical analysis of angular correlation measurements

    International Nuclear Information System (INIS)

    Oliveira, R.A.A.M. de.

    1986-01-01

    Obtaining the multipole mixing ratio, δ, of γ transitions in angular correlation measurements is a statistical problem characterized by the small number of angles at which the observation is made and by the limited counting statistics. The nonexistence of a sufficient statistic for the estimator of δ is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed on experimental results obtained in γ-γ directional correlation measurements. (Author) [pt]

  16. Statistical analysis of disruptions in JET

    International Nuclear Information System (INIS)

    De Vries, P.C.; Johnson, M.F.; Segui, I.

    2009-01-01

    The disruption rate (the percentage of discharges that disrupt) in JET was found to drop steadily over the years. Recent campaigns (2005-2007) show a yearly averaged disruption rate of only 6%, while from 1991 to 1995 it was often higher than 20%. Besides the disruption rate, the so-called disruptivity, or the likelihood of a disruption depending on the plasma parameters, has been determined. The disruptivity of plasmas was found to be significantly higher close to the three main operational boundaries for tokamaks: the low-q, high-density and β-limits. The frequency at which JET operated close to the density limit increased sixfold over the last decade; however, only a small reduction in disruptivity was found. Similarly, the disruptivity close to the low-q and β-limits was found to be unchanged. The most significant reduction in disruptivity was found far from the operational boundaries, leading to the conclusion that the improved disruption rate is due to a better technical capability of operating JET, rather than safer operation close to the physics limits. The statistics showed that a simple protection system was able to mitigate the forces of a large fraction of disruptions, although ameliorating the heat flux has so far proved more difficult.

  17. Oscillatory flow at the end of parallel-plate stacks: phenomenological and similarity analysis

    International Nuclear Information System (INIS)

    Mao Xiaoan; Jaworski, Artur J

    2010-01-01

    This paper addresses the physics of the oscillatory flow in the vicinity of a series of parallel plates forming geometrically identical channels. This type of flow is particularly relevant to thermoacoustic engines and refrigerators, where a reciprocating flow is responsible for the desirable energy transfer, but it is also of interest to general fluid mechanics of oscillatory flows past bluff bodies. In this paper, the physics of an acoustically induced flow past a series of plates in an isothermal condition is studied in detail using the data provided by PIV imaging. Particular attention is given to the analysis of the wake flow during the ejection part of the flow cycle, where either closed recirculating vortices or alternating vortex shedding can be observed. This is followed by a similarity analysis of the governing Navier-Stokes equations in order to derive the similarity criteria governing the wake flow behaviour. To this end, similarity numbers including two types of Reynolds number, the Keulegan-Carpenter number and a non-dimensional stack configuration parameter, d/h, are considered and their influence on the phenomena are discussed.

  18. Statistical potential-based amino acid similarity matrices for aligning distantly related protein sequences.

    Science.gov (United States)

    Tan, Yen Hock; Huang, He; Kihara, Daisuke

    2006-08-15

    Aligning distantly related protein sequences is a long-standing problem in bioinformatics, and a key to successful protein structure prediction. Its importance has been increasing recently in the context of structural genomics projects, because more and more experimentally solved structures are available as templates for protein structure modeling. Toward this end, recent structure prediction methods employ profile-profile alignments, and various ways of aligning two profiles have been developed. More fundamentally, a better amino acid similarity matrix can improve a profile itself, thereby resulting in more accurate profile-profile alignments. Here we have developed novel amino acid similarity matrices from knowledge-based amino acid contact potentials. Contact potentials are used because the contact propensity to the other amino acids is expected to be one of the most conserved features of each position of a protein structure. The derived amino acid similarity matrices are tested on benchmark alignments at three different levels, namely the family, the superfamily, and the fold level. Compared to BLOSUM45 and the other existing matrices, the contact potential-based matrices perform comparably in the family-level alignments, but clearly outperform them in the fold-level alignments. The contact potential-based matrices perform even better when suboptimal alignments are considered. Comparing the matrices with each other revealed that the contact potential-based matrices are very different from BLOSUM45 and the other matrices, indicating that they are located in a different basin of the amino acid similarity matrix space.

  19. Statistical mechanical analysis of the linear vector channel in digital communication

    International Nuclear Information System (INIS)

    Takeda, Koujin; Hatabu, Atsushi; Kabashima, Yoshiyuki

    2007-01-01

    A statistical mechanical framework to analyze linear vector channel models in digital wireless communication is proposed for a large system. The framework is a generalization of that proposed for code-division multiple-access systems in Takeda et al (2006 Europhys. Lett. 76 1193) and enables the analysis of the system in which the elements of the channel transfer matrix are statistically correlated with each other. The significance of the proposed scheme is demonstrated by assessing the performance of an existing model of multi-input multi-output communication systems

  20. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
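
    A compact sketch of the Monte Carlo plus bootstrap idea for a simple mediation model X -> M -> Y; the coefficients, sample size and replication counts below are assumptions, and the bmem package provides the full machinery:

    ```python
    # Sketch: Monte Carlo power estimate for the indirect effect a*b in a
    # simple mediation model, with a percentile bootstrap per replication.
    import numpy as np

    rng = np.random.default_rng(11)
    a, b, n, reps, boots = 0.3, 0.3, 100, 200, 300  # assumed design values

    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        idx = rng.integers(0, n, (boots, n))        # bootstrap resamples
        ab = []
        for i in idx:
            xi, mi, yi = x[i], m[i], y[i]
            a_hat = np.cov(xi, mi, ddof=0)[0, 1] / xi.var()
            # b from regressing y on m, controlling for x:
            X = np.column_stack([np.ones(n), mi, xi])
            b_hat = np.linalg.lstsq(X, yi, rcond=None)[0][1]
            ab.append(a_hat * b_hat)
        lo, hi = np.percentile(ab, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)                # CI excludes zero
    print(f"estimated power = {hits / reps:.2f}")
    ```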

  1. Similarity criterion analysis of dose-response curves in biological assay and radioimmunoassay of hormones

    International Nuclear Information System (INIS)

    Cristakou, H.D.

    1983-01-01

    The difficulties involved in the control of biological and radioimmunological assay systems and in the maintenance of standards, together with the usual heterogeneity of assayed samples, require some evidence of similarity between the dose-response curves obtained with the standard and with the sample. Nowadays the parallelism test is used to provide such evidence. However, indications of non-normal error distributions, such as the presence of outliers, render the parallelism test both conceptually implausible and statistically inefficient. We therefore suggest the non-parametric 'frequencial' test as a sounder option. (author)

  2. A statistical analysis of the impact of advertising signs on road safety.

    Science.gov (United States)

    Yannis, George; Papadimitriou, Eleonora; Papantoniou, Panagiotis; Voulgari, Chrisoula

    2013-01-01

    This research aims to investigate the impact of advertising signs on road safety. An exhaustive review of the international literature on the effect of advertising signs on driver behaviour and safety was carried out. Moreover, a before-and-after statistical analysis with control groups was applied to several road sites with different characteristics in the Athens metropolitan area, in Greece, in order to investigate the correlation between the placement or removal of advertising signs and the related occurrence of road accidents. Road accident data for the 'before' and 'after' periods on the test sites and the control sites were extracted from the database of the Hellenic Statistical Authority, and the selected 'before' and 'after' periods vary from 2.5 to 6 years. The statistical analysis shows no correlation between road accidents and advertising signs at any of the nine sites examined, as the estimated safety effects are non-significant at the 95% confidence level. This can be explained by the fact that, at the examined road sites, drivers are already overloaded with information (traffic signs, direction signs, shop labels, pedestrians and other vehicles, etc.), so the additional information load from advertising signs may not further distract them.

  3. Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)

    Science.gov (United States)

    Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee

    2010-12-01

    Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review

  4. Meta- and statistical analysis of single-case intervention research data: quantitative gifts and a wish list.

    Science.gov (United States)

    Kratochwill, Thomas R; Levin, Joel R

    2014-04-01

    In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  5. Statistical analysis of the spatial distribution of galaxies and clusters

    International Nuclear Information System (INIS)

    Cappi, Alberto

    1993-01-01

    This thesis deals with the analysis of the distribution of galaxies and clusters, describing some observational problems and statistical results. The first chapter gives a theoretical introduction, describing the framework of the formation of structures and tracing the history of the Universe from the Planck time, t_P = 10^-43 s, when the temperature corresponded to 10^19 GeV, to the present epoch. The most common statistical tools and models of the galaxy distribution, with their advantages and limitations, are described in chapter two. A study of the main observed properties of galaxy clustering, together with a detailed statistical analysis of the effects of selecting galaxies according to apparent magnitude or diameter, is reported in chapter three. Chapter four delineates some properties of groups of galaxies, explaining the reasons for discrepant results on group distributions. Chapter five is a study of the distribution of galaxy clusters, with different statistical tools, like correlations, percolation, the void probability function and counts in cells; the same scale-invariant behaviour found for galaxies is recovered. Chapter six describes our finding that rich galaxy clusters, too, belong to the fundamental plane of elliptical galaxies, and discusses its possible implications. Finally, chapter seven reviews the possibilities offered by multi-slit and multi-fibre spectrographs, and I present some observational work on nearby and distant galaxy clusters. In particular, I show the opportunities offered by ongoing surveys of galaxies coupled with multi-object fibre spectrographs, focusing on the ESO Key Programme 'A galaxy redshift survey in the south galactic pole region', to which I collaborate, and on MEFOS, a multi-fibre instrument with automatic positioning. Published papers related to the work described in this thesis are reported in the last appendix. (author)

  6. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    The article is devoted to the statistical analysis of the results of computer-based testing for the evaluation of the educational achievements of students. The issue is relevant because computer-based testing in Russian universities has become an important method for evaluating students' educational achievements and the quality of the modern educational process. The use of modern methods and programs for the statistical analysis of computer-based testing results and for assessing the quality of the developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program, “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, forming queries, and generating reports, lists, and matrices of answers for the statistical analysis of the quality of test items. The methodology, experience, and some results of its use by university teachers are described in the article. Related topics, such as test development, models, algorithms, technologies, and software for large-scale computer-based testing, have been discussed by the authors in their previous publications, which are presented in the reference list.

  7. Statistical analysis of seismicity and hazard estimation for Italy (mixed approach). Statistical parameters of main shocks and aftershocks in the Italian region

    International Nuclear Information System (INIS)

    Molchan, G.M.; Kronrod, T.L.; Dmitrieva, O.E.

    1995-03-01

    The catalog of earthquakes of Italy (1900-1993) is analyzed in the present work. The following problems have been considered: 1) the choice of the operating magnitude, 2) an analysis of data completeness, and 3) grouping (in time and in space). The catalog has been separated into main shocks and aftershocks. Statistical estimates of the seismicity parameters (a, b) are obtained for the seismogenic zones defined by GNDT. The non-standard elements of the analysis are: (a) statistical estimation and comparison of seismicity parameters under arbitrary grouping of the data in magnitude, time and space; (b) use of a non-conventional statistical method for aftershock identification, based on the idea of optimizing the two kinds of errors in the aftershock identification process; (c) use of the aftershock zones to reveal seismically interrelated seismogenic zones. This procedure contributes to the stability of the estimation of the b-value. Refs, 25 figs, tabs
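
    As a small illustration of one of the seismicity parameters mentioned, the Gutenberg-Richter b-value can be estimated by Aki's maximum-likelihood formula b = log10(e) / (mean(M) - Mc). This is a generic sketch on synthetic magnitudes, not the authors' exact procedure:

    ```python
    # Sketch: maximum-likelihood (Aki) estimate of the Gutenberg-Richter
    # b-value from a catalog of magnitudes above the completeness magnitude.
    import numpy as np

    rng = np.random.default_rng(2)
    m_c = 2.0                                  # completeness magnitude
    # magnitudes above m_c are exponential with rate b*ln(10); here b = 1
    mags = m_c + rng.exponential(scale=1 / (1.0 * np.log(10)), size=5000)

    b_value = np.log10(np.e) / (mags.mean() - m_c)
    print(f"b = {b_value:.2f}")                # should come out close to 1.0
    ```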

  8. Analysis and classification of ECG-waves and rhythms using circular statistics and vector strength

    Directory of Open Access Journals (Sweden)

    Janßen Jan-Dirk

    2017-09-01

    The most common way to analyse heart rhythm is to calculate the RR interval and the heart rate variability. For further evaluation, descriptive statistics are often used. Here we introduce a new and more natural heart rhythm analysis tool that is based on circular statistics and vector strength. Vector strength is a measure of the periodicity, or lack of periodicity, of a signal. We divide the signal into non-overlapping window segments and project the detected R-waves around the unit circle using the complex exponential function and the median RR interval. In addition, we calculate the vector strength and apply circular statistics as well as an angular histogram to the R-wave vectors. This approach enables an intuitive visualization and analysis of rhythmicity. Our results show that ECG waves and rhythms can be easily visualized, analysed and classified by circular statistics and vector strength.
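
    A minimal sketch of the vector-strength computation described above, assuming synthetic R-wave times and using the median RR interval as the projection period:

    ```python
    # Sketch: project R-wave times onto the unit circle with the median RR
    # interval as the period, then measure the concentration of the phases.
    import numpy as np

    rng = np.random.default_rng(3)
    rr = 0.8 + 0.02 * rng.normal(size=200)   # slightly jittered RR intervals (s)
    r_times = np.cumsum(rr)                  # R-wave event times

    period = np.median(np.diff(r_times))     # median RR interval
    phases = np.exp(2j * np.pi * r_times / period)

    vector_strength = np.abs(phases.mean())  # 1 = perfectly periodic, 0 = none
    mean_phase = np.angle(phases.mean())
    print(f"vector strength = {vector_strength:.3f}, mean phase = {mean_phase:.2f} rad")
    ```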

  9. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    Science.gov (United States)

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data

    Directory of Open Access Journals (Sweden)

    Maria Vinaixa

    2012-10-01

    Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to those of the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper provides a comprehensive overview of a statistical analysis workflow for ranking the relevant metabolite features to be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel to all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of honouring or violating the mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as the determination of sample size, analytical variation, the assumptions of normality and homoscedasticity, and correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples.
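
    A minimal sketch of such parallel univariate testing, here with feature-wise Welch t-tests and a hand-rolled Benjamini-Hochberg correction on simulated data (the paper discusses the choice of test and its assumptions rather than prescribing this exact recipe):

    ```python
    # Sketch: test every LC/MS feature between two groups in parallel, then
    # control the false discovery rate with Benjamini-Hochberg.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    n_feat, n_per_group, n_true = 1000, 20, 50
    a = rng.normal(size=(n_feat, n_per_group))
    b = rng.normal(size=(n_feat, n_per_group))
    b[:n_true] += 1.5                        # spike in some real differences

    # Welch's t-test applied feature-wise (no equal-variance assumption)
    _, p = stats.ttest_ind(a, b, axis=1, equal_var=False)

    def bh_fdr(p, q=0.05):
        """Benjamini-Hochberg: largest k with p_(k) <= (k/m)*q sets the cutoff."""
        m = len(p)
        order = np.argsort(p)
        passed = p[order] <= (np.arange(1, m + 1) / m) * q
        k = passed.nonzero()[0].max() + 1 if passed.any() else 0
        sig = np.zeros(m, dtype=bool)
        sig[order[:k]] = True
        return sig

    print(f"significant features at 5% FDR: {bh_fdr(p).sum()}")
    ```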

  11. Analysis and Comprehensive Analytical Modeling of Statistical Variations in Subthreshold MOSFET's High Frequency Characteristics

    Directory of Open Access Journals (Sweden)

    Rawid Banchuin

    2014-01-01

    In this research, an analysis of the statistical variations in subthreshold MOSFETs' high-frequency characteristics, defined in terms of gate capacitance and transition frequency, is presented, and comprehensive analytical models of such variations, expressed in terms of their variances, are proposed. Major physical-level imperfections, including random dopant fluctuation and variations in the MOSFET manufacturing process, are taken into account in the proposed analysis and modeling. An up-to-date comprehensive analytical model of statistical variation in MOSFET parameters is used as the basis of the analysis and modeling. The resulting models are both analytic and comprehensive, as they are precise mathematical expressions in terms of the physical-level variables of the MOSFET. Furthermore, they have been verified at the nanometer level using 65 nm BSIM4-based benchmarks and found to be very accurate, with average errors below 5%. Hence, the analysis yields models that are a potential mathematical tool for the statistical and variability-aware analysis and design of subthreshold MOSFET-based VHF circuits, systems and applications.

  12. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  13. Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution

    International Nuclear Information System (INIS)

    Entin Hartini; Mike Susmikanti; Antonius Sitompul

    2008-01-01

    In the evaluation of the strength of ceramic and glass materials, a statistical approach is necessary. The strength of ceramics and glass depends on the size distribution of flaws in the material. The strength distribution of ductile materials is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramics and glass follows a Weibull distribution. The Weibull distribution is an indicator of material strength failure resulting from a distribution of flaw sizes. In this paper, the cumulative probability of failure versus fracture stress and the cumulative reliability of the material were calculated. The statistical criteria supporting the strength analysis of silicon nitride were computed using MATLAB. (author)
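
    A sketch of the kind of Weibull strength analysis described, using scipy in place of the authors' MATLAB code; the fracture-stress sample and parameter values are synthetic:

    ```python
    # Sketch: fit a two-parameter Weibull distribution to fracture-stress data
    # and evaluate the cumulative probability of failure at an applied stress.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    true_shape, true_scale = 8.0, 400.0   # Weibull modulus, characteristic strength (MPa)
    stress = true_scale * rng.weibull(true_shape, size=100)

    # Fix the location at zero to obtain the usual two-parameter form
    shape, loc, scale = stats.weibull_min.fit(stress, floc=0)
    print(f"Weibull modulus m = {shape:.1f}, characteristic strength = {scale:.0f} MPa")

    sigma = 350.0
    print(f"P(failure at {sigma} MPa) = {stats.weibull_min.cdf(sigma, shape, loc, scale):.3f}")
    ```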

  14. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  15. PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data

    International Nuclear Information System (INIS)

    Beedgen, R.; Bicking, U.

    1987-04-01

    The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide, on the basis of statistical considerations, whether a loss of material might have occurred in a given sequence of materials balance periods. The evaluation of the material balance data is based on statistical test procedures. In PROSA three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM-PC with DOS 3.1. (orig.)

  16. Statistical testing and power analysis for brain-wide association study.

    Science.gov (United States)

    Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng

    2018-04-05

    The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and the false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons and can therefore efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study: our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Brain F-18 FDG PET for localization of epileptogenic zones in frontal lobe epilepsy: visual assessment and statistical parametric mapping analysis

    International Nuclear Information System (INIS)

    Kim, Yu Kyeong; Lee, Dong Soo; Lee, Sang Kun; Chung, Chun Kee; Yeo, Jeong Seok; Chung, June Key; Lee, Myung Chul

    2001-01-01

    We evaluated the sensitivity of F-18 FDG PET, by visual assessment and by statistical parametric mapping (SPM) analysis, for the localization of epileptogenic zones in frontal lobe epilepsy. Twenty-four patients with frontal lobe epilepsy were examined. All patients improved after surgical resection (Engel class I or II). On pathological examination, 18 patients showed cortical dysplasia, 4 patients tumor, and 2 patients cortical scar. Hypometabolic lesions were identified on F-18 FDG PET by visual assessment and by SPM analysis, in which the cutoff threshold was varied. MRI showed structural lesions in 12 patients and normal findings in the remaining 12. F-18 FDG PET correctly localized the epileptogenic zones in 13 patients (54%) by visual assessment. The sensitivity of F-18 FDG PET in MR-negative patients (50%) was similar to that in MR-positive patients (67%). On SPM analysis, sensitivity decreased as the p value threshold was lowered. Using an uncorrected p value of 0.05 as the threshold, the sensitivity of SPM analysis was 63%, which was not statistically different from that of visual assessment. F-18 FDG PET was sensitive in finding epileptogenic zones, revealing hypometabolic areas in MR-negative as well as MR-positive patients with frontal lobe epilepsy. SPM analysis showed sensitivity comparable to visual assessment and could be used as an aid in the diagnosis of epileptogenic zones in frontal lobe epilepsy.

  18. Statistical analysis of magnetically soft particles in magnetorheological elastomers

    Science.gov (United States)

    Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.

    2017-04-01

    The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli depends crucially on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain better insight into the correlation between macroscopic effects and microstructure, and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations were carried out as a basis for a statistical analysis of the particle configurations. Different MREs with 2-15 wt% (0.27-2.3 vol%) of iron powder and different spatial arrangements of the particles inside the matrix were prepared. The X-μCT results were processed with image analysis software to extract the geometrical properties of the particles, with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
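
    A minimal sketch of a histogram-based pair correlation function g(r) of the kind computed above, with ideal-gas normalization, no boundary corrections, and random stand-in particle positions:

    ```python
    # Sketch: radial pair correlation function g(r) from particle centres.
    import numpy as np

    rng = np.random.default_rng(6)
    box, n = 100.0, 500                        # box edge (um), particle count
    pos = rng.uniform(0, box, size=(n, 3))

    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]             # unique pair distances

    edges = np.linspace(0.5, 25, 50)
    counts, _ = np.histogram(d, bins=edges)
    r = 0.5 * (edges[:-1] + edges[1:])
    shell = 4 * np.pi * r**2 * np.diff(edges)  # spherical shell volumes
    density = n / box**3
    g = counts / (0.5 * n * density * shell)   # normalize by ideal-gas pair count

    print(g[:5])                               # ~1 everywhere for an uncorrelated system
    ```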

  19. Pattern recognition in menstrual bleeding diaries by statistical cluster analysis

    Directory of Open Access Journals (Sweden)

    Wessel Jens

    2009-07-01

    Background: The aim of this paper is to empirically identify a treatment-independent statistical method to describe clinically relevant bleeding patterns, using bleeding diaries from clinical studies on various sex-hormone-containing drugs. Methods: We used four hierarchical cluster analysis methods, single, average and complete linkage as well as Ward's method, for pattern recognition in menstrual bleeding diaries. The optimal number of clusters was determined using the semi-partial R², the cubic clustering criterion, and the pseudo-F and pseudo-t² statistics. Finally, the interpretability of the results from a gynecological point of view was assessed. Results: Ward's method yielded distinct clusters of the bleeding diaries; the other methods successively chained the observations into one cluster. The optimal number of distinctive bleeding patterns was six. We found two desirable and four undesirable bleeding patterns. Cyclic and non-cyclic bleeding patterns were well separated. Conclusion: Using this cluster analysis with Ward's method, medications and devices having an impact on bleeding can easily be compared and categorized.
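
    A minimal sketch of Ward-linkage clustering cut at the six patterns reported above; the diary feature vectors are random stand-ins for real bleeding summaries:

    ```python
    # Sketch: Ward's minimum-variance hierarchical clustering of bleeding
    # diaries, cut into six clusters.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(7)
    features = rng.normal(size=(120, 5))       # 120 diaries, 5 summary features

    Z = linkage(features, method="ward")       # Ward linkage on the raw features
    labels = fcluster(Z, t=6, criterion="maxclust")
    print(np.bincount(labels)[1:])             # diaries per bleeding pattern
    ```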

  20. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relative low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are

  1. Statistical Measures to Quantify Similarity between Molecular Dynamics Simulation Trajectories

    Directory of Open Access Journals (Sweden)

    Jenny Farmer

    2017-11-01

    Molecular dynamics simulation is commonly employed to explore protein dynamics. Despite the disparate timescales between functional mechanisms and molecular dynamics (MD) trajectories, functional differences are often inferred from differences in conformational ensembles between two proteins in structure-function studies that investigate the effect of mutations. A common measure to quantify differences in dynamics is the root mean square fluctuation (RMSF) about the average position of residues defined by Cα-atoms. Using six MD trajectories describing three native/mutant pairs of beta-lactamase, we make comparisons with additional measures that include the Jensen-Shannon divergence, modifications of the Kullback-Leibler divergence, and local p-values from one-sample Kolmogorov-Smirnov tests. These additional measures require knowing a probability density function, which we estimate using a nonparametric maximum entropy method that quantifies rare events well. The same measures are applied to distance fluctuations between Cα-atom pairs. Results from several implementations for the quantitative comparison of a pair of MD trajectories are reported, based on fluctuations of single-residue and residue-residue local dynamics. We conclude that there is almost always a statistically significant difference between pairs of 100 ns all-atom simulations on moderate-sized proteins, as evidenced by extraordinarily low p-values.
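
    A sketch of two of the comparison measures named above applied to fluctuation samples from two hypothetical trajectories; histograms stand in for the paper's maximum-entropy density estimate:

    ```python
    # Sketch: Jensen-Shannon distance between histogram density estimates and
    # a two-sample Kolmogorov-Smirnov test on per-residue fluctuation samples.
    import numpy as np
    from scipy.spatial.distance import jensenshannon
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(8)
    fluct_native = rng.normal(0.0, 1.0, size=5000)   # stand-in fluctuation samples
    fluct_mutant = rng.normal(0.2, 1.1, size=5000)

    edges = np.histogram_bin_edges(
        np.concatenate([fluct_native, fluct_mutant]), bins=50)
    p, _ = np.histogram(fluct_native, bins=edges, density=True)
    q, _ = np.histogram(fluct_mutant, bins=edges, density=True)

    print(f"Jensen-Shannon distance: {jensenshannon(p, q):.3f}")
    print(f"KS test p-value: {ks_2samp(fluct_native, fluct_mutant).pvalue:.2e}")
    ```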

  2. Variability analysis of AGN: a review of results using new statistical criteria

    Science.gov (United States)

    Zibecchi, L.; Andruchow, I.; Cellone, S. A.; Romero, G. E.; Combi, J. A.

    We present here a re-analysis of the variability results for a sample of active galactic nuclei (AGN) observed over several sessions with the 2.15 m "Jorge Sahade" telescope (CASLEO), San Juan, Argentina, whose results have been published (Romero et al. 1999, 2000, 2002; Cellone et al. 2000). The motivation for this new analysis is the implementation, during recent years, of improvements in the statistical criteria applied, taking the incidence of the photometric errors into account quantitatively (Cellone et al. 2007). This work is framed as a first step in a comprehensive study of the statistical estimators of AGN variability. This study is motivated by the great diversity of statistical tests that have been proposed to analyze the variability of these objects. Since we note that, in some cases, the variability results for an object depend on the test used, we attempt a comparative study of the various tests and analyze, under the given conditions, which of them is the most efficient and reliable.

  3. Similarity ratio analysis for early stage fault detection with optical emission spectrometer in plasma etching process.

    Directory of Open Access Journals (Sweden)

    Jie Yang

    A Similarity Ratio Analysis (SRA) method is proposed for early-stage Fault Detection (FD) in plasma etching processes, using real-time Optical Emission Spectrometer (OES) data as input. The SRA method can help to realise a highly precise control system by detecting abnormal etch-rate faults in real time during an etching process. The method processes spectrum scans at successive time points and uses a windowing mechanism over the time series to alleviate problems with timing uncertainties due to process shift from one process run to another. An SRA library is first built to capture the features of a healthy etching process. By comparison with the SRA library, a Similarity Ratio (SR) statistic is then calculated for each spectrum scan as the monitored process progresses. A fault detection mechanism, named 3-Warning-1-Alarm (3W1A), takes the SR values as inputs and triggers a system alarm when certain conditions are satisfied. This design reduces the chance of false alarms and provides a reliable fault-reporting service. The SRA method is demonstrated on a real semiconductor manufacturing dataset. The effectiveness of SRA-based fault detection is evaluated using a time-series SR test and also a post-process SR test. The time-series SR provides an early-stage fault detection service, so less energy and material is wasted on faulty processing. The post-process SR provides a fault detection service with higher reliability than the time-series SR, but with fault testing conducted only after each process run completes.

  4. Anomalous Traffic Detection and Self-Similarity Analysis in the Environment of ATMSim

    Directory of Open Access Journals (Sweden)

    Hae-Duck J. Jeong

    2017-12-01

    Internet utilisation has steadily increased, predominantly due to the rapid recent development of information and communication networks and the widespread distribution of smartphones. As a result of this increase in Internet consumption, various types of services, including web services, social networking services (SNS), Internet banking, and remote processing systems, have been created. These services have significantly enhanced global quality of life. However, as a negative side-effect of this rapid development, serious information security problems have also surfaced, leading to serious invasions of Internet privacy and to network attacks. In an attempt to contribute to addressing these problems, this paper proposes a method to detect anomalous traffic using self-similarity analysis in the Anomaly Teletraffic detection Measurement analysis Simulator (ATMSim) environment. Simulations were performed to measure normal and anomalous traffic. First, normal traffic and traffic for each attack, including Address Resolution Protocol (ARP) spoofing and distributed denial-of-service (DDoS) attacks, were measured for 48 h over 10 iterations. Hadoop was used to facilitate the processing of the large amount of collected data, which was stored in the Hadoop Distributed File System (HDFS) and processed with MapReduce. A new platform on Hadoop, the detection system ATMSim, was used to identify anomalous traffic, after which a comparative self-similarity analysis of the normal and anomalous traffic was performed. The collected traffic fell into four categories according to the attack methods used: normal local area network (LAN) traffic, DDoS attack, ARP spoofing, and combined DDoS and ARP attack. ATMSim, the anomaly traffic detection system, was used to determine whether real attacks could be identified effectively. To achieve this, ATMSim was used in simulations for each scenario to test its ability to
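
    Self-similarity of traffic is commonly summarized by the Hurst exponent; the following is a minimal rescaled-range (R/S) sketch on synthetic data (the paper does not specify its estimator, so this is a generic stand-in):

    ```python
    # Sketch: rescaled-range (R/S) estimate of the Hurst exponent.
    # H ~ 0.5 for uncorrelated traffic; H > 0.5 indicates self-similarity.
    import numpy as np

    def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
        rs = []
        for w in window_sizes:
            vals = []
            for start in range(0, len(x) - w + 1, w):
                seg = x[start:start + w]
                dev = np.cumsum(seg - seg.mean())
                r = dev.max() - dev.min()      # range of cumulative deviations
                s = seg.std()
                if s > 0:
                    vals.append(r / s)
            rs.append(np.mean(vals))
        # slope of log(R/S) versus log(window size) is the Hurst exponent
        slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
        return slope

    rng = np.random.default_rng(9)
    print(f"H = {hurst_rs(rng.normal(size=4096)):.2f}")   # close to 0.5 for white noise
    ```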

  5. A Statistical Primer: Understanding Descriptive and Inferential Statistics

    OpenAIRE

    Gillian Byrne

    2007-01-01

    As libraries and librarians move more towards evidence‐based decision making, the data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...

  6. Statistical mechanics of cellular automata

    International Nuclear Information System (INIS)

    Wolfram, S.

    1983-01-01

    Cellular automata are used as simple mathematical models to investigate self-organization in statistical mechanics. A detailed analysis is given of "elementary" cellular automata consisting of a sequence of sites with values 0 or 1 on a line, with each site evolving deterministically in discrete time steps according to definite rules involving the values of its nearest neighbors. With simple initial configurations, the cellular automata either tend to homogeneous states or generate self-similar patterns with fractal dimensions ≈ 1.59 or ≈ 1.69. With "random" initial configurations, the irreversible character of the cellular automaton evolution leads to several self-organization phenomena. Statistical properties of the structures generated are found to lie in two universality classes, independent of the details of the initial state or the cellular automaton rules. More complicated cellular automata are briefly considered, and connections with dynamical systems theory and the formal theory of computation are discussed.

  7. Statistical Compilation of the ICT Sector and Policy Analysis | Page 5 ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The project is designed to expand the scope of conventional investigation beyond the telecommunications industry to include other vertically integrated components of the ICT sector such as manufacturing and services. ...

  8. Statistical Analysis of CFD Solutions From the Fifth AIAA Drag Prediction Workshop

    Science.gov (United States)

    Morrison, Joseph H.

    2013-01-01

    A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using a common grid sequence and multiple turbulence models for the June 2012 fifth Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for the 4th Drag Prediction Workshop. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.

  9. The statistical analysis of the mobility and the labor force use

    Directory of Open Access Journals (Sweden)

    Daniela-Emanuela Dãnãcicã

    2006-05-01

    The paper reviews some of the classical statistical methods used in the analysis of the labor force and proposes new forms of analysis required for adopting optimal economic patterns and strategies. The proposed methods, namely the linear mean deviation used in the analysis of the external mobility of the labor force, the coefficient of variation used in the analysis of the external mobility of the labor force, and the two-dimensional table used in the calculation of the coefficient of internal mobility, are illustrated by their premises, calculation methodology, practical applications and guidance for their use in adopting and applying optimal economic policy.

  10. Dimensional analysis, similarity, analogy, and the simulation theory

    International Nuclear Information System (INIS)

    Davis, A.A.

    1978-01-01

    Dimensional analysis, similarity, analogy, and cybernetics are shown to be four consecutive steps in the application of simulation theory. This paper introduces the classes of phenomena which follow the same formal mathematical equations as models of natural laws, and the groups of phenomena within which simplified nondimensional mathematical equations can be introduced. Simulation by similarity within a specific field of physics, by analogy across two or more different fields of physics, and by cybernetics across two or more fields such as mathematics, physics, biology, economics, politics, and sociology appears as a single theory which permits one to transfer experimental results from models, suitably selected to meet the conditions of research, construction, and laboratory measurement, to the originals which are the primary objects of the research. Some conclusions which cannot be avoided in the use of simplified nondimensional mathematical equations as models of natural laws are presented. Limitations on the use of simulation theory based on assumed simplifications are recognized. This paper argues that it is necessary in scientific research to write mathematical models of general laws which apply to nature in its entirety. The paper proposes extending the second law of thermodynamics, as a generalized law of entropy, to model life and its activities. This paper shows that the physical study and the philosophical interpretation of phenomena and natural laws cannot be separated in scientific work; they are interconnected and neither can be put above the other.

  11. Statistics of equally weighted random paths on a class of self-similar structures

    International Nuclear Information System (INIS)

    Knezevic, Milan; Knezevic, Dragica; Spasojevic, Djordje

    2004-01-01

    We study the statistics of equally weighted random walk paths on a family of Sierpinski gasket lattices whose members are labelled by an integer b (2 ≤ b < ∞). For b > 2, the mean path end-to-end distance grows more slowly than any power of the path length N. We provide arguments for the emergence of the usual power-law critical behaviour in the limit b → ∞, when the fractal lattices become almost compact

  12. Statistics of ductile fracture surfaces: the effect of material parameters

    DEFF Research Database (Denmark)

    Ponson, Laurent; Cao, Yuanyuan; Bouchaud, Elisabeth

    2013-01-01

    distributed. The three dimensional analysis permits modeling of a three dimensional material microstructure and of the resulting three dimensional stress and deformation states that develop in the fracture process region. Material parameters characterizing void nucleation are varied and the statistics...... of the resulting fracture surfaces is investigated. All the fracture surfaces are found to be self-affine over a size range of about two orders of magnitude with a very similar roughness exponent of 0.56 ± 0.03. In contrast, the full statistics of the fracture surfaces is found to be more sensitive to the material...

  13. Statistical analysis of the hydrodynamic pressure in the near field of compressible jets

    International Nuclear Information System (INIS)

    Camussi, R.; Di Marco, A.; Castelain, T.

    2017-01-01

    Highlights: • Statistical properties of pressure fluctuations retrieved through wavelet analysis. • Time delay PDFs approximated by a log-normal distribution. • Amplitude PDFs approximated by a Gamma distribution. • Random variable PDFs weakly dependent upon position and Mach number. • A general stochastic model achieved for the distance dependency. Abstract: This paper is devoted to the statistical characterization of the pressure fluctuations measured in the near field of a compressible jet at two subsonic Mach numbers, 0.6 and 0.9. The analysis is focused on the hydrodynamic pressure measured at different distances from the jet exit and analyzed at the typical frequency associated with the Kelvin–Helmholtz instability. Statistical properties are retrieved by applying the wavelet transform to the experimental data and computing the wavelet scalogram around that frequency. This procedure highlights traces of events that appear intermittently in time and have variable strength. A wavelet-based event tracking procedure has been applied, providing a statistical characterization of the time delay between successive events and of their energy level. On this basis, two stochastic models are proposed and validated against the experimental data in the different flow conditions

  14. Categorical data processing for real estate objects valuation using statistical analysis

    Science.gov (United States)

    Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.

    2018-05-01

    Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.
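
    A minimal sketch of one standard coding scheme for categorical property descriptors, dummy (one-hot) coding followed by ordinary least squares; the toy data, column names, and units are assumptions:

    ```python
    # Sketch: dummy-code categorical real-estate descriptors, then fit an
    # ordinary least-squares price regression.
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({
        "area_m2":  [54, 72, 38, 95, 61, 80],
        "district": ["centre", "north", "centre", "south", "north", "south"],
        "material": ["brick", "panel", "brick", "brick", "panel", "panel"],
        "price":    [5.1, 5.9, 3.8, 7.6, 5.0, 6.4],   # in millions, say
    })

    X = pd.get_dummies(df[["area_m2", "district", "material"]],
                       drop_first=True).astype(float)  # avoid the dummy-variable trap
    X.insert(0, "intercept", 1.0)

    beta, *_ = np.linalg.lstsq(X.to_numpy(), df["price"].to_numpy(), rcond=None)
    print(dict(zip(X.columns, beta.round(3))))
    ```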

  15. Statistical Compilation of the ICT Sector and Policy Analysis | Page 2 ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... to widen and deepen, so too does its impact on economic development. ... The outcomes of such efforts will subsequently inform policy discourse and ...

  16. Statistical analysis plan for the EuroHYP-1 trial

    DEFF Research Database (Denmark)

    Winkel, Per; Bath, Philip M; Gluud, Christian

    2017-01-01

    Score; (4) brain infarct size at 48 ± 24 hours; (5) EQ-5D-5L score; and (6) WHODAS 2.0 score. Other outcomes are: the primary safety outcome, serious adverse events; and the incremental cost-effectiveness and cost-utility ratios. The analysis sets include (1) the intention-to-treat population and (2) the per-protocol population. The sample size is estimated at 800 patients (5% type 1 and 20% type 2 errors). All analyses are adjusted for the protocol-specified stratification variable (nationality of centre) and the minimisation variables. In the analysis, we use ordinal regression (the primary outcome), logistic regression (binary outcomes), the general linear model (continuous outcomes), and the Poisson or negative binomial model (rate outcomes). DISCUSSION: Major adjustments compared with the original statistical analysis plan encompass: (1) adjustment of analyses by nationality; (2) power

  17. A Quantitative Comparison of the Similarity between Genes and Geography in Worldwide Human Populations

    Science.gov (United States)

    Wang, Chaolong; Zöllner, Sebastian; Rosenberg, Noah A.

    2012-01-01

    Multivariate statistical techniques such as principal components analysis (PCA) and multidimensional scaling (MDS) have been widely used to summarize the structure of human genetic variation, often in easily visualized two-dimensional maps. Many recent studies have reported similarity between geographic maps of population locations and MDS or PCA maps of genetic variation inferred from single-nucleotide polymorphisms (SNPs). However, this similarity has been evident primarily in a qualitative sense; and, because different multivariate techniques and marker sets have been used in different studies, it has not been possible to formally compare genetic variation datasets in terms of their levels of similarity with geography. In this study, using genome-wide SNP data from 128 populations worldwide, we perform a systematic analysis to quantitatively evaluate the similarity of genes and geography in different geographic regions. For each of a series of regions, we apply a Procrustes analysis approach to find an optimal transformation that maximizes the similarity between PCA maps of genetic variation and geographic maps of population locations. We consider examples in Europe, Sub-Saharan Africa, Asia, East Asia, and Central/South Asia, as well as in a worldwide sample, finding that significant similarity between genes and geography exists in general at different geographic levels. The similarity is highest in our examples for Asia and, once highly distinctive populations have been removed, Sub-Saharan Africa. Our results provide a quantitative assessment of the geographic structure of human genetic variation worldwide, supporting the view that geography plays a strong role in giving rise to human population structure. PMID:22927824

  18. A quantitative comparison of the similarity between genes and geography in worldwide human populations.

    Science.gov (United States)

    Wang, Chaolong; Zöllner, Sebastian; Rosenberg, Noah A

    2012-08-01

    Multivariate statistical techniques such as principal components analysis (PCA) and multidimensional scaling (MDS) have been widely used to summarize the structure of human genetic variation, often in easily visualized two-dimensional maps. Many recent studies have reported similarity between geographic maps of population locations and MDS or PCA maps of genetic variation inferred from single-nucleotide polymorphisms (SNPs). However, this similarity has been evident primarily in a qualitative sense; and, because different multivariate techniques and marker sets have been used in different studies, it has not been possible to formally compare genetic variation datasets in terms of their levels of similarity with geography. In this study, using genome-wide SNP data from 128 populations worldwide, we perform a systematic analysis to quantitatively evaluate the similarity of genes and geography in different geographic regions. For each of a series of regions, we apply a Procrustes analysis approach to find an optimal transformation that maximizes the similarity between PCA maps of genetic variation and geographic maps of population locations. We consider examples in Europe, Sub-Saharan Africa, Asia, East Asia, and Central/South Asia, as well as in a worldwide sample, finding that significant similarity between genes and geography exists in general at different geographic levels. The similarity is highest in our examples for Asia and, once highly distinctive populations have been removed, Sub-Saharan Africa. Our results provide a quantitative assessment of the geographic structure of human genetic variation worldwide, supporting the view that geography plays a strong role in giving rise to human population structure.
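
    A minimal sketch of the Procrustes step described above, using scipy's procrustes on fabricated coordinates; one common summary is the similarity statistic t0 = sqrt(1 - D), where D is the Procrustes disparity:

    ```python
    # Sketch: Procrustes comparison of a genetic PCA map with geographic
    # population coordinates. scipy's procrustes returns the disparity
    # (sum of squared differences after optimal scaling/rotation).
    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(10)
    geo = rng.uniform(size=(30, 2))                   # (longitude, latitude)-like
    # PCA map = rotated geography plus noise, to mimic genes-geography similarity
    pca = geo @ np.array([[0.6, -0.8], [0.8, 0.6]]) + 0.05 * rng.normal(size=(30, 2))

    geo_t, pca_t, disparity = procrustes(geo, pca)
    similarity = np.sqrt(1 - disparity)               # 1 = identical configurations
    print(f"t0 = {similarity:.3f}")
    ```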

  19. NEW PARADIGM OF ANALYSIS OF STATISTICAL AND EXPERT DATA IN PROBLEMS OF ECONOMICS AND MANAGEMENT

    OpenAIRE

    Orlov A. I.

    2014-01-01

    The article is devoted to the methods of analysis of statistical and expert data in problems of economics and management that are discussed in the framework of scientific specialization "Mathematical methods of economy", including organizational-economic and economic-mathematical modeling, econometrics and statistics, as well as economic aspects of decision theory, systems analysis, cybernetics, operations research. The main provisions of the new paradigm of this scientific and practical fiel...

  20. Integrating Expert Knowledge with Statistical Analysis for Landslide Susceptibility Assessment at Regional Scale

    Directory of Open Access Journals (Sweden)

    Christos Chalkias

    2016-03-01

    In this paper, an integrated landslide susceptibility model combining expert-based and bivariate statistical analysis (Landslide Susceptibility Index, LSI) approaches is presented. Factors related to the occurrence of landslides, such as elevation, slope angle, slope aspect, lithology, land cover, Mean Annual Precipitation (MAP) and Peak Ground Acceleration (PGA), were analyzed within a GIS environment. This integrated model produced a landslide susceptibility map which categorized the study area according to the probability level of landslide occurrence. The accuracy of the final map was evaluated by Receiver Operating Characteristic (ROC) analysis using an independent validation dataset of landslide events. The prediction ability was found to be 76%, revealing that the integration of statistical analysis with human expertise can provide an acceptable landslide susceptibility assessment at regional scale.
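
    A minimal sketch of bivariate (LSI-style) susceptibility weights for a single factor, computed as log-ratios of class landslide density to the overall density; the class counts are invented:

    ```python
    # Sketch: bivariate statistical weights for one susceptibility factor,
    # e.g. lithology classes; positive weight = more susceptible than average.
    import numpy as np

    landslide_cells = np.array([120, 40, 15, 300])    # landslide cells per class
    class_cells     = np.array([9000, 7000, 12000, 10000])  # total cells per class

    overall = landslide_cells.sum() / class_cells.sum()
    weights = np.log((landslide_cells / class_cells) / overall)
    print(weights.round(2))
    ```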

  1. Wavelet Statistical Analysis of Low-Latitude Geomagnetic Measurements

    Science.gov (United States)

    Papa, A. R.; Akel, A. F.

    2009-05-01

    Following previous works by our group (Papa et al., JASTP, 2006), where we analyzed a series of records acquired at the Vassouras National Geomagnetic Observatory in Brazil for the month of October 2000, we introduce a wavelet analysis for the same type of data and for other periods. It is well known that wavelets allow a more detailed study in several senses: the time window for analysis can be drastically reduced compared to traditional methods (Fourier, for example), while allowing an almost continuous tracking of both the amplitude and the frequency of signals as time goes by. This advantage opens possibilities for potentially useful forecasting methods of the type also advanced by our group in previous works (see, for example, Papa and Sosman, JASTP, 2008). However, the simultaneous statistical analysis of both time series (in our case amplitude and frequency) is a challenging matter, and it is in this sense that we have found what we consider our main goal. Some possible directions for future work are also advanced.
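
    A minimal sketch of a continuous wavelet transform on a synthetic geomagnetic-style signal, assuming the PyWavelets library and a Morlet mother wavelet (the authors' actual toolchain is not specified):

    ```python
    # Sketch: continuous wavelet transform giving the time-resolved
    # amplitude/frequency picture described above.
    import numpy as np
    import pywt

    fs = 1.0                                          # one sample per second
    t = np.arange(0, 3600, 1 / fs)
    signal = np.sin(2 * np.pi * t / 300) + 0.5 * np.sin(2 * np.pi * t / 60)

    scales = np.arange(1, 256)
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
    print(coeffs.shape)                               # (n_scales, n_samples)
    ```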

  2. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab; Meseguer, José

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability

  3. Statistical power in parallel group point exposure studies with time-to-event outcomes: an empirical comparison of the performance of randomized controlled trials and the inverse probability of treatment weighting (IPTW) approach.

    Science.gov (United States)

    Austin, Peter C; Schuster, Tibor; Platt, Robert W

    2015-10-15

    Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.

  4. Statistical analysis of the W Cyg light curve

    International Nuclear Information System (INIS)

    Klyus, I.A.

    1983-01-01

    A statistical analysis of the light curve of W Cygni has been carried out. The process of brightness variation of the star is shown to be a stationary stochastic one. The hypothesis of stationarity of the process was checked at the significance level α = 0.05. Oscillations of the brightness with average durations of 131 and 250 days have been found. It is proved that the oscillations are narrow-band noise, i.e. cycles. Peaks in the power spectrum corresponding to these cycles exceed the 99% confidence level. The oscillations are shown to be independent

  5. Statistical Analysis of a Method to Predict Drug-Polymer Miscibility

    DEFF Research Database (Denmark)

    Knopp, Matthias Manne; Olesen, Niels Erik; Huang, Yanbin

    2016-01-01

    In this study, a method proposed to predict drug-polymer miscibility from differential scanning calorimetry measurements was subjected to statistical analysis. The method is relatively fast and inexpensive and has gained popularity as a result of the increasing interest in the formulation of drug...... as provided in this study. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci....

  6. Compositional differences among Chinese soy sauce types studied by (13)C NMR spectroscopy coupled with multivariate statistical analysis.

    Science.gov (United States)

    Kamal, Ghulam Mustafa; Wang, Xiaohua; Bin Yuan; Wang, Jie; Sun, Peng; Zhang, Xu; Liu, Maili

    2016-09-01

    Soy sauce, a well-known seasoning all over the world, especially in Asia, is available on the global market in a wide range of types, depending on its purpose and processing method. Its composition varies with the fermentation process and the addition of additives, preservatives and flavor enhancers. Comprehensive (1)H NMR-based studies of the metabonomic variations among the soy sauce types on the global market have been limited by the complexity of the mixture. In the present study, (13)C NMR spectroscopy coupled with multivariate statistical data analysis, such as principal component analysis (PCA) and orthogonal partial least squares-discriminant analysis (OPLS-DA), was applied to investigate the metabonomic variations among different types of soy sauce, namely super light, super dark, red cooking and mushroom soy sauce. The main additives in soy sauce, such as glutamate, sucrose and glucose, were easily distinguished and quantified using (13)C NMR spectroscopy, whereas they are difficult to assign and quantify from (1)H NMR spectra owing to serious signal overlap. The significantly higher concentration of sucrose in dark, red cooking and mushroom-flavored soy sauce can be linked directly to the addition of caramel, and the significantly higher level of glutamate in super light soy sauce, compared with super dark and mushroom-flavored soy sauce, may come from the addition of monosodium glutamate. The study highlights the potential of (13)C NMR-based metabonomics coupled with multivariate statistical data analysis for differentiating between soy sauce types on the basis of additive levels, raw materials and fermentation procedures. Copyright © 2016 Elsevier B.V. All rights reserved.
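
    A minimal sketch of the PCA step on a bucketed spectral matrix via singular value decomposition; the data are random stand-ins for integrated (13)C NMR signals:

    ```python
    # Sketch: PCA scores for sauce samples (rows) over spectral buckets
    # (columns), the unsupervised step preceding OPLS-DA.
    import numpy as np

    rng = np.random.default_rng(11)
    spectra = rng.normal(size=(40, 200))        # 40 samples, 200 buckets

    centered = spectra - spectra.mean(axis=0)   # mean-center each bucket
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = u * s                              # sample coordinates on the PCs
    explained = s**2 / np.sum(s**2)

    print(f"PC1+PC2 explain {100 * explained[:2].sum():.1f}% of variance")
    print(scores[:3, :2])                       # first samples on PC1/PC2
    ```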

  7. A Bifactor Approach to Model Multifaceted Constructs in Statistical Mediation Analysis

    Science.gov (United States)

    Gonzalez, Oscar; MacKinnon, David P.

    2018-01-01

    Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to…

  8. Statistical Compilation of the ICT Sector and Policy Analysis | Page 4 ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  9. Statistical Compilation of the ICT Sector and Policy Analysis | Page 3 ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  10. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik [Korea Institute of Machinery and Materials, Daejeon (Korea, Republic of)

    2015-12-15

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.
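
    A minimal sketch of the distributional part of such an analysis, assuming SciPy and invented failure-cycle data: fit a two-parameter Weibull to each valve and read off the shape, scale, and B10 life (the 10th-percentile life).

    ```python
    import numpy as np
    from scipy import stats

    existing = np.array([1.2e5, 1.8e5, 2.3e5, 2.9e5, 3.6e5, 4.1e5])   # cycles to failure
    improved = np.array([2.1e5, 2.8e5, 3.5e5, 4.4e5, 5.2e5, 6.0e5])

    for name, data in [("existing", existing), ("improved", improved)]:
        shape, loc, scale = stats.weibull_min.fit(data, floc=0)        # 2-parameter fit
        b10 = scale * (-np.log(0.9)) ** (1 / shape)                    # 10th-percentile life
        print(f"{name}: shape={shape:.2f}, scale={scale:.3g}, B10={b10:.3g}")
    ```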

  11. Reliability Evaluation of Concentric Butterfly Valve Using Statistical Hypothesis Test

    International Nuclear Information System (INIS)

    Chang, Mu Seong; Choi, Jong Sik; Choi, Byung Oh; Kim, Do Sik

    2015-01-01

    A butterfly valve is a type of flow-control device typically used to regulate a fluid flow. This paper presents an estimation of the shape parameter of the Weibull distribution, characteristic life, and B10 life for a concentric butterfly valve based on a statistical analysis of the reliability test data taken before and after the valve improvement. The difference in the shape and scale parameters between the existing and improved valves is reviewed using a statistical hypothesis test. The test results indicate that the shape parameter of the improved valve is similar to that of the existing valve, and that the scale parameter of the improved valve is found to have increased. These analysis results are particularly useful for a reliability qualification test and the determination of the service life cycles.

  12. ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization

    CERN Document Server

    Antcheva, I; Bellenot, B; Biskup, M; Brun, R; Buncic, N; Canal, Ph; Casadei, D; Couet, O; Fine, V; Franco, L; Ganis, G; Gheata, A; Gonzalez Maline, D; Goto, M; Iwaszkiewicz, J; Kreshuk, A; Marcos Segura, D; Maunder, R; Moneta, L; Naumann, A; Offermann, E; Onuchin, V; Panacek, S; Rademakers, F; Russo, P; Tadel, M

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariat...

  13. Quantile regression for the statistical analysis of immunological data with many non-detects.

    Science.gov (United States)

    Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth

    2012-07-07

    Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects.
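
    A minimal sketch of median regression with non-detects, assuming statsmodels and simulated data: values below the detection limit are reported at the limit, and the median trend remains estimable as long as fewer than half of the observations are censored.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    dose = np.repeat([0, 1, 2, 3], 25)
    y = np.exp(0.5 * dose + rng.normal(0, 1, size=100))   # true concentrations
    lod = 1.0
    y_obs = np.maximum(y, lod)          # non-detects reported at the detection limit

    df = pd.DataFrame({"y": y_obs, "dose": dose})
    fit = smf.quantreg("y ~ dose", df).fit(q=0.5)         # median regression
    print(fit.params)
    ```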

  14. mapDIA: Preprocessing and statistical analysis of quantitative proteomics data from data independent acquisition mass spectrometry.

    Science.gov (United States)

    Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon

    2015-11-03

    Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for 14-3-3β dynamic interaction network and prostate cancer glycoproteome. The software was written in C++ language and the source code is available for free through SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
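
    The first step of the workflow can be illustrated in a few lines; this is a hedged sketch of total-intensity normalization only, with invented column names rather than mapDIA's actual input format.

    ```python
    import numpy as np
    import pandas as pd

    frags = pd.DataFrame({
        "fragment":  ["f1", "f2", "f3"] * 2,
        "sample":    ["s1"] * 3 + ["s2"] * 3,
        "intensity": [1e5, 2e5, 3e5, 2e5, 5e5, 5e5],
    })
    # Scale each sample so its total intensity matches the grand mean total,
    # removing loading differences before differential-expression analysis.
    totals = frags.groupby("sample")["intensity"].transform("sum")
    target = frags.groupby("sample")["intensity"].sum().mean()
    frags["norm"] = frags["intensity"] * target / totals
    print(frags)
    ```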

  15. GIS and statistical analysis for landslide susceptibility mapping in the Daunia area, Italy

    Science.gov (United States)

    Mancini, F.; Ceppi, C.; Ritrovato, G.

    2010-09-01

    This study addresses landslide susceptibility mapping in the Daunia area (Apulian Apennines, Italy) using a multivariate statistical method and data processing in a Geographical Information System (GIS). The Logistic Regression (hereafter LR) method was chosen to produce a susceptibility map over an area of 130 000 ha where small settlements are historically threatened by landslide phenomena. By means of LR analysis, the tendency to landslide occurrences was assessed by relating a landslide inventory (dependent variable) to a series of causal factors (independent variables) which were managed in the GIS, while the statistical analyses were performed by means of the SPSS (Statistical Package for the Social Sciences) software. The LR analysis produced a reliable susceptibility map of the investigated area and the probability level of landslide occurrence was ranked in four classes. The overall performance achieved by the LR analysis was assessed by local comparison between the expected susceptibility and an independent dataset extrapolated from the landslide inventory. Of the samples classified as susceptible to landslide occurrences, 85% correspond to areas where landslide phenomena have actually occurred. In addition, the consideration of the regression coefficients provided by the analysis demonstrated that a major role is played by the "land cover" and "lithology" causal factors in determining the occurrence and distribution of landslide phenomena in the Apulian Apennines.
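
    A toy version of the LR susceptibility step, assuming scikit-learn: landslide presence/absence is regressed on stand-in causal factors, and predicted probabilities are binned into four susceptibility classes. Real GIS layers would replace the simulated features, and categorical factors would normally be one-hot encoded.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    X = np.column_stack([
        rng.normal(size=n),            # slope
        rng.integers(0, 5, n),         # lithology class (coded)
        rng.integers(0, 4, n),         # land cover class (coded)
    ])
    p = 1 / (1 + np.exp(-(-2 + 0.8 * X[:, 0] + 0.4 * X[:, 1])))
    y = rng.binomial(1, p)             # landslide inventory (1 = occurrence)

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    suscept = lr.predict_proba(Xte)[:, 1]                # susceptibility scores
    classes = np.digitize(suscept, [0.25, 0.5, 0.75])    # four susceptibility classes
    print(lr.coef_, np.bincount(classes, minlength=4))
    ```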

  16. Integrating eQTL data with GWAS summary statistics in pathway-based analysis with application to schizophrenia.

    Science.gov (United States)

    Wu, Chong; Pan, Wei

    2018-04-01

    Many genetic variants affect complex traits through gene expression, which can be exploited to boost statistical power and enhance interpretation in genome-wide association studies (GWASs) as demonstrated by the transcriptome-wide association study (TWAS) approach. Furthermore, due to polygenic inheritance, a complex trait is often affected by multiple genes with similar functions as annotated in gene pathways. Here, we extend TWAS from gene-based analysis to pathway-based analysis: we integrate public pathway collections, expression quantitative trait locus (eQTL) data and GWAS summary association statistics (or GWAS individual-level data) to identify gene pathways associated with complex traits. The basic idea is to weight the SNPs of the genes in a pathway based on their estimated cis-effects on gene expression, then adaptively test for association of the pathway with a GWAS trait by effectively aggregating possibly weak association signals across the genes in the pathway. The P values can be calculated analytically, making the test fast. We applied our proposed test with the KEGG and GO pathways to two schizophrenia (SCZ) GWAS summary association data sets, denoted by SCZ1 and SCZ2 with about 20,000 and 150,000 subjects, respectively. Most of the significant pathways identified by analyzing the SCZ1 data were reproduced by the SCZ2 data. Importantly, we identified 15 novel pathways associated with SCZ, such as GABA receptor complex (GO:1902710), which could not be uncovered by the standard single SNP-based analysis or gene-based TWAS. The newly identified pathways may help us gain insights into the biological mechanism underlying SCZ. Our results showcase the power of incorporating gene expression information and gene functional annotations into pathway-based association testing for GWAS. © 2018 WILEY PERIODICALS, INC.
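
    The basic idea (not the authors' adaptive test) can be sketched as follows: SNP-level GWAS z-scores are aggregated into gene-level TWAS statistics using eQTL-derived weights, and the gene statistics are then combined across the pathway. All numbers, the independence assumptions, and the Stouffer combination below are illustrative simplifications.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def gene_z(snp_z, eqtl_w, ld):
        # burden-style gene statistic: weighted sum of SNP z-scores,
        # standardized using the weights and the SNP correlation (LD) matrix
        return (eqtl_w @ snp_z) / np.sqrt(eqtl_w @ ld @ eqtl_w)

    gene_stats = []
    for m in (5, 8, 3):                      # SNPs per gene, three genes in one pathway
        ld = np.eye(m)                       # assume uncorrelated SNPs for brevity
        z = rng.normal(0.4, 1.0, size=m)     # GWAS summary z-scores
        w = rng.uniform(0.1, 1.0, size=m)    # cis-eQTL effect weights
        gene_stats.append(gene_z(z, w, ld))

    pathway_z = sum(gene_stats) / np.sqrt(len(gene_stats))   # Stouffer combination
    print(f"pathway z = {pathway_z:.2f}")
    ```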

  17. Statistics 101 for Radiologists.

    Science.gov (United States)

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
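
    A worked example of the diagnostic-accuracy concepts reviewed here, using invented test results and scikit-learn for the ROC area:

    ```python
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])               # disease status
    score = np.array([.9, .8, .7, .4, .6, .3, .2, .2, .1, .1])     # test output
    pred = (score >= 0.5).astype(int)                              # binary call

    tn, fp, fn, tp = confusion_matrix(truth, pred).ravel()
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    lr_pos = sens / (1 - spec)                                     # positive likelihood ratio
    print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
          f"LR+={lr_pos:.2f} AUC={roc_auc_score(truth, score):.2f}")
    ```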

  18. Flame surface statistics of constant-pressure turbulent expanding premixed flames

    Science.gov (United States)

    Saha, Abhishek; Chaudhuri, Swetaprovo; Law, Chung K.

    2014-04-01

    In this paper we investigate the local flame surface statistics of constant-pressure turbulent expanding flames. First the statistics of the local length ratio are experimentally determined from high-speed planar Mie scattering images of spherically expanding flames, with the length ratio on the measurement plane, at predefined equiangular sectors, defined as the ratio of the actual flame length to the length of a circular arc of radius equal to the average radius of the flame. Assuming an isotropic distribution of such flame segments we then convolute suitable forms of the length-ratio probability distribution functions (pdfs) to arrive at the corresponding area-ratio pdfs. It is found that both the length-ratio and area-ratio pdfs are nearly log-normally distributed and show self-similar behavior with increasing radius. The near log-normality and rather intermittent behavior of the flame length ratio suggest similarity with dissipation-rate quantities, which motivates a multifractal analysis.

  19. Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech

    Science.gov (United States)

    Přibil, J.; Přibilová, A.

    2009-01-01

    The paper addresses the reflection of microintonation and spectral properties in male and female acted emotional speech. The microintonation component of speech melody is analyzed with regard to its spectral and statistical parameters. According to psychological research on emotional speech, different emotions are accompanied by different spectral noise. We control its amount by the spectral flatness, according to which high-frequency noise is mixed into voiced frames during cepstral speech synthesis. Our experiments are aimed at the statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger) and a neutral state for comparison. Calculated histograms of the spectral flatness distribution are visually compared and modelled by a Gamma probability distribution. Histograms of the cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. The statistical results show good agreement between male and female voices for all emotional states, portrayed by several Czech and Slovak professional actors.
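
    The two statistical steps described above can be sketched with SciPy: fit a Gamma distribution to spectral-flatness values and summarize cepstral coefficients by skewness and kurtosis. The data are simulated stand-ins for per-frame measurements.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    flatness = rng.gamma(shape=2.0, scale=0.05, size=500)   # per-frame spectral flatness
    a, loc, scale = stats.gamma.fit(flatness, floc=0)
    print(f"Gamma fit: shape={a:.2f}, scale={scale:.3f}")

    cepstra = rng.normal(size=(500, 12))                    # frames x cepstral coefficients
    print("skewness:", np.round(stats.skew(cepstra, axis=0)[:3], 2))
    print("kurtosis:", np.round(stats.kurtosis(cepstra, axis=0)[:3], 2))
    ```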

  20. Multivariate statistical analysis of wildfires in Portugal

    Science.gov (United States)

    Costa, Ricardo; Caramelo, Liliana; Pereira, Mário

    2013-04-01

    Several studies demonstrate that wildfires in Portugal present high temporal and spatial variability as well as cluster behavior (Pereira et al., 2005, 2011). This study aims to contribute to the characterization of the fire regime in Portugal through a multivariate statistical analysis of the time series of the number of fires and area burned in Portugal during the 1980-2009 period. The data used in the analysis is an extended version of the Rural Fire Portuguese Database (PRFD) (Pereira et al, 2011), provided by the National Forest Authority (Autoridade Florestal Nacional, AFN), the Portuguese Forest Service, which includes information for more than 500,000 fire records. There are many advanced techniques for examining the relationships among multiple time series at the same time (e.g., canonical correlation analysis, principal components analysis, factor analysis, path analysis, multiple analyses of variance, clustering systems). This study compares and discusses the results obtained with these different techniques. Pereira, M.G., Trigo, R.M., DaCamara, C.C., Pereira, J.M.C., Leite, S.M., 2005: "Synoptic patterns associated with large summer forest fires in Portugal". Agricultural and Forest Meteorology. 129, 11-25. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).

  1. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Background: Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives: This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods: We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results: There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. Conclusion: The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.

  2. [Methods of statistical analysis in differential diagnostics of the degree of brain glioma anaplasia during preoperative stage].

    Science.gov (United States)

    Glavatskiĭ, A Ia; Guzhovskaia, N V; Lysenko, S N; Kulik, A V

    2005-12-01

    The authors propose methods for the preoperative differential diagnosis of the degree of anaplasia of supratentorial brain gliomas using statistical analysis. The approach relies on a complex examination of 934 patients with anaplasia of degrees I-IV who had been treated in the Institute of Neurosurgery from 1990 to 2004. The use of statistical analysis methods for differential diagnosis of the degree of brain glioma anaplasia may optimize the diagnostic algorithm, increase the reliability of the obtained data and, in some cases, avoid irrational operative interventions. Clinically important signs for the use of statistical analysis methods directed at preoperative diagnosis of brain glioma anaplasia have been defined.

  3. Statistics without Tears: Complex Statistics with Simple Arithmetic

    Science.gov (United States)

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…

  4. ZnO crystals obtained by electrodeposition: Statistical analysis of most important process variables

    International Nuclear Information System (INIS)

    Cembrero, Jesus; Busquets-Mataix, David

    2009-01-01

    In this paper a comparative study, by means of a statistical analysis, of the main process variables affecting ZnO crystal electrodeposition is presented. ZnO crystals were deposited on two different substrates, silicon wafer and indium tin oxide. The control variables were substrate type, electrolyte concentration, temperature, exposure time and current density. The morphologies of the different substrates were observed using scanning electron microscopy. The percentage of substrate area covered by ZnO deposit was calculated by computational image analysis. The design of the applied experiments was based on a two-level factorial analysis involving a series of 32 experiments and an analysis of variance. Statistical results reveal that the variables exerting a significant influence on the area covered by ZnO deposit are electrolyte concentration, substrate type and time of deposition, together with a combined two-factor interaction between temperature and current density. However, morphology is also influenced by the surface roughness of the substrates.
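
    A toy two-level factorial analysis in the spirit of the study, assuming statsmodels: coverage is modeled on coded (-1/+1) factors and an ANOVA table is extracted. The response values are invented, and only four of the five control variables are included for brevity.

    ```python
    from itertools import product

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    runs = pd.DataFrame(list(product([-1, 1], repeat=4)),
                        columns=["conc", "temp", "time", "current"])
    # Invented response: concentration and time matter, plus a temp x current interaction
    runs["coverage"] = (50 + 8 * runs.conc + 5 * runs.time
                        + 4 * runs.temp * runs.current
                        + pd.Series(range(16)) % 3)          # crude noise term

    model = smf.ols("coverage ~ conc + temp + time + current + temp:current",
                    data=runs).fit()
    print(sm.stats.anova_lm(model, typ=2))                   # significance of each factor
    ```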

  5. [The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].

    Science.gov (United States)

    Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel

    2017-01-01

    The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists of drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test in general poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
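
    A small illustration of this advice, assuming SciPy: check normality first, then choose between a parametric and a nonparametric two-group test.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(5.0, 1.0, size=30)      # group A measurements
    b = rng.normal(5.6, 1.0, size=30)      # group B measurements

    normal = stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05
    if normal:
        name, res = "t test", stats.ttest_ind(a, b)             # parametric
    else:
        name, res = "Mann-Whitney U", stats.mannwhitneyu(a, b)  # nonparametric
    print(name, f"p = {res.pvalue:.4f}")
    ```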

  6. The research protocol VI: How to choose the appropriate statistical test. Inferential statistics

    Directory of Open Access Journals (Sweden)

    Eric Flores-Ruiz

    2017-10-01

    Full Text Available The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists of drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test in general poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.

  7. Per Object statistical analysis

    DEFF Research Database (Denmark)

    2008-01-01

    of a specific class in turn, and uses a pair of PPO stages to derive the statistics and then assign them to the objects' Object Variables. It may be that this could all be done in some other, simpler way, but several other ways that were tried did not succeed. The procedure output has been tested against...

  8. Statistical analysis of installed wind capacity in the United States

    International Nuclear Information System (INIS)

    Staid, Andrea; Guikema, Seth D.

    2013-01-01

    There is a large disparity in the amount of wind power capacity installed in each of the states in the U.S. It is often thought that the different policies of individual state governments are the main reason for these differences, but this may not necessarily be the case. The aim of this paper is to use statistical methods to study the factors that have the most influence on the amount of installed wind capacity in each state. From this analysis, we were able to use these variables to accurately predict the installed wind capacity and to gain insight into the driving factors for wind power development and the reasons behind the differences among states. Using our best model, we find that the most important variables for explaining the amount of wind capacity have to do with the physical and geographic characteristics of the state as opposed to policies in place that favor renewable energy. - Highlights: • We conduct a statistical analysis of factors influencing wind capacity in the U.S. • We find that state policies do not strongly influence the differences among states. • Driving factors are wind resources, cropland area, and available percentage of land

  9. Hashing for Statistics over K-Partitions

    DEFF Research Database (Denmark)

    Dahlgaard, Soren; Knudsen, Mathias Baek Tejs; Rotenberg, Eva

    2015-01-01

    In this paper we analyze a hash function for k-partitioning a set into bins, obtaining strong concentration bounds for standard algorithms combining statistics from each bin. This generic method was originally introduced by Flajolet and Martin [FOCS'83] in order to save a factor Ω(k) of time per...... concentration bounds on the most popular applications of k-partitioning similar to those we would get using a truly random hash function. The analysis is very involved and implies several new results of independent interest for both simple and double tabulation, e.g. A simple and efficient construction...
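
    An illustrative (and much simplified) use of k-partitioning in the Flajolet-Martin spirit: hash each key once, use the low bits to pick a bin, keep a small per-bin statistic, and average over the bins. The estimator below is a crude distinct-count sketch, not the paper's construction.

    ```python
    import hashlib

    def h(x: str) -> int:
        # 64-bit hash standing in for the tabulation hashing analyzed in the paper
        return int(hashlib.blake2b(x.encode(), digest_size=8).hexdigest(), 16)

    k = 64
    max_rho = [0] * k                          # per-bin max position of lowest set bit

    for key in (f"item{i}" for i in range(100_000)):
        v = h(key)
        b = v % k                              # bin index from the low bits
        rest = v >> 6                          # remaining bits drive the statistic
        max_rho[b] = max(max_rho[b], (rest & -rest).bit_length())

    avg_r = sum(max_rho) / k - 1               # back to 0-indexed bit position
    estimate = k * (2 ** avg_r) / 0.77351      # Flajolet-Martin correction factor
    print(f"distinct estimate ~ {estimate:,.0f}")
    ```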

  10. Statistical Learning in Specific Language Impairment and Autism Spectrum Disorder: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Rita Obeid

    2016-08-01

    Full Text Available Impairments in statistical learning might be a common deficit among individuals with Specific Language Impairment (SLI) and Autism Spectrum Disorder (ASD). Using meta-analysis, we examined statistical learning in SLI (14 studies, 15 comparisons) and ASD (13 studies, 20 comparisons) to evaluate this hypothesis. Effect sizes were examined as a function of diagnosis across multiple statistical learning tasks (Serial Reaction Time, Contextual Cueing, Artificial Grammar Learning, Speech Stream, Observational Learning, Probabilistic Classification). Individuals with SLI showed deficits in statistical learning relative to age-matched controls, g = .47, 95% CI [.28, .66], p < .001. In contrast, statistical learning was intact in individuals with ASD relative to controls, g = –.13, 95% CI [–.34, .08], p = .22. Effect sizes did not vary as a function of task modality or participant age. Our findings inform debates about overlapping social-communicative difficulties in children with SLI and ASD by suggesting distinct underlying mechanisms. In line with the procedural deficit hypothesis (Ullman & Pierpont, 2005), impaired statistical learning may account for phonological and syntactic difficulties associated with SLI. In contrast, impaired statistical learning fails to account for the social-pragmatic difficulties associated with ASD.
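
    The pooling step in such a meta-analysis can be sketched with inverse-variance weighting of per-study effect sizes; the numbers below are invented, and a full analysis would use a random-effects model with heterogeneity estimates.

    ```python
    import numpy as np

    g = np.array([0.55, 0.40, 0.62, 0.31, 0.48])    # per-study Hedges' g
    se = np.array([0.20, 0.15, 0.25, 0.18, 0.22])   # their standard errors

    w = 1 / se**2                                   # inverse-variance weights
    g_pooled = np.sum(w * g) / np.sum(w)
    se_pooled = np.sqrt(1 / np.sum(w))
    lo, hi = g_pooled - 1.96 * se_pooled, g_pooled + 1.96 * se_pooled
    print(f"pooled g = {g_pooled:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```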

  11. Similarity analysis between quantum images

    Science.gov (United States)

    Zhou, Ri-Gui; Liu, XingAo; Zhu, Changming; Wei, Lai; Zhang, Xiafen; Ian, Hou

    2018-06-01

    Similarity analysis between quantum images is essential in quantum image processing, providing a foundation for other fields such as quantum image matching and quantum pattern recognition. In this paper, a quantum scheme based on a novel quantum image representation and the quantum amplitude amplification algorithm is proposed. At the end of the paper, three examples and simulation experiments show that the measurement result must be 0 when two images are the same, and that the measurement result has a high probability of being 1 when the two images are different.

  12. Statistical analysis of environmental dose data for Trombay environment

    International Nuclear Information System (INIS)

    Kale, M.S.; Padmanabhan, N.; Rekha Kutty, R.; Sharma, D.N.; Iyengar, T.S.; Iyer, M.R.

    1993-01-01

    The microprocessor-based environmental dose logging system has been functioning at six stations at Trombay for the past couple of years. The site emergency control centre (SECC) at the Modular Laboratory receives telemetered data every five minutes from the main guard house (South Site), Bhabha Point (top of the hill), the Cirus reactor, the Mod Lab terrace, Hall No. 7 and the Training School Hostel. The data collected are stored in dBASE III+ format for easy processing on a PC. Various statistical parameters and distributions of the environmental gamma dose are determined from the hourly dose data. On the basis of the reactor operation status, an attempt has been made to separate the natural background from the gamma dose contribution of the operating research reactors at each of these monitoring stations. Similar investigations are being carried out for the Tarapur environment. (author). 2 refs., 3 tabs., 2 figs

  13. Statistical analysis of non-homogeneous Poisson processes. Statistical processing of a particle multidetector

    International Nuclear Information System (INIS)

    Lacombe, J.P.

    1985-12-01

    The first part of this thesis is a statistical study of non-homogeneous and spatial Poisson processes. A Neyman-Pearson type test is defined for the intensity measurement of these processes. Conditions are given under which consistency of the test is assured, and others giving the asymptotic normality of the test statistics. Then some techniques for the statistical processing of Poisson fields and their application to the study of a particle multidetector are given. Quality tests of the device are proposed together with signal extraction methods. [fr]
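
    Simulation of a non-homogeneous Poisson process, which underlies this kind of study, is classically done by thinning (Lewis-Shedler). A minimal sketch with an invented intensity function:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def lam(t):
        return 5.0 * (1 + np.sin(2 * np.pi * t / 10))   # time-varying intensity

    def simulate_nhpp(t_max, lam, lam_max):
        events, t = [], 0.0
        while True:
            t += rng.exponential(1 / lam_max)           # candidate from HPP(lam_max)
            if t > t_max:
                return np.array(events)
            if rng.uniform() < lam(t) / lam_max:        # accept with prob lam(t)/lam_max
                events.append(t)

    events = simulate_nhpp(100.0, lam, lam_max=10.0)
    print(len(events), "events; expected about", int(5.0 * 100))
    ```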

  14. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-30

    The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean test values of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.

  15. STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.

    Science.gov (United States)

    Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron

    2015-08-07

    Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
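
    The holdout-reuse idea can be sketched with a simplified Thresholdout-style mechanism: answer each adaptive query with the training estimate unless it drifts from the holdout estimate, in which case return a noise-perturbed holdout value. The parameters and query form below are illustrative, and the published algorithm has additional refinements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def thresholdout(train_vals, hold_vals, threshold=0.04, sigma=0.01):
        """Answer one statistical query given its values on the two data sets."""
        t, h = np.mean(train_vals), np.mean(hold_vals)
        if abs(t - h) > threshold + rng.normal(0, sigma):
            return h + rng.normal(0, sigma)   # holdout answer, masked by noise
        return t                              # training answer agrees; release it

    train = rng.binomial(1, 0.5, size=1000)   # query values on the training set
    hold = rng.binomial(1, 0.5, size=1000)    # query values on the holdout set
    print(f"released answer: {thresholdout(train, hold):.3f}")
    ```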

  16. International Conference on Modern Problems of Stochastic Analysis and Statistics

    CERN Document Server

    2017-01-01

    This book brings together the latest findings in the area of stochastic analysis and statistics. The individual chapters cover a wide range of topics from limit theorems, Markov processes, nonparametric methods, actuarial science, population dynamics, and many others. The volume is dedicated to Valentin Konakov, head of the International Laboratory of Stochastic Analysis and its Applications, on the occasion of his 70th birthday. Contributions were prepared by the participants of the international conference "Modern problems of stochastic analysis and statistics", held at the Higher School of Economics in Moscow from May 29 - June 2, 2016. It offers a valuable reference resource for researchers and graduate students interested in modern stochastics.

  17. Statistical analysis of longitudinal quality of life data with missing measurements

    NARCIS (Netherlands)

    Zwinderman, A. H.

    1992-01-01

    The statistical analysis of longitudinal quality of life data in the presence of missing data is discussed. In cancer trials missing data are generated due to the fact that patients die, drop out, or are censored. These missing data are problematic in the monitoring of the quality of life during the

  18. A framework for the economic analysis of data collection methods for vital statistics.

    Science.gov (United States)

    Jimenez-Soto, Eliana; Hodge, Andrew; Nguyen, Kim-Huong; Dettrick, Zoe; Lopez, Alan D

    2014-01-01

    Over recent years there has been a strong movement towards the improvement of vital statistics and other types of health data that inform evidence-based policies. Collecting such data is not cost free. To date there is no systematic framework to guide investment decisions on methods of data collection for vital statistics or health information in general. We developed a framework to systematically assess the comparative costs and outcomes/benefits of the various data collection methods (DCMs) for vital statistics. The proposed framework is four-pronged and utilises two major economic approaches to systematically assess the available data collection methods: cost-effectiveness analysis and efficiency analysis. We built a stylised example of a hypothetical low-income country to perform a simulation exercise in order to illustrate an application of the framework. Using simulated data, the results from the stylised example show that the rankings of the data collection methods are not affected by the use of either cost-effectiveness or efficiency analysis. However, the rankings are affected by how quantities are measured. There have been several calls for global improvements in collecting useable data, including vital statistics, from health information systems to inform public health policies. Ours is the first study that proposes a systematic framework to assist countries in undertaking an economic evaluation of DCMs. Despite numerous challenges, we demonstrate that a systematic assessment of outputs and costs of DCMs is not only necessary, but also feasible. The proposed framework is general enough to be easily extended to other areas of health information.

  19. Statistical methods in nuclear material accountancy: Past, present and future

    International Nuclear Information System (INIS)

    Pike, D.J.; Woods, A.J.

    1983-01-01

    The analysis of nuclear material inventory data is motivated by the desire to detect any loss or diversion of nuclear material, insofar as such detection may be feasible by statistical analysis of repeated inventory and throughput measurements. The early regulations, which laid down the specifications for the analysis of inventory data, were framed without acknowledging the essentially sequential nature of the data. It is the broad aim of this paper to discuss the historical nature of statistical analysis of inventory data including an evaluation of why statistical methods should be required at all. If it is accepted that statistical techniques are required, then two main areas require extensive discussion. First, it is important to assess the extent to which stated safeguards aims can be met in practice. Second, there is a vital need for reassessment of the statistical techniques which have been proposed for use in nuclear material accountancy. Part of this reassessment must involve a reconciliation of the apparent differences in philosophy shown by statisticians; but, in addition, the techniques themselves need comparative study to see to what extent they are capable of meeting realistic safeguards aims. This paper contains a brief review of techniques with an attempt to compare and contrast the approaches. It will be suggested that much current research is following closely similar lines, and that national and international bodies should encourage collaborative research and practical in-plant implementations. The techniques proposed require credibility and power; but at this point in time statisticians require credibility and a greater level of unanimity in their approach. A way ahead is proposed based on a clear specification of realistic safeguards aims, and a development of a unified statistical approach with encouragement for the performance of joint research. (author)

  20. A Statistical Parameter Analysis and SVM Based Fault Diagnosis Strategy for Dynamically Tuned Gyroscopes

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Gyro fault diagnosis plays a critical role in inertial navigation systems, where high reliability and precision are required. A new fault diagnosis strategy based on statistical parameter analysis (SPA) and a support vector machine (SVM) classification model was proposed for dynamically tuned gyroscopes (DTG). SPA, a time-domain analysis approach, was introduced to compute a set of statistical parameters of the vibration signal as the state features of the DTG, with which the SVM model, a learning machine based on statistical learning theory (SLT), was constructed and trained to identify the working state of the DTG. The experimental results verify that the proposed diagnostic strategy can simply and effectively extract the state features of the DTG, that it outperforms the radial-basis function (RBF) neural network based diagnostic method, and that it diagnoses the working state of the DTG more reliably and accurately.
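
    A toy rendition of the SPA+SVM strategy, assuming SciPy and scikit-learn: time-domain statistics of simulated vibration segments feed an RBF-kernel SVM, with the "faulty" state mimicked by an added oscillation.

    ```python
    import numpy as np
    from scipy import stats
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def features(seg):
        # time-domain statistical parameters used as state features
        return [seg.mean(), seg.std(), np.sqrt(np.mean(seg**2)),        # mean, std, RMS
                stats.skew(seg), stats.kurtosis(seg), seg.max() - seg.min()]

    segs, labels = [], []
    for state in (0, 1):                    # 0 = healthy, 1 = faulty
        for _ in range(100):
            s = rng.normal(size=1024)
            if state:
                s += 0.5 * np.sin(np.linspace(0, 60 * np.pi, 1024))
            segs.append(features(s))
            labels.append(state)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print("CV accuracy:", cross_val_score(clf, np.array(segs), np.array(labels), cv=5).mean())
    ```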