WorldWideScience

Sample records for based statistical analysis

  1. Rweb: Web-based Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Jeff Banfield

    1999-03-01

    Full Text Available Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW accessible data sets, so you may run Rweb on your own data. Rweb can provide a four window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support JavaScript. There is also a set of point-and-click modules under development for use in introductory statistics courses.

  2. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
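
The windowed U-to-Z conversion described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the window length, the large-sample normal approximation (no tie correction), and the synthetic step-change series are all assumptions.

```python
import numpy as np

def mann_whitney_z(x, y):
    """Mann-Whitney U for samples x, y, normalized to a Z score via the
    large-sample normal approximation (ties ignored for simplicity)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    ranks = np.argsort(np.argsort(np.concatenate([x, y]))) + 1
    u = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    mu = n1 * n2 / 2.0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mu) / sigma

def running_mw_z(series, window):
    """Slide two adjacent windows over the series and emit one Z per step."""
    z = []
    for i in range(len(series) - 2 * window + 1):
        a = series[i : i + window]
        b = series[i + window : i + 2 * window]
        z.append(mann_whitney_z(a, b))
    return np.array(z)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 60)])  # step change
z = running_mw_z(data, window=20)
```

Large |Z| values flag windows where the two halves come from different distributions, which is how a running Mann-Whitney statistic localizes a change point.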

  3. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  4. EBprot: Statistical analysis of labeling-based quantitative proteomics data.

    Science.gov (United States)

    Koh, Hiromi W L; Swa, Hannah L F; Fermin, Damian; Ler, Siok Ghee; Gunaratne, Jayantha; Choi, Hyungwon

    2015-08-01

    Labeling-based proteomics is a powerful method for detection of differentially expressed proteins (DEPs). The current data analysis platform typically relies on protein-level ratios, which are obtained by summarizing peptide-level ratios for each protein. In shotgun proteomics, however, some proteins are quantified with more peptides than others, and this reproducibility information is not incorporated into the differential expression (DE) analysis. Here, we propose a novel probabilistic framework EBprot that directly models the peptide-protein hierarchy and rewards the proteins with reproducible evidence of DE over multiple peptides. To evaluate its performance with known DE states, we conducted a simulation study to show that the peptide-level analysis of EBprot provides better receiver-operating characteristics and more accurate estimation of the false discovery rates than the methods based on protein-level ratios. We also demonstrate superior classification performance of peptide-level EBprot analysis in a spike-in dataset. To illustrate the wide applicability of EBprot in different experimental designs, we applied EBprot to a dataset for lung cancer subtype analysis with biological replicates and another dataset for time course phosphoproteome analysis of EGF-stimulated HeLa cells with multiplexed labeling. Through these examples, we show that the peptide-level analysis of EBprot is a robust alternative to the existing statistical methods for the DE analysis of labeling-based quantitative datasets. The software suite is freely available on the Sourceforge website http://ebprot.sourceforge.net/. All MS data have been deposited in the ProteomeXchange with identifier PXD001426 (http://proteomecentral.proteomexchange.org/dataset/PXD001426/). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available Studying the content of images is an important topic that calls for reasonable and accurate image analysis. Image analysis has recently become a vital field because of the huge number of images transferred over transmission media in daily life, and this flood of images has made it a prominent research area. In this paper, the implemented system proceeds through several steps to compute the statistical measures of standard deviation and mean for both color and grey images, and its final step compares the results obtained in the different cases of the test phase. The statistical parameters are used to characterize the content of an image and its texture: standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean via the implementation of the system. The major issue addressed in this work is the brightness distribution, studied via statistical measures under different types of lighting.
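
The brightness statistics named in the abstract (mean, standard deviation, and correlation of intensity distributions) can be sketched with numpy. The function names and the synthetic "same scene, weaker lighting" images are illustrative assumptions, not details from the paper.

```python
import numpy as np

def brightness_stats(img):
    """Mean and standard deviation of pixel intensity; a colour image
    of shape (H, W, 3) is reduced to grey by averaging the channels."""
    img = np.asarray(img, dtype=float)
    if img.ndim == 3:
        img = img.mean(axis=2)
    return img.mean(), img.std()

def intensity_correlation(img_a, img_b):
    """Pearson correlation between the intensity values of two equally
    sized grey images, e.g. to compare lighting conditions."""
    a = np.asarray(img_a, float).ravel()
    b = np.asarray(img_b, float).ravel()
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
grey = rng.integers(0, 256, size=(64, 64))
darker = np.clip(grey * 0.5, 0, 255)      # same scene under weaker lighting
m, s = brightness_stats(grey)
r = intensity_correlation(grey, darker)
```

A uniform dimming leaves the intensity correlation near 1 while halving the mean, which is the kind of separation between content and lighting the paper's measures aim at.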

  6. Signal processing and statistical analysis of space-based measurements

    International Nuclear Information System (INIS)

    Iranpour, K.

    1996-05-01

    The report deals with data obtained by the ROSE rocket project. This project was designed to investigate the low altitude auroral instabilities in the electrojet region. The spectral and statistical analyses indicate the existence of unstable waves in the ionized gas in the region. An experimentally obtained dispersion relation for these waves was established. It was demonstrated that the characteristic phase velocities are much lower than what is expected from the standard theoretical results. The analysis of the ROSE data indicates the cascading of energy from lower to higher frequencies. 44 refs., 54 figs

  7. Energy consumption quota of public buildings based on statistical analysis

    International Nuclear Information System (INIS)

    Zhao Jing; Xin Yajuan; Tong Dingding

    2012-01-01

    The establishment of a building energy consumption quota as a comprehensive indicator used to evaluate the actual energy consumption level is an important measure for promoting the development of building energy efficiency. This paper focused on the determination method of the quota, and firstly introduced the procedure of establishing an energy consumption quota for public buildings, including four important parts: collecting data, classifying and calculating EUIs, standardizing EUIs, and determining the measure of central tendency. The paper also illustrated the standardization process of the EUI by actual calculation based on samples of 10 commercial buildings and 19 hotel buildings. According to the analysis of the frequency distribution of the standardized EUIs of the sample buildings, and combining the characteristics of each measure of central tendency, the combined application of the mode and percentage rank is selected as the best method for determining the energy consumption quota of public buildings. Finally the paper gave some policy proposals on the energy consumption quota to help achieve the goal of further energy conservation. - Highlights: ► We introduce the procedure of determining an energy consumption quota (ECQ). ► We illustrate the standardization process of the EUI by actual calculation on samples. ► Measures of central tendency are brought in to determine the ECQ. ► Combined application of the mode and percentage rank is the best method for the ECQ. ► Punitive or incentive measures for the ECQ are proposed.
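
The percentage-rank step of such a quota procedure is easy to sketch. The EUI figures below are hypothetical stand-ins for the paper's building samples, and setting the quota at the median is just one possible choice of central tendency.

```python
import numpy as np

def percentage_rank(values, v):
    """Fraction of sample EUIs at or below v."""
    values = np.asarray(values, float)
    return (values <= v).mean()

def quota_at_percentile(values, q):
    """Energy consumption quota set at the q-th percentile of the
    standardized EUI sample (q=50 gives the median)."""
    return float(np.percentile(values, q))

# hypothetical standardized EUIs (kWh/m^2 per year) for 10 buildings
euis = np.array([78, 85, 90, 92, 95, 98, 102, 110, 121, 140], float)
quota = quota_at_percentile(euis, 50)
```

Buildings whose EUI exceeds the quota would then be candidates for the punitive measures mentioned in the highlights, and those below it for incentives.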

  8. Statistical analysis of MRI-only based dose planning

    DEFF Research Database (Denmark)

    Korsholm, M. E.; Waring, L. W.; Paulsen, Rasmus Reinhold

    2012-01-01

    . MRIonly based RT eliminates these errors and reduce the time and costs of a CT scan. The aim of this study is to investigate the dosimetric differences of a treatment plan when the dose calculation is based on MRI as compared to CT. Materials and Methods: Four diagnostic groups are investigated; 12...... as a clinically approved treatment plan. The treatment planning software is Eclipse v.10.0 (Varian Medical Systems). The dose calculation based on MRI data is evaluated in two different ways; a homogeneous density assigned MRI (MRI unit), where the entire body is assigned an HU equal to water and a heterogeneous...... density assigned MRI (MRI bulk) where in addition the CT segmented bone is transferred to the MRI and assigned an age dependent HU based on ICRU report 46. The CT based clinical treatment plan and structure set are registered to the corresponding MRI unit and MRI bulk. The body is outlined on both the MRI...

  9. Statistical analysis of questionnaires a unified approach based on R and Stata

    CERN Document Server

    Bartolucci, Francesco; Gnaldi, Michela

    2015-01-01

    Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing.The book covers the foundations of classical test theory (CTT), test reliability, va

  10. Statistically based uncertainty assessments in nuclear risk analysis

    International Nuclear Information System (INIS)

    Spencer, F.W.; Diegert, K.V.; Easterling, R.G.

    1987-01-01

    Over the last decade, the problems of estimation and uncertainty assessment in probabilistic risk assessments (PRAs) have been addressed in a variety of NRC and industry-sponsored projects. These problems have received attention because of a recognition that major uncertainties in risk estimation exist, which can be reduced by collecting more and better data and other information, and because of a recognition that better methods for assessing these uncertainties are needed. In particular, a clear understanding of the nature and magnitude of various sources of uncertainty is needed to facilitate decision-making on possible plant changes and research options. Recent PRAs have employed methods of probability propagation, sometimes involving the use of Bayes Theorem, and intended to formalize the use of ''engineering judgment'' or ''expert opinion.'' All sources, or feelings, of uncertainty are expressed probabilistically, so that uncertainty analysis becomes simply a matter of probability propagation. Alternatives to forcing a probabilistic framework at all stages of a PRA are, however, a major concern in this paper.

  11. Compliance strategy for statistically based neutron overpower protection safety analysis methodology

    International Nuclear Information System (INIS)

    Holliday, E.; Phan, B.; Nainer, O.

    2009-01-01

    The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)

  12. Methodology of comparative statistical analysis of Russian industry based on cluster analysis

    Directory of Open Access Journals (Sweden)

    Sergey S. Shishulin

    2017-01-01

    Full Text Available The article is devoted to exploring the possibilities of applying multidimensional statistical analysis in the study of industrial production, on the basis of comparing its growth rates and structure with other developed and developing countries of the world. The purpose of this article is to determine the optimal set of statistical methods, and the results of their application to industrial production data, that would best support analysis of the results. The data include such indicators as output, gross value added, the number of employees and other indicators of the system of national accounts and operational business statistics. The objects of observation are the industries of the countries of the Customs Union, the United States, Japan and Europe in 2005-2015. The research tools range from the simplest methods of transformation and of graphical and tabular visualization of data to methods of multivariate statistical analysis. In particular, based on a specialized software package (SPSS), the principal components method, discriminant analysis, hierarchical cluster analysis (Ward’s method) and k-means were applied. The application of the principal components method to the initial data makes it possible to substantially and effectively reduce the initial space of industrial production data. Thus, for example, in analyzing the structure of industrial production, the reduction was from fifteen industries to three basic, well-interpreted factors: relatively extractive industries (with a low degree of processing), high-tech industries, and consumer goods (medium-technology sectors). At the same time, comparison of the results of applying cluster analysis to the initial data and to the data obtained on the basis of the principal components method established that clustering industrial production data on the basis of the new factors significantly improves the results of clustering. As a result of analyzing the parameters of
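
The pipeline the article describes (dimensionality reduction by principal components, then clustering in the reduced space) can be sketched with numpy alone. The two synthetic "industry structure" profiles, the deterministic farthest-pair seeding, and the sample sizes are all assumptions made for this illustration; the article itself used SPSS with Ward's method as well.

```python
import numpy as np

def pca_scores(X, k):
    """k-dimensional principal component scores via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means; centres are seeded with the farthest pair of
    points so this sketch is deterministic (k = 2 seeding)."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    i, j = np.unravel_index(np.argmax(d2), d2.shape)
    centres = X[[i, j]].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = X[labels == c].mean(axis=0)
    return labels

rng = np.random.default_rng(2)
profile_a = np.linspace(1.0, 0.0, 15)   # extraction-heavy output structure
profile_b = np.linspace(0.0, 1.0, 15)   # high-tech-heavy output structure
X = np.vstack([profile_a + rng.normal(0, 0.1, (4, 15)),
               profile_b + rng.normal(0, 0.1, (4, 15))])
scores = pca_scores(X, 3)               # fifteen industries -> three factors
labels = kmeans(scores, 2)
```

Clustering in the three-factor score space rather than the raw fifteen-industry space is exactly the step the article credits with improving the clustering results.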

  13. A Statistical Parameter Analysis and SVM Based Fault Diagnosis Strategy for Dynamically Tuned Gyroscopes

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Gyro fault diagnosis plays a critical role in inertial navigation systems for higher reliability and precision. A new fault diagnosis strategy based on statistical parameter analysis (SPA) and a support vector machine (SVM) classification model was proposed for dynamically tuned gyroscopes (DTG). SPA, a kind of time domain analysis approach, was introduced to compute a set of statistical parameters of the vibration signal as the state features of the DTG, with which the SVM model, a novel learning machine based on statistical learning theory (SLT), was constructed to train on and identify the working state of the DTG. The experimental results verify that the proposed diagnostic strategy can simply and effectively extract the state features of the DTG, that it outperforms the radial-basis function (RBF) neural network based diagnostic method, and that it can more reliably and accurately diagnose the working state of the DTG.
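
The SPA feature-extraction step can be sketched as a handful of time-domain statistics computed from a vibration signal. The particular parameter set, the synthetic "healthy" sine-plus-noise signal, and the sparse impact spikes standing in for a fault are assumptions for illustration; the resulting feature vectors would then be fed to an SVM classifier.

```python
import numpy as np

def spa_features(signal):
    """Time-domain statistical parameters of a vibration signal, of the
    kind used as state features: mean, std, RMS, skewness, kurtosis,
    and crest factor."""
    x = np.asarray(signal, float)
    mean = x.mean()
    std = x.std()
    rms = np.sqrt((x ** 2).mean())
    skew = ((x - mean) ** 3).mean() / std ** 3
    kurt = ((x - mean) ** 4).mean() / std ** 4
    crest = np.abs(x).max() / rms
    return np.array([mean, std, rms, skew, kurt, crest])

rng = np.random.default_rng(3)
healthy = np.sin(np.linspace(0, 40 * np.pi, 2000)) + rng.normal(0, 0.05, 2000)
faulty = healthy + (rng.random(2000) < 0.01) * 3.0   # sparse impact spikes
f_ok, f_bad = spa_features(healthy), spa_features(faulty)
```

Impulsive faults typically raise kurtosis and crest factor while leaving the mean nearly unchanged, which is why such parameters separate working states well.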

  14. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  15. Three-Dimensional Assembly Tolerance Analysis Based on the Jacobian-Torsor Statistical Model

    Directory of Open Access Journals (Sweden)

    Peng Heping

    2017-01-01

    Full Text Available The unified Jacobian-Torsor model has been developed for deterministic (worst case) tolerance analysis. This paper presents a comprehensive model for performing statistical tolerance analysis by integrating the unified Jacobian-Torsor model and Monte Carlo simulation. In this model, an assembly is sub-divided into surfaces, and the Small Displacements Torsor (SDT) parameters are used to express the relative position between any two surfaces of the assembly. Then, a 3D dimension-chain can be created by using a surface graph of the assembly, and the unified Jacobian-Torsor model is developed based on the effect of each functional element on the whole functional requirements of the product. Finally, Monte Carlo simulation is implemented for the statistical tolerance analysis. A numerical example is given to demonstrate the capability of the proposed method in handling three-dimensional assembly tolerance analysis.
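
The Monte Carlo step can be illustrated on a deliberately simplified linear stack: a one-dimensional chain with a trivial "Jacobian" of sensitivities rather than the full torsor machinery. The part count, tolerances, and the normal-with-3-sigma-at-tolerance assumption are all hypothetical.

```python
import numpy as np

# Hypothetical 3-part linear stack: functional requirement FR = J @ deviations,
# where J holds the sensitivity of FR to each part's deviation.
J = np.array([1.0, -1.0, 1.0])            # sensitivities
tol = np.array([0.05, 0.03, 0.04])        # +/- tolerance of each part (mm)

rng = np.random.default_rng(4)
n = 100_000
# Each deviation modelled as normal with its tolerance band at 3 sigma
dev = rng.normal(0.0, tol / 3.0, size=(n, 3))
fr = dev @ J

worst_case = np.abs(J) @ tol              # deterministic (worst case) band
stat_3sigma = 3 * fr.std()                # statistical (Monte Carlo) band
```

The statistical band comes out close to the root-sum-square value (about 0.071 mm here) and well inside the 0.12 mm worst-case band, which is the usual argument for statistical over worst-case tolerance analysis.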

  16. Multivariate meta-analysis: a robust approach based on the theory of U-statistic.

    Science.gov (United States)

    Ma, Yan; Mazumdar, Madhu

    2011-10-30

    Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between the outcomes. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analysis with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of U-statistic and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on U-statistics for testing significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.

  17. A robust statistical method for association-based eQTL analysis.

    Directory of Open Access Journals (Sweden)

    Ning Jiang

    Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association study (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for the influence, either through predicting the structure parameters or correcting inflation in the test statistic due to the stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS from population structure by incorporating a control marker into testing for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for a varying effect of population stratification on different regions of the genome under study. Utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers an improved statistical power for detecting genuine genetic association in subpopulations and an effective control of spurious associations stemming from population structure when compared with two other popularly implemented methods in the GWAS literature.

  18. No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.

    Science.gov (United States)

    Li, Xuelong; Guo, Qun; Lu, Xiaoqiang

    2016-05-13

    It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly considered simultaneously. In this paper, we propose a new NR-VQA metric based on the spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features are firstly extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are used to predict the perceived video quality via the efficient linear support vector regression (SVR) model afterwards. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust to different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with the state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.

  19. Operational statistical analysis of the results of computer-based testing of students

    Directory of Open Access Journals (Sweden)

    Виктор Иванович Нардюжев

    2018-12-01

    Full Text Available The article is devoted to the issues of statistical analysis of results of computer-based testing for evaluation of educational achievements of students. The issues are relevant due to the fact that computer-based testing in Russian universities has become an important method for evaluation of educational achievements of students and of the quality of the modern educational process. Usage of modern methods and programs for statistical analysis of results of computer-based testing and assessment of the quality of developed tests is an actual problem for every university teacher. The article shows how the authors solve this problem using their own program “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, formation of queries, generation of reports, lists, and matrices of answers for statistical analysis of the quality of test items. Methodology, experience and some results of its usage by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.

  20. A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data

    Directory of Open Access Journals (Sweden)

    Maria Vinaixa

    2012-10-01

    Full Text Available Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to that from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper reports on a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel on all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of considering or violating the mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumption of normality and homoscedasticity, or correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples.
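
Two pieces of such a univariate workflow, a per-feature test statistic and a multiple-testing correction, can be sketched as follows. This is a generic illustration, not the paper's workflow: the Welch statistic, the Benjamini-Hochberg adjustment, and the hypothetical per-feature p-values are all assumptions.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)

def benjamini_hochberg(p):
    """Benjamini-Hochberg adjusted p-values for multiple testing
    across all detected features."""
    p = np.asarray(p, float)
    m = len(p)
    order = np.argsort(p)
    adj = p[order] * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

raw_p = np.array([0.001, 0.012, 0.030, 0.045, 0.300, 0.520])  # hypothetical per-feature p-values
q = benjamini_hochberg(raw_p)
```

Features whose adjusted p-value falls below the chosen false-discovery-rate threshold would be the ones ranked for follow-up MS/MS experiments.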

  1. Monte Carlo based statistical power analysis for mediation models: methods and software.

    Science.gov (United States)

    Zhang, Zhiyong

    2014-12-01

    The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
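
The core loop of such a power analysis, simulating datasets from an assumed mediation model and counting how often the bootstrap test detects the indirect effect, can be sketched as follows. This is a simplified sketch of the approach, not the bmem implementation: the effect sizes, sample size, normal errors, and replication counts are all illustrative assumptions.

```python
import numpy as np

def fit_mediation(x, m, y):
    """OLS estimates of a (x -> m) and b (m -> y, controlling for x)."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([m, x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, coef[0]

def mediation_power(a=0.4, b=0.4, c=0.2, n=100, n_mc=80, n_boot=150, seed=5):
    """Monte Carlo power of the percentile-bootstrap test of the
    indirect effect a*b for a simple mediation model."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_mc):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + c * x + rng.normal(size=n)
        ab = np.empty(n_boot)
        for k in range(n_boot):
            idx = rng.integers(0, n, n)          # resample cases
            ah, bh = fit_mediation(x[idx], m[idx], y[idx])
            ab[k] = ah * bh
        lo, hi = np.percentile(ab, [2.5, 97.5])
        hits += (lo > 0) or (hi < 0)             # CI excludes zero
    return hits / n_mc

power = mediation_power()
```

Because the indirect effect a*b is resampled directly, no normality of its sampling distribution is assumed, which is the advantage of the bootstrap test over the Sobel test noted in the abstract.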

  2. Hydration sites of unpaired RNA bases: a statistical analysis of the PDB structures

    Directory of Open Access Journals (Sweden)

    Carugo Oliviero

    2011-10-01

    Full Text Available Background: Hydration is crucial for RNA structure and function. X-ray crystallography is the most commonly used method to determine RNA structures and hydration and, therefore, statistical surveys are based on crystallographic results, the number of which is quickly increasing. Results: A statistical analysis of the water molecule distribution in high-resolution X-ray structures of unpaired RNA nucleotides showed that: different bases have the same penchant to be surrounded by water molecules; clusters of water molecules indicate possible hydration sites, which, in some cases, match those of the major and minor grooves of RNA and DNA double helices; complex hydrogen bond networks characterize the solvation of the nucleotides, resulting in a significant rigidity of the base and its surrounding water molecules. Interestingly, the hydration sites around unpaired RNA bases do not match, in general, the positions that are occupied by the second nucleotide when the base-pair is formed. Conclusions: The hydration sites around unpaired RNA bases were found. They do not replicate the atom positions of complementary bases in the Watson-Crick pairs.

  3. Development of information systems for analyzing a company's financial state based on the expert-statistical approach

    Directory of Open Access Journals (Sweden)

    M. N. Ivliev

    2016-01-01

    Full Text Available The work is devoted to methods for analyzing a company's financial condition, including aggregated ratings. It is proposed to use a generalized solvency and liquidity indicator and a capital structure composite index. Mathematically, the generalized index is a weighted sum of variables-characteristics, with weighting factors characterizing the relative importance of the individual characteristics. It is proposed to select the significant features from a set of standard financial ratios calculated from enterprises' balance sheets. To obtain the values of the weighting factors, it is proposed to use one of the expert-statistical approaches, the analytic hierarchy process. The method is as follows: the most important characteristic is chosen, and the experts then determine the degree of preference relative to this main feature on a linguistic scale. Further, a matrix of pairwise comparisons is compiled from the assigned ranks, which characterizes the relative importance of the attributes. The required coefficients are determined as the elements of the vector of priorities, which is the principal eigenvector of the matrix of paired comparisons. The paper proposes a mechanism for finding the fields for rating-number analysis. In addition, the paper proposes a method for the statistical evaluation of the balance sheets of various companies by calculating mutual correlation matrices. Based on the considered mathematical methods for determining quantitative characteristics of financial and economic activities, algorithms, information support and software were developed that allow realizing different systems of economic analysis.
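
The analytic-hierarchy-process step, extracting the priority vector from a pairwise comparison matrix, can be sketched in a few lines. The three financial ratios and the Saaty-scale comparison values below are hypothetical, not taken from the paper.

```python
import numpy as np

def ahp_priorities(A, iters=100):
    """Priority weights as the principal eigenvector of the pairwise
    comparison matrix, computed by power iteration and normalized to sum 1."""
    A = np.asarray(A, float)
    w = np.ones(len(A)) / len(A)
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

# Hypothetical Saaty-scale comparisons of three financial ratios, e.g.
# the first ratio judged 3x as important as the second, 5x as the third.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])
weights = ahp_priorities(A)
```

The resulting weights would then multiply the standardized financial ratios to form the generalized solvency-and-liquidity indicator described in the abstract.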

  4. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category; and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  5. Statistical analysis of 4 types of neck whiplash injuries based on classical meridian theory.

    Science.gov (United States)

    Chen, Yemeng; Zhao, Yan; Xue, Xiaolin; Li, Hui; Wu, Xiuyan; Zhang, Qunce; Zheng, Xin; Wang, Tianfang

    2015-01-01

    As one component of the Chinese medicine meridian system, the meridian sinew (Jingjin, (see text), tendino-musculo) is specially described as being for acupuncture treatment of the musculoskeletal system because of its dynamic attributes and tender point correlations. In recent decades, the therapeutic importance of the sinew meridian has been re-evaluated in clinical application. Based on this theory, the authors have established therapeutic strategies of acupuncture treatment for Whiplash-Associated Disorders (WAD) by categorizing four types of neck symptom presentations. The advantage of this new system is that it makes it much easier for the clinician to find effective acupuncture points. This study attempts to demonstrate the significance of the proposed therapeutic strategies by analyzing data collected from a clinical survey of various WAD presentations using non-supervised statistical methods, such as correlation analysis, factor analysis, and cluster analysis. The clinical survey data successfully verified discrete characteristics of the four neck syndromes, based upon range of motion (ROM) and tender point location findings. A summary of the relationships among the symptoms of the four neck syndromes showed statistically significant correlation coefficients (P < 0.01 or P < 0.05), especially with regard to ROM. Furthermore, factor and cluster analyses resulted in a total of 11 categories of general symptoms, which implies that the syndrome factors are more related to the Liver, as originally described in classical theory. The hypothesis of meridian sinew syndromes in WAD is clearly supported by the statistical analysis of the clinical data. This new finding should be beneficial in improving therapeutic outcomes.
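
    The non-supervised pipeline named above (correlation analysis followed by clustering of symptoms) can be sketched generically as below. All data are synthetic, the two-block structure is an invented stand-in for distinct syndrome patterns, and this is not the study's actual survey analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
# Hypothetical survey matrix: 60 patients x 8 symptom scores; two blocks
# of correlated symptoms mimic two distinct neck-syndrome patterns.
base1, base2 = rng.normal(size=(2, 60))
symptoms = np.column_stack(
    [base1 + rng.normal(0, 0.3, 60) for _ in range(4)] +
    [base2 + rng.normal(0, 0.3, 60) for _ in range(4)])

# Correlation analysis among symptoms, then hierarchical clustering of
# the symptoms on a correlation distance (1 - r)
r = np.corrcoef(symptoms.T)
d = squareform(1 - r, checks=False)          # condensed distance matrix
groups = fcluster(linkage(d, method="average"), t=2, criterion="maxclust")

# Each block of symptoms should land in its own cluster
print(len(set(groups[:4])) == 1, len(set(groups[4:])) == 1,
      groups[0] != groups[4])
```

A real analysis would replace the synthetic matrix with the ROM and tender-point survey variables and inspect the dendrogram rather than forcing two clusters.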

  6. Equipment Maintenance management support system based on statistical analysis of maintenance history data

    International Nuclear Information System (INIS)

    Shimizu, S.; Ando, Y.; Morioka, T.

    1990-01-01

    Plant maintenance is becoming increasingly important with the growth in the number of nuclear power stations and in plant operating time. Various requirements for plant maintenance have been proposed, such as countermeasures for equipment degradation and reduction of maintenance costs while maintaining plant reliability and productivity. For this purpose, plant maintenance programs should be improved based on equipment reliability estimated from field data. In order to meet these requirements, it is planned to develop an equipment maintenance management support system for nuclear power plants based on statistical analysis of equipment maintenance history data. The major difference between this proposed method and current similar methods is that it evaluates not only failure data but also maintenance data, which includes normal termination data as well as data on partial degradation or functional disorder of equipment and parts. It is therefore possible to utilize these field data to improve maintenance schedules and to evaluate actual equipment and parts reliability under the current maintenance schedule. In the present paper, the authors describe the objectives of this system, an outline of the system and its functions, and the basic technique for collecting and managing maintenance history data for statistical analysis. Results of feasibility tests using simulated maintenance history data show that this system can provide useful information for maintenance and design enhancement.

  7. ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Z.-G. Zhou

    2016-06-01

    Full Text Available Normally, the status of land cover is inherently dynamic, changing continuously on a temporal scale. However, disturbances or abnormal changes of land cover, caused by events such as forest fire, flood, deforestation, and plant disease, occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is important for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most present methods only label the detection results with "Change/No change", while few focus on estimating the reliability (or confidence level) of the detected disturbances. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite image time series with an estimation error of less than 5% and an overall accuracy of up to 90%.
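
    Step (3), expressing the reliability of a detected disturbance as a confidence level, can be sketched as follows. This is a minimal illustration that substitutes a single trend-plus-harmonic least-squares fit for the full BFAST segmentation; the function name and the synthetic series are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy import stats

def disturbance_reliability(y_hist, t_hist, y_new, t_new, period=12.0):
    """Fit a trend + one-harmonic season model to the history, forecast
    the new observation, and report disturbance reliability as the
    two-sided confidence level at which the observation leaves the
    prediction interval."""
    # Design matrix: intercept, linear trend, one seasonal harmonic
    X = np.column_stack([np.ones_like(t_hist), t_hist,
                         np.sin(2 * np.pi * t_hist / period),
                         np.cos(2 * np.pi * t_hist / period)])
    beta, *_ = np.linalg.lstsq(X, y_hist, rcond=None)
    sigma = (y_hist - X @ beta).std(ddof=X.shape[1])
    x_new = np.array([1.0, t_new, np.sin(2 * np.pi * t_new / period),
                      np.cos(2 * np.pi * t_new / period)])
    z = abs(y_new - x_new @ beta) / sigma
    return 2 * stats.norm.cdf(z) - 1   # CL at which y_new exits the CI

rng = np.random.default_rng(0)
t = np.arange(120, dtype=float)       # ten years of monthly observations
y = 0.5 + 0.2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.05, t.size)
rel = disturbance_reliability(y, t, y_new=-0.5, t_new=120.0)  # abrupt drop
print(rel > 0.99)
```

An observation close to the forecast would yield a reliability near zero, so the same number doubles as a "Change/No change" score.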

  8. Damping layout optimization for ship's cabin noise reduction based on statistical energy analysis

    Directory of Open Access Journals (Sweden)

    WU Weiguo

    2017-08-01

    Full Text Available An optimization analysis study concerning the damping control of ship's cabin noise was carried out in order to improve the noise reduction effect and reduce the weight of damping. Based on the Statistical Energy Analysis (SEA) method, a theoretical deduction and numerical analysis of the first-order sensitivity of the A-weighted sound pressure level with respect to the damping loss factor of each subsystem were carried out. On this basis, a mathematical optimization model was proposed and an optimization program developed. Next, secondary development of the VA One software was implemented using MATLAB, and a cabin noise damping control layout optimization system was established. Finally, the optimization model of the ship was constructed and numerical experiments of damping control optimization were conducted. The damping installation region was divided into five parts with different damping thicknesses. The total weight of damping was set as the objective function and the A-weighted sound pressure level of the target cabin was set as a constraint condition. The best damping thickness was obtained through the optimization program, and the total damping weight was reduced by 60.4%. The results show that the noise reduction effect per unit weight of damping is significantly improved through the optimization method. This research solves the installation position and thickness selection problems in the acoustic design of damping control, providing a reliable analysis method and guidance for the design.
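
    The optimization formulation (minimize total damping weight subject to a sound-pressure-level constraint over five regions) can be sketched with a generic constrained optimizer. The linearized SPL model, coefficients, areas and limits below are invented stand-ins for the SEA response; this is not the paper's VA One/MATLAB system.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the SEA response: A-weighted SPL in the target cabin
# falls off with damping thickness t (mm) in each of five regions.
area = np.array([8.0, 6.0, 5.0, 4.0, 3.0])   # m^2 per region (assumed)
sens = np.array([0.9, 0.7, 0.6, 0.4, 0.3])   # dB reduction per mm (assumed)

def spl(t):
    return 75.0 - sens @ t                   # dB(A), linearized model

def weight(t):
    return float(area @ t) * 1.2             # kg, assuming 1.2 kg per m^2*mm

# Minimize total damping weight subject to SPL <= 70 dB(A) in the cabin
res = minimize(weight, x0=np.full(5, 5.0),
               bounds=[(0.0, 10.0)] * 5,
               constraints={"type": "ineq", "fun": lambda t: 70.0 - spl(t)})
print(res.success, spl(res.x) <= 70.001)
```

In this linear toy the optimizer concentrates damping in the region with the best dB-per-kilogram ratio, which is the qualitative behavior the abstract reports.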

  9. Analysis of the interannual variability of tropical cyclones striking the California coast based on statistical downscaling

    Science.gov (United States)

    Mendez, F. J.; Rueda, A.; Barnard, P.; Mori, N.; Nakajo, S.; Espejo, A.; del Jesus, M.; Diez Sierra, J.; Cofino, A. S.; Camus, P.

    2016-02-01

    Hurricanes hitting California have a very low occurrence probability due to typically cool ocean temperatures and westward tracks. However, the damage associated with these improbable events would be dramatic in Southern California, and understanding the oceanographic and atmospheric drivers is of paramount importance for coastal risk management in present and future climates. A statistical analysis of the historical events is very difficult due to the limited resolution of the available atmospheric and oceanographic forcing data. In this work, we propose a combination of: (a) statistical downscaling methods (Espejo et al., 2015); and (b) a synthetic stochastic tropical cyclone (TC) model (Nakajo et al., 2014). To build the statistical downscaling model, Y=f(X), we apply a combination of principal component analysis and the k-means classification algorithm to find representative patterns in a potential TC index derived from large-scale SST fields in the Eastern Central Pacific (predictor X) and the associated tropical cyclone occurrence (predictand Y). SST data come from NOAA Extended Reconstructed SST V3b, providing information from 1854 to 2013 on a 2.0 degree x 2.0 degree global grid. As data on the historical occurrence and paths of tropical cyclones are scarce, we apply a stochastic TC model based on a Monte Carlo simulation of the joint distribution of track, minimum sea level pressure and translation speed of the historical events in the Eastern Central Pacific Ocean. Results will show the ability of the approach to explain seasonal-to-interannual variability of the predictor X, which is clearly related to the El Niño Southern Oscillation. References: Espejo, A., Méndez, F.J., Diez, J., Medina, R., Al-Yahyai, S. (2015) Seasonal probabilistic forecasting of tropical cyclone activity in the North Indian Ocean, Journal of Flood Risk Management, DOI: 10.1111/jfr3.12197. Nakajo, S., N. Mori, T. Yasuda, and H. Mase (2014) Global Stochastic Tropical Cyclone Model Based on
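
    The predictor-classification step (PCA followed by k-means) can be sketched as below. The random matrix is a stand-in for the ERSST V3b anomaly fields, and all array sizes and cluster counts are illustrative assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# Stand-in for monthly SST anomaly fields (months x grid points); real
# inputs would be the NOAA ERSST V3b fields on the 2x2 degree grid.
sst = rng.normal(size=(160, 500))

# Compress the fields into a few leading principal components ...
pcs = PCA(n_components=5).fit_transform(sst)
# ... then group months into representative large-scale SST patterns
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)

# TC occurrence (the predictand Y) would then be tabulated per pattern
print(pcs.shape, labels.min() >= 0, labels.max() <= 3)
```

Each month is assigned to one pattern, so conditional TC occurrence statistics per pattern give the downscaling relationship Y=f(X).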

  10. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  11. DHLAS: A web-based information system for statistical genetic analysis of HLA population data.

    Science.gov (United States)

    Thriskos, P; Zintzaras, E; Germenis, A

    2007-03-01

    DHLAS (database HLA system) is a user-friendly, web-based information system for the analysis of human leukocyte antigen (HLA) data from population studies. DHLAS has been developed using JAVA and the R system; it runs on a Java Virtual Machine, and its web-based user interface is powered by the servlet engine TOMCAT. It utilizes STRUTS, a Model-View-Controller framework, and uses several GNU packages to perform several of its tasks. The database engine it relies upon for fast access is MySQL, but others can be used as well. The system estimates metrics, performs statistical testing and produces graphs required for HLA population studies: (i) Hardy-Weinberg equilibrium (calculated using both asymptotic and exact tests), (ii) genetic distances (Euclidean or Nei), (iii) phylogenetic trees using the unweighted pair group method with averages and the neighbor-joining method, (iv) linkage disequilibrium (pairwise and overall, including variance estimations), (v) haplotype frequencies (estimated using the expectation-maximization algorithm) and (vi) discriminant analysis. The main merit of DHLAS is the incorporation of a database, so that data can be stored and manipulated along with the integrated genetic data analysis procedures. In addition, it has an open architecture allowing the inclusion of other functions and procedures.
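
    As an illustration of item (i), the asymptotic Hardy-Weinberg test for a biallelic locus reduces to a one-degree-of-freedom chi-square statistic on observed genotype counts. This generic sketch is not DHLAS's actual implementation; the function name and counts are hypothetical.

```python
from scipy.stats import chi2

def hwe_chi2(n_AA, n_Aa, n_aa):
    """Asymptotic chi-square test of Hardy-Weinberg equilibrium for a
    biallelic locus, from observed genotype counts."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)          # allele frequency of A
    q = 1 - p
    observed = [n_AA, n_Aa, n_aa]
    expected = [n * p * p, 2 * n * p * q, n * q * q]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

# Counts in exact HWE proportions (p = q = 0.5) give a zero statistic
stat, pval = hwe_chi2(50, 100, 50)
print(stat, pval)
```

For small samples the exact test the abstract also mentions is preferable, since the chi-square approximation degrades with sparse genotype counts.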

  12. Statistical analysis tolerance using jacobian torsor model based on uncertainty propagation method

    Directory of Open Access Journals (Sweden)

    W Ghie

    2016-04-01

    Full Text Available One risk inherent in the use of assembly components is that the behaviour of these components is discovered only at the moment an assembly is being carried out. The objective of our work is to enable designers to use known component tolerances as parameters in models that can be used to predict properties at the assembly level. In this paper we present a statistical approach to assemblability evaluation, based on tolerance and clearance propagations. This new statistical analysis method for tolerance is based on the Jacobian-Torsor model and the uncertainty measurement approach. We show how this can be accomplished by modeling the distribution of manufactured dimensions through applying a probability density function. By presenting an example we show how statistical tolerance analysis should be used in the Jacobian-Torsor model. This work is supported by previous efforts aimed at developing a new generation of computational tools for tolerance analysis and synthesis, using the Jacobian-Torsor approach. This approach is illustrated on a simple three-part assembly, demonstrating the method's capability in handling three-dimensional geometry.
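
    The core idea of statistical tolerance analysis via a probability density function can be sketched with a plain Monte Carlo stack-up. This hypothetical one-dimensional three-part example stands in for the full three-dimensional Jacobian-Torsor formulation; all dimensions and tolerances are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Hypothetical three-part stack: each part nominally 10 mm with a
# +/-0.1 mm tolerance, modelled as a normal pdf (tolerance = 3 sigma).
parts = rng.normal(loc=10.0, scale=0.1 / 3, size=(N, 3))

# Functional requirement at the assembly level: clearance in a
# 30.5 mm housing must remain positive.
gap = 30.5 - parts.sum(axis=1)

p_fit = (gap > 0).mean()            # probability the assembly still fits
print(round(float(gap.mean()), 2), p_fit > 0.999)
```

The same sampling scheme generalizes to the Jacobian-Torsor model by propagating each torsor's small-displacement components instead of scalar lengths.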

  13. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  14. Rapid thyroid dysfunction screening based on serum surface-enhanced Raman scattering and multivariate statistical analysis

    Science.gov (United States)

    Tian, Dayong; Lü, Guodong; Zhai, Zhengang; Du, Guoli; Mo, Jiaqing; Lü, Xiaoyi

    2018-01-01

    In this paper, serum surface-enhanced Raman scattering and multivariate statistical analysis are used to investigate a rapid screening technique for thyroid function diseases. The detection of thyroid function has become increasingly important, and a rapid, portable detection method is urgently needed. Our experimental results show that, by using the Silmeco-based enhanced Raman signal, the signal strength greatly increases and the characteristic peaks appear clearly. It is also observed that the Raman spectra of serum from subjects with normal and abnormal thyroid function differ significantly. Principal component analysis (PCA) combined with linear discriminant analysis (LDA) was used to diagnose thyroid dysfunction, and the diagnostic accuracy was 87.4%. The use of serum surface-enhanced Raman scattering technology combined with PCA-LDA shows good diagnostic performance for the rapid detection of thyroid function. By means of Raman technology, it is expected that a portable device for the rapid detection of thyroid function will be developed.
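
    The PCA-LDA classification scheme can be sketched with scikit-learn. The spectra below are synthetic stand-ins (random noise plus an elevated marker band for the "abnormal" class); the band location, component count and sample sizes are assumptions, not the study's data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Synthetic stand-ins for preprocessed SERS spectra: 80 samples x 100
# Raman-shift bins; "abnormal" spectra get an elevated marker band.
X = rng.normal(size=(80, 100))
y = np.repeat([0, 1], 40)                  # 0 = normal, 1 = abnormal
X[y == 1, 40:60] += 1.5

# PCA for dimensionality reduction, then LDA for classification
clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5).mean()
print(acc > 0.9)
```

Running PCA inside the cross-validation pipeline, as here, keeps the dimensionality reduction from leaking test-fold information into training.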

  15. Solution identification and quantitative analysis of fiber-capacitive drop analyzer based on multivariate statistical methods

    Science.gov (United States)

    Chen, Zhe; Qiu, Zurong; Huo, Xinming; Fan, Yuming; Li, Xinghua

    2017-03-01

    A fiber-capacitive drop analyzer is an instrument which monitors a growing droplet to produce a capacitive opto-tensiotrace (COT). Each COT is an integration of fiber light intensity signals and capacitance signals and can reflect the unique physicochemical properties of a liquid. In this study, we propose a method for solution identification and concentration quantification based on multivariate statistical methods. Eight characteristic values are extracted from each COT. A series of COT characteristic values of training solutions at different concentrations composes a data library for that kind of solution. A two-stage linear discriminant analysis is applied to analyze the different solution libraries and establish discriminant functions. Test solutions can be discriminated by these functions. After determining the variety of a test solution, the Spearman correlation test and principal component analysis are used to filter the eight characteristic values and reduce their dimensionality, producing a new representative parameter. A cubic spline interpolation function is built between the parameters and concentrations, based on which we can calculate the concentration of the test solution. Methanol, ethanol, n-propanol, and saline solutions are taken as experimental subjects in this paper. For each solution, nine or ten different concentrations are chosen as the standard library, and the other two concentrations compose the test group. Using the methods described above, all eight test solutions are correctly identified and the average relative error of the quantitative analysis is 1.11%. The proposed method is feasible: it enlarges the applicable scope of recognizing liquids based on the COT and improves the precision of concentration quantification as well.
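
    The final calibration step, mapping the representative parameter back to a concentration via a cubic spline, can be sketched as below. The parameter values are invented for illustration; a real calibration would use the PCA-derived parameter measured on the standard library.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical calibration: a representative COT parameter measured for
# standard solutions of known concentration (vol %); the parameter must
# be monotone in concentration for the inverse mapping to be well defined.
conc = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0, 50.0])
param = np.array([1.00, 1.12, 1.23, 1.41, 1.55, 1.66, 1.74])

# Build the spline from parameter -> concentration, as in the abstract
calib = CubicSpline(param, conc)

# Quantify an unknown sample from its measured parameter value;
# a training point maps back to its known concentration.
print(round(float(calib(1.23)), 6))
```

Between knots the spline interpolates smoothly, which is where the method's quantification precision for unseen concentrations comes from.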

  16. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  17. A Statistics-Based Material Property Analysis to Support TPS Characterization

    Science.gov (United States)

    Copeland, Sean R.; Cozmuta, Ioana; Alonso, Juan J.

    2012-01-01

    Accurate characterization of entry capsule heat shield material properties is a critical component in modeling and simulating Thermal Protection System (TPS) response in a prescribed aerothermal environment. The thermal decomposition of the TPS material during the pyrolysis and charring processes is poorly characterized and typically results in large uncertainties in the material properties used as inputs for ablation models. These material property uncertainties contribute to large design margins on flight systems and cloud reconstruction efforts for data collected during flight and ground testing, making revision of existing models for entry systems more challenging. The analysis presented in this work quantifies how material property uncertainties propagate through an ablation model and guides an experimental test regimen aimed at reducing these uncertainties and characterizing the dependencies between properties in the virgin and charred states for a Phenolic Impregnated Carbon Ablator (PICA) based TPS. A sensitivity analysis identifies how the high-fidelity model behaves in the expected flight environment, while a Monte Carlo based uncertainty propagation strategy is used to quantify the expected spread in the in-depth temperature response of the TPS. An examination of how perturbations to the input probability density functions affect output temperature statistics is accomplished using a Kriging response surface of the high-fidelity model. Simulations are based on the capsule configuration and aerothermal environments expected during the Mars Science Laboratory (MSL) entry sequence. We identify and rank the primary sources of uncertainty from material properties in a flight-relevant environment, show the dependence of those uncertainty contributors on spatial orientation and in-depth location, and quantify how sensitive the expected results are.
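
    A Monte Carlo propagation of one uncertain material property through a deliberately simple surrogate (not the actual ablation model) might look like this. The steady conduction formula, property values and spread are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# Hypothetical uncertain char-layer conductivity (W/m-K), 1-sigma = 10%
k = rng.normal(1.0, 0.10, size=N)
k = np.clip(k, 0.5, None)            # guard against non-physical draws

# Toy steady 1-D conduction surrogate for the in-depth temperature rise:
# dT = q * L / k, with assumed fixed heat flux q and thickness L
q, L = 5e4, 0.02                     # W/m^2, m
dT = q * L / k

# The output statistics quantify the expected spread in temperature
# response attributable to the conductivity uncertainty alone
print(900 < dT.mean() < 1100, dT.std() > 50)
```

A Kriging response surface, as used in the paper, serves the same role as this closed-form surrogate when each model evaluation is expensive.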

  18. GIS-based bivariate statistical techniques for groundwater potential analysis (an example of Iran)

    Science.gov (United States)

    Haghizadeh, Ali; Moghaddam, Davoud Davoudi; Pourghasemi, Hamid Reza

    2017-12-01

    Groundwater potential analysis provides a better comprehension of the hydrological settings of different regions. This study shows the capability of two GIS-based data-driven bivariate techniques, namely the statistical index (SI) and Dempster-Shafer theory (DST), to analyze groundwater potential in the Broujerd region of Iran. The research was done using 11 groundwater conditioning factors and 496 spring positions. Based on the groundwater potential maps (GPMs) of the SI and DST methods, 24.22% and 23.74% of the study area is covered by the poor zone of groundwater potential, and 43.93% and 36.3% of the Broujerd region is covered by the good and very good potential zones, respectively. The validation of the outcomes showed that the areas under the curve (AUC) of the SI and DST techniques are 81.23% and 79.41%, respectively, which indicates that the SI method performs slightly better than the DST technique. Therefore, the SI and DST methods are advantageous for analyzing groundwater capacity and scrutinizing the complicated relation between groundwater occurrence and groundwater conditioning factors, permitting investigation of both systemic and stochastic uncertainty. These techniques are very beneficial for groundwater potential analysis and can be practical for water-resource management experts.
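
    For one conditioning factor, the statistical index is commonly computed as the log ratio of the spring density in each factor class to the overall spring density; a sketch under hypothetical cell and spring counts (the class layout is invented, and the study's exact SI variant may differ):

```python
import numpy as np

def statistical_index(spring_counts, cell_counts):
    """Bivariate statistical index (SI): log of each class's spring
    density relative to the overall spring density of the map."""
    class_density = spring_counts / cell_counts
    overall = spring_counts.sum() / cell_counts.sum()
    return np.log(class_density / overall)

# Hypothetical factor (e.g. slope) with 4 classes: raster cells per
# class and observed springs falling in each class
cells = np.array([4000, 3000, 2000, 1000])
springs = np.array([40, 90, 60, 10])
si = statistical_index(springs, cells)
print(np.round(si, 3))   # positive = favourable class, negative = poor
```

Summing the SI values of all 11 factors cell by cell would then yield the groundwater potential map.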

  19. Degree-based statistic and center persistency for brain connectivity analysis.

    Science.gov (United States)

    Yoo, Kwangsun; Lee, Peter; Chung, Moo K; Sohn, William S; Chung, Sun Ju; Na, Duk L; Ju, Daheen; Jeong, Yong

    2017-01-01

    Brain connectivity analyses have been widely performed to investigate the organization and functioning of the brain, or to observe changes in neurological or psychiatric conditions. However, connectivity analysis inevitably introduces the problem of mass-univariate hypothesis testing. Although several cluster-wise correction methods have been suggested to address this problem and shown to provide high sensitivity, these approaches fundamentally have two drawbacks: the lack of spatial specificity (localization power) and the arbitrariness of an initial cluster-forming threshold. In this study, we propose a novel method, the degree-based statistic (DBS), which performs cluster-wise inference. DBS is designed to overcome the two above-mentioned shortcomings. From a network perspective, a few brain regions are of critical importance and considered to play pivotal roles in network integration. Following this notion, DBS defines a cluster as a set of edges that share one ending node. This definition enables the efficient detection of clusters and their center nodes. Furthermore, a new measure of a cluster, center persistency (CP), is introduced. The efficiency of DBS was demonstrated with a known "ground truth" simulation. DBS was then applied to two experimental datasets, where it successfully detected the persistent clusters. In conclusion, by adopting the graph theoretical concept of degree and borrowing the concept of persistence from algebraic topology, DBS can sensitively identify clusters with central nodes that play pivotal roles in an effect of interest. DBS is potentially applicable to a wide variety of cognitive or clinical situations and allows us to obtain statistically reliable and easily interpretable results. Hum Brain Mapp 38:165-181, 2017. © 2016 Wiley Periodicals, Inc.
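
    The DBS cluster definition (a set of suprathreshold edges sharing one ending node) can be sketched on an edgewise p-value matrix. The threshold, toy matrix and function name are illustrative, and the center-persistency inference over varying thresholds is omitted.

```python
import numpy as np

def dbs_clusters(pvals, alpha=0.001):
    """Sketch of the DBS cluster definition: threshold edgewise
    p-values, then treat each node's suprathreshold edges as one
    cluster, so the node's degree in the thresholded graph is its
    cluster size."""
    supra = (pvals < alpha) & ~np.eye(pvals.shape[0], dtype=bool)
    degrees = supra.sum(axis=1)
    center = int(degrees.argmax())   # most plausible center node
    return center, int(degrees[center])

# Toy 6-node connectivity test: node 2 is a hub of significant edges
p = np.ones((6, 6))
for j in (0, 1, 3, 4):
    p[2, j] = p[j, 2] = 1e-4
center, size = dbs_clusters(p)
print(center, size)
```

Sweeping alpha and tracking how long each center's cluster survives is, loosely, what the center-persistency measure adds on top of this.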

  20. Quantification of Solar Cell Failure Signatures Based on Statistical Analysis of Electroluminescence Images

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Parikh, Harsh; Hacke, Peter

    2017-01-01

    We demonstrate a method to quantify the extent of solar cell cracks, shunting, or damaged cell interconnects, present in crystalline silicon photovoltaic (PV) modules by statistical analysis of the electroluminescence (EL) intensity distributions of individual cells within the module. From the EL...... intensity distributions (ELID) of each cell, we calculated summary statistics such as standard deviation, median, skewness and kurtosis, and analyzed how they correlate with the magnitude of the solar cell degradation. We found that the dispersion of the ELID increases with the size and severity...

  1. Quantification of solar cell failure signatures based on statistical analysis of electroluminescence images

    DEFF Research Database (Denmark)

    Spataru, Sergiu; Parikh, Harsh; Benatto, Gisele Alves dos Reis

    2017-01-01

    We propose a method to identify and quantify the extent of solar cell cracks, shunting, or damaged cell interconnects, present in crystalline silicon photovoltaic (PV) modules by statistical analysis of the electroluminescence (EL) intensity distributions of individual cells within the module. From...... the EL intensity distributions (ELID) of each cell, we calculated summary statistics such as standard deviation, median, skewness and kurtosis, and analyzed how they correlate with the type of the solar cell degradation. We found that the dispersion of the ELID increases with the size and severity...
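
    The per-cell summary statistics of the EL intensity distribution (ELID) can be computed directly from pixel values. The two synthetic pixel populations below are invented to mimic a uniformly emitting cell versus one with a dark cracked region; they are not the paper's data.

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(5)

def elid_summary(cell_pixels):
    """Summary statistics of one cell's EL intensity distribution."""
    return {"std": float(np.std(cell_pixels)),
            "median": float(np.median(cell_pixels)),
            "skewness": float(skew(cell_pixels)),
            "kurtosis": float(kurtosis(cell_pixels))}

healthy = rng.normal(0.8, 0.05, 10_000)                   # uniform emission
cracked = np.concatenate([rng.normal(0.8, 0.05, 8_000),
                          rng.normal(0.3, 0.05, 2_000)])  # dark crack region

h, c = elid_summary(healthy), elid_summary(cracked)
# Dispersion grows, and the ELID skews left, as cell damage increases
print(c["std"] > h["std"], c["skewness"] < h["skewness"])
```

Correlating such summary statistics with independently graded damage severity is the quantification step the abstracts describe.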

  2. Automatic Fault Recognition of Photovoltaic Modules Based on Statistical Analysis of UAV Thermography

    Science.gov (United States)

    Kim, D.; Youn, J.; Kim, C.

    2017-08-01

    As a malfunctioning PV (photovoltaic) cell has a higher temperature than adjacent normal cells, it can be detected easily with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, taking the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules within each individual array automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.

  3. AUTOMATIC FAULT RECOGNITION OF PHOTOVOLTAIC MODULES BASED ON STATISTICAL ANALYSIS OF UAV THERMOGRAPHY

    Directory of Open Access Journals (Sweden)

    D. Kim

    2017-08-01

    Full Text Available As a malfunctioning PV (photovoltaic) cell has a higher temperature than adjacent normal cells, it can be detected easily with a thermal infrared sensor. However, inspecting large-scale PV power plants with a hand-held thermal infrared sensor is time-consuming. This paper presents an algorithm for automatically detecting defective PV panels using images captured with a thermal imaging camera from a UAV (unmanned aerial vehicle). The proposed algorithm uses statistical analysis of the thermal intensity (surface temperature) characteristics of each PV module, taking the mean intensity and standard deviation of each panel as parameters for fault diagnosis. One of the characteristics of thermal infrared imaging is that the larger the distance between sensor and target, the lower the measured temperature of the object. Consequently, a global detection rule using the mean intensity of all panels is not applicable in the fault detection algorithm. Therefore, a local detection rule based on the mean intensity and standard deviation range was developed to detect defective PV modules within each individual array automatically. The performance of the proposed algorithm was tested on three sample images, which verified a detection accuracy for defective panels of 97% or higher. In addition, as the proposed algorithm can adjust the range of threshold values for judging malfunction at the array level, the local detection rule is considered better suited for highly sensitive fault detection than a global detection rule.
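
    A minimal form of the local (per-array) detection rule might look like the following. The panel temperatures, the threshold multiplier k and the array size are hypothetical, and the real algorithm operates on per-pixel intensities rather than scalar panel temperatures.

```python
import numpy as np

def detect_faulty_panels(panel_means, k=3.0):
    """Local detection rule: flag panels whose mean thermal intensity
    deviates from the array mean of panel means by more than k times
    the array's standard deviation of panel means."""
    mu = panel_means.mean()
    sigma = panel_means.std()
    return np.abs(panel_means - mu) > k * sigma

# Hypothetical array of 20 panels; panel 7 is hot (malfunctioning)
temps = np.full(20, 32.0) + np.random.default_rng(9).normal(0, 0.3, 20)
temps[7] += 8.0
flags = detect_faulty_panels(temps, k=3.0)
print(bool(flags[7]), int(flags.sum()))
```

Because the statistics are recomputed per array, the rule is insensitive to the distance-dependent temperature offset between arrays that defeats a global threshold.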

  4. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Science.gov (United States)

    2011-01-01

    This paper contains the results of a concise statistical review analysis of a large number of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practice with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular mechanisms proposed in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers regarding the controlling mechanisms and influential parameters of heat transfer in nanofluids. PMID:21711932

  5. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Science.gov (United States)

    Sergis, Antonis; Hardalupas, Yannis

    2011-05-01

    This paper contains the results of a concise statistical review analysis of a large number of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practice with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular mechanisms proposed in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers regarding the controlling mechanisms and influential parameters of heat transfer in nanofluids.

  6. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Directory of Open Access Journals (Sweden)

    Sergis Antonis

    2011-01-01

    Full Text Available This paper contains the results of a concise statistical review analysis of a large number of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practice with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that, statistically, there exists a variable enhancement for the conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular mechanisms proposed in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers regarding the controlling mechanisms and influential parameters of heat transfer in nanofluids.

  7. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    Science.gov (United States)

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and cause false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic into an R package T2GA, which is available at https
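The core of such a knowledge-based T2 test can be sketched in a few lines of NumPy. This is a minimal illustration, not the T2GA implementation: the data, the null mean, and the covariance matrix below are invented stand-ins for one assembled from STRING/HitPredict confidence scores.

```python
import numpy as np

def knowledge_based_t2(X, mu0, sigma):
    """Hotelling-style T2 statistic for a pathway-level mean shift.

    X     : (n, p) matrix of expression log-ratios, rows = replicates,
            columns = pathway members
    mu0   : (p,) null mean (zeros under a self-contained null)
    sigma : (p, p) covariance built from external confidence scores,
            used instead of the (unstable) sample covariance
    """
    n = X.shape[0]
    diff = X.mean(axis=0) - mu0
    # T2 = n * diff' * Sigma^{-1} * diff
    return float(n * diff @ np.linalg.solve(sigma, diff))

# Toy example: 3 replicates, 2 proteins, correlated under the knowledge prior
X = np.array([[1.0, 0.8], [1.2, 1.1], [0.9, 0.7]])
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
t2 = knowledge_based_t2(X, np.zeros(2), sigma)
```

A p-value would then come from a chi-squared or F reference distribution, or from permutation, depending on the assumptions one is willing to make.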

  8. Natural disaster risk analysis for critical infrastructure systems: An approach based on statistical learning theory

    International Nuclear Information System (INIS)

    Guikema, Seth D.

    2009-01-01

    Probabilistic risk analysis has historically been developed for situations in which measured data about the overall reliability of a system are limited and expert knowledge is the best source of information available. There continue to be a number of important problem areas characterized by a lack of hard data. However, in other important problem areas the emergence of information technology has transformed the situation from one characterized by little data to one characterized by data overabundance. Natural disaster risk assessments for events impacting large-scale, critical infrastructure systems such as electric power distribution systems, transportation systems, water supply systems, and natural gas supply systems are important examples of problems characterized by data overabundance. There are often substantial amounts of information collected and archived about the behavior of these systems over time. Yet it can be difficult to effectively utilize these large data sets for risk assessment. Using this information for estimating the probability or consequences of system failure requires a different approach and analysis paradigm than risk analysis for data-poor systems does. Statistical learning theory, a diverse set of methods designed to draw inferences from large, complex data sets, can provide a basis for risk analysis for data-rich systems. This paper provides an overview of statistical learning theory methods and discusses their potential for greater use in risk analysis

  9. Neural substrates of motor and cognitive dysfunctions in SCA2 patients: A network based statistics analysis

    Directory of Open Access Journals (Sweden)

    G. Olivito

    2017-01-01

    In the present study, the network-based statistics (NBS) approach was used to assess differences in functional connectivity between specific cerebellar and cerebral “nodes” in SCA2 patients. Altered inter-nodal connectivity was found between more posterior regions in the cerebellum and regions in the cerebral cortex clearly related to cognition and emotion. Furthermore, more anterior cerebellar lobules showed altered inter-nodal connectivity with motor and somatosensory cerebral regions. The present data suggest that in SCA2 a cerebellar dysfunction affects long-distance cerebral regions and that the clinical symptoms may be specifically related to connectivity changes between motor and non-motor cerebello-cortical nodes.

  10. Per Object statistical analysis

    DEFF Research Database (Denmark)

    2008-01-01

    of a specific class in turn, and uses a pair of PPO stages to derive the statistics and then assign them to the objects' Object Variables. It may be that this could all be done in some other, simpler way, but several other ways that were tried did not succeed. The procedure output has been tested against...

  11. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter, guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relatively low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are

  12. Simulation on a car interior aerodynamic noise control based on statistical energy analysis

    Science.gov (United States)

    Chen, Xin; Wang, Dengfeng; Ma, Zhengdong

    2012-09-01

    How to simulate interior aerodynamic noise accurately is an important question for car interior noise reduction. The unsteady aerodynamic pressure on body surfaces has been shown to be the key factor in controlling car interior aerodynamic noise at high frequencies and high speeds. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model is a solid foundation for further optimization of car interior noise control. After the subsystems whose power contributions to car interior noise are most sensitive have been identified by comprehensive SEA analysis, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Further vehicle testing shows that the interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with the unsteady aerodynamic pressure calculation on body surfaces and improved sound/damping properties of materials. A reduction of more than 2 dB is achieved at the centre frequencies of the spectrum above 800 Hz. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and sensitivity analysis of acoustic contributions.

  13. Statistical analysis of dimer formation in supersaturated metal vapor based on molecular dynamics simulation

    Science.gov (United States)

    Korenchenko, Anna E.; Vorontsov, Alexander G.; Gelchinski, Boris R.; Sannikov, Grigorii P.

    2018-04-01

    We discuss the problem of dimer formation during the homogeneous nucleation of atomic metal vapor in an inert gas environment. We simulated nucleation with molecular dynamics and carried out a statistical analysis of double- and triple-atomic collisions as the two ways of forming a long-lived diatomic complex. A close pair of atoms with a lifetime greater than the mean time interval between atom-atom collisions is called a long-lived diatomic complex. We found that double- and triple-atomic collisions gave approximately the same probabilities of long-lived diatomic complex formation, but the internal energy of the resulting state was essentially lower in the second case. Some diatomic complexes formed in three-particle collisions are stable enough to be a critical nucleus.
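The lifetime criterion described above can be illustrated with a small sketch: given the separation of one atom pair sampled each MD frame, find the contiguous runs of frames during which the pair stays bound. The cutoff radius and the trajectory values below are invented for illustration.

```python
import numpy as np

def pair_lifetimes(distances, r_cut):
    """Lengths (in frames) of contiguous runs where a pair stays within r_cut.

    distances : 1-D array of the pair separation, sampled every MD frame.
    A run longer than the mean free time between atom-atom collisions
    would mark the pair as a 'long-lived diatomic complex'.
    """
    bound = distances < r_cut
    runs, length = [], 0
    for b in bound:
        if b:
            length += 1
        elif length:
            runs.append(length)
            length = 0
    if length:
        runs.append(length)
    return runs

# Invented separations (arbitrary length units) over 8 frames
d = np.array([5.0, 2.1, 1.9, 1.8, 2.0, 4.5, 1.7, 5.2])
print(pair_lifetimes(d, r_cut=2.05))  # run lengths of bound episodes
```

In a real analysis the run lengths would be compared against the collision-interval statistics of the simulated vapor.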

  14. Models and statistical analysis of organic micropollutants in groundwater-based drinking water resources

    DEFF Research Database (Denmark)

    Malaguerra, Flavio

    The access to safe drinking water is essential for the well-being of the population. The spread of micropollutant contamination jeopardises many freshwater reservoirs and is a serious threat to human health, especially because of its long-term effects. To assess the threat of contamination, models...... to model. The identification of dominant processes is an essential step in understanding system behaviour, because it enables the development of simplified models that can approximate the fate of contaminants with the best trade-off between model complexity and reliability of results. In this thesis......, global sensitivity analysis techniques are used to assess detailed models in order to identify the main processes involved in the degradation of chlorinated solvents in the subsurface, and in the transport of pesticides from surface water into nearby wells in confined aquifers. Statistical techniques

  15. Multiple Statistical Models Based Analysis of Causative Factors and Loess Landslides in Tianshui City, China

    Science.gov (United States)

    Su, Xing; Meng, Xingmin; Ye, Weilin; Wu, Weijiang; Liu, Xingrong; Wei, Wanhong

    2018-03-01

    Tianshui City is one of the mountainous cities in Gansu Province, China, that are threatened by severe geo-hazards. Statistical probability models have been widely used in analyzing and evaluating geo-hazards such as landslides. In this research, three approaches (the Certainty Factor Method, the Weight of Evidence Method and the Information Quantity Method) were adopted to quantitatively analyze the relationship between the causative factors and the landslides. The source data used in this study include the SRTM DEM and local geological maps at a scale of 1:200,000. Twelve causative factors (i.e., altitude, slope, aspect, curvature, plan curvature, profile curvature, roughness, relief amplitude, distance to rivers, distance to faults, distance to roads, and stratum lithology) were selected for correlation analysis after a thorough investigation of the geological conditions and historical landslides. The results indicate that the outcomes of the three models are fairly consistent.
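As a sketch of one of the three approaches, the weight-of-evidence contrast for a single causative-factor class can be computed directly from cell counts. The counts below are invented for illustration; the study's actual factor classes and areas differ.

```python
import math

def woe_contrast(n_class_slide, n_class, n_slide, n_total):
    """Weight-of-evidence contrast C = W+ - W- for one factor class.

    n_class_slide : landslide cells falling inside the factor class
    n_class       : all cells inside the factor class
    n_slide       : landslide cells in the whole study area
    n_total       : all cells in the study area
    """
    # W+ : log ratio of P(class | landslide) to P(class | no landslide)
    p1 = n_class_slide / n_slide
    p2 = (n_class - n_class_slide) / (n_total - n_slide)
    w_plus = math.log(p1 / p2)
    # W- : same ratio for cells outside the class
    p3 = (n_slide - n_class_slide) / n_slide
    p4 = (n_total - n_class - (n_slide - n_class_slide)) / (n_total - n_slide)
    w_minus = math.log(p3 / p4)
    return w_plus - w_minus  # > 0: class favours landslide occurrence

c = woe_contrast(n_class_slide=80, n_class=1000, n_slide=200, n_total=10000)
```

Summing the contrasts (or the analogous certainty-factor / information-quantity scores) over all factor classes at each cell yields the susceptibility map.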

  16. Statistical Analysis of Bending Rigidity Coefficient Determined Using Fluorescence-Based Flicker-Noise Spectroscopy.

    Science.gov (United States)

    Doskocz, Joanna; Drabik, Dominik; Chodaczek, Grzegorz; Przybyło, Magdalena; Langner, Marek

    2018-06-01

    The bending rigidity coefficient describes the propensity of a lipid bilayer to deform. Measuring this parameter experimentally with flicker-noise spectroscopy requires microscopic imaging, which necessitates the giant unilamellar vesicle (GUV) lipid bilayer model. The major difficulty associated with this model is the statistical character of the GUV population with respect to size and, if a mixture of lipids is used, the homogeneity of the lipid bilayer composition. In this paper, the bending rigidity coefficient was measured using fluorescence-enhanced flicker-noise spectroscopy for large populations of 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine and 1,2-dioleoyl-sn-glycero-3-phosphocholine vesicles. The quantity of experimental data obtained allows a statistical analysis aimed at identifying the distribution most appropriate for calculating the value of the membrane bending rigidity coefficient. It has been demonstrated that the bending rigidity coefficient follows an asymmetrical distribution, which is well approximated by the gamma distribution. Since there are no biophysical reasons for this, we propose to use the difference between the normal and gamma fits as a measure of the homogeneity of the vesicle population. In addition, the effects of the fluorescent label and of the instrumental setup on the determined values were tested. The results show that the value of the bending rigidity coefficient depends neither on the type of fluorescent label nor on the type of microscope used.
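The gamma-versus-normal comparison can be sketched with a simple moment-based gamma fit and a sample skewness check. The data here are synthetic draws standing in for measured rigidity values; the paper's actual fitting procedure may differ (e.g. maximum likelihood).

```python
import numpy as np

def fit_gamma_moments(x):
    """Method-of-moments gamma fit: shape k = m^2/v, scale theta = v/m."""
    m, v = x.mean(), x.var(ddof=1)
    return m * m / v, v / m  # (k, theta)

def sample_skewness(x):
    """Third standardized moment; ~0 for normal, 2/sqrt(k) for gamma(k)."""
    m, s = x.mean(), x.std(ddof=0)
    return float(np.mean(((x - m) / s) ** 3))

# Synthetic 'bending rigidity' sample (arbitrary k_BT-like units)
rng = np.random.default_rng(0)
kappa = rng.gamma(shape=4.0, scale=5.0, size=2000)

k, theta = fit_gamma_moments(kappa)
skew = sample_skewness(kappa)  # clearly positive => asymmetric, gamma-like
```

A positive sample skewness is exactly what rules out the symmetric normal model and motivates reporting, e.g., the gamma mode rather than the arithmetic mean.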

  17. Statistical data analysis

    International Nuclear Information System (INIS)

    Hahn, A.A.

    1994-11-01

    The complexity of instrumentation sometimes requires data analysis to be done before the result is presented to the control room. This tutorial reviews some of the theoretical assumptions underlying the more popular forms of data analysis and presents simple examples to illuminate the advantages and hazards of different techniques

  18. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    Science.gov (United States)

    Djorgovski, S. George

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complete database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful, and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications, and has produced real, published results.

  19. Multivariate Statistical Analysis Software Technologies for Astrophysical Research Involving Large Data Bases

    Science.gov (United States)

    Djorgovski, S. G.

    1994-01-01

    We developed a package to process and analyze the data from the digital version of the Second Palomar Sky Survey. This system, called SKICAT, incorporates the latest in machine learning and expert systems software technology, in order to classify the detected objects objectively and uniformly, and facilitate handling of the enormous data sets from digital sky surveys and other sources. The system provides a powerful, integrated environment for the manipulation and scientific investigation of catalogs from virtually any source. It serves three principal functions: image catalog construction, catalog management, and catalog analysis. Through use of the GID3* Decision Tree artificial induction software, SKICAT automates the process of classifying objects within CCD and digitized plate images. To exploit these catalogs, the system also provides tools to merge them into a large, complex database which may be easily queried and modified when new data or better methods of calibrating or classifying become available. The most innovative feature of SKICAT is the facility it provides to experiment with and apply the latest in machine learning technology to the tasks of catalog construction and analysis. SKICAT provides a unique environment for implementing these tools for any number of future scientific purposes. Initial scientific verification and performance tests have been made using galaxy counts and measurements of galaxy clustering from small subsets of the survey data, and a search for very high redshift quasars. All of the tests were successful and produced new and interesting scientific results. Attachments to this report give detailed accounts of the technical aspects of the SKICAT system, and of some of the scientific results achieved to date. We also developed a user-friendly package for multivariate statistical analysis of small and moderate-size data sets, called STATPROG. The package was tested extensively on a number of real scientific applications and has

  20. Statistical Analysis Plan

    DEFF Research Database (Denmark)

    Ris Hansen, Inge; Søgaard, Karen; Gram, Bibi

    2015-01-01

    This is the analysis plan for the multicentre randomised controlled study looking at the effect of training and exercises in chronic neck pain patients that is being conducted in Jutland and Funen, Denmark. This plan will be used as a work description for the analyses of the data collected....

  1. Formalized Conflicts Detection Based on the Analysis of Multiple Emails: An Approach Combining Statistics and Ontologies

    Science.gov (United States)

    Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel

    In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of document in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed into several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, this approach's framework is generic and can be easily adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.

  2. Optimization of cardiovascular stent against restenosis: factorial design-based statistical analysis of polymer coating conditions.

    Directory of Open Access Journals (Sweden)

    Gayathri Acharya

    Full Text Available The objective of this study was to optimize the physicodynamic conditions of a polymeric system as a coating substrate for drug-eluting stents against restenosis. As nitric oxide (NO) has multifunctional activities, such as regulating blood flow and pressure and influencing thrombus formation, a continuous and spatiotemporal delivery of NO loaded in polymer-based nanoparticles could be a viable option to reduce and prevent restenosis. To identify the most suitable carrier for S-nitrosoglutathione (GSNO), an NO prodrug, stents were coated with various polymers, such as poly(lactic-co-glycolic acid) (PLGA), polyethylene glycol (PEG) and polycaprolactone (PCL), using the solvent evaporation technique. A full factorial design was used to evaluate the effects of the formulation variables in polymer-based stent coatings on the GSNO release rate and weight loss rate. The least-squares regression model was used for data analysis in the optimization process. The polymer-coated stents were further assessed with differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM) images and platelet adhesion studies. Stents coated with the PCL matrix displayed more sustained and controlled drug release profiles than those coated with PLGA and PEG, and showed the lowest platelet adhesion rate. Subsequently, stents coated with the PCL matrix were subjected to further optimization to improve surface morphology and extend the drug release duration. The results of this study demonstrated that a PCL matrix containing GSNO is a promising system for stent surface coating against restenosis.
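For a two-level full factorial design, the least-squares analysis reduces to ordinary regression on coded factor levels. The sketch below uses two invented factors and invented responses purely to show the mechanics; the study's actual factors and runs differ.

```python
import numpy as np

# 2x2 full factorial in coded units (-1 / +1).
# A, B = two hypothetical coating factors; y = observed release rate per run.
A = np.array([-1, 1, -1, 1])
B = np.array([-1, -1, 1, 1])
y = np.array([10.0, 14.0, 12.0, 20.0])

# Least-squares model: y = b0 + bA*A + bB*B + bAB*A*B
X = np.column_stack([np.ones(4), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, bA, bB, bAB = coef  # intercept, main effects, interaction
```

With coded +/-1 levels the columns of X are orthogonal, so each coefficient is simply a signed average of the responses, which is what makes factorial effect estimates easy to interpret.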

  3. Statistical analysis of vehicle crashes in Mississippi based on crash data from 2010 to 2014.

    Science.gov (United States)

    2017-08-15

    Traffic crash data from 2010 to 2014 were collected by Mississippi Department of Transportation (MDOT) and extracted for the study. Three tasks were conducted in this study: (1) geographic distribution of crashes; (2) descriptive statistics of crash ...

  4. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations

  5. Statistics-Based Prediction Analysis for Head and Neck Cancer Tumor Deformation

    Directory of Open Access Journals (Sweden)

    Maryam Azimi

    2012-01-01

    Full Text Available Most of the current radiation therapy planning systems, which are based on pre-treatment computed tomography (CT) images, assume that the tumor geometry does not change during the course of treatment. However, tumor geometry has been shown to change over time. We propose a methodology to monitor and predict daily size changes of head and neck cancer tumors during the entire radiation therapy period. Using collected patients' CT scan data, MATLAB routines are developed to quantify the progressive geometric changes occurring in patients during radiation therapy. Regression analysis is implemented to develop predictive models for tumor size changes throughout the entire treatment period. The generated models are validated using leave-one-out cross-validation. The proposed method will increase the accuracy of therapy and improve patient safety and quality of life by reducing the number of harmful unnecessary CT scans.
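Leave-one-out cross-validation of a per-patient size-trend regression can be sketched in a few lines. The day/volume numbers are invented, and a simple straight-line fit stands in for whatever regression form the study used.

```python
import numpy as np

def loocv_rmse(t, y):
    """Leave-one-out CV error of a straight-line fit y ~ a + b*t."""
    errs = []
    for i in range(len(t)):
        mask = np.arange(len(t)) != i
        b, a = np.polyfit(t[mask], y[mask], 1)  # slope, intercept
        errs.append((a + b * t[i]) - y[i])      # error on the held-out day
    return float(np.sqrt(np.mean(np.square(errs))))

# Hypothetical tumor volume (cm^3) measured over treatment days
days = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
vol = np.array([42.0, 40.5, 38.8, 37.2, 35.9, 34.1])
rmse = loocv_rmse(days, vol)
```

A small LOOCV error relative to the day-to-day change is what would justify replacing some follow-up CT scans with model predictions.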

  6. A rank-based algorithm of differential expression analysis for small cell line data with statistical control.

    Science.gov (United States)

    Li, Xiangyu; Cai, Hao; Wang, Xianlong; Ao, Lu; Guo, You; He, Jun; Gu, Yunyan; Qi, Lishuang; Guan, Qingzhou; Lin, Xu; Guo, Zheng

    2017-10-13

    To detect differentially expressed genes (DEGs) in small-scale cell line experiments, usually with only two or three technical replicates for each state, the commonly used statistical methods such as significance analysis of microarrays (SAM), limma and RankProd (RP) lack statistical power, while the fold change method lacks any statistical control. In this study, we demonstrated that the within-sample relative expression orderings (REOs) of gene pairs were highly stable among technical replicates of a cell line but often widely disrupted after certain treatments such as gene knockdown, gene transfection and drug treatment. Based on this finding, we customized the RankComp algorithm, previously designed for individualized differential expression analysis through REO comparison, to identify DEGs with a certain statistical control for small-scale cell line data. In both simulated and real data, the new algorithm, named CellComp, exhibited high precision with much higher sensitivity than the original RankComp, SAM, limma and RP methods. Therefore, CellComp provides an efficient tool for analyzing small-scale cell line data. © The Author 2017. Published by Oxford University Press.
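The REO idea, within-sample orderings of gene pairs that are stable across replicates but disrupted by treatment, can be illustrated directly. This toy sketch is not the CellComp algorithm itself; the profiles are invented.

```python
import itertools
import numpy as np

def reo_concordance(x, y):
    """Fraction of gene pairs (i, j) whose within-sample ordering agrees
    between two expression profiles (x_i > x_j matching y_i > y_j)."""
    pairs = list(itertools.combinations(range(len(x)), 2))
    same = sum((x[i] > x[j]) == (y[i] > y[j]) for i, j in pairs)
    return same / len(pairs)

rep1 = np.array([5.0, 3.0, 9.0, 1.0])     # technical replicate 1
rep2 = np.array([5.2, 2.9, 8.7, 1.2])     # replicate 2: orderings preserved
treated = np.array([1.0, 3.5, 2.0, 6.0])  # treatment disrupts the orderings

print(reo_concordance(rep1, rep2))     # 1.0: REOs stable across replicates
print(reo_concordance(rep1, treated))  # low: orderings widely disrupted
```

CellComp-style methods then ask, per gene, whether its orderings against a set of stable partner genes are reversed more often than chance would allow.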

  7. Connectivity-based fixel enhancement: Whole-brain statistical analysis of diffusion MRI measures in the presence of crossing fibres

    Science.gov (United States)

    Raffelt, David A.; Smith, Robert E.; Ridgway, Gerard R.; Tournier, J-Donald; Vaughan, David N.; Rose, Stephen; Henderson, Robert; Connelly, Alan

    2015-01-01

    In brain regions containing crossing fibre bundles, voxel-average diffusion MRI measures such as fractional anisotropy (FA) are difficult to interpret, and lack within-voxel single fibre population specificity. Recent work has focused on the development of more interpretable quantitative measures that can be associated with a specific fibre population within a voxel containing crossing fibres (herein we use fixel to refer to a specific fibre population within a single voxel). Unfortunately, traditional 3D methods for smoothing and cluster-based statistical inference cannot be used for voxel-based analysis of these measures, since the local neighbourhood for smoothing and cluster formation can be ambiguous when adjacent voxels may have different numbers of fixels, or ill-defined when they belong to different tracts. Here we introduce a novel statistical method to perform whole-brain fixel-based analysis called connectivity-based fixel enhancement (CFE). CFE uses probabilistic tractography to identify structurally connected fixels that are likely to share underlying anatomy and pathology. Probabilistic connectivity information is then used for tract-specific smoothing (prior to the statistical analysis) and enhancement of the statistical map (using a threshold-free cluster enhancement-like approach). To investigate the characteristics of the CFE method, we assessed sensitivity and specificity using a large number of combinations of CFE enhancement parameters and smoothing extents, using simulated pathology generated with a range of test-statistic signal-to-noise ratios in five different white matter regions (chosen to cover a broad range of fibre bundle features). The results suggest that CFE input parameters are relatively insensitive to the characteristics of the simulated pathology. We therefore recommend a single set of CFE parameters that should give near optimal results in future studies where the group effect is unknown. We then demonstrate the proposed method

  8. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in X-ray radiographs. The generative model is constructed by (1) learning ...

  9. Marine ecology conditions at Weda Bay, North Maluku based on statistical analysis on distribution of recent foraminifera

    Directory of Open Access Journals (Sweden)

    Kurniasih Anis

    2017-01-01

    Full Text Available Analysis of foraminifera in geology is usually used to determine the age of rocks/sediments and the depositional environment. In this study, recent foraminifera were used not only to determine the sedimentary environment, but also to estimate the ecological condition of the water through a statistical approach. The analysis was performed quantitatively on 10 surface seabed sediment samples from Weda Bay, North Maluku. It includes dominance (Simpson index), diversity and evenness (Shannon index), and the planktonic-benthic ratio. The results were plotted in an M-R-T (Miliolid-Rotalid-Textularid) diagram to determine the depositional environment. Quantitative analysis was performed using the PAST (PAleontological STatistics) software, version 1.29. The analysis showed no domination by a particular taxon, a moderate degree of evenness with stable communities, and a moderate diversity. These results indicate that the research area has stable water conditions with optimal levels of carbonate content, oxygen supply, salinity, and temperature. The ratio of planktonic to benthic foraminifera indicates relative depth: the deeper the water, the higher the percentage of planktonic foraminifera. The M-R-T diagram shows that the sediment was deposited in an exposed carbonate (carbonate platform) environment with normal salinity.
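The dominance, diversity, and depth-proxy indices named above are straightforward to compute; a sketch with invented per-taxon counts (PAST reports the same quantities from a count table):

```python
import numpy as np

def shannon_h(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def simpson_dominance(counts):
    """Simpson dominance D = sum(p_i^2); 1 - D is Simpson diversity."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return float((p ** 2).sum())

def planktonic_percentage(n_planktonic, n_benthic):
    """P/(P+B) as a percentage, a common relative-depth proxy."""
    return 100.0 * n_planktonic / (n_planktonic + n_benthic)

counts = [30, 25, 20, 15, 10]  # hypothetical foraminifera counts per taxon
h = shannon_h(counts)                  # moderate diversity
d = simpson_dominance(counts)          # low => no dominant taxon
pct = planktonic_percentage(60, 40)    # higher => relatively deeper water
```

Low dominance together with moderate H' is the numerical signature behind the "stable community, moderate diversity" reading in the abstract.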

  10. The Use of Statistically Based Rolling Supply Curves for Electricity Market Analysis: A Preliminary Look

    Energy Technology Data Exchange (ETDEWEB)

    Jenkin, Thomas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Larson, Andrew [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ruth, Mark F [National Renewable Energy Laboratory (NREL), Golden, CO (United States); King, Ben [U.S. Department of Energy; Spitsen, Paul [U.S. Department of Energy

    2018-03-27

    In light of the changing electricity resource mixes across the United States, an important question in electricity modeling is how additions and retirements of generation, including additions of variable renewable energy (VRE) generation, could impact markets by changing hourly wholesale energy prices. Instead of using resource-intensive production cost models (PCMs) or building and using simple generator supply curves, this analysis uses a 'top-down' approach based on regression analysis of hourly historical energy and load data to estimate the impact of supply changes on wholesale electricity prices, provided the changes are not so substantial that they fundamentally alter the market and the dispatch-order-driven behavior of non-retiring units. The rolling supply curve (RSC) method used in this report estimates the shape of the supply curve that fits historical hourly price and load data for a given time interval, such as two weeks, and then repeats this on a rolling basis through the year. These supply curves can then be modified on an hourly basis to reflect the impact of generation retirements or additions, including VRE, and then reapplied to the same load data to estimate the change in hourly electricity price. The duration over which these RSCs are estimated has a significant impact on goodness of fit. For example, in PJM in 2015, moving from fitting one curve per year to 26 rolling two-week supply curves improves the standard error of the regression from $16/MWh to $6/MWh and the R-squared of the estimate from 0.48 to 0.76. We illustrate the potential use and value of the RSC method by estimating wholesale price effects under various generator retirement and addition scenarios, and we discuss potential limits of the technique, some of which are inherent. The ability to do this type of analysis is important to a wide range of market participants and other stakeholders, and it may have a role in complementing use of or providing
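The rolling-window fitting step can be sketched with synthetic data: one supply curve (price as a function of load) is regressed per two-week block of hours. The quadratic form and all numbers here are invented; the report's actual functional form and window handling may differ.

```python
import numpy as np

def rolling_supply_curves(load, price, window_hours=336):
    """Fit one polynomial supply curve price ~ f(load) per rolling window
    (336 h = two weeks), in the spirit of the 'top-down' RSC approach."""
    curves = []
    for start in range(0, len(load), window_hours):
        stop = start + window_hours
        if stop > len(load):
            break  # drop the final partial window in this sketch
        coeffs = np.polyfit(load[start:stop], price[start:stop], deg=2)
        curves.append(coeffs)
    return curves

# Synthetic year: price rises roughly quadratically with load, plus noise
rng = np.random.default_rng(1)
n = 8760                                   # hours in a non-leap year
load = 40 + 20 * rng.random(n)             # GW
price = 0.02 * load**2 + 0.5 * load + rng.normal(0, 2, n)  # $/MWh
curves = rolling_supply_curves(load, price)  # 26 two-week curves
```

Shifting each fitted curve along the load axis by the hourly VRE addition (or a retirement's capacity) and re-evaluating at historical loads gives the counterfactual hourly prices.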

  11. Pixel based statistical analysis of differences between lung SPECT images: methodological aspects

    International Nuclear Information System (INIS)

    Bendada, S.; Rocchisani, J.L.M.; Moretti, J.L.

    2002-01-01

    The statistical parametric mapping method is applied in neurology for activation studies. We adapted this powerful method to lung SPECT to help with the diagnosis and follow-up of pulmonary embolism and other lung diseases. The SPECT slices of pairs of examinations were normalized to the total acquired counts, background-subtracted, smoothed, and realigned. A parametric image of statistical differences was then computed. We thus obtained a 3D image showing regions improved or altered under treatment. Tuning of the various parameters could lead to a more accurate image. This new approach to lung SPECT processing appears to be a promising tool for the physician. (author)

  12. Statistical inference based on divergence measures

    CERN Document Server

    Pardo, Leandro

    2005-01-01

    The idea of using functionals of Information Theory, such as entropies or divergences, in statistical inference is not new. However, in spite of the fact that divergence statistics have become a very good alternative to the classical likelihood ratio test and the Pearson-type statistic in discrete models, many statisticians remain unaware of this powerful approach. Statistical Inference Based on Divergence Measures explores classical problems of statistical inference, such as estimation and hypothesis testing, on the basis of measures of entropy and divergence. The first two chapters form an overview, from a statistical perspective, of the most important measures of entropy and divergence and study their properties. The author then examines the statistical analysis of discrete multivariate data, with emphasis on problems in contingency tables and loglinear models, using phi-divergence test statistics as well as minimum phi-divergence estimators. The final chapter looks at testing in general populations, prese...
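
    The phi-divergence (power-divergence) test statistics the book studies are available off the shelf: SciPy's `scipy.stats.power_divergence` implements the Cressie-Read family, which contains the Pearson and likelihood-ratio statistics as special cases. A minimal sketch with made-up counts:

```python
from scipy.stats import power_divergence

# Observed counts in a six-cell discrete model and expected counts under H0
# (both sum to 120, as the test requires).
observed = [18, 26, 32, 21, 14, 9]
expected = [20, 25, 30, 20, 15, 10]

# The Cressie-Read power-divergence family recovers the classical statistics as
# special cases of lambda: 1 -> Pearson chi-square, 0 -> likelihood-ratio G-test,
# 2/3 -> the Cressie-Read recommendation.
results = {}
for name, lam in [("Pearson", 1.0), ("Likelihood ratio", 0.0), ("Cressie-Read", 2 / 3)]:
    stat, p = power_divergence(observed, f_exp=expected, lambda_=lam)
    results[name] = (stat, p)
    print(f"{name:16s} statistic = {stat:.4f}, p = {p:.4f}")
```

    All three statistics share the same chi-square reference distribution asymptotically, which is why they can be used interchangeably in large samples.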

  13. Integrating eQTL data with GWAS summary statistics in pathway-based analysis with application to schizophrenia.

    Science.gov (United States)

    Wu, Chong; Pan, Wei

    2018-04-01

    Many genetic variants affect complex traits through gene expression, which can be exploited to boost statistical power and enhance interpretation in genome-wide association studies (GWASs), as demonstrated by the transcriptome-wide association study (TWAS) approach. Furthermore, due to polygenic inheritance, a complex trait is often affected by multiple genes with similar functions, as annotated in gene pathways. Here, we extend TWAS from gene-based analysis to pathway-based analysis: we integrate public pathway collections, expression quantitative trait locus (eQTL) data and GWAS summary association statistics (or GWAS individual-level data) to identify gene pathways associated with complex traits. The basic idea is to weight the SNPs of the genes in a pathway based on their estimated cis-effects on gene expression, then adaptively test for association of the pathway with a GWAS trait by effectively aggregating possibly weak association signals across the genes in the pathway. The P values can be calculated analytically and are thus fast to obtain. We applied our proposed test with the KEGG and GO pathways to two schizophrenia (SCZ) GWAS summary association data sets, denoted SCZ1 and SCZ2, with about 20,000 and 150,000 subjects, respectively. Most of the significant pathways identified by analyzing the SCZ1 data were reproduced by the SCZ2 data. Importantly, we identified 15 novel pathways associated with SCZ, such as GABA receptor complex (GO:1902710), which could not be uncovered by the standard single-SNP-based analysis or gene-based TWAS. The newly identified pathways may help us gain insights into the biological mechanisms underlying SCZ. Our results showcase the power of incorporating gene expression information and gene functional annotations into pathway-based association testing for GWAS. © 2018 WILEY PERIODICALS, INC.
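
    A deliberately simplified sketch of the weighting-and-aggregation idea follows. It uses a plain weighted-sum gene statistic and a sum-of-squares pathway statistic, whereas the actual method uses an adaptive test and accounts for linkage disequilibrium and summary-statistic input; gene names, Z-scores, and weights are all fabricated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: per-SNP GWAS Z-scores and eQTL-derived weights (estimated
# cis-effects of each SNP on its gene's expression); three genes form one pathway.
# A real analysis must also account for LD between SNPs, omitted here.
gwas_z = {"GENE_A": rng.normal(0.8, 1, 10),   # modest signal
          "GENE_B": rng.normal(0.0, 1, 15),   # null gene
          "GENE_C": rng.normal(0.6, 1, 8)}
eqtl_w = {g: rng.normal(0, 1, z.size) for g, z in gwas_z.items()}

def gene_burden_z(z, w):
    """TWAS-style gene statistic: weighted sum of SNP Z-scores, standardized."""
    return np.dot(w, z) / np.sqrt(np.dot(w, w))

gene_stats = np.array([gene_burden_z(gwas_z[g], eqtl_w[g]) for g in gwas_z])

# Pathway statistic: aggregate gene-level signals (sum of squares is chi-square
# with one d.f. per gene under H0 and independence).
pathway_stat = float(np.sum(gene_stats ** 2))
print("gene Z:", np.round(gene_stats, 2), "pathway stat:", round(pathway_stat, 2))
```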

  14. Regression-based statistical mediation and moderation analysis in clinical research: Observations, recommendations, and implementation.

    Science.gov (United States)

    Hayes, Andrew F; Rockwood, Nicholas J

    2017-11-01

    There have been numerous treatments in the clinical research literature about various design, analysis, and interpretation considerations when testing hypotheses about mechanisms and contingencies of effects, popularly known as mediation and moderation analysis. In this paper we address the practice of mediation and moderation analysis using linear regression in the pages of Behaviour Research and Therapy and offer some observations and recommendations, debunk some popular myths, describe some new advances, and provide an example of mediation, moderation, and their integration as conditional process analysis using the PROCESS macro for SPSS and SAS. Our goal is to nudge clinical researchers away from historically significant but increasingly old school approaches toward modifications, revisions, and extensions that characterize more modern thinking about the analysis of the mechanisms and contingencies of effects. Copyright © 2016 Elsevier Ltd. All rights reserved.
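
    The core of regression-based mediation analysis, the product-of-coefficients indirect effect with a percentile bootstrap confidence interval (the approach PROCESS popularized), can be sketched with ordinary least squares on simulated data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data with a true mediation structure: X -> M -> Y.
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)               # path a = 0.5
y = 0.4 * m + 0.2 * x + rng.normal(size=n)     # path b = 0.4, direct effect 0.2

def ols_slope(design, response, col):
    """Least-squares coefficient of predictor `col` in a model with intercept."""
    X = np.column_stack([np.ones(len(response))] + design)
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    return beta[col + 1]

a = ols_slope([x], m, 0)            # X -> M
b = ols_slope([m, x], y, 0)         # M -> Y, controlling for X
indirect = a * b                    # product-of-coefficients indirect effect

# Percentile bootstrap CI for the indirect effect.
boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    xb, mb, yb = x[idx], m[idx], y[idx]
    boot.append(ols_slope([xb], mb, 0) * ols_slope([mb, xb], yb, 0))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect:.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

    The bootstrap interval is preferred over the normal-theory Sobel test because the sampling distribution of a*b is skewed in small samples.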

  15. [Prudent use of price controls in the Chinese medicines market: based on statistical data analysis].

    Science.gov (United States)

    Yang, Guang; Wang, Nuo; Huang, Lu-Qi; Qiu, Hong-Yan; Guo, Lan-Ping

    2014-01-01

    A dispute has recently arisen over the decreasing prices of traditional Chinese medicine (TCM). This article analyzes Chinese statistical data for 1995-2011; the results show that the main driver of expensive health care has no direct relationship with drug prices. The price index of TCM rose significantly more slowly than medicine prices in general, and the production margins of TCM, squeezed by raw material prices, have been diminishing since 1995; continued price reduction will further depress the profits of the TCM industry. Because raw materials of differing quality vary greatly in price, forcing medicine prices down would push enterprises to use inferior materials in order to maintain their profits. These results offer guidance for medicine price management.

  16. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and the problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship was found between the variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine.

  17. Variation in rates of breast cancer surgery: A national analysis based on French Hospital Episode Statistics.

    Science.gov (United States)

    Rococo, E; Mazouni, C; Or, Z; Mobillion, V; Koon Sun Pat, M; Bonastre, J

    2016-01-01

    Minimum volume thresholds were introduced in France in 2008 to improve the quality of cancer care. We investigated whether and how the quality of treatment decisions in breast cancer surgery had evolved before and after this policy was implemented. We used Hospital Episode Statistics for all women having undergone breast conserving surgery (BCS) or mastectomy in France in 2005 and 2012. Three surgical procedures considered as better treatment options were analyzed: BCS, immediate breast reconstruction (IBR) and sentinel lymph node biopsy (SLNB). We studied the mean rates and their variation according to hospital profile and volume. Between 2005 and 2012, the volume of breast cancer surgery increased by 11% whereas one third of the hospitals no longer performed this type of surgery. In 2012, the mean rate of BCS was 74% and similar in all hospitals whatever the volume. Conversely, IBR and SLNB rates were much higher in cancer centers (CC) and regional teaching hospitals (RTH) [IBR: 19% and 14% versus 8% on average; SLNB: 61% and 47% versus 39% on average]; the greater the hospital volume, the higher the IBR and SLNB rates (p < 0.0001). Overall, whatever the surgical procedure considered, inter-hospital variation in rates declined substantially in CC and RTH. We identified considerable variation in IBR and SLNB rates between French hospitals. Although these procedures are more complex and less standardized than BCS, most clinical guidelines recommend them. This apparent heterogeneity suggests unequal access to high-quality procedures for women with breast cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising
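
    The wavelet-shrinkage idea can be sketched with a hand-rolled Haar transform: decompose the signal, soft-threshold the detail coefficients with the universal threshold, and reconstruct. This is a generic illustration of wavelet denoising, not the specific pipeline of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_forward(x, levels):
    """Multi-level 1-D Haar DWT; length must be divisible by 2**levels."""
    details = []
    a = x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        details.append(d)
    return a, details

def haar_inverse(a, details):
    for d in reversed(details):
        up = np.empty(2 * a.size)
        up[0::2] = (a + d) / np.sqrt(2)
        up[1::2] = (a - d) / np.sqrt(2)
        a = up
    return a

def soft_threshold(d, t):
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

# Noisy boxcar "activation" signal.
n = 256
clean = np.zeros(n); clean[96:160] = 1.0
noisy = clean + rng.normal(0, 0.3, n)

# Universal threshold sigma * sqrt(2 log n), sigma estimated from finest details.
approx, details = haar_forward(noisy, levels=4)
sigma = np.median(np.abs(details[0])) / 0.6745
t = sigma * np.sqrt(2 * np.log(n))
denoised = haar_inverse(approx, [soft_threshold(d, t) for d in details])

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
print(f"MSE noisy: {mse_noisy:.4f}  MSE denoised: {mse_denoised:.4f}")
```

    Unlike a Gaussian kernel, thresholding suppresses noise while keeping the sharp edges of the activation, which is the motivation for wavelet-based alternatives to smoothing in SPM.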

  19. Wavelet Transform Based Higher Order Statistical Analysis of Wind and Wave Time Histories

    Science.gov (United States)

    Habib Huseni, Gulamhusenwala; Balaji, Ramakrishnan

    2017-10-01

    Wind blowing over the surface of the ocean imparts the energy to generate waves, so understanding wind-wave interactions is essential for oceanographers. This study involves higher-order spectral analyses, via the continuous wavelet transform, of wind speed and significant wave height time histories extracted from the European Centre for Medium-Range Weather Forecasts database at an offshore location off the Mumbai coast. The time histories were divided by season (pre-monsoon, monsoon, post-monsoon and winter) and the analyses were carried out on the individual data sets to assess the effect of the seasons on wind-wave interactions. The analysis revealed the frequency coupling of wind speeds and wave heights in the various seasons. The details of the data, the analysis technique and the results are presented in this paper.

  20. Estimation Model of Spacecraft Parameters and Cost Based on a Statistical Analysis of COMPASS Designs

    Science.gov (United States)

    Gerberich, Matthew W.; Oleson, Steven R.

    2013-01-01

    The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model with regard to past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data of completed NASA missions.

  1. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob

  2. The statistical analysis of anisotropies

    International Nuclear Information System (INIS)

    Webster, A.

    1977-01-01

    One of the many uses to which a radio survey may be put is an analysis of the distribution of the radio sources on the celestial sphere, to find out whether they are bunched into clusters or lie in preferred regions of space. There are many methods of testing for clustering in point processes, and since they are not all equally good, this contribution is presented as a brief guide to what seems to be the best of them. The radio sources certainly do not show very strong clustering and may well be entirely unclustered, so if a statistical method is to be useful it must be both powerful and flexible. A statistic is powerful in this context if it can efficiently distinguish a weakly clustered distribution of sources from an unclustered one, and it is flexible if it can be applied in a way which avoids mistaking defects in the survey for true peculiarities in the distribution of sources. The paper divides clustering statistics into two classes: number density statistics and log N/log S statistics. (Auth.)

  3. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  4. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics

    Directory of Open Access Journals (Sweden)

    Haejoon Jung

    2018-01-01

    As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for future communication systems, alleviating the pressure of scarce spectrum resources. For this reason, in this paper, we consider multi-hop M2M communications, where a machine-type communication (MTC) device with limited transmit power relays to help other devices using mmWave. To be specific, we focus on hop distance statistics and their impacts on system performance in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Different from microwave systems, in mmWave communications the wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aiming at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming a blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performance (e.g., outage probability, hop count, and transmit energy) of mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.
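
    The effect of blockage on per-hop progress can be illustrated with a crude Monte Carlo sketch. The paper derives the hop distance distributions analytically; the geometry, blockage probability, and greedy routing rule below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(13)

# Linear multi-hop network: relays spaced 10 m apart over 500 m; a transmitter
# can reach any relay within 100 m, but each link is independently blocked with
# probability p_block (a crude stand-in for mmWave LOS blockage). Routing
# greedily picks the farthest reachable unblocked relay.
spacing, reach, total = 10.0, 100.0, 500.0
positions = np.arange(0.0, total + spacing, spacing)

def mean_hop_count(p_block, trials=2000):
    counts = []
    for _ in range(trials):
        pos, hops = 0.0, 0
        while pos < total:
            ahead = positions[(positions > pos) & (positions <= pos + reach)]
            unblocked = ahead[rng.random(ahead.size) > p_block]
            # Simplification: if every candidate is blocked, step to the nearest
            # relay anyway so the packet always makes progress.
            nxt = unblocked.max() if unblocked.size else ahead.min()
            pos, hops = nxt, hops + 1
        counts.append(hops)
    return np.mean(counts)

hops_free = mean_hop_count(0.0)     # blockage-free baseline: full 100 m per hop
hops_blocked = mean_hop_count(0.7)
print(f"mean hops, no blockage: {hops_free:.2f}; with blockage: {hops_blocked:.2f}")
```

    Comparing the blocked case against the blockage-free baseline mirrors how the paper quantifies the impact of blockage on hop count and, through it, on end-to-end outage and energy.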

  5. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    Science.gov (United States)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
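
    The inter-amount-time idea, sampling the time needed to accumulate each successive fixed amount of flow rather than sampling flow at fixed time steps, can be sketched by inverting the cumulative-volume curve. The synthetic 'flashy' flow series and the accumulation amount below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "flashy" hourly streamflow: low base flow plus occasional storm peaks.
n = 24 * 365
base = 1.0 + 0.2 * rng.standard_normal(n).clip(-2, 2)
storms = (rng.random(n) < 0.01) * rng.exponential(30.0, n)
flow = np.maximum(base + storms, 0.01)

def inter_amount_times(flow, dt, amount):
    """Times (in units of dt) needed to accumulate each successive fixed amount."""
    cumvol = np.concatenate([[0.0], np.cumsum(flow * dt)])
    targets = np.arange(amount, cumvol[-1], amount)
    # Invert the monotone cumulative-volume curve by linear interpolation.
    crossing_times = np.interp(targets, cumvol, np.arange(cumvol.size, dtype=float))
    return np.diff(np.concatenate([[0.0], crossing_times]))

iat = inter_amount_times(flow, dt=1.0, amount=50.0)

# Flashiness shows up as a heavy upper tail: long quiet spells between amounts,
# very short inter-amount times during storms.
cv = iat.std() / iat.mean()
skew = np.mean((iat - iat.mean()) ** 3) / iat.std() ** 3
print(f"{iat.size} inter-amount samples, CV = {cv:.2f}, skewness = {skew:.2f}")
```

    Because each sample carries the same amount of flow, storm periods contribute many samples and dry periods few, which is the adaptive weighting the abstract describes.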

  7. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    Science.gov (United States)

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, the modest sample sizes of individual cohorts and the restricted availability of individual-level genotype-phenotype data across cohorts limit the conduct of multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single study or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where the original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows excellent agreement with the pooled individual-level analysis of the original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
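
    The underlying statistical machinery, canonical correlation analysis with ridge-type covariance shrinkage, can be sketched on simulated individual-level data. This is a generic CCA sketch, not the metaCCA algorithm itself, which works from summary statistics rather than raw records.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated "genotype" X (n x p) and "phenotype" Y (n x q) sharing one latent signal.
n, p, q = 600, 5, 4
latent = rng.standard_normal(n)
X = 0.8 * latent[:, None] * rng.standard_normal(p) + rng.standard_normal((n, p))
Y = 0.8 * latent[:, None] * rng.standard_normal(q) + rng.standard_normal((n, q))

def canonical_correlations(X, Y, shrinkage=0.0):
    """Canonical correlations via the whitened cross-covariance matrix, with
    optional ridge shrinkage of the covariance blocks for robustness."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = np.cov(Xc, rowvar=False) + shrinkage * np.eye(X.shape[1])
    Syy = np.cov(Yc, rowvar=False) + shrinkage * np.eye(Y.shape[1])
    Sxy = (Xc.T @ Yc) / (len(X) - 1)

    def inv_sqrt(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    return np.linalg.svd(K, compute_uv=False)   # singular values = correlations

rho = canonical_correlations(X, Y, shrinkage=0.1)
print("canonical correlations:", np.round(rho, 3))
```

    The leading canonical correlation captures the shared latent signal; the shrinkage term keeps the covariance inversions stable when blocks are estimated noisily, the role the covariance shrinkage algorithm plays in metaCCA.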

  8. Statistical analysis of environmental data

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Bowman, K.O.; Miller, F.L. Jr.

    1975-10-01

    This report summarizes the analyses of data obtained by the Radiological Hygiene Branch of the Tennessee Valley Authority from samples taken around the Browns Ferry Nuclear Plant located in Northern Alabama. The data collection was begun in 1968 and a wide variety of types of samples have been gathered on a regular basis. The statistical analysis of environmental data involving very low-levels of radioactivity is discussed. Applications of computer calculations for data processing are described

  9. Statistical considerations on safety analysis

    International Nuclear Information System (INIS)

    Pal, L.; Makai, M.

    2004-01-01

    The authors have investigated the statistical methods applied to the safety analysis of nuclear reactors and arrived at alarming conclusions: a series of calculations with the widely used safety code ATHLET was carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, they came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Yet another, formerly unnoticed, conclusion of the ATHLET calculations is that certain innocent-looking parameters (such as the wall roughness factor or the number of bubbles or droplets per unit volume) can considerably influence such output parameters as water levels. The authors are concerned about the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, they suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined, because measured data have errors and calculated data are often obtained from a more or less accurate model. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which the

  10. Tract-based spatial statistics analysis of diffusion-tensor imaging data in pediatric- and adult-onset multiple sclerosis.

    Science.gov (United States)

    Aliotta, Rachel; Cox, Jennifer L; Donohue, Katelyn; Weinstock-Guttman, Bianca; Yeh, E Ann; Polak, Paul; Dwyer, Michael G; Zivadinov, Robert

    2014-01-01

    White matter (WM) microstructure may vary significantly between pediatric-onset (PO) and adult-onset (AO) patients with multiple sclerosis (MS), a difference that could be explained by an inherent plasticity of the affected pediatric brain early in the disease, a phenomenon that does not occur later in life. This hypothesis would support the observation that disease progression is much slower in POMS than in AOMS patients. Our aim was to examine WM microstructure in the brains of adults with POMS and AOMS using tract-based spatial statistics (TBSS) analysis of diffusion-tensor imaging (DTI). Adults with relapsing-remitting (RR) POMS, who were diagnosed before the age of 18 years (n = 16), were compared with age-matched (AOA, n = 23) and disease duration-matched (AOD, n = 22) RR patients who developed MS after the age of 18 years. Scans were analyzed using the FSL software package (Oxford, UK) and statistics were performed using TBSS to evaluate WM microstructure between groups based on the mean fractional anisotropy (FA) values obtained from the DTI. Widespread cortical and deep WM differences characterized by increased FA values were seen in the AOAMS compared with the POMS group (P < 0.05, TFCE corrected). Significantly increased FA values in posterior WM areas were detected in the AODMS compared with the POMS group (P < 0.05, TFCE corrected). Increased FA values in WM areas of the AOMS compared with the POMS patients suggest that diffuse WM microstructure changes are more attributable to age of onset than a simple function of disease duration and age. Copyright © 2012 Wiley Periodicals, Inc.

  11. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    Energy Technology Data Exchange (ETDEWEB)

    Shrivastava, Manish [Pacific Northwest National Laboratory, Richland Washington USA; Zhao, Chun [Pacific Northwest National Laboratory, Richland Washington USA; Easter, Richard C. [Pacific Northwest National Laboratory, Richland Washington USA; Qian, Yun [Pacific Northwest National Laboratory, Richland Washington USA; Zelenyuk, Alla [Pacific Northwest National Laboratory, Richland Washington USA; Fast, Jerome D. [Pacific Northwest National Laboratory, Richland Washington USA; Liu, Ying [Pacific Northwest National Laboratory, Richland Washington USA; Zhang, Qi [Department of Environmental Toxicology, University of California Davis, California USA; Guenther, Alex [Department of Earth System Science, University of California, Irvine California USA

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected tunable model parameters: four involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate-volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate- to high-NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
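
    The variance-decomposition step can be sketched with a toy stand-in for the transport model: sample the tunable parameters, evaluate the 'model', regress the output on standardized parameters, and read off each parameter's approximate share of the output variance. Parameter names and effect sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the chemical transport model: "SOA loading" as a nonlinear
# function of a few tunable parameters scaled to [0, 1]. Names are illustrative.
names = ["bio_voc", "anth_sivoc", "nox", "dry_dep", "vol_transform"]
n = 250
theta = rng.random((n, len(names)))

def soa_model(t):
    bio, sivoc, nox, dep, switch = t.T
    on = np.round(switch)                 # on/off volatility transformation
    return (1.0 + 2.5 * bio + 1.5 * sivoc - 0.3 * nox - 0.5 * dep
            + 3.0 * on + 0.8 * bio * on)  # switch dominates, interacts with VOC

soa = soa_model(theta)

# First-order variance shares from a linear model on standardized parameters:
# share_j ~ beta_j^2 / Var(output) when the inputs are (near-)independent.
X = (theta - theta.mean(0)) / theta.std(0)
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), soa, rcond=None)
share_by = dict(zip(names, beta[1:] ** 2 / np.var(soa)))
for name, s in sorted(share_by.items(), key=lambda kv: -kv[1]):
    print(f"{name:14s} variance share ~ {s:.2f}")
```

    As in the study, the on/off transformation parameter dominates the variance, and the remaining variance is attributed mostly to the emission parameters.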

  12. A Framework for Establishing Standard Reference Scale of Texture by Multivariate Statistical Analysis Based on Instrumental Measurement and Sensory Evaluation.

    Science.gov (United States)

    Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye

    2016-01-13

    A framework for establishing a standard reference scale of texture is proposed by multivariate statistical analysis of instrumental measurements and sensory evaluations. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (the TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the regression between the estimated sensory value and the instrumentally measured value is significant (R(2) = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.
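
    The selection logic, picking one stable, representative sample per region of the hardness scale, can be sketched as follows. The measurement values are simulated, and simple binning stands in for the clustering step used in the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical TPA hardness measurements: 20 products x 8 replicate tests,
# with 5% multiplicative measurement noise.
true_hardness = np.sort(rng.uniform(5, 500, 20))
tpa = true_hardness[:, None] * (1 + rng.normal(0, 0.05, (20, 8)))

means = tpa.mean(axis=1)
rsd = tpa.std(axis=1) / means * 100          # relative standard deviation, %

# One stable, representative product per hardness band: bin the scale into five
# bands (a stand-in for clustering) and keep the lowest-RSD product in each.
n_points = 5
edges = np.linspace(means.min(), means.max() + 1e-9, n_points + 1)
bins = np.digitize(means, edges)
scale = []
for b in range(1, n_points + 1):
    members = np.flatnonzero(bins == b)
    if members.size:
        scale.append(members[np.argmin(rsd[members])])
reference_hardness = means[scale]
print("reference scale (hardness):", np.round(reference_hardness, 1))
```

    Low RSD across replicates serves as the stability criterion; representativeness comes from covering each band of the scale.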

  13. Statistical analysis of JET disruptions

    International Nuclear Information System (INIS)

    Tanga, A.; Johnson, M.F.

    1991-07-01

    In the operation of JET, and of any tokamak, many discharges are terminated by a major disruption. The disruptive termination of a discharge is usually an unwanted event which may cause damage to the structure of the vessel. In a reactor, disruptions are potentially a very serious problem, hence the importance of studying them and devising methods to avoid them. Statistical information has been collected about the disruptions which have occurred at JET over a long span of operations. The analysis is focused on the operational aspects of the disruptions rather than on the underlying physics. (Author)

  14. Statistical Analysis of Protein Ensembles

    Science.gov (United States)

    Máté, Gabriell; Heermann, Dieter

    2014-04-01

    As 3D protein-configuration data pile up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially since the vast majority of currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics that studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the sets of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
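    As a rough illustration of the barcode idea, the 0-dimensional part of a persistence barcode (connected components) can be read off the merge heights of single-linkage clustering; this sketch covers only that one feature class, with synthetic point clouds standing in for protein configurations:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def h0_barcode(points):
    """0-dimensional persistence barcode: all bars are born at 0 and die
    at the merge heights of single-linkage clustering (one bar never dies)."""
    Z = linkage(points, method="single")
    return np.sort(Z[:, 2])  # death times of the n-1 finite bars

rng = np.random.default_rng(4)
tight = rng.normal(0, 0.1, size=(30, 3))            # one compact "conformation"
split = np.vstack([rng.normal(0, 0.1, (15, 3)),
                   rng.normal(5, 0.1, (15, 3))])     # two well-separated clusters

# The split ensemble produces one long-lived bar: the late cluster merge.
print(h0_barcode(tight).max(), h0_barcode(split).max())
```

    Higher-dimensional bars (loops, voids) require a full persistent-homology library, but comparing the resulting bar sets statistically follows the same pattern.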

  15. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data to advanced undergraduate and graduate students in statistics, data science, and disciplines that involve analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce the theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end-of-chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  16. Register-based statistics statistical methods for administrative data

    CERN Document Server

    Wallgren, Anders

    2014-01-01

    This book provides a comprehensive and up-to-date treatment of theory and practical implementation in register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi

  17. An analysis of Greek seismicity based on Non Extensive Statistical Physics: The interdependence of magnitude, interevent time and interevent distance.

    Science.gov (United States)

    Efstathiou, Angeliki; Tzanis, Andreas; Vallianatos, Filippos

    2014-05-01

    The context of Non Extensive Statistical Physics (NESP) has recently been suggested to comprise an appropriate tool for the analysis of complex dynamic systems with scale invariance, long-range interactions, long-range memory and systems that evolve in a fractal-like space-time. This is because the active tectonic grain is thought to comprise a (self-organizing) complex system; therefore, its expression (seismicity) should be manifested in the temporal and spatial statistics of energy release rates. In addition to energy release rates expressed by the magnitude M, measures of the temporal and spatial interactions are the time (Δt) and hypocentral distance (Δd) between consecutive events. Recent work indicated that if the distributions of M, Δt and Δd are independent, so that the joint probability p(M,Δt,Δd) factorizes into the probabilities of M, Δt and Δd, i.e. p(M,Δt,Δd) = p(M)p(Δt)p(Δd), then the frequency of earthquake occurrence is multiply related, not only to magnitude as the celebrated Gutenberg-Richter law predicts, but also to interevent time and distance by means of well-defined power-laws consistent with NESP. The present work applies these concepts to investigate the self-organization and temporal/spatial dynamics of seismicity in Greece and western Turkey, for the period 1964-2011. The analysis was based on the ISC earthquake catalogue, which is homogeneous by construction, with consistently determined hypocenters and magnitudes. The presentation focuses on the analysis of bivariate Frequency-Magnitude-Time distributions, while using the interevent distances as spatial constraints (or spatial filters) for studying the spatial dependence of the energy and time dynamics of the seismicity. It is demonstrated that the frequency of earthquake occurrence is multiply related to the magnitude and the interevent time by means of well-defined multi-dimensional power-laws consistent with NESP and has attributes of universality, as it holds for a broad
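    For reference, the NESP power-laws mentioned are typically expressed through the q-exponential (Tsallis) function; the notation below (a scale X_0 and an entropic index q fitted per variable) is assumed here, not quoted from this abstract:

```latex
P(>X) \;=\; \exp_q\!\left(-\frac{X}{X_0}\right),
\qquad
\exp_q(x) \;=\; \bigl[\,1 + (1-q)\,x\,\bigr]^{\frac{1}{1-q}}
```

    which reduces to the ordinary exponential as q → 1 and yields power-law tails for q > 1, applied separately to M, Δt and Δd in analyses of this kind.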

  18. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high-quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics is needed to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated by the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the opaque reporting of many data analysis routines, combined with a shortage of adequate training and accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Voxel-based analysis of cerebral glucose metabolism in AD and non-AD degenerative dementia using statistical parametric mapping

    International Nuclear Information System (INIS)

    Li Zugui; Gao Shuo; Zhang Benshu; Ma Aijun; Cai Li; Li Dacheng; Li Yansheng; Liu Lei

    2008-01-01

    Objective: It is known that Alzheimer's disease (AD) and non-AD degenerative dementia have some clinical features in common. The aim of this study was to investigate the specific patterns of regional cerebral glucose metabolism in AD and non-AD degenerative dementia patients, using a voxel-based 18F-fluorodeoxyglucose (FDG) PET study. Methods: Twenty-three AD patients and 24 non-AD degenerative dementia patients, including 9 Parkinson's disease with dementia (PDD), 7 frontotemporal dementia (FTD) and 8 dementia with Lewy bodies (DLB) patients, and 40 normal controls (NC), were included in the study. To evaluate the relative cerebral metabolic rate of glucose (rCMRglc), 18F-FDG PET imaging was performed in all subjects. Subsequently, statistical comparison of PET data with NC was performed using statistical parametric mapping (SPM). Results: The AD-associated FDG imaging pattern typically presented as focal cortical hypometabolism in the bilateral parietotemporal association cortices and (or) frontal lobe and the posterior cingulate gyrus. As compared with the NC, the FTD group demonstrated significant regional reductions in rCMRglc in the bilateral frontal and parietal lobes, the cingulate gyri, insulae, left precuneus, and the subcortical structures (including right putamen, right medial dorsal nucleus and ventral anterior nucleus). The PDD group showed regional reductions in rCMRglc in the bilateral frontal cortices, parietotemporal association cortices, and the subcortical structures (including left caudate, right putamen, the dorsomedial thalamus, lateral posterior nucleus, and pulvinar). In the voxel-by-voxel comparison between the DLB group and the NC group, regional reductions in rCMRglc included the bilateral occipital cortices, precuneuses, frontal and parietal lobes, left anterior cingulate gyrus, right superior temporal cortex, and the subcortical structures including putamen, caudate, lateral posterior nucleus, and pulvinar. Conclusions: The rCMRglc was found to be different

  20. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to less known, but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper, Tiku. Thanks to the component-based design and the usage of the standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. This Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms and the computational features of the Toolkit, and present the code validation.
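    A minimal illustration of two of the named Goodness-of-Fit tests, using SciPy rather than the Toolkit itself (the Toolkit's own API is not shown in the abstract):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=500)

# Kolmogorov-Smirnov test against a standard normal model.
ks_stat, ks_p = stats.kstest(sample, "norm")

# Anderson-Darling test; scipy reports the statistic together with
# critical values at the 15%, 10%, 5%, 2.5% and 1% significance levels.
ad = stats.anderson(sample, dist="norm")

print(f"KS D={ks_stat:.3f} p={ks_p:.3f}")
print(f"AD A2={ad.statistic:.3f} (5% critical value {ad.critical_values[2]:.3f})")
```

    Anderson-Darling weights the tails more heavily than Kolmogorov-Smirnov, which is why the abstract calls it generally more powerful.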

  1. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

    This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts and different methodologies are used, namely the likelihood ratio criterion, and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.
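    The likelihood ratio criterion for a single change point can be sketched in its simplest setting, a mean shift in normal data with known unit variance; the simulated series below is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated series with a mean shift at index 60.
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(2, 1, 40)])
n = len(x)

# For a single mean change at k with known variance, maximizing the
# log-likelihood ratio is equivalent to maximizing the between-segment
# sum of squares around the overall mean.
best_k, best_stat = None, -np.inf
for k in range(5, n - 5):          # keep a few points in each segment
    m1, m2, m = x[:k].mean(), x[k:].mean(), x.mean()
    stat = k * (m1 - m) ** 2 + (n - k) * (m2 - m) ** 2
    if stat > best_stat:
        best_k, best_stat = k, stat
print(best_k)
```

    A formal test would compare the maximized statistic to the null distribution of its supremum; only the location estimate is shown here.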

  2. Statistical analysis and decoding of neural activity in the rodent geniculate ganglion using a metric-based inference system.

    Directory of Open Access Journals (Sweden)

    Wei Wu

    We analyzed the spike discharge patterns of two types of neurons in the rodent peripheral gustatory system, Na specialists (NS) and acid generalists (AG), in response to lingual stimulation with NaCl, acetic acid, and mixtures of the two stimuli. Previous computational investigations found that both spike rate and spike timing contribute to taste quality coding. These studies used commonly accepted computational methods, but they do not provide a consistent statistical evaluation of spike trains. In this paper, we adopted a new computational framework that treated each spike train as an individual data point for computing summary statistics such as mean and variance in the spike train space. We found that these statistical summaries properly characterized the firing patterns (e.g., template and variability) and quantified the differences between NS and AG neurons. The same framework was also used to assess the discrimination performance of NS and AG neurons and to remove spontaneous background activity or "noise" from the spike train responses. The results indicated that the new metric system provided the desired decoding performance and that noise removal improved stimulus classification accuracy, especially for neurons with high spontaneous rates. In summary, this new method naturally conducts statistical analysis and neural decoding under one consistent framework, and the results demonstrated that individual peripheral gustatory neurons generate a unique and reliable firing pattern during sensory stimulation and that this pattern can be reliably decoded.
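    The abstract does not name its metric, so as a stand-in the well-known Victor-Purpura spike-train distance can illustrate how spike trains become points in a metric space where summary statistics (here a medoid as the "mean") are computed; the trains below are synthetic:

```python
import numpy as np

def vp_distance(a, b, q=1.0):
    """Victor-Purpura edit distance between spike-time arrays:
    insert/delete a spike costs 1, shifting a spike costs q per unit time."""
    na, nb = len(a), len(b)
    D = np.zeros((na + 1, nb + 1))
    D[:, 0] = np.arange(na + 1)        # delete all spikes of a
    D[0, :] = np.arange(nb + 1)        # insert all spikes of b
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            D[i, j] = min(D[i - 1, j] + 1,
                          D[i, j - 1] + 1,
                          D[i - 1, j - 1] + q * abs(a[i - 1] - b[j - 1]))
    return D[na, nb]

# A "mean" train under this metric can be approximated by the medoid:
# the observed train minimizing the summed distance to all the others.
trains = [np.array([0.1, 0.5, 0.9]), np.array([0.12, 0.48, 0.95]),
          np.array([0.3, 0.7])]
costs = [sum(vp_distance(t, u) for u in trains) for t in trains]
print(int(np.argmin(costs)))
```

    Variance follows the same pattern: the mean of squared distances to the medoid, giving the consistent statistical summaries the paper describes.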

  3. Quantitative profiling of polar metabolites in herbal medicine injections for multivariate statistical evaluation based on independence principal component analysis.

    Directory of Open Access Journals (Sweden)

    Miaomiao Jiang

    Botanical primary metabolites are extensively present in herbal medicine injections (HMIs), but their control has often been neglected. Because the routinely applied reversed-phase chromatographic fingerprint technology is biased against strongly polar, hydrophilic substances, primary metabolites such as saccharides, amino acids and organic acids are usually difficult to detect. In this study, a proton nuclear magnetic resonance (1H NMR) profiling method was developed for efficient identification and quantification of small polar molecules, mostly primary metabolites, in HMIs. A commonly used medicine, Danhong injection (DHI), was employed as a model. With the developed method, 23 primary metabolites together with 7 polyphenolic acids were simultaneously identified, of which 13 metabolites with fully separated proton signals were quantified and employed for further multivariate quality control assay. The quantitative 1H NMR method was validated with good linearity, precision, repeatability, stability and accuracy. Based on independence principal component analysis (IPCA), the contents of the 13 metabolites were characterized and dimensionally reduced into the first two independence principal components (IPCs). IPC1 and IPC2 were then used to calculate the upper control limits (with 99% confidence ellipsoids) of χ2 and Hotelling T2 control charts. Through the constructed upper control limits, the proposed method was successfully applied to 36 batches of DHI to detect out-of-control samples with perturbed levels of succinate, malonate, glucose, fructose, salvianic acid and protocatechuic aldehyde. The integrated strategy has provided a reliable approach to identify and quantify multiple polar metabolites of DHI in one fingerprinting spectrum, and it has also assisted in the establishment of IPCA models for the multivariate statistical evaluation of HMIs.
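    The Hotelling T² control chart step can be sketched as follows; the scores are simulated stand-ins for the IPC1/IPC2 values, and the 99% F-quantile is hard-coded to keep the sketch dependency-free:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical scores of 36 batches on the first two components.
scores = rng.normal(size=(36, 2))

mean = scores.mean(axis=0)
cov = np.cov(scores, rowvar=False)
inv_cov = np.linalg.inv(cov)

# Hotelling T^2 statistic per batch (quadratic form in the deviations).
d = scores - mean
t2 = np.einsum("ij,jk,ik->i", d, inv_cov, d)

# 99% upper control limit via the F-distribution relation
#   UCL = p(n-1)(n+1) / (n(n-p)) * F_{0.99}(p, n-p)
# with the quantile F_{0.99}(2, 34) ≈ 5.29 hard-coded here.
n, p = scores.shape
ucl = p * (n - 1) * (n + 1) / (n * (n - p)) * 5.29
out_of_control = np.where(t2 > ucl)[0]
print(round(ucl, 2), out_of_control)
```

    Batches whose scores fall outside the limit are flagged for inspection, which is how the perturbed-metabolite samples in the study would surface.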

  4. Statistical state dynamics-based analysis of the physical mechanisms sustaining and regulating turbulence in Couette flow

    Science.gov (United States)

    Farrell, Brian F.; Ioannou, Petros J.

    2017-08-01

    This paper describes a study of the self-sustaining process in wall turbulence. The study is based on a second order statistical state dynamics model of Couette flow in which the state variables are the streamwise mean flow (first cumulant) and perturbation covariance (second cumulant). This statistical state dynamics model is closed by either setting the third cumulant to zero or by replacing it with a stochastic parametrization. Statistical state dynamics models with this form are referred to as S3T models. S3T models have been shown to self-sustain turbulence with a mean flow and second order perturbation structure similar to that obtained by direct numerical simulation of the equations of motion. The use of a statistical state dynamics model to study the physical mechanisms underlying turbulence has important advantages over the traditional approach of studying the dynamics of individual realizations of turbulence. One advantage is that the analytical structure of S3T statistical state dynamics models isolates the interaction between the mean flow and the perturbation components of the turbulence. Isolation of the interaction between these components reveals how this interaction underlies both the maintenance of the turbulence variance by transfer of energy from the externally driven flow to the perturbation components as well as the enforcement of the observed statistical mean turbulent state by feedback regulation between the mean and perturbation fields. Another advantage of studying turbulence using statistical state dynamics models of S3T form is that the analytical structure of S3T turbulence can be completely characterized. For example, the perturbation component of turbulence in the S3T system is demonstrably maintained by a parametric perturbation growth mechanism in which fluctuation of the mean flow maintains the perturbation field which in turn maintains the mean flow fluctuations in a synergistic interaction. Furthermore, the equilibrium

  5. A novel genome-information content-based statistic for genome-wide association analysis designed for next-generation sequencing data.

    Science.gov (United States)

    Luo, Li; Zhu, Yun; Xiong, Momiao

    2012-06-01

    Genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing the association of genomic variants, including common, low-frequency, and rare variants. The current strategies for association studies are well developed for identifying the association of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the variant-by-variant analysis paradigm of GWAS of common variants toward the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing the association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole-genome low-coverage pilot data from the 1000 Genomes Project to calculate the type I error rates and power of seven alternative statistics: the genome-information content-based statistic, the generalized T(2), the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ(2) test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type I error rates and higher power than the other six statistics in both simulated and empirical datasets.
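    A minimal sketch of the collapsing idea that the group tests build on, with simulated genotypes and a permutation p-value (all sample sizes and allele frequencies below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical 0/1 rare-allele indicators for 10 variants in a region,
# 200 cases and 200 controls, with cases enriched for rare alleles.
controls = (rng.random((200, 10)) < 0.01).astype(int)
cases = (rng.random((200, 10)) < 0.03).astype(int)

def carrier_rate(g):
    """Collapsing step: fraction of subjects carrying any rare allele."""
    return (g.sum(axis=1) > 0).mean()

obs = carrier_rate(cases) - carrier_rate(controls)

# Permutation null: shuffle case/control labels and recompute the gap.
pooled = np.vstack([cases, controls])
perm = []
for _ in range(2000):
    idx = rng.permutation(400)
    perm.append(carrier_rate(pooled[idx[:200]]) - carrier_rate(pooled[idx[200:]]))
pval = (np.abs(perm) >= abs(obs)).mean()
print(round(obs, 3), pval)
```

    Collapsing discards per-variant information, which is exactly the limitation the genome-information content-based statistic is designed to address.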

  6. GIS-based spatial statistical analysis of risk areas for liver flukes in Surin Province of Thailand.

    Science.gov (United States)

    Rujirakul, Ratana; Ueng-arporn, Naporn; Kaewpitoon, Soraya; Loyd, Ryan J; Kaewthani, Sarochinee; Kaewpitoon, Natthawut

    2015-01-01

    It is urgently necessary to be aware of the distribution and risk areas of the liver fluke, Opisthorchis viverrini, for proper allocation of prevention and control measures. This study aimed to investigate the human behavioral and environmental factors influencing its distribution in Surin Province of Thailand, and to build a model using stepwise multiple regression analysis with a geographic information system (GIS) on environment and climate data. The regression model relating infection distribution to human behavior and attitudes fit well (R Square=0.878, Adjusted R Square=0.849). By GIS analysis, we found Si Narong, Sangkha, Phanom Dong Rak, Mueang Surin, Non Narai, Samrong Thap, Chumphon Buri, and Rattanaburi to have the highest distributions in Surin Province. In conclusion, the combination of GIS and statistical analysis can help simulate the spatial distribution and risk areas of the liver fluke, and thus may be an important tool for future planning of prevention and control measures.

  7. Analysis of photon statistics with Silicon Photomultiplier

    International Nuclear Information System (INIS)

    D'Ascenzo, N.; Saveliev, V.; Wang, L.; Xie, Q.

    2015-01-01

    The Silicon Photomultiplier (SiPM) is a novel silicon-based photodetector representing the modern approach to low-photon-flux detection. The aim of this paper is to provide an introduction to the statistical analysis methods needed to understand and estimate, in a quantitative way, the features of the response of the SiPM to a coherent source of light.

  8. Statistical analysis of management data

    CERN Document Server

    Gatignon, Hubert

    2013-01-01

    This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.

  9. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because everyone must be able to manage and use the information that can now be obtained easily from all over the world. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. Developing this skill can be done through various levels of education. However, the skill is often weak because many people, students included, assume that statistics is merely the ability to count and use formulas. Students also still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course with respect to statistical reasoning skill. Data were collected by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of a mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. Taking 65 as the minimum value for standard achievement of course competence, the students' mean values are lower than the standard. The misconception results emphasize which subtopics should be reconsidered. Based on the assessment, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. In statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  10. Voxel based statistical analysis method for microPET studies to assess the cerebral glucose metabolism in cat deafness model: comparison to ROI based method

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of); Lim, Sang Moo [KIRAMS, Seoul (Korea, Republic of)

    2005-07-01

    Imaging research on the brains of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible owing to its larger brain size compared with the mouse or rat. In this study, we established procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain, and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted, using a masking and threshold method, to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain owing to the removal of background activity. When the cerebral glucose metabolism of the deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats, even at a less stringent significance level (P<0.05). The ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining cerebral glucose metabolism in a cat cortical deafness model.

  11. Voxel based statistical analysis method for microPET studies to assess the cerebral glucose metabolism in cat deafness model: comparison to ROI based method

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Lee, Jong Jin; Kang, Hye Jin; Lee, Hyo Jeong; Oh, Seung Ha; Kim, Chong Sun; Jung, June Key; Lee, Myung Chul; Lee, Dong Soo; Lim, Sang Moo

    2005-01-01

    Imaging research on the brains of sensory-deprived cats using a small animal PET scanner has gained interest, since abundant information about the sensory system of this animal is available and close examination of the brain is possible owing to its larger brain size compared with the mouse or rat. In this study, we established procedures for 3D voxel-based statistical analysis (SPM) of FDG PET images of the cat brain, and confirmed them using an ROI-based method. FDG PET scans of 4 normal and 4 deaf cats were acquired for 30 minutes using a microPET R4 scanner. Only the brain cortices were extracted, using a masking and threshold method, to facilitate spatial normalization. After spatial normalization and smoothing, 3D voxel-wise and ROI-based t-tests were performed to identify regions with significantly different FDG uptake between the normal and deaf cats. In the ROI analysis, 26 ROIs were drawn on both hemispheres, and the regional mean pixel value in each ROI was normalized to the global mean of the brain. Cat brains were spatially normalized well onto the target brain owing to the removal of background activity. When the cerebral glucose metabolism of the deaf cats was compared to that of the normal controls after removing the effects of the global count, the glucose metabolism in the auditory cortex, head of the caudate nucleus, and thalamus in both hemispheres of the deaf cats was significantly lower than that of the controls (P<0.01). No area showed significantly increased metabolism in the deaf cats, even at a less stringent significance level (P<0.05). The ROI analysis also showed a significant reduction of glucose metabolism in the same regions. This study established and confirmed a method for voxel-based analysis of animal PET data of the cat brain, which showed high localization accuracy and specificity and was useful for examining cerebral glucose metabolism in a cat cortical deafness model.
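    The voxel-wise two-sample t-test at the core of such an SPM analysis can be sketched on simulated uptake maps; the grid size, group sizes and effect below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical normalized uptake for 4 normal and 4 deaf subjects over a
# small 8x8x8 voxel grid; one cluster of voxels is reduced in the deaf group.
normal = rng.normal(1.0, 0.05, size=(4, 8, 8, 8))
deaf = rng.normal(1.0, 0.05, size=(4, 8, 8, 8))
deaf[:, 2:4, 2:4, 2:4] -= 0.3          # simulated hypometabolic region

# Pooled-variance two-sample t statistic at every voxel (df = 6).
n1, n2 = 4, 4
m1, m2 = normal.mean(axis=0), deaf.mean(axis=0)
v1 = normal.var(axis=0, ddof=1)
v2 = deaf.var(axis=0, ddof=1)
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / np.sqrt(sp2 * (1 / n1 + 1 / n2))

# Threshold at the one-sided t value for roughly P < 0.01, df = 6 (~3.14).
sig = t > 3.14
print(sig.sum())
```

    A full SPM pipeline additionally corrects for the many voxels tested (e.g. via random field theory), which this sketch omits.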

  12. Statistical analysis of brake squeal noise

    Science.gov (United States)

    Oberst, S.; Lai, J. C. S.

    2011-06-01

    Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.
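    A toy version of one recurrence-based measure used in such analyses, the recurrence rate of a scalar series, shows how it separates structured from random signals (the series and threshold here are illustrative, not dynamometer data):

```python
import numpy as np

def recurrence_rate(x, eps):
    """Fraction of recurrent point pairs in the recurrence plot of series x,
    i.e. pairs (i, j) with |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).mean()

rng = np.random.default_rng(2)
periodic = np.sin(np.linspace(0, 20 * np.pi, 200))   # deterministic signal
noise = rng.normal(size=200)                          # random signal

print(recurrence_rate(periodic, 0.1), recurrence_rate(noise, 0.1))
```

    Joint recurrence analysis, as used in the paper, applies the same construction to two series at once (friction coefficient and sound pressure) and quantifies the overlap of their recurrence plots.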

  13. Urban seismic risk assessment: statistical repair cost data and probable structural losses based on damage scenario—correlation analysis

    Science.gov (United States)

    Eleftheriadou, Anastasia K.; Baltzopoulou, Aikaterini D.; Karabinis, Athanasios I.

    2016-06-01

    The current seismic risk assessment is based on two discrete approaches, actual and probable, with the produced results validated against each other afterwards. In the first part of this research, the seismic risk is evaluated from the available data on the mean statistical repair/strengthening or replacement cost for the total number of damaged structures (180,427 buildings) after the 7/9/1999 Parnitha (Athens) earthquake. The actual evaluated seismic risk is then compared to the estimated probable structural losses, presented in the second part of the paper, based on a damage scenario for the same earthquake. The applied damage scenario is based on recently developed damage probability matrices (DPMs) from the Athens (Greece) damage database. The seismic risk estimation refers to 750,085 buildings situated in the extended urban region of Athens. The building exposure is categorized into five typical structural types and represents 18.80 % of the entire building stock in Greece. The latter information is provided by the National Statistics Service of Greece (NSSG) according to the 2000-2001 census. The seismic input is characterized by the ratio a_g/a_o, where a_g is the regional peak ground acceleration (PGA), evaluated from the earlier estimated research macroseismic intensities, and a_o is the PGA according to the hazard map of the 2003 Greek Seismic Code. Finally, the collected financial data, derived from the different national services responsible for post-earthquake crisis management and concerning the repair/strengthening or replacement costs and other categories of costs for the rehabilitation of earthquake victims (construction and operation of settlements for the earthquake homeless, rent supports, demolitions, shorings), are used to determine the final total seismic risk factor.
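    The probable-loss computation from a damage probability matrix reduces to a weighted sum over damage states and structural types; the DPM entries, costs and exposure counts below are hypothetical stand-ins, not the study's values:

```python
import numpy as np

# Hypothetical damage probability matrix for one intensity level:
# rows = structural types, columns = damage states
# (none, slight, moderate, heavy, collapse); each row sums to 1.
dpm = np.array([[0.50, 0.30, 0.12, 0.06, 0.02],
                [0.40, 0.30, 0.17, 0.09, 0.04]])

# Mean repair/replacement cost per building in each damage state (EUR).
cost = np.array([0, 2_000, 10_000, 40_000, 90_000])

# Number of exposed buildings of each structural type.
exposure = np.array([500_000, 250_085])

expected_cost_per_building = dpm @ cost      # EUR per building, per type
total_loss = exposure @ expected_cost_per_building
print(expected_cost_per_building, f"{total_loss:,.0f}")
```

    Summing the per-type expected costs over the exposed stock gives the probable structural loss that is then compared with the recorded actual costs.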

  14. A standards-based method for compositional analysis by energy dispersive X-ray spectrometry using multivariate statistical analysis: application to multicomponent alloys.

    Science.gov (United States)

    Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W

    2013-02-01

    Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
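
The core of such a standards-based method is linear algebra: express the unknown spectrum as a least-squares combination of reference-standard spectra, then apply the same combination to the standards' known compositions. The two-standard sketch below shows only that mechanism; the spectra and compositions are synthetic stand-ins, not real EDX data.

```python
# Standards-based composition sketch: fit an unknown EDX spectrum as a
# linear combination of reference-standard spectra (2x2 normal equations),
# then combine the standards' known compositions with the same weights.
# Spectra and compositions below are synthetic placeholders.

def lstsq_2(a1, a2, y):
    """Weights (w1, w2) minimizing ||w1*a1 + w2*a2 - y||^2."""
    dot = lambda u, v: sum(x * z for x, z in zip(u, v))
    g11, g12, g22 = dot(a1, a1), dot(a1, a2), dot(a2, a2)
    b1, b2 = dot(a1, y), dot(a2, y)
    det = g11 * g22 - g12 * g12
    return (g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det

std1 = [4.0, 1.0, 0.0]        # spectrum of standard 1 (known composition)
std2 = [0.0, 1.0, 3.0]        # spectrum of standard 2
unknown = [2.0, 1.0, 1.5]     # measured spectrum of the unknown specimen

w1, w2 = lstsq_2(std1, std2, unknown)
comp1 = {"Ga": 0.5, "As": 0.5}   # hypothetical atomic fractions
comp2 = {"In": 0.5, "P": 0.5}
total = w1 + w2
composition = {el: w1 / total * f for el, f in comp1.items()}
composition.update({el: w2 / total * f for el, f in comp2.items()})
```

A real implementation would add the absorption correction from foil thickness and use many standards via multivariate statistics, as the abstract describes.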

  15. Statistical analysis of electromagnetic radiation measurements in the vicinity of GSM/UMTS base station antenna masts.

    Science.gov (United States)

    Koprivica, Mladen; Neskovic, Natasa; Neskovic, Aleksandar; Paunovic, George

    2014-01-01

    As a result of the dense installation of public mobile base stations, additional electromagnetic radiation occurs in the living environment. In order to determine the level of radio-frequency radiation generated by base stations, extensive electromagnetic field strength measurements were carried out at 664 base station locations. The locations were classified into three categories: indoor installations, masts, and installations on buildings. Given the large percentage (47 %) of sites with antenna masts, a detailed analysis of this location category was performed and the measurement results are presented. The total electric field strength in the vicinity of base station antenna masts in no case exceeded 10 V m(-1), well below the International Commission on Non-Ionizing Radiation Protection reference levels. At horizontal distances >50 m from the mast bottom, the median and maximum values were <1 and 2 V m(-1), respectively.

  16. Statistical analysis of electromagnetic radiation measurements in the vicinity of GSM/UMTS base station antenna masts

    International Nuclear Information System (INIS)

    Koprivica, M.; Neskovic, N.; Neskovic, A.; Paunovic, G.

    2014-01-01

    As a result of the dense installation of public mobile base stations, additional electromagnetic radiation occurs in the living environment. In order to determine the level of radio-frequency radiation generated by base stations, extensive electromagnetic field strength measurements were carried out at 664 base station locations. The locations were classified into three categories: indoor installations, masts, and installations on buildings. Given the large percentage (47 %) of sites with antenna masts, a detailed analysis of this location category was performed and the measurement results are presented. The total electric field strength in the vicinity of base station antenna masts in no case exceeded 10 V m -1 , well below the International Commission on Non-Ionizing Radiation Protection reference levels. At horizontal distances >50 m from the mast bottom, the median and maximum values were <1 and 2 V m -1 , respectively. (authors)

  17. Statistically based uncertainty analysis for ranking of component importance in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor

    International Nuclear Information System (INIS)

    Wilson, G.E.

    1992-01-01

    The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion, so it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental database exists to justify the comparisons; however, with limited or no experimental data the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach: a statistically based uncertainty analysis. The new work builds on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss-of-coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem with a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability is also documented
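
The propagation step described here can be sketched compactly: perturb the expert pairwise comparison matrix with a sampled error, recompute the AHP priority vector (the principal eigenvector) each time, and examine the spread of the resulting weights. The 3x3 matrix, lognormal error pdf, and sample count below are illustrative assumptions, not values from the report.

```python
# Monte Carlo sketch of AHP rank uncertainty: multiplicative perturbation of
# pairwise comparisons, priority vector via power iteration, weight spread.
# Matrix entries and the error pdf are hypothetical.
import random

def priority_vector(m, iters=200):
    """Principal eigenvector of a positive matrix via power iteration."""
    n = len(m)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(m[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

base = [[1.0, 3.0, 5.0],
        [1 / 3.0, 1.0, 2.0],
        [1 / 5.0, 1 / 2.0, 1.0]]   # hypothetical pairwise importance

random.seed(0)
samples = []
for _ in range(500):
    m = [row[:] for row in base]
    for i in range(3):
        for j in range(i + 1, 3):
            f = random.lognormvariate(0.0, 0.1)   # assumed error pdf
            m[i][j] *= f
            m[j][i] = 1.0 / m[i][j]               # keep reciprocity
    samples.append(priority_vector(m))

mean_w0 = sum(s[0] for s in samples) / len(samples)
```

The empirical distribution of each weight across samples plays the role of the pdf of system-wide importance described in the abstract.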

  18. A Statistical Analysis of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Stephen Chan

    2017-05-01

    Full Text Available We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates against the U.S. Dollar by fitting parametric distributions to them. The returns are clearly non-normal, and no single distribution fits all of the analysed cryptocurrencies well. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian, generalized t, and Laplace distributions give good fits. The results are important for investment and risk management purposes.
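
The comparison underlying such a study is a likelihood contest between candidate distributions fitted to the same return series. The stdlib-only sketch below compares just normal and Laplace maximum-likelihood fits on synthetic heavy-tailed "returns" (fitting the generalized hyperbolic family needs heavier tooling and is omitted); none of the numbers relate to the paper's data.

```python
# Fit normal and Laplace distributions to a return series by maximum
# likelihood and compare log-likelihoods. Synthetic Laplace "returns"
# stand in for real exchange-rate data.
import math
import random

def normal_loglik(x):
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def laplace_loglik(x):
    n = len(x)
    med = sorted(x)[n // 2]                  # MLE location ~ median
    b = sum(abs(v - med) for v in x) / n     # MLE scale
    return -n * (math.log(2 * b) + 1)

random.seed(1)
# Laplace-distributed synthetic returns (heavier tails than normal)
returns = [random.choice([-1, 1]) * random.expovariate(20)
           for _ in range(2000)]

ll_norm = normal_loglik(returns)
ll_lap = laplace_loglik(returns)
# For heavy-tailed data, the Laplace fit attains the higher log-likelihood
```

In practice one would rank many candidate families by log-likelihood or an information criterion such as AIC.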

  19. FLOODPLAIN PLANNING BASED ON STATISTICAL ANALYSIS OF TILPARA BARRAGE DISCHARGE: A CASE STUDY ON MAYURAKSHI RIVER BASIN

    Directory of Open Access Journals (Sweden)

    Vibhash Chandra Jha

    2012-10-01

    Full Text Available Floods in West Bengal are responsible for colossal losses of human life, crops, and property. In recent years, various measures of flood control and management have been adopted. However, flooding in rivers such as the Brahmani profoundly challenges flood-hazard management because of the inadequacy of conventional data and the high spatio-temporal variability of floods. To understand flood hazards and environmental change, it is imperative that engineers and hydrologists utilize historical and paleoflood records to improve risk analyses and to estimate the probable maximum flood on rivers in such highly flood-prone regions (Parkar, 2000). Flood frequency analysis, probable peak discharge analysis, return period analysis, and floodplain zoning based on ancillary data will support better flood management in the Mayurakshi River basin, situated in the districts of Birbhum and Murshidabad.
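
A standard building block of such flood frequency analysis is fitting an extreme-value distribution to annual peak discharges and reading off return levels. The sketch below uses a Gumbel (EV1) method-of-moments fit; the discharge series is invented for illustration and is not Tilpara barrage data.

```python
# Flood-frequency sketch: Gumbel (EV1) fit by the method of moments,
# then the T-year return level. Discharges are illustrative only.
import math

def gumbel_fit(peaks):
    n = len(peaks)
    mean = sum(peaks) / n
    std = (sum((q - mean) ** 2 for q in peaks) / (n - 1)) ** 0.5
    beta = std * math.sqrt(6) / math.pi       # scale parameter
    mu = mean - 0.5772 * beta                 # location (Euler-Mascheroni)
    return mu, beta

def return_level(mu, beta, t):
    """Discharge exceeded on average once every t years."""
    return mu - beta * math.log(-math.log(1 - 1 / t))

peaks = [4200, 3900, 5100, 4700, 6100,
         3500, 4900, 5600, 4400, 5200]        # annual maxima, m^3/s
mu, beta = gumbel_fit(peaks)
q100 = return_level(mu, beta, 100)            # 100-year flood estimate
```

The return period of an observed discharge follows from inverting the same relation.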

  20. Research on the optimization of air quality monitoring station layout based on spatial grid statistical analysis method.

    Science.gov (United States)

    Li, Tianxin; Zhou, Xing Chen; Ikhumhen, Harrison Odion; Difei, An

    2018-05-01

    In recent years, with the significant increase in urban development, it has become necessary to optimize the layout of air monitoring stations so that they reflect the quality of air in the environment. To assess the spatial representativeness of the monitoring stations, the monthly mean particulate matter concentration (PM10) was calculated from Beijing's regional air monitoring station data from 2012 to 2014, and the spatial distribution of PM10 concentration over the whole region was derived using the IDW interpolation method and a spatial grid statistical method in GIS. The spatial distribution variation across the districts of Beijing was analyzed with a gridding model (1.5 km × 1.5 km cell resolution), and the three-year spatial analysis of the PM10 concentration data, including its variation and spatial overlay, showed how often the total PM10 concentration exceeded the standard. It is therefore very important to optimize the layout of the existing air monitoring stations by combining the concentration distribution of air pollutants with the spatial region using GIS.
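
The interpolation step named here, inverse distance weighting (IDW) onto a regular grid followed by per-cell statistics, can be sketched in a few lines. The station coordinates, values, and exceedance threshold below are made up for illustration, not Beijing data.

```python
# IDW sketch: estimate PM10 on a regular grid from scattered stations,
# then count grid cells above a threshold. All values are illustrative.

def idw(x, y, stations, power=2):
    """IDW estimate at (x, y) from (sx, sy, value) triples."""
    num = den = 0.0
    for sx, sy, v in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0:
            return v                        # exactly on a station
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0, 80.0), (10.0, 0.0, 120.0), (5.0, 8.0, 100.0)]
grid = [[idw(i, j, stations) for i in range(11)] for j in range(9)]
exceed = sum(v > 100 for row in grid for v in row)  # cells above threshold
```

Note that IDW estimates are bounded by the minimum and maximum station values, a useful sanity check on the interpolated surface.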

  1. Statistical analysis of modal parameters of a suspension bridge based on Bayesian spectral density approach and SHM data

    Science.gov (United States)

    Li, Zhijun; Feng, Maria Q.; Luo, Longxi; Feng, Dongming; Xu, Xiuli

    2018-01-01

    Uncertainty in modal parameter estimation appears to a significant extent in structural health monitoring (SHM) practice in civil engineering, due to environmental influences and modeling errors, and sound methodologies are needed to process this uncertainty. Bayesian inference can provide a promising and feasible identification solution for the purposes of SHM; however, there has been relatively little research on applying the Bayesian spectral method to modal identification using SHM data sets. To extract modal parameters from the large data sets collected by an SHM system, the Bayesian spectral density algorithm was applied to address the uncertainty of mode extraction from the output-only response of a long-span suspension bridge. The most probable values of the modal parameters and their uncertainties were estimated through Bayesian inference. A long-term variation and statistical analysis was performed using the sensor data sets collected from the SHM system of the suspension bridge over a one-year period. The t location-scale distribution was shown to be a better candidate function for the frequencies of the lower modes, while the Burr distribution provided the best fit to the higher modes, which are sensitive to temperature. In addition, wind-induced variation of the modal parameters was investigated. Both the damping ratios and modal forces increased during periods of typhoon excitation, and the modal damping ratios exhibit significant correlation with the spectral intensities of the corresponding modal forces.

  2. The joint return period analysis of natural disasters based on monitoring and statistical modeling of multidimensional hazard factors

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xueqin [State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875 (China); National Marine Environmental Monitoring Center, State Oceanic Administration, Dalian 116023 (China); School of Social Development and Public Policy, Beijing Normal University, Beijing 100875 (China); Li, Ning [State Key Laboratory of Earth Surface Processes and Resource Ecology, Beijing Normal University, Beijing 100875 (China); Yuan, Shuai, E-mail: syuan@nmemc.org.cn [National Marine Environmental Monitoring Center, State Oceanic Administration, Dalian 116023 (China); Xu, Ning; Shi, Wenqin; Chen, Weibin [National Marine Environmental Monitoring Center, State Oceanic Administration, Dalian 116023 (China)

    2015-12-15

    As random events, natural disasters have complex occurrence mechanisms, and comprehensive analysis of multiple hazard factors is important in disaster risk assessment. To improve the accuracy of risk analysis and forecasting, the formation mechanism of a disaster should be considered in the analysis and calculation of multiple factors. Considering the importance of, and deficiencies in, multivariate analysis of dust storm disasters, 91 severe dust storm disasters in Inner Mongolia from 1990 to 2013 were selected as study cases. The main hazard factors from the 500-hPa atmospheric circulation system, the near-surface meteorological system, and underlying surface conditions were selected to simulate and calculate the multidimensional joint return periods. Comparing the simulation results with actual dust storm events over 54 years showed that the two-dimensional Frank copula function fit better at the lower tail of the hazard factors, while the three-dimensional Frank copula function fit better at the middle and upper tails. For dust storm disasters with short return periods, the three-dimensional joint return period simulation shows no obvious advantage; for return periods longer than 10 years, however, it shows significant advantages in extreme value fitting. We therefore suggest that the multivariate analysis method be adopted for forecasting and risk analysis of serious disasters with longer return periods, such as earthquakes and tsunamis. Furthermore, the exploration of this method lays the foundation for the prediction and warning of other natural disasters. - Highlights: • A method to estimate the multidimensional joint return periods is presented. • The 2D function gives better fitting results at the lower tail of hazard factors. • The three-dimensional simulation has obvious advantages in extreme value fitting. • Joint return periods are closer to reality
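
The bivariate version of the joint return period used in such analyses has a closed form once the copula is chosen: for the "OR" case, T = mu / (1 - C(u, v)), where u and v are marginal non-exceedance probabilities and mu is the mean interarrival time of events. The sketch below uses a Frank copula; the dependence parameter theta and the quantile levels are illustrative, not fitted values from the paper.

```python
# Joint-return-period sketch with a bivariate Frank copula.
# theta and the marginal probabilities are hypothetical.
import math

def frank_copula(u, v, theta):
    """Bivariate Frank copula C(u, v; theta), theta != 0."""
    a = math.expm1(-theta * u) * math.expm1(-theta * v)
    return -math.log(1 + a / math.expm1(-theta)) / theta

def joint_return_period_or(u, v, theta, mu=1.0):
    """Mean time until either variable exceeds its marginal quantile."""
    return mu / (1 - frank_copula(u, v, theta))

theta = 5.0                                    # dependence strength
T = joint_return_period_or(0.98, 0.98, theta)  # both at 50-year marginals
```

Because the copula lies between the Frechet bounds, the joint "OR" return period here must fall between 25 and 50 years; the three-dimensional case replaces C with a trivariate copula.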

  3. The joint return period analysis of natural disasters based on monitoring and statistical modeling of multidimensional hazard factors

    International Nuclear Information System (INIS)

    Liu, Xueqin; Li, Ning; Yuan, Shuai; Xu, Ning; Shi, Wenqin; Chen, Weibin

    2015-01-01

    As random events, natural disasters have complex occurrence mechanisms, and comprehensive analysis of multiple hazard factors is important in disaster risk assessment. To improve the accuracy of risk analysis and forecasting, the formation mechanism of a disaster should be considered in the analysis and calculation of multiple factors. Considering the importance of, and deficiencies in, multivariate analysis of dust storm disasters, 91 severe dust storm disasters in Inner Mongolia from 1990 to 2013 were selected as study cases. The main hazard factors from the 500-hPa atmospheric circulation system, the near-surface meteorological system, and underlying surface conditions were selected to simulate and calculate the multidimensional joint return periods. Comparing the simulation results with actual dust storm events over 54 years showed that the two-dimensional Frank copula function fit better at the lower tail of the hazard factors, while the three-dimensional Frank copula function fit better at the middle and upper tails. For dust storm disasters with short return periods, the three-dimensional joint return period simulation shows no obvious advantage; for return periods longer than 10 years, however, it shows significant advantages in extreme value fitting. We therefore suggest that the multivariate analysis method be adopted for forecasting and risk analysis of serious disasters with longer return periods, such as earthquakes and tsunamis. Furthermore, the exploration of this method lays the foundation for the prediction and warning of other natural disasters. - Highlights: • A method to estimate the multidimensional joint return periods is presented. • The 2D function gives better fitting results at the lower tail of hazard factors. • The three-dimensional simulation has obvious advantages in extreme value fitting. • Joint return periods are closer to reality

  4. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,

  5. Comparative Estimation of Russia’s Regions Investment Potential on the Base of the Multivariate Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Victor V. Nikitin

    2013-01-01

    Full Text Available The article introduces an algorithm for estimating the investment potential of Russia's regions, developed by means of multivariate statistical methods, and identifies the factors that reflect the regions' investment state. An integral indicator was constructed on this basis using statistical data, and the article presents a classification of the regions based on the integral index

  6. The characterisation of blood rotation in a human heart chamber based on statistical analysis of vorticity maps

    Science.gov (United States)

    Wong, Kelvin K. L.; Kelso, Richard M.; Worthley, Stephen G.; Sanders, Prashanthan; Mazumdar, Jagannath; Abbott, Derek

    2008-12-01

    Modelling of non-stationary cardiac structures is complicated by the complexity of their intrinsic and extrinsic motion. The first known study of haemodynamics due to the beating heart was made by Leonardo da Vinci, who anticipated the idea of fluid-solid interaction by describing how vortices develop as cardiac structures interact with the blood. Heart morphology drives changes in cardiac dynamics during the systolic and diastolic phases. Using flow-imaging techniques such as phase-contrast magnetic resonance imaging, vortices are found to exist in a chamber of the heart as a result of the unique morphological changes of the cardiac chamber wall. The first part of this paper quantifies vortex characteristics by calculating vorticity numerically and devising two-dimensional vortical flow maps. The technique relies on determining the properties of vorticity through a statistical quantification of the flow maps and comparison of these quantities across different scenarios. Because the characteristics of the vorticity maps vary with the phase of the cardiac cycle, a robust quantification method is needed to analyse vorticity. In the second part of the paper, the approach is applied to examine vortices within the human right atrium. Our study shows that a proper quantification of vorticity in the flow field can indicate the strength and number of vortices within a heart chamber.
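
The numerical vorticity calculation described here amounts to central differences of the in-plane velocity components, w = dv/dx - du/dy, evaluated on the imaging grid. The sketch below uses an analytic rigid-body rotation field, for which the vorticity is exactly 2*omega everywhere, as a self-checking stand-in for phase-contrast MRI velocity data.

```python
# Vorticity-map sketch: out-of-plane vorticity w = dv/dx - du/dy via
# central differences on a uniform grid. The velocity field is an
# analytic rigid-body rotation, not real MRI data.

def vorticity(u, v, h):
    """Central-difference vorticity at interior points of 2D grids u, v."""
    ny, nx = len(u), len(u[0])
    w = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dvdx = (v[j][i + 1] - v[j][i - 1]) / (2 * h)
            dudy = (u[j + 1][i] - u[j - 1][i]) / (2 * h)
            w[j][i] = dvdx - dudy
    return w

h, omega, n = 0.1, 3.0, 21
# Rigid-body rotation about the grid centre: u = -omega*y, v = omega*x
u = [[-omega * (j - n // 2) * h for i in range(n)] for j in range(n)]
v = [[omega * (i - n // 2) * h for i in range(n)] for j in range(n)]
w = vorticity(u, v, h)
centre = w[n // 2][n // 2]   # equals 2*omega for rigid rotation
```

Statistics of the resulting map (for example, counts and magnitudes of local extrema) then characterize vortex strength and number per cardiac phase.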

  7. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  8. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  9. Statistical trend analysis methods for temporal phenomena

    International Nuclear Information System (INIS)

    Lehtinen, E.; Pulkkinen, U.; Poern, K.

    1997-04-01

    We consider point events occurring randomly in time. In many applications the pattern of occurrence is of intrinsic interest as an indication of a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and to illustrate their applicability to temporal phenomena in particular. Trend testing of point events is usually seen as testing hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance; therefore, and particularly if the Poisson process assumption is questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use a non-parametric procedure. In this report we present four non-parametric tests: the Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches, we also consider Bayesian trend analysis. First we discuss a Bayesian model based on a power-law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods
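
One of the non-parametric tests surveyed here, the Mann (Mann-Kendall) test, is short enough to sketch: the statistic S counts concordant minus discordant pairs, and under the no-trend hypothesis S is approximately normal with the variance below (no-ties form, used here for simplicity even though the toy series contains ties). The event-count series is illustrative only.

```python
# Mann-Kendall trend test sketch (no-ties variance, continuity-corrected z).
# The series is an invented upward-trending event count.
import math

def mann_kendall(x):
    n = len(x)
    s = sum((x[j] > x[i]) - (x[j] < x[i])
            for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var)   # continuity correction
    return s, z

series = [1, 2, 2, 4, 3, 5, 6, 6, 8, 9]   # upward-trending counts
s, z = mann_kendall(series)
# a large positive z indicates evidence of an increasing trend
```

A full implementation would adjust the variance for tied groups before computing z.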

  10. Statistical trend analysis methods for temporal phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)

    1997-04-01

    We consider point events occurring randomly in time. In many applications the pattern of occurrence is of intrinsic interest as an indication of a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and to illustrate their applicability to temporal phenomena in particular. Trend testing of point events is usually seen as testing hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance; therefore, and particularly if the Poisson process assumption is questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use a non-parametric procedure. In this report we present four non-parametric tests: the Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches, we also consider Bayesian trend analysis. First we discuss a Bayesian model based on a power-law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.

  11. Spatiotemporal Analysis for Wildlife-Vehicle Based on Accident Statistics of the County Straubing-Bogen in Lower Bavaria

    Science.gov (United States)

    Pagany, R.; Dorner, W.

    2016-06-01

    In recent years the number of wildlife-vehicle collisions (WVC) in Bavaria has increased considerably. Despite the statistical registration of WVC and preventive measures at risk areas along the roads, the number of such accidents could not be contained. Using geospatial analysis of WVC data from the last five years for the county of Straubing-Bogen, Bavaria, a small-scale methodology was developed to analyse the risk of WVC along the roads in the investigated area. Various indicators potentially related to WVC were examined. The risk depends on the time of day and year, which correlates in turn with traffic density and wildlife population. Additionally, the location of a collision depends on the species and on different environmental parameters; accidents appear to correlate with the land use to the left and right of the road. Land use data and current vegetation were derived from remote sensing data, providing information on general land use while also considering the vegetation period. A number of hot spots was selected to identify potential dependencies between land use, vegetation, and season. First results from these hot spots show that WVCs depend not only on land use but may also correlate with the vegetation period. With regard to agriculture and seasonal as well as annual changes, this indicates that warnings will fail due to their static character, in contrast to the dynamic situation of land use and the resulting WVC risk. There is thus a demand for remote sensing data with high spatial and temporal resolution, as well as a methodology to derive WVC warnings that considers land use and vegetation. With remote sensing data, it could become possible to classify land use and calculate risk levels for WVC. Additional parameters derived from remotely sensed data that could be considered are relief and crops, as well as other parameters such as ponds and natural and infrastructural barriers that could be related to animal behaviour and

  12. Statistical analysis of electromagnetic radiation measurements in the vicinity of GSM/UMTS base station installed on buildings in Serbia.

    Science.gov (United States)

    Koprivica, Mladen; Slavkovic, Vladimir; Neskovic, Natasa; Neskovic, Aleksandar

    2016-03-01

    As a result of the dense deployment of public mobile base stations, additional electromagnetic (EM) radiation occurs in the modern human environment, and public concern about exposure to EM radiation emitted by such sources has increased. In order to determine the level of radio-frequency radiation generated by base stations, extensive EM field strength measurements were carried out at 664 base station locations, of which 276 refer to base stations with the antenna system installed on buildings. Given the large percentage (42 %) of locations with installations on buildings, as well as the inevitable presence of people in their vicinity, a detailed analysis of this location category was performed. Measurement results showed that the maximum recorded value of total electric field strength exceeded the International Commission on Non-Ionizing Radiation Protection general public exposure reference levels at 2.5 % of locations and the Serbian national reference levels at 15.6 % of locations. It should be emphasised that values exceeding the reference levels were observed only outdoors; indoors, the total electric field strength in no case exceeded the defined reference levels. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Statistical analysis of electromagnetic radiation measurements in the vicinity of GSM/UMTS base station installed on buildings in Serbia

    International Nuclear Information System (INIS)

    Koprivica, Mladen; Slavkovic, Vladimir; Neskovic, Natasa; Neskovic, Aleksandar

    2016-01-01

    As a result of the dense deployment of public mobile base stations, additional electromagnetic (EM) radiation occurs in the modern human environment, and public concern about exposure to EM radiation emitted by such sources has increased. In order to determine the level of radio-frequency radiation generated by base stations, extensive EM field strength measurements were carried out at 664 base station locations, of which 276 refer to base stations with the antenna system installed on buildings. Given the large percentage (42 %) of locations with installations on buildings, as well as the inevitable presence of people in their vicinity, a detailed analysis of this location category was performed. Measurement results showed that the maximum recorded value of total electric field strength exceeded the International Commission on Non-Ionizing Radiation Protection general public exposure reference levels at 2.5 % of locations and the Serbian national reference levels at 15.6 % of locations. It should be emphasised that values exceeding the reference levels were observed only outdoors; indoors, the total electric field strength in no case exceeded the defined reference levels. (authors)

  14. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro

  15. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows. Diaphragm deformation was simulated on CT images of patients with lung cancer using FEM. The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline-based (Elastix) registrations were performed from the reference to the FEM images to generate a registration DVF (R-DVF), and the F-DVF was subtracted from the R-DVF. The magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model based on imaging voxel data was used, and the analysis accounted for clustered voxel data within images. Results: The regression analysis showed that UE was significantly correlated with the registration error, the DVF, and the product of registration error and DVF, with R^2 = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks, and a linear function between the means of the UE values and R-DVF*R-ERR was established. The mean registration error (N = 8) was 0.9 mm, and 85.4 % of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in Medicine.

  16. Network based statistical analysis detects changes induced by continuous theta burst stimulation on brain activity at rest.

    Directory of Open Access Journals (Sweden)

    Chiara eMastropasqua

    2014-08-01

    Full Text Available We combined continuous theta burst stimulation (cTBS) and resting-state (RS) fMRI approaches to investigate changes in functional connectivity (FC) induced by right dorsolateral prefrontal cortex (DLPFC) cTBS at rest in a group of healthy subjects. Seed-based fMRI analysis revealed a specific pattern of correlation between the right prefrontal cortex and several brain regions; based on these results, we defined a 29-node network to assess changes in each network connection before and after the DLPFC-cTBS and sham sessions, respectively. A decrease in correlation between the right prefrontal cortex and the right parietal cortex (Brodmann areas 46 and 40, respectively) was detected after cTBS, while no significant result was found when analyzing the sham-session data. To our knowledge, this is the first study to demonstrate within-subject changes in FC induced by cTBS applied to the prefrontal area. The possibility of inducing selective changes in a specific region without interfering with functionally correlated areas could have several implications for the study of the functional properties of the brain, and for emerging therapeutic strategies based on transcranial stimulation.

  17. Application of multivariate statistical methods to classify archaeological pottery from Tel-Alramad site, Syria, based on x-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Bakraji, E. H.

    2007-01-01

    Radioisotopic x-ray fluorescence (XRF) analysis was utilized to determine the elemental composition of 55 archaeological pottery samples through the determination of 17 chemical elements. Fifty-four of them came from the Tel-Alramad site in Katana town, near Damascus, Syria, and one sample came from Brazil. The XRF results were processed using two multivariate statistical methods, cluster and factor analysis, in order to determine similarities and correlations between the selected samples based on their elemental composition. The methodology successfully separated the samples into four distinct chemical groups. (author)

  18. Collecting operational event data for statistical analysis

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-09-01

    This report gives guidance for collecting operational data to be used for statistical analysis, especially analysis of event counts. It discusses how to define the purpose of the study, the unit (system, component, etc.) to be studied, the events to be counted, and the demand or exposure time. Examples are given of classification systems for events in the data sources. A checklist summarizes the essential steps in data collection for statistical analysis.
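
    As a minimal illustration of the event-count statistics discussed here, the following sketch estimates a Poisson event rate from a count and an exposure time; the numbers are hypothetical, and the normal-approximation interval is a simplification of the exact chi-square bounds such an analysis would typically use.

```python
import math

# Hypothetical operational data: 12 failure events observed over 48,000
# component-hours of exposure.
events = 12
exposure_hours = 48_000.0

# Maximum-likelihood Poisson rate estimate (events per hour)
rate = events / exposure_hours

# Approximate 95% confidence interval using sqrt(n)/T as the standard
# error (a normal approximation; exact chi-square bounds are preferred
# for small counts).
se = math.sqrt(events) / exposure_hours
lo, hi = rate - 1.96 * se, rate + 1.96 * se
print(f"rate = {rate:.2e}/h, 95% CI ~ ({lo:.2e}, {hi:.2e})")
```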

  19. SPATIOTEMPORAL ANALYSIS FOR WILDLIFE-VEHICLE-COLLISIONS BASED ON ACCIDENT STATISTICS OF THE COUNTY STRAUBING-BOGEN IN LOWER BAVARIA

    Directory of Open Access Journals (Sweden)

    R. Pagany

    2016-06-01

    Full Text Available In recent years the number of wildlife-vehicle collisions (WVC) in Bavaria has increased considerably. Despite the statistical registration of WVC and preventive measures in areas of risk along the roads, the number of such accidents could not be contained. Using geospatial analysis of WVC data of the last five years for the county of Straubing-Bogen, Bavaria, a small-scale methodology was developed to analyse the risk of WVC along the roads in the investigated area. Various indicators that may be related to WVC were examined. The risk depends on the time of day and year, which in turn correlates with traffic density and wildlife population. Additionally, the location of a collision depends on the species and on different environmental parameters. Accidents seem to correlate with the land use to the left and right of the road. Land-use data and current vegetation were derived from remote sensing data, providing information on the general land use while also considering the vegetation period. For this, a number of hot spots were selected to identify potential dependencies between land use, vegetation and season. First results from these hot spots show that WVC do not only depend on land use but may also correlate with the vegetation period. With regard to agriculture and seasonal as well as annual changes, this indicates that warnings will fail due to their static character, in contrast to the dynamic situation of land use and the resulting risk of WVC. This shows that there is a demand for remote sensing data with a high spatial and temporal resolution, as well as a methodology to derive WVC warnings considering land use and vegetation. With remote sensing data, it could become possible to classify land use and calculate risk levels for WVC. Additional parameters derived from remotely sensed data that could be considered are relief and crops, as well as other parameters such as ponds and natural and infrastructural barriers that could be related to

  20. Plasma data analysis using statistical analysis system

    International Nuclear Information System (INIS)

    Yoshida, Z.; Iwata, Y.; Fukuda, Y.; Inoue, N.

    1987-01-01

    Multivariate factor analysis has been applied to a plasma data base of REPUTE-1. The characteristics of the reverse field pinch plasma in REPUTE-1 are shown to be explained by four independent parameters which are described in the report. The well-known scaling laws F_χ ∝ I_p, T_e ∝ I_p, and τ_E ∝ N_e are also confirmed. 4 refs., 8 figs., 1 tab

  1. Recent advances in statistical energy analysis

    Science.gov (United States)

    Heron, K. H.

    1992-01-01

    Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary ingredient of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.

  2. Characteristic fingerprinting based on macamides for discrimination of maca (Lepidium meyenii) by LC/MS/MS and multivariate statistical analysis.

    Science.gov (United States)

    Pan, Yu; Zhang, Ji; Li, Hong; Wang, Yuan-Zhong; Li, Wan-Yi

    2016-10-01

    Macamides with a benzylalkylamide nucleus are characteristic and major bioactive compounds in the functional food maca (Lepidium meyenii Walp). The aim of this study was to explore variations in macamide content among maca from China and Peru. Twenty-seven batches of maca hypocotyls with different phenotypes, sampled from different geographical origins, were extracted and profiled by liquid chromatography with ultraviolet detection/tandem mass spectrometry (LC-UV/MS/MS). Twelve macamides were identified by MS operated in multiple scanning modes. Similarity analysis showed that maca samples differed significantly in their macamide fingerprinting. Partial least squares discriminant analysis (PLS-DA) was used to differentiate samples according to their geographical origin and to identify the most relevant variables in the classification model. The prediction accuracy for raw maca was 91%, and five macamides were selected and considered as chemical markers for sample classification. When combined with a PLS-DA model, characteristic fingerprinting based on macamides could be recommended for labelling and for the authentication of maca from different geographical origins. The results provided potential evidence for the relationships between environmental or other factors and the distribution of macamides. © 2016 Society of Chemical Industry.
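
    The classification workflow (fingerprint matrix, origin labels, leave-one-out style validation) can be sketched as follows; a nearest-centroid classifier is used here as a simple stand-in for PLS-DA, and the "macamide fingerprints" are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "macamide fingerprints": 12 peak areas per sample, two origins.
# Each class mean is enriched in a few peaks, mimicking origin-specific markers.
n_per, n_peaks = 14, 12
mean_a = np.zeros(n_peaks); mean_a[:3] = 2.0   # origin A enriched in peaks 0-2
mean_b = np.zeros(n_peaks); mean_b[3:6] = 2.0  # origin B enriched in peaks 3-5
X = np.vstack([rng.normal(mean_a, 1.0, (n_per, n_peaks)),
               rng.normal(mean_b, 1.0, (n_per, n_peaks))])
y = np.array([0] * n_per + [1] * n_per)

# Leave-one-out validation with a nearest-centroid classifier
# (a simple stand-in for PLS-DA, which would be used on real data).
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c0 = X[mask & (y == 0)].mean(axis=0)
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
    correct += pred == y[i]
accuracy = correct / len(y)
print(f"LOO accuracy: {accuracy:.0%}")
```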

  3. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    Science.gov (United States)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  4. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  5. A statistical method to base nutrient recommendations on meta-analysis of intake and health-related status biomarkers.

    Directory of Open Access Journals (Sweden)

    Hilko van der Voet

    Full Text Available Nutrient recommendations in use today are often derived from relatively old data from few studies with few individuals. However, for many nutrients, including vitamin B-12, extensive data have now become available from both observational studies and randomized controlled trials addressing the relation between intake and health-related status biomarkers. The purpose of this article is to provide new methodology for dietary planning based on dose-response data and meta-analysis. The methodology builds on existing work and is consistent with current methodology and measurement error models for dietary assessment. The detailed purposes of this paper are twofold: firstly, to define a Population Nutrient Level (PNL) for dietary planning in groups; secondly, to show how data from different sources can be combined in an extended meta-analysis of intake-status datasets for estimating the PNL as well as other nutrient intake values, such as the Average Nutrient Requirement (ANR) and the Individual Nutrient Level (INL). For this, a computational method is presented for comparing a bivariate lognormal distribution to a health criterion value. Procedures to meta-analyse available data in different ways are described. Example calculations on vitamin B-12 requirements were made for four models, assuming different ways of estimating the dose-response relation and different values of the health criterion. The resulting estimates of ANRs, and to a lesser extent INLs, were found to be sensitive to model assumptions, whereas estimates of PNLs were much less sensitive to these assumptions, as they were closer to the average nutrient intake in the available data.
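
    The computational step of comparing a bivariate lognormal intake-status distribution to a health criterion value can be sketched by Monte Carlo; all distribution parameters and the criterion below are hypothetical and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bivariate lognormal model for (intake, status biomarker):
# log-intake and log-status are jointly normal, with a positive correlation
# standing in for an assumed dose-response relation.
mu = np.array([1.0, 5.0])            # means of log(intake), log(status)
rho = 0.6
sd = np.array([0.4, 0.3])
cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                [rho * sd[0] * sd[1], sd[1]**2]])

log_iv = rng.multivariate_normal(mu, cov, size=100_000)
status = np.exp(log_iv[:, 1])

# Fraction of the population whose status falls below a health criterion
# value; a Population Nutrient Level would be chosen so that this fraction
# is acceptably small.
criterion = np.exp(5.0 - 1.28 * 0.3)   # roughly the 10th percentile of status
frac_below = (status < criterion).mean()
print(f"fraction below criterion: {frac_below:.3f}")
```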

  6. Statistical analysis of tourism destination competitiveness

    Directory of Open Access Journals (Sweden)

    Attilio Gardini

    2013-05-01

    Full Text Available The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds, and we develop a statistical test of the model on sample data based on Italian tourist destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October-December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to its share of inclusion, because the share of inclusion is determined by the brand image, while the selection of the effective holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points out a risk of market share reduction for the Italian tourism system in the global market. We also find a snowball effect which helps the most popular destinations, mainly in the northern Italian regions.

  7. Preliminary results of ⁴⁰Ca(e,e'c) reaction analysis (c = p, α), based on a statistical model

    International Nuclear Information System (INIS)

    Herdade, S.B.; Emrich, H.J.

    1990-01-01

    Statistical model calculations for the reactions ⁴⁰Ca(e,e'p)³⁹K and ⁴⁰Ca(e,e'p₀)³⁹K(gs), using a modified version of the program STAPRE, are compared with experimental results obtained from coincidence experiments carried out at the Mainz microtron MAMI A. Preliminary results indicate that the statistical decay of a 1⁻ level in the ⁴⁰Ca compound nucleus, at an excitation energy of about 20 MeV, to the ground state of the ³⁹K residual nucleus accounts for only about 15% of the total decay, indicating that direct and/or semi-direct mechanisms contribute the major part of the decay. (author)

  8. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis. Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia, investigating protein molecules in bioinformatics, and describing the growth of organisms in biology. This book is a significant update of the highly regarded 'Statistical Shape Analysis' by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  9. Strategies for improving the Voxel-based statistical analysis for animal PET studies: assessment of cerebral glucose metabolism in cat deafness model

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Kang, Hye Jin; Im, Ki Chun; Moon, Dae Hyuk; Lim, Sang Moo; Oh, Seung Ha; Lee, Dong Soo

    2007-01-01

    In imaging studies of the human brain, voxel-based statistical analysis methods are widely used; however, since these methods were originally developed for the analysis of human brain data, they are not optimal for animal brain data. The aim of this study was to optimize the procedures for 3D voxel-based statistical analysis of cat FDG PET brain images. A microPET Focus 120 scanner was used. Eight cats underwent FDG PET scans twice, before and after induction of deafness. Only the brain and adjacent regions were extracted from each data set by manual masking. The individual PET images at the normal and deaf states were realigned to each other to remove the confounding effects of different spatial normalization parameters on the results of the statistical analyses. The distance between the sampling points on the reference image and the kernel size of the Gaussian filter applied to the images before estimating the realignment parameters were adjusted to 0.5 mm and 2 mm, respectively. Both data sets were then spatially normalized onto a study-specific cat brain template. The spatially normalized PET data were smoothed, and a voxel-based paired t-test was performed. Cerebral glucose metabolism decreased significantly after the loss of hearing capability in the parietal lobes, postcentral gyri, STG, MTG, ITG, and IC in both hemispheres and in the left SC (FDR corrected P < 0.05, k=50). Cerebral glucose metabolism in deaf cats was found to be significantly higher than in controls in the right cingulate (FDR corrected P < 0.05, k=50). The ROI analysis also showed a significant reduction of glucose metabolism in the same areas as the SPM analysis, except for some regions (P < 0.05). The method for voxel-based analysis of cat brain PET data was thus optimized; this result was also confirmed by ROI analysis.
The results obtained demonstrated the high localization accuracy and specificity of the developed method, which was found to be useful for examining cerebral glucose metabolism in a cat cortical deafness model.
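
    The voxel-wise paired t-test at the core of this analysis can be sketched with NumPy on synthetic before/after images; the subject count, voxel counts, effect size, and threshold below are hypothetical stand-ins, and a real analysis would convert t to p-values and apply an FDR correction.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "PET" data: 8 subjects, 1000 voxels, scanned before and after.
# A block of voxels (0-99) gets a true metabolic decrease in the second scan.
n_sub, n_vox = 8, 1000
before = rng.normal(100.0, 5.0, (n_sub, n_vox))
after = before + rng.normal(0.0, 2.0, (n_sub, n_vox))
after[:, :100] -= 6.0                  # true effect in the first 100 voxels

# Voxel-wise paired t statistic: t = mean(d) / (sd(d) / sqrt(n))
d = after - before
t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n_sub))

# Voxels with a strong decrease (large negative t); a real analysis would
# threshold on FDR-corrected p-values and a cluster extent k.
hits = np.where(t < -4.0)[0]
print(len(hits), "voxels flagged")
```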

  10. Strategies for improving the Voxel-based statistical analysis for animal PET studies: assessment of cerebral glucose metabolism in cat deafness model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Su; Lee, Jae Sung; Park, Min Hyun; Kang, Hye Jin; Im, Ki Chun; Moon, Dae Hyuk; Lim, Sang Moo; Oh, Seung Ha; Lee, Dong Soo [Seoul National Univ. College of Medicine, Seoul (Korea, Republic of)

    2007-07-01

    In imaging studies of the human brain, voxel-based statistical analysis methods are widely used; however, since these methods were originally developed for the analysis of human brain data, they are not optimal for animal brain data. The aim of this study was to optimize the procedures for 3D voxel-based statistical analysis of cat FDG PET brain images. A microPET Focus 120 scanner was used. Eight cats underwent FDG PET scans twice, before and after induction of deafness. Only the brain and adjacent regions were extracted from each data set by manual masking. The individual PET images at the normal and deaf states were realigned to each other to remove the confounding effects of different spatial normalization parameters on the results of the statistical analyses. The distance between the sampling points on the reference image and the kernel size of the Gaussian filter applied to the images before estimating the realignment parameters were adjusted to 0.5 mm and 2 mm, respectively. Both data sets were then spatially normalized onto a study-specific cat brain template. The spatially normalized PET data were smoothed, and a voxel-based paired t-test was performed. Cerebral glucose metabolism decreased significantly after the loss of hearing capability in the parietal lobes, postcentral gyri, STG, MTG, ITG, and IC in both hemispheres and in the left SC (FDR corrected P < 0.05, k=50). Cerebral glucose metabolism in deaf cats was found to be significantly higher than in controls in the right cingulate (FDR corrected P < 0.05, k=50). The ROI analysis also showed a significant reduction of glucose metabolism in the same areas as the SPM analysis, except for some regions (P < 0.05). The method for voxel-based analysis of cat brain PET data was thus optimized; this result was also confirmed by ROI analysis.
The results obtained demonstrated the high localization accuracy and specificity of the developed method, which was found to be useful for examining cerebral glucose metabolism in a cat cortical deafness model.

  11. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  12. Reconstructing Macroeconomics Based on Statistical Physics

    Science.gov (United States)

    Aoki, Masanao; Yoshikawa, Hiroshi

    We believe that the time has come to integrate the new approach based on statistical physics, or econophysics, into macroeconomics. Toward this goal, there must be more dialogue between physicists and economists. In this paper, we argue that there is no reason why the methods of statistical physics, so successful in many fields of natural science, cannot be usefully applied to macroeconomics, which is meant to analyze the macroeconomy comprising a large number of economic agents. It is, in fact, weird to regard the macroeconomy as a homothetic enlargement of the representative micro agent. We trust in the bright future of the new approach to macroeconomics based on statistical physics.

  13. A wind proxy based on migrating dunes at the Baltic coast: statistical analysis of the link between wind conditions and sand movement

    Science.gov (United States)

    Bierstedt, Svenja E.; Hünicke, Birgit; Zorita, Eduardo; Ludwig, Juliane

    2017-07-01

    We statistically analyse the relationship between the structure of migrating dunes in the southern Baltic and the driving wind conditions over the past 26 years, with the long-term aim of using migrating dunes as a proxy for past wind conditions at an interannual resolution. The present analysis is based on the dune record derived from geo-radar measurements by Ludwig et al. (2017). The dune system is located at the Baltic Sea coast of Poland and is migrating from west to east along the coast. The dunes present layers with different thicknesses that can be assigned to absolute dates at interannual timescales and put in relation to seasonal wind conditions. To statistically analyse this record and calibrate it as a wind proxy, we used a gridded regional meteorological reanalysis data set (coastDat2) covering recent decades. The identified link between the annual dune layers and wind conditions was additionally supported by the co-variability between dune layers and observed sea level variations in the southern Baltic Sea. We include precipitation and temperature in our analysis, in addition to wind, to learn more about the dependency between these three atmospheric factors and their common influence on the dune system. We set up a statistical linear model based on the correlation between the frequency of days with specific wind conditions in a given season and the dune migration velocities derived for that season. To some extent, the dune records can be seen as analogous to tree-ring width records, and hence we use a proxy validation method usually applied in dendrochronology, cross-validation with the leave-one-out method, when the observational record is short. The revealed correlations between the wind record from the reanalysis and the wind record derived from the dune structure are in the range between 0.28 and 0.63, yielding statistical validation skill similar to that of dendroclimatological records.
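
    The leave-one-out cross-validation used to calibrate the dune record as a wind proxy can be sketched as follows on synthetic data; the linear wind-migration relation, the predictor, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration data: for 26 "years", the frequency of days with
# strong westerly winds (predictor) and the dune migration velocity in that
# season (proxy response), linked by an assumed linear relation plus noise.
n = 26
wind_days = rng.uniform(10, 60, n)
velocity = 0.05 * wind_days + rng.normal(0.0, 0.4, n)

# Leave-one-out cross-validation, as used in dendrochronology: refit the
# linear model without year i, predict year i, then correlate the
# predictions with the observations.
pred = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    slope, intercept = np.polyfit(wind_days[mask], velocity[mask], 1)
    pred[i] = slope * wind_days[i] + intercept
r = np.corrcoef(pred, velocity)[0, 1]
print(f"LOO correlation: {r:.2f}")
```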

  14. Statistical analysis of ultrasonic measurements in concrete

    Science.gov (United States)

    Chiang, Chih-Hung; Chen, Po-Chih

    2002-05-01

    Stress wave techniques, such as measurements of ultrasonic pulse velocity, are often used to evaluate concrete quality in structures. For proper interpretation of measurement results, the dependence of pulse transit time on the average acoustic impedance and the material homogeneity along the sound path needs to be examined. Semi-direct measurement of pulse velocity can be more convenient than through-transmission measurement, as it is not necessary to access both sides of concrete floors or walls. A novel measurement scheme is proposed and verified based on statistical analysis. It is shown that semi-direct measurements are very effective for gathering a large amount of pulse velocity data from concrete reference specimens. The variability of the measurements is comparable with that reported by the American Concrete Institute using either break-off or pullout tests.

  15. Dating and classification of Syrian excavated pottery from Tell Saka Site, by means of thermoluminescence analysis, and multivariate statistical methods, based on PIXE analysis

    International Nuclear Information System (INIS)

    Bakraji, E.H.; Ahmad, M.; Salman, N.; Haloum, D.; Boutros, N.; Abboud, R.

    2011-01-01

    Thermoluminescence (TL) dating and Proton Induced X-ray Emission (PIXE) techniques were utilized for the study of archaeological pottery fragments from the Tell Saka site, located 25 km southeast of Damascus, Syria. Four samples were chosen randomly from the site, two from the third level and two from the fourth level, for dating using the TL technique, and the results were in good agreement with the dates assigned by archaeologists. Twenty-eight sherds were analyzed using the PIXE technique in order to identify and characterize the elemental composition of pottery excavated from the third and fourth levels, using the 3 MV tandem accelerator in Damascus. The analysis provided almost 20 elements (Na, Mg, Al, Si, P, S, K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, Rb, Sr, Y, Zr, Nb). However, only the following 14 elements were chosen for statistical analysis: K, Ca, Ti, Mn, Fe, Co, Ni, Cu, Zn, Rb, Sr, Y, Zr, Nb. These were processed using two multivariate statistical methods, cluster and factor analysis. The studied pottery was classified into two well-defined groups. (author)
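
    The multivariate grouping step can be sketched as follows; PCA via SVD is used here as a simple stand-in for the cluster/factor analysis of the paper, and the elemental concentrations are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic elemental concentrations (14 elements) for 28 sherds drawn
# from two hypothetical clay sources with different mean compositions.
n_per, n_el = 14, 14
source1 = rng.normal(10.0, 1.0, (n_per, n_el)); source1[:, :4] += 5.0
source2 = rng.normal(10.0, 1.0, (n_per, n_el)); source2[:, 4:8] += 5.0
X = np.vstack([source1, source2])

# Standardize each element, then use PCA (via SVD) as a simple stand-in
# for the factor-analysis step; the first component separates the groups.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ vt[0]
group = (scores > 0).astype(int)

# Sherds from the same source should land on the same side of zero.
print(group)
```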

  16. A statistical GIS-based analysis of Wild boar (Sus scrofa traffic collisions in a Mediterranean area

    Directory of Open Access Journals (Sweden)

    Andrea Amici

    2010-01-01

    Full Text Available This study was aimed at analysing the most relevant environmental variables involved in vehicle-wild boar road collisions in a Mediterranean area, starting from a territorial information system implemented in a GIS environment. Landscape structure indices and local qualitative and quantitative variables were correlated to identify the most frequent predisposing factors of collisions. Statistical tests of the considered parameters indicated a higher frequency of collisions in the evening hours of late summer and autumn (P<0.05) compared with daytime and night-time, localized near attraction points (feeding or resting sites; P<0.001) and close to no-hunting areas (P<0.001). In addition, local road parameters (shape of the road, visibility at road sides, etc.) considerably increased the risk of collisions.

  17. Statistical analysis of electromagnetic radiation measurements in the vicinity of indoor microcell GSM/UMTS base stations in Serbia.

    Science.gov (United States)

    Koprivica, Mladen; Petrić, Majda; Nešković, Nataša; Nešković, Aleksandar

    2016-01-01

    To determine the level of radiofrequency radiation generated by base stations of the Global System for Mobile Communications and the Universal Mobile Telecommunications System, extensive electromagnetic field strength measurements were carried out in the vicinity of 664 base station locations. These were classified into three categories: indoor, masts, and locations with installations on buildings. Although microcell base stations with antennas installed indoors typically emit less power than outdoor macrocell base stations, the fact that people can be found close to the antennas requires exposure originating from these base stations to be carefully considered. Measurement results showed that the maximum recorded value of electric field strength exceeded the International Commission on Non-Ionizing Radiation Protection reference levels at 7% of indoor base station locations. At the same time, this percentage was much lower in the case of masts and installations on buildings (0% and 2.5%, respectively). © 2015 Wiley Periodicals, Inc.

  18. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  19. Statistical hot spot analysis of reactor cores

    International Nuclear Information System (INIS)

    Schaefer, H.

    1974-05-01

    This report is an introduction to statistical hot spot analysis. After the definition of the term 'hot spot', a statistical analysis is outlined. The mathematical method is presented; in particular, the formula for the probability of no hot spots in a reactor core is evaluated. A discussion of the boundary conditions of a statistical hot spot analysis is given (technological limits, nominal situation, uncertainties). The application of hot spot analysis to the linear power of pellets and the temperature rise in cooling channels is demonstrated with respect to the test zone of KNK II. Basic quantities, such as the probability of no hot spots, the hot spot potential, the expected hot spot diagram and the cumulative distribution function of hot spots, are discussed. It is shown that the risk of hot channels can be dispersed equally over all subassemblies by an adequate choice of the nominal temperature distribution in the core.
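
    Under the common assumption of independent, normally distributed channel temperatures, a probability-of-no-hot-spots calculation of the kind discussed here can be sketched as follows; the channel count, temperatures, and uncertainty are hypothetical values chosen for illustration.

```python
import math

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical core: N independent cooling channels whose peak temperature
# is normally distributed about the nominal value. A "hot spot" occurs
# when a channel exceeds the design limit.
n_channels = 10_000
t_nominal = 550.0   # degC, nominal peak channel temperature
sigma = 15.0        # degC, combined uncertainty per channel
t_limit = 620.0     # degC, design limit

# Probability that no channel exceeds the limit:
# P(no hot spot) = prod_i P(T_i < T_limit) = Phi((T_limit - T_nom)/sigma)^N
z = (t_limit - t_nominal) / sigma
p_no_hot_spot = phi(z) ** n_channels
print(f"P(no hot spot) = {p_no_hot_spot:.3f}")
```

    Even though each channel individually exceeds the limit with only a ~1e-6 probability, raising to the power of the channel count makes the core-wide risk non-negligible, which is the central point of a statistical hot spot analysis.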

  20. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2013-01-01

Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...

  1. Testing for Statistical Discrimination based on Gender

    DEFF Research Database (Denmark)

    Lesner, Rune Vammen

This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure but increases in job transitions and that the fraction of women in high-ranking positions within a firm does not affect the level of statistical discrimination by gender.

  2. Semiclassical analysis, Witten Laplacians, and statistical mechanics

    CERN Document Server

    Helffer, Bernard

    2002-01-01

    This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S

  3. A statistical approach to plasma profile analysis

    International Nuclear Information System (INIS)

    Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.

    1990-05-01

A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous basis for the empirical testing of plasma profile invariance. (orig.)
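As a minimal illustration of the parameterisation idea (not the paper's models), the sketch below fits a profile with an even polynomial in normalized radius and derives pointwise confidence intervals from the coefficient covariance; the profile shape and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(11)
rho = np.linspace(0, 1, 40)                              # normalized radius
y = (1 - rho**2)**1.5 + rng.normal(0, 0.03, rho.size)    # hypothetical profile data

X = np.column_stack([np.ones_like(rho), rho**2, rho**4])  # even polynomial basis
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = len(y) - X.shape[1]
s2 = float(res[0]) / dof                                  # residual variance
cov = s2 * np.linalg.inv(X.T @ X)                         # coefficient covariance
fit = X @ beta
se = np.sqrt(np.einsum('ij,jk,ik->i', X, cov, X))         # pointwise standard error
print(round(fit[0], 2), round(1.96 * se[0], 3))           # central value, 95% CI half-width
```

Competing parameterisations could then be compared on the same data with standard F- or likelihood-ratio tests, in the spirit of the model discrimination the abstract describes.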

  4. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

This paper describes a system for making reproducible statistical analyses. The system differs from other systems for reproducible analysis in several ways, the two main differences being: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of this paper is an example showing how to use the tools together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  5. Statistical analysis of surface roughness in turning based on cutting parameters and tool vibrations with response surface methodology (RSM)

    Science.gov (United States)

    Touati, Soufiane; Mekhilef, Slimane

    2018-03-01

In this paper, we present an experimental study of the effect of cutting conditions and tool vibration on surface roughness in the finish turning of 32CrMoV12-28 steel with a YT15 carbide cutting tool. For this purpose, a linear-quadratic interaction model is elaborated, connecting surface roughness (Ra, Rz) with different combinations of cutting parameters (cutting speed, feed rate, depth of cut) and tool vibration in the radial (Vy) and tangential (Vz) cutting-force directions. To express the degree of interaction between cutting parameters and tool vibration, multiple linear regression and response surface methodology are adopted. The application of this statistical technique to predicting surface roughness shows that the feed rate is the most dominant factor, followed by the cutting speed, while the depth of cut and tool vibrations have a secondary effect. The presented models are of practical interest, since they can be used in cutting-process optimization.
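A hedged sketch of the regression step: a second-order design matrix with interaction terms is fitted by least squares to synthetic data. The factor ranges and coefficients below are invented, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical factors: cutting speed Vc, feed rate f, depth of cut ap,
# radial vibration Vy. Response Ra is simulated from an assumed relation.
n = 80
Vc = rng.uniform(100, 300, n)
f = rng.uniform(0.05, 0.3, n)
ap = rng.uniform(0.2, 1.0, n)
Vy = rng.uniform(0.5, 3.0, n)
Ra = 0.5 + 8.0 * f + 0.002 * Vc + 0.1 * ap + 0.05 * Vy + rng.normal(0, 0.05, n)

# Linear-quadratic design matrix with interaction terms (RSM style).
X = np.column_stack([np.ones(n), Vc, f, ap, Vy, Vc * f, f * ap, Vc**2, f**2])
beta, *_ = np.linalg.lstsq(X, Ra, rcond=None)
R2 = 1 - np.sum((Ra - X @ beta)**2) / np.sum((Ra - Ra.mean())**2)
print(round(R2, 3))
```

In an actual RSM study the fitted surface would then be interrogated for optima and the coefficient magnitudes compared to rank factor influence, as the abstract does for feed rate and cutting speed.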

  6. A statistical analysis of the freshness of postharvest leafy vegetables with application of water based on chlorophyll fluorescence measurement

    Directory of Open Access Journals (Sweden)

    Yichen Qiu

    2017-12-01

Full Text Available Vegetable freshness is very important to both restaurant and home consumers. In markets, sellers frequently apply water to leafy vegetables so that they do not lose weight and continue to look fresh; however, such vegetables may not keep as long as they appear to, and after a time limit they may rot quickly. It is thus worthwhile to investigate early, simple detection tools for measuring the freshness of leafy vegetables that are repeatedly watered during sale. In this work, three types of newly harvested leafy vegetables were bought from a local farmer market and stored in air at room temperature with their roots submerged in water. Chlorophyll a fluorescence (ChlF) from the vegetables was measured every half day for three days. The ChlF data were analyzed statistically, and the correlation between ChlF parameters and vegetable freshness/storage time was obtained. A k-means classification was also performed. It is found that Fo, Fj, Fm/Fo, and Fv/Fm can be used as an early detection tool to differentiate the freshness of leafy vegetables that are constantly watered in storage and show no visible difference. Keywords: Vegetable freshness, Chlorophyll fluorescence, Food measurement
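The k-means step can be sketched as follows; the ChlF feature names follow the abstract, but the numbers are invented and the cluster initialization is simplified to the two extreme points:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical ChlF features (Fo, Fv/Fm) for two freshness groups.
fresh = rng.normal([500.0, 0.82], [20.0, 0.02], size=(20, 2))
stale = rng.normal([650.0, 0.70], [20.0, 0.02], size=(20, 2))
X = np.vstack([fresh, stale])

def kmeans(X, iters=50):
    """Two-cluster k-means, deterministically initialized at the Fo extremes."""
    centers = X[[np.argmin(X[:, 0]), np.argmax(X[:, 0])]]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers)**2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(2)])
    return labels

labels = kmeans(X)
print(labels)
```

With well-separated groups the two freshness classes land in different clusters; in practice the features would first be standardized so that Fo does not dominate the distance.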

  7. Foundation of statistical energy analysis in vibroacoustics

    CERN Document Server

    Le Bot, A

    2015-01-01

    This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.

  8. Testing for Statistical Discrimination based on Gender

    OpenAIRE

    Lesner, Rune Vammen

    2016-01-01

    This paper develops a model which incorporates the two most commonly cited strands of the literature on statistical discrimination, namely screening discrimination and stereotyping. The model is used to provide empirical evidence of statistical discrimination based on gender in the labour market. It is shown that the implications of both screening discrimination and stereotyping are consistent with observable wage dynamics. In addition, it is found that the gender wage gap decreases in tenure...

  9. Statistical models based on conditional probability distributions

    International Nuclear Information System (INIS)

    Narayanan, R.S.

    1991-10-01

    We present a formulation of statistical mechanics models based on conditional probability distribution rather than a Hamiltonian. We show that it is possible to realize critical phenomena through this procedure. Closely linked with this formulation is a Monte Carlo algorithm, in which a configuration generated is guaranteed to be statistically independent from any other configuration for all values of the parameters, in particular near the critical point. (orig.)
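A toy illustration of specifying a model by conditional distributions rather than a Hamiltonian (this is not the paper's model): each configuration is built by drawing spins sequentially from a stated conditional, so successive configurations are statistically independent by construction:

```python
import numpy as np

rng = np.random.default_rng(2)

def draw_chain(n, p_same=0.7):
    """Draw an n-spin chain; each spin copies its left neighbour with prob p_same."""
    s = np.empty(n, dtype=int)
    s[0] = rng.choice([-1, 1])
    for i in range(1, n):
        s[i] = s[i - 1] if rng.random() < p_same else -s[i - 1]
    return s

# Each call is an independent configuration drawn directly from the conditionals.
configs = np.array([draw_chain(50) for _ in range(2000)])
corr = np.mean(configs[:, :-1] * configs[:, 1:])   # nearest-neighbour correlation
print(round(corr, 2))
```

For this chain the nearest-neighbour correlation is 2·p_same − 1 in expectation, and no equilibration or decorrelation time is needed because no Markov chain over configurations is involved.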

  10. Evaluation and validation of a single-dilution potency assay based upon serology of vaccines containing diphtheria toxoid: statistical analysis

    NARCIS (Netherlands)

    Marsman FR; Akkermans AM; Hendriksen CFM; de Jong WH

    1993-01-01

This document presents the results of a validation study of the use of a single-dilution assay in potency testing of the diphtheria component of DPT-polio vaccines. Based on historical data from multi-dilution assays on 27 consecutive batches, a simulation study was performed to test the actual

  11. Quantitative X-ray Map Analyser (Q-XRMA): A new GIS-based statistical approach to Mineral Image Analysis

    Science.gov (United States)

    Ortolano, Gaetano; Visalli, Roberto; Godard, Gaston; Cirrincione, Rosolino

    2018-06-01

    We present a new ArcGIS®-based tool developed in the Python programming language for calibrating EDS/WDS X-ray element maps, with the aim of acquiring quantitative information of petrological interest. The calibration procedure is based on a multiple linear regression technique that takes into account interdependence among elements and is constrained by the stoichiometry of minerals. The procedure requires an appropriate number of spot analyses for use as internal standards and provides several test indexes for a rapid check of calibration accuracy. The code is based on an earlier image-processing tool designed primarily for classifying minerals in X-ray element maps; the original Python code has now been enhanced to yield calibrated maps of mineral end-members or the chemical parameters of each classified mineral. The semi-automated procedure can be used to extract a dataset that is automatically stored within queryable tables. As a case study, the software was applied to an amphibolite-facies garnet-bearing micaschist. The calibrated images obtained for both anhydrous (i.e., garnet and plagioclase) and hydrous (i.e., biotite) phases show a good fit with corresponding electron microprobe analyses. This new GIS-based tool package can thus find useful application in petrology and materials science research. Moreover, the huge quantity of data extracted opens new opportunities for the development of a thin-section microchemical database that, using a GIS platform, can be linked with other major global geoscience databases.
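The calibration step can be sketched as a multiple linear regression from raw element counts to concentrations, using spot analyses as internal standards; the element names, counts, and coefficients below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n_spots = 25
# Raw X-ray counts for three hypothetical element channels (e.g. Fe, Mg, Ca).
counts = rng.uniform(100, 1000, size=(n_spots, 3))
# Assumed "true" calibration matrix: off-diagonal terms model the
# interdependence among elements that the abstract mentions.
true_B = np.array([[0.010, 0.001, 0.000],
                   [0.001, 0.012, 0.002],
                   [0.000, 0.001, 0.009]])
wt_pct = counts @ true_B + rng.normal(0, 0.05, (n_spots, 3))  # spot analyses (wt%)

# Least-squares calibration with cross-element terms.
B_hat, *_ = np.linalg.lstsq(counts, wt_pct, rcond=None)

# Apply the calibration to one map pixel (a vector of counts per pixel).
pixel_counts = np.array([500.0, 300.0, 200.0])
print(np.round(pixel_counts @ B_hat, 2))
```

In the actual tool the regression is additionally constrained by mineral stoichiometry and checked with test indexes; this sketch shows only the unconstrained regression core.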

  12. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  13. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    Science.gov (United States)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-09-01

This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 × 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05), and the recognition rate for a fatty liver had the same scale as that for the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further studies on various diseases and on singular diseases will be needed in the future.
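The six first-order texture descriptors named above can be computed from a gray-level histogram; this is a standard statistical-texture sketch (the ROA windowing and wavelet step are omitted, and the test image is random):

```python
import numpy as np

def texture_features(region, levels=256):
    """Six first-order texture descriptors from a gray-level histogram."""
    p = np.bincount(region.ravel(), minlength=levels) / region.size
    g = np.arange(levels)
    mean = (g * p).sum()                                  # average gray level
    var = ((g - mean)**2 * p).sum()
    contrast = np.sqrt(var)                               # average contrast
    smoothness = 1 - 1 / (1 + var / (levels - 1)**2)      # relative smoothness
    skew = (((g - mean)**3) * p).sum() / (var**1.5 + 1e-12)   # skewness
    uniformity = (p**2).sum()                             # uniformity (energy)
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()       # entropy (bits)
    return mean, contrast, smoothness, skew, uniformity, entropy

rng = np.random.default_rng(4)
roa = rng.integers(80, 160, size=(50, 50))    # hypothetical 50x50 ROA window
feats = texture_features(roa)
print([round(float(f), 3) for f in feats])
```

These histogram-based measures are the classic statistical texture descriptors; a CAD pipeline would compute them per ROA and feed them to the discriminant analysis.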

  14. Method for statistical data analysis of multivariate observations

    CERN Document Server

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of inte

  15. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2017-01-01

    The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked,to improve the readabili...

  16. Statistical Prediction of Solar Particle Event Frequency Based on the Measurements of Recent Solar Cycles for Acute Radiation Risk Analysis

    Science.gov (United States)

Kim, Myung-Hee Y.; Hu, Shaowen; Cucinotta, Francis A.

    2009-01-01

Large solar particle events (SPEs) present significant acute radiation risks to crew members during extra-vehicular activities (EVAs) or in lightly shielded space vehicles on space missions beyond the protection of the Earth's magnetic field. Acute radiation sickness (ARS) can impair performance and result in failure of the mission. Improved forecasting capability and/or early-warning systems and proper shielding solutions are required to stay within NASA's short-term dose limits. Exactly how to use observations of SPEs for predicting occurrence and size is a great challenge, because SPE occurrences are random in nature even though the expected frequency of SPEs is strongly influenced by the time position within the solar activity cycle. Therefore, we developed a probabilistic model approach in which a cumulative expected occurrence curve of SPEs for a typical solar cycle was formed from a non-homogeneous Poisson process model fitted to a database of proton fluence measurements of SPEs that occurred during the past 5 solar cycles (19-23) and of large SPEs identified from impulsive nitrate enhancements in polar ice. From the fitted model, the expected frequency of SPEs was estimated at any given proton fluence threshold (Phi(sub E)) with energy (E) >30 MeV during a defined space mission period. Corresponding Phi(sub E) (E=30, 60, and 100 MeV) fluence distributions were simulated with a random draw from a gamma distribution and applied to SPE ARS risk analysis for a specific mission period. It was found that deep-seated organ doses were predicted more precisely at high energies, Phi(sub 100), than at lower energies such as Phi(sub 30) or Phi(sub 60), because of the high penetration depth of high-energy protons. Estimates of ARS are then described for 90th and 95th percentile events for several mission lengths and for several likely organ dose-rates. The ability to accurately measure high energy protons
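The simulation logic can be sketched as a compound Poisson draw: a Poisson event count for the mission window (a constant rate stands in here for the fitted non-homogeneous process) and a gamma draw for each event's >30 MeV fluence. All parameter values are illustrative, not the paper's fits:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_mission(rate_per_year=6.0, mission_years=1.0,
                     shape=0.3, scale=3e8, n_sims=20000):
    """Simulate total >30 MeV SPE proton fluence per mission (illustrative parameters)."""
    totals = np.empty(n_sims)
    for i in range(n_sims):
        k = rng.poisson(rate_per_year * mission_years)        # number of SPEs
        totals[i] = rng.gamma(shape, scale, size=k).sum() if k else 0.0
    return totals

fluence = simulate_mission()
print(f"90th percentile mission fluence: {np.percentile(fluence, 90):.2e} protons/cm^2")
```

Percentiles of the simulated fluence distribution (e.g. the 90th and 95th) would then drive the dose and ARS risk calculations the abstract describes.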

  17. Statistical analysis on extreme wave height

    Digital Repository Service at National Institute of Oceanography (India)

    Teena, N.V.; SanilKumar, V.; Sudheesh, K.; Sajeev, R.

WAFO (2000) – A MATLAB toolbox for analysis of random waves and loads, Lund University, Sweden, http://www.maths.lth.se/matstat/wafo/. Table 1: Statistical results of data and fitted distribution for cumulative distribution...

  18. Applied Behavior Analysis and Statistical Process Control?

    Science.gov (United States)

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  19. The fuzzy approach to statistical analysis

    NARCIS (Netherlands)

    Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.

    2006-01-01

    For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes. These namely are: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;

  20. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to handle various types of data and complex relationships. Rich varieties of advanced and recent statistical modelling are mostly available in open source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command-line interface. Our research aims to develop a web interface (based on R and Shiny) so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare in order to find the most appropriate model for the data.

  1. Statistical analysis of metallicity in spiral galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Galeotti, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale)

    1981-04-01

A principal component analysis of metallicity and other integral properties of 33 spiral galaxies is presented; the parameters involved are: morphological type, diameter, luminosity and metallicity. From the statistical analysis it is concluded that the sample has only two significant dimensions, and additional tests, involving different parameters, show similar results. Thus it seems that only type and luminosity are independent variables, with the other integral properties of spiral galaxies correlated with them.
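The dimensionality argument can be sketched with a small PCA: if the four parameters effectively depend on only two independent variables, the first two eigenvalues of the correlation matrix absorb nearly all the variance. The synthetic correlations below are invented stand-ins, not the galaxy data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 33
t = rng.normal(size=n)                                   # morphological type (standardized)
lum = rng.normal(size=n)                                 # luminosity (standardized)
diam = 0.9 * lum + 0.1 * rng.normal(size=n)              # diameter tracks luminosity
metal = 0.6 * lum + 0.4 * t + 0.1 * rng.normal(size=n)   # metallicity tracks both
X = np.column_stack([t, diam, lum, metal])

R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
explained = eigvals[:2].sum() / eigvals.sum()
print(round(explained, 3))   # fraction of variance in the first two components
```

A value close to 1 for the first two components is exactly the "only two significant dimensions" conclusion of the abstract.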

  2. Selected papers on analysis, probability, and statistics

    CERN Document Server

    Nomizu, Katsumi

    1994-01-01

    This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.

  3. Statistical evaluation of vibration analysis techniques

    Science.gov (United States)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.

  4. Statistical inference based on latent ability estimates

    NARCIS (Netherlands)

    Hoijtink, H.J.A.; Boomsma, A.

The quality of approximations to first and second order moments (e.g., statistics like means, variances, regression coefficients) based on latent ability estimates is discussed. The ability estimates are obtained using either the Rasch or the two-parameter logistic model. Straightforward use

  5. Net analyte signal based statistical quality control

    NARCIS (Netherlands)

    Skibsted, E.T.S.; Boelens, H.F.M.; Westerhuis, J.A.; Smilde, A.K.; Broad, N.W.; Rees, D.R.; Witte, D.T.

    2005-01-01

    Net analyte signal statistical quality control (NAS-SQC) is a new methodology to perform multivariate product quality monitoring based on the net analyte signal approach. The main advantage of NAS-SQC is that the systematic variation in the product due to the analyte (or property) of interest is

  6. Plan delivery quality assurance for CyberKnife: Statistical process control analysis of 350 film-based patient-specific QAs.

    Science.gov (United States)

    Bellec, J; Delaby, N; Jouyaux, F; Perdrieux, M; Bouvier, J; Sorel, S; Henry, O; Lafond, C

    2017-07-01

    Robotic radiosurgery requires plan delivery quality assurance (DQA) but there has never been a published comprehensive analysis of a patient-specific DQA process in a clinic. We proposed to evaluate 350 consecutive film-based patient-specific DQAs using statistical process control. We evaluated the performance of the process to propose achievable tolerance criteria for DQA validation and we sought to identify suboptimal DQA using control charts. DQAs were performed on a CyberKnife-M6 using Gafchromic-EBT3 films. The signal-to-dose conversion was performed using a multichannel-correction and a scanning protocol that combined measurement and calibration in a single scan. The DQA analysis comprised a gamma-index analysis at 3%/1.5mm and a separate evaluation of spatial and dosimetric accuracy of the plan delivery. Each parameter was plotted on a control chart and control limits were calculated. A capability index (Cpm) was calculated to evaluate the ability of the process to produce results within specifications. The analysis of capability showed that a gamma pass rate of 85% at 3%/1.5mm was highly achievable as acceptance criteria for DQA validation using a film-based protocol (Cpm>1.33). 3.4% of DQA were outside a control limit of 88% for gamma pass-rate. The analysis of the out-of-control DQA helped identify a dosimetric error in our institute for a specific treatment type. We have defined initial tolerance criteria for DQA validations. We have shown that the implementation of a film-based patient-specific DQA protocol with the use of control charts is an effective method to improve patient treatment safety on CyberKnife. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
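A sketch of the SPC computations described above, assuming an individuals chart with moving-range sigma estimation and a one-sided capability index against the 85% lower specification; the 350 pass rates are simulated, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical gamma pass rates (%) for 350 consecutive DQAs.
pass_rates = np.clip(rng.normal(95.0, 2.0, 350), 0, 100)

mr = np.abs(np.diff(pass_rates))             # moving ranges of consecutive DQAs
sigma_hat = mr.mean() / 1.128                # d2 constant for subgroups of size 2
lcl = pass_rates.mean() - 3 * sigma_hat      # lower control limit (individuals chart)

# One-sided Cpm-style index: lower spec 85%, target 100% pass rate.
lsl, target = 85.0, 100.0
tau = np.sqrt(pass_rates.var() + (pass_rates.mean() - target)**2)
cpm = (pass_rates.mean() - lsl) / (3 * tau)
print(round(lcl, 1), round(cpm, 2))
```

DQAs falling below the LCL would be flagged for investigation, mirroring how the out-of-control points in the paper exposed a dosimetric error for one treatment type.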

  7. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analyses and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The Weibull distributions generally give good fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications for the reliability level of typical structural elements and for partial safety factors for timber are investigated.
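A sketch of a tail-emphasized fit: a 2-parameter Weibull fitted by linear regression on the Weibull plot using only the lower tail of simulated strength data. The parameters are illustrative, and this is not necessarily the paper's fitting method:

```python
import numpy as np

rng = np.random.default_rng(8)
k_true, lam_true = 4.0, 40.0                     # shape, scale (hypothetical MPa)
x = np.sort(lam_true * rng.weibull(k_true, 500)) # simulated strength data

# Weibull plot: ln(-ln(1 - F)) is linear in ln(x) with slope k.
F = (np.arange(1, len(x) + 1) - 0.5) / len(x)    # median-rank style plotting positions
tail = F < 0.3                                   # tail fit: lower 30% only
slope, intercept = np.polyfit(np.log(x[tail]), np.log(-np.log(1 - F[tail])), 1)
k_hat, lam_hat = slope, np.exp(-intercept / slope)
print(round(k_hat, 2), round(lam_hat, 1))
```

Fitting only the lower tail weights the fit toward the low-strength region that governs structural reliability, which is why the abstract stresses tail fits.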

  8. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  9. A combined tract-based spatial statistics and voxel-based morphometry study of the first MRI scan after diagnosis of amyotrophic lateral sclerosis with subgroup analysis.

    Science.gov (United States)

    Alruwaili, A R; Pannek, K; Coulthard, A; Henderson, R; Kurniawan, N D; McCombe, P

    2018-02-01

This study aims to compare the cortical and subcortical deep gray matter (GM) and white matter (WM) of ALS subjects and controls, and to compare ALS subjects with (ALScog) and without (ALSnon-cog) cognitive impairment. The study was performed in 30 ALS subjects and 19 healthy controls. Structural T1- and diffusion-weighted MRI data were analyzed using voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS). All DTI measures and GM volume differed significantly between ALS subjects and controls. Compared to controls, greater DTI changes were present in ALScog than in ALSnon-cog subjects. GM results showed a reduction in caudate nucleus volume in ALScog subjects compared to ALSnon-cog; comparing all ALS subjects with controls, there were changes on the right side and in a small region of the left middle frontal gyrus. This combined DTI and VBM study showed changes in motor and extra-motor regions. The DTI changes were more extensive in ALScog than in ALSnon-cog subjects. It is likely that the inclusion of ALS subjects with cognitive impairment in previous studies resulted in extra-motor WM abnormalities being reported in ALS subjects. Copyright © 2017. Published by Elsevier Masson SAS.

  10. Developments in statistical analysis in quantitative genetics

    DEFF Research Database (Denmark)

    Sorensen, Daniel

    2009-01-01

A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and by Markov chain Monte Carlo (McMC). In this overview a number of specific areas are chosen to illustrate the enormous flexibility that McMC has provided for fitting models and exploring features of data that were previously inaccessible. The selected areas are inferences of the trajectories over time of genetic means and variances, models for the analysis of categorical and count data, the statistical genetics of a model postulating that environmental variance is partly under genetic control, and a short discussion of models that incorporate massive genetic marker information. We provide an overview...

  11. Statistical Analysis of Big Data on Pharmacogenomics

    Science.gov (United States)

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
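One of the regularization strategies reviewed in this literature, entry-wise thresholding of the sample correlation matrix, can be sketched as follows; the "gene network" below is synthetic, with a single true edge:

```python
import numpy as np

rng = np.random.default_rng(9)
p, n = 50, 200                                   # many variables, modest sample
Sigma = np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = 0.7                  # one true edge between genes 0 and 1
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

R = np.corrcoef(X, rowvar=False)
tau = 2 * np.sqrt(np.log(p) / n)                 # theory-style threshold
R_sparse = R * (np.abs(R) > tau)                 # zero out small correlations
np.fill_diagonal(R_sparse, 1.0)

edges = np.argwhere(np.triu(np.abs(R_sparse), 1) > 0)
print(len(edges))                                # number of recovered edges
```

Thresholding is the simplest of the covariance regularizers the paper surveys; graphical-lasso-type estimators of the inverse covariance serve the network-modeling use case more directly.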

  12. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  13. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous presence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.
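
    The multivariate trimming mentioned above can be sketched in a few lines: repeatedly discard the most outlying fraction of points by Mahalanobis distance, then recompute location and scatter. This is an illustrative simplification, not the exact NURE procedure; only the 5% trim fraction comes from the abstract, and the planted outliers are invented.

```python
import numpy as np

def trimmed_mean_cov(X, trim=0.05, n_iter=5):
    """Robust location and scatter via multivariate trimming.

    Each pass drops the `trim` fraction of points with the largest
    Mahalanobis distance, then recomputes mean and covariance from
    the retained points.
    """
    keep = np.ones(len(X), dtype=bool)
    for _ in range(n_iter):
        mu = X[keep].mean(axis=0)
        cov = np.cov(X[keep], rowvar=False)
        d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(cov), X - mu)
        cutoff = np.quantile(d2, 1.0 - trim)
        keep = d2 <= cutoff
    return mu, cov

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
X[:5] += 10.0                        # plant 5% gross outliers
mu_r, cov_r = trimmed_mean_cov(X)
```

With 5% of the points shifted by ten standard deviations, the ordinary mean would be pulled about 0.5 units off target, while the trimmed estimates stay near the true values.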

  14. Application of Ontology Technology in Health Statistic Data Analysis.

    Science.gov (United States)

    Guo, Minjiang; Hu, Hongpu; Lei, Xingyun

    2017-01-01

    Research purpose: to establish a health management ontology for the analysis of health statistic data. Proposed methods: this paper established a health management ontology based on the analysis of the concepts in the China Health Statistics Yearbook, and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source, heterogeneous health system management data and for the enhancement of management efficiency.

  15. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)
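
    The emphysema index reported above is conceptually simple: the percentage of segmented lung voxels whose attenuation falls below the -950 HU threshold. A minimal sketch, with invented toy numbers:

```python
import numpy as np

def emphysema_index(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels below the HU threshold (EI, %).

    hu_volume : CT attenuation values in Hounsfield units
    lung_mask : boolean mask of segmented lung voxels
    """
    lung = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung < threshold) / lung.size

# toy volume: 1000 lung voxels, 23 of them below -950 HU
hu = np.full(1000, -800.0)
hu[:23] = -970.0
mask = np.ones(1000, dtype=bool)
ei = emphysema_index(hu, mask)   # -> 2.3 (%)
```

Because EI is a simple threshold count, any reconstruction algorithm that shifts noise around -950 HU (as FBP, ASIR and MBIR do differently) shifts the index, which is the abstract's central caution.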

  16. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)

  17. Statistical-techniques-based computer-aided diagnosis (CAD) using texture feature analysis: application in computed tomography (CT) imaging to fatty liver disease

    International Nuclear Information System (INIS)

    Chung, Woon-Kwan; Park, Hyong-Hu; Im, In-Chul; Lee, Jae-Seung; Goo, Eun-Hoe; Dong, Kyung-Rae

    2012-01-01

    This paper proposes a computer-aided diagnosis (CAD) system based on texture feature analysis and statistical wavelet transformation technology to diagnose fatty liver disease with computed tomography (CT) imaging. In the target image, a wavelet transformation was performed for each lesion area to set the region of analysis (ROA, window size: 50 x 50 pixels) and define the texture feature of a pixel. Based on the extracted texture feature values, six parameters (average gray level, average contrast, relative smoothness, skewness, uniformity, and entropy) were determined to calculate the recognition rate for a fatty liver. In addition, a multivariate analysis of variance (MANOVA) method was used to perform a discriminant analysis to verify the significance of the extracted texture feature values and the recognition rate for a fatty liver. According to the results, each texture feature value was significant for a comparison of the recognition rate for a fatty liver (p < 0.05). Furthermore, the F-value, which was used as a scale for the difference in recognition rates, was highest for the average gray level, relatively high for the skewness and the entropy, and relatively low for the uniformity, the relative smoothness and the average contrast. The recognition rate for a fatty liver followed the same ordering as the F-value, showing 100% (average gray level) at the maximum and 80% (average contrast) at the minimum. Therefore, the recognition rate is believed to be a useful clinical value for automatic detection and computer-aided diagnosis (CAD) using the texture feature value. Nevertheless, further study on various diseases and singular diseases will be needed in the future.
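
    The six parameters named above are standard first-order statistics of the gray-level histogram. A sketch of how they might be computed for one ROA follows; the formulas are common textbook definitions, not necessarily the authors' exact implementation, and the random ROA is invented.

```python
import numpy as np

def texture_features(roa, levels=256):
    """Six first-order texture descriptors of a grayscale ROA:
    average gray level, average contrast (std), relative smoothness,
    skewness, uniformity, entropy -- all from the normalized histogram.
    """
    hist = np.bincount(roa.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    z = np.arange(levels, dtype=float)
    mean = (z * p).sum()
    var = ((z - mean) ** 2 * p).sum()
    std = np.sqrt(var)
    # variance normalized to [0, 1) before the smoothness formula
    smoothness = 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2)
    skew = (((z - mean) ** 3) * p).sum() / (std ** 3 + 1e-12)
    uniformity = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()
    return mean, std, smoothness, skew, uniformity, entropy

rng = np.random.default_rng(2)
roa = rng.integers(0, 256, size=(50, 50))    # 50 x 50 pixel ROA, as in the paper
feats = texture_features(roa.astype(np.int64))
```

For a uniformly random ROA the entropy approaches 8 bits and the uniformity approaches 1/256; for a constant patch, entropy is 0 and uniformity is 1.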

  18. Vapor Pressure Data Analysis and Statistics

    Science.gov (United States)

    2016-12-01

    The A (or a) value is directly related to vapor pressure and will be greater for high-vapor-pressure materials. In the fitting equation, n is the number of data points, Yi is the natural logarithm of the i-th experimental vapor pressure value, and Xi is the corresponding independent variable. (ECBC-TR-1422, Ann Brozena, Research and Technology Directorate)

  19. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Sung; Park, Kwang Suk [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea); Seoul National University College of Medicine, Department of Biomedical Engineering, Seoul (Korea); Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul [Seoul National University College of Medicine, Department of Otolaryngology, Head and Neck Surgery, Seoul (Korea); Lee, Dong Soo; Jeong, Jae Min [Seoul National University College of Medicine, Department of Nuclear Medicine, 28 Yungun-Dong, Chongno-Ku, Seoul (Korea)

    2005-06-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-{sup 14}C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. 
There was no significantly reduced metabolism in

  20. Voxel-based statistical analysis of cerebral glucose metabolism in the rat cortical deafness model by 3D reconstruction of brain from autoradiographic images

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Park, Kwang Suk; Ahn, Soon-Hyun; Oh, Seung Ha; Kim, Chong Sun; Chung, June-Key; Lee, Myung Chul; Lee, Dong Soo; Jeong, Jae Min

    2005-01-01

    Animal models of cortical deafness are essential for investigation of the cerebral glucose metabolism in congenital or prelingual deafness. Autoradiographic imaging is mainly used to assess the cerebral glucose metabolism in rodents. In this study, procedures for the 3D voxel-based statistical analysis of autoradiographic data were established to enable investigations of the within-modal and cross-modal plasticity through entire areas of the brain of sensory-deprived animals without lumping together heterogeneous subregions within each brain structure into a large region of interest. Thirteen 2-[1-14C]-deoxy-D-glucose autoradiographic images were acquired from six deaf and seven age-matched normal rats (age 6-10 weeks). The deafness was induced by surgical ablation. For the 3D voxel-based statistical analysis, brain slices were extracted semiautomatically from the autoradiographic images, which contained the coronal sections of the brain, and were stacked into 3D volume data. Using principal axes matching and mutual information maximization algorithms, the adjacent coronal sections were co-registered using a rigid body transformation, and all sections were realigned to the first section. A study-specific template was composed and the realigned images were spatially normalized onto the template. Following count normalization, voxel-wise t tests were performed to reveal the areas with significant differences in cerebral glucose metabolism between the deaf and the control rats. Continuous and clear edges were detected in each image after registration between the coronal sections, and the internal and external landmarks extracted from the spatially normalized images were well matched, demonstrating the reliability of the spatial processing procedures. Voxel-wise t tests showed that the glucose metabolism in the bilateral auditory cortices of the deaf rats was significantly (P<0.001) lower than that in the controls. 
There was no significantly reduced metabolism in any

  1. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering.  Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  2. 5 year radar-based rainfall statistics: disturbances analysis and development of a post-correction scheme for the German radar composite

    Science.gov (United States)

    Wagner, A.; Seltmann, J.; Kunstmann, H.

    2015-02-01

    A radar-based rainfall statistic demands high quality data that provide realistic precipitation amounts in space and time. Instead of correcting single radar images, we developed a post-correction scheme for long-term composite radar data that corrects corrupted areas, but preserves the original precipitation patterns. The post-correction scheme is based on a 5 year statistical analysis of radar composite data and its constituents. The accumulation of radar images reveals artificial effects that are not visible in the individual radar images. Some of them are already inherent to single radar data such as the effect of increasing beam height, beam blockage or clutter remnants. More artificial effects are introduced in the process of compositing such as sharp gradients at the boundaries of overlapping areas due to different beam heights and resolution. The cause of these disturbances, their behaviour with respect to reflectivity level, season or altitude is analysed based on time-series of two radar products: the single radar reflectivity product PX for each of the 16 radar systems of the German Meteorological Service (DWD) for the time span 2000 to 2006 and the radar composite product RX of DWD from 2005 through to 2009. These statistics result in additional quality information on radar data that is not available elsewhere. The resulting robust characteristics of disturbances, e.g. the dependency of the frequencies of occurrence of radar reflectivities on beam height, are then used as a basis for the post-correction algorithm. The scheme comprises corrections for shading effects and speckles, such as clutter remnants or overfiltering, as well as for systematic differences in frequencies of occurrence of radar reflectivities between the near and the far ranges of individual radar sites. An adjustment to rain gauges is also included. Applying this correction, the Root-Mean-Square-Error for the comparison of radar derived annual rain amounts with rain gauge data

  3. Statistical Analysis of SSMIS Sea Ice Concentration Threshold at the Arctic Sea Ice Edge during Summer Based on MODIS and Ship-Based Observational Data.

    Science.gov (United States)

    Ji, Qing; Li, Fei; Pang, Xiaoping; Luo, Cong

    2018-04-05

    The threshold of sea ice concentration (SIC) is the basis for accurately calculating sea ice extent based on passive microwave (PM) remote sensing data. However, the PM SIC threshold at the sea ice edge used in previous studies and released sea ice products has not always been consistent. To explore the representable value of the PM SIC threshold corresponding on average to the position of the Arctic sea ice edge during summer in recent years, we extracted sea ice edge boundaries from the Moderate-resolution Imaging Spectroradiometer (MODIS) sea ice product (MOD29 with a spatial resolution of 1 km), MODIS images (250 m), and sea ice ship-based observation points (1 km) during the fifth (CHINARE-2012) and sixth (CHINARE-2014) Chinese National Arctic Research Expeditions, and made an overlay and comparison analysis with PM SIC derived from Special Sensor Microwave Imager Sounder (SSMIS, with a spatial resolution of 25 km) in the summer of 2012 and 2014. Results showed that the average SSMIS SIC threshold at the Arctic sea ice edge based on ice-water boundary lines extracted from MOD29 was 33%, which was higher than that of the commonly used 15% discriminant threshold. The average SIC threshold at sea ice edge based on ice-water boundary lines extracted by visual interpretation from four scenes of the MODIS image was 35% when compared to the average value of 36% from the MOD29 extracted ice edge pixels for the same days. The average SIC of 31% at the sea ice edge points extracted from ship-based observations also confirmed that choosing around 30% as the SIC threshold during summer is recommended for sea ice extent calculations based on SSMIS PM data. These results can provide a reference for further studying the variation of sea ice under the rapidly changing Arctic.
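
    The dependence of sea ice extent on the SIC threshold can be made concrete with a toy calculation. The grid values below are invented; the 625 km² cell area corresponds to the 25 km SSMIS grid, and the ~30% summer threshold follows the study's recommendation.

```python
import numpy as np

def sea_ice_extent(sic, cell_area_km2=25.0 * 25.0, threshold=30.0):
    """Sea ice extent (km^2) from a passive-microwave SIC grid.

    A grid cell counts toward the extent when its sea ice
    concentration (in percent) is at or above the threshold.
    """
    sic = np.asarray(sic, dtype=float)
    return np.count_nonzero(sic >= threshold) * cell_area_km2

sic = np.array([[80.0, 40.0, 20.0],
                [33.0, 10.0,  0.0]])
extent_30 = sea_ice_extent(sic)                  # 3 cells -> 1875 km^2
extent_15 = sea_ice_extent(sic, threshold=15.0)  # 4 cells -> 2500 km^2
```

Even on this tiny grid, moving the threshold from 15% to 30% changes the computed extent by a full cell, which is why the threshold choice matters for Arctic-wide statistics.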

  4. Surface Properties of TNOs: Preliminary Statistical Analysis

    Science.gov (United States)

    Antonieta Barucci, Maria; Fornasier, S.; Alvarez-Cantal, A.; de Bergh, C.; Merlin, F.; DeMeo, F.; Dumas, C.

    2009-09-01

    An overview of the surface properties based on the last results obtained during the Large Program performed at ESO-VLT (2007-2008) will be presented. Simultaneous high quality visible and near-infrared spectroscopy and photometry have been carried out on 40 objects with various dynamical properties, using FORS1 (V), ISAAC (J) and SINFONI (H+K bands) mounted respectively at UT2, UT1 and UT4 VLT-ESO telescopes (Cerro Paranal, Chile). For spectroscopy we computed the spectral slope for each object and searched for possible rotational inhomogeneities. A few objects show features in their visible spectra such as Eris, whose spectral bands are displaced with respect to pure methane-ice. We identify new faint absorption features on 10199 Chariklo and 42355 Typhon, possibly due to the presence of aqueous altered materials. The H+K band spectroscopy was performed with the new instrument SINFONI which is a 3D integral field spectrometer. While some objects show no diagnostic spectral bands, others reveal surface deposits of ices of H2O, CH3OH, CH4, and N2. To investigate the surface properties of these bodies, a radiative transfer model has been applied to interpret the entire 0.4-2.4 micron spectral region. The diversity of the spectra suggests that these objects represent a substantial range of bulk compositions. These different surface compositions can be diagnostic of original compositional diversity, interior source and/or different evolution with different physical processes affecting the surfaces. A statistical analysis is in progress to investigate the correlation of the TNOs’ surface properties with size and dynamical properties.

  5. Uncertainty analysis with statistically correlated failure data

    International Nuclear Information System (INIS)

    Modarres, M.; Dezfuli, H.; Roush, M.L.

    1987-01-01

    Likelihood of occurrence of the top event of a fault tree or of sequences of an event tree is estimated from the failure probabilities of the components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top event failure probability of fault trees, or the frequency of occurrence of event tree sequences, when event failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method is presented through Taylor series expansion, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment matching technique is used to obtain the probability distribution function of the top event by fitting the Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method developed. (author)
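
    As a concrete sketch of moment propagation with correlated failure data, consider an OR gate (the top event occurs if any component fails), with a second-order Taylor correction for the mean and a first-order term for the variance. This is an illustrative simplification of the chapter's method, not the CORRELATE program; the component probabilities and the common correlation are invented.

```python
import numpy as np

def or_gate_moments(mu, cov):
    """Mean and variance of P_top = 1 - prod(1 - p_i) for an OR gate,
    propagated from correlated component failure probabilities via a
    second-order (mean) / first-order (variance) Taylor expansion.
    """
    q = 1.0 - mu                       # component survival probabilities
    prod_q = np.prod(q)
    grad = prod_q / q                  # dP/dp_i = prod_{j != i} q_j
    # Hessian: -prod_{k != i,j} q_k off the diagonal, zero on it
    H = -prod_q / np.outer(q, q)
    np.fill_diagonal(H, 0.0)
    mean = (1.0 - prod_q) + 0.5 * np.trace(H @ cov)
    var = grad @ cov @ grad
    return mean, var

mu = np.array([0.01, 0.02, 0.03])      # mean component failure probs
sd = 0.2 * mu                          # assumed 20% relative uncertainty
rho = 0.5                              # assumed common correlation
cov = rho * np.outer(sd, sd)
np.fill_diagonal(cov, sd ** 2)
m, v = or_gate_moments(mu, cov)
```

With positive correlation the variance term `grad @ cov @ grad` picks up the off-diagonal covariances, so the top-event uncertainty is larger than an uncorrelated calculation would suggest.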

  6. Designing a risk-based surveillance program for Mycobacterium avium ssp. paratuberculosis in Norwegian dairy herds using multivariate statistical process control analysis.

    Science.gov (United States)

    Whist, A C; Liland, K H; Jonsson, M E; Sæbø, S; Sviland, S; Østerås, O; Norström, M; Hopp, P

    2014-11-01

    Surveillance programs for animal diseases are critical to early disease detection and risk estimation and to documenting a population's disease status at a given time. The aim of this study was to describe a risk-based surveillance program for detecting Mycobacterium avium ssp. paratuberculosis (MAP) infection in Norwegian dairy cattle. The included risk factors for detecting MAP were purchase of cattle, combined cattle and goat farming, and location of the cattle farm in counties containing goats with MAP. The risk indicators included production data [culling of animals >3 yr of age, carcass conformation of animals >3 yr of age, milk production decrease in older lactating cows (lactations 3, 4, and 5)], and clinical data (diarrhea, enteritis, or both, in animals >3 yr of age). Except for combined cattle and goat farming and cattle farm location, all data were collected at the cow level and summarized at the herd level. Predefined risk factors and risk indicators were extracted from different national databases and combined in a multivariate statistical process control to obtain a risk assessment for each herd. The ordinary Hotelling's T(2) statistic was applied as a multivariate, standardized measure of difference between the current observed state and the average state of the risk factors for a given herd. To make the analysis more robust and adapt it to the slowly developing nature of MAP, monthly risk calculations were based on data accumulated during a 24-mo period. Monitoring of these variables was performed to identify outliers that may indicate deviance in one or more of the underlying processes. The highest-ranked herds were scattered all over Norway and clustered in high-density dairy cattle farm areas. 
The resulting rankings of herds are being used in the national surveillance program for MAP in 2014 to increase the sensitivity of the ongoing surveillance program in which 5 fecal samples for bacteriological examination are collected from 25 dairy herds
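
    The herd-level risk score described above reduces to the ordinary Hotelling's T² distance between a herd's current indicator vector and its average state. A minimal sketch with invented indicator data (four generic risk indicators, 120 monthly observations):

```python
import numpy as np

def hotelling_t2(x, mu, S):
    """Hotelling's T^2 distance of observation x from the average
    state (mu, S): T^2 = (x - mu)' S^{-1} (x - mu), the standardized
    multivariate deviation used in statistical process control."""
    d = np.asarray(x) - np.asarray(mu)
    return float(d @ np.linalg.solve(S, d))

# toy history of 4 risk indicators for one herd
rng = np.random.default_rng(3)
history = rng.normal(size=(120, 4))
mu = history.mean(axis=0)
S = np.cov(history, rowvar=False)
t2_typical = hotelling_t2(history[0], mu, S)
t2_outlier = hotelling_t2(mu + 5.0, mu, S)   # all indicators shifted upward
```

Herds are then ranked by their T² values; a simultaneous shift in several correlated indicators produces a much larger T² than any single indicator would on its own.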

  7. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  8. A statistical approach to coastal upwelling in the Baltic Sea based on the analysis of satellite data for 1990-2009

    Directory of Open Access Journals (Sweden)

    Andreas Lehmann

    2012-06-01

    Full Text Available A statistical analysis of Baltic Sea upwelling has been carried out to cover, for the first time, the entire sea area for the period 1990-2009. Weekly composite SST maps based on NOAA/AVHRR satellite data were used to evaluate the location and frequency of upwelling. The results obtained were analysed and compared with earlier studies with excellent agreement. Our study enables the most intense upwelling areas in the entire Baltic Sea to be evaluated. According to the analysis of 443 SST maps, the most common upwelling regions are found off the Swedish south and east coasts (frequency 10-25%), the Swedish coast of the Bothnian Bay (16%), the southern tip of Gotland (up to 15%), and the Finnish coast of the Gulf of Finland (up to 15%). Pronounced upwelling also occurs off the Estonian coast and the Baltic east coast (up to 15%), the Polish coast and the west coast of Rügen (10-15%); otherwise the upwelling frequency was between 5 and 10%. Additionally, simulated SST distributions derived from a Baltic Sea numerical model were analysed for the same period. Furthermore, at specific positions close to the coastline, surface winds based on the SMHI meteorological data base were analysed for the same 20-year period. Wind components parallel to the coast were discriminated into winds favourable and unfavourable for forcing upwelling. The obtained frequencies of upwelling-favourable winds fit very well the observed upwelling frequencies derived from satellite SST maps. A positive trend of upwelling frequencies along the Swedish east coast and the Finnish coast of the Gulf of Finland was calculated for the period 1990-2009.

  9. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.

  10. Statistical Analysis of Zebrafish Locomotor Response.

    Science.gov (United States)

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
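
    The Hotelling's T-squared comparison of two groups' locomotor profiles can be sketched as follows, using the standard pooled-covariance two-sample form; the profile length, group sizes and effect size below are invented, not taken from the study.

```python
import numpy as np

def two_sample_t2(X, Y):
    """Two-sample Hotelling's T^2 for comparing the mean activity
    profiles of two groups over a time window (pooled covariance)."""
    n1, p = X.shape
    n2, _ = Y.shape
    d = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False)
         + (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
    # equivalent F statistic with (p, n1 + n2 - p - 1) degrees of freedom
    f = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * t2
    return float(t2), float(f)

rng = np.random.default_rng(4)
wt_a = rng.normal(0.0, 1.0, size=(30, 5))   # 5-point activity profiles
wt_b = rng.normal(1.0, 1.0, size=(30, 5))   # strain with higher activity
t2, f = two_sample_t2(wt_a, wt_b)
```

Comparing whole profiles in one test, rather than second-by-second univariate tests, is what makes the T² approach suitable for time-series locomotor data.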

  11. Radiation effects on cancer mortality among A-bomb survivors, 1950-72. Comparison of some statistical models and analysis based on the additive logit model

    Energy Technology Data Exchange (ETDEWEB)

    Otake, M [Hiroshima Univ. (Japan). Faculty of Science

    1976-12-01

    Various statistical models designed to determine the effects of radiation dose on mortality of atomic bomb survivors in Hiroshima and Nagasaki from specific cancers were evaluated on the basis of a basic k(age) x c(dose) x 2 contingency table. From the aspects of application and fits of different models, analysis based on the additive logit model was applied to the mortality experience of this population during the 22-year period from 1 Oct. 1950 to 31 Dec. 1972. The advantages and disadvantages of the additive logit model were demonstrated. Leukemia mortality showed a sharp rise with an increase in dose. The dose-response relationship suggests a possible curvature or a log-linear model, particularly if doses estimated to be more than 600 rad were set arbitrarily at 600 rad, since the average dose in the 200+ rad group would then change from 434 to 350 rad. In the 22-year period from 1950 to 1972, a high mortality risk due to radiation was observed in survivors with doses of 200 rad and over for all cancers except leukemia. On the other hand, during the latest period, from 1965 to 1972, a significant risk was noted also for stomach and breast cancers. Survivors who were 9 years old or less at the time of the bomb and who were exposed to high doses of 200+ rad appeared to show a high mortality risk for all cancers except leukemia, although the number of observed deaths is still small. A number of interesting areas are discussed from the statistical and epidemiological standpoints, i.e., the numerical comparison of risks in various models, the general evaluation of cancer mortality by the additive logit model, the dose-response relationship, the relative risk in the high-dose group, the time period of radiation-induced cancer mortality, the difference in dose response between Hiroshima and Nagasaki, and the relative biological effectiveness of neutrons.
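
    The additive logit model for a k(age) x c(dose) x 2 mortality table posits logit(p_ij) = alpha_i + beta_j. A rough sketch fits this by least squares on empirical logits, a textbook stand-in for the maximum-likelihood fitting actually used in the paper; the table below is invented toy data, not the A-bomb survivor data.

```python
import numpy as np

def additive_logit_fit(deaths, totals):
    """Fit logit(p_ij) = alpha_i + beta_j to a k x c x 2 mortality
    table by least squares on empirical logits ln((d+0.5)/(n-d+0.5)).
    Returns age effects alpha and dose effects beta (beta[0] = 0)."""
    k, c = deaths.shape
    y = np.log((deaths + 0.5) / (totals - deaths + 0.5)).ravel()
    # design matrix: age dummies plus dose dummies (first dose as baseline)
    rows, cols = np.divmod(np.arange(k * c), c)
    A = np.zeros((k * c, k + c - 1))
    A[np.arange(k * c), rows] = 1.0                  # alpha_i
    nz = cols > 0
    A[np.flatnonzero(nz), k + cols[nz] - 1] = 1.0    # beta_j, j >= 1
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:k], np.concatenate([[0.0], coef[k:]])

# toy table: 3 age groups x 4 dose groups, risk rising with age and dose
deaths = np.array([[ 5,  8, 12, 20],
                   [10, 15, 22, 35],
                   [20, 28, 40, 60]])
totals = np.full((3, 4), 200)
alpha, beta = additive_logit_fit(deaths, totals)
```

On this toy table both the age effects and the dose effects come out monotonically increasing, mirroring the dose-response pattern the abstract describes.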

  12. On the Statistical Validation of Technical Analysis

    Directory of Open Access Journals (Sweden)

    Rosane Riera Freire

    2007-06-01

Full Text Available Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without taking care of (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data - various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities' returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely: triangle, rectangle, and head and shoulders.

  13. Average Nuclear properties based on statistical model

    International Nuclear Information System (INIS)

    El-Jaick, L.J.

    1974-01-01

The rough properties of nuclei were investigated by the statistical model, in systems with the same and with different numbers of protons and neutrons, separately, considering the Coulomb energy in the latter system. Some average nuclear properties were calculated based on the energy density of nuclear matter, from the Weizsäcker-Bethe semiempirical mass formula, generalized for compressible nuclei. In the study of the surface energy coefficient, the great influence exercised by the Coulomb energy and nuclear compressibility was verified. For a good fit of the beta stability lines and mass excesses, the surface symmetry energy was established. (M.C.K.) [pt

  14. Status Quo and Outlook of the Studies of Entrepreneurship Education in China: Statistics and Analysis Based on Papers Indexed in CSSCI (2004-2013)

    Science.gov (United States)

    Xia, Tian; Shumin, Zhang; Yifeng, Wu

    2016-01-01

We utilized cross-tabulation statistics, word frequency counts, and content analysis of research output to conduct a bibliometric study, and used CiteSpace software to depict a knowledge map for research on entrepreneurship education in China from 2004 to 2013. The study shows that, during this period, the study of Chinese entrepreneurship education…

  15. Statistical analysis of solar proton events

    Directory of Open Access Journals (Sweden)

    V. Kurt

    2004-06-01

Full Text Available A new catalogue of 253 solar proton events (SPEs) with energy >10 MeV and peak intensity >10 protons/(cm²·s·sr) (pfu) at the Earth's orbit for three complete 11-year solar cycles (1970-2002) is given. A statistical analysis of this data set of SPEs and their associated flares that occurred during this time period is presented. It is outlined that 231 of these proton events are flare-related and only 22 of them are not associated with Hα flares. It is also noteworthy that 42 of these events are registered as Ground Level Enhancements (GLEs) in neutron monitors. The longitudinal distribution of the associated flares shows that a great number of these events are connected with western flares. This analysis enables one to understand the long-term dependence of the SPEs and the related flare characteristics on the solar cycle, which is useful for space weather prediction.

  16. STATISTICS, Program System for Statistical Analysis of Experimental Data

    International Nuclear Information System (INIS)

    Helmreich, F.

    1991-01-01

1 - Description of problem or function: The package is composed of 83 routines, the most important of which are the following: BINDTR: Binomial distribution; HYPDTR: Hypergeometric distribution; POIDTR: Poisson distribution; GAMDTR: Gamma distribution; BETADTR: Beta-1 and Beta-2 distributions; NORDTR: Normal distribution; CHIDTR: Chi-square distribution; STUDTR: Distribution of Student's t; FISDTR: Distribution of F; EXPDTR: Exponential distribution; WEIDTR: Weibull distribution; FRAKTIL: Calculation of the fractiles of the normal, chi-square, Student's, and F distributions; VARVGL: Test for equality of variance for several sample observations; ANPAST: Kolmogorov-Smirnov test and chi-square test of goodness of fit; MULIRE: Multiple linear regression analysis for a dependent variable and a set of independent variables; STPRG: Performs a stepwise multiple linear regression analysis for a dependent variable and a set of independent variables. At each step, the variable entered into the regression equation is the one which has the greatest amount of variance between it and the dependent variable. Any independent variable can be forced into or deleted from the regression equation, irrespective of its contribution to the equation. LTEST: Tests the hypothesis of linearity of the data. SPRANK: Calculates the Spearman rank correlation coefficient. 2 - Method of solution: VARVGL: Bartlett's test, Cochran's test, and Hartley's test are performed in the program. MULIRE: The Gauss-Jordan method is used in the solution of the normal equations. STPRG: The abbreviated Doolittle method is used to (1) determine variables to enter into the regression, and (2) complete the regression coefficient calculation. 3 - Restrictions on the complexity of the problem: VARVGL: Hartley's test is only performed if the sample observations are all of the same size.
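The two goodness-of-fit tests named for the ANPAST routine (Kolmogorov-Smirnov and chi-square) are compact enough to sketch in modern terms. The following is an illustrative Python reconstruction against a normal distribution, not the package's original code:

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup_x |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

def chi2_statistic(sample, cdf, edges):
    """Chi-square goodness-of-fit statistic over the bins defined by edges."""
    n = len(sample)
    stat = 0.0
    for lo, hi in zip(edges, edges[1:]):
        observed = sum(lo <= x < hi for x in sample)
        expected = n * (cdf(hi) - cdf(lo))
        stat += (observed - expected) ** 2 / expected
    return stat

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(500)]
d = ks_statistic(data, normal_cdf)
c2 = chi2_statistic(data, normal_cdf, [-3.0 + 0.5 * k for k in range(13)])
# for Gaussian data, d should stay well below the 5% critical value 1.36/sqrt(n)
```

In practice both statistics would then be compared with tabulated critical values (fractiles), which is exactly what the package's FRAKTIL routine computes.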

  17. Analysis of gene expression profiles of soft tissue sarcoma using a combination of knowledge-based filtering with integration of multiple statistics.

    Directory of Open Access Journals (Sweden)

    Anna Takahashi

Full Text Available The diagnosis and treatment of soft tissue sarcomas (STS) have been difficult. Of the diverse histological subtypes, undifferentiated pleomorphic sarcoma (UPS) is particularly difficult to diagnose accurately, and its classification per se is still controversial. Recent advances in genomic technologies provide an excellent way to address such problems. However, it is often difficult, if not impossible, to identify definitive disease-associated genes using genome-wide analysis alone, primarily because of multiple testing problems. In the present study, we analyzed microarray data from 88 STS patients using a combination method that used knowledge-based filtering and a simulation based on the integration of multiple statistics to reduce multiple testing problems. We identified 25 genes, including hypoxia-related genes (e.g., MIF, SCD1, P4HA1, ENO1, and STAT1) and cell cycle- and DNA repair-related genes (e.g., TACC3, PRDX1, PRKDC, and H2AFY). These genes showed significant differential expression among histological subtypes, including UPS, and showed associations with overall survival. STAT1 showed a strong association with overall survival in UPS patients (log-rank p = 1.84 × 10^-6 and adjusted p-value 2.99 × 10^-3 after the permutation test). According to the literature, the 25 genes selected are useful not only as markers of differential diagnosis but also as prognostic/predictive markers and/or therapeutic targets for STS. Our combination method can identify genes that are potential prognostic/predictive factors and/or therapeutic targets in STS and possibly in other cancers. These disease-associated genes deserve further preclinical and clinical validation.

  18. Statistical margin to DNB safety analysis approach for LOFT

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1982-01-01

A method was developed and used for LOFT thermal safety analysis to estimate the statistical margin to DNB for the hot rod, and to base safety analysis on desired DNB probability limits. This method is an advanced approach using response surface analysis methods, a very efficient experimental design, and a second-order response surface equation with a second-order error propagation analysis to define the MDNBR probability density function. Calculations for limiting transients were used in the response surface analysis, thereby including transient interactions and trip uncertainties in the MDNBR probability density function.

  19. Multivariate statistical analysis of wildfires in Portugal

    Science.gov (United States)

    Costa, Ricardo; Caramelo, Liliana; Pereira, Mário

    2013-04-01

Several studies demonstrate that wildfires in Portugal present high temporal and spatial variability as well as cluster behavior (Pereira et al., 2005, 2011). This study aims to contribute to the characterization of the fire regime in Portugal with a multivariate statistical analysis of the time series of number of fires and area burned in Portugal during the 1980-2009 period. The data used in the analysis is an extended version of the Rural Fire Portuguese Database (PRFD) (Pereira et al., 2011), provided by the National Forest Authority (Autoridade Florestal Nacional, AFN), the Portuguese Forest Service, which includes information for more than 500,000 fire records. There are many advanced techniques for examining the relationships among multiple time series at the same time (e.g., canonical correlation analysis, principal components analysis, factor analysis, path analysis, multivariate analysis of variance, clustering systems). This study compares and discusses the results obtained with these different techniques. Pereira, M.G., Trigo, R.M., DaCamara, C.C., Pereira, J.M.C., Leite, S.M., 2005: "Synoptic patterns associated with large summer forest fires in Portugal". Agricultural and Forest Meteorology. 129, 11-25. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).

  20. A statistical analysis of electrical cerebral activity

    International Nuclear Information System (INIS)

    Bassant, Marie-Helene

    1971-01-01

The aim of this work was to study the statistical properties of the amplitude of the electroencephalographic signal. The experimental method is described (implantation of electrodes, acquisition and treatment of data). The program of the mathematical analysis is given (calculation of probability density functions, study of stationarity) and the validity of the tests discussed. The results concern ten rabbits. Strips of EEG were sampled for 40 s at very short intervals (500 μs). The probability density functions established for different brain structures (especially the dorsal hippocampus) and areas were compared during sleep, arousal and visual stimulation. Using a χ² test, it was found that the Gaussian distribution assumption was rejected in 96.7 percent of the cases. For a given physiological state, there was no mathematical reason to reject the assumption of stationarity (in 96 percent of the cases). (author) [fr
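The stationarity question this abstract raises (does the amplitude distribution change between recording epochs?) can be illustrated with a two-sample chi-square comparison of segment histograms. This is a modern sketch under my own choices (quantile-based binning, synthetic Gaussian "signal"), not the author's original program:

```python
import math
import random

def two_sample_chi2(seg1, seg2, n_bins=10):
    """Pool the two segments, bin by pooled quantiles, and compare bin
    counts with a two-sample chi-square statistic (stationarity check)."""
    pooled = sorted(seg1 + seg2)
    # bin edges at pooled quantiles so expected counts are balanced
    edges = [pooled[int(k * len(pooled) / n_bins)] for k in range(1, n_bins)]
    def counts(seg):
        c = [0] * n_bins
        for x in seg:
            c[sum(x >= e for e in edges)] += 1
        return c
    c1, c2 = counts(seg1), counts(seg2)
    n1, n2 = len(seg1), len(seg2)
    stat = 0.0
    for o1, o2 in zip(c1, c2):
        tot = o1 + o2
        e1, e2 = tot * n1 / (n1 + n2), tot * n2 / (n1 + n2)
        stat += (o1 - e1) ** 2 / e1 + (o2 - e2) ** 2 / e2
    return stat  # ~ chi-square with n_bins - 1 degrees of freedom

random.seed(7)
signal = [random.gauss(0.0, 1.0) for _ in range(2000)]
stat = two_sample_chi2(signal[:1000], signal[1000:])
```

A large value of `stat` relative to the chi-square fractile with `n_bins - 1` degrees of freedom would argue against stationarity of the amplitude distribution.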

  1. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...
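Maximum-likelihood fits of the Normal and Lognormal candidates named above have closed forms and are easy to compare by log-likelihood; the Weibull fits used in the study require iterative estimation and are omitted from this sketch. The synthetic "strength" data below are illustrative, not the study's timber measurements:

```python
import math
import random

def normal_loglik(x):
    """Maximized Gaussian log-likelihood (MLE mean and variance plugged in)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n
    return -0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)

def lognormal_loglik(x):
    """Maximized lognormal log-likelihood: Gaussian fit on the log-data
    plus the Jacobian term -sum(log x) of the log transform."""
    logs = [math.log(v) for v in x]
    return normal_loglik(logs) - sum(logs)

random.seed(3)
# strictly positive, right-skewed synthetic strengths (MPa-like scale)
strengths = [math.exp(random.gauss(3.2, 0.5)) for _ in range(1000)]
ll_normal = normal_loglik(strengths)
ll_lognormal = lognormal_loglik(strengths)
# right-skewed data should favour the lognormal fit
```

The same log-likelihood comparison extends to any candidate family once its MLE is available, which is how distribution types are ranked in analyses of this kind.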

  2. Statistical analysis of the ASME KIc database

    International Nuclear Information System (INIS)

    Sokolov, M.A.

    1998-01-01

The American Society of Mechanical Engineers (ASME) K_Ic curve is a function of test temperature (T) normalized to a reference nil-ductility temperature, RT_NDT, namely, T - RT_NDT. It was constructed as the lower boundary to the available K_Ic database. Being a lower bound to the unique but limited database, the ASME K_Ic curve concept does not address probability matters. However, a continuing evolution of fracture mechanics advances has led to employment of the Weibull distribution function to model the scatter of fracture toughness values in the transition range. The Weibull statistic/master curve approach was applied to analyze the current ASME K_Ic database. It is shown that the Weibull distribution function models the scatter in K_Ic data from different materials very well, while the temperature dependence is described by the master curve. Probabilistically based tolerance-bound curves are suggested to describe lower-bound K_Ic values

  3. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass appearances. In fact, statistics present a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  4. Statistical analysis in MSW collection performance assessment.

    Science.gov (United States)

    Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel

    2014-09-01

The increase of Municipal Solid Waste (MSW) generated over the last years forces waste managers to pursue more effective collection schemes that are technically viable, environmentally effective and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role in improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core set of MSW collection performance indicators (effective collection distance, effective collection time and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in the mixed collection system of Oporto Municipality, Portugal, during one year, one week per month. This analysis provides an operational assessment of the collection circuits and supports effective short-term municipal collection strategies at the level of, e.g., collection frequency and timetables, and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Statistics Analysis Measures Painting of Cooling Tower

    Directory of Open Access Journals (Sweden)

    A. Zacharopoulou

    2013-01-01

Full Text Available This study refers to the cooling tower of Megalopolis (construction 1975) and its protection from a corrosive environment. The maintenance of the cooling tower took place in 2008. The cooling tower was badly damaged from corrosion of reinforcement. The parabolic cooling towers (factory of electrical power) are a typical example of construction which has a specially aggressive environment. The protection of cooling towers is usually achieved through organic coatings. Because of the different environmental impacts on the internal and external side of the cooling tower, a different system of paint application is required. The present study refers to the damages caused by the corrosion process. The corrosive environments, the application of this painting, the quality control process, the measures and statistics analysis, and the results were discussed in this study. In the process of quality control the following measurements were taken into consideration: (1) examination of the adhesion with the cross-cut test, (2) examination of the film thickness, and (3) controlling of the pull-off resistance for concrete substrates and paintings. Finally, this study refers to the correlations of measurements, analysis of failures in relation to the quality of repair, and rehabilitation of the cooling tower. Also this study made a first attempt to apply the specific corrosion inhibitors in such a large structure.

  6. Value of a statistical life in road safety: a benefit-transfer function with risk-analysis guidance based on developing country data.

    Science.gov (United States)

    Milligan, Craig; Kopp, Andreas; Dahdah, Said; Montufar, Jeannette

    2014-10-01

We model a value of statistical life (VSL) transfer function for application to road-safety engineering in developing countries through an income-disaggregated meta-analysis of scope-sensitive stated preference VSL data. The income-disaggregated meta-analysis treats developing country and high-income country data separately. Previous transfer functions are based on aggregated datasets that are composed largely of data from high-income countries. Recent evidence, particularly with respect to the income elasticity of VSL, suggests that the aggregate approach is deficient because it does not account for a possible change in income elasticity across income levels. Our dataset (a minor update of the OECD database published in 2012) includes 123 scope-sensitive VSL estimates from developing countries and 185 scope-sensitive estimates from high-income countries. The transfer function for developing countries gives VSL = 1.3732E-4 × (GDP per capita)^2.478, with VSL and GDP per capita expressed in 2005 international dollars (an international dollar being a notional currency with the same purchasing power as the U.S. dollar). The function can be applied for low- and middle-income countries with GDPs per capita above $1268 (with a data gap for very low-income countries), whereas it is not useful above a GDP per capita of about $20,000. The corresponding function built using high-income country data is VSL = 8.2474E+3 × (GDP per capita)^0.6932; it is valid for high-income countries but over-estimates VSL for low- and middle-income countries. The research finds two principal significant differences between the transfer functions modeled using developing-country and high-income-country data, supporting the disaggregated approach. The first of these differences relates to between-country VSL income elasticity, which is 2.478 for the developing country function and 0.693 for the high-income function; the difference is significant at p … economic performance measures for road
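The two transfer functions quoted in this abstract can be encoded directly; the coefficients, exponents, and validity ranges below are the ones stated in the text (2005 international dollars). A minimal sketch:

```python
def vsl_developing(gdp_per_capita):
    """VSL transfer function for low- and middle-income countries:
    VSL = 1.3732E-4 * (GDP per capita)^2.478, stated as applicable
    roughly between $1,268 and ~$20,000 GDP per capita."""
    if not 1268 <= gdp_per_capita <= 20000:
        raise ValueError("GDP per capita outside the function's stated range")
    return 1.3732e-4 * gdp_per_capita ** 2.478

def vsl_high_income(gdp_per_capita):
    """High-income-country function: VSL = 8.2474E+3 * (GDP per capita)^0.6932."""
    return 8.2474e3 * gdp_per_capita ** 0.6932

low_mid = vsl_developing(5000.0)
high = vsl_high_income(40000.0)
```

The steep exponent (2.478 vs 0.693) is what makes the high-income function over-estimate VSL when extrapolated downward, which is the abstract's argument for disaggregation.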

  7. Wind Statistics Offshore based on Satellite Images

    DEFF Research Database (Denmark)

    Hasager, Charlotte Bay; Mouche, Alexis; Badger, Merete

    2009-01-01

Ocean wind maps from satellites are routinely processed both at Risø DTU and CLS based on the European Space Agency Envisat ASAR data. At Risø the a priori wind direction is taken from the atmospheric model NOGAPS (Naval Operational Global Atmospheric Prediction System) provided by the U.S. Navy …-based observations become available. At present preliminary results are obtained using the routine methods. The first step in the process is to retrieve raw SAR data, calibrate the images and use the a priori wind direction as input to the geophysical model function. From this process the wind speed maps are produced. The wind maps are geo-referenced. The second process is the analysis of a series of geo-referenced SAR-based wind maps. Previous research has shown that a relatively large number of images is needed for achieving certain accuracies on mean wind speed, Weibull A and k (scale and shape parameters)…

  8. Geographical and Statistical Analysis on the Relationship between Land-Use Mixture and Home-Based Trip Making and More: Case of Richmond, Virginia

    Directory of Open Access Journals (Sweden)

Yin-Shan MA

    2013-06-01

Full Text Available Richmond, Virginia has implemented numerous mixed land-use policies to encourage non-private-vehicle commuting for decades, based on the best practices of other cities and the assumption that land-use mixture would positively lead to trip reduction. This paper uses both Geographical Information Systems (GIS) and statistical tools to empirically test this hypothesis. With local land-use and trip-making data as inputs, it first calculates two common indices of land-use mixture - the entropy and dissimilarity indices - using a GIS tool, supplemented by Microsoft Excel. Afterwards, it uses the Statistical Package for the Social Sciences (SPSS) to calculate the correlation matrices among land-use mixture indices, socioeconomic variables, and home-based work/other trip rates, followed by a series of regression model runs on these variables. Through this study, it has been found that land-use mixture has some, but weak, effects on home-based work trip rate, and virtually no effect on home-based other trip rate. In contrast, socioeconomic variables, especially auto ownership, have larger effects on home-based trip making.
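The entropy index of land-use mixture computed in the study has a standard closed form; below is a minimal sketch (the dissimilarity index additionally requires cell-adjacency data from GIS and is not shown). The land-use categories and areas are illustrative:

```python
import math

def entropy_index(areas):
    """Land-use entropy index for one zone.
    areas: mapping of land-use type -> area. Returns a value in [0, 1]:
    0 for a single-use zone, 1 for a perfectly even mix of all k types."""
    total = sum(areas.values())
    k = len(areas)
    if k < 2 or total <= 0:
        return 0.0
    shares = [a / total for a in areas.values() if a > 0]
    return -sum(p * math.log(p) for p in shares) / math.log(k)

# hypothetical traffic-analysis zones with three land uses (acres)
mixed = entropy_index({"residential": 40.0, "commercial": 40.0, "industrial": 40.0})
single = entropy_index({"residential": 120.0, "commercial": 0.0, "industrial": 0.0})
```

Zone-level indices like this are what would then be correlated against trip rates and socioeconomic variables in the SPSS step the abstract describes.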

  9. Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)

    Science.gov (United States)

    2004-12-01

The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  10. Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)

    Science.gov (United States)

    2005-12-01

The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  11. Statistical analysis of long term spatial and temporal trends of ...

    Indian Academy of Sciences (India)

    Statistical analysis of long term spatial and temporal trends of temperature ... CGCM3; HadCM3; modified Mann–Kendall test; statistical analysis; Sutlej basin. ... Water Resources Systems Division, National Institute of Hydrology, Roorkee 247 ...

  12. Statistical approach to partial equilibrium analysis

    Science.gov (United States)

    Wang, Yougui; Stanley, H. E.

    2009-04-01

    A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named willingness price, is highlighted and constitutes the whole theory. The supply and demand functions are formulated as the distributions of corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of excess demand function are analyzed and the necessary conditions for the existence and uniqueness of equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
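The equilibrium condition this abstract describes (realized exchange where excess demand vanishes) can be illustrated with a toy discrete version: demand at price p is the number of buyers whose willingness price is at least p, supply is the number of sellers willing to sell at or below p, and bisection finds the crossing. This is an illustrative simplification of the paper's distribution-based formulation:

```python
def equilibrium_price(buyer_wtp, seller_wta, lo, hi, tol=1e-6):
    """Find the price where excess demand changes sign, by bisection.
    demand(p): buyers with willingness price >= p (non-increasing in p);
    supply(p): sellers with willingness price <= p (non-decreasing in p)."""
    demand = lambda p: sum(w >= p for w in buyer_wtp)
    supply = lambda p: sum(w <= p for w in seller_wta)
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if demand(mid) - supply(mid) > 0:
            lo = mid  # still excess demand: equilibrium lies above
        else:
            hi = mid  # excess supply (or balance): equilibrium lies below
    return (lo + hi) / 2.0

buyers = [10, 9, 8, 7, 6, 5]   # willingness prices of buyers
sellers = [4, 5, 6, 7, 8, 9]   # willingness prices of sellers
p_star = equilibrium_price(buyers, sellers, 0.0, 20.0)
```

With these toy willingness prices, demand and supply balance at a price of 7, where four buyers and four sellers are willing to trade.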

  13. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs, mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  14. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

A robust face pose estimation approach is proposed using a face shape statistical model, with pose parameters represented by trigonometric functions. The face shape statistical model is first built by analyzing the face shapes of different people under varying poses. The shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  15. Quantifying underreporting of law-enforcement-related deaths in United States vital statistics and news-media-based data sources: A capture-recapture analysis.

    Science.gov (United States)

    Feldman, Justin M; Gruskin, Sofia; Coull, Brent A; Krieger, Nancy

    2017-10-01

    Prior research suggests that United States governmental sources documenting the number of law-enforcement-related deaths (i.e., fatalities due to injuries inflicted by law enforcement officers) undercount these incidents. The National Vital Statistics System (NVSS), administered by the federal government and based on state death certificate data, identifies such deaths by assigning them diagnostic codes corresponding to "legal intervention" in accordance with the International Classification of Diseases-10th Revision (ICD-10). Newer, nongovernmental databases track law-enforcement-related deaths by compiling news media reports and provide an opportunity to assess the magnitude and determinants of suspected NVSS underreporting. Our a priori hypotheses were that underreporting by the NVSS would exceed that by the news media sources, and that underreporting rates would be higher for decedents of color versus white, decedents in lower versus higher income counties, decedents killed by non-firearm (e.g., Taser) versus firearm mechanisms, and deaths recorded by a medical examiner versus coroner. We created a new US-wide dataset by matching cases reported in a nongovernmental, news-media-based dataset produced by the newspaper The Guardian, The Counted, to identifiable NVSS mortality records for 2015. We conducted 2 main analyses for this cross-sectional study: (1) an estimate of the total number of deaths and the proportion unreported by each source using capture-recapture analysis and (2) an assessment of correlates of underreporting of law-enforcement-related deaths (demographic characteristics of the decedent, mechanism of death, death investigator type [medical examiner versus coroner], county median income, and county urbanicity) in the NVSS using multilevel logistic regression. We estimated that the total number of law-enforcement-related deaths in 2015 was 1,166 (95% CI: 1,153, 1,184). There were 599 deaths reported in The Counted only, 36 reported in the NVSS
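Two-source capture-recapture estimation of the kind used in this study is commonly carried out with the Chapman estimator; the following is a minimal sketch with hypothetical counts, for illustration only (they are not the study's actual margins, which the abstract quotes only in part):

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-source capture-recapture estimator.
    n1: cases found by source A; n2: cases found by source B;
    m: cases found by both. Returns the estimated total number of events,
    including those missed by both sources (assumes source independence)."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# hypothetical counts: media-based dataset, vital statistics, and overlap
n_media, n_vital, n_both = 1000, 500, 430
total = chapman_estimate(n_media, n_vital, n_both)
```

The estimated total always exceeds the simple union count (n1 + n2 - m), and the gap between the two is the estimate of deaths unreported by both sources.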

  16. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and associated p-value giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups, however in the hilus, significant (p … statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
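The descriptive side of the Fisher framework (mean direction vector and concentration) is compact enough to sketch. The estimator κ ≈ (n − 1)/(n − R) below is the standard large-concentration approximation from spherical statistics; it is an assumption of this sketch, not necessarily the exact formulation the authors used:

```python
import math
import random

def fisher_mean_and_kappa(vectors):
    """Mean direction and concentration for unit vectors on the sphere.
    R is the length of the resultant (vector sum); the Fisher concentration
    parameter is approximated by kappa = (n - 1) / (n - R)."""
    n = len(vectors)
    sx = sum(v[0] for v in vectors)
    sy = sum(v[1] for v in vectors)
    sz = sum(v[2] for v in vectors)
    R = math.sqrt(sx * sx + sy * sy + sz * sz)
    mean_dir = (sx / R, sy / R, sz / R)
    kappa = (n - 1) / (n - R)
    return mean_dir, kappa

# synthetic directions tightly clustered around the z-axis
random.seed(0)
dirs = []
for _ in range(100):
    x, y = random.gauss(0.0, 0.05), random.gauss(0.0, 0.05)
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    dirs.append((x, y, z))
mean_dir, kappa = fisher_mean_and_kappa(dirs)
```

A tightly clustered ROI yields a resultant length R close to n and hence a large κ; a nearly uniform set of directions yields R near zero and κ near 1, which is how the variance parameters in the abstract discriminate ordered from disordered tissue.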

  17. Statistical analysis of angular correlation measurements

    International Nuclear Information System (INIS)

    Oliveira, R.A.A.M. de.

    1986-01-01

Obtaining the multipole mixing ratio, δ, of γ transitions in angular correlation measurements is a statistical problem characterized by the small number of angles at which the observation is made and by the limited counting statistics, α. The nonexistence of a sufficient statistic for the estimator of δ is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed on experimental results obtained in γ-γ directional correlation measurements. (Author) [pt

  18. Statistical and machine learning approaches for network analysis

    CERN Document Server

    Dehmer, Matthias

    2012-01-01

    Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation

  19. COMPARATIVE STATISTICAL ANALYSIS OF GENOTYPES’ COMBINING

    Directory of Open Access Journals (Sweden)

    V. Z. Stetsyuk

    2015-05-01

The work provides for the creation of a desktop program complex for statistical calculations on a doctor's personal computer. Modern methods and tools for the development of information systems used to create the program are described.

  20. Characterization and discrimination of raw and vinegar-baked Bupleuri radix based on UHPLC-Q-TOF-MS coupled with multivariate statistical analysis.

    Science.gov (United States)

    Lei, Tianli; Chen, Shifeng; Wang, Kai; Zhang, Dandan; Dong, Lin; Lv, Chongning; Wang, Jing; Lu, Jincai

    2018-02-01

Bupleuri Radix is a commonly used herb in the clinic, and both raw and vinegar-baked Bupleuri Radix are documented in the Pharmacopoeia of the People's Republic of China. According to the theories of traditional Chinese medicine, Bupleuri Radix possesses different therapeutic effects before and after processing. However, the chemical mechanism of this processing is still unknown. In this study, ultra-high-performance liquid chromatography with quadrupole time-of-flight mass spectrometry coupled with multivariate statistical analysis, including principal component analysis and orthogonal partial least squares discriminant analysis, was developed to holistically compare the difference between raw and vinegar-baked Bupleuri Radix for the first time. As a result, 50 peaks were detected in raw and processed Bupleuri Radix, respectively, and a total of 49 chemical compounds were identified. Saikosaponin a, saikosaponin d, saikosaponin b3, saikosaponin e, saikosaponin c, saikosaponin b2, saikosaponin b1, 4''-O-acetyl-saikosaponin d, hyperoside and 3',4'-dimethoxy quercetin were identified as potential markers of raw and vinegar-baked Bupleuri Radix. This study has been successfully applied to the global analysis of raw and vinegar-processed samples. Furthermore, the underlying hepatoprotective mechanism of Bupleuri Radix was predicted, which was related to the changes in chemical profiling. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview of the general principles of statistical analysis of research data. The first day will feature univariate data analysis, including descriptive statistics, probability distributions, and one- and two-sample inferential statistics.

  2. A statistical analysis of UK financial networks

    Science.gov (United States)

    Chu, J.; Nadarajah, S.

    2017-04-01

    In recent years, with a growing interest in big or large datasets, there has been a rise in the application of large graphs and networks to financial big data. Much of this research has focused on the construction and analysis of the network structure of stock markets, based on the relationships between stock prices. Motivated by Boginski et al. (2005), who studied the characteristics of a network structure of the US stock market, we construct network graphs of the UK stock market using the same method. We fit four distributions to the degree density of the vertices from these graphs, the Pareto I, Fréchet, lognormal, and generalised Pareto distributions, and assess the goodness of fit. Our results show that the degree density of the complements of the market graphs, constructed using a negative threshold value close to zero, can be fitted well with the Fréchet and lognormal distributions.
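    The fit-and-assess step can be sketched with SciPy on simulated degree data; the lognormal parameters and sample size here are assumptions for illustration, not the UK market-graph values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical vertex-degree sample standing in for a market graph's degree
# density; the paper fits Pareto I, Frechet, lognormal and generalised
# Pareto distributions and then checks goodness of fit.
degrees = stats.lognorm.rvs(s=0.6, scale=30.0, size=500, random_state=rng)

# Fit a lognormal with location fixed at 0, as suits positive degree data.
shape, loc, scale = stats.lognorm.fit(degrees, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
ks_stat, p_value = stats.kstest(degrees, "lognorm", args=(shape, loc, scale))
print(f"fitted shape={shape:.2f}, scale={scale:.1f}, KS p={p_value:.3f}")
```

Note that a KS test with parameters estimated from the same sample is anti-conservative; a stricter assessment would bootstrap the test statistic.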

  3. On exploiting wavelet bases in statistical region-based segmentation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren

    2002-01-01

    Statistical region-based segmentation methods such as the Active Appearance Models establish dense correspondences by modelling variation of shape and pixel intensities in low-resolution 2D images. Unfortunately, for high-resolution 2D and 3D images, this approach is rendered infeasible due to ex...... 9-7 wavelet on cardiac MRIs and human faces show that the segmentation accuracy is minimally degraded at compression ratios of 1:10 and 1:20, respectively....

  4. Statistical network analysis for analyzing policy networks

    DEFF Research Database (Denmark)

    Robins, Garry; Lewis, Jenny; Wang, Peng

    2012-01-01

    To analyze social network data using standard statistical approaches is to risk incorrect inference. The dependencies among observations implied in a network conceptualization undermine standard assumptions of the usual general linear models. One of the most quickly expanding areas of social and policy network methodology is the development of statistical modeling approaches that can accommodate such dependent data. In this article, we review three network statistical methods commonly used in the current literature: quadratic assignment procedures, exponential random graph models (ERGMs), and stochastic actor-oriented models. We focus most attention on ERGMs by providing an illustrative example of a model for a strategic information network within a local government. We draw inferences about the structural role played by individuals recognized as key innovators and conclude that such an approach...

  5. Numerical and statistical based damage analysis by the example of kiln elements; Numerisch und statistisch gestuetzte Schadensanalyse am Beispiel von Brennofenelementen

    Energy Technology Data Exchange (ETDEWEB)

    Reinert, U. [DEKRA-ETS Europaeische Gesellschaft fuer Technische Sicherheit mbH, Saarbruecken (Germany); Klaer, P. [Saar-Hartmetall und Werkzeuge GmbH, Voelklingen (Germany)

    1999-02-01

    The nonoxidic ceramics (for example SiC), used as a supporting structure in kilns, are subject to instantaneous damage, which as a rule is accompanied by high costs. Of interest in this context is the question whether a satisfactory statement can be made about the increase of damage frequency as a function of the firing temperature, the firing time, the geometry of the component, the material properties and the loading. Knowledge of these correlations provides the user with the possibility of minimizing the costs of damage by means of suitable selection of material and timely replacement of the components. The procedure presented in the following is based on the combination of numerical studies with statistically evaluated experiments. (orig.)

  6. Neutron activation and statistical analysis of pottery from Thera, Greece

    International Nuclear Information System (INIS)

    Kilikoglou, V.; Grimanis, A.P.; Karayannis, M.I.

    1990-01-01

    Neutron activation analysis, in combination with multivariate analysis of the generated data, was used for the chemical characterization of prehistoric pottery from the Greek islands of Thera, Melos (islands with similar geology) and Crete. The statistical procedure which proved that Theran pottery could be distinguished from Melian is described. This discrimination, attained for the first time, was mainly based on the concentrations of the trace elements Sm, Yb, Lu and Cr. Also, Cretan imports to both Thera and Melos were clearly separable from local products. (author) 22 refs.; 1 fig.; 4 tabs

  7. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  8. A statistical test for outlier identification in data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-09-01

    Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these "deterministic" frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
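    The pared-sample (leave-one-out) idea can be illustrated without solving the full DEA linear program: the sketch below substitutes a simple single-input/single-output output-to-input ratio efficiency for the LP, on invented data, to show how deleting each observation in turn yields a distribution of efficiency estimates in which a frontier outlier stands out:

```python
import numpy as np

# Toy single-input/single-output "frontier": efficiency of unit i is its
# output/input ratio divided by the best ratio in the sample. This ratio
# stands in for the DEA linear program solved in the paper.
x = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 1.0])    # inputs
y = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 9.0])  # outputs; last unit extreme

ratios = y / x
n = len(ratios)

# Pared-sample step: delete each observation once and recompute the
# remaining units' mean efficiency against the reduced frontier.
loo_mean_eff = np.empty(n)
for i in range(n):
    rest = np.delete(ratios, i)
    loo_mean_eff[i] = np.mean(rest / rest.max())

# Deleting a genuine frontier outlier raises everyone else's efficiency
# sharply, so its leave-one-out mean sits far from the rest.
shift = loo_mean_eff - np.median(loo_mean_eff)
suspect = int(np.argmax(shift))
print("suspected outlier index:", suspect)  # → 5
```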

  9. Statistical analysis of the determinations of the Sun's Galactocentric distance

    Science.gov (United States)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
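    The metrological combination the paper describes amounts to an error-weighted mean plus a trend test on the time series of estimates. A minimal sketch on invented R0-like measurements (the values, years and formal errors are assumptions, not the published determinations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical set of R0 determinations (kpc) with formal errors, standing
# in for the 53 published measurements analysed in the paper.
years = np.arange(1993, 2013, dtype=float)
sigma = np.linspace(0.8, 0.3, years.size)   # formal errors improve with time
r0 = 8.0 + rng.normal(0.0, sigma)           # no real trend built in

# Error-weighted mean, the usual metrological combination.
w = 1.0 / sigma**2
r0_mean = np.sum(w * r0) / np.sum(w)

# Trend test: slope of R0 against publication year; a statistically
# negligible slope argues against a bandwagon effect.
res = stats.linregress(years, r0)
print(f"weighted mean R0 = {r0_mean:.2f} kpc, trend p = {res.pvalue:.2f}")
```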

  10. Perceptual and statistical analysis of cardiac phase and amplitude images

    International Nuclear Information System (INIS)

    Houston, A.; Craig, A.

    1991-01-01

    A perceptual experiment was conducted using cardiac phase and amplitude images. Estimates of statistical parameters were derived from the images, and the diagnostic potential of human and statistical decisions was compared. Five methods were used to generate the images from 75 gated cardiac studies, 39 of which were classified as pathological. The images were presented to 12 observers experienced in nuclear medicine. The observers rated the images using a five-category scale based on their confidence that an abnormality was present. Circular and linear statistics were used to analyse phase and amplitude image data, respectively. Estimates of mean, standard deviation (SD), skewness, kurtosis and the first term of the spatial correlation function were evaluated in the region of the left ventricle. A receiver operating characteristic analysis was performed on both sets of data and the human and statistical decisions compared. For phase images, circular SD was shown to discriminate better between normal and abnormal than experienced observers, but no single statistic discriminated as well as the human observer for amplitude images. (orig.)
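    The ROC comparison of a single discriminating statistic can be sketched as follows; the score distributions are invented (not the study's circular-SD values), and the area under the curve is computed via its Mann-Whitney interpretation as the probability that a random abnormal score exceeds a random normal one:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical circular-SD scores in the left ventricle: pathological
# studies tend to score higher (39 abnormal, 36 normal, matching the
# 75 gated studies in the paper).
normal = rng.normal(20.0, 5.0, size=36)
abnormal = rng.normal(30.0, 5.0, size=39)

# AUC = P(abnormal score > normal score), i.e. the Mann-Whitney U
# statistic rescaled by the number of pairs; ties count one half.
diff = abnormal[:, None] - normal[None, :]
auc = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size
print(f"ROC area under curve: {auc:.3f}")
```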

  11. Statistical analysis of hydrodynamic cavitation events

    Science.gov (United States)

    Gimenez, G.; Sommer, R.

    1980-10-01

    The frequency (number of events per unit time) of pressure pulses produced by hydrodynamic cavitation bubble collapses is investigated using statistical methods. The results indicate that this frequency is distributed according to a normal law, its parameters not being time-evolving.

  12. Statistical analysis of lineaments of Goa, India

    Digital Repository Service at National Institute of Oceanography (India)

    Iyer, S.D.; Banerjee, G.; Wagle, B.G.

    statistically to obtain the nonlinear pattern in the form of a cosine wave. Three distinct peaks were found at azimuths of 40-45 degrees, 90-95 degrees and 140-145 degrees, which have peak values of 5.85, 6.80 respectively. These three peaks are correlated...

  13. On statistical analysis of compound point process

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2006-01-01

    Roč. 35, 2-3 (2006), s. 389-396 ISSN 1026-597X R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * hazard function * Cox model Subject RIV: BB - Applied Statistics, Operational Research

  14. Statistical Analysis Of Reconnaissance Geochemical Data From ...

    African Journals Online (AJOL)

    , Co, Mo, Hg, Sb, Tl, Sc, Cr, Ni, La, W, V, U, Th, Bi, Sr and Ga in 56 stream sediment samples collected from Orle drainage system were subjected to univariate and multivariate statistical analyses. The univariate methods used include ...

  15. Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)

    Science.gov (United States)

    Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee

    2010-12-01

    Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review

  16. Statistical Analysis of Radio Propagation Channel in Ruins Environment

    Directory of Open Access Journals (Sweden)

    Jiao He

    2015-01-01

    Full Text Available The cellphone based localization system for search and rescue in complex high density ruins has attracted a great interest in recent years, where the radio channel characteristics are critical for design and development of such a system. This paper presents a spatial smoothing estimation via rotational invariance technique (SS-ESPRIT for radio channel characterization of high density ruins. The radio propagations at three typical mobile communication bands (0.9, 1.8, and 2 GHz are investigated in two different scenarios. Channel parameters, such as arrival time, delays, and complex amplitudes, are statistically analyzed. Furthermore, a channel simulator is built based on these statistics. By comparison analysis of average excess delay and delay spread, the validation results show a good agreement between the measurements and channel modeling results.
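    The channel statistics mentioned (average excess delay and delay spread) follow directly from a power-delay profile as power-weighted first and second moments of the tap delays. A sketch with an invented profile (the delays and powers are assumptions, not the measured ruins-channel estimates):

```python
import numpy as np

# Hypothetical power-delay profile of a multipath channel: tap delays (ns)
# and relative tap powers, standing in for the SS-ESPRIT estimates.
delays = np.array([0.0, 50.0, 120.0, 300.0])   # ns
powers = np.array([1.0, 0.5, 0.2, 0.05])       # linear relative power

# Mean excess delay: power-weighted average of the tap delays.
p = powers / powers.sum()
mean_excess = np.sum(p * delays)

# RMS delay spread: power-weighted standard deviation about that mean.
rms_spread = np.sqrt(np.sum(p * (delays - mean_excess) ** 2))
print(f"mean excess delay = {mean_excess:.1f} ns, "
      f"RMS spread = {rms_spread:.1f} ns")
```

Comparing these two moments between measurement and simulation is a standard way to validate a channel simulator, as done in the paper.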

  17. Software for statistical data analysis used in Higgs searches

    International Nuclear Information System (INIS)

    Gumpert, Christian; Moneta, Lorenzo; Cranmer, Kyle; Kreiss, Sven; Verkerke, Wouter

    2014-01-01

    The analysis and interpretation of data collected by the Large Hadron Collider (LHC) requires advanced statistical tools in order to quantify the agreement between observation and theoretical models. RooStats is a project providing a statistical framework for data analysis with the focus on discoveries, confidence intervals and combination of different measurements in both Bayesian and frequentist approaches. It employs the RooFit data modelling language where mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. RooStats and RooFit rely on the persistency technology of the ROOT framework. The usage of a common data format enables the concept of digital publishing of complicated likelihood functions. The statistical tools have been developed in close collaboration with the LHC experiments to ensure their applicability to real-life use cases. Numerous physics results have been produced using the RooStats tools, with the discovery of the Higgs boson by the ATLAS and CMS experiments being certainly the most popular among them. We will discuss tools currently used by LHC experiments to set exclusion limits, to derive confidence intervals and to estimate discovery significances based on frequentist statistics and the asymptotic behaviour of likelihood functions. Furthermore, new developments in RooStats and performance optimisation necessary to cope with complex models depending on more than 1000 variables will be reviewed

  18. Statistical analysis of medical data using SAS

    CERN Document Server

    Der, Geoff

    2005-01-01

    An Introduction to SAS; Describing and Summarizing Data; Basic Inference; Scatterplots, Correlation, Simple Regression and Smoothing; Analysis of Variance and Covariance; Multiple Regression; Logistic Regression; The Generalized Linear Model; Generalized Additive Models; Nonlinear Regression Models; The Analysis of Longitudinal Data I; The Analysis of Longitudinal Data II: Models for Normal Response Variables; The Analysis of Longitudinal Data III: Non-Normal Response; Survival Analysis; Analysis of Multivariate Data: Principal Components and Cluster Analysis; References

  19. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  20. Common misconceptions about data analysis and statistics.

    Science.gov (United States)

    Motulsky, Harvey J

    2014-11-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1. P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. 2. Overemphasis on P values rather than on the actual size of the observed effect. 3. Overuse of statistical hypothesis testing, and being seduced by the word "significant". 4. Overreliance on standard errors, which are often misunderstood.

  1. Common misconceptions about data analysis and statistics.

    Science.gov (United States)

    Motulsky, Harvey J

    2015-02-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word "significant". (4) Overreliance on standard errors, which are often misunderstood.
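    Mistake (1), reanalyzing with additional replicates until significance, can be demonstrated by simulation. The sketch below (an illustration, not from the paper) draws both groups from the same distribution, so every "significant" result is a false positive; peeking after each batch of replicates inflates the false positive rate well above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def peeking_trial():
    # Start with n = 10 per group from the SAME distribution (no real
    # effect), then add 5 replicates per group and re-test, stopping as
    # soon as p < 0.05 or n reaches 50.
    a = list(rng.normal(0, 1, 10))
    b = list(rng.normal(0, 1, 10))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05 or len(a) >= 50:
            return p < 0.05
        a.extend(rng.normal(0, 1, 5))
        b.extend(rng.normal(0, 1, 5))

false_positive_rate = np.mean([peeking_trial() for _ in range(500)])
print(f"false positive rate with peeking: {false_positive_rate:.2f}")
```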

  2. Statistical analysis of radioactivity in the environment

    International Nuclear Information System (INIS)

    Barnes, M.G.; Giacomini, J.J.

    1980-05-01

    The pattern of radioactivity in surface soils of Area 5 of the Nevada Test Site is analyzed statistically by means of kriging. The 1962 event code-named Smallboy affected the greatest proportion of the area sampled, but some of the area was also affected by a number of other events. The data for this study were collected on a regular grid to take advantage of the efficiency of grid sampling.

  3. Advanced statistical analysis of laser-induced breakdown spectroscopy data to discriminate sedimentary rocks based on Czerny–Turner and Echelle spectrometers

    International Nuclear Information System (INIS)

    Zhu, Xiaoqin; Xu, Tao; Lin, Qingyu; Liang, Long; Niu, Guanghui; Lai, Hongjun; Xu, Mingjun; Wang, Xu; Li, Hua; Duan, Yixiang

    2014-01-01

    The correct identification of rock types is critical for understanding the origins and history of any particular rock body. Laser-induced breakdown spectroscopy (LIBS) has developed into an excellent analytical tool for geological materials research because of its numerous technical advantages compared with traditional methods. The coupling of LIBS with advanced multivariate analysis has received increasing attention because it facilitates the rapid processing of spectral information to differentiate and classify samples. In this study, we collected LIBS datasets for 16 sedimentary rocks from Triassic strata in Sichuan Basin. We compared the performance of two types of spectrometers (Czerny–Turner and Echelle) for classification of rocks using two advanced multivariate statistical techniques, i.e., partial least squares discriminant analysis (PLS-DA) and support vector machines (SVMs). Comparable levels of performance were achievable when using the two systems in the best signal reception conditions. Our results also suggest that SVM outperformed PLS-DA in classification performance. Then, we compared the results obtained when using pre-selected wavelength variables and broadband LIBS spectra as variable inputs. They provided approximately equivalent levels of performance. In addition, the rock slab samples were also analyzed directly after being polished. This minimized the analysis time greatly and showed improvement of classification performance compared with the pressed pellets. - Highlights: • SVM and PLS-DA were compared using two spectrometers to classify sedimentary rocks. • SVM combined with LIBS improved the classification accuracy compared with PLS-DA. • Minimal difference using pre-selected and broadband spectra as variable inputs • Improved classification performance achievable using polished rock slab samples

  4. Voxel-based statistical analysis of cerebral blood flow using Tc-99m ECD brain SPECT in patients with traumatic brain injury: group and individual analyses.

    Science.gov (United States)

    Shin, Yong Beom; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Jae Heung; Yeom, Seok-Ran

    2006-06-01

    Statistical parametric mapping (SPM) was applied to brain perfusion single photon emission computed tomography (SPECT) images in patients with traumatic brain injury (TBI) to investigate regional cerebral abnormalities compared to age-matched normal controls. Thirteen patients with TBI who underwent brain perfusion SPECT were included in this study (10 males, 3 females, mean age 39.8 +/- 18.2, range 21 - 74). SPM2 software implemented in MATLAB 5.3 was used for spatial pre-processing and analysis and to determine the quantitative differences between TBI patients and age-matched normal controls. Three large voxel clusters of significantly decreased cerebral blood perfusion were found in patients with TBI. The largest cluster covered areas including the medial frontal gyrus (voxel number 3642, peak Z-value = 4.31, 4.27, p = 0.000) in both hemispheres. The second largest cluster covered areas including the cingulate gyrus and anterior cingulate gyrus of the left hemisphere (voxel number 381, peak Z-value = 3.67, 3.62, p = 0.000). Other clusters were the parahippocampal gyrus (voxel number 173, peak Z-value = 3.40, p = 0.000) and hippocampus (voxel number 173, peak Z-value = 3.23, p = 0.001) in the left hemisphere. The false discovery rate (FDR) was less than 0.04. From this study, group and individual analyses with SPM2 could clearly identify the perfusion abnormalities of brain SPECT in patients with TBI. Group analysis with SPM2 showed a hypoperfusion pattern in the areas including the medial frontal gyrus of both hemispheres and the cingulate gyrus, anterior cingulate gyrus, parahippocampal gyrus and hippocampus of the left hemisphere compared to age-matched normal controls. However, these findings deserve further investigation in a larger number of patients to allow a better validation of objective SPM analysis in patients with TBI.

  5. Critical analysis of adsorption data statistically

    Science.gov (United States)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of the experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ2 showed the results in favour of the data collected from the experiment, and this has been shown on probability charts. The K value obtained for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725, both for adsorption on mango leaf powder.
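    The hypothesis tests named above can be sketched with SciPy on invented before/after concentrations (not the paper's measurements): a paired t test for whether adsorption lowered the zinc concentration, and a chi-square test comparing observed per-batch removal with the removal expected if every batch performed at the mean rate.

```python
import numpy as np
from scipy import stats

# Hypothetical zinc-ion concentrations (mg/L) before and after adsorption
# on mango leaf powder, for eight batches.
before = np.array([25.1, 24.8, 25.5, 24.9, 25.2, 25.0, 24.7, 25.3])
after = np.array([6.2, 5.9, 6.5, 6.1, 6.0, 6.3, 5.8, 6.4])

# Paired t test: did adsorption significantly lower the concentration?
t_stat, p_paired = stats.ttest_rel(before, after)

# Chi-square test of observed removal per batch against uniform expected
# removal (expected sums match observed sums by construction).
removed = before - after
expected = np.full_like(removed, removed.mean())
chi2, p_chi2 = stats.chisquare(removed, expected)
print(f"paired t p = {p_paired:.2e}, chi-square p = {p_chi2:.3f}")
```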

  6. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  7. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  8. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  9. Statistical learning methods in high-energy and astrophysics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2004-11-21

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  10. Statistical learning methods in high-energy and astrophysics analysis

    International Nuclear Information System (INIS)

    Zimmermann, J.; Kiesling, C.

    2004-01-01

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application

  11. Statistical analysis of partial reduced width distributions

    International Nuclear Information System (INIS)

    Tran Quoc Thuong.

    1973-01-01

    The aim of this study was to develop rigorous methods for analysing experimental event distributions according to a chi-square law and to check whether the number of degrees of freedom ν is compatible with the value 1 for the reduced neutron width distribution. Two statistical methods were used (the maximum-likelihood method and the method of moments); it was shown, in a few particular cases, that ν is compatible with 1. The difference between ν and 1, if it exists, should not exceed 3%. These results confirm the validity of the compound nucleus model [fr
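    The maximum-likelihood side of such an analysis can be sketched as follows, assuming unit-mean reduced widths simulated from the Porter-Thomas (ν = 1) case; the sample size and normalisation are illustrative assumptions:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(5)

# Hypothetical reduced neutron widths drawn from the Porter-Thomas case
# (chi-square with nu = 1), normalised to unit mean.
nu_true = 1.0
widths = stats.chi2.rvs(df=nu_true, size=400, random_state=rng) / nu_true

# For unit-mean widths w, the model is w * nu ~ chi2(nu); maximise the
# corresponding log-likelihood over nu (the Jacobian contributes log nu).
def neg_loglik(nu):
    return -np.sum(stats.chi2.logpdf(widths * nu, df=nu) + np.log(nu))

res = optimize.minimize_scalar(neg_loglik, bounds=(0.2, 5.0),
                               method="bounded")
nu_hat = res.x
print(f"maximum-likelihood nu = {nu_hat:.2f}")
```

A method-of-moments estimate (from the sample variance of the unit-mean widths, since Var = 2/ν) would give an independent cross-check, as in the study.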

  12. Statistical analysis of random duration times

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-04-01

    This report presents basic statistical methods for analyzing data obtained by observing random time durations. It gives nonparametric estimates of the cumulative distribution function, reliability function and cumulative hazard function. These results can be applied with either complete or censored data. Several models commonly used with time data are discussed, along with methods for model checking and goodness-of-fit testing. Maximum likelihood estimates and confidence limits are given for the various models considered. Some results for situations involving repeated durations, such as repairable systems, are also discussed.

  13. Statistical analysis of random pulse trains

    International Nuclear Information System (INIS)

    Da Costa, G.

    1977-02-01

    Some experimental and theoretical results concerning the statistical properties of optical beams formed by a finite number of independent pulses are presented. The considered waves (corresponding to each pulse) present important spatial variations of the illumination distribution in a cross-section of the beam, due to the time-varying random refractive index distribution in the active medium. Some examples of this kind of emission are: (a) Free-running ruby laser emission; (b) Mode-locked pulse trains; (c) Randomly excited nonlinear media

  14. Statistical analysis of dragline monitoring data

    Energy Technology Data Exchange (ETDEWEB)

    Mirabediny, H.; Baafi, E.Y. [University of Tehran, Tehran (Iran)

    1998-07-01

    Dragline monitoring systems are normally the best tool used to collect data on the machine performance and operational parameters of a dragline operation. This paper discusses results of a time study using data from a dragline monitoring system captured over a four month period. Statistical summaries of the time study in terms of average values, standard deviation and frequency distributions showed that the mode of operation and the geological conditions have a significant influence on the dragline performance parameters. 6 refs., 14 figs., 3 tabs.

  15. Is Cognitive Activity of Speech Based On Statistical Independence?

    DEFF Research Database (Denmark)

    Feng, Ling; Hansen, Lars Kai

    2008-01-01

    This paper explores the generality of COgnitive Component Analysis (COCA), which is defined as the process of unsupervised grouping of data such that the ensuing group structure is well aligned with that resulting from human cognitive activity. The hypothesis of COCA is ecological: the essentially independent features in a context-defined ensemble can be efficiently coded using a sparse independent component representation. Our protocol compares the performance of supervised learning (invoking cognitive activity) and unsupervised learning (statistical regularities) based on similar representations, where the only difference lies in the human-inferred labels. Inspired by previous research on COCA, we introduce a new pair of models which directly employ the independence hypothesis. Statistical regularities are revealed at multiple time scales on phoneme, gender, age…

  16. Instruction of Statistics via Computer-Based Tools: Effects on Statistics' Anxiety, Attitude, and Achievement

    Science.gov (United States)

    Ciftci, S. Koza; Karadag, Engin; Akdal, Pinar

    2014-01-01

    The purpose of this study was to determine the effect of statistics instruction using computer-based tools, on statistics anxiety, attitude, and achievement. This study was designed as quasi-experimental research and the pattern used was a matched pre-test/post-test with control group design. Data was collected using three scales: a Statistics…

  17. Analysis of Random-Loading HTR-PROTEUS Cores with Continuous Energy Monte Carlo Code Based on A Statistical Geometry Model

    International Nuclear Information System (INIS)

    Murata, Isao; Miyamaru, Hiroyuki

    2008-01-01

    Spherical elements have remarkable features in various applications in the nuclear engineering field. In the 1990s, various pebble bed reactor experiments, including cores with many spherical fuel elements loaded randomly, were conducted as part of the HTR-PROTEUS project at PSI. In this study, criticality experiments of the random-loading HTR-PROTEUS cores were analyzed with MCNP-BALL, which can treat a random arrangement of spherical fuel elements exactly using a statistical geometry model. The calculated effective multiplication factors were in fairly good agreement with the measurements, within about 0.5% Δk/k. In comparison with other numerical analyses, our effective multiplication factors fell between the experimental values and the VSOP calculations. To investigate the discrepancy between the measured and calculated effective multiplication factors, sensitivity analyses were performed; the sensitivity to the impurity boron concentration was found to be fairly large. The reason for the present slight overestimation has not yet been established, but the remaining difference is thought to be related to the impurity boron concentration rather than to the modelling of the reactor or the nuclear data used. This study confirmed that MCNP-BALL has an advantage over conventional transport codes, based on a comparison of their numerical results with the experimental values. As for the PROTEUS criticality experiments, the two cores, Core 4.2 and Core 4.3, can be regarded as an equivalent experiment to a reference critical core packed at the RLP packing fraction. (authors)

  18. Analysis of Random-Loading HTR-PROTEUS Cores with Continuous Energy Monte Carlo Code Based on A Statistical Geometry Model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao; Miyamaru, Hiroyuki [Division of Electrical, Electronic and Information Engineering, Osaka University, Yamada-oka 2-1, Suita, Osaka, 565-0871 (Japan)

    2008-07-01

    Spherical elements have remarkable features in various applications in the nuclear engineering field. In the 1990s, various pebble bed reactor experiments, including cores with many spherical fuel elements loaded randomly, were conducted as part of the HTR-PROTEUS project at PSI. In this study, criticality experiments of the random-loading HTR-PROTEUS cores were analyzed with MCNP-BALL, which can treat a random arrangement of spherical fuel elements exactly using a statistical geometry model. The calculated effective multiplication factors were in fairly good agreement with the measurements, within about 0.5% Δk/k. In comparison with other numerical analyses, our effective multiplication factors fell between the experimental values and the VSOP calculations. To investigate the discrepancy between the measured and calculated effective multiplication factors, sensitivity analyses were performed; the sensitivity to the impurity boron concentration was found to be fairly large. The reason for the present slight overestimation has not yet been established, but the remaining difference is thought to be related to the impurity boron concentration rather than to the modelling of the reactor or the nuclear data used. This study confirmed that MCNP-BALL has an advantage over conventional transport codes, based on a comparison of their numerical results with the experimental values. As for the PROTEUS criticality experiments, the two cores, Core 4.2 and Core 4.3, can be regarded as an equivalent experiment to a reference critical core packed at the RLP packing fraction. (authors)

  19. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    Science.gov (United States)

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
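    Of the four statistics compared, the pairwise phase-comparison idea shared by Tau-U and IRD-style nonoverlap indices is easy to illustrate. A minimal sketch of the basic A-versus-B Tau (without Tau-U's baseline-trend correction), on invented single-case data:

```python
def tau_nonoverlap(baseline, treatment):
    """All pairwise A-vs-B comparisons:
    (improving pairs - deteriorating pairs) / (nA * nB); ties count as zero."""
    pos = neg = 0
    for a in baseline:
        for b in treatment:
            if b > a:
                pos += 1
            elif b < a:
                neg += 1
    return (pos - neg) / (len(baseline) * len(treatment))

baseline  = [3, 4, 4, 5]     # hypothetical baseline-phase observations
treatment = [6, 7, 5, 8, 9]  # hypothetical treatment-phase observations
print(tau_nonoverlap(baseline, treatment))  # prints 0.95
```

    Values near 1 indicate near-complete nonoverlap between phases; dichotomizing such raw values into a yes/no judgment about a functional relation is where the study reports weaker agreement with visual analysis.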

  20. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Estimating parameters with confidence intervals and testing statistical hypotheses are both used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained from confidence intervals and hypothesis tests. Whereas statistical hypothesis testing gives only a "yes" or "no" answer to the question posed, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and qualifies findings as "marginally significant" or "almost significant" (p very close to 0.05).
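    The contrast can be made concrete with a small z-based example (illustrative numbers, not from the case study): for a small sample, the test merely fails to reject, while the confidence interval makes the remaining uncertainty visible.

```python
import math

def mean_ci_and_p(xs, mu0, z_crit=1.96):
    """Sample mean, approximate 95% CI, and two-sided p-value for H0: mu = mu0.

    Uses the normal (z) approximation throughout to stay dependency-free;
    for a sample this small a t-based analysis would be more appropriate.
    """
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    se = s / math.sqrt(n)
    ci = (m - z_crit * se, m + z_crit * se)
    z = (m - mu0) / se
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return m, ci, p

m, ci, p = mean_ci_and_p([102, 98, 110, 95, 107, 104, 99, 101], mu0=100)
print(f"mean={m:.1f}  95% CI=({ci[0]:.1f}, {ci[1]:.1f})  p={p:.3f}")
```

    The test alone reports only "not significant"; the width of the interval shows how little a sample of eight observations can actually rule out.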

  1. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    Science.gov (United States)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize the normal, deficit, excess and seasonal rainfall of the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return periods of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and plan storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability; from the results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The analysis also paved the way to determine the proper onset and withdrawal of the monsoon, which is used for land preparation and sowing.
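    The most common of the plotting position formulae referred to above is the Weibull formula, P = m/(n+1), where m is the rank of an annual total in descending order. A minimal sketch on illustrative annual totals (not the Trichy data):

```python
def return_periods(annual_totals):
    """Weibull plotting position: rank the annual totals in descending
    order; exceedance probability P = m/(n+1), return period T = 1/P."""
    n = len(annual_totals)
    ranked = sorted(annual_totals, reverse=True)
    return [(x, (n + 1) / m) for m, x in enumerate(ranked, start=1)]

for total_mm, t_years in return_periods([820, 640, 910, 700, 760]):
    print(f"{total_mm} mm  ~ {t_years:.1f}-year return period")
```

    Other plotting positions (e.g. Hazen, Gringorten) differ only in the constants of the general form P = (m - a)/(n + 1 - 2a).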

  2. Incorporating Code-Based Software in an Introductory Statistics Course

    Science.gov (United States)

    Doehler, Kirsten; Taylor, Laura

    2015-01-01

    This article is based on the experiences of two statistics professors who have taught students to write and effectively utilize code-based software in a college-level introductory statistics course. Advantages of using software and code-based software in this context are discussed. Suggestions are made on how to ease students into using code with…

  3. Statistical wind analysis for near-space applications

    Science.gov (United States)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18–30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not previously been applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as the mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or “knee” in the wind between 65 000 and 72 000 ft AGL (20–22 km). The standard statistics showed substantial seasonal variation in the mean wind speed at these heights at both locations. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were made using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data; the 2004 distribution was a reasonable model for these years as well. Lastly, the Weibull distribution and its cumulative distribution function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
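    The 50%, 95%, and 99% winds follow directly from the Weibull inverse CDF, v_p = λ(−ln(1−p))^(1/k). A minimal sketch with hypothetical shape and scale values (the paper's fitted parameters are not reproduced here):

```python
import math

def weibull_quantile(p, shape_k, scale_lam):
    """Wind speed not exceeded with probability p, for a Weibull fit."""
    return scale_lam * (-math.log(1.0 - p)) ** (1.0 / shape_k)

k, lam = 1.6, 12.0  # hypothetical shape and scale (m/s) for one month/altitude
for p in (0.50, 0.95, 0.99):
    print(f"{p:.0%} wind: {weibull_quantile(p, k, lam):.1f} m/s")
```

    Comparing the 95% or 99% quantile with the mean plus one standard deviation shows why mean-and-spread summaries alone can understate design conditions for a skewed wind distribution.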

  4. Noise removing in encrypted color images by statistical analysis

    Science.gov (United States)

    Islam, N.; Puech, W.

    2012-03-01

    Cryptographic techniques are used to secure confidential data from unauthorized access, but they are very sensitive to noise: a single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted with the AES algorithm in CBC mode. To remove the noise, a method is proposed which is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal in visual data in the encrypted domain.

  5. Statistical methods for data analysis in particle physics

    CERN Document Server

    AUTHOR|(CDS)2070643

    2015-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as a reminder of advanced undergraduate studies, and also to clearly distinguish the frequentist versus Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits, as many applications in HEP concern hypothesis testing, where often the main goal is to provide better and better limits so as to eventually distinguish between competing hypotheses or rule some of them out altogether. Many worked examples help newcomers to the field and graduate students understand the pitfalls of applying theoretical concepts to actual data.

  6. Statistical models for competing risk analysis

    International Nuclear Information System (INIS)

    Sather, H.N.

    1976-08-01

    Research results are presented on three new models for potential application to competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structures in two homogeneous populations, P1 and P2, using competing risks theory. Weibull models, which allow more generality than the Berkson and Elveback models, are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple-failure-mode domain of competing risks; the methodology is illustrated with a life table model that has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined.
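    A life table model with constant hazards in pre-designated intervals is fitted by a simple maximum likelihood rule: within each interval, the hazard estimate is the number of events divided by the total exposure time. A minimal single-risk sketch on invented data (the report's models additionally handle multiple failure modes and covariates):

```python
def piecewise_constant_hazards(intervals, times, observed):
    """MLE of a constant hazard in each pre-designated interval:
    lambda_j = (events in interval j) / (total exposure time in interval j)."""
    rates = []
    for lo, hi in intervals:
        events = 0
        exposure = 0.0
        for t, is_event in zip(times, observed):
            if t > lo:                       # subject was still at risk at lo
                exposure += min(t, hi) - lo  # time spent inside [lo, hi)
                if is_event and lo < t <= hi:
                    events += 1
        rates.append(events / exposure if exposure > 0 else 0.0)
    return rates

intervals = [(0, 5), (5, 10)]
times     = [2, 4, 6, 8, 10]                  # observed durations
observed  = [True, False, True, True, False]  # False = censored
print(piecewise_constant_hazards(intervals, times, observed))
```

    Applying the same rule separately to each failure cause gives the cause-specific hazards used in competing risks life tables.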

  7. Statistical analysis of earthquake ground motion parameters

    International Nuclear Information System (INIS)

    1979-12-01

    Several earthquake ground response parameters that define the strength, duration, and frequency content of the motions are investigated using regression analysis techniques; these techniques incorporate statistical significance testing to establish the terms in the regression equations. The parameters investigated are the peak acceleration, velocity, and displacement; Arias intensity; spectrum intensity; bracketed duration; Trifunac-Brady duration; and response spectral amplitudes. The study provides insight into how these parameters are affected by magnitude, epicentral distance, local site conditions, direction of motion (i.e., whether horizontal or vertical), and earthquake event type. The results are presented in a form that facilitates their use in the development of seismic input criteria for nuclear plants and other major structures. They are also compared with results from prior investigations that have been used in the past in criteria development for such facilities.

  8. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
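    The Monte Carlo covariance-estimation step can be sketched in a few lines. Everything below is illustrative: a hypothetical two-output error model with one shared systematic term, not the example from the paper.

```python
import random

def mc_covariance(sample_error, n_draws=20000, seed=1):
    """Monte Carlo estimate of the covariance matrix of a vector of
    comparison errors; sample_error() draws one error vector."""
    random.seed(seed)
    draws = [sample_error() for _ in range(n_draws)]
    d = len(draws[0])
    mean = [sum(v[i] for v in draws) / n_draws for i in range(d)]
    return [[sum((v[i] - mean[i]) * (v[j] - mean[j]) for v in draws)
             / (n_draws - 1) for j in range(d)] for i in range(d)]

# Hypothetical error model: one systematic term shared by both outputs,
# plus an independent random term per output.
def draw():
    sys = random.gauss(0.0, 0.5)           # systematic uncertainty (shared)
    return [sys + random.gauss(0.0, 0.2),  # random uncertainty, output 1
            sys + random.gauss(0.0, 0.3)]  # random uncertainty, output 2

cov = mc_covariance(draw)
print(f"var1={cov[0][0]:.3f}  var2={cov[1][1]:.3f}  cov={cov[0][1]:.3f}")
# expect roughly 0.29, 0.34, 0.25
```

    The nonzero off-diagonal term is the shared systematic variance, which is exactly what a univariate treatment of each output in isolation would miss.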

  9. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  10. Data bases and statistical systems: demography

    NARCIS (Netherlands)

    Kreyenfeld, M.; Willekens, F.J.; Wright, James D.

    2015-01-01

    This article deals with the availability of large-scale data for demographic analysis. The main sources of data that demographers work with are censuses data, microcensus data, population registers, other administrative data, survey data, and big data. Data of this kind can be used to generate

  11. Statistical power analysis for the behavioral sciences

    National Research Council Canada - National Science Library

    Cohen, Jacob

    1988-01-01

    .... A chapter has been added for power analysis in set correlation and multivariate methods (Chapter 10). Set correlation is a realization of the multivariate general linear model, and incorporates the standard multivariate methods...

  12. Statistical power analysis for the behavioral sciences

    National Research Council Canada - National Science Library

    Cohen, Jacob

    1988-01-01

    ... offers a unifying framework and some new data-analytic possibilities. 2. A new chapter (Chapter 11) considers some general topics in power analysis in more integrated form than is possible in the earlier...

  13. Statistical methods for categorical data analysis

    CERN Document Server

    Powers, Daniel

    2008-01-01

    This book provides a comprehensive introduction to methods and models for categorical data analysis and their applications in social science research. Companion website also available, at https://webspace.utexas.edu/dpowers/www/

  14. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  15. Statistical analysis of anomalous transport in resistive interchange turbulence

    International Nuclear Information System (INIS)

    Sugama, Hideo; Wakatani, Masahiro.

    1992-01-01

    A new anomalous transport model for resistive interchange turbulence is derived from statistical analysis, applying the two-scale direct-interaction approximation to resistive magnetohydrodynamic equations with a gravity term. Our model is similar to the K-ε model for the eddy viscosity of turbulent shear flows in that the anomalous transport coefficients are expressed in terms of the turbulent kinetic energy K and its dissipation rate ε, while K and ε are determined by transport equations. This anomalous transport model can describe nonlocal effects, such as those from boundary conditions, which cannot be treated by conventional models based on transport coefficients represented by locally determined plasma parameters. (author)
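    For reference, in the standard K-ε closure that the abstract invokes as an analogy, the eddy viscosity is built from the same two quantities (this is the textbook shear-flow form, not an equation taken from the paper):

```latex
\nu_t \;=\; C_\mu \, \frac{K^2}{\varepsilon}, \qquad C_\mu \approx 0.09,
```

    with K and ε obtained from their own modelled transport equations; the anomalous transport coefficients of the abstract play the role of ν_t.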

  16. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades, the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations…

  17. Statistical Analysis and Modelling of Olkiluoto Structures

    International Nuclear Information System (INIS)

    Hellae, P.; Vaittinen, T.; Saksa, P.; Nummela, J.

    2004-11-01

    Posiva Oy is carrying out investigations for the disposal of spent nuclear fuel at the Olkiluoto site in SW Finland. The investigations have focused on the central part of the island, but the layout design of the entire repository requires characterization of notably larger areas and must rely, at least at the current stage, on borehole information from a rather sparse network and on geophysical soundings providing information outside and between the holes. In this work, the structural data in the current version of the Olkiluoto bedrock model are analyzed. The bedrock model relies heavily on the borehole data, although results of the seismic surveys and, for example, pumping tests are used in determining the orientation and continuation of the structures. The analysis addresses, in particular, questions related to the frequency and size of the structures. The structures observed in the boreholes mainly dip gently to the southeast. About 9% of the sample length belongs to structures; the proportion is higher in the upper parts of the rock. The number of fracture and crushed zones seems not to depend greatly on depth, whereas the hydraulic features concentrate in the depth range above -100 m. Below level -300 m, hydraulic conductivity occurs in connection with fractured zones. The hydraulic features especially, but also the fracture and crushed zones, often occur in groups. The frequency of structures (area of structures per total volume) is estimated to be of the order of 1/100 m. The size of the local structures was estimated by calculating the intersection of each zone with the nearest borehole in which the zone had not been detected. Stochastic models using the Fracman software by Golder Associates were generated based on the bedrock model data complemented with the magnetic ground survey data. The seismic surveys (from boreholes KR5, KR13, KR14, and KR19) were used as alternative input data. The generated models were tested by

  18. Informing Evidence Based Decisions: Usage Statistics for Online Journal Databases

    Directory of Open Access Journals (Sweden)

    Alexei Botchkarev

    2017-06-01

    Full Text Available Abstract Objective – The primary objective was to examine online journal database usage statistics for a provincial ministry of health in the context of evidence based decision-making. In addition, the study highlights the implementation of the Journal Access Centre (JAC) that is housed and powered by the Ontario Ministry of Health and Long-Term Care (MOHLTC) to inform health systems policy-making. Methods – This was a prospective case study using descriptive analysis of the JAC usage statistics of journal articles from January 2009 to September 2013. Results – JAC enables ministry employees to access approximately 12,000 journals with full-text articles. JAC usage statistics for the 2011-2012 calendar years demonstrate a steady level of activity in terms of searches, with a monthly average of 5,129. In 2009-2013, a total of 4,759 journal titles were accessed, including 1,675 journals with full text. Usage statistics demonstrate that actual consumption was over 12,790 downloaded full-text articles, or approximately 2,700 articles annually. Conclusion – The steady level of JAC activity revealed by the study reflects continuous demand for JAC services and products, and indicates that access to online journal databases has become part of routine government knowledge management processes. MOHLTC’s broad area of responsibilities, with dynamically changing priorities, translates into diverse information needs among its employees and a large set of required journals. Usage statistics indicate that MOHLTC information needs cannot be mapped to a reasonably compact set of “core” journals with a subsequent subscription to those journals.

  19. STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.

    Science.gov (United States)

    Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron

    2015-08-07

    Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
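    The core of the reusable-holdout mechanism (Thresholdout, from the same authors) fits in a few lines. The sketch below uses hypothetical accuracy values and simplified noise handling; the published algorithm additionally manages a budget of overfitting detections.

```python
import random

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01, seed=0):
    """Sketch of Thresholdout: release the training estimate unless it
    disagrees with the holdout estimate by more than a noisy threshold,
    in which case release a noise-perturbed holdout estimate instead."""
    rng = random.Random(seed)
    answers = []
    for t, h in zip(train_vals, holdout_vals):
        if abs(t - h) > threshold + rng.gauss(0.0, sigma):
            answers.append(h + rng.gauss(0.0, sigma))  # overfitting detected
        else:
            answers.append(t)                          # training value is safe
    return answers

train   = [0.70, 0.90]  # apparent accuracies of two adaptively chosen queries
holdout = [0.70, 0.50]  # holdout accuracies; the second query has overfit
print(thresholdout(train, holdout))
```

    Because the analyst only ever sees training values or noise-perturbed holdout values, adaptively chosen follow-up queries cannot silently overfit the holdout set.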

  20. Statistical Measures for Usage-Based Linguistics

    Science.gov (United States)

    Gries, Stefan Th.; Ellis, Nick C.

    2015-01-01

    The advent of usage-/exemplar-based approaches has resulted in a major change in the theoretical landscape of linguistics, but also in the range of methodologies that are brought to bear on the study of language acquisition/learning, structure, and use. In particular, methods from corpus linguistics are now frequently used to study distributional…

  1. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Atul Bansal

    1 Department of Electronics and Communication, G.L.A. University, 17-km stone, NH#2, Delhi-Mathura Road, ....

  2. Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis

    Science.gov (United States)

    Konrad, T. G.; Kropfli, R. A.

    1975-01-01

    Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.

  3. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  4. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  5. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  6. A Personalized Rolling Optimal Charging Schedule for Plug-In Hybrid Electric Vehicle Based on Statistical Energy Demand Analysis and Heuristic Algorithm

    Directory of Open Access Journals (Sweden)

    Fanrong Kong

    2017-09-01

    Full Text Available To alleviate greenhouse gas emissions and the dependence on fossil fuel, Plug-in Hybrid Electric Vehicles (PHEVs) have gained increasing popularity in recent decades. Due to the fluctuating electricity prices in the power market, the charging schedule strongly influences driving cost. Although next-day electricity prices can be obtained in a day-ahead power market, a driving plan is not easily made in advance. PHEV owners can input a next-day plan into a charging system, e.g., aggregators, a day ahead, but this is a tedious task to do every day, and such a plan may not be very accurate. To address this problem, in this paper, we analyze energy demands from a PHEV owner’s historical driving records and build a personalized statistical driving model. Based on the model and the electricity spot prices, a rolling optimization strategy is proposed to make the charging decision for the current time slot. Employing a heuristic algorithm, the schedule is made according to the situations in the following time slots; after the current time slot, however, the schedule is remade over the next tens of time slots. Hence, the schedule is produced by a dynamic rolling optimization, but only the charging decision for the current time slot is committed. In this way, both the fluctuation of electricity prices and the driving routine are involved in the scheduling, and it is not necessary for PHEV owners to input a day-ahead driving plan. Simulation results demonstrate that the proposed method helps owners reduce charging costs while still meeting their driving requirements.
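    The per-slot decision step can be sketched as follows. This is a minimal greedy stand-in for the paper's heuristic; the prices, expected demand, and charge rate are invented values:

```python
import math

def charge_now(prices, demand_kwh, max_rate_kwh):
    """Decide the charge amount for the current slot only: charge at full
    rate if the current slot (index 0) is among the cheapest slots needed
    to cover the statistically expected demand before departure; otherwise
    wait. The schedule is remade at the next slot (rolling horizon)."""
    slots_needed = math.ceil(demand_kwh / max_rate_kwh)
    cheapest = sorted(range(len(prices)), key=lambda i: prices[i])[:slots_needed]
    return max_rate_kwh if 0 in cheapest else 0.0

# Spot prices for the current slot and the next three (EUR/kWh, invented)
decision = charge_now([0.30, 0.12, 0.10, 0.25], demand_kwh=7.0, max_rate_kwh=3.5)
# slot 0 is not among the two cheapest slots, so the decision is to wait
```

Only the first element of the returned plan is ever executed; as new prices and updated demand estimates arrive, the optimization is simply rerun, which is what makes the approach "rolling."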

  7. Statistical analysis of disruptions in JET

    International Nuclear Information System (INIS)

    De Vries, P.C.; Johnson, M.F.; Segui, I.

    2009-01-01

    The disruption rate (the percentage of discharges that disrupt) in JET was found to drop steadily over the years. Recent campaigns (2005-2007) show a yearly averaged disruption rate of only 6% while from 1991 to 1995 this was often higher than 20%. Besides the disruption rate, the so-called disruptivity, or the likelihood of a disruption depending on the plasma parameters, has been determined. The disruptivity of plasmas was found to be significantly higher close to the three main operational boundaries for tokamaks: the low-q, high-density and β-limits. The frequency at which JET operated close to the density limit increased sixfold over the last decade; however, only a small reduction in disruptivity was found. Similarly, the disruptivity close to the low-q and β-limits was found to be unchanged. The most significant reduction in disruptivity was found far from the operational boundaries, leading to the conclusion that the improved disruption rate is due to a better technical capability of operating JET, rather than safer operation close to the physics limits. The statistics showed that a simple protection system was able to mitigate the forces of a large fraction of disruptions, although mitigating the heat flux has so far proved more difficult.

  8. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met

  9. Visual wetness perception based on image color statistics.

    Science.gov (United States)

    Sawayama, Masataka; Adelson, Edward H; Nishida, Shin'ya

    2017-05-01

    Color vision provides humans and animals with the abilities to discriminate colors based on the wavelength composition of light and to determine the location and identity of objects of interest in cluttered scenes (e.g., ripe fruit among foliage). However, we argue that color vision can inform us about much more than color alone. Since a trichromatic image carries more information about the optical properties of a scene than a monochromatic image does, color can help us recognize complex material qualities. Here we show that human vision uses color statistics of an image for the perception of an ecologically important surface condition (i.e., wetness). Psychophysical experiments showed that overall enhancement of chromatic saturation, combined with a luminance tone change that increases the darkness and glossiness of the image, tended to make dry scenes look wetter. Theoretical analysis along with image analysis of real objects indicated that our image transformation, which we call the wetness enhancing transformation, is consistent with actual optical changes produced by surface wetting. Furthermore, we found that the wetness enhancing transformation operator was more effective for the images with many colors (large hue entropy) than for those with few colors (small hue entropy). The hue entropy may be used to separate surface wetness from other surface states having similar optical properties. While surface wetness and surface color might seem to be independent, there are higher order color statistics that can influence wetness judgments, in accord with the ecological statistics. The present findings indicate that the visual system uses color image statistics in an elegant way to help estimate the complex physical status of a scene.
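    The "wetness enhancing transformation" described above can be sketched per pixel. The saturation gain and gamma values here are illustrative assumptions, not the parameters used in the study:

```python
import colorsys

def wet_transform(rgb_pixels, sat_gain=1.6, gamma=1.8):
    """Sketch of a wetness-enhancing transformation: boost chromatic
    saturation and darken the luminance tone curve (gamma > 1), mimicking
    the optical changes produced by surface wetting. Parameter values are
    illustrative, not the paper's."""
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        s = min(1.0, s * sat_gain)   # overall saturation enhancement
        v = v ** gamma               # darker, glossier tone
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

wet = wet_transform([(0.8, 0.6, 0.4)])           # one dry-looking pixel
wet_h, wet_s, wet_v = colorsys.rgb_to_hsv(*wet[0])
```

Applied to a whole image, such a transform raises saturation and lowers value everywhere; per the study, its perceptual effectiveness should be greater for scenes with large hue entropy (many distinct hues).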

  10. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulation in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  11. Security of statistical data bases: invasion of privacy through attribute correlational modeling

    Energy Technology Data Exchange (ETDEWEB)

    Palley, M.A.

    1985-01-01

    This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.
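    The two ACM steps, fitting a regression on the synthetic data base and using it to estimate a confidential value, can be sketched numerically. All attribute names and numbers below are invented for illustration; no real data base is involved:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical synthetic data base: non-confidential attributes X and a
# confidential attribute observed only through proportionally perturbed
# query responses, as in the construction described in the study.
n = 200
X = rng.normal(size=(n, 2))                               # e.g. age, years of service
salary = X @ np.array([1.5, -0.7]) + 30.0                 # confidential attribute
y_synthetic = salary * (1.0 + rng.normal(0.0, 0.02, n))   # proportional random variation

# ACM step 1: regress the confidential attribute on the other attributes
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y_synthetic, rcond=None)

# ACM step 2: use the fitted model to estimate a target record's value
target = np.array([0.8, -0.3, 1.0])   # target attributes, plus intercept term
estimate = float(target @ coef)
```

Because the synthetic data base statistically resembles the actual one, the regression recovers the attribute correlations well enough to compromise the confidential value despite the perturbation, which is exactly the privacy risk the study demonstrates.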

  12. Statistically Based Morphodynamic Modeling of Tracer Slowdown

    Science.gov (United States)

    Borhani, S.; Ghasemi, A.; Hill, K. M.; Viparelli, E.

    2017-12-01

    Tracer particles are used to study bedload transport in gravel-bed rivers. One of the advantages associated with using of tracer particles is that they allow for direct measures of the entrainment rates and their size distributions. The main issue in large scale studies with tracer particles is the difference between tracer stone short term and long term behavior. This difference is due to the fact that particles undergo vertical mixing or move to less active locations such as bars or even floodplains. For these reasons the average virtual velocity of tracer particle decreases in time, i.e. the tracer slowdown. In summary, tracer slowdown can have a significant impact on the estimation of bedload transport rate or long term dispersal of contaminated sediment. The vast majority of the morphodynamic models that account for the non-uniformity of the bed material (tracer and not tracer, in this case) are based on a discrete description of the alluvial deposit. The deposit is divided in two different regions; the active layer and the substrate. The active layer is a thin layer in the topmost part of the deposit whose particles can interact with the bed material transport. The substrate is the part of the deposit below the active layer. Due to the discrete representation of the alluvial deposit, active layer models are not able to reproduce tracer slowdown. In this study we try to model the slowdown of tracer particles with the continuous Parker-Paola-Leclair morphodynamic framework. This continuous, i.e. not layer-based, framework is based on a stochastic description of the temporal variation of bed surface elevation, and of the elevation specific particle entrainment and deposition. Particle entrainment rates are computed as a function of the flow and sediment characteristics, while particle deposition is estimated with a step length formulation. Here we present one of the first implementation of the continuum framework at laboratory scale, its validation against

  13. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL{sub 95%}) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL{sub 95%}) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL{sub 95%} was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
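    The UCL95% described here has the standard one-sided t-interval form, a function of the sample count, average, and standard deviation. The sketch below uses invented concentration values and the tabulated t-quantile for five degrees of freedom:

```python
from math import sqrt
from statistics import mean, stdev

def ucl95(results, t_crit):
    """One-sided upper 95% confidence limit on the mean: a function of the
    number of samples, their average, and their standard deviation."""
    return mean(results) + t_crit * stdev(results) / sqrt(len(results))

# Six scrape-sample results (invented concentrations, mg/kg) and the
# one-sided t quantile t_{0.95} for n - 1 = 5 degrees of freedom.
samples = [12.1, 11.4, 13.0, 12.6, 11.9, 12.4]
limit = ucl95(samples, t_crit=2.015)
```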

  14. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  15. The International Coal Statistics Data Base program maintenance guide

    International Nuclear Information System (INIS)

    1991-06-01

    The International Coal Statistics Data Base (ICSD) is a microcomputer-based system which contains information related to international coal trade. This includes coal production, consumption, imports and exports information. The ICSD is a secondary data base, meaning that information contained therein is derived entirely from other primary sources. It uses dBase III+ and Lotus 1-2-3 to locate, report and display data. The system is used for analysis in preparing the Annual Prospects for World Coal Trade (DOE/EIA-0363) publication. The ICSD system is menu driven and also permits the user who is familiar with dBase and Lotus operations to leave the menu structure to perform independent queries. Documentation for the ICSD consists of three manuals -- the User's Guide, the Operations Manual, and the Program Maintenance Manual. This Program Maintenance Manual provides the information necessary to maintain and update the ICSD system. Two major types of program maintenance documentation are presented in this manual. The first is the source code for the dBase III+ routines and related non-dBase programs used in operating the ICSD. The second is listings of the major component database field structures. A third important consideration for dBase programming, the structure of index files, is presented in the listing of source code for the index maintenance program. 1 fig

  16. An analysis of UK wind farm statistics

    International Nuclear Information System (INIS)

    Milborrow, D.J.

    1995-01-01

    An analysis of key data for 22 completed wind projects shows 134 MW of plant cost £152 million, giving an average cost of £1136/kW. The energy generation potential of these windfarms is around 360 GWh, derived from sites with windspeeds between 6.2 and 8.8 m/s. Relationships between wind speed, energy production and cost were examined and it was found that costs increased with wind speed, due to the difficulties of access in hilly regions. It also appears that project costs fell with time and wind energy prices have fallen much faster than electricity prices. (Author)

  17. Enhancing Cloud Resource Utilisation using Statistical Analysis

    OpenAIRE

    Sijin He; Li Guo; Yike Guo

    2014-01-01

    Resource provisioning based on virtual machine (VM) has been widely accepted and adopted in cloud computing environments. A key problem resulting from using static scheduling approaches for allocating VMs on different physical machines (PMs) is that resources tend to be not fully utilised. Although some existing cloud reconfiguration algorithms have been developed to address the problem, they normally result in high migration costs and low resource utilisation due to ignoring the multi-dimens...

  18. Simultaneous analysis of 11 main active components in Cirsium setosum based on HPLC-ESI-MS/MS and combined with statistical methods.

    Science.gov (United States)

    Sun, Qian; Chang, Lu; Ren, Yanping; Cao, Liang; Sun, Yingguang; Du, Yingfeng; Shi, Xiaowei; Wang, Qiao; Zhang, Lantong

    2012-11-01

    A novel method based on high-performance liquid chromatography coupled with electrospray ionization tandem mass spectrometry was developed for simultaneous determination of the 11 major active components, ten flavonoids and one phenolic acid, in Cirsium setosum. Separation was performed on a reversed-phase C(18) column with gradient elution of methanol and 0.1‰ acetic acid (v/v). The identification and quantification of the analytes were achieved on a hybrid quadrupole linear ion trap mass spectrometer. Multiple-reaction monitoring scanning was employed for quantification, with the electrospray ion source polarity switched between positive and negative modes in a single run. Full validation of the assay was carried out, including linearity, precision, accuracy, stability, and limits of detection and quantification. The results demonstrated that the method developed was reliable, rapid, and specific. Twenty-five batches of C. setosum samples from different sources were then analyzed using the developed method, and the total contents of the 11 analytes ranged from 1717.460 to 23028.258 μg/g. Among them, the content of linarin was highest, with a mean value of 7340.967 μg/g. Principal component analysis and hierarchical clustering analysis were performed to differentiate and classify the samples, which is helpful for comprehensive evaluation of the quality of C. setosum. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
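    The chemometric step, principal component analysis of the batch-by-analyte content matrix, can be sketched as follows. The matrix here is random stand-in data with the reported batch and analyte counts; real values would come from the assay:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in content matrix: 25 batches x 11 analytes (μg/g)
X = rng.normal(loc=[7341.0] + [800.0] * 10, scale=300.0, size=(25, 11))

# Principal component analysis via SVD of the autoscaled data
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U * S                        # batch coordinates on the components
explained = S**2 / (S**2).sum()       # fraction of variance per component
```

Hierarchical clustering for classifying the batches would then operate on the rows of `scores` (e.g. via `scipy.cluster.hierarchy.linkage`).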

  19. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose
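    The outcome-distribution idea can be sketched with a toy one-dimensional motion model. All numbers and the metric below are invented; a real implementation would sample full geometry instances from patient-specific imaging:

```python
import random

random.seed(1)  # reproducible sketch

def outcome_distribution(dose_metric, sample_geometry, n_samples=500):
    """Sample geometry instances from a motion model, evaluate the dose
    metric on each, and summarize the resulting outcome distribution by
    its mean and variance (the quantities the optimizer acts on)."""
    vals = [dose_metric(sample_geometry()) for _ in range(n_samples)]
    m = sum(vals) / n_samples
    var = sum((v - m) ** 2 for v in vals) / n_samples
    return m, var

# Toy stand-ins: a 1-D 'geometry' (target shift in mm, Gaussian motion
# model) and a metric that degrades as the target moves off-axis.
shift = lambda: random.gauss(0.0, 3.0)
metric = lambda s: 60.0 - 0.5 * abs(s)   # e.g. a simplified EUD in Gy

mean_eud, var_eud = outcome_distribution(metric, shift)
```

An optimizer in this framework would adjust the plan to shape `mean_eud` and `var_eud` jointly, rather than a single nominal value computed on one geometry.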

  20. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  1. Comparative analysis of positive and negative attitudes toward statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during the statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content. Students who have negative attitudes toward statistics may feel discouraged, especially in group assignments, are at risk of failure, are often highly emotional, and struggle to move forward. Therefore, this study investigates students' attitudes towards learning statistics. Six latent constructs were used to measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes towards Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP). The respondents were students taking the applied statistics course from different faculties. The analysis found the questionnaire acceptable, and the relationships among the constructs were proposed and investigated. Students showed full effort to master the statistics course, found the course enjoyable, were confident in their intellectual capacity, and held more positive than negative attitudes towards learning statistics. In conclusion, positive attitudes were mostly exhibited in the affect, cognitive competence, value, interest, and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.

  2. Retention of Statistical Concepts in a Preliminary Randomization-Based Introductory Statistics Curriculum

    Science.gov (United States)

    Tintle, Nathan; Topliff, Kylie; VanderStoep, Jill; Holmes, Vicki-Lynn; Swanson, Todd

    2012-01-01

    Previous research suggests that a randomization-based introductory statistics course may improve student learning compared to the consensus curriculum. However, it is unclear whether these gains are retained by students post-course. We compared the conceptual understanding of a cohort of students who took a randomization-based curriculum (n = 76)…

  3. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  4. FADTTS: functional analysis of diffusion tensor tract statistics.

    Science.gov (United States)

    Zhu, Hongtu; Kong, Linglong; Li, Runze; Styner, Martin; Gerig, Guido; Lin, Weili; Gilmore, John H

    2011-06-01

    The aim of this paper is to present a functional analysis of a diffusion tensor tract statistics (FADTTS) pipeline for delineating the association between multiple diffusion properties along major white matter fiber bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these white matter tract properties in various diffusion tensor imaging studies. The FADTTS integrates five statistical tools: (i) a multivariate varying coefficient model for allowing the varying coefficient functions in terms of arc length to characterize the varying associations between fiber bundle diffusion properties and a set of covariates, (ii) a weighted least squares estimation of the varying coefficient functions, (iii) a functional principal component analysis to delineate the structure of the variability in fiber bundle diffusion properties, (iv) a global test statistic to test hypotheses of interest, and (v) a simultaneous confidence band to quantify the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FADTTS. We apply FADTTS to investigate the development of white matter diffusivities along the splenium of the corpus callosum tract and the right internal capsule tract in a clinical study of neurodevelopment. FADTTS can be used to facilitate the understanding of normal brain development, the neural bases of neuropsychiatric disorders, and the joint effects of environmental and genetic factors on white matter fiber bundles. The advantages of FADTTS compared with the other existing approaches are that they are capable of modeling the structured inter-subject variability, testing the joint effects, and constructing their simultaneous confidence bands. However, FADTTS is not crucial for estimation and reduces to the functional analysis method for the single measure. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Obtaining Internet Flow Statistics by Volunteer-Based System

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup; Bujlow, Tomasz

    2012-01-01

    In this paper we demonstrate how the Volunteer Based System for Research on the Internet, developed at Aalborg University, can be used for creating statistics of Internet usage. Since the data is collected on individual machines, the statistics can be made on the basis of both individual users......, and average flow durations. The paper is concluded with a discussion on what further statistics can be made, and the further development of the system....
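    Aggregating per-machine flow records into usage statistics such as average flow durations can be sketched as follows. The record fields are assumptions for illustration; the actual system's schema may differ:

```python
from collections import defaultdict

# Hypothetical per-machine flow records: (user, application, duration_s)
flows = [("u1", "browser", 12.0), ("u1", "mail", 4.0), ("u2", "browser", 30.0)]

totals = defaultdict(lambda: [0, 0.0])   # application -> [flow count, total duration]
for user, app, dur in flows:
    totals[app][0] += 1
    totals[app][1] += dur

avg_duration = {app: total / n for app, (n, total) in totals.items()}
```

Grouping by `user` instead of `app` gives the per-user view the paper highlights as an advantage of collecting data on individual machines.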

  6. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...
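    The core ROC summary statistic, the empirical AUC, can be computed directly from its probabilistic definition. This is a generic sketch with invented scores, not code from the book:

```python
def auc(pos_scores, neg_scores):
    """Empirical AUC: the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count half)."""
    pairs = [(p, n) for p in pos_scores for n in neg_scores]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Invented classifier scores for diseased vs. healthy cases
area = auc([0.9, 0.8], [0.3, 0.8])
```

An AUC of 0.5 corresponds to a useless test and 1.0 to a perfect one; the topics in the book (pooled biomarkers, multireader studies, FROC) generalize this basic quantity.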

  7. Statistical reliability assessment of software-based systems

    International Nuclear Information System (INIS)

    Korhonen, J.; Pulkkinen, U.; Haapanen, P.

    1997-01-01

    Plant vendors nowadays propose software-based systems even for the most critical safety functions. The reliability estimation of safety-critical software-based systems is difficult since the conventional modeling techniques do not necessarily apply to the analysis of these systems, and the quantification seems to be impossible. Due to lack of operational experience and due to the nature of software faults, the conventional reliability estimation methods cannot be applied. New methods are therefore needed for the safety assessment of software-based systems. In the research project Programmable automation systems in nuclear power plants (OHA), financed together by the Finnish Centre for Radiation and Nuclear Safety (STUK), the Ministry of Trade and Industry and the Technical Research Centre of Finland (VTT), various safety assessment methods and tools for software-based systems are developed and evaluated. This volume in the OHA report series deals with the statistical reliability assessment of software-based systems on the basis of dynamic test results and qualitative evidence from the system design process. Other reports to be published later in the OHA report series will handle the diversity requirements in safety-critical software-based systems, generation of test data from operational profiles and handling of programmable automation in plant PSA studies. (orig.) (25 refs.)
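    One standard statistical argument of this kind bounds the failure probability from failure-free dynamic testing. The zero-failure binomial bound below is a textbook illustration of the approach, offered as an assumption about the type of method meant, not as the OHA project's exact procedure:

```python
def failure_prob_upper_bound(n_tests, confidence=0.95):
    """Upper confidence bound on the per-demand failure probability after
    n_tests statistically independent, failure-free tests drawn from the
    operational profile (zero-failure binomial bound)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

bound = failure_prob_upper_bound(3000)   # roughly 1e-3 at 95% confidence
```

The steep cost of the approach is visible immediately: each additional factor of ten in the claimed reliability requires roughly ten times as many failure-free tests, which is why qualitative design-process evidence is also brought into the assessment.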

  8. Explorations in Statistics: The Analysis of Ratios and Normalized Data

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized--or standardized--data. As researchers, we compute a ratio--a numerator divided by a denominator--to compute a…

  9. Analysis of thrips distribution: application of spatial statistics and Kriging

    Science.gov (United States)

    John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard

    1991-01-01

    Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...
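
The kriging prediction described above can be sketched in a few lines. Everything below is illustrative: the coordinates, counts, and the exponential variogram (with made-up sill and range) are assumptions, not the study's data or fitted model.

```python
import numpy as np

def variogram(h, sill=1.0, rang=10.0):
    # assumed exponential variogram model (sill and range are made up)
    return sill * (1.0 - np.exp(-h / rang))

def ordinary_krige(xy, z, target, sill=1.0, rang=10.0):
    """Ordinary kriging prediction at one target location."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    # kriging system: variogram matrix bordered by the unbiasedness constraint
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d, sill, rang)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - target, axis=-1), sill, rang)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)   # weighted average of the observations

# four hypothetical thrips counts at the corners of a unit square
xy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])
pred = ordinary_krige(xy, z, np.array([0.5, 0.5]))
```

By symmetry the centre of the square gets equal weights, so the prediction is the mean of the four counts; in practice the variogram would be fitted to the empirical spatial correlation of the data rather than assumed.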

  10. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown on the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes, precautionary stock fees and oil pollution fees

  11. Statistical analysis of magnetically soft particles in magnetorheological elastomers

    Science.gov (United States)

    Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.

    2017-04-01

    The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli depends crucially on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain better insight into the correlation between macroscopic effects and microstructure, and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations were carried out as a basis for a statistical analysis of the particle configurations. Different MREs with quantities of 2-15 wt% (0.27-2.3 vol%) of iron powder and different arrangements of the particles inside the matrix were prepared. The X-μCT results were processed with image-processing software to extract the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
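
The pair correlation function mentioned above can be estimated naively as follows. This is a generic sketch with synthetic random positions in a cubic box and no periodic or boundary corrections; it is not the paper's estimator or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_correlation(pos, box, r_max, nbins):
    """Naive O(N^2) pair correlation function g(r) for points in a
    cubic box of side `box`, ignoring boundary effects (sketch only)."""
    n = len(pos)
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unique pairs only
    counts, edges = np.histogram(d, bins=nbins, range=(0.0, r_max))
    r = 0.5 * (edges[1:] + edges[:-1])             # bin centres
    dr = edges[1] - edges[0]
    density = n / box ** 3
    # expected pair count per spherical shell for an uncorrelated (ideal) gas
    expected = 0.5 * n * density * 4.0 * np.pi * r ** 2 * dr
    return r, counts / expected

pos = rng.uniform(0.0, 10.0, size=(500, 3))        # hypothetical particle positions
r, g = pair_correlation(pos, box=10.0, r_max=3.0, nbins=30)
```

For an uncorrelated distribution g(r) fluctuates around 1 at intermediate r; particle clustering or chain formation under a magnetic field would show up as g(r) > 1 at short range.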

  12. A Model of Statistics Performance Based on Achievement Goal Theory.

    Science.gov (United States)

    Bandalos, Deborah L.; Finney, Sara J.; Geske, Jenenne A.

    2003-01-01

    Tests a model of statistics performance based on achievement goal theory. Both learning and performance goals affected achievement indirectly through study strategies, self-efficacy, and test anxiety. Implications of these findings for teaching and learning statistics are discussed. (Contains 47 references, 3 tables, 3 figures, and 1 appendix.)…

  13. A statistically self-consistent type Ia supernova data analysis

    International Nuclear Information System (INIS)

    Lago, B.L.; Calvao, M.O.; Joras, S.E.; Reis, R.R.R.; Waga, I.; Giostri, R.

    2011-01-01

    Full text: Type Ia supernovae are one of the main cosmological probes nowadays and are used as standardized candles in distance measurements. The standardization processes, among which SALT2 and MLCS2k2 are the most used, are based on empirical relations and leave room for a residual dispersion in the light curves of the supernovae. This dispersion is introduced into the chi-squared used to fit the parameters of the model, in the expression for the variance of the data, as an attempt to quantify our ignorance in modeling the supernovae properly. The procedure used to assign a value to this dispersion is statistically inconsistent and excludes the possibility of comparing different cosmological models. In addition, the SALT2 light-curve fitter introduces parameters in the model for the variance that are also used in the model for the data. In the chi-squared statistics context, the minimization of such a quantity yields, in the best-case scenario, a bias. An iterative method has been developed to perform the minimization of this chi-squared, but it is not well grounded, although it is used by several groups. We propose an analysis of the type Ia supernova data that is based on the likelihood itself and makes it possible to address both inconsistencies mentioned above in a straightforward way. (author)
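
The inconsistency described here can be illustrated with a toy sketch (all numbers are invented): a chi-squared whose variance includes a free intrinsic dispersion decreases monotonically as that dispersion grows, while the full Gaussian likelihood retains the log-variance term and so has a finite optimum.

```python
import math

def chi2(resid, sig, sig_int):
    # intrinsic dispersion added in quadrature to the per-point errors
    return sum(r * r / (s * s + sig_int ** 2) for r, s in zip(resid, sig))

def neg2_log_like(resid, sig, sig_int):
    # -2 ln L keeps the ln(variance) term that chi-squared discards,
    # which penalizes inflating sig_int and yields a finite minimum
    return sum(r * r / (s * s + sig_int ** 2) + math.log(s * s + sig_int ** 2)
               for r, s in zip(resid, sig))

resid = [2.0, -2.0, 1.5]   # toy Hubble-diagram residuals (invented)
sig = [1.0, 1.0, 1.0]      # toy measurement errors (invented)
```

Minimizing `chi2` over `sig_int` drives it to infinity, whereas `neg2_log_like` has an interior minimum; this is the sense in which a likelihood-based analysis can be self-consistent while the chi-squared recipe is not.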

  14. Statistical analysis of the count and profitability of air conditioners.

    Science.gov (United States)

    Rady, El Houssainy A; Mohamed, Salah M; Abd Elmegaly, Alaa A

    2018-08-01

    This article presents a statistical analysis of the number and profitability of air conditioners in an Egyptian company. Whether each categorical variable follows the same distribution was checked using the Kruskal-Wallis test.
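
The Kruskal-Wallis H statistic used in the study can be sketched in a few lines. The profitability figures below are made up for illustration, and this sketch omits the correction for tied ranks.

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction, sketch only)."""
    data = sorted(chain.from_iterable(groups))
    ranks = {}
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        ranks[data[i]] = (i + 1 + j) / 2.0   # average rank for tied values
        i = j
    n = len(data)
    s = sum(sum(ranks[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 / (n * (n + 1)) * s - 3.0 * (n + 1)

# hypothetical profitability samples for three air-conditioner categories
h = kruskal_wallis_h([12.1, 13.4, 11.8], [14.2, 15.1, 13.8], [11.0, 10.5, 11.9])
```

H is then compared against a chi-squared distribution with (number of groups - 1) degrees of freedom to decide whether the groups share the same distribution.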

  15. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Science.gov (United States)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-06-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  16. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    International Nuclear Information System (INIS)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-01-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  17. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Energy Technology Data Exchange (ETDEWEB)

    Glascock, M. D.; Neff, H. [University of Missouri, Research Reactor Center (United States); Vaughn, K. J. [Pacific Lutheran University, Department of Anthropology (United States)

    2004-06-15

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  18. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products

  19. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products

  20. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products

  1. Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers

    Science.gov (United States)

    Keiffer, Greggory L.; Lane, Forrest C.

    2016-01-01

    Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…

  2. The system for statistical analysis of logistic information

    Directory of Open Access Journals (Sweden)

    Khayrullin Rustam Zinnatullovich

    2015-05-01

    Full Text Available A current problem for managers in logistics and trading companies is improving operational business performance and developing logistics support for sales. Developing logistics sales support involves designing and implementing a set of works for the development of existing warehouse facilities, including both a detailed description of the work performed and the timing of its implementation. Logistics engineering of a warehouse complex includes such tasks as: determining the number and types of technological zones, calculating the required number of loading/unloading places, developing storage structures, developing pre-sales preparation zones, developing specifications of storage types, selecting loading/unloading equipment, detailed planning of the warehouse logistics system, creating architectural planning decisions, selecting information-processing equipment, etc. The ERP and WMS systems currently in use do not solve the full list of logistics engineering problems. In this regard, developing specialized software products that take into account the specifics of warehouse logistics, and subsequently integrating this software with ERP and WMS systems, is a current task. In this paper we propose a system for the statistical analysis of logistics information, designed to meet the challenges of logistics engineering and planning and to improve the efficiency of the operating business and the development of logistics support for sales. The system is based on methods of statistical data processing, methods for assessing and predicting logistics performance, methods for determining and calculating the data required for the registration, storage and processing of metal products, as well as methods for planning reconstruction and development…

  3. Statistical Agent Based Modelization of the Phenomenon of Drug Abuse

    Science.gov (United States)

    di Clemente, Riccardo; Pietronero, Luciano

    2012-07-01

    We introduce a statistical agent-based model to describe the phenomenon of drug abuse and its dynamical evolution at the individual and global level. The agents are heterogeneous with respect to their intrinsic inclination to drugs, their budget attitude and their social environment. The various levels of drug use were inspired by the professional description of the phenomenon, and this permits a direct comparison with all available data. We show that certain elements are very important in starting drug use, for example rare events in personal experience that make it possible to occasionally overcome the barrier to drug use. The analysis of how the system reacts to perturbations is very important for understanding its key elements, and it provides strategies for effective policy making. The present model represents the first step toward a realistic description of this phenomenon and can easily be generalized in various directions.
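
A minimal threshold-style sketch of such a heterogeneous agent model is shown below. All parameters, class names and the rare-event mechanism are invented for illustration; the paper's model is considerably richer (budget attitudes, social environment, multiple levels of use).

```python
import random

random.seed(1)   # reproducible toy run

class Agent:
    # hypothetical attributes: an intrinsic inclination and a use flag
    def __init__(self):
        self.inclination = random.random()
        self.user = False

def step(agents, event_prob=0.01, barrier=0.9):
    """One sweep: agents whose inclination exceeds the barrier, or who are
    hit by a rare random event, cross into (occasional) drug use."""
    for a in agents:
        if not a.user and (a.inclination > barrier or random.random() < event_prob):
            a.user = True

agents = [Agent() for _ in range(1000)]
for _ in range(50):
    step(agents)
users = sum(a.user for a in agents)
```

Even this toy version shows the role of rare events: sweeping `event_prob` or `barrier` changes the long-run fraction of users, which is the kind of perturbation analysis the abstract refers to.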

  4. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and the final survey. At each stage, collecting data relevant enough to draw the necessary conclusions is quite a challenge. In particular, two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorization and optimisation of the materials to be removed, and the final survey to demonstrate compliance with clearance levels. The latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis, whereas a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements, providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of spatial continuity in the radiological contamination. Geostatistics thus provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). In this way, the radiological characterization of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis possible provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. Combined with…

  5. Statistical Analysis of Environmental Tritium around Wolsong Site

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ju Youl [FNC Technology Co., Yongin (Korea, Republic of)

    2010-04-15

    To find the relationship among airborne tritium, tritium in rainwater, TFWT (Tissue Free Water Tritium) and TBT (Tissue Bound Tritium), a statistical analysis is conducted based on tritium data measured at KHNP employees' houses around the Wolsong nuclear power plants during the 10 years from 1999 to 2008. The results show that tritium in these media exhibits a strong seasonal and annual periodicity. Tritium concentration in rainwater is observed to be highly correlated with TFWT and is transmitted directly to TFWT without delay. The response of environmental tritium radioactivity around the Wolsong site is analyzed using time-series techniques and non-parametric trend analysis. Tritium in the atmosphere and rainwater is strongly autocorrelated, with seasonal and annual periodicity. The TFWT concentration in pine needles proves to be more sensitive to rainfall than to other weather variables. Non-parametric trend analysis of the TFWT concentration in pine needles shows an increasing slope at the 95% confidence level. This study demonstrates the usefulness of time-series and trend analysis for interpreting the relationship of environmental radioactivity with various environmental media.
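
A common choice for such a non-parametric trend test is the Mann-Kendall statistic (the abstract does not name the test, so this is an assumption). Its S statistic is simple to compute on a toy series:

```python
def mann_kendall_s(x):
    """Mann-Kendall S: the sum of sign(x[j] - x[i]) over all pairs i < j.
    S > 0 suggests an increasing trend, S < 0 a decreasing one."""
    s = 0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            s += (x[j] > x[i]) - (x[j] < x[i])
    return s

# toy concentration series with a mild upward drift (not the study's data)
series = [1.1, 1.0, 1.3, 1.2, 1.5, 1.4, 1.7]
s = mann_kendall_s(series)
```

In a full test, S would be normalized by its variance (with tie corrections) to obtain a Z score and hence the confidence level, such as the 95% level quoted in the abstract.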

  6. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

    For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  7. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees

  8. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail in the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes and precautionary stock fees on oil products

  9. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article tries to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of sample size estimation, power analysis and statistical errors is given. Finally, there is a summary of the parametric and non-parametric tests used for data analysis.
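
As a tiny example of the parametric tests such overviews cover, Welch's two-sample t statistic can be computed directly from its definition. The samples below are made up for illustration; the article itself does not provide data.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# hypothetical measurements from two groups
t = welch_t([5.1, 4.9, 5.3, 5.0], [4.2, 4.4, 4.1, 4.3])
```

The statistic would then be compared against a t distribution (with Welch-Satterthwaite degrees of freedom) to obtain a p-value; when normality cannot be assumed, a non-parametric alternative such as the Mann-Whitney test is used instead.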

  10. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  11. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional octave smoothing, as with 1/3-octave analysis, may lead to RTF simplifications that can be useful for several audio applications, like room compensation, room modeling and auralisation purposes. The aim of this work is to identify the relationship of optimal response and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical...

  12. Detecting fire in video stream using statistical analysis

    Directory of Open Access Journals (Sweden)

    Koplík Karel

    2017-01-01

    Full Text Available Real-time fire detection in video streams is one of the most interesting problems in computer vision. In fact, in most cases it would be desirable to have a fire detection algorithm implemented in ordinary industrial cameras, and/or to be able to replace standard industrial cameras with ones implementing the fire detection algorithm. In this paper, we present a new algorithm for detecting fire in video. The algorithm is based on tracking suspicious regions in time and statistically analysing their trajectories. False alarms are minimized by combining multiple detection criteria: pixel brightness, the trajectories of suspicious regions (to evaluate the characteristic fire flickering) and the persistence of the alarm state across a sequence of frames. The resulting implementation is fast and can therefore run on a wide range of affordable hardware.

  13. Diffusion tensor imaging study of early white matter integrity in HIV-infected patients: A tract-based spatial statistics analysis

    Directory of Open Access Journals (Sweden)

    Ruili Li

    2015-12-01

    Conclusion: Multiple cerebral white matter fiber tracts are damaged in HIV-infected patients without cognitive impairment. Quantitative analysis of DTI using TBSS is valuable in evaluating changes of HIV-associated white matter microstructures.

  14. Statistical analysis of dynamic parameters of the core

    International Nuclear Information System (INIS)

    Ionov, V.S.

    2007-01-01

    Transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored with statistical analysis tools. The transients have sufficient duration, a few channels for chamber currents and reactivity, and also some channels for technological parameters. From these values the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and the values of the measured dynamic parameters were reconstructed as a result of the analysis. The mathematical means of statistical analysis used were: approximation (A), filtration (F), rejection (R), estimation of the parameters of descriptive statistics (DSP), correlation characteristics (kk), regression analysis (KP), prognosis (P) and statistical criteria (SC). The calculation procedures were implemented in MATLAB. The sources of methodological and statistical errors are presented: inadequacy of the model, the precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the processing technique for the registered data, etc. Examples of the results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered (Authors)

  15. Practical application and statistical analysis of titrimetric monitoring ...

    African Journals Online (AJOL)

    2008-09-18

    Sep 18, 2008 ... The statistical tests showed that, depending on the titrant concentration ... The ASD process offers the possibility of transferring waste streams into ..... (1993) Weak acid/bases and pH control in anaerobic system – A review.

  16. Spectral signature verification using statistical analysis and text mining

    Science.gov (United States)

    DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.

    2016-05-01

In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database1 (SigDB) are proposed. The numerical data comprising a sample material's spectrum is validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum is qualitatively analyzed using lexical analysis text mining. This technique analyzes the syntax of the meta-data to reveal local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security2 (text encryption/decryption), biomedical3, and marketing4 applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need for an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is

  17. A brief statistical analysis of stomatology papers published in international journals by researchers from the mainland of China--based on Thomson Reuters' journal citation reports for 2009.

    Science.gov (United States)

    Zhang, Yu-Hua; Pan, Yun-Tao; Hong, Xiao

    2011-01-01

This paper briefly analyzes the publication status of Chinese stomatology papers in 2009 with statistics from the Journal Citation Reports provided by Thomson Scientific. Only papers whose first author came from the mainland of China were included in the calculations. We found a significant increase in the number of SCIE-indexed papers, most of which were published by six dental institutions in China. Among all Chinese medical institutions, West China School of Stomatology, Sichuan University is the first dental institution to enter the top-20 list with the most publications, and it also ranks 9th in the number of outstanding articles. West China School of Stomatology, Sichuan University and the Fourth Military Medical University are the only two dental institutions in China that have each published over a hundred SCIE-indexed papers. The former has published the most SCIE-indexed stomatology papers, while the latter has the highest average impact factor. As laboratories and funds in China have close connections with various dental hospitals, the practice of translational medicine in dentistry is advancing rapidly and smoothly.

  18. Simulation Experiments in Practice: Statistical Design and Regression Analysis

    OpenAIRE

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...

  19. Statistical analysis of long term (2006-2016) TIR imagery based on Generalized Extreme Value estimator: an application at Pisciarelli volcanic area (Campi Flegrei, Italy).

    Science.gov (United States)

    Petrillo, Zaccaria; Vilardo, Giuseppe; Sansivero, Fabio; Mangiacapra, Annarita; Caliro, Stefano; Caputo, Teresa

    2017-04-01

Quantifying and monitoring the energy budgets of calderas, released as heat output during unrest periods, is crucial to understanding the state of activity and the system's evolution, and to drawing possible future eruptive scenarios. Campi Flegrei, a restless caldera in Southern Italy, has in recent years been showing clear signs of potential reawakening. It is therefore now more important than ever to consider, analyse and monitor all potential precursors contributing to the caldera's volcanic hazard assessment. We analysed continuous long-term (2006-2016) night-time TIR images collected at the Pisciarelli site. This volcanic area is located above a critical volume that recently showed an increase and clustering of the earthquake distribution, and it hosts the most impressive gas discharge (mainly H2O and CO2) at Campi Flegrei caldera. We treated the TIR images statistically, defining an anomaly zone that we compared to a background area. The pixel distributions, as a function of temperature, showed a generalized extreme value structure. The anomaly area, with a long tail toward high temperature values, showed a positive form factor (f > 0, Frechet distribution). This value remained above zero and stable over the whole 2006-2016 period, while the scale factor showed a decreasing trend (variance reduction). Pixels of the background TIR images, in contrast, showed a form factor that was zero or weakly negative (f = 0 or f < 0; Gumbel or Weibull distribution). We used the location parameter as representative of the temperature distribution (it lies very near the average temperature) and analysed its trend as a function of time, removing the annual variation with a 365.25-day moving average.
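The per-area GEV fitting described above can be sketched with SciPy (the temperatures below are synthetic, not the Pisciarelli TIR data; note that SciPy's shape parameter `c` has the opposite sign convention to the form factor f quoted in the abstract, so f = -c and f > 0 means Frechet):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# Synthetic stand-in for per-pixel temperatures in an anomaly area.
# True shape c = -0.2, i.e. form factor f = 0.2 > 0 (Frechet-like tail).
temps = genextreme.rvs(c=-0.2, loc=30.0, scale=2.0, size=5000, random_state=rng)

# Fit a GEV by maximum likelihood and recover the abstract's form factor.
c, loc, scale = genextreme.fit(temps)
f = -c
family = "Frechet" if f > 0 else ("Weibull" if f < 0 else "Gumbel")
print(f"form factor f = {f:.3f} -> {family}, location = {loc:.2f}, scale = {scale:.2f}")
```

The location parameter returned by the fit is the quantity the authors track through time after removing the annual cycle.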

  20. Incidence, treatment and recurrence of endometriosis in a UK-based population analysis using data from The Health Improvement Network and the Hospital Episode Statistics database.

    Science.gov (United States)

    Cea Soriano, Lucia; López-Garcia, Esther; Schulze-Rath, Renate; Garcia Rodríguez, Luis A

    2017-10-01

    This retrospective study used medical records from The Health Improvement Network (THIN) and Hospital Episode Statistics (HES) database to evaluate endometriosis (incidence, treatment and need for recurrent invasive procedures) in the general UK population. Women aged 12-54 years between January 2000 and December 2010, with a Read code for endometriosis, were identified in THIN. Cases were validated by manual review of free-text comments in medical records and responses to physician questionnaires. False-negative cases were identified among women with Read codes for hysterectomy or dysmenorrhea. Prescriptions of medical therapies for endometriosis were identified in THIN. Cases of single and recurrent invasive procedures were identified in women with medical records in both THIN and HES. Overall, 5087 women had a Read code for endometriosis, corresponding to an incidence of 1.02 (95% confidence interval [CI]: 0.99-1.05) per 1000 person-years. After case validation, the estimate was 1.46 (95% CI: 1.43-1.50) per 1000 person-years. Medical therapy was prescribed to 55.5% of women with endometriosis in the first year after diagnosis. In total, 48.3% of women received invasive treatment during the study period; approximately one-fifth of these women required further invasive treatment, mainly in the 3 years after the index procedure. Using Read codes as the only method to identify women with endometriosis underestimates incidence. Over half of women with recorded endometriosis are prescribed medical therapy in the first year after diagnosis. Women with diagnosed endometriosis are at risk of requiring recurrent invasive procedures.

  1. Statistical analysis of planktic foraminifera of the surface Continental ...

    African Journals Online (AJOL)

Planktic foraminiferal assemblages recorded from selected samples obtained from shallow continental shelf sediments off southwestern Nigeria were subjected to statistical analysis. Principal Component Analysis (PCA) was used to determine variants of planktic parameters. Values obtained for these parameters were ...

  2. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic

  3. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is

  4. PRECISE - pregabalin in addition to usual care: Statistical analysis plan

    NARCIS (Netherlands)

    S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)

    2016-01-01

    textabstractBackground: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double

  5. Comparison of Chemical Constituents in Scrophulariae Radix Processed by Different Methods based on UFLC-MS Combined with Multivariate Statistical Analysis.

    Science.gov (United States)

    Wang, Shengnan; Hua, Yujiao; Zou, Lisi; Liu, Xunhong; Yan, Ying; Zhao, Hui; Luo, Yiyuan; Liu, Juanxiu

    2018-02-01

Scrophulariae Radix is one of the most popular traditional Chinese medicines (TCMs). Primary processing of Scrophulariae Radix is an important step that is closely related to the quality of products of this TCM. The aim of this study is to explore the influence of different processing methods on the chemical constituents of Scrophulariae Radix. Differences in the chemical constituents of Scrophulariae Radix processed by different methods were analyzed using ultra-fast liquid chromatography-triple quadrupole-time-of-flight mass spectrometry coupled with principal component analysis and orthogonal partial least squares discriminant analysis. Furthermore, the contents of 12 differential index constituents in Scrophulariae Radix processed by different methods were simultaneously determined using ultra-fast liquid chromatography coupled with triple quadrupole-linear ion trap mass spectrometry. Gray relational analysis was performed to evaluate the differently processed samples according to the contents of the 12 constituents. All of the results demonstrated that the quality of Scrophulariae Radix processed by the "sweating" method was better. This study provides basic information for revealing how the chemical constituents of Scrophulariae Radix change under different processing methods and facilitates selection of a suitable processing method for this TCM. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
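The discrepancy such a divergence engine measures can be illustrated in a few lines (here in Python with SciPy rather than the VTK C++ engine; the binning and distributions are invented for illustration):

```python
import numpy as np
from scipy.stats import norm, entropy

rng = np.random.default_rng(6)
# Observed empirical distribution: a histogram of 10,000 samples that
# deviate slightly from the theoretical "ideal" standard normal.
samples = rng.normal(loc=0.1, scale=1.1, size=10_000)
edges = np.linspace(-5, 5, 51)
observed, _ = np.histogram(samples, bins=edges)

# Theoretical distribution: probability mass of a standard normal in the
# same bins (this mirrors the engine's goal, not its VTK API).
ideal = np.diff(norm.cdf(edges))

# Kullback-Leibler divergence as one distance-like discrepancy measure;
# scipy.stats.entropy normalizes the observed counts to a distribution.
kl = entropy(observed + 1e-12, ideal + 1e-12)
print(f"KL divergence: {kl:.4f}")
```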

  7. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  8. Statistical lamb wave localization based on extreme value theory

    Science.gov (United States)

    Harley, Joel B.

    2018-04-01

Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of these images typically represents an estimated damage location. Yet, it is often unclear whether this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant-false-alarm-rate statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75-cm-diameter holes. Our results show the expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
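A minimal sketch of a CFAR-style, extreme-value-motivated detection threshold (simulated noise-only localization images; this illustrates the idea of thresholding the image maximum, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(5)
# Baseline (damage-free) localization images: their maximum pixel values
# follow an extreme-value law; here we simulate 200 noise-only 50x50 images.
baseline_max = np.array([rng.normal(size=(50, 50)).max() for _ in range(200)])

# Constant-false-alarm-rate threshold: the empirical 99th percentile of the
# noise-only image maxima (i.e. a 1% false-alarm rate on the image maximum).
threshold = np.quantile(baseline_max, 0.99)

# A new image whose peak exceeds the threshold is flagged as damage.
test_image = rng.normal(size=(50, 50))
test_image[25, 25] += 10.0  # injected "damage" response at a known pixel
print(test_image.max() > threshold)
```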

  9. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

Interviews and opinion surveys are instruments used to assess the safety culture of an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
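The tests listed above are all available in SciPy; a hedged sketch on invented Likert-style survey scores (not the survey's data; note that a KS test against a normal with parameters estimated from the same sample is only approximate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Hypothetical mean survey scores for three groups of respondents.
a = rng.normal(4.0, 0.5, 40)
b = rng.normal(3.4, 0.5, 40)
c = rng.normal(4.1, 0.5, 40)

# Kolmogorov-Smirnov check of one group against a standardized normal.
ks = stats.kstest((a - a.mean()) / a.std(ddof=1), "norm")
# Two-sample Student's t-test between two groups.
t = stats.ttest_ind(a, b)
# One-way ANOVA comparing the means of all three groups.
f = stats.f_oneway(a, b, c)
print(f"KS p={ks.pvalue:.3f}, t-test p={t.pvalue:.4f}, ANOVA p={f.pvalue:.4f}")
```

A post-hoc procedure such as the LSD test would then identify which pair of groups drives a significant ANOVA result.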

  10. Longitudinal data analysis a handbook of modern statistical methods

    CERN Document Server

    Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert

    2008-01-01

    Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint

  11. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  12. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...

  13. Near-infrared Raman spectroscopy to detect anti-Toxoplasma gondii antibody in blood sera of domestic cats: quantitative analysis based on partial least-squares multivariate statistics

    Science.gov (United States)

    Duarte, Janaína; Pacheco, Marcos T. T.; Villaverde, Antonio Balbin; Machado, Rosangela Z.; Zângaro, Renato A.; Silveira, Landulfo

    2010-07-01

    Toxoplasmosis is an important zoonosis in public health because domestic cats are the main agents responsible for the transmission of this disease in Brazil. We investigate a method for diagnosing toxoplasmosis based on Raman spectroscopy. Dispersive near-infrared Raman spectra are used to quantify anti-Toxoplasma gondii (IgG) antibodies in blood sera from domestic cats. An 830-nm laser is used for sample excitation, and a dispersive spectrometer is used to detect the Raman scattering. A serological test is performed in all serum samples by the enzyme-linked immunosorbent assay (ELISA) for validation. Raman spectra are taken from 59 blood serum samples and a quantification model is implemented based on partial least squares (PLS) to quantify the sample's serology by Raman spectra compared to the results provided by the ELISA test. Based on the serological values provided by the Raman/PLS model, diagnostic parameters such as sensitivity, specificity, accuracy, positive prediction values, and negative prediction values are calculated to discriminate negative from positive samples, obtaining 100, 80, 90, 83.3, and 100%, respectively. Raman spectroscopy, associated with the PLS, is promising as a serological assay for toxoplasmosis, enabling fast and sensitive diagnosis.

  14. A STATISTICAL ANALYSIS OF LARYNGEAL MALIGNANCIES AT OUR INSTITUTION

    Directory of Open Access Journals (Sweden)

    Bharathi Mohan Mathan

    2017-03-01

    Full Text Available BACKGROUND Malignancies of larynx are an increasing global burden with a distribution of approximately 2-5% of all malignancies with an incidence of 3.6/1,00,000 for men and 1.3/1,00,000 for women with a male-to-female ratio of 4:1. Smoking and alcohol are major established risk factors. More than 90-95% of all malignancies are squamous cell type. Three main subsite of laryngeal malignancies are glottis, supraglottis and subglottis. Improved surgical techniques and advanced chemoradiotherapy has increased the overall 5 year survival rate. The above study is statistical analysis of laryngeal malignancies at our institution for a period of one year and analysis of pattern of distribution, aetiology, sites and subsites and causes for recurrence. MATERIALS AND METHODS Based on the statistical data available in the institution for the period of one year from January 2016-December 2016, all laryngeal malignancies were analysed with respect to demographic pattern, age, gender, site, subsite, aetiology, staging, treatment received and probable cause for failure of treatment. Patients were followed up for 12 months period during the study. RESULTS Total number of cases studied are 27 (twenty seven. Male cases are 23 and female cases are 4, male-to-female ratio is 5.7:1, most common age is above 60 years, most common site is supraglottis, most common type is moderately-differentiated squamous cell carcinoma, most common cause for relapse or recurrence is advanced stage of disease and poor differentiation. CONCLUSION The commonest age occurrence at the end of the study is above 60 years and male-to-female ratio is 5.7:1, which is slightly above the international standards. Most common site is supraglottis and not glottis. The relapse and recurrences are higher compared to the international standards.

  15. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrast. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined to a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment and the methods are compared to conjunction analysis approaches for test-statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test-statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
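Under the simplifying assumption that two PPMs are independent, the propositional combinations reduce to elementary probability rules (a toy sketch, not the paper's GLM-based machinery):

```python
import numpy as np

# Two posterior probability maps (PPMs), e.g. from two sessions: each entry
# is P(effect present) at a voxel. Values are illustrative.
ppm_a = np.array([0.95, 0.60, 0.10, 0.99])
ppm_b = np.array([0.90, 0.70, 0.20, 0.50])

# Voxel-wise probabilities of logical expressions, assuming independence:
conj = ppm_a * ppm_b                    # P(A and B)
disj = ppm_a + ppm_b - ppm_a * ppm_b    # P(A or B)
impl = 1 - ppm_a * (1 - ppm_b)          # P(A implies B) = P(not A or B)
print(np.round(conj, 3), np.round(disj, 3), np.round(impl, 3))
```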

  16. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

    International Nuclear Information System (INIS)

    Tadaki, Kohtaro

    2010-01-01

The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed in our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as the partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T in (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
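For orientation, the central object in this interpretation is a temperature-deformed partition function over the domain of an optimal prefix-free machine U (a sketch following the cited works; at T = 1 it reduces to Chaitin's halting probability Ω):

```latex
Z(T) = \sum_{p \,\in\, \mathrm{dom}\,U} 2^{-|p|/T}, \qquad 0 < T \le 1,
```

with the other thermodynamic quantities F(T), E(T), and S(T) defined from Z(T) by direct analogy with their statistical-mechanics counterparts.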

  17. Classification of Malaysia aromatic rice using multivariate statistical analysis

    International Nuclear Information System (INIS)

    Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A.; Omar, O.

    2015-01-01

Aromatic rice (Oryza sativa L.) is considered the best-quality premium rice. Its varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice because of the special growth conditions it requires, for instance a specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, human sensory panels have significant drawbacks, such as lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. GC-MS analysis, on the other hand, requires detailed procedures and lengthy analysis, and is quite costly. This paper presents the application of an in-house-developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The samples were taken from several rice varieties. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots of the rice shows that the instrument was able to separate the samples into distinct clusters. The results of LDA and KNN, with low misclassification error, support these findings, and we may conclude that the e-nose was successfully applied to the classification of the aromatic rice varieties.

  18. Classification of Malaysia aromatic rice using multivariate statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A. [School of Mechatronic Engineering, Universiti Malaysia Perlis, Kampus Pauh Putra, 02600 Arau, Perlis (Malaysia); Omar, O. [Malaysian Agriculture Research and Development Institute (MARDI), Persiaran MARDI-UPM, 43400 Serdang, Selangor (Malaysia)

    2015-05-15

Aromatic rice (Oryza sativa L.) is considered the best-quality premium rice. Its varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice because of the special growth conditions it requires, for instance a specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, human sensory panels have significant drawbacks, such as lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. GC-MS analysis, on the other hand, requires detailed procedures and lengthy analysis, and is quite costly. This paper presents the application of an in-house-developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The samples were taken from several rice varieties. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots of the rice shows that the instrument was able to separate the samples into distinct clusters. The results of LDA and KNN, with low misclassification error, support these findings, and we may conclude that the e-nose was successfully applied to the classification of the aromatic rice varieties.

  19. Classification of Malaysia aromatic rice using multivariate statistical analysis

    Science.gov (United States)

    Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A.; Omar, O.

    2015-05-01

Aromatic rice (Oryza sativa L.) is considered the best-quality premium rice. Its varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice because of the special growth conditions it requires, for instance a specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, human sensory panels have significant drawbacks, such as lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. GC-MS analysis, on the other hand, requires detailed procedures and lengthy analysis, and is quite costly. This paper presents the application of an in-house-developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The samples were taken from several rice varieties. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots of the rice shows that the instrument was able to separate the samples into distinct clusters. The results of LDA and KNN, with low misclassification error, support these findings, and we may conclude that the e-nose was successfully applied to the classification of the aromatic rice varieties.
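The LDA + KNN pipeline with Leave-One-Out validation described above can be sketched with scikit-learn (hypothetical e-nose sensor readings, not the study's data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
# Hypothetical e-nose readings: 3 rice varieties x 20 samples x 8 sensors,
# each variety with a distinct mean sensor response (illustrative values).
X = np.vstack([rng.normal(loc=mu, scale=0.3, size=(20, 8))
               for mu in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 20)

# LDA projection followed by KNN, validated with Leave-One-Out as in the paper.
clf = make_pipeline(LinearDiscriminantAnalysis(n_components=2),
                    KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.3f}")
```

PCA would play the analogous role of projecting the sensor array readings into a low-dimensional space for the cluster plots.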

  20. Electromagnetic attenuation of eight earthquakes registered in Mexico using FFT-based spectrum and t-test statistical analysis for ULF Q-R ratios signals

    Directory of Open Access Journals (Sweden)

    Omar Chavez

    2016-07-01

Full Text Available A method to improve the detection of seismo-magnetic signals is presented herein. Eight events registered during 24-hour periods of seismic activity were analyzed and compared with non-seismic periods of the same duration. The distance between the earthquakes (EQs) and the ultra-low frequency detector is ρ = 1.8 × 10^(0.45M), where M is the magnitude of the EQ reported by the National Seismological Service of Mexico over a period of three years. An improved fast Fourier transform analysis in the form of the ratio of the vertical magnetic field component to the horizontal one (Q = Bz/Bx) has been developed. There are important differences between the frequencies obtained during the days of seismic activity compared with those with no seismic activity.
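
The Q = Bz/Bx spectral ratio can be sketched with NumPy's FFT as below; the two channels are synthetic signals with an assumed anomaly frequency, not the recorded geomagnetic data:

```python
# Illustrative sketch (not the authors' exact pipeline): build the
# Q = Bz/Bx ratio from FFT amplitude spectra of two synthetic ULF channels.
import numpy as np

fs = 1.0                         # 1 Hz sampling, 24 h of data
t = np.arange(24 * 3600) / fs

# Synthetic channels: a common background tone plus a weak vertical-component
# anomaly at 0.01 Hz standing in for a seismo-magnetic signal.
rng = np.random.default_rng(1)
bx = np.sin(2 * np.pi * 0.02 * t) + 0.1 * rng.standard_normal(t.size)
bz = 0.5 * np.sin(2 * np.pi * 0.02 * t) + 0.3 * np.sin(2 * np.pi * 0.01 * t) \
     + 0.1 * rng.standard_normal(t.size)

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
Bx = np.abs(np.fft.rfft(bx))
Bz = np.abs(np.fft.rfft(bz))

eps = 1e-12                      # avoid division by zero in quiet bins
Q = Bz / (Bx + eps)

# The anomaly frequency stands out in the ratio spectrum.
band = (freqs > 0.009) & (freqs < 0.011)
print("max Q in the 0.01 Hz band:", Q[band].max())
```

Comparing such Q spectra between seismically active and quiet 24-hour windows is the comparison the abstract describes.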

  1. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  2. Software development for statistical handling of dosimetric and epidemiological data base

    International Nuclear Information System (INIS)

    Amaro, M.

    1990-01-01

The dose records from different groups of occupationally exposed workers are available in a computerized data base whose main purpose is individual dose follow-up. Apart from this objective, such a dosimetric data base can be useful for statistical analysis. The statistical information that can be extracted from the data base may serve mainly two kinds of objectives: individual and collective dose distributions and statistics, and epidemiological statistics. The report describes the software developed to obtain the statistical reports required by the Regulatory Body, as well as any other type of dose distributions or statistics to be included in epidemiological studies. A User's Guide for the operators who handle this software package, and the code listings, are also included in the report. (Author) 2 refs

  3. Software development for statistical handling of dosimetric and epidemiological data base

    International Nuclear Information System (INIS)

    Amaro, M.

    1990-01-01

The dose records from different groups of occupationally exposed workers are available in a computerized data base whose main purpose is individual dose follow-up. Apart from this objective, such a dosimetric data base can be useful for statistical analysis. The statistical information that can be extracted from the data base may serve mainly two kinds of objectives: individual and collective dose distributions and statistics, and epidemiological statistics. The report describes the software developed to obtain the statistical reports required by the Regulatory Body, as well as any other type of dose distributions or statistics to be included in epidemiological studies. A User's Guide for the operators who handle this software package, and the code listings, are also included in the report. (Author)

  4. Statistical inference for the lifetime performance index based on generalised order statistics from exponential distribution

    Science.gov (United States)

    Vali Ahmadi, Mohammad; Doostparast, Mahdi; Ahmadi, Jafar

    2015-04-01

In manufacturing industries, the lifetime of an item is usually characterised by a random variable X and considered satisfactory if X exceeds a given lower lifetime limit L. The probability of a satisfactory item is then ηL := P(X ≥ L), called the conforming rate. In industrial companies, however, the lifetime performance index, proposed by Montgomery and denoted by CL, is widely used as a process capability index instead of the conforming rate. Assuming a parametric model for the random variable X, we show that there is a connection between the conforming rate and the lifetime performance index; consequently, statistical inferences about ηL and CL are equivalent. Hence, we restrict ourselves to statistical inference for CL based on generalised order statistics, which contain several ordered data models such as usual order statistics, progressively Type-II censored data and records. Various point and interval estimators for the parameter CL are obtained, and optimal critical regions for hypothesis testing problems concerning CL are proposed. Finally, two real data-sets on the lifetimes of insulating fluid and ball bearings, due to Nelson (1982) and Caroni (2002), respectively, and a simulated sample are analysed.
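
For the exponential special case, the connection between the conforming rate and the lifetime performance index can be verified numerically. The relation η_L = exp(C_L − 1) used below follows from C_L = 1 − L/θ and η_L = exp(−L/θ) for an exponential distribution with mean θ; all numbers are illustrative:

```python
# Sketch of the conforming-rate / C_L connection for the exponential model:
# with mean theta, eta_L = exp(-L/theta) and C_L = 1 - L/theta, hence
# eta_L = exp(C_L - 1). theta is estimated by the sample mean (its MLE
# for a complete, uncensored sample).
import math
import random

random.seed(42)
theta_true, L = 10.0, 2.0

sample = [random.expovariate(1 / theta_true) for _ in range(5000)]
theta_hat = sum(sample) / len(sample)

C_L = 1 - L / theta_hat                 # estimated lifetime performance index
eta_direct = math.exp(-L / theta_hat)   # estimated conforming rate
eta_via_CL = math.exp(C_L - 1)          # same quantity recovered through C_L

print(f"C_L = {C_L:.3f}, eta_L = {eta_direct:.4f}")
```

The two routes to η_L agree exactly, which is the equivalence of inferences the abstract refers to.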

  5. A statistical frame based TDMA protocol for human body communication.

    Science.gov (United States)

    Nie, Zedong; Li, Zhao; Huang, Renwei; Liu, Yuhang; Li, Jingzhen; Wang, Lei

    2015-07-09

Human body communication (HBC) uses the human body as the transmission medium and has been regarded as one of the most promising short-range communication schemes for wireless body area networks (WBAN). Compared to traditional wireless networks, HBC-based WBAN faces two challenges: (1) its sensor nodes should be energy-efficient, since it is inconvenient to replace or recharge their batteries; (2) the coordinator should be able to react dynamically and rapidly to burst traffic triggered by sensing events. Such burst traffic conditions include vital physical signal monitoring (electrocardiogram, electroencephalogram, etc.), human motion detection (fall detection, activity monitoring, gesture recognition, motion sensing, etc.) and so on. To cope with these challenges, a statistical frame based TDMA (S-TDMA) protocol with multi-constrained (energy, delay, transmission efficiency and emergency management) service is proposed in this paper. The proposed S-TDMA handles scenarios where burst traffic is triggered rapidly, with low power consumption and low delay. A beacon frame containing synchronization and poll information is designed to reduce the possibility of collisions of request frames. A statistical frame, which broadcasts the unified scheduling information, is adopted to avoid packet collisions, idle listening and overhearing. A dynamic time slot allocation mechanism is presented to manage burst traffic and reduce the active period in each beacon period. An emergency mechanism is proposed so that vital signals can be transmitted promptly. A theoretical analysis was carried out and the results were evaluated on a hardware platform. To verify its feasibility, S-TDMA was fully implemented on our independently developed HBC platform, where four sensor nodes and a coordinator are fastened on a human body. Experimental results show that S-TDMA costs 89.397 mJ every 20 s when the payload size is 122 bytes, 9.51% lower than Lightweight MAC
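
A toy version of dynamic slot allocation with emergency priority, loosely inspired by (not reproducing) the S-TDMA scheduling described above, might look like:

```python
# Toy sketch of dynamic TDMA slot allocation with an emergency queue.
# Node names, slot counts and the allocation policy are all hypothetical,
# not the published S-TDMA protocol.
from collections import deque

def allocate_slots(requests, n_slots):
    """Serve emergency nodes first, then normal nodes in FIFO order.

    requests: list of (node_id, n_needed, is_emergency) tuples.
    Returns {node_id: [slot indices]}; anything unserved waits for the
    next beacon period.
    """
    queue = deque(sorted(requests, key=lambda r: not r[2]))  # emergencies first
    schedule, next_slot = {}, 0
    while queue and next_slot < n_slots:
        node, needed, _ = queue.popleft()
        take = min(needed, n_slots - next_slot)
        schedule[node] = list(range(next_slot, next_slot + take))
        next_slot += take
    return schedule

sched = allocate_slots(
    [("ecg", 3, False), ("fall", 2, True), ("gesture", 4, False)], 8)
print(sched)   # the emergency node "fall" is served before the others
```

Because Python's sort is stable, non-emergency requests keep their arrival order behind the emergency ones.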

  6. Gender Inequity Associated with Increased Child Physical Abuse and Neglect: a Cross-Country Analysis of Population-Based Surveys and Country-Level Statistics.

    Science.gov (United States)

    Klevens, Joanne; Ports, Katie A

    2017-11-01

Gender inequity is proposed as a societal-level risk factor for child maltreatment. However, most cross-national research examining this association is limited to developing countries, has used limited measures of gender inequity, and has relied on child homicides as a proxy for child maltreatment. To examine the relationship between gender inequity and child maltreatment, we used caregivers' reported use of severe physical punishment (a proxy for physical abuse), reports of children under 5 left alone or in the care of another child younger than 10 years of age (supervisory neglect), and three indices of gender inequity (the Social and Institutional Gender Index, the Gender Inequality Index, and the Gender Gap Index) from 57 countries, over half of which were developing countries. We found all three gender inequity indices to be significantly associated with physical abuse, and two of the three to be significantly associated with neglect, after controlling for country-level development. Based on these findings, efforts to prevent child abuse and neglect might benefit from reducing gender inequity.

  7. Predicting Smoking Status Using Machine Learning Algorithms and Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Charles Frank

    2018-03-01

Full Text Available Smoking has been proven to negatively affect health in a multitude of ways. As of 2009, smoking has been considered the leading cause of preventable morbidity and mortality in the United States, continuing to plague the country's overall health. This study investigates the viability and effectiveness of several machine learning algorithms for predicting the smoking status of patients based on their blood tests and vital sign readings. The analysis is divided into two parts. In part 1, we use one-way ANOVA with the SAS tool to show the statistically significant difference in blood test readings between smokers and non-smokers. The results show that the difference in INR, which measures the effectiveness of anticoagulants, was significant in favor of non-smokers, which further confirms the health risks associated with smoking. In part 2, we use five machine learning algorithms: Naïve Bayes, MLP, logistic regression, J48 and Decision Table to predict the smoking status of patients. To compare the effectiveness of these algorithms we use Precision, Recall, F-measure and Accuracy. The results show that the logistic regression classifier outperformed the four other algorithms with Precision, Recall, F-measure, and Accuracy of 83%, 83.4%, 83.2%, and 83.44%, respectively.
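
A minimal sketch of the part-2 classification step, using a hand-rolled logistic regression on synthetic stand-ins for the blood-test features and computing the same four metrics:

```python
# Sketch: a minimal logistic-regression classifier plus the Precision /
# Recall / F-measure / Accuracy comparison used in the study. The data
# are synthetic stand-ins for blood-test readings.
import numpy as np

rng = np.random.default_rng(7)
n = 400
X = rng.standard_normal((n, 3))
true_w = np.array([2.0, -1.0, 0.5])          # hypothetical effect weights
y = (X @ true_w + 0.5 * rng.standard_normal(n) > 0).astype(float)

# Batch gradient descent on the log-loss.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * (p - y).mean()

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
tp = ((pred == 1) & (y == 1)).sum()
fp = ((pred == 1) & (y == 0)).sum()
fn = ((pred == 0) & (y == 1)).sum()

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)
accuracy = (pred == y).mean()
print(f"P={precision:.2f} R={recall:.2f} F={f_measure:.2f} Acc={accuracy:.2f}")
```

The four metrics are exactly those the study reports; the numbers here depend only on the synthetic data, not on the patients' records.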

  8. Criminal victimization in Ukraine: analysis of statistical data

    Directory of Open Access Journals (Sweden)

    Serhiy Nezhurbida

    2007-12-01

Full Text Available The article is based on the analysis of statistical data provided by law-enforcement, judicial and other bodies of Ukraine. This analysis allows us to quantify accurately the current status of criminal victimization in Ukraine and to characterize its basic features (level, rate, structure, dynamics, etc.).

  9. FTree query construction for virtual screening: a statistical analysis.

    Science.gov (United States)

    Gerlach, Christof; Broughton, Howard; Zaliani, Andrea

    2008-02-01

    FTrees (FT) is a known chemoinformatic tool able to condense molecular descriptions into a graph object and to search for actives in large databases using graph similarity. The query graph is classically derived from a known active molecule, or a set of actives, for which a similar compound has to be found. Recently, FT similarity has been extended to fragment space, widening its capabilities. If a user were able to build a knowledge-based FT query from information other than a known active structure, the similarity search could be combined with other, normally separate, fields like de-novo design or pharmacophore searches. With this aim in mind, we performed a comprehensive analysis of several databases in terms of FT description and provide a basic statistical analysis of the FT spaces so far at hand. Vendors' catalogue collections and MDDR as a source of potential or known "actives", respectively, have been used. With the results reported herein, a set of ranges, mean values and standard deviations for several query parameters are presented in order to set a reference guide for the users. Applications on how to use this information in FT query building are also provided, using a newly built 3D-pharmacophore from 57 5HT-1F agonists and a published one which was used for virtual screening for tRNA-guanine transglycosylase (TGT) inhibitors.

  10. A statistical design for testing apomictic diversification through linkage analysis.

    Science.gov (United States)

    Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling

    2014-03-01

The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels, from genotypes to species. The design is based on two reciprocal crosses between two individuals, each chosen from a hermaphrodite or monoecious species. A multinomial likelihood is constructed by combining marker information from the two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between the two parental plant populations or species. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.

  11. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

This paper describes current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates they can explain. We also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
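
The regression step can be illustrated with ordinary least squares on invented data; the predictors (workload, novelty, files radiated) mirror the variables named above, but the coefficients and dataset are hypothetical:

```python
# Sketch of the multiple-regression step: fit error counts against workload,
# novelty and files radiated with ordinary least squares (np.linalg.lstsq).
# The data are invented; the mission dataset itself is not public here.
import numpy as np

rng = np.random.default_rng(3)
n = 60
workload = rng.uniform(0, 10, n)
novelty = rng.uniform(0, 5, n)
files_radiated = rng.integers(5, 50, n)

# Assume errors grow with workload and novelty, plus noise.
errors = 0.4 * workload + 0.8 * novelty + 0.02 * files_radiated \
         + rng.normal(0, 0.5, n)

A = np.column_stack([np.ones(n), workload, novelty, files_radiated])
coef, *_ = np.linalg.lstsq(A, errors, rcond=None)
print("intercept, workload, novelty, files coefficients:", coef.round(3))
```

The fitted coefficients recover the assumed drivers; in the paper's setting the same fit flags which variables actually drive the error rate.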

  12. Recurrence time statistics: versatile tools for genomic DNA sequence analysis.

    Science.gov (United States)

    Cao, Yinhe; Tung, Wen-Wen; Gao, J B

    2004-01-01

With the completion of the human and a few model organisms' genomes, and the genomes of many other organisms waiting to be sequenced, it has become increasingly important to develop faster computational tools capable of easily identifying structures and extracting features from DNA sequences. One of the more important classes of structures in a DNA sequence is repeat-related. Often these have to be masked before protein-coding regions along a DNA sequence are identified or redundant expressed sequence tags (ESTs) are sequenced. Here we report a novel recurrence time based method for sequence analysis. The method can conveniently study all kinds of periodicity and exhaustively find all repeat-related features in a genomic DNA sequence. An efficient codon index is also derived from the recurrence time statistics, which has the salient features of being largely species-independent and working well on very short sequences. Efficient codon indices are key elements of successful gene-finding algorithms, and are particularly useful for determining whether a suspected EST belongs to a coding or non-coding region. We illustrate the power of the method by studying the genomes of E. coli, the yeast S. cerevisiae, the nematode worm C. elegans, and the human, Homo sapiens. Computationally, our method is very efficient. It allows us to carry out analysis of genomes on the whole genomic scale with a PC.
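
The core recurrence-time computation can be sketched in a few lines; the sequence below is a toy construction with a planted 9-base repeat, not a real genome:

```python
# Sketch of the core idea: the recurrence times of a k-mer are the gaps
# between its successive occurrences along the sequence; periodic repeats
# show up as sharply peaked recurrence-time distributions.
def recurrence_times(seq, kmer):
    """Return gaps (in bases) between successive occurrences of kmer."""
    positions, k = [], len(kmer)
    for i in range(len(seq) - k + 1):
        if seq[i:i + k] == kmer:
            positions.append(i)
    return [b - a for a, b in zip(positions, positions[1:])]

# A toy sequence with "ACG" planted every 9 bases: the repeat period is
# recovered directly from the recurrence-time distribution.
seq = "ACGTTTCTT" * 20
gaps = recurrence_times(seq, "ACG")
print(set(gaps))   # {9}
```

On a real genome the distribution of gaps (rather than a single value) reveals the mixture of periodicities present.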

  13. Performance assessment and beamline diagnostics based on evaluation of temporal information from infrared spectral datasets by means of R Environment for statistical analysis.

    Science.gov (United States)

    Banas, Krzysztof; Banas, Agnieszka; Gajda, Mariusz; Kwiatek, Wojciech M; Pawlicki, Bohdan; Breese, Mark B H

    2014-07-15

Assessment of performance and up-to-date diagnostics of scientific equipment is one of the key components of contemporary laboratories. The most reliable checks are performed by real test experiments while varying the experimental conditions (typically, in the case of infrared spectroscopic measurements, the size of the beam aperture, the duration of the experiment, the spectral range, the scanner velocity, etc.). On the other hand, the stability of the instrument response in time is another element of great value. Source stability (or easily predictable temporal changes, similar to those observed with synchrotron radiation-based sources working in non-top-up mode) and detector stability (especially for liquid nitrogen- or liquid helium-cooled detectors) should be monitored. In these cases, recorded datasets (spectra) include additional variables such as the time stamp at which a particular spectrum was recorded (in the case of time-trial experiments). A favorable approach to evaluating these data is to build a hyperspectral object that consists of all spectra and all additional parameters at which these spectra were recorded. Taking into account that these datasets can be considerably large, there is a need for tools for semiautomatic data evaluation and information extraction. The Comprehensive R Archive Network and the open-source R Environment, with their flexibility and growing potential, fit these requirements nicely. In this paper, examples of the practical implementation of methods available in R for real-life Fourier transform infrared (FTIR) spectroscopic data problems are presented. This approach could easily be adapted to many other laboratory scenarios and spectroscopic techniques.

  14. Measuring Rice Farmer’s Pesticide Overuse Practice and the Determinants: A Statistical Analysis Based on Data Collected in Jiangsu and Anhui Provinces of China

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2018-03-01

Full Text Available Understanding the extent of pesticide overuse and what drives rice farmers to overuse pesticides in agricultural production, both theoretically and empirically, is imperative for increasing farmers' incomes and promoting agricultural transformation and sustainable development. In this paper, we examine the phenomenon and pattern of pesticide overuse based on data collected from 861 rice farmers in Jiangsu and Anhui, two provinces in China. By applying the Cobb-Douglas (C-D) production function and the damage control model, we estimated the marginal productivity of pesticides. We also adopted a binary Probit model to further explore the factors leading to pesticide overuse among farmers. Our findings suggest that the marginal productivity of pesticides is close to zero, indicating excessive use of pesticides in the surveyed areas. From the binary Probit model, we also discovered that female farmers, farmers with knowledge of pesticide toxicity and pesticide residues, farmers who hold the view that massive use of pesticides harms the environment, and farmers who participate in pesticide training organized by the government are more likely to overuse pesticides. On the contrary, experienced farmers have a lower chance of overusing pesticides. Possible explanations for these findings are that applying pesticides in accordance with the label instructions can itself lead to overuse, and that loss-averse farmers, seeking to avoid the income losses that disease and insect pests may cause and to keep their incomes stable, still increase pesticide application. This also indicates that farmers are insensitive to increasing pesticide overuse.
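
The marginal-productivity estimate from a Cobb-Douglas specification can be sketched as a log-log OLS fit; all data, and the deliberately tiny elasticity, are invented to mimic the paper's near-zero finding:

```python
# Sketch of the estimation logic: fit a log-log (Cobb-Douglas) production
# function y = A * x^beta by OLS, then recover the marginal productivity
# of pesticide input as MP = beta * y / x evaluated at the sample means.
import numpy as np

rng = np.random.default_rng(11)
pesticide = rng.uniform(1, 20, 200)            # hypothetical input levels
# Tiny true elasticity: output barely responds to extra pesticide.
output = 5000 * pesticide ** 0.02 * np.exp(rng.normal(0, 0.05, 200))

X = np.column_stack([np.ones(200), np.log(pesticide)])
(intercept, beta), *_ = np.linalg.lstsq(X, np.log(output), rcond=None)

mp = beta * output.mean() / pesticide.mean()   # marginal product at the means
print(f"elasticity={beta:.3f}, marginal productivity={mp:.1f}")
```

A near-zero elasticity translates into a near-zero marginal product, which is the paper's evidence of overuse.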

  15. Data analysis using the Gnu R system for statistical computation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, James; /Fermilab

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.

  16. What type of statistical model to choose for the analysis of radioimmunoassays

    International Nuclear Information System (INIS)

    Huet, S.

    1984-01-01

The current techniques used for statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to make the response curve linear to avoid complicated computations. The present article shows that this practice has considerable effects (often neglected) on the statistical assumptions which must be formulated. A more rigorous analysis is proposed by applying the four-parameter logistic model. The advantages of this method are: the statistical assumptions formulated are based on observed data, and the model can be applied to almost all radioimmunoassays. [fr]
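
A sketch of the four-parameter logistic model and its inverse, which is how a concentration is read off the calibration curve; the parameter values are illustrative:

```python
# Sketch of the four-parameter logistic (4PL) response model mentioned
# above, with its analytic inverse used to recover a concentration from
# a measured response. Parameter values are illustrative, not fitted.
def logistic4(x, a, b, c, d):
    """4PL response: d + (a - d) / (1 + (x / c) ** b)."""
    return d + (a - d) / (1 + (x / c) ** b)

def inverse4(y, a, b, c, d):
    """Concentration giving response y under the same 4PL parameters."""
    return c * ((a - d) / (y - d) - 1) ** (1 / b)

a, b, c, d = 2.0, 1.3, 50.0, 0.05   # upper asymptote, slope, EC50, lower asymptote
x = 20.0
y = logistic4(x, a, b, c, d)
x_back = inverse4(y, a, b, c, d)
print(f"response={y:.4f}, recovered concentration={x_back:.4f}")
```

The round trip x → y → x is exact up to floating-point error, which is what makes the 4PL curve usable for calibration without linearizing it first.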

  17. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful

  18. A κ-generalized statistical mechanics approach to income analysis

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2009-02-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
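
The κ-exponential building block of this model can be sketched directly; the parameter values below are illustrative, not the fitted US income values:

```python
# Sketch of the kappa-generalized building block: the kappa-exponential
# exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k), which reduces to exp(x) as
# k -> 0, and the model survival function P(X > x) = exp_k(-beta * x^alpha).
import math

def exp_kappa(x, kappa):
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

def survival(x, alpha, beta, kappa):
    """kappa-generalized survival function (CCDF) for income x >= 0."""
    return exp_kappa(-beta * x**alpha, kappa)

# As kappa -> 0 the kappa-exponential approaches the ordinary exponential.
print(exp_kappa(1.0, 1e-6), math.exp(1.0))
# Illustrative tail value; for large x the survival decays as a power law.
print(survival(10.0, 2.0, 0.5, 0.75))
```

The deformation parameter κ governs the Pareto power-law tail, while α and β shape the low-to-middle income region, matching the abstract's description of the distribution's two regimes.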

  19. Statistical Analysis of Data with Non-Detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.

    2004-08-26

Environmental exposure measurements are, in general, positive and may be subject to left censoring, i.e. the measured value is less than a "limit of detection". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. A basic problem of interest in environmental risk assessment is to determine if the mean concentration of an analyte is less than a prescribed action level. Parametric methods, used to determine acceptable levels of exposure, are often based on a two-parameter lognormal distribution. The mean exposure level and/or an upper percentile (e.g. the 95th percentile) are used to characterize exposure levels, and upper confidence limits are needed to describe the uncertainty in these estimates. In certain situations it is of interest to estimate the probability of observing a future (or "missed") value of a lognormal variable. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left-censored lognormal data are described, and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left-censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on the 95th percentile (i.e. the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known but computational complexity has limited their use in routine data analysis with left-censored data. The recent development of the R environment for statistical
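
The censored-lognormal likelihood described above can be sketched as follows: non-detects contribute the log CDF at the detection limit, detects contribute the log density, and a crude grid search stands in for a proper optimizer:

```python
# Sketch: maximum-likelihood fit of a lognormal to left-censored data.
# All data are simulated; the grid search is a stand-in for a real optimizer.
import math
import random

def norm_logpdf(z):
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def norm_logcdf(z):
    # Phi(z) via erfc for numerical stability in the left tail.
    return math.log(max(0.5 * math.erfc(-z / math.sqrt(2)), 1e-300))

def loglik(mu, sigma, detects, n_censored, lod):
    """Censored lognormal log-likelihood: detects contribute the log
    density, non-detects the log CDF at the limit of detection."""
    ll = n_censored * norm_logcdf((math.log(lod) - mu) / sigma)
    for x in detects:
        z = (math.log(x) - mu) / sigma
        ll += norm_logpdf(z) - math.log(sigma * x)
    return ll

random.seed(9)
mu_true, sigma_true, lod = 1.0, 0.8, 1.5
data = [math.exp(random.gauss(mu_true, sigma_true)) for _ in range(400)]
detects = [x for x in data if x >= lod]
n_cens = len(data) - len(detects)

# Crude grid search over a plausible (mu, sigma) range.
best = max(((loglik(m / 50, s / 50, detects, n_cens, lod), m / 50, s / 50)
            for m in range(25, 76) for s in range(20, 71)),
           key=lambda t: t[0])
print(f"mu_hat={best[1]:.2f}, sigma_hat={best[2]:.2f} "
      f"({n_cens} of {len(data)} values below the LOD)")
```

The recovered (μ, σ) land close to the generating values even with roughly a quarter of the sample censored, which is the point of using the censored likelihood instead of discarding or substituting non-detects.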

  20. General specifications for the development of a USL NASA PC R and D statistical analysis support package

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros

    1984-01-01

    The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed to be a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.

  1. Common pitfalls in statistical analysis: Linear regression analysis

    Directory of Open Access Journals (Sweden)

    Rakesh Aggarwal

    2017-01-01

    Full Text Available In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis.
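
As a companion to the article's discussion, a from-first-principles simple linear regression (slope, intercept, R²) on made-up data:

```python
# Sketch of ordinary least-squares simple linear regression, the method
# whose pitfalls the article discusses: slope, intercept and R^2 computed
# from first principles on made-up data.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 50)
y = 3.0 + 1.5 * x + rng.normal(0, 1.0, 50)   # assumed true line: y = 3 + 1.5x

slope = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
intercept = y.mean() - slope * x.mean()

resid = y - (intercept + slope * x)
r_squared = 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"y = {intercept:.2f} + {slope:.2f} x, R^2 = {r_squared:.3f}")
```

Checking the residuals (here via R²) is one of the assumption checks the article warns readers not to skip.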

  2. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

Full Text Available Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of the heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interaction with FDR < 0.001 and 0.001 ≤ FDR < 0.01, respectively. Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.

  3. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation are described and illustrated with examples: (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with the established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and as a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
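
The mediation analysis being validated can be sketched as the usual product-of-coefficients estimate; the data are simulated with a known mediated effect (a·b = 0.42 by construction):

```python
# Sketch of the product-of-coefficients mediation estimate: X -> M
# (path a), then M -> Y controlling for X (path b), with the mediated
# effect estimated as a*b. Data are simulated with a known structure
# standing in for the imagery -> mediator -> recall effect.
import numpy as np

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)                        # e.g. imagery instruction
m = 0.6 * x + rng.normal(size=n)              # mediator
y = 0.7 * m + 0.2 * x + rng.normal(size=n)    # outcome, e.g. words recalled

# Path a: regress M on X.
a = np.linalg.lstsq(np.column_stack([np.ones(n), x]), m, rcond=None)[0][1]
# Path b: regress Y on X and M; take the M coefficient.
b = np.linalg.lstsq(np.column_stack([np.ones(n), x, m]), y, rcond=None)[0][2]

print(f"a={a:.2f}, b={b:.2f}, mediated effect a*b={a*b:.2f}")
```

Benchmark effect validation in the article's sense asks whether such an a·b estimate, applied to real imagery-and-recall data, reproduces the known substantive effect.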

  4. Statistical Compilation of the ICT Sector and Policy Analysis | CRDI ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  5. Multivariate statistical analysis of major and trace element data for ...

    African Journals Online (AJOL)

    Multivariate statistical analysis of major and trace element data for niobium exploration in the peralkaline granites of the anorogenic ring-complex province of Nigeria. PO Ogunleye, EC Ike, I Garba. Abstract: No abstract available. Journal of Mining and Geology Vol.40(2) 2004: 107-117.

  6. Statistical Compilation of the ICT Sector and Policy Analysis | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  7. Statistical analysis of the BOIL program in RSYST-III

    International Nuclear Information System (INIS)

    Beck, W.; Hausch, H.J.

    1978-11-01

    The paper describes a statistical analysis in the RSYST-III program system. Using the example of the BOIL program, it is shown how the effects of inaccurate input data on the output data can be discovered. The existing possibilities of data generation, data handling, and data evaluation are outlined. (orig.) [de

  8. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via a two-factor completely randomized design, and comparison of data means was carried out with Duncan's multiple-range test. Statistical analysis of the experimental data shows that temperature and weight fraction have a reasonable impact on the thermal ...

  9. Statistical Analysis of Development Trends in Global Renewable Energy

    Directory of Open Access Journals (Sweden)

    Marina D. Simonova

    2016-01-01

    Full Text Available The article focuses on the economic and statistical analysis of industries associated with the use of renewable energy sources in several countries. The dynamic development and implementation of technologies based on renewable energy sources (hereinafter RES) is the defining trend of world energy development. The uneven distribution of hydrocarbon reserves, the increasing demand of developing countries, and the environmental risks associated with the production and consumption of fossil resources have led to a growing interest of many states in this field. Creating low-carbon economies involves implementing plans to increase the proportion of clean energy through renewable energy sources and energy efficiency and to reduce greenhouse gas emissions. The priority of this sector is a characteristic feature of the modern development of developed economies (USA, EU, Japan) and emerging economies (China, India, Brazil, etc.), as evidenced by the inclusion of this segment in state energy strategies and the revision of existing approaches to energy security. The analysis of the use of renewable energy and its contribution to the value added of producer countries is of particular interest. Over the last decade, the share of energy produced from renewable sources in the energy balances of the world's largest economies increased significantly. Power-generating capacity based on renewable energy grows every year; this trend is especially apparent in China, the USA, and the European Union countries. There is a significant increase in direct investment in renewable energy: total investment over the past ten years increased by a factor of 5.6. The most rapidly developing kinds are solar energy and wind power.

  10. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    International Nuclear Information System (INIS)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T.

    1993-01-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs
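The source-identification workflow described above can be illustrated, on invented data, with a bare-bones PCA via the SVD: a few latent "sources" mixed into several measured species are recovered as the dominant principal components. The sample count matches the record (149), but the mixing matrix and species dimensions are made up; the actual rainwater data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
sources = rng.normal(size=(149, 3))      # latent "marine/rural/acid" factors
mixing = rng.normal(size=(3, 8))         # invented loadings onto 8 measured species
data = sources @ mixing + 0.1 * rng.normal(size=(149, 8))

centered = data - data.mean(axis=0)      # PCA = SVD of the centered data matrix
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance fraction per component
print(f"variance captured by first three PCs: {explained[:3].sum():.3f}")
```

When three latent sources dominate, the first three components absorb nearly all the variance, mirroring the marine/rural/acid interpretation in the abstract.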

  11. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results have a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)

  12. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    Energy Technology Data Exchange (ETDEWEB)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T. (University of Santiago, Santiago (Spain). Faculty of Mathematics, Dept. of Statistics and Operations Research)

    1993-07-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs.

  13. Reducing bias in the analysis of counting statistics data

    International Nuclear Information System (INIS)

    Hammersley, A.P.; Antoniadis, A.

    1997-01-01

    In the analysis of counting statistics data it is common practice to estimate the variance of the measured data points as the data points themselves. This practice introduces a bias into the results of further analysis which may be significant, and under certain circumstances lead to false conclusions. In the case of normal weighted least squares fitting this bias is quantified and methods to avoid it are proposed. (orig.)
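The bias described above can be demonstrated in a few lines: weighting a least-squares fit by the observed counts themselves (i.e., estimating each point's variance as the data point) pulls a fitted constant below the true Poisson mean, because downward fluctuations receive spuriously large weights. The mean and sample sizes below are invented; this toy is an illustration of the effect, not the authors' quantification.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, trials = 20.0, 2000
biased, plain = [], []
for _ in range(trials):
    y = rng.poisson(mu, size=50).astype(float)
    y[y == 0] = 1.0                  # guard against zero weights (rare at mu = 20)
    w = 1.0 / y                      # variance estimated as the data themselves
    biased.append(np.sum(w * y) / np.sum(w))   # 1/y-weighted mean
    plain.append(y.mean())                     # unweighted mean
print(f"1/y-weighted mean: {np.mean(biased):.2f}, plain mean: {np.mean(plain):.2f}")
```

The 1/y-weighted mean is the harmonic mean of the counts, which sits systematically below the true mean; weighting by a fitted model's predicted variance instead removes the bias.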

  14. Development of a statistically based access delay timeline methodology.

    Energy Technology Data Exchange (ETDEWEB)

    Rivera, W. Gary; Robinson, David Gerald; Wyss, Gregory Dane; Hendrickson, Stacey M. Langfitt

    2013-02-01

    The charter for adversarial delay is to hinder access to critical resources through the use of physical systems increasing an adversary's task time. The traditional method for characterizing access delay has been a simple model focused on accumulating times required to complete each task with little regard to uncertainty, complexity, or decreased efficiency associated with multiple sequential tasks or stress. The delay associated with any given barrier or path is further discounted to worst-case, and often unrealistic, times based on a high-level adversary, resulting in a highly conservative calculation of total delay. This leads to delay systems that require significant funding and personnel resources in order to defend against the assumed threat, which for many sites and applications becomes cost prohibitive. A new methodology has been developed that considers the uncertainties inherent in the problem to develop a realistic timeline distribution for a given adversary path. This new methodology incorporates advanced Bayesian statistical theory and methodologies, taking into account small sample size, expert judgment, human factors and threat uncertainty. The result is an algorithm that can calculate a probability distribution function of delay times directly related to system risk. Through further analysis, the access delay analyst or end user can use the results in making informed decisions while weighing benefits against risks, ultimately resulting in greater system effectiveness with lower cost.
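The idea of replacing worst-case sums with a timeline distribution can be sketched with a simple Monte Carlo over uncertain per-task times. The lognormal task-time models and their parameters below are invented for illustration; this shows only the general principle of propagating per-task uncertainty, not Sandia's Bayesian methodology.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
# lognormal task-time models (seconds) for three sequential barriers;
# all parameters are hypothetical
params = [(3.4, 0.4), (4.0, 0.5), (3.0, 0.3)]
total = sum(rng.lognormal(mean=m, sigma=s, size=n) for m, s in params)

p05, p50 = np.percentile(total, [5, 50])
print(f"5th percentile: {p05:.0f} s, median: {p50:.0f} s")
```

A low percentile of the total-delay distribution plays the role the single worst-case time played in the traditional method, but with an explicit, defensible confidence level.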

  15. Fitting the Statistical Distribution for Daily Rainfall in Ibadan, Based ...

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    This paper presents several types of statistical distributions to describe rainfall distribution in the Ibadan metropolis over a period of 30 years. The exponential, gamma, normal and Poisson distributions are compared to identify the optimal model for daily rainfall amount based on data recorded at rain ...

  16. Statistical model of stress corrosion cracking based on extended ...

    Indian Academy of Sciences (India)

    2016-09-07

    In the previous paper (Pramana – J. Phys. 81(6), 1009 (2013)), the mechanism of stress corrosion cracking (SCC) based on a non-quadratic form of the Dirichlet energy was proposed and its statistical features were discussed. Following those results, we discuss here how SCC propagates on a pipe wall ...

  17. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  18. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  19. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    International Nuclear Information System (INIS)

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and of reload compatibility studies, Tractebel Energy Engineering (TEE) has developed a statistical thermal design method based on the analytical full statistical approach, the Statistical Thermal Design Procedure (STDP), to define a 95/95 DNBR criterion. In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal-hydraulic analysis. The uncertainties of the correlation are represented by statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value, adjusted with penalties and deterministic factors. The search for a realistic value of the SDL is the objective of statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion, or SDL (Statistical Design Limit), in strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)
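The 95/95 notion underlying the Statistical Design Limit can be illustrated with a standard one-sided normal tolerance bound: a limit that covers 95% of the population with 95% confidence. The DNBR sample below is simulated with invented parameters, and the noncentral-t tolerance factor is a generic textbook device, not the STDP methodology itself.

```python
import numpy as np
from scipy.stats import nct, norm

rng = np.random.default_rng(2)
dnbr = rng.normal(loc=1.50, scale=0.08, size=59)  # simulated minimum-DNBR values

n = dnbr.size
z95 = norm.ppf(0.95)
# one-sided 95/95 tolerance factor from the noncentral t distribution
k = nct.ppf(0.95, df=n - 1, nc=z95 * np.sqrt(n)) / np.sqrt(n)
lower_9595 = dnbr.mean() - k * dnbr.std(ddof=1)
print(f"95/95 lower tolerance bound on DNBR: {lower_9595:.3f}")
```

Because the confidence requirement inflates the factor k above the plain 95% normal quantile, the bound is more conservative than a naive mean-minus-1.645-sigma limit.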

  20. Steganalysis based on reducing the differences of image statistical characteristics

    Science.gov (United States)

    Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao

    2018-04-01

    Compared with the embedding process, the image content makes a more significant impact on the differences of image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become inseparable owing to the differences of image statistical characteristics. In this paper, a new steganalysis framework that can reduce the differences of image statistical characteristics caused by various contents and processing methods is proposed. The given images are segmented into several sub-images according to texture complexity. Steganalysis features are extracted separately from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. Theoretical analysis and experimental results demonstrate the validity of the framework.

  1. Analysis of Statistical Methods Currently used in Toxicology Journals.

    Science.gov (United States)

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by studies are used consistently and on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and the standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why those methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either a normality or an equal-variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.

  2. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    International Nuclear Information System (INIS)

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data are read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required.

  3. Statistical analysis of absorptive laser damage in dielectric thin films

    International Nuclear Information System (INIS)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-01-01

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze the time-to-damage and intensity-to-damage statistics obtained when irradiating thin-film-coated samples of SiO2, ZrO2, and Al2O3 with tightly focused laser beams. The data used were furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained.
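A fit of the kind described above can be sketched on synthetic time-to-damage data (the original measurements by Milam are not reproduced; the true shape and scale values below are invented):

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
# synthetic time-to-damage data: true shape 2.5, true scale 10.0 (hypothetical)
t_damage = 10.0 * rng.weibull(2.5, size=200)

# maximum-likelihood fit with the location parameter pinned at zero
c, loc, scale = weibull_min.fit(t_damage, floc=0)
print(f"fitted shape = {c:.2f}, fitted scale = {scale:.2f}")
```

A shape parameter above 1 indicates a damage rate that increases with exposure, the kind of qualitative conclusion such fits support.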

  4. Analytical and statistical analysis of elemental composition of lichens

    International Nuclear Information System (INIS)

    Calvelo, S.; Baccala, N.; Bubach, D.; Arribere, M.A.; Riberio Guevara, S.

    1997-01-01

    The elemental composition of lichens from remote southern South America regions has been studied with analytical and statistical techniques to determine whether the values obtained reflect species, growth forms or habitat characteristics. The enrichment factors are calculated separately by species and collection site and compared with data available in the literature. The elemental concentrations are standardized and compared for different species. The information was statistically processed; a cluster analysis was performed using the first three principal axes of the PCA, and the three groups formed are presented. Their relationship with the species, collection sites and lichen growth forms is interpreted. (author)

  5. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...

  6. Multivariate statistical analysis of atom probe tomography data

    International Nuclear Information System (INIS)

    Parish, Chad M.; Miller, Michael K.

    2010-01-01

    The application of spectrum imaging multivariate statistical analysis methods, specifically principal component analysis (PCA), to atom probe tomography (APT) data has been investigated. The mathematical method of analysis is described and the results for two example datasets are analyzed and presented. The first dataset is from the analysis of a PM 2000 Fe-Cr-Al-Ti steel containing two different ultrafine precipitate populations. PCA properly describes the matrix and precipitate phases in a simple and intuitive manner. A second APT example is from the analysis of an irradiated reactor pressure vessel steel. Fine, nm-scale Cu-enriched precipitates having a core-shell structure were identified and qualitatively described by PCA. Advantages, disadvantages, and future prospects for implementing these data analysis methodologies for APT datasets, particularly with regard to quantitative analysis, are also discussed.

  7. Using Pre-Statistical Analysis to Streamline Monitoring Assessments

    International Nuclear Information System (INIS)

    Reed, J.K.

    1999-01-01

    A variety of statistical methods exist to aid evaluation of groundwater quality and subsequent decision making in regulatory programs. These methods are applied because of the large temporal and spatial extrapolations commonly applied to these data. In short, statistical conclusions often serve as a surrogate for knowledge. However, facilities with mature monitoring programs that have generated abundant data have inherently less uncertainty because of the sheer quantity of analytical results. In these cases, statistical tests can be less important, and ''expert'' data analysis should assume an important screening role. The WSRC Environmental Protection Department, working with the General Separations Area BSRI Environmental Restoration project team, has developed a method for an Integrated Hydrogeological Analysis (IHA) of historical water quality data from the F and H Seepage Basins groundwater remediation project. The IHA combines common-sense analytical techniques with a GIS presentation that forces direct interactive evaluation of the data. The IHA can perform multiple data analysis tasks required by the RCRA permit. These include: (1) development of a groundwater quality baseline prior to remediation startup, (2) targeting of constituents for removal from the RCRA GWPS, (3) targeting of constituents for removal from the UIC permit, (4) targeting of constituents for reduced monitoring, (5) targeting of monitoring wells not producing representative samples, (6) reduction in statistical evaluation, and (7) identification of contamination from other facilities

  8. PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data

    International Nuclear Information System (INIS)

    Beedgen, R.; Bicking, U.

    1987-04-01

    The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide on the basis of statistical considerations if, in a given sequence of materials balance periods, a loss of material might have occurred or not. The evaluation of the material balance data is based on statistical test procedures. In PROSA three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM-PC with DOS 3.1. (orig.) [de
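A generic one-sided CUSUM, sketched below on simulated material balances with invented noise levels and thresholds, illustrates the general kind of sequential testing PROSA applies to a sequence of balances; it is not the PROSA algorithm itself (PROSA uses truncated sequential tests).

```python
import numpy as np

rng = np.random.default_rng(5)
muf = rng.normal(0.0, 1.0, 20)   # standardized balances under "no loss"
muf[10:] += 2.5                  # a sustained loss starts at period 10

k, h = 0.5, 4.0                  # reference value and decision threshold (invented)
s, alarm_at = 0.0, None
for i, x in enumerate(muf):
    s = max(0.0, s + x - k)      # one-sided CUSUM recursion
    if s > h:
        alarm_at = i
        break
print(f"alarm raised at balance period: {alarm_at}")
```

A sequential scheme like this accumulates evidence across balance periods, so a modest sustained loss triggers an alarm that no single-period test would catch.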

  9. Development of statistical analysis code for meteorological data (W-View)

    International Nuclear Information System (INIS)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori

    2003-03-01

    A computer code (W-View: Weather View) was developed to analyze meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactors' (Nuclear Safety Commission, January 28, 1982; revised March 29, 2001). The code gives the statistical meteorological data needed to assess the public dose, in cases of both normal operation and severe accident, for licensing nuclear reactor operation. The code was revised from the original version, which ran on a large office computer, to enable a personal-computer user to analyze the meteorological data simply and conveniently and to produce the statistical tables and figures of meteorology. (author)

  10. Statistical analysis of the profile of consumer Internet services

    Directory of Open Access Journals (Sweden)

    Arzhenovskii Sergei Valentinovich

    2014-09-01

    Full Text Available The article is devoted to the construction of a profile of the Russian Internet user. Statistical methods of summarizing, grouping and graphically representing information about Internet consumers, by socio-demographic characteristics and type of settlement, are used. RLMS data for 2005-2012 form the information base.

  11. Data management and statistical analysis for environmental assessment

    International Nuclear Information System (INIS)

    Wendelberger, J.R.; McVittie, T.I.

    1995-01-01

    Data management and statistical analysis for environmental assessment are important issues on the interface of computer science and statistics. Data collection for environmental decision making can generate large quantities of various types of data. A database/GIS system, FIMAD, is described which provides efficient data storage as well as visualization tools that may be integrated into the data analysis process. FIMAD is a living database and GIS system. The system has changed and developed over time to meet the needs of the Los Alamos National Laboratory Restoration Program. The system provides a repository for data which may be accessed by different individuals for different purposes. The database structure is driven by the large amount and varied types of data required for environmental assessment. The integration of the database with the GIS system provides the foundation for powerful visualization and analysis capabilities

  12. Explorations in statistics: the analysis of ratios and normalized data.

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-09-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques, which include analysis of covariance, are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
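The peril described above is easy to demonstrate: for a straight line that does not pass through the origin, the ratio changes with the denominator even though every point follows the same underlying relationship. The numbers below are arbitrary.

```python
import numpy as np

x = np.linspace(1.0, 10.0, 10)
y = 5.0 + 2.0 * x           # straight line NOT through the origin
ratio = y / x
print(ratio[0], ratio[-1])  # drifts from 7.0 down to 2.5 as x grows
```

Regressing y on x recovers the constant slope 2.0 for every observation, which is why regression (or ANCOVA) is the safer way to control for the denominator here.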

  13. Statistical analysis and interpolation of compositional data in materials science.

    Science.gov (United States)

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
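One of the specialized tools alluded to above is the centred log-ratio (clr) transform, a standard CDA device that maps compositions from the simplex into a Euclidean space where ordinary statistics (means, correlations, PCA) are valid. The compositions below are invented.

```python
import numpy as np

def clr(comp):
    """Centred log-ratio transform of compositions (rows sum to a constant)."""
    log_c = np.log(comp)
    return log_c - log_c.mean(axis=1, keepdims=True)

# three invented samples of a 3-component composition (atomic fractions)
comp = np.array([[0.2, 0.3, 0.5],
                 [0.1, 0.4, 0.5],
                 [0.3, 0.3, 0.4]])
z = clr(comp)
print(z.sum(axis=1))  # each row sums to ~0 in clr coordinates
```

Distances and averages computed on the clr coordinates respect the constant-sum constraint, whereas the same operations on the raw fractions can produce the spurious correlations the abstract warns about.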

  14. A Proof of Concept Study of Function-based Statistical Analysis of fNIRS Data: Syntax Comprehension in Children with Specific Language Impairment Compared To Typically-Developing Controls

    Directory of Open Access Journals (Sweden)

    Guifang eFu

    2016-06-01

    Full Text Available Functional near infrared spectroscopy (fNIRS) is a neuroimaging technology that enables investigators to indirectly monitor brain activity in vivo through relative changes in the concentration of oxygenated and deoxygenated hemoglobin. One of the key features of fNIRS is its superior temporal resolution, with dense measurements over very short periods of time (100 ms increments). Unfortunately, most statistical analysis approaches in the existing literature have not fully utilized the high temporal resolution of fNIRS. For example, many analysis procedures are based on linearity assumptions that only extract partial information, thereby neglecting the overall dynamic trends in fNIRS trajectories. The main goal of this article is to assess the ability of a functional data analysis approach for detecting significant differences in hemodynamic responses recorded by fNIRS. Children with and without specific language impairment wore two 3 x 5 fNIRS caps situated over the bilateral parasylvian areas as they completed a language comprehension task. Functional data analysis was used to decompose the high-dimensional hemodynamic curves into the mean function and a few eigenfunctions to represent the overall trend and variation structures over time. Compared to the most popular general linear model, we did not assume any parametric structure and let the data speak for themselves. This analysis identified significant differences between the case and control groups in the oxygenated hemodynamic mean trends in the right inferior frontal cortex and left inferior posterior parietal cortex brain regions. We also detected significant group differences in the deoxygenated hemodynamic mean trends in the right inferior posterior parietal cortex and left temporal parietal junction brain region.
These findings, using dramatically different approaches, experimental designs, data sets, and foci, were consistent with several other reports, confirming group differences in the
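    The mean-plus-eigenfunctions decomposition described in this abstract can be sketched with ordinary PCA machinery. The curves below are simulated stand-ins for hemodynamic trajectories (not the study's fNIRS recordings), and the function name is hypothetical:

```python
import numpy as np

def functional_pca(curves, n_components=2):
    """Decompose sampled curves (n_subjects x n_timepoints) into a mean
    function plus a few eigenfunctions, as in functional data analysis."""
    mean_fn = curves.mean(axis=0)
    centered = curves - mean_fn
    # SVD of the centered data matrix: rows of vt are the eigenfunctions
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfns = vt[:n_components]
    scores = centered @ eigenfns.T          # per-subject loadings
    var_explained = (s[:n_components] ** 2) / (s ** 2).sum()
    return mean_fn, eigenfns, scores, var_explained

# Simulated hemodynamic-like trajectories (hypothetical data)
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 100)                 # e.g. dense 100 ms sampling
curves = np.sin(t) + 0.1 * rng.standard_normal((30, t.size))
mean_fn, eigenfns, scores, ve = functional_pca(curves)
```

The leading eigenfunctions capture the dominant variation structures over time, so group comparisons can be run on the low-dimensional scores rather than the raw curves.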

  15. Speech emotion recognition based on statistical pitch model

    Institute of Scientific and Technical Information of China (English)

    WANG Zhiping; ZHAO Li; ZOU Cairong

    2006-01-01

    A modified Parzen-window method, which keeps high resolution at low frequencies and smoothness at high frequencies, is proposed to obtain the statistical pitch model. A gender classification method utilizing this statistical model is then proposed, achieving 98% accuracy when long sentences are processed. After separating male and female voices, the means and standard deviations of pitch in the speech training samples with different emotions are used to create the corresponding emotion models. The Bhattacharyya distances between the test sample and the statistical pitch models are then used for emotion recognition in speech. Normalization of pitch for male and female voices is also considered, in order to map them into a uniform space. Finally, a speech emotion recognition experiment based on K nearest neighbors shows that a correct rate of 81% is achieved, compared with only 73.85% when the traditional parameters are used.
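    For two univariate Gaussian pitch models, the Bhattacharyya distance used in this abstract has a closed form. The pitch statistics below are made-up illustrative values, not the paper's data:

```python
import math

def bhattacharyya_gaussian(mu1, sd1, mu2, sd2):
    """Bhattacharyya distance between two univariate Gaussian pitch models."""
    v1, v2 = sd1 ** 2, sd2 ** 2
    return (0.25 * (mu1 - mu2) ** 2 / (v1 + v2)
            + 0.5 * math.log((v1 + v2) / (2.0 * sd1 * sd2)))

# Hypothetical pitch statistics (Hz): test sample vs. two emotion models
d_neutral = bhattacharyya_gaussian(210.0, 25.0, 215.0, 27.0)
d_angry = bhattacharyya_gaussian(210.0, 25.0, 280.0, 45.0)
# The test sample is assigned the emotion whose model has the smaller distance
```

The distance is zero only when the two Gaussians coincide, which is why it works as a model-matching score.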

  16. CORSSA: Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.

    2011-12-01

    Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology, especially to those aspects with great impact on public policy, statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal of enhancing the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles, with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include: introductions to both CORSSA and statistical seismology; basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.

  17. State analysis of BOP using statistical and heuristic methods

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Chang, Soon Heung

    2003-01-01

    Under the deregulation environment, the performance enhancement of the balance of plant (BOP) in nuclear power plants is being highlighted. To analyze the performance level of the BOP, the performance test procedures provided by an authorized institution such as ASME are used. However, plant investigation showed that the requirements of these procedures regarding the reliability and quantity of sensors are difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommend a statistical linear regression model, obtained by analyzing correlations among BOP parameters, as the reference state analysis method. Its advantages are that its derivation is not heuristic, its model uncertainty can be calculated, and it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under normal as well as abnormal system states. Additionally, a neural network model is recommended, since the statistical model cannot be applied to the validation of all of the sensors and is sensitive to outliers, i.e., signals located outside a statistical distribution. Because many sensors in the BOP need to be validated, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and enhance training accuracy. The outlier localization capability of WA enhanced the robustness of the neural network. The trained neural network restored the degraded signals to values within ±3% of the true signals.
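    A minimal sketch of regression-based state analysis as described here: fit a linear model predicting one parameter from correlated ones, then flag readings whose relative residual exceeds a 3% band. All data and function names are hypothetical:

```python
import numpy as np

def fit_reference_model(X, y):
    """Least-squares linear model predicting one plant parameter from others."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def validate(coef, X, y, tol=0.03):
    """Accept readings whose relative residual is within tol (e.g. 3%)."""
    pred = np.column_stack([np.ones(len(X)), X]) @ coef
    rel_err = np.abs(pred - y) / np.abs(y)
    return rel_err <= tol

rng = np.random.default_rng(1)
X = rng.uniform(50, 60, size=(200, 2))      # two correlated plant parameters
y = 3.0 + 0.5 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0, 0.05, 200)
coef = fit_reference_model(X[:150], y[:150])    # fit on reference data
ok = validate(coef, X[150:], y[150:])           # validate held-out readings
```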

  18. Computerized statistical analysis with bootstrap method in nuclear medicine

    International Nuclear Information System (INIS)

    Zoccarato, O.; Sardina, M.; Zatta, G.; De Agostini, A.; Barbesti, S.; Mana, O.; Tarolo, G.L.

    1988-01-01

    Statistical analysis of data samples involves hypotheses about the features of the data themselves, and the accuracy of these hypotheses can influence the results of statistical inference. Among the new methods of computer-aided statistical analysis, the bootstrap method appears to be one of the most powerful, thanks to its ability to reproduce many artificial samples starting from a single original sample, and because it works without hypotheses about the data distribution. The authors applied the bootstrap method to two typical situations in a Nuclear Medicine Department. The determination of the normal range of serum ferritin, as assessed by radioimmunoassay and defined by the mean value ±2 standard deviations, starting from an experimental sample of small size, yields an unacceptable lower limit (plasma ferritin levels below zero). On the contrary, the results obtained by elaborating 5000 bootstrap samples give an interval of values (10.95 ng/ml - 72.87 ng/ml) corresponding to the normal ranges commonly reported. Moreover, the authors applied the bootstrap method in evaluating the possible error associated with the correlation coefficient determined between left ventricular ejection fraction (LVEF) values obtained by first-pass radionuclide angiocardiography with 99mTc and 195mAu. The results indicate a high degree of statistical correlation and give the range of r² values to be considered acceptable for this type of study.
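    The resampling idea can be sketched as follows, assuming a percentile-based normal range is computed on each bootstrap resample; the ferritin values are simulated for illustration, not the paper's sample:

```python
import numpy as np

def bootstrap_normal_range(sample, n_boot=5000, seed=0):
    """Average the 2.5th and 97.5th percentiles over bootstrap resamples,
    avoiding the mean +/- 2 SD formula that can yield negative limits on
    skewed data."""
    rng = np.random.default_rng(seed)
    lows, highs = [], []
    for _ in range(n_boot):
        resample = rng.choice(sample, size=sample.size, replace=True)
        lows.append(np.percentile(resample, 2.5))
        highs.append(np.percentile(resample, 97.5))
    return float(np.mean(lows)), float(np.mean(highs))

# Hypothetical right-skewed serum ferritin values (ng/ml), small sample
rng = np.random.default_rng(42)
ferritin = rng.lognormal(mean=3.4, sigma=0.5, size=25)
lo, hi = bootstrap_normal_range(ferritin)
```

Because resampling never leaves the support of the data, the lower limit stays positive even for skewed, small samples.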

  19. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1998-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach to plant maintenance and control, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R-charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are under control specification limits and four parameters are not in the state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (author)

  20. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Patel, Bimal; Heising, C.D.

    1997-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are under control specification limits and four parameters are not in the state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (Author)
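    A minimal sketch of the Shewhart X-bar and R chart limits used in these two studies, with the standard SPC chart constants for subgroups of five; the readings are simulated, not Ft. Calhoun data:

```python
import numpy as np

# Shewhart chart factors for subgroups of size 5 (standard SPC tables)
A2, D3, D4 = 0.577, 0.0, 2.114

def xbar_r_limits(data):
    """Control limits from subgrouped data (n_subgroups x 5):
    X-bar chart and R (range) chart."""
    xbar = data.mean(axis=1)
    r = data.max(axis=1) - data.min(axis=1)
    xbar_bar, r_bar = xbar.mean(), r.mean()
    return {
        "xbar": (xbar_bar - A2 * r_bar, xbar_bar + A2 * r_bar),
        "range": (D3 * r_bar, D4 * r_bar),
    }

rng = np.random.default_rng(7)
# Hypothetical RCP process readings: 20 subgroups of 5 samples each
data = rng.normal(10.0, 0.2, size=(20, 5))
limits = xbar_r_limits(data)
```

Points falling outside these limits on subsequent subgroups would be the early-warning signal described in the abstract.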

  1. STATISTICAL ANALYSIS OF THE HEAVY NEUTRAL ATOMS MEASURED BY IBEX

    International Nuclear Information System (INIS)

    Park, Jeewoo; Kucharek, Harald; Möbius, Eberhard; Galli, André; Livadiotis, George; Fuselier, Steve A.; McComas, David J.

    2015-01-01

    We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years from 2009 to 2011. The interstellar neutral (ISN) O and Ne gas flow was found in the first-year heavy neutral map at 601 keV and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background from areas where the heavy neutral signal is statistically significant. These methods also allow the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O and Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
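    The signal-to-noise filter can be sketched as a per-pixel significance cut under Poisson counting statistics; the counts below are illustrative toy values, not IBEX maps, and the 3-sigma threshold is an assumption:

```python
import numpy as np

def snr_filter(counts, background, threshold=3.0):
    """Keep pixels whose signal exceeds `threshold` sigma above background,
    assuming Poisson counting statistics (errors add in quadrature)."""
    signal = counts - background
    sigma = np.sqrt(counts + background)
    snr = np.divide(signal, sigma, out=np.zeros_like(signal, dtype=float),
                    where=sigma > 0)
    return snr >= threshold

# Toy 2x2 "maps" of observed counts and estimated background per pixel
counts = np.array([[5.0, 40.0], [120.0, 8.0]])
background = np.array([[6.0, 10.0], [10.0, 7.0]])
mask = snr_filter(counts, background)
```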

  2. Statistical cluster analysis and diagnosis of nuclear system level performance

    International Nuclear Information System (INIS)

    Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

    1985-01-01

    The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations make it desirable to complement the deterministic analyses of these plants by corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data).

  3. Conditional Probability Analysis: A Statistical Tool for Environmental Analysis.

    Science.gov (United States)

    The use and application of environmental conditional probability analysis (CPA) is relatively recent. The first presentation using CPA was made in 2002 at the New England Association of Environmental Biologists Annual Meeting in Newport, Rhode Island. CPA has been used since the...

  4. Statistical analysis of first period of operation of FTU Tokamak

    International Nuclear Information System (INIS)

    Crisanti, F.; Apruzzese, G.; Frigione, D.; Kroegler, H.; Lovisetto, L.; Mazzitelli, G.; Podda, S.

    1996-09-01

    On the FTU Tokamak, plasma physics operations started on 20 April 1990. The first plasma had a plasma current Ip = 0.75 MA for about one second. The experimental phase lasted until 7 July 1994, when a long shutdown began for installing the toroidal limiter on the inner side of the vacuum vessel. In these four years of operation, plasma experiments were successfully carried out, e.g. experiments with single and multiple pellet injection; full current drive up to Ip = 300 kA was obtained by using waves at the lower hybrid frequency; and analysis of ohmic plasma parameters with different materials (from low-Z silicon to high-Z tungsten) as the plasma-facing element was performed. In this work a statistical analysis of the full period of operation is presented. Moreover, a comparison with statistical data from other Tokamaks is attempted.

  5. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, the focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, and methods for data preprocessing are covered.
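    In studies for the detection of differentially regulated proteins, per-protein tests are typically followed by a multiple-testing correction. A sketch of the Benjamini-Hochberg procedure on hypothetical p-values (one common choice; the chapter may cover others):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values declared significant at false discovery
    rate alpha, via the Benjamini-Hochberg step-up procedure."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    ranked = p[order]
    m = p.size
    thresh = alpha * np.arange(1, m + 1) / m      # i/m * alpha for rank i
    below = ranked <= thresh
    k = np.nonzero(below)[0].max() + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                        # reject the k smallest
    return mask

# Hypothetical per-protein p-values from two-group comparisons
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
sig = benjamini_hochberg(pvals, alpha=0.05)
```

Controlling the false discovery rate rather than the per-test error rate is what makes thousands of simultaneous protein comparisons tractable.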

  6. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
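    One way the SVD technique mentioned above can flag a malfunctioning BPM is that uncorrelated motion at a single BPM shows up as a spatial singular vector concentrated on that BPM. A sketch on simulated turn-by-turn data; the threshold, mode count, and data are illustrative assumptions, not the RHIC analysis itself:

```python
import numpy as np

def flag_faulty_bpms(data, n_modes=2, cut=0.8):
    """Flag BPMs that dominate one of the leading spatial singular vectors:
    a mode concentrated on a single BPM indicates uncorrelated (faulty)
    motion rather than collective beam motion."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return (np.abs(vt[:n_modes]) > cut).any(axis=0)

rng = np.random.default_rng(3)
turns = np.arange(1000)
# Coherent betatron-like motion seen identically by 12 BPMs, plus small noise
data = np.outer(np.sin(0.22 * 2 * np.pi * turns), np.ones(12))
data += 0.01 * rng.standard_normal((1000, 12))
data[:, 4] += 0.5 * rng.standard_normal(1000)   # one noisy, malfunctioning BPM
faulty = flag_faulty_bpms(data)
```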

  7. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitors (BPM performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.

  8. Common pitfalls in statistical analysis: Odds versus risk

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh; Pramesh, C. S.

    2015-01-01

    In biomedical research, we are often interested in quantifying the relationship between an exposure and an outcome. “Odds” and “Risk” are the most common terms which are used as measures of association between variables. In this article, which is the fourth in the series of common pitfalls in statistical analysis, we explain the meaning of risk and odds and the difference between the two. PMID:26623395
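    The distinction explained in this article can be made concrete with a 2 x 2 table. The cohort numbers below are invented for illustration:

```python
def risk_and_odds(exposed_events, exposed_total, control_events, control_total):
    """Risk ratio and odds ratio from a 2x2 table of events by exposure."""
    risk_exposed = exposed_events / exposed_total
    risk_control = control_events / control_total
    risk_ratio = risk_exposed / risk_control
    odds_exposed = exposed_events / (exposed_total - exposed_events)
    odds_control = control_events / (control_total - control_events)
    odds_ratio = odds_exposed / odds_control
    return risk_ratio, odds_ratio

# Hypothetical cohort: 30/100 events among exposed, 15/100 among controls
rr, orr = risk_and_odds(30, 100, 15, 100)
# rr == 2.0, orr ~= 2.43: with common outcomes the odds ratio
# overstates the risk ratio
```

The two measures only approximate each other when the outcome is rare, which is the usual pitfall.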

  9. Statistical Challenges of Big Data Analysis in Medicine

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2015-01-01

    Roč. 3, č. 1 (2015), s. 24-27 ISSN 1805-8698 R&D Projects: GA ČR GA13-23940S Grant - others:CESNET Development Fund(CZ) 494/2013 Institutional support: RVO:67985807 Keywords : big data * variable selection * classification * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research http://www.ijbh.org/ijbh2015-1.pdf

  10. Research and Development on Food Nutrition Statistical Analysis Software System

    OpenAIRE

    Du Li; Ke Yun

    2013-01-01

    Designing and developing food nutrition component statistical analysis software can automate nutrition calculation, improve the nutrition professional's working efficiency, and support the informatization of nutrition publicity and education. In the software development process, software engineering methods and database technology are used to calculate daily human nutritional intake, and an intelligent system is used to evaluate the user's hea...

  11. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are also shown to be maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  12. Lifetime statistics of quantum chaos studied by a multiscale analysis

    KAUST Repository

    Di Falco, A.

    2012-04-30

    In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.

  13. Statistical Analysis of the Exchange Rate of Bitcoin

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate. PMID:26222702
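    The model-comparison idea behind this abstract can be sketched by fitting two candidate distributions to simulated log-returns and comparing AIC. The generalized hyperbolic fit from the paper is not reproduced here; a Student-t versus normal comparison stands in for it, with moment-based t location/scale rather than full maximum likelihood, and synthetic returns rather than Bitcoin data:

```python
import math
import numpy as np

def normal_loglik(x):
    """Gaussian log-likelihood at the sample mean and standard deviation."""
    mu, sd = x.mean(), x.std()
    return float(np.sum(-0.5 * ((x - mu) / sd) ** 2
                        - math.log(sd * math.sqrt(2 * math.pi))))

def student_t_loglik(x, df):
    """Location-scale Student-t log-likelihood with moment-based location
    and scale (a rough sketch, not full MLE)."""
    mu = x.mean()
    s = x.std() * math.sqrt((df - 2) / df) if df > 2 else x.std()
    z = (x - mu) / s
    c = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
         - 0.5 * math.log(df * math.pi) - math.log(s))
    return float(np.sum(c - (df + 1) / 2 * np.log1p(z ** 2 / df)))

rng = np.random.default_rng(0)
returns = 0.02 * rng.standard_t(df=4, size=2000)   # hypothetical log-returns
ll_norm = normal_loglik(returns)
ll_t = max(student_t_loglik(returns, df) for df in range(3, 31))
aic_norm = 2 * 2 - 2 * ll_norm      # 2 parameters: mean, sd
aic_t = 2 * 3 - 2 * ll_t            # 3 parameters: location, scale, df
```

On fat-tailed returns the heavier-tailed candidate should win the AIC comparison, mirroring why flexible families like the generalized hyperbolic outperform the normal.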

  14. Statistical Analysis of the Exchange Rate of Bitcoin.

    Directory of Open Access Journals (Sweden)

    Jeffrey Chu

    Full Text Available Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.

  16. Analysis of spectral data with rare events statistics

    International Nuclear Information System (INIS)

    Ilyushchenko, V.I.; Chernov, N.I.

    1990-01-01

    We consider the case of analyzing experimental data in which the results of individual experimental runs cannot be summed due to large systematic errors. A statistical analysis of the hypothesis about persistent peaks in the spectra has been performed by means of the Neyman-Pearson test. The computations demonstrate that the confidence level for the hypothesis about the presence of a persistent peak in the spectrum is proportional to the square root of the number of independent experimental runs, K. 5 refs

  17. Australasian Resuscitation In Sepsis Evaluation trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-10-01

    The Australasian Resuscitation In Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the ED with severe sepsis. In keeping with current practice, and taking into consideration aspects of trial design and reporting specific to non-pharmacological interventions, this document outlines the principles and methods for analysing and reporting the trial results. The document was prepared prior to completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee, and prior to completion of the two related international studies. The statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. The data collected by the research team as specified in the study protocol, and detailed in the study case report form, were reviewed. Information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation and other related therapies, and other relevant data are described with appropriate comparisons between groups. The primary, secondary and tertiary outcomes for the study are defined, with description of the planned statistical analyses. A statistical analysis plan was developed, along with a trial profile, mock-up tables and figures. A plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies, along with adverse events, is described. The primary, secondary and tertiary outcomes are described along with identification of subgroups to be analysed. A statistical analysis plan for the ARISE study has been developed, and is available in the public domain, prior to the completion of recruitment into the

  18. STATISTICAL ANALYSIS OF DIESEL CAR REPAIRS ON THE EXAMPLE OF DIESEL SERVICE ADAMCZYK COMPANIES

    Directory of Open Access Journals (Sweden)

    Łukasz KONIECZNY

    2014-12-01

    Full Text Available The article presents a statistical analysis of car repair data gathered by the examined company over a five-year time interval. It is based on an SQL database which contains information about all completed orders. The analysis determines the structure of the set of repaired car makes and identifies the most frequent vehicle defects.

  19. Improved air ventilation rate estimation based on a statistical model

    International Nuclear Information System (INIS)

    Brabec, M.; Jilek, K.

    2004-01-01

    A new approach to air ventilation rate estimation from CO measurement data is presented. The approach is based on a state-space dynamic statistical model, allowing for quick and efficient estimation. The underlying computations are based on Kalman filtering, whose practical software implementation is rather easy. The key property is the flexibility of the model, allowing various artificial regimens of CO level manipulation to be treated. The model is semi-parametric in nature and can efficiently handle a time-varying ventilation rate. This is a major advantage compared to some of the methods currently in practical use. After a formal introduction of the statistical model, its performance is demonstrated on real data from routine measurements. It is shown how the approach can be utilized in a more complex situation of major practical relevance, when a time-varying air ventilation rate and radon entry rate are to be estimated simultaneously from concurrent radon and CO measurements.
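    A minimal Kalman-filter sketch for tracer-gas decay, assuming the simplest possible model in which the log-concentration slope measures the air-exchange rate directly; the state-space model in the paper is richer, and all numbers here are simulated:

```python
import numpy as np

def kalman_ventilation(conc, dt=1.0, q=1e-4, r=0.02):
    """Scalar Kalman filter tracking a slowly varying air-exchange rate
    lam from tracer-gas decay: log C[t] = log C[t-1] - lam * dt + noise.
    q is the random-walk state variance, r the measurement variance."""
    z = -np.diff(np.log(conc)) / dt     # noisy per-step estimates of lam
    x, p = z[0], 1.0                    # initial state and variance
    estimates = []
    for zt in z:
        p += q                          # predict: random-walk state model
        k = p / (p + r)                 # Kalman gain
        x += k * (zt - x)               # update with the new measurement
        p *= (1 - k)
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(5)
true_lam = 0.8                          # air changes per hour (hypothetical)
t = np.arange(0, 6, 0.1)
conc = 1000.0 * np.exp(-true_lam * t) * np.exp(rng.normal(0, 0.01, t.size))
lam = kalman_ventilation(conc, dt=0.1)
```

The filtered estimate settles near the true decay rate while smoothing the very noisy per-step slopes, which is the practical appeal of the state-space formulation.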

  20. Statistical data processing of mobility curves of univalent weak bases

    Czech Academy of Sciences Publication Activity Database

    Šlampová, Andrea; Boček, Petr

    2008-01-01

    Roč. 29, č. 2 (2008), s. 538-541 ISSN 0173-0835 R&D Projects: GA AV ČR IAA400310609; GA ČR GA203/05/2106 Institutional research plan: CEZ:AV0Z40310501 Keywords : mobility curve * univalent weak bases * statistical evaluation Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 3.509, year: 2008

  1. Olive mill wastewater characteristics: modelling and statistical analysis

    Directory of Open Access Journals (Sweden)

    Martins-Dias, Susete

    2004-09-01

    Full Text Available A synthesis of the work carried out on Olive Mill Wastewater (OMW) characterisation is given, covering articles published over the last 50 years. Data on OMW characterisation found in the literature are summarised, and correlations between them and with phenolic compound content are sought. This permits the characteristics of an OMW to be estimated from one simple measurement: the phenolic compound concentration. A model based on OMW characterisations from six countries was developed, along with a model for Portuguese OMW. The statistical analysis of the correlations obtained indicates that the chemical oxygen demand of a given OMW is a second-degree polynomial function of its phenolic compound concentration. Tests to evaluate the significance of the regressions were carried out, based on multivariable ANOVA analysis and on visual standardised residual distributions and their means for confidence levels of 95 and 99%, clearly validating these models. This modelling work will help in the future planning, operation and monitoring of an OMW treatment plant.
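    The second-degree polynomial relationship between chemical oxygen demand and phenolic concentration can be illustrated with an ordinary polynomial fit; the numbers below are invented for illustration, not the paper's OMW data:

```python
import numpy as np

# Hypothetical OMW data: phenolic compounds (g/L) vs. COD (g O2/L)
phenols = np.array([1.2, 2.5, 3.1, 4.8, 6.0, 7.4, 8.9, 10.2])
cod = np.array([35.0, 52.0, 60.0, 82.0, 98.0, 118.0, 140.0, 160.0])

# Fit COD as a second-degree polynomial of phenolic concentration
coeffs = np.polyfit(phenols, cod, deg=2)
model = np.poly1d(coeffs)

# Goodness of fit (R^2) from the residual and total variance
residuals = cod - model(phenols)
r2 = 1.0 - residuals.var() / cod.var()
```

Once fitted, `model(c)` estimates the COD of an OMW sample from its phenolic concentration alone, which is the practical point of the correlation.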

  2. Parallelization of the Physical-Space Statistical Analysis System (PSAS)

    Science.gov (United States)

    Larson, J. W.; Guo, J.; Lyster, P. M.

    1999-01-01

    Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. The problem of computational
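    The innovation-equation solver described above is a preconditioned conjugate gradient method. A generic Jacobi-preconditioned CG sketch for a symmetric positive definite system (not the PSAS factored-operator implementation; the test matrix is synthetic):

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for SPD systems A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x                       # initial residual
    z = M_inv_diag * r                  # apply diagonal preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(2)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)             # synthetic SPD test matrix
b = rng.standard_normal(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

In PSAS the matrix-vector products `A @ p` would be the factored operators applied in succession, which is where the computational savings described in the abstract come from.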

  3. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2009-01-01

    An All-in-One Resource for Using SAS and R to Carry out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applicat

  4. Statistical Analysis of Hypercalcaemia Data related to Transferability

    DEFF Research Database (Denmark)

    Frølich, Anne; Nielsen, Bo Friis

    2005-01-01

    In this report we describe statistical analysis related to a study of hypercalcaemia carried out in the Copenhagen area in the ten-year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters at conferences during the late eighties and early nineties. In this report we give a more detailed description of many of the analyses and provide some new results, primarily from simultaneous studies of several databases.

  5. An Efficient Graph-based Method for Long-term Land-use Change Statistics

    Directory of Open Access Journals (Sweden)

    Yipeng Zhang

    2015-12-01

    Full Text Available Statistical analysis of land-use change plays an important role in sustainable land management and has received increasing attention from scholars and administrative departments. However, the statistical process involving spatial overlay analysis remains difficult and needs improvement to deal with massive land-use data. In this paper, we introduce a spatio-temporal flow network model to reveal the hidden relational information among spatio-temporal entities. Based on graph theory, the constant condition of saturated multi-commodity flow is derived. A new method based on a network partition technique of the spatio-temporal flow network is proposed to optimize the transition statistical process. The effectiveness and efficiency of the proposed method are verified through experiments using land-use data in Hunan from 2009 to 2014. In a comparison among three different land-use change statistical methods, the proposed method exhibits remarkable superiority in efficiency.
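    The transition statistics being optimized amount to cross-tabulating two classified rasters into a class-to-class transition matrix. A minimal sketch on synthetic data (not the Hunan dataset, and not the flow-network method itself):

```python
import numpy as np

rng = np.random.default_rng(5)
n_classes = 4
# Two hypothetical classified land-use rasters five years apart
lu_t0 = rng.integers(0, n_classes, size=(100, 100))
lu_t1 = lu_t0.copy()
changed = rng.random(lu_t0.shape) < 0.1          # ~10% of cells change class
lu_t1[changed] = rng.integers(0, n_classes, size=int(changed.sum()))

# Transition matrix T[i, j]: number of cells moving from class i to class j
T = np.bincount(lu_t0.ravel() * n_classes + lu_t1.ravel(),
                minlength=n_classes ** 2).reshape(n_classes, n_classes)
```

    The diagonal of `T` counts unchanged cells; row sums recover the class areas at the first date.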

  6. Analysis of Preference Data Using Intermediate Test Statistic Abstract

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    West African Journal of Industrial and Academic Research, Vol. 7, No. 1, June 2013. Keywords: preference data, Friedman statistic, multinomial test statistic, intermediate test statistic.

  7. A Proof of Concept Study of Function-Based Statistical Analysis of fNIRS Data: Syntax Comprehension in Children with Specific Language Impairment Compared to Typically-Developing Controls.

    Science.gov (United States)

    Fu, Guifang; Wan, Nicholas J A; Baker, Joseph M; Montgomery, James W; Evans, Julia L; Gillam, Ronald B

    2016-01-01

    Functional near infrared spectroscopy (fNIRS) is a neuroimaging technology that enables investigators to indirectly monitor brain activity in vivo through relative changes in the concentration of oxygenated and deoxygenated hemoglobin. One of the key features of fNIRS is its superior temporal resolution, with dense measurements over very short periods of time (100 ms increments). Unfortunately, most statistical analysis approaches in the existing literature have not fully utilized the high temporal resolution of fNIRS. For example, many analysis procedures are based on linearity assumptions that only extract partial information, thereby neglecting the overall dynamic trends in fNIRS trajectories. The main goal of this article is to assess the ability of a functional data analysis (FDA) approach to detect significant differences in hemodynamic responses recorded by fNIRS. Children with and without SLI wore two 3 × 5 fNIRS caps situated over the bilateral parasylvian areas as they completed a language comprehension task. FDA was used to decompose the high-dimensional hemodynamic curves into the mean function and a few eigenfunctions to represent the overall trend and variation structures over time. In contrast to the widely used general linear model (GLM), this approach assumes no parametric structure and lets the data speak for themselves. This analysis identified significant differences between the case and control groups in the oxygenated hemodynamic mean trends in the bilateral inferior frontal and left inferior posterior parietal brain regions. We also detected significant group differences in the deoxygenated hemodynamic mean trends in the right inferior posterior parietal cortex and left temporal parietal junction. These findings were consistent with several other reports that used dramatically different approaches, experimental designs, data sets, and foci, confirming group differences in the importance of these two areas for syntax comprehension.
The proposed FDA was consistent with the
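    The FDA decomposition described above (a mean function plus a few eigenfunctions capturing the variation structure) can be sketched with a plain SVD of centered curves. The data below are synthetic smooth curves, not fNIRS recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 200)
# 40 synthetic "hemodynamic" curves: two smooth modes plus measurement noise
curves = (np.outer(rng.normal(1.0, 0.3, 40), np.sin(t / 3.0))
          + np.outer(rng.normal(0.0, 0.2, 40), np.cos(t / 5.0))
          + 0.05 * rng.standard_normal((40, 200)))

mean_fn = curves.mean(axis=0)                    # overall mean trend
centered = curves - mean_fn
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s ** 2 / np.sum(s ** 2)
eigenfunctions = Vt[:2]                          # leading variation structures
scores = centered @ eigenfunctions.T             # per-curve loadings
```

    Group differences can then be tested on the low-dimensional `scores` rather than on the raw high-dimensional trajectories.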

  8. New Statistics for Texture Classification Based on Gabor Filters

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2007-09-01

    Full Text Available The paper introduces a new method for evaluating texture segmentation efficiency. One of the well-known texture segmentation methods is based on Gabor filters because of their orientation and spatial frequency character. Several statistics are used to extract more information from the results obtained by Gabor filtering. The large number of input parameters produces a wide set of results which need to be evaluated. The evaluation method is based on assessing the intersections of the Gaussian curves of normal distributions, and provides a new point of view on the selection of a segmentation method.
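    A minimal sketch of Gabor-filter texture statistics: build a real Gabor kernel, filter via FFT convolution, and take the mean and standard deviation of the response magnitude as features. The kernel parameters and the synthetic stripe textures are illustrative, not those of the paper:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel (illustrative parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
            * np.cos(2.0 * np.pi * xr / wavelength))

def gabor_stats(image, kernel):
    """Mean and std of the filter response magnitude (texture features)."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(image)
                                * np.fft.fft2(kernel, s=image.shape)))
    return np.abs(resp).mean(), np.abs(resp).std()

# Two synthetic textures: stripes varying along x versus along y
n = 64
vert = np.tile(np.sin(2.0 * np.pi * np.arange(n) / 8.0), (n, 1))
horiz = vert.T
k = gabor_kernel(15, wavelength=8.0, theta=0.0, sigma=4.0)
mean_v, std_v = gabor_stats(vert, k)
mean_h, std_h = gabor_stats(horiz, k)
```

    The theta=0 filter is tuned to variation along x, so it responds far more strongly to the first texture than to the second.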

  9. Statistical analysis of the potassium concentration obtained through

    International Nuclear Information System (INIS)

    Pereira, Joao Eduardo da Silva; Silva, Jose Luiz Silverio da; Pires, Carlos Alberto da Fonseca; Strieder, Adelir Jose

    2007-01-01

    The present work was developed in outcrops of the Santa Maria region, southern Brazil, Rio Grande do Sul State. Statistical evaluations were applied to different rock types. The possibility of distinguishing different geologic units, sedimentary and volcanic (acid and basic types), by means of statistical analyses of airborne gamma-ray spectrometry, integrating potash radiation emission data with geological and geochemistry data, is discussed. This project was carried out in 1973 by the Geological Survey of Brazil/Companhia de Pesquisas de Recursos Minerais. The Camaqua Project evaluated the behavior of potash concentrations, generating XYZ Geosoft 1997 format, one grid, thematic map and digital thematic map files for this total area. Using this data base, the integration of statistical analyses in sedimentary formations which belong to the Depressao Central do Rio Grande do Sul and/or volcanic rocks from the Planalto da Serra Geral at the border of the Parana Basin was tested. A univariate statistical model was used: the mean, the standard error of the mean, and the confidence limits were estimated. Tukey's test was used in order to compare mean values. The results allowed criteria to be created to distinguish geological formations based on their potash content. The back-calibration technique was employed to transform K radiation to percentage. In this context it was possible to define characteristic values of radioactive potash emissions and their confidence ranges in relation to geologic formations. The potash variable, when evaluated in relation to the geographic Universal Transverse Mercator coordinate system, showed a spatial relation following a second-order polynomial model, with one determination coefficient. The General Linear Models module of the Statistica 7.1 software, produced by the Statistics Department of the Federal University of Santa Maria, Brazil, was used. (author)

  10. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    Science.gov (United States)

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-) communities.

  11. THE FLUORBOARD A STATISTICALLY BASED DASHBOARD METHOD FOR IMPROVING SAFETY

    International Nuclear Information System (INIS)

    PREVETTE, S.S.

    2005-01-01

    The FluorBoard is a statistically based dashboard method for improving safety. Fluor Hanford has achieved significant safety improvements--including more than an 80% reduction in OSHA cases per 200,000 hours--during its work at the US Department of Energy's Hanford Site in Washington state. The massive project on the former nuclear materials production site is considered one of the largest environmental cleanup projects in the world. Fluor Hanford's safety improvements were achieved by a committed partnering of workers, managers, and statistical methodology. Safety achievements at the site have been due to a systematic approach to safety. This includes excellent cooperation between the field workers, the safety professionals, and management through OSHA Voluntary Protection Program principles. Fluor corporate values are centered around safety, and safety excellence is important for every manager in every project. In addition, Fluor Hanford has utilized a rigorous approach to using its safety statistics, based upon Dr. Shewhart's control charts and Dr. Deming's management and quality methods
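    A Shewhart c-chart of the kind referenced here takes only a few lines: the centre line is the mean event count and the control limits sit three standard deviations (for Poisson counts, the square root of the mean) on either side. The monthly case counts below are invented for illustration:

```python
import numpy as np

# Hypothetical monthly counts of recordable safety cases
counts = np.array([7, 5, 8, 6, 9, 4, 7, 6, 5, 8, 3, 6])

cbar = counts.mean()                        # centre line of the c-chart
ucl = cbar + 3 * np.sqrt(cbar)              # upper control limit
lcl = max(cbar - 3 * np.sqrt(cbar), 0.0)    # lower limit floored at zero
out_of_control = (counts > ucl) | (counts < lcl)
```

    Points outside the limits signal special-cause variation worth investigating; points inside reflect common-cause variation that only a system change can reduce.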

  12. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  13. An analysis of distribution transformer failure using the statistical package for the social sciences (SPSS software

    Directory of Open Access Journals (Sweden)

    María Gabriela Mago Ramos

    2012-05-01

    Full Text Available A methodology was developed for analysing faults in distribution transformers using the statistical package for the social sciences (SPSS); it consisted of organising and creating a database of failed equipment, incorporating such data into the processing programme and converting all the information into numerical variables to be processed, thereby obtaining descriptive statistics and enabling factor and discriminant analysis. The research was based on information provided by companies in areas served by Corpoelec (Valencia, Venezuela) and Codensa (Bogotá, Colombia).

  14. Using R and RStudio for data management, statistical analysis and graphics

    CERN Document Server

    Horton, Nicholas J

    2015-01-01

    This is the second edition of the popular book on using R for statistical analysis and graphics. The authors, who run a popular blog supplementing their books, have focused on adding many new examples to this new edition. These examples are presented primarily in new chapters based on the following themes: simulation, probability, statistics, mathematics/computing, and graphics. The authors have also added many other updates, including a discussion of RStudio-a very popular development environment for R.

  15. STATISTICAL ANALYSIS OF SPORT MOVEMENT OBSERVATIONS: THE CASE OF ORIENTEERING

    Directory of Open Access Journals (Sweden)

    K. Amouzandeh

    2017-09-01

    Full Text Available The study of movement observations is becoming more popular in several applications. Particularly, analyzing sport movement time series has been considered as a demanding area. However, most of the attempts made at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the “orienteering” sport is presented and statistically analyzed.

  16. Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution

    International Nuclear Information System (INIS)

    Entin Hartini; Mike Susmikanti; Antonius Sitompul

    2008-01-01

    In the evaluation of ceramic and glass material strength, a statistical approach is necessary. The strength of ceramic and glass depends on the size distribution of flaws in these materials. The distribution of strength for ductile materials is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramic and glass follows a Weibull distribution. The Weibull distribution is an indicator of the failure of material strength resulting from a distribution of flaw sizes. In this paper, the cumulative probability of material strength to failure probability, the cumulative probability of failure versus fracture stress, and the cumulative probability of reliability of the material were calculated. Statistical criteria calculations supporting the strength analysis of silicon nitride material were done utilizing MATLAB. (author)
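    The Weibull treatment of brittle strength can be sketched by median-rank regression on the linearised CDF, ln(-ln(1-F)) = m·ln(σ) - m·ln(σ₀), whose slope is the Weibull modulus m and whose intercept gives the characteristic strength σ₀. The fracture stresses below are invented for illustration (the paper used MATLAB and silicon nitride data):

```python
import numpy as np

# Hypothetical fracture stresses (MPa) for ten brittle specimens, sorted
stress = np.array([212., 235., 248., 260., 271., 283., 295., 310., 330., 356.])
n = stress.size
i = np.arange(1, n + 1)
F = (i - 0.3) / (n + 0.4)          # median-rank estimate of failure probability

# Linearised Weibull CDF: ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0)
x = np.log(stress)
y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(x, y, 1)         # slope = Weibull modulus m
sigma0 = np.exp(-c / m)            # characteristic strength (63.2% failure)
```

    A high modulus m means a narrow strength distribution (more reliable ceramic); σ₀ is the stress at which 63.2% of specimens have failed.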

  17. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system
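    The decoupling transformation for correlated variables at the basis of such a system is, in essence, a whitening transform: diagonalise the covariance and rescale so the transformed channels are uncorrelated with unit variance. A sketch on synthetic correlated noise channels (not reactor data):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical correlated noise signatures: 1000 samples, 4 channels
mix = np.array([[1.0, 0.0, 0.0, 0.0],
                [0.8, 0.6, 0.0, 0.0],
                [0.3, 0.2, 0.9, 0.0],
                [0.1, 0.5, 0.4, 0.7]])
data = rng.standard_normal((1000, 4)) @ mix.T

cov = np.cov(data, rowvar=False)
evals, evecs = np.linalg.eigh(cov)
W = evecs / np.sqrt(evals)                  # whitening matrix
decorr = (data - data.mean(axis=0)) @ W     # decoupled variables
cov_w = np.cov(decorr, rowvar=False)        # approximately the identity
```

    Probability densities can then be inferred per channel independently, which is what makes the decoupling step useful for pattern recognition.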

  18. Reactor noise analysis by statistical pattern recognition methods

    International Nuclear Information System (INIS)

    Howington, L.C.; Gonzalez, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis is presented. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, updating, and data compacting capabilities. System design emphasizes control of the false-alarm rate. Its abilities to learn normal patterns, to recognize deviations from these patterns, and to reduce the dimensionality of data with minimum error were evaluated by experiments at the Oak Ridge National Laboratory (ORNL) High-Flux Isotope Reactor. Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the pattern recognition system

  19. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1975-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system. 19 references

  20. Statistical Analysis of Sport Movement Observations: the Case of Orienteering

    Science.gov (United States)

    Amouzandeh, K.; Karimipour, F.

    2017-09-01

    The study of movement observations is becoming more popular in several applications. Particularly, analyzing sport movement time series has been considered as a demanding area. However, most of the attempts made at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.

  1. Statistical Mechanics Analysis of ATP Binding to a Multisubunit Enzyme

    International Nuclear Information System (INIS)

    Zhang Yun-Xin

    2014-01-01

    Due to inter-subunit communication, multisubunit enzymes usually hydrolyze ATP in a concerted fashion. However, so far the principle of this process remains poorly understood. In this study, a simple model is presented from the viewpoint of statistical mechanics. In this model, we assume that the binding of ATP will change the potential of the corresponding enzyme subunit, and that the degree of this change depends on the state of its adjacent subunits. The probability of the enzyme being in a given state satisfies the Boltzmann distribution. Although quite simple, this model fits the recent experimental data for chaperonin TRiC/CCT well. From this model, the dominant state of TRiC/CCT can be obtained. This study provides a new way to understand biophysical processes by statistical mechanics analysis. (interdisciplinary physics and related areas of science and technology)
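    A toy version of such a model: subunits on a ring, each empty or ATP-bound, with a nearest-neighbour coupling energy standing in for inter-subunit communication. State probabilities follow the Boltzmann distribution over the enumerated energies. The parameters are illustrative, not the fitted TRiC/CCT values:

```python
import numpy as np
from itertools import product

def state_probabilities(n, eps, J, kT=1.0):
    """Boltzmann probabilities for a ring of n subunits, each empty (0)
    or ATP-bound (1). eps: binding energy per bound subunit; J: coupling
    energy between adjacent bound subunits. A toy model only."""
    states = list(product([0, 1], repeat=n))
    energies = []
    for s in states:
        e = eps * sum(s)
        e += J * sum(s[i] * s[(i + 1) % n] for i in range(n))
        energies.append(e)
    w = np.exp(-np.array(energies) / kT)
    return states, w / w.sum()

# Favourable binding (eps < 0) with cooperative coupling (J < 0)
states, p = state_probabilities(4, eps=-1.0, J=-0.5)
dominant = states[int(np.argmax(p))]   # the fully bound state dominates
```

    With cooperative (negative) coupling the fully bound ring has the lowest energy, reproducing the concerted-binding behaviour qualitatively.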

  2. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
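    Thurstone Case V scaling of pairwise preference counts can be sketched directly: convert win proportions to probits and average each row. The win matrix below is hypothetical (four enhancement levels, 20 comparisons per pair):

```python
import numpy as np
from statistics import NormalDist

# Hypothetical pairwise counts: wins[i, j] = trials where level i was
# preferred over level j (20 comparisons per pair)
wins = np.array([[ 0, 12, 16, 18],
                 [ 8,  0, 13, 15],
                 [ 4,  7,  0, 12],
                 [ 2,  5,  8,  0]])
n_trials = 20

P = wins / n_trials
np.fill_diagonal(P, 0.5)                    # an item "ties" with itself
Z = np.vectorize(NormalDist().inv_cdf)(P)   # probit transform
scale = Z.mean(axis=1)                      # Thurstone Case V scale values
ranking = np.argsort(scale)[::-1]           # levels from most to least preferred
```

    As the abstract notes, this yields an arbitrary perceptual scale but no significance test; a logistic (Bradley-Terry) regression on the same counts would supply standard errors for the scale differences.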

  3. A statistical framework for differential network analysis from microarray data

    Directory of Open Access Journals (Sweden)

    Datta Somnath

    2010-02-01

    Full Text Available Abstract Background It has long been well known that genes do not act alone; rather, groups of genes act in concert during a biological process. Consequently, the expression levels of genes are dependent on each other. Experimental techniques to detect such interacting pairs of genes have been in place for quite some time. With the advent of microarray technology, newer computational techniques to detect such interaction or association between gene expressions are being proposed, which lead to an association network. While most microarray analyses look for genes that are differentially expressed, it is of potentially greater significance to identify how entire association network structures change between two or more biological settings, say normal versus diseased cell types. Results We provide a recipe for conducting a differential analysis of networks constructed from microarray data under two experimental settings. At the core of our approach lies a connectivity score that represents the strength of genetic association or interaction between two genes. We use this score to propose formal statistical tests for each of the following queries: (i whether the overall modular structures of the two networks are different, (ii whether the connectivity of a particular set of "interesting genes" has changed between the two networks, and (iii whether the connectivity of a given single gene has changed between the two networks. A number of examples of this score are provided. We carried out our method on two types of simulated data: Gaussian networks and networks based on differential equations. We show that, for appropriate choices of the connectivity scores and tuning parameters, our method works well on simulated data. We also analyze a real data set involving normal versus heavy mice and identify an interesting set of genes that may play key roles in obesity. Conclusions Examining changes in network structure can provide valuable information about the

  4. Statistical analysis of the spatial distribution of galaxies and clusters

    International Nuclear Information System (INIS)

    Cappi, Alberto

    1993-01-01

    This thesis deals with the analysis of the distribution of galaxies and clusters, describing some observational problems and statistical results. The first chapter gives a theoretical introduction, aiming to describe the framework of the formation of structures, tracing the history of the Universe from the Planck time, t_p = 10^-43 sec and a temperature corresponding to 10^19 GeV, to the present epoch. The most usual statistical tools and models of the galaxy distribution, with their advantages and limitations, are described in chapter two. A study of the main observed properties of galaxy clustering, together with a detailed statistical analysis of the effects of selecting galaxies according to apparent magnitude or diameter, is reported in chapter three. Chapter four delineates some properties of groups of galaxies, explaining the reasons for discrepant results on group distributions. Chapter five is a study of the distribution of galaxy clusters, with different statistical tools, like correlations, percolation, the void probability function and counts in cells; the same scaling-invariant behaviour as for galaxies is found. Chapter six describes our finding that rich galaxy clusters too belong to the fundamental plane of elliptical galaxies, and gives a discussion of its possible implications. Finally chapter seven reviews the possibilities offered by multi-slit and multi-fibre spectrographs, and I present some observational work on nearby and distant galaxy clusters. In particular, I show the opportunities offered by ongoing surveys of galaxies coupled with multi-object fibre spectrographs, focusing on the ESO Key Programme A galaxy redshift survey in the south galactic pole region, in which I collaborate, and on MEFOS, a multi-fibre instrument with automatic positioning. Published papers related to the work described in this thesis are reported in the last appendix. (author) [fr

  5. Finding differentially expressed genes in high dimensional data: Rank based test statistic via a distance measure.

    Science.gov (United States)

    Mathur, Sunil; Sadana, Ajit

    2015-12-01

    We present a rank-based test statistic for the identification of differentially expressed genes using a distance measure. The proposed test statistic is highly robust against extreme values and does not assume the distribution of the parent population. Simulation studies show that the proposed test is more powerful than some of the commonly used methods, such as the paired t-test, the Wilcoxon signed rank test, and significance analysis of microarray (SAM) under certain non-normal distributions. The asymptotic distribution of the test statistic and the p-value function are discussed. The application of the proposed method is shown using a real-life data set. © The Author(s) 2011.
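    For context, the familiar Wilcoxon signed-rank statistic used here as a comparison baseline can be computed from scratch; this is the standard baseline method, not the authors' distance-based statistic, and the paired measurements below are invented:

```python
import numpy as np

def signed_rank_statistic(x, y):
    """Wilcoxon signed-rank W+ for paired samples: rank the absolute
    differences and sum the ranks of the positive ones."""
    d = np.asarray(x, float) - np.asarray(y, float)
    d = d[d != 0]                              # zero differences are dropped
    order = np.argsort(np.abs(d))
    ranks = np.empty(d.size, dtype=float)
    ranks[order] = np.arange(1, d.size + 1)
    absd = np.abs(d)
    for v in np.unique(absd):                  # average ranks for tied |d|
        mask = absd == v
        ranks[mask] = ranks[mask].mean()
    return ranks[d > 0].sum()

# Hypothetical paired expression values for nine genes
x = [1.83, 0.50, 1.62, 2.48, 1.68, 1.88, 1.55, 3.06, 1.30]
y = [0.878, 0.647, 0.598, 2.05, 1.06, 1.29, 1.06, 3.14, 1.29]
W = signed_rank_statistic(x, y)
```

    Being rank-based, W+ is insensitive to how extreme the large differences are, which is the robustness property the abstract emphasises.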

  6. Statistical analysis plan for the EuroHYP-1 trial

    DEFF Research Database (Denmark)

    Winkel, Per; Bath, Philip M; Gluud, Christian

    2017-01-01

    ... Score; (4) brain infarct size at 48 +/- 24 hours; (5) EQ-5D-5L score, and (6) WHODAS 2.0 score. Other outcomes are: the primary safety outcome serious adverse events; and the incremental cost-effectiveness and cost utility ratios. The analysis sets include (1) the intention-to-treat population, and (2) the per protocol population. The sample size is estimated to 800 patients (5% type 1 and 20% type 2 errors). All analyses are adjusted for the protocol-specified stratification variables (nationality of centre), and the minimisation variables. In the analysis, we use ordinal regression (the primary outcome), logistic regression (binary outcomes), general linear model (continuous outcomes), and the Poisson or negative binomial model (rate outcomes). DISCUSSION: Major adjustments compared with the original statistical analysis plan encompass: (1) adjustment of analyses by nationality; (2) power...

  7. Data and statistical methods for analysis of trends and patterns

    International Nuclear Information System (INIS)

    Atwood, C.L.; Gentillon, C.D.; Wilson, G.E.

    1992-11-01

    This report summarizes topics considered at a working meeting on data and statistical methods for analysis of trends and patterns in US commercial nuclear power plants. This meeting was sponsored by the Office of Analysis and Evaluation of Operational Data (AEOD) of the Nuclear Regulatory Commission (NRC). Three data sets are briefly described: Nuclear Plant Reliability Data System (NPRDS), Licensee Event Report (LER) data, and Performance Indicator data. Two types of study are emphasized: screening studies, to see if any trends or patterns appear to be present; and detailed studies, which are more concerned with checking the analysis assumptions, modeling any patterns that are present, and searching for causes. A prescription is given for a screening study, and ideas are suggested for a detailed study, when the data take any of three forms: counts of events per time, counts of events per demand, and non-event data

  8. International Conference on Modern Problems of Stochastic Analysis and Statistics

    CERN Document Server

    2017-01-01

    This book brings together the latest findings in the area of stochastic analysis and statistics. The individual chapters cover a wide range of topics, from limit theorems, Markov processes, and nonparametric methods to actuarial science, population dynamics, and many others. The volume is dedicated to Valentin Konakov, head of the International Laboratory of Stochastic Analysis and its Applications, on the occasion of his 70th birthday. Contributions were prepared by the participants of the international conference “Modern problems of stochastic analysis and statistics”, held at the Higher School of Economics in Moscow from May 29 to June 2, 2016. It offers a valuable reference resource for researchers and graduate students interested in modern stochastics.

  9. Statistical methods for data analysis in particle physics

    CERN Document Server

    Lista, Luca

    2017-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). First, the book provides an introduction to probability theory and basic statistics, mainly intended as a refresher from readers’ advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on both discoveries and upper limits, as many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as to eventually be able to distinguish between competing hypotheses, or to rule out some of them altogether. Many worked-out examples will help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical co...

  10. A Statistical Framework for the Functional Analysis of Metagenomes

    Energy Technology Data Exchange (ETDEWEB)

    Sharon, Itai; Pati, Amrita; Markowitz, Victor; Pinter, Ron Y.

    2008-10-01

    Metagenomic studies consider the genetic makeup of microbial communities as a whole, rather than their individual member organisms. The functional and metabolic potential of microbial communities can be analyzed by comparing the relative abundance of gene families in their collective genomic sequences (metagenome) under different conditions. Such comparisons require accurate estimation of gene family frequencies. They present a statistical framework for assessing these frequencies based on the Lander-Waterman theory developed originally for Whole Genome Shotgun (WGS) sequencing projects. They also provide a novel method for assessing the reliability of the estimations which can be used for removing seemingly unreliable measurements. They tested their method on a wide range of datasets, including simulated genomes and real WGS data from sequencing projects of whole genomes. Results suggest that their framework corrects inherent biases in accepted methods and provides a good approximation to the true statistics of gene families in WGS projects.
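    A minimal sketch of gene family frequency estimation with a crude reliability filter of the kind described: estimate frequencies from read counts, attach Poisson standard errors, and drop families whose relative error is too large before comparing conditions. The counts and thresholds are invented; the paper's Lander-Waterman-based framework is more elaborate:

```python
import numpy as np

# Hypothetical read counts per gene family in two metagenome samples
hits_a = np.array([120, 30, 5, 260, 2])
hits_b = np.array([ 90, 55, 4, 300, 1])
n_a, n_b = 40000, 50000                    # total reads per sample

freq_a = hits_a / n_a                      # gene family frequency estimates
freq_b = hits_b / n_b
se_a = np.sqrt(hits_a) / n_a               # Poisson standard errors
se_b = np.sqrt(hits_b) / n_b
# Crude reliability filter: drop families whose relative SE exceeds 25%
reliable = (se_a / freq_a < 0.25) & (se_b / freq_b < 0.25)
z = (freq_a - freq_b) / np.sqrt(se_a ** 2 + se_b ** 2)  # condition comparison
```

    Because the relative error is 1/sqrt(count), rarely sampled families are exactly the ones the filter removes.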

  11. Determination of the minimum size of a statistical representative volume element from a fibre-reinforced composite based on point pattern statistics

    DEFF Research Database (Denmark)

    Hansen, Jens Zangenberg; Brøndsted, Povl

    2013-01-01

    In a previous study, Trias et al. [1] determined the minimum size of a statistical representative volume element (SRVE) of a unidirectional fibre-reinforced composite, primarily based on numerical analyses of the stress/strain field. In continuation of this, the present study determines the minimum size of an SRVE based on a statistical analysis of the spatial statistics of the fibre packing patterns found in genuine laminates, and those generated numerically using a microstructure generator. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
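    Point pattern statistics of fibre centres are commonly summarised with functions such as Ripley's K. The sketch below is a naive, stdlib-only estimate (no edge correction) applied to a synthetic random pattern; the point set and window are illustrative, not taken from the study:

```python
import math
import random

def ripley_k(points, r, area):
    """Naive Ripley's K estimate (no edge correction): count ordered pairs
    closer than r, scaled by area / (n * (n - 1))."""
    n = len(points)
    count = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                count += 1
    return area * count / (n * (n - 1))

random.seed(0)
# Synthetic fibre centres: 200 uniform points in the unit square
pts = [(random.random(), random.random()) for _ in range(200)]
k = ripley_k(pts, 0.1, 1.0)
# Under complete spatial randomness K(r) = pi * r^2, about 0.0314 for r = 0.1
print(k)
```

    Deviations of the estimated K(r) from pi*r^2 indicate clustering or regularity of the fibre packing at scale r.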

  12. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-friction force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is a random sequence alternating between stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables, for example the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to perform a parametric analysis of the statistical model of the stick-slip process.
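    As a highly simplified stand-in for the full oscillator dynamics, the Monte Carlo estimation of stick-slip statistics can be sketched with a toy two-state mode process. The transition probabilities and sample sizes below are invented for illustration only:

```python
import random
import statistics

def simulate_modes(n_steps, p_stick_to_slip, p_slip_to_stick, rng):
    """Toy two-state mode sequence: True = stick, False = slip."""
    mode = True
    seq = []
    for _ in range(n_steps):
        seq.append(mode)
        if mode and rng.random() < p_stick_to_slip:
            mode = False
        elif not mode and rng.random() < p_slip_to_stick:
            mode = True
    return seq

def stick_durations(seq):
    """Lengths of maximal runs of consecutive stick steps."""
    durations, run = [], 0
    for stick in seq:
        if stick:
            run += 1
        elif run > 0:
            durations.append(run)
            run = 0
    if run > 0:
        durations.append(run)
    return durations

rng = random.Random(1)
samples = [simulate_modes(1000, 0.1, 0.2, rng) for _ in range(200)]
all_durations = [d for s in samples for d in stick_durations(s)]
mean_sticks = statistics.mean(len(stick_durations(s)) for s in samples)
mean_dur = statistics.mean(all_durations)
# Geometric holding times: mean stick duration should be near 1/0.1 = 10 steps
print(mean_sticks, mean_dur)
```

    Histograms of the collected durations play the role of the estimated statistics discussed in the paper.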

  13. Consolidity analysis for fully fuzzy functions, matrices, probability and statistics

    Directory of Open Access Journals (Sweden)

    Walaa Ibrahim Gabr

    2015-03-01

    Full Text Available The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in a fully fuzzy environment. The development of systems consolidity theory includes its extension to handle new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fractions of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covers determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and the solution of least-squares fuzzy linear equations. The approach is also shown to be applicable in a systematic way to new fuzzy probabilistic and statistical problems, including extending conventional probabilistic and statistical analysis to handle fuzzy random data. Applications also cover the consolidity of fuzzy optimization problems. Various worked numerical examples demonstrate that the new consolidity concept is highly effective in solving, in a compact form, the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the suggested fuzzy mathematics can easily be embedded within normal mathematics by building a special fuzzy functions library inside the computational Matlab Toolbox or using other similar software languages.
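    A minimal entry point to fully fuzzy arithmetic is the triangular fuzzy number (left, mode, right). The sketch below uses standard vertex arithmetic (exact for addition, a common approximation for multiplication of positive fuzzy numbers); it illustrates fuzziness propagation in general, not the paper's consolidity machinery:

```python
def tfn_add(x, y):
    """Exact addition of triangular fuzzy numbers (left, mode, right)."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def tfn_mul_pos(x, y):
    """Vertex-wise product; a valid triangular approximation only when
    both supports are strictly positive."""
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

A = (1.0, 2.0, 3.0)
B = (4.0, 5.0, 6.0)
print(tfn_add(A, B))      # (5.0, 7.0, 9.0)
print(tfn_mul_pos(A, B))  # (4.0, 10.0, 18.0)
```

    Note how the support of the product is wider than that of either operand, the simplest example of fuzziness propagating through a computation.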

  14. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
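    The combination of a factorial design with a regression metamodel can be sketched as follows. The "simulation" is a hypothetical stand-in for a System Dynamics run, and for an orthogonal coded 2^k design each regression coefficient reduces to a signed average of the responses:

```python
import itertools

def simulate(x1, x2):
    """Hypothetical system response standing in for a System Dynamics run."""
    return 3.0 + 2.0 * x1 - 1.0 * x2

# 2^2 full factorial design in coded units (-1 = low, +1 = high)
design = list(itertools.product([-1, 1], repeat=2))
responses = [simulate(x1, x2) for x1, x2 in design]

# For an orthogonal coded design, OLS reduces to signed averages.
n = len(design)
intercept = sum(responses) / n
effects = [sum(x[j] * y for x, y in zip(design, responses)) / n
           for j in range(2)]
print(intercept, effects)  # 3.0 [2.0, -1.0]
```

    The fitted effects serve as sensitivity measures: here factor 1 raises the response by 2 units per coded unit, factor 2 lowers it by 1.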

  15. Statistical Analysis of Designed Experiments Theory and Applications

    CERN Document Server

    Tamhane, Ajit C

    2012-01-01

    An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the

  16. A Statistical Analysis of Cointegration for I(2) Variables

    DEFF Research Database (Denmark)

    Johansen, Søren

    1995-01-01

    be conducted using the χ² distribution. It is shown to what extent inference on the cointegration ranks can be conducted using the tables already prepared for the analysis of cointegration of I(1) variables. New tables are needed for the test statistics to control the size of the tests. This paper...... contains a multivariate test for the existence of I(2) variables. This test is illustrated using a data set consisting of U.K. and foreign prices and interest rates as well as the exchange rate....
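    For readers new to cointegration, the underlying I(1) idea, a stationary linear combination of individually nonstationary series, can be illustrated with a toy simulation. This is not the paper's I(2) analysis, and all parameters are invented:

```python
import random
import statistics

rng = random.Random(42)

# x is a pure random walk, i.e. an I(1) series.
x = [0.0]
for _ in range(2000):
    x.append(x[-1] + rng.gauss(0, 1))

# y shares x's stochastic trend plus stationary noise, so (1, -2) is a
# cointegrating vector: y - 2x is stationary although x and y are not.
y = [2.0 * xi + rng.gauss(0, 1) for xi in x]
resid = [yi - 2.0 * xi for xi, yi in zip(x, y)]

# The cointegrating residual has bounded spread; the raw series wander.
print(statistics.pstdev(resid), statistics.pstdev(y))
```

    I(2) variables extend this picture to series that must be differenced twice to become stationary, which is what motivates the new tables in the paper.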

  17. Using R for Data Management, Statistical Analysis, and Graphics

    CERN Document Server

    Horton, Nicholas J

    2010-01-01

    This title offers quick and easy access to key elements of documentation. It includes worked examples across a wide variety of applications, tasks, and graphics. "Using R for Data Management, Statistical Analysis, and Graphics" presents an easy way to learn how to perform an analytical task in R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation and the vast number of add-on packages. Organized by short, clear descriptive entries, the book covers many common tasks, such as data management, descriptive summaries, inferential proc

  18. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  19. Statistical analysis of the W Cyg light curve

    International Nuclear Information System (INIS)

    Klyus, I.A.

    1983-01-01

    A statistical analysis of the light curve of W Cygni has been carried out. The process of brightness variations of the star is shown to be a stationary stochastic one. The hypothesis of stationarity of the process was checked at the significance level α=0.05. Oscillations of the brightness with average durations of 131 and 250 days have been found. It is proved that the oscillations are narrow-band noise, i.e. cycles. Peaks on the power spectrum corresponding to these cycles exceed the 99% confidence interval. It has been stated that the oscillations are independent
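    A power-spectrum search for cycles of this kind can be sketched with a crude periodogram on a synthetic light curve with a known 25-sample period (all values below are illustrative, not the W Cyg data):

```python
import cmath
import math
import random

rng = random.Random(7)
n = 200
# Synthetic "light curve": a 25-sample cycle plus Gaussian noise
series = [math.sin(2 * math.pi * t / 25) + 0.3 * rng.gauss(0, 1)
          for t in range(n)]

def periodogram(x):
    """Raw periodogram |DFT|^2 / n at frequencies k/n, k = 1 .. n/2 - 1."""
    m = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / m)
                    for t in range(m))) ** 2 / m
            for k in range(1, m // 2)]

p = periodogram(series)
peak_k = 1 + max(range(len(p)), key=lambda i: p[i])
print(peak_k, n / peak_k)  # peak at k = 8, i.e. recovered period 200/8 = 25
```

    A narrow-band cycle shows up as a sharp peak like this one; significance is then judged by comparing the peak height against a noise-level confidence bound.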

  20. CFAssay: statistical analysis of the colony formation assay

    International Nuclear Information System (INIS)

    Braselmann, Herbert; Michna, Agata; Heß, Julia; Unger, Kristian

    2015-01-01

    Colony formation assay is the gold standard to determine cell reproductive death after treatment with ionizing radiation, applied for different cell lines or in combination with other treatment modalities. Associated linear-quadratic cell survival curves can be calculated with different methods. For easy code exchange and methodological standardisation among collaborating laboratories, a software package CFAssay for R (R Core Team, R: A Language and Environment for Statistical Computing, 2014) was established to perform thorough statistical analysis of linear-quadratic cell survival curves after treatment with ionizing radiation and of two-way designs of experiments with chemical treatments only. CFAssay offers maximum likelihood and related methods by default; the least squares or weighted least squares method can be optionally chosen. A test for comparison of cell survival curves and an ANOVA test for experimental two-way designs are provided. For the two presented examples, the estimated parameters do not differ much between maximum likelihood and least squares. However, the dispersion parameter of the quasi-likelihood method is much more sensitive to statistical variation in the data than the multiple R² coefficient of determination from the least squares method. The dispersion parameter for goodness of fit and the different plot functions in CFAssay help to evaluate experimental data quality. As open-source software, CFAssay facilitates interlaboratory code sharing between users.
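    The linear-quadratic model ln S(D) = -(alpha*D + beta*D^2) can be fitted by least squares with nothing more than 2x2 normal equations. CFAssay's default is maximum likelihood; this stdlib-only sketch shows the least-squares route on made-up survival fractions:

```python
import math

# Made-up clonogenic survival fractions at doses in Gy
doses = [0, 1, 2, 4, 6, 8]
surv = [1.0, 0.7, 0.45, 0.15, 0.04, 0.008]

# Model: ln S(D) = a*D + b*D^2 with a = -alpha, b = -beta (no intercept).
y = [math.log(s) for s in surv]
s_dd = sum(d ** 2 for d in doses)
s_dd2 = sum(d ** 3 for d in doses)
s_d2d2 = sum(d ** 4 for d in doses)
s_dy = sum(d * yi for d, yi in zip(doses, y))
s_d2y = sum(d ** 2 * yi for d, yi in zip(doses, y))

# Solve the 2x2 normal equations by Cramer's rule.
det = s_dd * s_d2d2 - s_dd2 ** 2
a = (s_dy * s_d2d2 - s_d2y * s_dd2) / det
b = (s_dd * s_d2y - s_dd2 * s_dy) / det
alpha, beta = -a, -b
print(alpha, beta)  # roughly alpha ≈ 0.34 Gy^-1, beta ≈ 0.033 Gy^-2
```

    Weighted least squares and the maximum-likelihood route differ mainly in how counting error on the colony numbers enters the fit.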