WorldWideScience

Sample records for sas statistical analysis

  1. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  2. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2009-01-01

    An All-in-One Resource for Using SAS and R to Carry Out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applications...

  3. Conducting Meta-Analysis Using SAS

    CERN Document Server

    Arthur, Winfred; Huffcutt, Allen I

    2001-01-01

    Conducting Meta-Analysis Using SAS reviews the meta-analysis statistical procedure and shows the reader how to conduct one using SAS. It presents and illustrates the use of the PROC MEANS procedure in SAS to perform the data computations called for by the two most commonly used meta-analytic procedures, the Hunter & Schmidt and Glassian approaches. This book serves as both an operational guide and user's manual by describing and explaining the meta-analysis procedures and then presenting the appropriate SAS program code for computing the pertinent statistics. The practical, step-by-step instructions...

  4. Essentials of Excel, Excel VBA, SAS and Minitab for statistical and financial analyses

    CERN Document Server

    Lee, Cheng-Few; Chang, Jow-Ran; Tai, Tzu

    2016-01-01

    This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, MINITAB, and SAS. Every chapter in this textbook engages the reader with data on individual stocks, stock indices, options, and futures. Readers study and use statistics to learn how to analyze and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and MINITAB. Of those, we look at MINITAB and SAS in this textbook. One of the main reasons to use MINITAB is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities...

  5. Statistical hypothesis testing with SAS and R

    CERN Document Server

    Taeger, Dirk

    2014-01-01

    A comprehensive guide to statistical hypothesis testing with examples in SAS and R. When analyzing datasets, the following questions often arise: Is there a shorthand procedure for a statistical test available in SAS or R? If so, how do I use it? If not, how do I program the test myself? This book answers these questions and provides an overview of the most common statistical test problems in a comprehensive way, making it easy to find and perform an appropriate statistical test. A general summary of statistical test theory is presented, along with a basic description for each test, including the

  6. SAS for dummies

    CERN Document Server

    McDaniel, Stephen

    2010-01-01

    The fun and easy way to learn to use this leading business intelligence tool. Written by an author team directly involved with SAS, this easy-to-follow guide is fully updated for the latest release of SAS and covers just what you need to put this popular software to work in your business. SAS allows any business or enterprise to improve data delivery, analysis, reporting, movement across a company, data mining, forecasting, statistical analysis, and more. SAS For Dummies, 2nd Edition gives you the necessary background on what SAS can do for you and explains how to use the Enterprise Guide. SAS provides statistical and data analysis tools to help you deal with all kinds of data: operational, financial, performance, and more. The book places special emphasis on Enterprise Guide and other analytical tools, covers all commonly used features, shows you the practical applications you can put to work in your business, and explores how to get various types of data into the software and...

  7. SAS essentials mastering SAS for data analytics

    CERN Document Server

    Elliott, Alan C

    2015-01-01

    A step-by-step introduction to using SAS® statistical software as a foundational approach to data analysis and interpretation Presenting a straightforward introduction from the ground up, SAS® Essentials: Mastering SAS for Data Analytics, Second Edition illustrates SAS using hands-on learning techniques and numerous real-world examples. Keeping different experience levels in mind, the highly-qualified author team has developed the book over 20 years of teaching introductory SAS courses. Divided into two sections, the first part of the book provides an introduction to data manipulation, st

  8. Design and analysis of experiments with SAS

    CERN Document Server

    Lawson, John

    2010-01-01

    Introduction: Statistics and Data Collection; Beginnings of Statistically Planned Experiments; Definitions and Preliminaries; Purposes of Experimental Design; Types of Experimental Designs; Planning Experiments; Performing the Experiments; Use of SAS Software. Completely Randomized Designs with One Factor: Introduction; Replication and Randomization; A Historical Example; Linear Model for Completely Randomized Design (CRD); Verifying Assumptions of the Linear Model; Analysis Strategies When Assumptions Are Violated; Determining the Number of Replicates; Comparison of Treatments after the F-Test. Factorial Designs

  9. [A SAS macro program for batch processing of univariate Cox regression analysis for large databases].

    Science.gov (United States)

    Yang, Rendong; Xiong, Jie; Peng, Yangqin; Peng, Xiaoning; Zeng, Xiaomin

    2015-02-01

    To realize batch processing of univariate Cox regression analysis for large databases with a SAS macro program. We wrote a SAS macro program in SAS 9.2 that can filter and integrate results and export P values to Excel. The program was used for screening survival-correlated RNA molecules in ovarian cancer. The SAS macro program could complete the batch processing of univariate Cox regression analyses, along with the selection and export of the results. The SAS macro program has potential applications in reducing the workload of statistical analysis and providing a basis for batch processing of univariate Cox regression analysis.
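    The batch-screening pattern described here can be sketched in a few lines. The following Python example (an illustration of the idea, not the authors' SAS macro) runs a univariate Cox proportional-hazards score test for each of many candidate molecules and collects the P values; the data, sample sizes, and variable names are all hypothetical:

```python
import numpy as np
from math import erfc, sqrt

def cox_score_test(time, event, x):
    """Univariate Cox proportional-hazards score test at beta = 0.
    Assumes no tied event times; returns (z, two-sided normal p-value)."""
    order = np.argsort(time)
    time, event, x = time[order], event[order], x[order]
    u = 0.0   # score statistic
    v = 0.0   # observed information
    for i in range(len(time)):
        if event[i]:
            risk = x[i:]              # samples still at risk at this event time
            u += x[i] - risk.mean()
            v += risk.var()           # population variance over the risk set
    z = u / sqrt(v)
    return z, erfc(abs(z) / sqrt(2))

# batch screening: loop over candidate molecules and collect P values
rng = np.random.default_rng(7)
n = 120
expr = rng.normal(size=(n, 50))                   # 50 hypothetical RNA molecules
time = rng.exponential(1.0 / np.exp(expr[:, 0]))  # molecule 0 truly prognostic
event = np.ones(n, dtype=bool)                    # no censoring in this sketch
pvals = np.array([cox_score_test(time, event, expr[:, j])[1] for j in range(50)])
```

    From here the remaining steps the macro automates are bookkeeping: filter `pvals` below a threshold and export the surviving molecules, e.g. with `numpy.savetxt`.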

  10. State Space Modeling Using SAS

    Directory of Open Access Journals (Sweden)

    Rajesh Selukar

    2011-05-01

    This article provides a brief introduction to the state space modeling capabilities in SAS, a well-known statistical software system. SAS provides state space modeling in a few different settings. SAS/ETS, the econometric and time series analysis module of the SAS system, contains many procedures that use state space models to analyze univariate and multivariate time series data. In addition, SAS/IML, an interactive matrix language in the SAS system, provides Kalman filtering and smoothing routines for stationary and nonstationary state space models. SAS/IML also provides support for linear algebra and nonlinear function optimization, which makes it a convenient environment for general-purpose state space modeling.
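    For readers unfamiliar with the machinery such procedures wrap, here is a minimal sketch (in Python rather than SAS/IML) of the Kalman filter recursion for the simplest state space model, the local level model; all parameter values and data are hypothetical:

```python
import numpy as np

def kalman_filter(y, a1, p1, sigma2_eps, sigma2_eta):
    """Kalman filter for the local-level model:
       y_t = alpha_t + eps_t,   alpha_{t+1} = alpha_t + eta_t."""
    a, p = a1, p1
    filtered = []
    for yt in y:
        v = yt - a                     # one-step prediction error
        f = p + sigma2_eps             # prediction-error variance
        k = p / f                      # Kalman gain
        a = a + k * v                  # filtered state estimate
        p = p * (1.0 - k) + sigma2_eta # state variance, propagated one step
        filtered.append(a)
    return np.array(filtered)

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.5, 200))   # latent random-walk level
y = level + rng.normal(0, 1.0, 200)          # noisy observations
est = kalman_filter(y, a1=0.0, p1=1e6, sigma2_eps=1.0, sigma2_eta=0.25)
```

    The diffuse-looking initial variance `p1=1e6` mimics the uninformative initialization commonly used when the starting level is unknown; the filtered series tracks the latent level far more closely than the raw observations do.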

  11. CTTITEM: SAS macro and SPSS syntax for classical item analysis.

    Science.gov (United States)

    Lei, Pui-Wa; Wu, Qiong

    2007-08-01

    This article describes the functions of a SAS macro and an SPSS syntax that produce common statistics for conventional item analysis including Cronbach's alpha, item difficulty index (p-value or item mean), and item discrimination indices (D-index, point biserial and biserial correlations for dichotomous items and item-total correlation for polytomous items). These programs represent an improvement over the existing SAS and SPSS item analysis routines in terms of completeness and user-friendliness. To promote routine evaluations of item qualities in instrument development of any scale, the programs are available at no charge for interested users. The program codes along with a brief user's manual that contains instructions and examples are downloadable from suen.ed.psu.edu/-pwlei/plei.htm.
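    The statistics this program computes are straightforward to reproduce; below is a hedged sketch (in Python rather than SAS/SPSS, using simulated dichotomous responses) of the three core quantities: item difficulty (the CTT "p-value"), corrected item-total (point-biserial) discrimination, and Cronbach's alpha:

```python
import numpy as np

def item_analysis(scores):
    """Classical item statistics for a persons-by-items 0/1 score matrix."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    total = scores.sum(axis=1)
    # item difficulty: proportion answering correctly (the "p-value" of CTT)
    difficulty = scores.mean(axis=0)
    # corrected item-total (point-biserial) discrimination: item vs. rest-score
    discrimination = np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(n_items)
    ])
    # Cronbach's alpha from item and total-score variances
    item_var = scores.var(axis=0, ddof=1).sum()
    alpha = n_items / (n_items - 1) * (1 - item_var / total.var(ddof=1))
    return difficulty, discrimination, alpha

# hypothetical 10-item test: probability correct rises with a latent ability
rng = np.random.default_rng(1)
ability = rng.normal(size=500)
probs = 1 / (1 + np.exp(-(ability[:, None] - np.linspace(-1, 1, 10))))
data = (rng.random((500, 10)) < probs).astype(int)
diff, disc, alpha = item_analysis(data)
```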

  12. Extending and Enhancing SAS (Static Analysis Suite)

    CERN Document Server

    Ho, David

    2016-01-01

    The Static Analysis Suite (SAS) is an open-source software package used to perform static analysis on C and C++ code, helping to ensure safety, readability and maintainability. In this Summer Student project, SAS was enhanced to improve ease of use and user customisation. A straightforward method of integrating static analysis into a project at compilation time was provided using the automated build tool CMake. The process of adding checkers to the suite was streamlined and simplified by developing an automatic code generator. To make SAS more suitable for continuous integration, a reporting mechanism summarising results was added. This suitability has been demonstrated by inclusion of SAS in the Future Circular Collider Software nightly build system. Scalability of the improved package was demonstrated by using the tool to analyse the ROOT code base.

  13. SAS validation and analysis of in-pile TUCOP experiments

    International Nuclear Information System (INIS)

    Morman, J.A.; Tentner, A.M.; Dever, D.J.

    1985-01-01

    The validation of the SAS4A accident analysis code centers on its capability to calculate the wide range of tests performed in the TREAT (Transient Reactor Test Facility) in-pile experiments program. This paper presents the SAS4A analysis of a simulated TUCOP (Transient-Under-Cooled-Over-Power) experiment using seven full-length PFR mixed oxide fuel pins in a flowing sodium loop. Calculations agree well with measured thermal-hydraulic, pin failure time and post-failure fuel motion data. The extent of the agreement confirms the validity of the models used in the SAS4A code to describe TUCOP accidents

  14. Analysis of metal fuel transient overpower experiments with the SAS4A accident analysis code

    International Nuclear Information System (INIS)

    Tentner, A.M.; Kalimullah; Miles, K.J.

    1990-01-01

    The results of the SAS4A analysis of the M7 TREAT Metal fuel experiment are presented. New models incorporated in the metal fuel version of SAS4A are described. The computational results are compared with the experimental observations and this comparison is used in the interpretation of physical phenomena. This analysis was performed using the integrated metal fuel SAS4A version and covers a wide range of events, providing an increased degree of confidence in the SAS4A metal fuel accident analysis capabilities

  15. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  16. Social Analysis Systems (SAS2) - Phase III

    International Development Research Centre (IDRC) Digital Library (Canada)

    Scaling Up the International Impact of Action Research : Social Analysis ... up the international impact of action research : SAS phase 3; final technical report ... 000 Canadians abroad to work at the local level on various development issues.

  17. A new package: MySAS for small angle scattering data analysis

    International Nuclear Information System (INIS)

    Huang Chaoqiang; Xia Qingzhong; Yan Guanyun; Sun Guang'ai; Chen Bo

    2010-01-01

    This paper presents MySAS, a package verified on Windows XP that can easily convert two-dimensional data in small-angle neutron and X-ray scattering analysis, operate stand-alone, and execute individual operations such as numerical data reduction, analysis, and graphical visualization. The MySAS package implements the input and output routines by scanning certain properties, thus recalling complete sets of repeated input and selecting the input files. Starting from the two-dimensional files, the MySAS package can correct the anisotropic or isotropic data for physical interpretation and select the relevant pixels. Over 50 model functions can be fitted by the POWELL code using χ² as the figure of merit. (authors)

  18. Coupling the System Analysis Module with SAS4A/SASSYS-1

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H. [Argonne National Lab. (ANL), Argonne, IL (United States); Hu, R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-09-30

    SAS4A/SASSYS-1 is a simulation tool used to perform deterministic analysis of anticipated events as well as design basis and beyond design basis accidents for advanced reactors, with an emphasis on sodium fast reactors. SAS4A/SASSYS-1 has been under development and in active use for nearly forty-five years, and is currently maintained by the U.S. Department of Energy under the Office of Advanced Reactor Technology. Although SAS4A/SASSYS-1 contains a very capable primary and intermediate system modeling component, PRIMAR-4, it also has some shortcomings: outdated data management and code structure makes extension of the PRIMAR-4 module somewhat difficult. The user input format for PRIMAR-4 also limits the number of volumes and segments that can be used to describe a given system. The System Analysis Module (SAM) is a fairly new code development effort being carried out under the U.S. DOE Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. SAM is being developed with advanced physical models, numerical methods, and software engineering practices; however, it is currently somewhat limited in the system components and phenomena that can be represented. For example, component models for electromagnetic pumps and multi-layer stratified volumes have not yet been developed. Nor is there support for a balance of plant model. Similarly, system-level phenomena such as control-rod driveline expansion and vessel elongation are not represented. This report documents fiscal year 2016 work that was carried out to couple the transient safety analysis capabilities of SAS4A/SASSYS-1 with the system modeling capabilities of SAM under the joint support of the ART and NEAMS programs. The coupling effort was successful and is demonstrated by evaluating an unprotected loss of flow transient for the Advanced Burner Test Reactor (ABTR) design. There are differences between the stand-alone SAS4A/SASSYS-1 simulations and the coupled SAS/SAM simulations, but these are mainly

  19. Statistical analysis of medical data using SAS

    CERN Document Server

    Der, Geoff

    2005-01-01

    An Introduction to SAS; Describing and Summarizing Data; Basic Inference; Scatterplots, Correlation, Simple Regression and Smoothing; Analysis of Variance and Covariance; Multiple Regression; Logistic Regression; The Generalized Linear Model; Generalized Additive Models; Nonlinear Regression Models; The Analysis of Longitudinal Data I; The Analysis of Longitudinal Data II: Models for Normal Response Variables; The Analysis of Longitudinal Data III: Non-Normal Response; Survival Analysis; Analysis of Multivariate Data: Principal Components and Cluster Analysis; References

  20. Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population.

    Science.gov (United States)

    Janno, Sven; Holi, Matti M; Tuisku, Katinka; Wahlbeck, Kristian

    2005-03-17

    Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate SAS against DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's alpha, convergence to actometry and the capacity for NIP case detection were assessed. Cronbach's alpha for the scale was 0.79. SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with SAS. ROC analysis yielded a good case detection power for SAS mean score. The optimal threshold value of SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. We conclude that SAS seems a reliable and valid instrument. The previously commonly used cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity.
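    The cut-off selection step behind such an ROC analysis can be sketched in a few lines. The following Python example uses hypothetical scores constructed so that the Youden-optimal threshold lands at 0.65; it illustrates the method only and is not the study's data:

```python
import numpy as np

def best_cutoff(scores, labels):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
    A case is called positive when its score >= threshold."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    best_t, best_j = None, -np.inf
    for t in np.unique(scores):
        pred = scores >= t
        sens = (pred & labels).sum() / labels.sum()
        spec = (~pred & ~labels).sum() / (~labels).sum()
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# hypothetical SAS mean scores: NIP cases tend to score higher than non-cases
sas_nip = [0.7, 0.9, 1.1, 0.8, 0.65, 1.3, 0.95]
sas_non = [0.1, 0.3, 0.45, 0.2, 0.6, 0.35, 0.25]
scores = np.array(sas_nip + sas_non)
labels = np.array([1] * 7 + [0] * 7)
cutoff, j = best_cutoff(scores, labels)   # here: cutoff 0.65, J = 1.0
```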

  1. Validity of Simpson-Angus Scale (SAS) in a naturalistic schizophrenia population

    Directory of Open Access Journals (Sweden)

    Tuisku Katinka

    2005-03-01

    Abstract. Background: Simpson-Angus Scale (SAS) is an established instrument for neuroleptic-induced parkinsonism (NIP), but its statistical properties have been studied insufficiently. Some shortcomings concerning its content have been suggested as well. According to a recent report, the widely used SAS mean score cut-off value of 0.3 for NIP detection may be too low. Our aim was to evaluate SAS against DSM-IV diagnostic criteria for NIP and objective motor assessment (actometry). Methods: Ninety-nine chronic institutionalised schizophrenia patients were evaluated during the same interview by standardised actometric recording and SAS. The diagnosis of NIP was based on DSM-IV criteria. Internal consistency measured by Cronbach's α, convergence to actometry and the capacity for NIP case detection were assessed. Results: Cronbach's α for the scale was 0.79. SAS discriminated between DSM-IV NIP and non-NIP patients. The actometric findings did not correlate with SAS. ROC analysis yielded a good case detection power for SAS mean score. The optimal threshold value of SAS mean score was between 0.65 and 0.95, i.e. clearly higher than the previously suggested threshold value. Conclusion: We conclude that SAS seems a reliable and valid instrument. The previously commonly used cut-off mean score of 0.3 has been too low, resulting in low specificity, and we suggest a new cut-off value of 0.65, whereby specificity could be doubled without losing sensitivity.

  2. The SAS4A/SASSYS-1 Safety Analysis Code System, Version 5

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H. [Argonne National Lab. (ANL), Argonne, IL (United States); Brunett, A. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Sumner, T. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-01

    The SAS4A/SASSYS-1 computer code is developed by Argonne National Laboratory for thermal, hydraulic, and neutronic analysis of power and flow transients in liquid-metal-cooled nuclear reactors (LMRs). SAS4A was developed to analyze severe core disruption accidents with coolant boiling and fuel melting and relocation, initiated by a very low probability coincidence of an accident precursor and failure of one or more safety systems. SASSYS-1, originally developed to address loss-of-decay-heat-removal accidents, has evolved into a tool for margin assessment in design basis accident (DBA) analysis and for consequence assessment in beyond-design-basis accident (BDBA) analysis. SAS4A contains detailed, mechanistic models of transient thermal, hydraulic, neutronic, and mechanical phenomena to describe the response of the reactor core, its coolant, fuel elements, and structural members to accident conditions. The core channel models in SAS4A provide the capability to analyze the initial phase of core disruptive accidents, through coolant heat-up and boiling, fuel element failure, and fuel melting and relocation. Originally developed to analyze oxide fuel clad with stainless steel, the models in SAS4A have been extended and specialized to metallic fuel with advanced alloy cladding. SASSYS-1 provides the capability to perform a detailed thermal/hydraulic simulation of the primary and secondary sodium coolant circuits and the balance-of-plant steam/water circuit. These sodium and steam circuit models include component models for heat exchangers, pumps, valves, turbines, and condensers, and thermal/hydraulic models of pipes and plena. SASSYS-1 also contains a plant protection and control system modeling capability, which provides digital representations of reactor, pump, and valve controllers and their response to input signal changes.

  3. The Spectrum Analysis Solution (SAS) System: Theoretical Analysis, Hardware Design and Implementation.

    Science.gov (United States)

    Narayanan, Ram M; Pooler, Richard K; Martone, Anthony F; Gallagher, Kyle A; Sherbondy, Kelly D

    2018-02-22

    This paper describes a multichannel super-heterodyne signal analyzer, called the Spectrum Analysis Solution (SAS), which performs multi-purpose spectrum sensing to support spectrally adaptive and cognitive radar applications. The SAS operates from ultrahigh frequency (UHF) to the S-band and features a wideband channel with eight narrowband channels. The wideband channel acts as a monitoring channel that can be used to tune the instantaneous band of the narrowband channels to areas of interest in the spectrum. The data collected from the SAS has been utilized to develop spectrum sensing algorithms for the budding field of spectrum sharing (SS) radar. Bandwidth (BW), average total power, percent occupancy (PO), signal-to-interference-plus-noise ratio (SINR), and power spectral entropy (PSE) have been examined as metrics for the characterization of the spectrum. These metrics are utilized to determine a contiguous optimal sub-band (OSB) for a SS radar transmission in a given spectrum for different modalities. Three OSB algorithms are presented and evaluated: the spectrum sensing multi objective (SS-MO), the spectrum sensing with brute force PSE (SS-BFE), and the spectrum sensing multi-objective with brute force PSE (SS-MO-BFE).
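    Two of the metrics listed, power spectral entropy (PSE) and percent occupancy (PO), can be sketched from a periodogram. The Python illustration below uses hypothetical signals and a hypothetical -20 dB occupancy threshold; the exact definitions used by the SAS hardware may differ:

```python
import numpy as np

def spectral_metrics(x, thresh_db=-20.0):
    """Power spectral entropy (normalized Shannon entropy of the PSD) and
    percent occupancy (share of bins within thresh_db of the peak power)."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    psd = psd / psd.sum()                       # normalize to a probability mass
    pse = -(psd * np.log(psd + 1e-15)).sum() / np.log(len(psd))
    peak_db = 10 * np.log10(psd.max())
    occupied = 10 * np.log10(psd + 1e-15) > peak_db + thresh_db
    po = occupied.mean() * 100.0
    return pse, po

fs = 1000.0
t = np.arange(4096) / fs
tone = np.sin(2 * np.pi * 100 * t)              # single tone: sparse spectrum
rng = np.random.default_rng(3)
noise = rng.normal(size=t.size)                 # white noise: flat spectrum
pse_tone, po_tone = spectral_metrics(tone)
pse_noise, po_noise = spectral_metrics(noise)
```

    A concentrated spectrum (the tone) yields low entropy and low occupancy; a flat one (white noise) drives both metrics toward their maxima, which is why these quantities discriminate busy bands from quiet ones.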

  4. Validation of the metal fuel version of the SAS4A accident analysis code

    International Nuclear Information System (INIS)

    Tentner, A.M.

    1991-01-01

    This paper describes recent work directed towards the validation of the metal fuel version of the SAS4A accident analysis code. The SAS4A code system has been developed at Argonne National Laboratory for the simulation of hypothetical severe accidents in Liquid Metal-Cooled Reactors (LMR), designed to operate in a fast neutron spectrum. SAS4A was initially developed for the analysis of oxide-fueled liquid metal-cooled reactors and has played an important role in the simulation and assessment of the energetics potential for postulated severe accidents in these reactors. Due to the current interest in the metal-fueled liquid metal-cooled reactors, a metal fuel version of the SAS4A accident analysis code is being developed in the Integral Fast Reactor program at Argonne. During such postulated accident scenarios as the unprotected (i.e. without scram) loss-of-flow and transient overpower events, a large number of interrelated physical phenomena occur during a relatively short time. These phenomena include transient heat transfer and hydrodynamic events, coolant boiling, and fuel and cladding melting and relocation. Due to strong neutronic feedbacks these events can significantly influence the reactor power history in the accident progression. The paper presents the results of a recent SAS4A simulation of the M7 TREAT experiment. 6 refs., 5 figs

  5. SPSS and SAS programming for the testing of mediation models.

    Science.gov (United States)

    Dudley, William N; Benuzillo, Jose G; Carrico, Mineh S

    2004-01-01

    Mediation modeling can explain the nature of the relation among three or more variables. In addition, it can be used to show how a variable mediates the relation between levels of intervention and outcome. The Sobel test, developed in 1990, provides a statistical method for determining the influence of a mediator on an intervention or outcome. Although interactive Web-based and stand-alone methods exist for computing the Sobel test, SPSS and SAS programs that automatically run the required regression analyses and computations increase the accessibility of mediation modeling to nursing researchers. This article illustrates the utility of the Sobel test and makes this programming available to the Nursing Research audience in both SAS and SPSS. The history, logic, and technical aspects of mediation testing are introduced. The syntax files sobel.sps and sobel.sas, created to automate the computation of the regression analysis and test statistic, are available from the corresponding author. The reported programming allows the user to complete mediation testing with the user's own data in a single-step fashion. A technical manual included with the programming provides instruction on program use and interpretation of the output. Mediation modeling is a useful tool for describing the relation between three or more variables. Programming and manuals for using this model are made available.
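    The Sobel statistic itself is a one-line formula, z = ab / sqrt(b²·SE_a² + a²·SE_b²), where a is the path from predictor to mediator and b the path from mediator to outcome. A minimal Python version, with hypothetical path estimates in place of the regression output the SPSS/SAS syntax files would produce:

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z for the indirect effect a*b, using the first-order
    approximation to the standard error of the product of coefficients."""
    se_ab = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    z = (a * b) / se_ab
    # two-sided p-value from the standard normal distribution
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# hypothetical estimates: a = X->M path, b = M->Y path (with X held constant)
z, p = sobel_test(a=0.50, se_a=0.10, b=0.40, se_b=0.15)
```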

  6. A SAS2H/KENO-V Methodology for 3D Full Core depletion analysis

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J.; Petrovic, B.

    2003-04-01

    This paper describes the use of a SAS2H/KENO-V methodology for 3D full core depletion analysis and illustrates its capabilities by applying it to burnup analysis of the IRIS core benchmarks. This new SAS2H/KENO-V sequence combines a 3D Monte Carlo full core calculation of node power distribution and a 1D Wigner-Seitz equivalent cell transport method for independent depletion calculation of each of the nodes. This approach reduces by more than an order of magnitude the time required for getting comparable results using the MOCUP code system. The SAS2H/KENO-V results for the asymmetric IRIS core benchmark are in good agreement with the results of the ALPHA/PHOENIX/ANC code system. (author)

  7. SPSS and SAS programs for addressing interdependence and basic levels-of-analysis issues in psychological data.

    Science.gov (United States)

    O'Connor, Brian P

    2004-02-01

    Levels-of-analysis issues arise whenever individual-level data are collected from more than one person from the same dyad, family, classroom, work group, or other interaction unit. Interdependence in data from individuals in the same interaction units also violates the independence-of-observations assumption that underlies commonly used statistical tests. This article describes the data analysis challenges that are presented by these issues and presents SPSS and SAS programs for conducting appropriate analyses. The programs conduct the within-and-between-analyses described by Dansereau, Alutto, and Yammarino (1984) and the dyad-level analyses described by Gonzalez and Griffin (1999) and Griffin and Gonzalez (1995). Contrasts with general multilevel modeling procedures are then discussed.

  8. SPSS and SAS programs for determining the number of components using parallel analysis and velicer's MAP test.

    Science.gov (United States)

    O'Connor, B P

    2000-08-01

    Popular statistical software packages do not have the proper procedures for determining the number of components in factor and principal components analyses. Parallel analysis and Velicer's minimum average partial (MAP) test are validated procedures, recommended widely by statisticians. However, many researchers continue to use alternative, simpler, but flawed procedures, such as the eigenvalues-greater-than-one rule. Use of the proper procedures might be increased if these procedures could be conducted within familiar software environments. This paper describes brief and efficient programs for using SPSS and SAS to conduct parallel analyses and the MAP test.
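    Parallel analysis itself is simple to state: retain a component only if its eigenvalue exceeds the corresponding mean eigenvalue from random data of the same size. A hedged sketch in Python (rather than SPSS/SAS), applied to simulated data with two underlying factors; the simulation settings are hypothetical:

```python
import numpy as np

def parallel_analysis(data, n_sims=100, seed=0):
    """Number of components whose sample eigenvalue exceeds the mean
    eigenvalue from correlation matrices of same-sized random normal data."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    real_eigs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    sim = np.empty((n_sims, p))
    for i in range(n_sims):
        r = rng.normal(size=(n, p))
        sim[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
    return int(np.sum(real_eigs > sim.mean(axis=0)))

# hypothetical data: two latent factors driving six observed variables
rng = np.random.default_rng(42)
f = rng.normal(size=(300, 2))
load = np.array([[.8, 0], [.7, 0], [.6, 0], [0, .8], [0, .7], [0, .6]]).T
x = f @ load + rng.normal(scale=0.5, size=(300, 6))
n_components = parallel_analysis(x)
```

    Unlike the eigenvalues-greater-than-one rule, the random-data benchmark adapts to the sample size and number of variables, which is why it recovers the correct two components here.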

  9. A SAS macro for testing differences among three or more independent groups using Kruskal-Wallis and Nemenyi tests.

    Science.gov (United States)

    Liu, Yuewei; Chen, Weihong

    2012-02-01

    As a nonparametric method, the Kruskal-Wallis test is widely used to compare three or more independent groups when an ordinal or interval level of data is available, especially when the assumptions of analysis of variance (ANOVA) are not met. If the Kruskal-Wallis statistic is statistically significant, the Nemenyi test is an alternative method for further pairwise multiple comparisons to locate the source of significance. Unfortunately, most popular statistical packages do not integrate the Nemenyi test, which is not easy to calculate by hand. We described the theory and applications of the Kruskal-Wallis and Nemenyi tests, and presented a flexible SAS macro to implement the two tests. The SAS macro was demonstrated by two examples from our cohort study in occupational epidemiology. It provides a useful tool for SAS users to test the differences among three or more independent groups using a nonparametric method.
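    The two-stage procedure can be sketched in Python (not the authors' SAS macro), here using the chi-square approximation to the Nemenyi critical difference; the three groups of measurements are hypothetical:

```python
import numpy as np
from scipy import stats

def kruskal_nemenyi(groups, alpha=0.05):
    """Kruskal-Wallis omnibus test, then Nemenyi pairwise comparisons
    (chi-square approximation) on the pooled mean ranks."""
    h, p = stats.kruskal(*groups)
    k = len(groups)
    pooled = np.concatenate(groups)
    n = len(pooled)
    ranks = stats.rankdata(pooled)
    sizes = [len(g) for g in groups]
    idx = np.cumsum([0] + sizes)
    mean_ranks = [ranks[idx[i]:idx[i + 1]].mean() for i in range(k)]
    crit = stats.chi2.ppf(1 - alpha, k - 1)
    pairs = {}
    for i in range(k):
        for j in range(i + 1, k):
            d = abs(mean_ranks[i] - mean_ranks[j])
            cd = np.sqrt(crit * n * (n + 1) / 12 * (1 / sizes[i] + 1 / sizes[j]))
            pairs[(i, j)] = d > cd   # True = significant pairwise difference
    return h, p, pairs

# hypothetical exposure measurements: group 2 is shifted upward
g0 = [2.1, 2.5, 1.9, 2.3, 2.0, 2.2, 1.8, 2.4]
g1 = [2.2, 2.6, 2.0, 2.4, 2.1, 2.5, 2.3, 1.9]
g2 = [3.5, 3.9, 3.2, 3.8, 3.6, 3.3, 3.7, 3.4]
h, p, pairs = kruskal_nemenyi([g0, g1, g2])
```

    The omnibus test flags a difference somewhere; the Nemenyi step then attributes it to the two comparisons involving the shifted group while leaving the first two groups undifferentiated.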

  10. Fuel relocation modeling in the SAS4A accident analysis code system

    International Nuclear Information System (INIS)

    Tentner, A.M.; Miles, K.J.; Kalimullah; Hill, D.J.

    1986-01-01

    The SAS4A code system has been designed for the analysis of the initial phase of Hypothetical Core Disruptive Accidents (HCDAs) up to gross melting or failure of the subassembly walls. During such postulated accident scenarios as the Loss-of-Flow (LOF) and Transient-Overpower (TOP) events, the relocation of the fuel plays a key role in determining the sequence of events and the amount of energy produced before neutronic shutdown. This paper discusses the general strategy used in modeling the various phenomena that lead to fuel relocation and presents the key fuel relocation models used in SAS4A. The implications of these models for the whole-core accident analysis as well as recent results of fuel relocation are emphasized. 12 refs

11. Categorical Data Analysis with SAS and SPSS Applications

    CERN Document Server

    Lawal, H Bayo

    2003-01-01

    This book covers the fundamental aspects of categorical data analysis with an emphasis on how to implement the models used in the book using SAS and SPSS. This is accomplished through the frequent use of examples, with relevant codes and instructions, that are closely related to the problems in the text. Concepts are explained in detail so that students can reproduce similar results on their own. Beginning with chapter two, exercises at the end of each chapter further strengthen students' understanding of the concepts by requiring them to apply some of the ideas expressed in the text in a more

  12. SASqPCR: robust and rapid analysis of RT-qPCR data in SAS.

    Directory of Open Access Journals (Sweden)

    Daijun Ling

Full Text Available Reverse transcription quantitative real-time PCR (RT-qPCR) is a key method for measurement of relative gene expression. Analysis of RT-qPCR data requires many iterative computations for data normalization and analytical optimization. Currently no computer program for RT-qPCR data analysis is suitable for analytical optimization and user-controllable customization based on data quality, experimental design as well as specific research aims. Here I introduce an all-in-one computer program, SASqPCR, for robust and rapid analysis of RT-qPCR data in SAS. This program has multiple macros for assessment of PCR efficiencies, validation of reference genes, optimization of data normalizers, normalization of confounding variations across samples, and statistical comparison of target gene expression in parallel samples. Users can simply change the macro variables to test various analytical strategies, optimize results and customize the analytical processes. In addition, it is highly automatic and functionally extendable. Thus users are the actual decision-makers controlling RT-qPCR data analyses. SASqPCR and its tutorial are freely available at http://code.google.com/p/sasqpcr/downloads/list.
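SASqPCR's normalization machinery is SAS-specific, but the core relative-quantification step such pipelines automate can be illustrated by the classic 2^(-ddCt) calculation. A minimal Python sketch, assuming an ideal amplification efficiency of 2 and hypothetical Ct values; the function name is illustrative:

```python
# Minimal sketch of relative quantification by the 2^(-ddCt) method,
# the normalization step at the heart of RT-qPCR analysis pipelines.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control, efficiency=2.0):
    """Relative expression of a target gene, normalized to a reference
    gene, in a treated sample versus a control sample."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt, treated
    d_ct_control = ct_target_control - ct_ref_control   # dCt, control
    dd_ct = d_ct_treated - d_ct_control                 # ddCt
    return efficiency ** (-dd_ct)
```

With hypothetical Ct values of 22/18 (treated) and 25/18 (control), ddCt = -3, giving an 8-fold up-regulation.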

  13. Modeling binary correlated responses using SAS, SPSS and R

    CERN Document Server

    Wilson, Jeffrey R

    2015-01-01

Statistical tools to analyze correlated binary data are spread out in the existing literature. This book makes these tools accessible to practitioners in a single volume. Chapters cover recently developed statistical tools and statistical packages that are tailored to analyzing correlated binary data. The authors showcase both traditional and new methods for application to health-related research. Data and computer programs will be publicly available in order for readers to replicate model development, but learning a new statistical language is not necessary with this book. The inclusion of code for R, SAS, and SPSS allows for easy implementation by readers. For readers interested in learning more about the languages, though, there are short tutorials in the appendix. Accompanying data sets are available for download through the book's website. Data analysis presented in each chapter will provide step-by-step instructions so these new methods can be readily applied to projects.  Researchers and graduate stu...

  14. Interpretation of the CABRI LT1 test with SAS4A-code analysis

    International Nuclear Information System (INIS)

    Sato, Ikken; Onoda, Yu-uichi

    2001-03-01

In the CABRI-FAST LT1 test, simulating a ULOF (Unprotected Loss of Flow) accident of an LMFBR, pin failure took place rather early during the transient. No fuel melting is expected at this failure because the energy injection was too low, and a rapid gas-release-like response leading to coolant-channel voiding was observed. This channel voiding was followed by a gradual fuel breakup and axial relocation. With the aid of SAS4A analysis, an interpretation of this test was performed. Although the original SAS4A model was not well fitted to this type of early pin failure, the global behavior after the pin failure was reasonably simulated with temporary modifications. Through this study, gas release behavior from the failed fuel pin and its effect on the further transient were well understood. It was also demonstrated that the SAS4A code has the potential to simulate post-failure behavior initiated by a very early pin failure, provided that the necessary model modifications are made. (author)

  15. %HPGLIMMIX: A High-Performance SAS Macro for GLMM Estimation

    Directory of Open Access Journals (Sweden)

    Liang Xie

    2014-06-01

Full Text Available Generalized linear mixed models (GLMMs) comprise a class of widely used statistical tools for data analysis with fixed and random effects when the response variable has a conditional distribution in the exponential family. GLMM analysis also has a close relationship with actuarial credibility theory. While readily available programs such as the GLIMMIX procedure in SAS and the lme4 package in R are powerful tools for using this class of models, these programs are not able to handle models with thousands of levels of fixed and random effects. By using sparse-matrix and other high-performance techniques, procedures such as HPMIXED in SAS can easily fit models with thousands of factor levels, but only for normally distributed response variables. In this paper, we present the %HPGLIMMIX SAS macro that fits GLMMs with a large number of sparsely populated design matrices using the doubly-iterative linearization (pseudo-likelihood) method, in which the sparse-matrix-based HPMIXED is used for the inner iterations with the pseudo-variable constructed from the inverse-link function and the chosen model. Although the macro does not have the full functionality of the GLIMMIX procedure, time and memory savings can be large with the new macro. In applications in which design matrices contain many zeros and there are hundreds or thousands of factor levels, models can be fitted without exhausting computer memory, and 90% or better reduction in running time can be observed. Examples with a Poisson, binomial, and gamma conditional distribution are presented to demonstrate the usage and efficiency of this macro.
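The "pseudo-variable constructed from the inverse-link function" is the working response of iteratively reweighted least squares; each outer iteration of the pseudo-likelihood method solves a weighted linear model in that variable. A minimal numpy sketch of the linearization for a fixed-effects-only Poisson log-link model (an illustration of the inner step, not the %HPGLIMMIX macro itself; the function name is assumed):

```python
import numpy as np

def poisson_irls(X, y, tol=1e-8, max_iter=50):
    """Fit a Poisson log-link GLM by iteratively reweighted least
    squares. Each iteration builds the pseudo-variable
    z = eta + (y - mu)/mu from the inverse-link function and solves a
    weighted least-squares problem -- the same linearization that
    pseudo-likelihood GLMM fitting applies around current estimates."""
    beta = np.zeros(X.shape[1])
    for _ in range(max_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu          # pseudo-variable (working response)
        W = mu                           # IRLS weights for Poisson/log link
        XtW = X.T * W
        beta_new = np.linalg.solve(XtW @ X, XtW @ z)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

On simulated data the recovered coefficients approach the generating values; a mixed-model version would replace the weighted least-squares solve with a (sparse) linear mixed-model fit of z, as HPMIXED does.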

  16. Analysis of the OPERA-15 two-dimensional voiding experiment using the SAS4A code

    International Nuclear Information System (INIS)

    Briggs, L.L.

    1984-01-01

Overall, SAS4A appears to do a good job of simulating the OPERA-15 experiment. For most of the experiment parameters, the code calculations compare quite well with the experimental data. The lack of a multi-dimensional voiding model has the effect of extending the flow coastdown time until voiding starts; otherwise, the code simulates the accident progression satisfactorily. These results indicate a need for further work in this area in the form of a tandem analysis by a two-dimensional flow code and a one-dimensional version of that code to confirm the observations derived from the SAS4A analysis

  17. Fuel relocation modeling in the SAS4A accident analysis code system

    International Nuclear Information System (INIS)

    Tentner, A.M.; Miles, K.J.

    1985-01-01

    SAS4A is a new code system which has been designed for analyzing the initial phase of Hypothetical Core Disruptive Accidents (HCDAs) up to gross melting or failure of the subassembly walls. During such postulated accident scenarios as the Loss-of-Flow (LOF) and Transient-Overpower (TOP) events, the relocation of the fuel plays a key role in determining the sequence of events and the amount of energy produced before neutronic shutdown. This paper discusses the general strategy used in modeling the various phenomena which lead to fuel relocation and presents the key fuel relocation models used in SAS4A. The implications of these models for the whole-core accident analysis as well as recent results of fuel motion experiment analyses are also presented

  18. Ambiguities and completeness of SAS data analysis: investigations of apoferritin by SAXS/SANS EID and SEC-SAXS methods

    Science.gov (United States)

    Zabelskii, D. V.; Vlasov, A. V.; Ryzhykau, Yu L.; Murugova, T. N.; Brennich, M.; Soloviov, D. V.; Ivankov, O. I.; Borshchevskiy, V. I.; Mishin, A. V.; Rogachev, A. V.; Round, A.; Dencher, N. A.; Büldt, G.; Gordeliy, V. I.; Kuklin, A. I.

    2018-03-01

The method of small angle scattering (SAS) is widely used in the field of biophysical research on proteins in aqueous solutions. Obtaining low-resolution structures of proteins remains highly valuable despite the advances in high-resolution methods such as X-ray diffraction, cryo-EM, etc. SAS offers the unique possibility to obtain structural information under conditions close to those of functional assays, i.e. in solution, without different additives, in the mg/mL concentration range. The SAS method has a long history, but there are still many uncertainties related to data treatment. We compared 1D SAS profiles of apoferritin obtained by X-ray diffraction (XRD) and SAS methods. It is shown that SAS curves computed from the XRD crystallographic structure of apoferritin differ from the measured ones more significantly than might be expected from the resolution of the SAS instrument. The extrapolation to infinite dilution (EID) method does not sufficiently exclude dimerization and oligomerization effects and therefore cannot guarantee the total absence of a dimer contribution in the final SAS curve. In this study, we show that the EID SAXS, EID SANS and SEC-SAXS methods give complementary results; when they are used all together, they yield the most accurate results and the highest confidence in SAS data analysis of proteins.
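Extrapolation to infinite dilution amounts to a linear fit over the concentration series at each scattering vector q, keeping the intercept of I(q,c)/c as c goes to 0. A hedged numpy sketch of that idea (real EID pipelines also weight by experimental errors; the function name is illustrative):

```python
import numpy as np

def extrapolate_to_zero_concentration(concentrations, intensities):
    """Given scattering curves I(q, c) measured at several protein
    concentrations c (intensities: array of shape [n_conc, n_q]),
    fit I/c linearly in c at each q and return the intercept, i.e.
    the idealized infinite-dilution curve. A sketch of the EID idea,
    not a full data-reduction pipeline."""
    c = np.asarray(concentrations, dtype=float)
    I_over_c = np.asarray(intensities, dtype=float) / c[:, None]
    # polyfit over c for every q-column at once; row 1 holds intercepts
    coeffs = np.polyfit(c, I_over_c, deg=1)
    return coeffs[1]
```

If structure-factor effects are linear in concentration, the intercepts recover the form-factor curve exactly; in practice they only attenuate, not remove, dimer contributions, as the study above emphasizes.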

  19. FY2017 Updates to the SAS4A/SASSYS-1 Safety Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Fanning, T. H. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-09-30

    The SAS4A/SASSYS-1 safety analysis software is used to perform deterministic analysis of anticipated events as well as design-basis and beyond-design-basis accidents for advanced fast reactors. It plays a central role in the analysis of U.S. DOE conceptual designs, proposed test and demonstration reactors, and in domestic and international collaborations. This report summarizes the code development activities that have taken place during FY2017. Extensions to the void and cladding reactivity feedback models have been implemented, and Control System capabilities have been improved through a new virtual data acquisition system for plant state variables and an additional Block Signal for a variable lag compensator to represent reactivity feedback for novel shutdown devices. Current code development and maintenance needs are also summarized in three key areas: software quality assurance, modeling improvements, and maintenance of related tools. With ongoing support, SAS4A/SASSYS-1 can continue to fulfill its growing role in fast reactor safety analysis and help solidify DOE’s leadership role in fast reactor safety both domestically and in international collaborations.

  20. NParCov3: A SAS/IML Macro for Nonparametric Randomization-Based Analysis of Covariance

    Directory of Open Access Journals (Sweden)

    Richard C. Zink

    2012-07-01

Full Text Available Analysis of covariance serves two important purposes in a randomized clinical trial. First, there is a reduction of variance for the treatment effect which provides more powerful statistical tests and more precise confidence intervals. Second, it provides estimates of the treatment effect which are adjusted for random imbalances of covariates between the treatment groups. The nonparametric analysis of covariance method of Koch, Tangen, Jung, and Amara (1998) defines a very general methodology using weighted least-squares to generate covariate-adjusted treatment effects with minimal assumptions. This methodology is general in its applicability to a variety of outcomes, whether continuous, binary, ordinal, incidence density or time-to-event. Further, its use has been illustrated in many clinical trial settings, such as multi-center, dose-response and non-inferiority trials. NParCov3 is a SAS/IML macro written to conduct the nonparametric randomization-based covariance analyses of Koch et al. (1998). The software can analyze a variety of outcomes and can account for stratification. Data from multiple clinical trials will be used for illustration.
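The core of the randomization-based adjustment can be sketched compactly: the raw treatment difference in outcome means is corrected by the observed covariate imbalance, weighted by estimated covariances. The following Python sketch handles one covariate and two groups in the spirit of Koch et al. (1998), without the strata, weighting, or multiple-outcome machinery of NParCov3; the function name is an assumption:

```python
import numpy as np

def covariate_adjusted_difference(y1, x1, y2, x2):
    """Randomization-based covariate adjustment: the treatment
    difference in outcome means is corrected by the observed
    imbalance in covariate means, weighted by estimated covariances.
    One covariate, two groups; a sketch, not the full NParCov3 macro."""
    y1, x1, y2, x2 = map(np.asarray, (y1, x1, y2, x2))
    d_y = y1.mean() - y2.mean()
    d_x = x1.mean() - x2.mean()
    # covariance matrix of the (d_y, d_x) vector, estimated within groups
    v = np.cov(y1, x1) / len(y1) + np.cov(y2, x2) / len(y2)
    return d_y - v[0, 1] / v[1, 1] * d_x
```

A simple check of the logic: when the covariate is perfectly balanced (d_x = 0), the adjusted estimate equals the unadjusted difference in means.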

1. Development and validation of a smartphone addiction scale (SAS).

    Directory of Open Access Journals (Sweden)

    Min Kwon

Full Text Available OBJECTIVE: The aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated. METHODS: A total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS. RESULTS: Based on the factor analysis results, the subscale "disturbance of reality testing" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS. CONCLUSIONS: This study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.
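The internal-consistency figure reported above (Cronbach's alpha = 0.967) follows from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal Python sketch (the function name is illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

As a sanity check, perfectly correlated items give alpha = 1, and alpha falls as the items become less consistent.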

  2. Vybrané výběrové statistické metody v programu SAS

    OpenAIRE

    Voříšek, Jan

    2009-01-01

In the present work we study the methodology of different kinds of sample surveys and their design in SAS software. Creating a SAS Enterprise Guide Add-In was the fundamental creative part of this work. This Add-In enables users to compute important statistics of sample surveys without needing to be familiar with SAS code. The Add-In was created in MSFT Visual Studio 2003 in the C# language using a template for Add-Ins provided by SAS. This work contains a general description of the creation of an Add-In as...

  3. Confirmatory factor analysis and sample invariance of the Chinese version of Somatosensory Amplification Scale (ChSAS) among Chinese adolescents

    OpenAIRE

    Tam, B. K.; Wong, W. S.

    2011-01-01

    Objective: This paper aimed to evaluate the factor structure of the Chinese version of Somatosensory Amplification Scale (ChSAS) in a sample of Chinese adolescents across different grade levels using confirmatory factor analysis (CFA). Methods: A total of 1991 Chinese adolescents completed the ChSAS. CFA assessed the fit of the one-factor model to the entire sample. Factorial invariance of the ChSAS was also examined across grade levels using multigroup CFA. Results: Results of CFA confirmed ...

  4. Development and validation of a smartphone addiction scale (SAS).

    Science.gov (United States)

    Kwon, Min; Lee, Joon-Yeop; Won, Wang-Youn; Park, Jae-Woo; Min, Jung-Ah; Hahn, Changtae; Gu, Xinyu; Choi, Ji-Hye; Kim, Dai-Jin

    2013-01-01

The aim of this study was to develop a self-diagnostic scale that could distinguish smartphone addicts based on the Korean self-diagnostic program for Internet addiction (K-scale) and the smartphone's own features. In addition, the reliability and validity of the smartphone addiction scale (SAS) was demonstrated. A total of 197 participants were selected from Nov. 2011 to Jan. 2012 to accomplish a set of questionnaires, including SAS, K-scale, modified Kimberly Young Internet addiction test (Y-scale), visual analogue scale (VAS), and substance dependence and abuse diagnosis of DSM-IV. There were 64 males and 133 females, with ages ranging from 18 to 53 years (M = 26.06; SD = 5.96). Factor analysis, internal-consistency test, t-test, ANOVA, and correlation analysis were conducted to verify the reliability and validity of SAS. Based on the factor analysis results, the subscale "disturbance of reality testing" was removed, and six factors were left. The internal consistency and concurrent validity of SAS were verified (Cronbach's alpha = 0.967). SAS and its subscales were significantly correlated with K-scale and Y-scale. The VAS of each factor also showed a significant correlation with each subscale. In addition, differences were found in the job (p<0.05), education (p<0.05), and self-reported smartphone addiction scores (p<0.001) in SAS. This study developed the first scale of the smartphone addiction aspect of the diagnostic manual. This scale was proven to be relatively reliable and valid.

  5. Mediation analysis allowing for exposure-mediator interactions and causal interpretation: theoretical assumptions and implementation with SAS and SPSS macros

    Science.gov (United States)

    Valeri, Linda; VanderWeele, Tyler J.

    2012-01-01

Mediation analysis is a useful and widely employed approach to studies in the field of psychology and in the social and biomedical sciences. The contributions of this paper are several-fold. First we seek to bring the developments in mediation analysis for nonlinear models within the counterfactual framework to the psychology audience in an accessible format and compare the sorts of inferences about mediation that are possible in the presence of exposure-mediator interaction when using a counterfactual versus the standard statistical approach. Second, the work by VanderWeele and Vansteelandt (2009, 2010) is extended here to allow for dichotomous mediators and count outcomes. Third, we provide SAS and SPSS macros to implement all of these mediation analysis techniques automatically and we compare the types of inferences about mediation that are allowed by a variety of software macros. PMID:23379553
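For the simplest case such macros cover, a continuous mediator and outcome with linear models and an exposure-mediator interaction, the natural direct and indirect effects have closed forms in the regression coefficients. A Python sketch of those formulas, assuming the usual parameterization M = b0 + b1*A + b2*C and Y = t0 + t1*A + t2*M + t3*A*M + t4*C (a sketch of the published closed forms, not the SAS/SPSS macros themselves):

```python
def natural_effects(beta0, beta1, beta2, theta1, theta2, theta3,
                    a=1.0, a_star=0.0, c=0.0):
    """Closed-form natural direct/indirect effects for a continuous
    mediator and outcome with exposure-mediator interaction, assuming
        M = b0 + b1*A + b2*C + eps
        Y = t0 + t1*A + t2*M + t3*A*M + t4*C + eps.
    Returns (NDE, NIE) for a change a_star -> a, conditional on C=c."""
    nde = (theta1 + theta3 * (beta0 + beta1 * a_star + beta2 * c)) * (a - a_star)
    nie = (theta2 * beta1 + theta3 * beta1 * a) * (a - a_star)
    return nde, nie
```

A useful check: with no interaction (theta3 = 0), the expressions collapse to the standard-approach results, NDE = theta1*(a - a_star) and the product-of-coefficients NIE = theta2*beta1*(a - a_star).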

  6. SPSS and SAS programs for comparing Pearson correlations and OLS regression coefficients.

    Science.gov (United States)

    Weaver, Bruce; Wuensch, Karl L

    2013-09-01

    Several procedures that use summary data to test hypotheses about Pearson correlations and ordinary least squares regression coefficients have been described in various books and articles. To our knowledge, however, no single resource describes all of the most common tests. Furthermore, many of these tests have not yet been implemented in popular statistical software packages such as SPSS and SAS. In this article, we describe all of the most common tests and provide SPSS and SAS programs to perform them. When they are applicable, our code also computes 100 × (1 - α)% confidence intervals corresponding to the tests. For testing hypotheses about independent regression coefficients, we demonstrate one method that uses summary data and another that uses raw data (i.e., Potthoff analysis). When the raw data are available, the latter method is preferred, because use of summary data entails some loss of precision due to rounding.
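One of the most common of these summary-data tests, comparing Pearson correlations from two independent samples, can be sketched directly from Fisher's r-to-z transformation. A minimal Python version (illustrative function name; the programs described in the article cover many more cases):

```python
import math

def compare_independent_correlations(r1, n1, r2, n2):
    """Test H0: rho1 == rho2 for Pearson correlations from two
    independent samples via Fisher's r-to-z transformation.
    Returns the z statistic and a two-sided p-value."""
    z1 = math.atanh(r1)                  # Fisher z of each correlation
    z2 = math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    # two-sided p from the standard normal CDF, via erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p
```

Equal correlations give z = 0 and p = 1; a large gap (e.g. r = .8 vs r = .2 at n = 103 each) is strongly significant.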

  7. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    Science.gov (United States)

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

To implement the online statistical analysis function in the information system of air pollution and health impact monitoring, and to obtain data analysis information in real time. Descriptive statistics, time-series analysis and multivariate regression analysis were implemented for online statistical analysis on top of the database software, using SQL and visual tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts of each data part online with interactive connection to the database; and generates export sheets that can be loaded directly into R, SAS and SPSS. The information system of air pollution and health impact monitoring implements the statistical analysis function online, which can provide real-time analysis results to its users.

  8. Introduction to SAS on VAX

    International Nuclear Information System (INIS)

    Kardaun, O.; Miura, Yukitoshi; Matsuda, Toshiaki; Tamai, Hiroshi.

    1991-06-01

    To analyse, among others, the H-mode data base, a new version (6.06) of the SAS system has been installed on the VAX 3200 Workstation at JFT-2M. In this report, we summarize how to use SAS interactively (i.e., in 'display manager mode') on this machine. By a didactical example program and its annotated output we illustrate some of the capabilities of SAS. The report is intended to facilitate the access to the SAS documentation by physicists interested in plasma physical applications. (author)

  9. A SAS2H/KENO-V methodology for 3D fuel burnup analysis

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J.

    2002-01-01

An efficient methodology for 3D fuel burnup analysis of LWR reactors is described in this paper. This methodology is founded on coupling the Monte Carlo method for 3D calculation of the node power distribution with a transport method for depletion calculations in a 1D Wigner-Seitz equivalent cell for each node independently. The proposed fuel burnup modeling, based on application of the SCALE-4.4a control modules SAS2H and KENO-V.a, is verified for the case of a 2D x-y model of an IRIS 15 x 15 fuel assembly (with reflective boundary condition) by using two well-benchmarked code systems. The first is MOCUP, a coupled MCNP-4C and ORIGEN2.1 utility code, and the second is the KENO-V.a/ORIGEN2.1 code system recently developed by the authors of this paper. The proposed SAS2H/KENO-V.a methodology was applied for 3D burnup analysis of the IRIS-1000 benchmark.44 core. Detailed k_eff and power density evolution with burnup are reported. (author)

  10. Mediation analysis allowing for exposure-mediator interactions and causal interpretation: theoretical assumptions and implementation with SAS and SPSS macros.

    Science.gov (United States)

    Valeri, Linda; Vanderweele, Tyler J

    2013-06-01

    Mediation analysis is a useful and widely employed approach to studies in the field of psychology and in the social and biomedical sciences. The contributions of this article are several-fold. First we seek to bring the developments in mediation analysis for nonlinear models within the counterfactual framework to the psychology audience in an accessible format and compare the sorts of inferences about mediation that are possible in the presence of exposure-mediator interaction when using a counterfactual versus the standard statistical approach. Second, the work by VanderWeele and Vansteelandt (2009, 2010) is extended here to allow for dichotomous mediators and count outcomes. Third, we provide SAS and SPSS macros to implement all of these mediation analysis techniques automatically, and we compare the types of inferences about mediation that are allowed by a variety of software macros. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  11. SASWeave: Literate Programming Using SAS

    Directory of Open Access Journals (Sweden)

    Russell V. Lenth

    2007-05-01

Full Text Available SASweave is a collection of scripts that allow one to embed SAS code into a LaTeX document, and automatically incorporate the results as well. SASweave is patterned after Sweave, which does the same thing for code written in R. In fact, a document may contain both SAS and R code. Besides the convenience of being able to easily incorporate SAS examples in a document, SASweave facilitates the concept of “literate programming”: having code, documentation, and results packaged together. Among other things, this helps to ensure that the SAS output in the document is in concordance with the code.

  12. SASWeave: Literate Programming Using SAS

    DEFF Research Database (Denmark)

    Lenth, Russell V; Højsgaard, Søren

    2007-01-01

SASweave is a collection of scripts that allow one to embed SAS code into a LaTeX document, and automatically incorporate the results as well. SASweave is patterned after Sweave, which does the same thing for code written in R. In fact, a document may contain both SAS and R code. Besides the convenience of being able to easily incorporate SAS examples in a document, SASweave facilitates the concept of "literate programming": having code, documentation, and results packaged together. Among other things, this helps to ensure that the SAS output in the document is in concordance with the code...

  13. Clinical SAS programming in India: A study of industry needs versus wants

    Directory of Open Access Journals (Sweden)

    Nithiyanandhan Ananthakrishnan

    2014-01-01

Full Text Available Background: The clinical SAS (www.sas.com) programming industry in India has seen rapid growth in the last decade, and the trend seems set to continue for the next couple of years due to cost advantage and the availability of skilled labor. On one side, the industry needs are focused on less execution time, high margins, segmented tasks and the delivery of high-quality output with minimal oversight. On the other side, due to the increased demand for skilled resources, the wants of the programmers have shifted toward diversifying exposure, unsustainable wage inflation due to multiple opportunities, and generally high expectations around career progression. If industry needs do not match programmer wants, or vice versa, then the current year-on-year growth may start to slow or even go into decline. Aim: This paper is intended to identify the gap between wants and needs, and puts forward some suggestions, for both sides, on ways to change the equation to benefit all. Settings and Design: Questionnaires on similar themes were created for managers and programmers working in the clinical SAS programming industry and administered online to collect their perspectives. Their views are compared for each theme and presented as results. Materials and Methods: Two surveys were created in www.surveymonkey.com. Management: https://www.surveymonkey.com/s/SAS_India_managment_needvswant_survey. Programmer: https://www.surveymonkey.com/s/SAS_India_programmer_needvswant_survey. Statistical Analysis Used: Bar charts and pie charts were used to show the segmentation of the collected data. Results and Conclusions: The paper highlights the future industry direction and the skill set that existing programmers need in order to sustain the momentum, remain competitive, and contribute to the future pipeline and the development of the profession in India.

  14. Analysis of the TREAT loss-of-flow tests L6 and L7 using SAS3D

    International Nuclear Information System (INIS)

    Morris, E.E.; Simms, R.; Gruber, E.E.

    1985-01-01

    The TREAT loss-of-flow tests L6 and L7 have been analyzed using the SAS3D accident analysis code. The impetus for the analysis was the need for experimentally supported fuel motion modeling in whole core accident studies performed in support of licensing of the Clinch River Breeder Reactor Project. The input prescription chosen for the SAS3D/SLUMPY fuel motion model gave reasonable agreement with the test results. Tests L6 and L7, each conducted with a cluster of three fuel pins, were planned to simulate key events in the loss-of-flow accident scenario for the Clinch River homogeneous reactor

  15. A handbook of statistical graphics using SAS ODS

    CERN Document Server

    Der, Geoff

    2014-01-01

An Introduction to Graphics: Good Graphics, Bad Graphics, Catastrophic Graphics and Statistical Graphics; The Challenger Disaster; Graphical Displays; A Little History and Some Early Graphical Displays; Graphical Deception; An Introduction to ODS Graphics; Generating ODS Graphs; ODS Destinations; Statistical Graphics Procedures; ODS Graphs from Statistical Procedures; Controlling ODS Graphics; Controlling Labelling in Graphs; ODS Graphics Editor; Graphs for Displaying the Characteristics of Univariate Data: Horse Racing, Mortality Rates, Forearm Lengths, Survival Times and Geyser Eruptions; Introduction; Pie Chart, Bar Cha...

  16. Adaptation of XMM-Newton SAS to GRID and VO architectures via web

    Science.gov (United States)

    Ibarra, A.; de La Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.

    2008-10-01

The XMM-Newton Scientific Analysis Software (SAS) is robust software that has allowed users to produce good scientific results since the beginning of the mission. This has been possible given the capability of SAS to evolve with the advent of new technologies and adapt to the needs of the scientific community. The prototype of the Remote Interface for Science Analysis (RISA) presented here is one such example, which provides remote analysis of XMM-Newton data with access to all the existing SAS functionality, while making use of GRID computing technology. This new technology has recently emerged within the astrophysical community to tackle the everlasting problem of computing power for the reduction of large amounts of data.

  17. Data Mining Supercomputing with SAS JMP® Genomics

    Directory of Open Access Journals (Sweden)

    Richard S. Segall

    2011-02-01

Full Text Available JMP® Genomics is statistical discovery software that can uncover meaningful patterns in high-throughput genomics and proteomics data. JMP® Genomics is designed for biologists, biostatisticians, statistical geneticists, and those engaged in analyzing the vast stores of data that are common in genomic research (SAS, 2009). Data mining was performed using JMP® Genomics on the two collections of microarray databases available from the National Center for Biotechnology Information (NCBI) for lung cancer and breast cancer. The Gene Expression Omnibus (GEO) of NCBI serves as a public repository for a wide range of high-throughput experimental data, including the two collections of lung cancer and breast cancer data that were used for this research. The results of applying data mining using JMP® Genomics software are shown in this paper with numerous screen shots.

  18. WinBUGSio: A SAS Macro for the Remote Execution of WinBUGS

    Directory of Open Access Journals (Sweden)

    Michael K. Smith

    2007-09-01

    Full Text Available This is a macro which facilitates remote execution of WinBUGS from within SAS. The macro pre-processes data for WinBUGS, writes the WinBUGS batch-script, executes this script and reads in output statistics from the WinBUGS log-file back into SAS native format. The user specifies the input and output file names and directory path as well as the statistics to be monitored in WinBUGS. The code works best for a model that has already been set up and checked for convergence diagnostics within WinBUGS. An obvious extension of the use of this macro is for running simulations where the input and output files all have the same name but all that differs between simulation iterations is the input dataset. The functionality and syntax of the macro call are described in this paper and illustrated using a simple linear regression model.

  19. Implementation of Surface Detector Option in SCALE SAS4 Shielding Module

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Emmett, M.B.; Tang, J.S.

    1999-01-01

    The Shielding Analysis Sequence No. 4 (SAS4) in the Standardized Cask Analysis and Licensing Evaluation System (SCALE) is designed to aid the novice user in the preparation of detailed three-dimensional models and radiation protection studies of transportation or storage packages containing spent fuel from a nuclear reactor facility. The underlying methodology in these analyses is the Monte Carlo particle-tracking approach as incorporated into the MORSE-SGC computer code. The use of these basic procedures is enhanced via the automatic generation of the biasing parameters in the SAS4 sequence, which dramatically increases the calculational efficiency of most standard shielding problems. Until recently the primary mechanism for dose estimates in SAS4 was the use of point detectors, which were effective for single-dose locations, but inefficient for quantification of dose-rate profiles. This paper describes the implementation of a new surface detector option for SAS4 with automatic discretization of the detector surface into multiple segments or subdetectors. Results from several sample problems are given and discussed

  20. To improve the quality of the statistical analysis of papers published in the Journal of the Korean Society for Therapeutic Radiology and Oncology

    International Nuclear Information System (INIS)

    Park, Hee Chul; Choi, Doo Ho; Ahn, Song Vogue

    2008-01-01

    To improve the quality of the statistical analysis of papers published in the Journal of the Korean Society for Therapeutic Radiology and Oncology (JKOSTRO) by evaluating commonly encountered errors. Materials and Methods: Papers published in the JKOSTRO from January 2006 to December 2007 were reviewed for methodological and statistical validity using a modified version of Ahn's checklist. A statistician reviewed individual papers and evaluated the list items in the checklist for each paper. To avoid potential assessment errors by a statistician who lacks expertise in the field of radiation oncology, the editorial board of the JKOSTRO reviewed each checklist for individual articles. A frequency analysis of the list items was performed using SAS (version 9.0, SAS Institute, NC, USA) software. Results: A total of 73 papers including 5 case reports and 68 original articles were reviewed. Inferential statistics were used in 46 papers. The most commonly adopted statistical methodology was survival analysis (58.7%). Only 19% of papers were free of statistical errors. Errors of omission were encountered in 34 (50.0%) papers. Errors of commission were encountered in 35 (51.5%) papers. Twenty-one papers (30.9%) had both errors of omission and commission. Conclusion: A variety of statistical errors were encountered in papers published in the JKOSTRO. The current study suggests that a more thorough review of the statistical analysis is needed for manuscripts submitted to the JKOSTRO

  1. SAS3A analysis of natural convection boiling behavior in the Sodium Boiling Test Facility

    International Nuclear Information System (INIS)

    Klein, G.A.

    1979-01-01

    An analysis of natural convection boiling behavior in the Sodium Boiling Test (SBT) Facility has been performed using the SAS3A computer code. The predictions from this analysis indicate that stable boiling can be achieved for extensive periods of time for channel powers less than 1.4 kW, and indicate intermittent dryout at higher powers up to at least 1.7 kW. The results of this analysis are in reasonable agreement with the SBT Facility test results

  2. A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.

    Science.gov (United States)

    Elliott, Alan C; Hynan, Linda S

    2011-04-01

    The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
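
The macro's logic (omnibus KW test first, rank-based pairwise comparisons only if it is significant) can be sketched outside SAS as well. The following Python sketch uses SciPy and substitutes Bonferroni-adjusted pairwise Mann-Whitney U tests for the macro's specific multiple comparison procedure; this substitution, and all data below, are illustrative assumptions, not a reimplementation of the published macro:

```python
from itertools import combinations

from scipy import stats

def kw_with_posthoc(groups, alpha=0.05):
    """Kruskal-Wallis omnibus test; if significant, run Bonferroni-
    adjusted pairwise Mann-Whitney U tests as a post hoc step."""
    h, p = stats.kruskal(*groups)
    out = {"H": h, "p": p, "pairs": []}
    if p < alpha:
        pairs = list(combinations(range(len(groups)), 2))
        adj_alpha = alpha / len(pairs)          # Bonferroni correction
        for i, j in pairs:
            _, p_ij = stats.mannwhitneyu(groups[i], groups[j],
                                         alternative="two-sided")
            out["pairs"].append((i, j, p_ij, bool(p_ij < adj_alpha)))
    return out

# three small groups; group b is clearly shifted upward
a = [1.1, 1.3, 1.2, 1.4, 1.0]
b = [2.1, 2.4, 2.2, 2.6, 2.3]
c = [1.2, 1.1, 1.3, 1.5, 1.2]
result = kw_with_posthoc([a, b, c])
```

With these toy data the omnibus test is significant, and only the pairs involving group b are flagged after adjustment.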

  3. Implementation, verification, and validation of the FPIN2 metal fuel pin mechanics model in the SASSYS/SAS4A LMR transient analysis codes

    International Nuclear Information System (INIS)

    Sofu, T.; Kramer, J.M.

    1994-01-01

    The metal fuel version of the FPIN2 code, which provides a validated pin mechanics model, is coupled with SASSYS/SAS4A Version 3.0 for single pin calculations. In this implementation, SASSYS/SAS4A provides pin temperatures, and FPIN2 performs analysis of pin deformation and predicts the time and location of cladding failure. FPIN2 results are also used to estimate the axial expansion of fuel and the associated reactivity effects. The revalidation of the integrated SAS-FPIN2 code system is performed using TREAT tests

  4. Intermediate statistics a modern approach

    CERN Document Server

    Stevens, James P

    2007-01-01

    Written for those who use statistical techniques, this text focuses on a conceptual understanding of the material. It uses definitional formulas on small data sets to provide conceptual insight into what is being measured. It emphasizes the assumptions underlying each analysis, and shows how to test the critical assumptions using SPSS or SAS.

  5. Sas2

    International Development Research Centre (IDRC) Digital Library (Canada)

    The fascinating multi-country examples in the Guide illustrate how SAS2 ... The challenge is to raise all forms of inquiry to the power of two: making the .... This requires an ability to suspend judgment, consider the views of others, ..... Our view is that the drive to think "holistically" must always be expressed with local color and ...

  6. A SAS/AF application to administrate and query a file of incidents occurring in foreign nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.

    1994-07-01

    The Research and Development Division of Electricite de France has a file of incidents occurring in foreign pressurized water nuclear power stations. These incidents have an impact on either safety or reliability. The file is stored on an IBM 3090. For each incident, a docket is assigned, containing the identity of the nuclear plant and information on the incident in the form of codes or text. An application has been built with the SAS System under IBM (MVS) in order to: - allow the input of new nuclear plant identities, monthly operating coefficients and new incidents; - subset data from each SAS data set according to selection criteria (country, manufacturers, period, materials, etc.) in the form of coded fields and character strings; - calculate simple statistical analyses on the subset data (histograms of break duration, distribution of operating coefficients, cross-tabulation tables of sets and materials which bring about the incident) with output to screen and/or printer; - edit an annual booklet containing general results on the functioning of plants. After validation, data retrieved from the database are used in probabilistic safety analyses of nuclear power plants and in materials design studies (comparison with French materials, identification of factors having an impact on performance). The application is an interactive menu-driven tool and contains data entry screens (for new data or selection criteria). These screens have been built with SAS/AF software and Screen Control Language. Data selection and processing have been developed with Base SAS and SAS/GRAPH software. (author). 1 ref., 6 figs., 2 tabs

  7. SAS macro programs for geographically weighted generalized linear modeling with spatial point data: applications to health research.

    Science.gov (United States)

    Chen, Vivian Yi-Ju; Yang, Tse-Chuan

    2012-08-01

    An increasing interest in exploring spatial non-stationarity has generated several specialized analytic software programs; however, few of these programs can be integrated natively into a well-developed statistical environment such as SAS. We not only developed a set of SAS macro programs to fill this gap, but also expanded the geographically weighted generalized linear modeling (GWGLM) by integrating the strengths of SAS into the GWGLM framework. Three features distinguish our work. First, the macro programs of this study provide more kernel weighting functions than the existing programs. Second, with our codes the users are able to better specify the bandwidth selection process compared to the capabilities of existing programs. Third, the development of the macro programs is fully embedded in the SAS environment, providing great potential for future exploration of complicated spatially varying coefficient models in other disciplines. We provided three empirical examples to illustrate the use of the SAS macro programs and demonstrated the advantages explained above. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
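
The core idea behind geographically weighted modeling is to refit the model at each focal location with distance-decayed observation weights, so that coefficients vary over space. A minimal Python sketch of one common kernel (Gaussian) and a local weighted least squares fit; the function names and toy data are invented for illustration and this is not the authors' SAS macro:

```python
import numpy as np

def gaussian_kernel(d, bandwidth):
    """Gaussian distance-decay kernel: w = exp(-0.5 * (d / bandwidth)^2)."""
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def gwr_fit_at(point, coords, x, y, bandwidth):
    """Local weighted least squares at one focal location: observations
    are down-weighted by distance to `point`."""
    d = np.linalg.norm(coords - point, axis=1)
    w = gaussian_kernel(d, bandwidth)
    X = np.column_stack([np.ones_like(x), x])   # intercept + predictor
    XtW = X.T * w                               # X' diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)    # (X'WX)^-1 X'Wy

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
x = rng.normal(size=200)
slope = 1.0 + 0.2 * coords[:, 0]      # effect strengthens toward the east
y = 2.0 + slope * x + rng.normal(scale=0.1, size=200)

beta_west = gwr_fit_at(np.array([1.0, 5.0]), coords, x, y, bandwidth=2.0)
beta_east = gwr_fit_at(np.array([9.0, 5.0]), coords, x, y, bandwidth=2.0)
```

Fitting at a western and an eastern focal point recovers the spatial non-stationarity: the local slope estimate is larger in the east, where the simulated effect is stronger.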

  8. Design and analysis of experiments classical and regression approaches with SAS

    CERN Document Server

    Onyiah, Leonard C

    2008-01-01

    Introductory Statistical Inference and Regression Analysis; Elementary Statistical Inference; Regression Analysis; Experiments, the Completely Randomized Design (CRD) - Classical and Regression Approaches; Experiments; Experiments to Compare Treatments; Some Basic Ideas; Requirements of a Good Experiment; One-Way Experimental Layout or the CRD: Design and Analysis; Analysis of Experimental Data (Fixed Effects Model); Expected Values for the Sums of Squares; The Analysis of Variance (ANOVA) Table; Follow-Up Analysis to Check fo

  9. The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments

    International Nuclear Information System (INIS)

    Pham, Bihn T.; Einerson, Jeffrey J.

    2010-01-01

    This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks, and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objectives of this work include (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show that the three statistical analysis techniques provide a complementary capability to warn of thermocouple failures. They also suggest that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e., gas mixture content) to effectively maintain the target quantity (fuel temperature) within a given range.
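
Of the three techniques, control charting is the simplest to illustrate: establish limits from an in-control baseline, then flag readings outside them. A hedged Python sketch of a Shewhart-style individuals chart with 3-sigma limits; the simulated thermocouple readings and the injected step change are invented for illustration, not NDMAS data:

```python
import numpy as np

def control_limits(baseline):
    """Shewhart-style individuals chart: center line and 3-sigma
    limits estimated from an in-control baseline period."""
    mu, sigma = np.mean(baseline), np.std(baseline, ddof=1)
    return mu - 3 * sigma, mu + 3 * sigma

def out_of_control(series, lcl, ucl):
    """Indices of observations outside the control limits --
    candidate thermocouple faults or condition changes."""
    return [i for i, v in enumerate(series) if v < lcl or v > ucl]

rng = np.random.default_rng(1)
baseline = 1100 + rng.normal(scale=5.0, size=100)   # deg C, in control
lcl, ucl = control_limits(baseline)

monitored = 1100 + rng.normal(scale=5.0, size=50)
monitored[30:] += 40.0        # injected step change (simulated failure)
alarms = out_of_control(monitored, lcl, ucl)
```

Every reading after the simulated step change falls outside the limits and is flagged.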

  10. Homework Solutions S.A.S.

    OpenAIRE

    Acero Mora, Mariluz; Hernández Laguna, Liliana

    2012-01-01

    Homework Solutions S.A.S. will be a service company dedicated to providing tutoring for bilingual homework and assignments, with on-the-spot solutions and without parents having to pay by the month or semester. The company is expected to begin operations in January 2013. Homework Solutions S.A.S. will work in clients' homes, in schools that request its services, and at its main office located in the north of Bogotá.

  11. Statistical core design methodology using the VIPRE thermal-hydraulics code

    International Nuclear Information System (INIS)

    Lloyd, M.W.; Feltus, M.A.

    1995-01-01

    An improved statistical core design methodology for developing a computational departure from nucleate boiling ratio (DNBR) correlation has been developed and applied in order to analyze the nominal 1.3 DNBR limit on Westinghouse Pressurized Water Reactor (PWR) cores. This analysis, although limited in scope, found that the DNBR limit can be reduced from 1.3 to some lower value and remain accurate within an adequate confidence level of 95%, for three particular FSAR operational transients: turbine trip, complete loss of flow, and inadvertent opening of a pressurizer relief valve. The VIPRE-01 thermal-hydraulics code, the SAS/STAT statistical package, and the EPRI/Columbia University DNBR experimental data base were used in this research to develop the Pennsylvania State Statistical Core Design Methodology (PSSCDM). The VIPRE code was used to perform the necessary sensitivity studies and generate the EPRI correlation-calculated DNBR predictions. The SAS package used these EPRI DNBR correlation predictions from VIPRE as a data set to determine the best fit for the empirical model and to perform the statistical analysis. (author)

  12. Proteomic analysis of cellular response induced by boron neutron capture reaction in human squamous cell carcinoma SAS cells

    International Nuclear Information System (INIS)

    Sato, Akira; Itoh, Tasuku; Imamichi, Shoji; Kikuhara, Sota; Fujimori, Hiroaki; Hirai, Takahisa; Saito, Soichiro; Sakurai, Yoshinori; Tanaka, Hiroki; Nakamura, Hiroyuki; Suzuki, Minoru

    2015-01-01

    To understand the mechanism of cell death induced by the boron neutron capture reaction (BNCR), we performed proteome analyses of human squamous tumor SAS cells after BNCR. Cells were irradiated with a thermal neutron beam at KUR after incubation under boronophenylalanine (BPA)(+) and BPA(−) conditions. BNCR mainly induced typical apoptosis in SAS cells 24 h post-irradiation. Proteomic analysis in SAS cells suggested that proteins functioning in the endoplasmic reticulum, DNA repair, and RNA processing showed dynamic changes at an early phase after BNCR and could be involved in the regulation of the cellular response to BNCR. We found that BNCR induces fragments of the endoplasmic reticulum-localized lymphoid-restricted protein (LRMP). The fragmentation of LRMP was also observed in a rat tumor graft model 20 hours after BNCT treatment carried out at the National Nuclear Center of the Republic of Kazakhstan. These data suggest that dynamic changes of LRMP could be involved in the cellular response to BNCR. - Highlights: • BNCR in human squamous carcinoma cells caused typical apoptotic features. • BNCR induced fragments of LRMP in human squamous carcinoma cells and a rat tumor model. • The fragmentation of LRMP could be involved in the cellular response to BNCR.

  13. Stratospheric Air Sub-sampler (SAS) and its application to analysis of Delta O-17(CO2) from small air samples collected with an AirCore

    NARCIS (Netherlands)

    Mrozek, Dorota Janina; van der Veen, Carina; Hofmann, Magdalena E. G.; Chen, Huilin; Kivi, Rigel; Heikkinen, Pauli; Rockmann, Thomas

    2016-01-01

    We present the set-up and a scientific application of the Stratospheric Air Sub-sampler (SAS), a device to collect and to store the vertical profile of air collected with an AirCore (Karion et al., 2010) in numerous sub-samples for later analysis in the laboratory. The SAS described here is a 20m

  14. Particle and particle systems characterization small-angle scattering (SAS) applications

    CERN Document Server

    Gille, Wilfried

    2016-01-01

    Small-angle scattering (SAS) is the premier technique for the characterization of disordered nanoscale particle ensembles. SAS is produced by the particle as a whole and does not depend in any way on the internal crystal structure of the particle. Since the first applications of X-ray scattering in the 1930s, SAS has developed into a standard method in the field of materials science. SAS is a non-destructive method and can be directly applied for solid and liquid samples. Particle and Particle Systems Characterization: Small-Angle Scattering (SAS) Applications is geared to any scientist who might want to apply SAS to study tightly packed particle ensembles using elements of stochastic geometry. After completing the book, the reader should be able to demonstrate detailed knowledge of the application of SAS for the characterization of physical and chemical materials.

  15. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

    The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  16. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.

  17. Exploring item and higher order factor structure with the Schmid-Leiman solution: syntax codes for SPSS and SAS.

    Science.gov (United States)

    Wolff, Hans-Georg; Preising, Katja

    2005-02-01

    To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
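
The transformation itself is a few lines of matrix algebra, which is why it can be distributed as syntax code. A Python/NumPy sketch for the common case of an orthogonal higher-order solution; the toy loading matrices are invented for illustration and this is not the authors' SPSS/SAS syntax:

```python
import numpy as np

def schmid_leiman(F1, F2):
    """Schmid-Leiman transformation (Schmid & Leiman, 1957).

    F1: first-order loadings (variables x first-order factors)
    F2: higher-order loadings (first-order factors x higher-order factors)

    Returns the variables' direct loadings on the orthogonalized
    higher-order factor(s) and on the residualized first-order factors.
    """
    F2 = np.atleast_2d(F2)
    # uniqueness of each first-order factor after removing
    # the variance explained by the higher-order factor(s)
    u = np.sqrt(1.0 - np.sum(F2 ** 2, axis=1))
    general = F1 @ F2        # loadings on the higher-order factor(s)
    group = F1 * u           # residualized first-order loadings, F1 diag(u)
    return general, group

# toy example: 6 variables, 2 first-order factors, 1 second-order factor
F1 = np.array([[0.8, 0.0],
               [0.7, 0.0],
               [0.6, 0.0],
               [0.0, 0.8],
               [0.0, 0.7],
               [0.0, 0.6]])
F2 = np.array([[0.7],
               [0.6]])
general, group = schmid_leiman(F1, F2)
```

A useful check on the orthogonalization: for each variable, the squared general-factor loading plus the squared residualized group-factor loadings reproduce the original communality (e.g. variable 1: 0.56² + 0.5713² = 0.8²).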

  18. A SAS IML Macro for Loglinear Smoothing

    Science.gov (United States)

    Moses, Tim; von Davier, Alina

    2011-01-01

    Polynomial loglinear models for one-, two-, and higher-way contingency tables have important applications to measurement and assessment. They are essentially regarded as a smoothing technique, which is commonly referred to as loglinear smoothing. A SAS IML (SAS Institute, 2002a) macro was created to implement loglinear smoothing according to…
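
For a one-way table, loglinear smoothing fits the log of the expected frequencies as a low-degree polynomial of the score by Poisson maximum likelihood. A NumPy sketch of that fit via Newton iterations (the frequency data are invented; this is not the SAS IML macro). Because an intercept is in the model, the smoothed frequencies reproduce the observed total, a standard property of the ML fit:

```python
import numpy as np

def loglinear_smooth(counts, degree=2, n_iter=50):
    """Fit log(mu) = b0 + b1*t + ... + bd*t^d to observed frequencies
    by Poisson maximum likelihood (Newton / Fisher scoring)."""
    t = np.arange(len(counts)) / len(counts)        # scaled score points
    X = np.vander(t, degree + 1, increasing=True)   # 1, t, t^2, ...
    beta = np.zeros(degree + 1)
    beta[0] = np.log(counts.mean())                 # start at a flat fit
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        grad = X.T @ (counts - mu)                  # score vector
        hess = X.T @ (X * mu[:, None])              # X' diag(mu) X
        beta = beta + np.linalg.solve(hess, grad)
    return np.exp(X @ beta)

# ragged observed frequencies over 11 score points
counts = np.array([3, 8, 18, 30, 52, 60, 55, 31, 16, 9, 2], dtype=float)
smoothed = loglinear_smooth(counts, degree=2)
```

The degree-2 fit preserves the total count and the overall bell shape while ironing out the sampling irregularities.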

  19. Modeling developments for the SAS4A and SASSYS computer codes

    International Nuclear Information System (INIS)

    Cahalan, J.E.; Wei, T.Y.C.

    1990-01-01

    The SAS4A and SASSYS computer codes are being developed at Argonne National Laboratory for transient analysis of liquid metal cooled reactors. The SAS4A code is designed to analyse severe loss-of-coolant flow and overpower accidents involving coolant boiling, Cladding failures, and fuel melting and relocation. Recent SAS4A modeling developments include extension of the coolant boiling model to treat sudden fission gas release upon pin failure, expansion of the DEFORM fuel behavior model to handle advanced cladding materials and metallic fuel, and addition of metallic fuel modeling capability to the PINACLE and LEVITATE fuel relocation models. The SASSYS code is intended for the analysis of operational and beyond-design-basis transients, and provides a detailed transient thermal and hydraulic simulation of the core, the primary and secondary coolant circuits, and the balance-of-plant, in addition to a detailed model of the plant control and protection systems. Recent SASSYS modeling developments have resulted in detailed representations of the balance of plant piping network and components, including steam generators, feedwater heaters and pumps, and the turbine. 12 refs., 2 tabs

  20. Effect of hyperbaric oxygen therapy on SAS and SDS in children with ischemic encephalopathy

    Directory of Open Access Journals (Sweden)

    Pei-Yun Li

    2017-08-01

    Full Text Available Objective: To study and analyze the effect of early psychological intervention on the scores of SAS and SDS in children with hypoxic-ischemic encephalopathy undergoing hyperbaric oxygen therapy. Methods: A total of 64 children with hypoxic - ischemic encephalopathy enrolled in our hospital from July 2015 to July 2016 and their parents were selected as study subjects. The patients were treated with hyperbaric oxygen therapy, while their parents were given early psychological intervention. By the way of increasing parents’ awareness of the disease, helping parents build confidence in their children’s treatment and encouraging them to participate in daily training for their children to relieve their anxiety and depression. The parents' knowledge of the disease before and during treatment, the treatment of hyperbaric oxygen therapy and the change of SAS and SDS were observed. Results: After effective intervention, the scores of SAS and SDS of 64 patients’ parents were significantly lower than those before treatment. After 1 courses of intervention, the score of SAS was (43.36 ± 1.27 points, and the score of SDS was (45.22 ± 8.13 points. After 2 courses of intervention, the score of SAS was (41.07 ± 1.21 and the score of SDS was (42.35 ± 7.44 points, and parents' awareness of hypoxic-ischemic encephalopathy was significantly increased, and the differences between the two groups were statistically significant. Conclusion: Early psychological intervention on parents of children with hypoxic-ischemic encephalopathy can effectively improve the awareness of parents on the disease, so as to improve their acceptance of hyperbaric oxygen therapy; significantly reduce the parents’ SAS, SDS score. It is beneficial to build a good doctor-patient and nurse-patient relationship, improve the treatment effect and shorten the treatment time.

  1. Regression modeling methods, theory, and computation with SAS

    CERN Document Server

    Panik, Michael

    2009-01-01

    Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs.The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,

  2. SPSS and SAS programs for generalizability theory analyses.

    Science.gov (United States)

    Mushquash, Christopher; O'Connor, Brian P

    2006-08-01

    The identification and reduction of measurement errors is a major challenge in psychological testing. Most investigators rely solely on classical test theory for assessing reliability, whereas most experts have long recommended using generalizability theory instead. One reason for the common neglect of generalizability theory is the absence of analytic facilities for this purpose in popular statistical software packages. This article provides a brief introduction to generalizability theory, describes easy-to-use SPSS, SAS, and MATLAB programs for conducting the recommended analyses, and provides an illustrative example, using data (N = 329) for the Rosenberg Self-Esteem Scale. Program output includes variance components, relative and absolute errors and generalizability coefficients, coefficients for D studies, and graphs of D study results.
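
For the simplest one-facet crossed person x item design, the analysis reduces to estimating variance components from the two-way ANOVA mean squares and forming a generalizability coefficient. A Python sketch under that design (the data are simulated for illustration; this is not the published SPSS/SAS/MATLAB program):

```python
import numpy as np

def g_study(scores):
    """One-facet crossed p x i G study: variance components from
    mean squares, plus the relative generalizability coefficient.
    `scores` is a persons x items matrix."""
    n_p, n_i = scores.shape
    grand = scores.mean()
    p_means = scores.mean(axis=1)
    i_means = scores.mean(axis=0)
    ss_p = n_i * np.sum((p_means - grand) ** 2)
    ss_i = n_p * np.sum((i_means - grand) ** 2)
    ss_res = np.sum((scores - grand) ** 2) - ss_p - ss_i
    ms_p = ss_p / (n_p - 1)
    ms_i = ss_i / (n_i - 1)
    ms_res = ss_res / ((n_p - 1) * (n_i - 1))
    var_pi = ms_res                                  # residual/interaction
    var_p = max((ms_p - ms_res) / n_i, 0.0)          # person component
    var_i = max((ms_i - ms_res) / n_p, 0.0)          # item component
    g_coef = var_p / (var_p + var_pi / n_i)          # relative G coefficient
    return {"var_p": var_p, "var_i": var_i, "var_pi": var_pi, "g": g_coef}

rng = np.random.default_rng(2)
n_p, n_i = 200, 8
person = rng.normal(scale=1.0, size=(n_p, 1))   # true person effects
item = rng.normal(scale=0.5, size=(1, n_i))     # item difficulty effects
noise = rng.normal(scale=0.8, size=(n_p, n_i))  # residual/interaction
out = g_study(person + item + noise)
```

With the simulated components (1.0, 0.25, 0.64) the relative G coefficient for an 8-item measurement comes out near 1.0 / (1.0 + 0.64 / 8), about 0.93.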

  3. Pravděpodobnostní rozdělení v programu SAS [Probability distributions in SAS

    OpenAIRE

    Rosypal, Martin

    2009-01-01

    The aim of this work is to develop a methodology for creating custom add-ins for the statistical application SAS Enterprise Guide, and to create one such add-in that simplifies the calculation of quantiles, values of the probability mass function (or probability density function, respectively) and the cumulative distribution function, and that further creates an appropriate graph based on the user's specifications. In connection with the content of this add-in, the presented work includes the recherche from bac...

  4. SAS-1 Is a C2 Domain Protein Critical for Centriole Integrity in C. elegans

    Science.gov (United States)

    Delattre, Marie; Balestra, Fernando R.; Blanchoud, Simon; Finger, Susanne; Knott, Graham; Müller-Reichert, Thomas; Gönczy, Pierre

    2014-01-01

    Centrioles are microtubule-based organelles important for the formation of cilia, flagella and centrosomes. Despite progress in understanding the underlying assembly mechanisms, how centriole integrity is ensured is incompletely understood, including in sperm cells, where such integrity is particularly critical. We identified C. elegans sas-1 in a genetic screen as a locus required for bipolar spindle assembly in the early embryo. Our analysis reveals that sperm-derived sas-1 mutant centrioles lose their integrity shortly after fertilization, and that a related defect occurs when maternal sas-1 function is lacking. We establish that sas-1 encodes a C2 domain containing protein that localizes to centrioles in C. elegans, and which can bind and stabilize microtubules when expressed in human cells. Moreover, we uncover that SAS-1 is related to C2CD3, a protein required for complete centriole formation in human cells and affected in a type of oral-facial-digital (OFD) syndrome. PMID:25412110

  5. SAS-1 is a C2 domain protein critical for centriole integrity in C. elegans.

    Directory of Open Access Journals (Sweden)

    Lukas von Tobel

    2014-11-01

    Centrioles are microtubule-based organelles important for the formation of cilia, flagella and centrosomes. Despite progress in understanding the underlying assembly mechanisms, how centriole integrity is ensured is incompletely understood, including in sperm cells, where such integrity is particularly critical. We identified C. elegans sas-1 in a genetic screen as a locus required for bipolar spindle assembly in the early embryo. Our analysis reveals that sperm-derived sas-1 mutant centrioles lose their integrity shortly after fertilization, and that a related defect occurs when maternal sas-1 function is lacking. We establish that sas-1 encodes a C2 domain containing protein that localizes to centrioles in C. elegans, and which can bind and stabilize microtubules when expressed in human cells. Moreover, we uncover that SAS-1 is related to C2CD3, a protein required for complete centriole formation in human cells and affected in a type of oral-facial-digital (OFD) syndrome.

  6. SAS- Semantic Annotation Service for Geoscience resources on the web

    Science.gov (United States)

    Elag, M.; Kumar, P.; Marini, L.; Li, R.; Jiang, P.

    2015-12-01

    There is a growing need for increased integration across the data and model resources that are disseminated on the web, to advance their reuse across different earth science applications. Meaningful reuse of resources requires semantic metadata to realize the semantic web vision of allowing pragmatic linkage and integration among resources. Semantic metadata associates standard metadata with resources to turn them into semantically-enabled resources on the web. However, the lack of a common standardized metadata framework, as well as the uncoordinated use of metadata fields across different geo-information systems, has led to a situation in which standards and related Standard Names abound. To address this need, we have designed SAS to provide a bridge between the core ontologies required to annotate resources and information systems, in order to enable queries and analysis over annotations from a single environment (the web). SAS is one of the services provided by the Geosemantic framework, a decentralized semantic framework to support the integration between models and data and to allow semantically heterogeneous resources to interact with minimum human intervention. Here we present the design of SAS and demonstrate its application for annotating data and models. First we describe how predicates and their attributes are extracted from standards and ingested into the knowledge base of the Geosemantic framework. Then we illustrate the application of SAS in annotating data managed by SEAD and in annotating simulation models that have a web interface. SAS is a step in a broader approach to raise the quality of geoscience data and models published on the web and to allow users to better search, access, and use existing resources based on standard vocabularies that are encoded and published using semantic technologies.

  7. SPSS and SAS procedures for estimating indirect effects in simple mediation models.

    Science.gov (United States)

    Preacher, Kristopher J; Hayes, Andrew F

    2004-11-01

    Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
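
The bootstrap approach these macros implement can be sketched directly: estimate the a path (X to M) and the b path (M to Y controlling for X), then resample cases to obtain a percentile confidence interval for the indirect effect a*b. A hedged NumPy version with simulated data; this is an illustrative sketch, not the published SPSS/SAS macro:

```python
import numpy as np

def ols_slope(x, y):
    """Slope from simple OLS of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect_boot(x, m, y, n_boot=2000, seed=0):
    """Point estimate and percentile bootstrap CI for the indirect
    effect a*b in the simple mediation model X -> M -> Y."""
    rng = np.random.default_rng(seed)
    n = len(x)

    def ab(idx):
        a = ols_slope(x[idx], m[idx])               # a path: X -> M
        X2 = np.column_stack([np.ones(len(idx)), x[idx], m[idx]])
        b = np.linalg.lstsq(X2, y[idx], rcond=None)[0][2]  # b path: M -> Y | X
        return a * b

    point = ab(np.arange(n))
    boots = np.array([ab(rng.integers(0, n, n)) for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.8, size=n)     # true a = 0.6
y = 0.5 * m + rng.normal(scale=0.8, size=n)     # true b = 0.5
point, ci = indirect_effect_boot(x, m, y)
```

With a true indirect effect of 0.6 * 0.5 = 0.3, the bootstrap interval excludes zero, which is the formal evidence of mediation the article argues for.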

  8. The ASDEX integrated data analysis system AIDA

    International Nuclear Information System (INIS)

    Grassie, K.; Gruber, O.; Kardaun, O.; Kaufmann, M.; Lackner, K.; Martin, P.; Mast, K.F.; McCarthy, P.J.; Mertens, V.; Pohl, D.; Rang, U.; Wunderlich, R.

    1989-11-01

    For about two years, the ASDEX integrated data analysis system (AIDA), which combines the database (DABA) and the statistical analysis system (SAS), has been successfully in operation. Besides a considerable but meaningful reduction of the 'raw' shot data, it offers the advantage of carefully selected and precisely defined datasets, which are easily accessible for informative tabular data overviews (DABA) and multi-shot analysis (SAS). Even rather complicated statistical analyses can be performed efficiently within this system. In this report, we summarise AIDA's main features and give some details on its set-up and on the physical models which have been used for the derivation of the processed data. We also give a short introduction to using DABA and SAS. (orig.)

  9. SAS3DC - A computer program to describe accidents in LMFBRs

    International Nuclear Information System (INIS)

    Angerer, G.; Arnecke, G.; Polch, A.

    1981-02-01

    The code system SAS3D, developed at ANL, is at present the most adequate instrument for simulating accidents in LMFBRs. SAS3DC is an improved version of this code system: the routine CLAZAS, which models the motion of the fuel cladding in SAS3D, is replaced in SAS3DC by the routine CMOT. CMOT describes the moving material not in the Lagrangian frame, as CLAZAS does, but in the Eulerian frame, and is thus able to register even small cladding displacements. To complete the description of the SAS3DC code, the results of some sample problems are included. (orig.)

  10. [Standardization of the Greek version of Zung's Self-rating Anxiety Scale (SAS)].

    Science.gov (United States)

    Samakouri, M; Bouhos, G; Kadoglou, M; Giantzelidou, A; Tsolaki, K; Livaditis, M

    2012-01-01

    Self-rating Anxiety Scale (SAS), introduced by Zung, has been widely used in research and in clinical practice for the detection of anxiety. The present study aims at standardizing the Greek version of SAS. SAS consists of 20 items rated on a 1-4 Likert-type scale. The total SAS score may vary from 20 (no anxiety at all) to 80 (severe anxiety). Two hundred and fifty-four participants (114 male and 140 female), psychiatric patients, physically ill and general population individuals, aged 45.40±11.35 years, completed the following: (a) a demographic characteristics questionnaire, (b) the SAS Greek version, (c) Spielberger's Modified Greek State-Trait Anxiety Scale (STAI-Gr.-X) and (d) the Zung Depression Rating Scale (ZDRS). Seventy-six participants answered the SAS twice within a median period of 12 days. The following parameters were calculated: (a) internal consistency of the SAS in terms of Cronbach's α coefficient, (b) its test-retest reliability in terms of the intraclass correlation coefficient (ICC) and (c) its concurrent and convergent validities through its score's Spearman's rho correlations with both the state and trait subscales of STAI-Gr.-X and the ZDRS. In addition, in order to evaluate the SAS' discriminant validity, the scale's scores of the three groups of participants (psychiatric patients, physically ill and general population individuals) were compared with each other in terms of Kruskal-Wallis and Mann-Whitney U tests. The SAS Cronbach's alpha equals 0.897, while the ICC regarding its test-retest reliability equals 0.913. Spearman's rho concerning validity: (a) when SAS is compared to STAI-Gr.-X (state), it equals 0.767, (b) when SAS is compared to STAI-Gr.-X (trait), it equals 0.802 and (c) when SAS is compared to ZDRS, it equals 0.835. The mentally ill scored significantly higher on the SAS compared to both the healthy and the general population. In conclusion, the SAS Greek version presents very satisfactory psychometric properties regarding
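    Cronbach's α, the internal-consistency index reported in several records here (0.897 for the Greek SAS), has a simple closed form: α = k/(k-1) · (1 − Σ item variances / variance of the total score). A minimal Python sketch, illustrative only (the study itself publishes no code):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Perfectly consistent two-item toy scale -> alpha = 1.
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```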

  11. The School Anxiety Scale-Teacher Report (SAS-TR): translation and psychometric properties of the Iranian version.

    Science.gov (United States)

    Hajiamini, Zahra; Mohamadi, Ashraf; Ebadi, Abbas; Fathi- Ashtiani, Ali; Tavousi, Mahmoud; Montazeri, Ali

    2012-07-18

    The School Anxiety Scale-Teacher Report (SAS-TR) was designed to assess anxiety in children at school. The SAS-TR is a proxy-rated measure that assesses social anxiety and generalized anxiety and also gives a total anxiety score. This study aimed to translate and validate the SAS-TR in Iran. The translation and cultural adaptation of the original questionnaire were carried out in accordance with the published guidelines. A sample of students participated in the study. Reliability was estimated using internal consistency and test-retest analysis. Validity was assessed using content validity. The factor structure of the questionnaire was extracted by performing both exploratory and confirmatory factor analyses. In all, 200 elementary students aged 6 to 10 years were studied. Considering the recommended cut-off values, the overall prevalence of high anxiety in elementary students was found to be 21%. Cronbach's alpha coefficient for the Iranian SAS-TR was 0.92 and the intraclass correlation coefficient (ICC) was found to be 0.81. The principal component analysis indicated a two-factor structure for the questionnaire (generalized and social anxiety) that jointly accounted for 55.3% of the observed variance. The confirmatory factor analysis also indicated a good fit to the data for the two-factor latent structure of the questionnaire. In general, the findings suggest that the Iranian version of SAS-TR has satisfactory reliability and validity for measuring anxiety in 6 to 10 year old children in Iran. It is simple and easy to use and can now be applied in future studies.

  12. Mirror suspension system for the TAMA SAS

    CERN Document Server

    Takamori, A; Bertolini, A; Cella, G; DeSalvo, R; Fukushima, M; Iida, Y; Jacquier, F; Kawamura, S; Marka, S; Nishi, Y; Numata, K; Sannibale, V; Somiya, K; Takahashi, R; Tariq, H; Tsubono, K; Ugas, J; Viboud, N; Yamamoto, H; Yoda, T; Wang Chen Yang

    2002-01-01

    Several R and D programmes are ongoing to develop the next generation of interferometric gravitational wave detectors providing the superior sensitivity desired for refined astronomical observations. In order to obtain a wide observation band at low frequencies, the optics need to be isolated from the seismic noise. The TAMA SAS (seismic attenuation system) has been developed within an international collaboration between TAMA, LIGO, and some European institutes, with the main objective of achieving sufficient low-frequency seismic attenuation (-180 dB at 10 Hz). The system suppresses seismic noise well below the other noise levels starting at very low frequencies above 10 Hz. It also includes an active inertial damping system to decrease the residual motion of the optics enough to allow stable operation of the interferometer. The TAMA SAS also comprises a sophisticated mirror suspension subsystem (SUS). The SUS provides support for the optics and vibration isolation complementing the SAS performance. The SU...

  13. Surface anatomy scanning (SAS) in intracranial tumours: comparison with surgical findings

    International Nuclear Information System (INIS)

    Sumida, M.; Uozumi, T.; Kiya, K.; Arita, K.; Kurisu, K.; Onda, J.; Satoh, H.; Ikawa, F.; Yukawa, O.; Migita, K.; Hada, H.; Katada, K.

    1995-01-01

    We evaluated the usefulness of surface anatomy scanning (SAS) in intracranial tumours, comparing it with surgical findings. We examined 31 patients with brain tumours preoperatively. The tumours included 16 meningiomas, 8 gliomas, 4 metastases and 3 others. SAS clearly demonstrated the tumours, allowing them to be distinguished from the structures of the brain surface, including oedema, except in cases of metastasis. SAS clearly demonstrated large cortical veins. SAS is useful for three-dimensional delineation of the brain surface before surgery. (orig.)

  14. Adaptive beamforming for low frequency SAS imagery and bathymetry

    NARCIS (Netherlands)

    Hayes, M.P.; Hunter, A.J.

    2012-01-01

    Synthetic aperture side-scan sonar (SAS) is a mature technology for high-resolution sea floor imaging [1]. Interferometric synthetic aperture sonars (InSAS) use additional hydrophones in a vertical array for bathymetric mapping [2]. This has created high-resolution bathymetry in deep water

  15. First report of sasX-positive methicillin-resistant Staphylococcus aureus in Japan.

    Science.gov (United States)

    Nakaminami, Hidemasa; Ito, Teruyo; Han, Xiao; Ito, Ayumu; Matsuo, Miki; Uehara, Yuki; Baba, Tadashi; Hiramatsu, Keiichi; Noguchi, Norihisa

    2017-09-01

    SasX is a known virulence factor of Staphylococcus aureus involved in colonisation and immune evasion of the bacterium. The sasX gene, which is located on the ϕSPβ prophage, is frequently found in the sequence type (ST) 239 S. aureus lineage, which is the predominant healthcare-associated clone in Asian countries. In Japan, ST239 clones have rarely been identified, and sasX-positive strains have not been reported to date. Here, we report the first identification of 18 sasX-positive methicillin-resistant S. aureus (MRSA) strains in Japanese hospitals between 2009 and 2011. All sasX-positive isolates belonged to an ST239-staphylococcal cassette chromosome mec type III (ST239-III) lineage. However, we were unable to identify additional sasX-positive MRSA strains from 2012 to 2016, indicating that the small epidemic of sasX-positive isolates observed in this study was temporary. The sequence surrounding sasX in the strain TOHH628 lacked 51 genes that encode phage packaging and structural proteins, and no bacteriophage was induced by mitomycin C. Additionally, in the TOHH628 strain, the region (64.6 kb) containing sasX showed high identity to the ϕSPβ-like element (71.3 kb) of the Taiwanese MRSA strain Z172. The data strongly suggest that the present sasX-positive isolates found in Japanese hospitals were transmitted incidentally from other countries. © FEMS 2017. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. SAS Code for Calculating Intraclass Correlation Coefficients and Effect Size Benchmarks for Site-Randomized Education Experiments

    Science.gov (United States)

    Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.

    2013-01-01

    When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
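    The macro itself is not reproduced in the abstract, but the unconditional intraclass correlation such power analyses rest on can be computed from a balanced one-way ANOVA as ρ = (MSB − MSW) / (MSB + (n−1)·MSW), with n members per site. A hedged Python sketch (not the authors' SAS macro; the toy data are invented):

```python
import numpy as np

def icc_oneway(groups):
    """Unconditional ICC from a balanced one-way ANOVA:
    rho = (MSB - MSW) / (MSB + (n - 1) * MSW), n members per group."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    k, n = len(groups), len(groups[0])  # assumes equal group sizes
    grand = np.mean(np.concatenate(groups))
    msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

icc_high = icc_oneway([[1, 1, 1], [5, 5, 5]])  # all variance between sites
icc_low = icc_oneway([[1, 2, 3], [1, 2, 3]])   # all variance within sites
```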

  17. The Radisson SAS hotel's mantra: yes, I can! / Kai Vare

    Index Scriptorium Estoniae

    Vare, Kai, 1968-

    2004-01-01

    According to customer satisfaction surveys, the Radisson SAS hotel in Tallinn ranks among the best of the chain's hotels. Hotel director Kaido Ojaperv and sales manager Ann-Kai Tõrs discuss the Radisson SAS standards, the principles of ensuring one hundred percent customer satisfaction, staff selection, and training. With a comment by Sandra Dimitrovich.

  18. The School Anxiety Scale-Teacher Report (SAS-TR: translation and psychometric properties of the Iranian version

    Directory of Open Access Journals (Sweden)

    Hajiamini Zahra

    2012-07-01

    Full Text Available. Background: The School Anxiety Scale-Teacher Report (SAS-TR) was designed to assess anxiety in children at school. The SAS-TR is a proxy-rated measure that assesses social anxiety and generalized anxiety and also gives a total anxiety score. This study aimed to translate and validate the SAS-TR in Iran. Methods: The translation and cultural adaptation of the original questionnaire were carried out in accordance with the published guidelines. A sample of students participated in the study. Reliability was estimated using internal consistency and test-retest analysis. Validity was assessed using content validity. The factor structure of the questionnaire was extracted by performing both exploratory and confirmatory factor analyses. Results: In all, 200 elementary students aged 6 to 10 years were studied. Considering the recommended cut-off values, the overall prevalence of high anxiety in elementary students was found to be 21%. Cronbach's alpha coefficient for the Iranian SAS-TR was 0.92 and the intraclass correlation coefficient (ICC) was found to be 0.81. The principal component analysis indicated a two-factor structure for the questionnaire (generalized and social anxiety) that jointly accounted for 55.3% of the observed variance. The confirmatory factor analysis also indicated a good fit to the data for the two-factor latent structure of the questionnaire. Conclusion: In general, the findings suggest that the Iranian version of SAS-TR has satisfactory reliability and validity for measuring anxiety in 6 to 10 year old children in Iran. It is simple and easy to use and can now be applied in future studies.

  19. Simple and flexible SAS and SPSS programs for analyzing lag-sequential categorical data.

    Science.gov (United States)

    O'Connor, B P

    1999-11-01

    This paper describes simple and flexible programs for analyzing lag-sequential categorical data, using SAS and SPSS. The programs read a stream of codes and produce a variety of lag-sequential statistics, including transitional frequencies, expected transitional frequencies, transitional probabilities, adjusted residuals, z values, Yule's Q values, likelihood ratio tests of stationarity across time and homogeneity across groups or segments, transformed kappas for unidirectional dependence, bidirectional dependence, parallel and nonparallel dominance, and significance levels based on both parametric and randomization tests.
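    The core lag-sequential quantities named first in this abstract, transitional frequencies and transitional probabilities, reduce to counting adjacent code pairs. A minimal Python sketch (illustrative; the published programs are SAS/SPSS):

```python
from collections import Counter

def lag1_transitions(codes):
    """Lag-1 transitional frequencies and probabilities
    from a stream of categorical codes."""
    freqs = Counter(zip(codes, codes[1:]))   # count each adjacent pair
    givens = Counter(codes[:-1])             # how often each code starts a pair
    probs = {pair: c / givens[pair[0]] for pair, c in freqs.items()}
    return dict(freqs), probs

freqs, probs = lag1_transitions(list("ABABAC"))
```

    Expected and observed pair counts from these tables feed directly into the adjusted residuals, z values and Yule's Q statistics the programs report.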

  20. Cognition of and Demand for Education and Teaching in Medical Statistics in China: A Systematic Review and Meta-Analysis

    Science.gov (United States)

    Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong

    2015-01-01

    Background: Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives: This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods: We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results: There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. Conclusion: The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.

  1. Structures of SAS-6 suggest its organization in centrioles.

    Science.gov (United States)

    van Breugel, Mark; Hirono, Masafumi; Andreeva, Antonina; Yanagisawa, Haru-aki; Yamaguchi, Shoko; Nakazawa, Yuki; Morgner, Nina; Petrovich, Miriana; Ebong, Ima-Obong; Robinson, Carol V; Johnson, Christopher M; Veprintsev, Dmitry; Zuber, Benoît

    2011-03-04

    Centrioles are cylindrical, ninefold symmetrical structures with peripheral triplet microtubules strictly required to template cilia and flagella. The highly conserved protein SAS-6 constitutes the center of the cartwheel assembly that scaffolds centrioles early in their biogenesis. We determined the x-ray structure of the amino-terminal domain of SAS-6 from zebrafish, and we show that recombinant SAS-6 self-associates in vitro into assemblies that resemble cartwheel centers. Point mutations are consistent with the notion that centriole formation in vivo depends on the interactions that define the self-assemblies observed here. Thus, these interactions are probably essential to the structural organization of cartwheel centers.

  2. Preliminary study of energy confinement data with a statistical analysis system in HL-2A tokamak

    International Nuclear Information System (INIS)

    Xu Yuan; Cui Zhengying; Ji Xiaoquan; Dong Chunfeng; Yang Qingwei; O J W F Kardaun

    2010-01-01

    Taking advantage of the HL-2A experimental data, an energy confinement database oriented to the ITERL DB2.0 version has been established for the first time. For this database, the widely used statistical analysis system (SAS) has been adopted for the first time to analyze and evaluate the confinement data from HL-2A; research on the scaling of the energy confinement time with plasma density has been carried out, and some preliminary results have been achieved. Finally, through comparison with both the ITER scaling law and the earlier ASDEX database, the L-mode confinement quality on HL-2A and the influence of temperature on the Spitzer resistivity are discussed. (authors)
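    Confinement scaling laws of the kind studied here typically take power-law form, e.g. τ_E = C·n^α, which becomes a straight line after taking logarithms. A small illustrative Python sketch of such a fit (not the AIDA/SAS code; the variable names and synthetic data are invented for illustration):

```python
import numpy as np

def fit_power_law(density, tau):
    """Least-squares fit of tau = C * density**alpha,
    done as a straight-line fit in log-log space."""
    alpha, log_c = np.polyfit(np.log(density), np.log(tau), 1)
    return np.exp(log_c), alpha

# Synthetic check: data generated exactly as tau = 2 * n**0.5.
n_e = np.array([1.0, 2.0, 4.0, 8.0])
c, alpha = fit_power_law(n_e, 2.0 * n_e**0.5)
```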

  3. Connecting .NET / Java technologies to SAS BI

    OpenAIRE

    Jandák, Miroslav

    2009-01-01

    This thesis is focused on the SAS Enterprise Intelligence Platform product and its capabilities to integrate within a Business Intelligence solution. The aim of the thesis is to describe the integration technologies that the platform features, as well as to determine their fields of application, compare them, and, where applicable, provide usage examples. The first part of the thesis explains the general concept and architecture of Business Intelligence; afterwards, the reader gets familiar with the SAS Enterprise Int...

  4. A comparison of the statistical software systems SPSS, STATISTICA, and SAS in terms of their capabilities for the analysis of categorical data

    OpenAIRE

    Torboněnko, Natalja

    2009-01-01

    The purpose of this bachelor thesis is to compare three statistical software packages when applied to the analysis of categorical data. These packages are SPSS, STATISTICA, and SAS Enterprise Guide. They provide a wide variety of statistical and graphical techniques and enable users to obtain the results of statistical procedures without programming. The comparison between these packages was based upon features such as data input, the capability to define missing values, value labe...

  5. Mirror suspension system for the TAMA SAS

    International Nuclear Information System (INIS)

    Takamori, Akiteru; Ando, Masaki; Bertolini, Alessandro; Cella, Giancarlo; DeSalvo, Riccardo; Fukushima, Mitsuhiro; Iida, Yukiyoshi; Jacquier, Florian; Kawamura, Seiji; Marka, Szabolcs; Nishi, Yuhiko; Numata, Kenji; Sannibale, Virginio; Somiya, Kentaro; Takahashi, Ryutaro; Tariq, Hareem; Tsubono, Kimio; Ugas, Jose; Viboud, Nicolas; Yamamoto, Hiroaki; Yoda, Tatsuo; Wang Chenyang

    2002-01-01

    Several R and D programmes are ongoing to develop the next generation of interferometric gravitational wave detectors providing the superior sensitivity desired for refined astronomical observations. In order to obtain a wide observation band at low frequencies, the optics need to be isolated from the seismic noise. The TAMA SAS (seismic attenuation system) has been developed within an international collaboration between TAMA, LIGO, and some European institutes, with the main objective of achieving sufficient low-frequency seismic attenuation (-180 dB at 10 Hz). The system suppresses seismic noise well below the other noise levels starting at very low frequencies above 10 Hz. It also includes an active inertial damping system to decrease the residual motion of the optics enough to allow stable operation of the interferometer. The TAMA SAS also comprises a sophisticated mirror suspension subsystem (SUS). The SUS provides support for the optics and vibration isolation complementing the SAS performance. The SUS is equipped with a totally passive magnetic damper to suppress internal resonances without degrading the thermal noise performance. In this paper we discuss the SUS details and present prototype results.

  6. A new cavity model for SAS4A

    International Nuclear Information System (INIS)

    Moxon, D.; Camous, F.

    1994-01-01

    The SAS4A code is the fourth generation of the SAS series developed at ANL to study the initiating phase of hypothetical core disruptive accidents in LMFBRs. It was made available to the CEA in order to obtain further validation studies and model developments. The new cavity model described here and incorporated in the code was first developed as a stand-alone code. It was thoroughly tested numerically and found to be quick and stable. This paper describes only the physical phenomena taken into account.

  7. Susan J. Slaughter, Lora D. Delwiche, The Little SAS Book for Enterprise Guide 4.1 et James B. Davis, Statistics using SAS Enterprise Guide

    OpenAIRE

    2010-01-01

    The SAS® (Statistical Analysis System) software is a reference package for statistical and econometric analysis in industrial economics; it makes it possible to manage large databases regardless of their format or host platform, to perform virtually all econometric treatments on these data, and to format the results. For many users, an obstacle to its use has often been the relative complexity of its programming tools...

  8. PREFACE Proceedings of the XIV International Conference on Small-Angle Scattering, SAS-2009

    Science.gov (United States)

    King, Stephen; Terrill, Nicholas

    2010-10-01

    The scientific heart of the conference comprised 10 plenary sessions, interspersed with 39 'themed' parallel sessions, 2 poster sessions, an afternoon tour of Diamond and ISIS, and a week-long exhibition. There were 144 contributed oral presentations and 308 poster presentations across a total of 21 themes. Over half of all presentations fell under 6 themes: biological systems, colloids and solutions, instrumentation, kinetic and time-resolved measurements, polymers, and surfaces and interfaces. The importance of SAS techniques to the study of biology, materials science and soft matter/nanoscience is clear. The plenary presentations, which covered topics as diverse as advanced analysis techniques, biology, green chemistry, materials science and surfaces, were delivered by Frank Bates, Minnesota, USA, Peter Fratzl, MPI Golm, Germany, Buxing Han, Beijing, China, Julia Kornfield, CIT, USA, Jan Skov Pedersen, Aarhus, Denmark, Moonhor Ree, Pohang, Korea, Mitsuhiro Shibayama, Tokyo, Japan, Robert Thomas, Oxford, UK, Jill Trewhella, Sydney, Australia, and Thomas Zemb, ICSM Bagnols, France. Instigated by representatives of the Belgian and Dutch SAS communities, one parallel session was dedicated to a tribute to Michel Koch, the pioneer of so many novel applications of SAXS, who retired after 30 years at the EMBL Hamburg in late 2006. With a supporting cast that included Wim Bras, ESRF, France, Tony Ryan, Sheffield, UK, and Joe Zaccai, ILL, France, and watched by former colleague André Gabriel, Michel treated the audience to a fascinating, and at times light-hearted, retrospective of the evolution of synchrotron SAXS. Another parallel session was devoted to the work of the canSAS (Collective Action for Nomadic Small-Angle Scatterers) network of large-facility representatives and instrument scientists in areas such as data file formats, intensity calibration and software development. For further information see http://www.smallangles.net/wgwiki/index.php/canSAS_Working_Groups. A total of

  9. New evidence on the Statistical Anxiety Scale (SAS)

    Directory of Open Access Journals (Sweden)

    Amparo Oliver

    2014-01-01

    Full Text Available. Courses related to statistics often suffer from poor academic performance. Anxiety is negatively related to performance, and statistical anxiety in particular may be a key construct for improving the teaching of this and related subjects. The Statistical Anxiety Scale (Vigil-Colet, Lorenzo-Seva, & Condon, 2008) was created with the aim of being useful for predicting academic performance in statistics. It is based on three dimensions of anxiety referring to three specific aspects: anxiety about examinations, anxiety when asking for help in understanding statistics, and anxiety in the process of interpreting results. This three-factor structure was first found by the authors of the scale and corroborated in a first validation with Italian and Spanish students. The present study aims to add new evidence on the reliability and validity of the scale, employing robust statistical techniques in the reliability study and extending the study of validity with respect to its main criterion, academic performance, since it cannot be considered synonymous with self-efficacy.

  10. SAS-macros for estimation and prediction in a model of the electricity consumption

    DEFF Research Database (Denmark)

    1998-01-01

    ''SAS-macros for estimation and prediction in a model of the electricity consumption'' is a large collection of SAS-macros for handling a model of the electricity consumption in Eastern Denmark. The macros are installed at Elkraft, Ballerup.

  11. Fitting polytomous Rasch models in SAS

    DEFF Research Database (Denmark)

    Christensen, Karl Bang

    2006-01-01

    The item parameters of a polytomous Rasch model can be estimated using marginal and conditional approaches. This paper describes how this can be done in SAS (V8.2) for three item parameter estimation procedures: marginal maximum likelihood estimation, conditional maximum likelihood estimation, an...

  12. Olev Schults: SAS needs Estonian Air as a national airline / Olev Schults; interviewed by Andres Reimer

    Index Scriptorium Estoniae

    Schults, Olev

    2008-01-01

    The chairman of the supervisory board of Estonian Air answers questions on whether SAS developed Latvia's airBaltic at Estonian Air's expense, what the point of a national airline is if the state may not finance it, and how the flood of criticism of SAS by Estonian politicians affects investor sentiment.

  13. Predicting Smoking Status Using Machine Learning Algorithms and Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Charles Frank

    2018-03-01

    Full Text Available. Smoking has been proven to negatively affect health in a multitude of ways. As of 2009, smoking has been considered the leading cause of preventable morbidity and mortality in the United States, continuing to plague the country's overall health. This study aims to investigate the viability and effectiveness of some machine learning algorithms for predicting the smoking status of patients based on their blood tests and vital readings. The analysis of this study is divided into two parts. In part 1, we use one-way ANOVA with the SAS tool to show the statistically significant difference in blood test readings between smokers and non-smokers. The results show that the difference in INR, which measures the effectiveness of anticoagulants, was significant in favor of non-smokers, which further confirms the health risks associated with smoking. In part 2, we use five machine learning algorithms: Naïve Bayes, MLP, logistic regression classifier, J48 and Decision Table to predict the smoking status of patients. To compare the effectiveness of these algorithms we use precision, recall, F-measure and accuracy. The results show that the logistic algorithm outperformed the four other algorithms with precision, recall, F-measure, and accuracy of 83%, 83.4%, 83.2%, and 83.44%, respectively.
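    The four comparison measures used in part 2 follow directly from the confusion matrix: precision = TP/(TP+FP), recall = TP/(TP+FN), the F-measure is their harmonic mean, and accuracy is the fraction of correct predictions. A minimal Python sketch (illustrative only; not the study's code, and the toy labels are invented):

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, F-measure and accuracy for 0/1 labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    return precision, recall, f_measure, accuracy

prec, rec, f1, acc = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```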

  14. News in SAS Analytics 14.2

    DEFF Research Database (Denmark)

    Milhøj, Anders

    2017-01-01

    In November 2016, the Analytical Products suite was released to the market in the updated version 14.2. This update contains updates of the analytical program packages within statistics, econometrics, operations research, etc. These updates are now decoupled from simultaneous updates of the overall SAS program...

  15. Code portability and data management considerations in the SAS3D LMFBR accident-analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.

    1981-01-01

    The SAS3D code was produced from a predecessor in order to reduce or eliminate interrelated problems in the areas of code portability, the large size of the code, inflexibility in the use of memory and the size of cases that can be run, code maintenance, and running speed. Many conventional solutions, such as variable dimensioning, disk storage, virtual memory, and existing code-maintenance utilities were not feasible or did not help in this case. A new data management scheme was developed, coding standards and procedures were adopted, special machine-dependent routines were written, and a portable source code processing code was written. The resulting code is quite portable, quite flexible in the use of memory and the size of cases that can be run, much easier to maintain, and faster running. SAS3D is still a large, long running code that only runs well if sufficient main memory is available

  16. The possibilities of the color black in interior design

    OpenAIRE

    Lapkovska, Ērika

    2016-01-01

    In the diploma thesis "The possibilities of the color black in interior design", the understanding, significance, and use of the color black in a historical perspective are examined on the basis of the available literature, together with the principles of using black in interiors. In the empirical part, a comparative study is carried out by analysing various café and bar interiors in Latvia and around the world in which the color black is present. The thesis comprises 75 pages, including an introduction, 4 chapters, 7 subchapters, a list of references with 53 entries, and 2 append...

  17. DEFORM-4: fuel pin characterization and transient response in the SAS4A accident analysis code system

    International Nuclear Information System (INIS)

    Miles, K.J.; Hill, D.J.

    1986-01-01

    The DEFORM-4 module is the segment of the SAS4A Accident Analysis Code System that calculates the fuel pin characterization in response to a steady state irradiation history, thereby providing the initial conditions for the transient calculation. The various phenomena considered include fuel porosity migration, fission gas bubble induced swelling, fuel cracking and healing, fission gas release, cladding swelling, and the thermal-mechanical state of the fuel and cladding. In the transient state, the module continues the thermal-mechanical response calculation, including fuel melting and central cavity pressurization, until cladding failure is predicted and one of the failed fuel modules is initiated. Comparisons with experimental data have demonstrated the validity of the modeling approach

  18. Combination of functional MRI with SAS and MRA

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, Masayuki; Takeshita, Shinichirou; Kutsuna, Munenori; Akimitsu, Tomohide; Arita, Kazunori; Kurisu, Kaoru [Hiroshima Univ. (Japan). School of Medicine

    1999-02-01

    For presurgical diagnosis of the brain surface, the combination of functional MRI (fMRI) with SAS and MR angiography was examined. This method could visualize the brain bay, convolutions, and veins as indices of the surface. Five normal adults (male, mean age 28 years) and 7 patients with brain tumors located mainly near the surface (male: 4, female: 3, mean age 52.3 years) were studied. fMRI was performed by the SPGR method (TR 70, TE 40, flip angle 60, one slice, thickness 10 mm, FOV 20 cm, matrix 128 x 128). The brain surface was visualized by SAS (surface anatomy scanning), performed by the FSE method (TR 6000, TE 200, echo train 16, thickness 20 mm, slice 3, NEX 2). Cortical veins near the superior sagittal sinus were visualized by MRA with the 2D-TOF method (TR 50, TE 20, flip angle 60, thickness 2 mm, slice 28, NEX 1). These images were superimposed, and the functional image of the peripheral sensorimotor region was evaluated anatomically. In normal adults, a high signal was visualized on the other side near the sensorimotor region in 8 of 10 sides. All high-signal areas on fMRI agreed with the cortical veins near the sensorimotor region visualized by MRA. In patients with brain tumors, a signal was visualized on the other side of the sensorimotor region of the tumor, except in 2 cases with palsy. On the other side of the tumor, an fMRI signal was visualized in 5 of 7 cases. The tumor was visualized as a contrasting low-signal field on SAS. The positional relationship between the tumor, the brain surface, and brain function was visualized distinctly by the combination of fMRI, SAS and MRA. This method could become useful for presurgical diagnosis. (K.H.)

  19. SAS-6 engineering reveals interdependence between cartwheel and microtubules in determining centriole architecture.

    Science.gov (United States)

    Hilbert, Manuel; Noga, Akira; Frey, Daniel; Hamel, Virginie; Guichard, Paul; Kraatz, Sebastian H W; Pfreundschuh, Moritz; Hosner, Sarah; Flückiger, Isabelle; Jaussi, Rolf; Wieser, Mara M; Thieltges, Katherine M; Deupi, Xavier; Müller, Daniel J; Kammerer, Richard A; Gönczy, Pierre; Hirono, Masafumi; Steinmetz, Michel O

    2016-04-01

    Centrioles are critical for the formation of centrosomes, cilia and flagella in eukaryotes. They are thought to assemble around a nine-fold symmetric cartwheel structure established by SAS-6 proteins. Here, we have engineered Chlamydomonas reinhardtii SAS-6-based oligomers with symmetries ranging from five- to ten-fold. Expression of a SAS-6 mutant that forms six-fold symmetric cartwheel structures in vitro resulted in cartwheels and centrioles with eight- or nine-fold symmetries in vivo. In combination with Bld10 mutants that weaken cartwheel-microtubule interactions, this SAS-6 mutant produced six- to eight-fold symmetric cartwheels. Concurrently, the microtubule wall maintained eight- and nine-fold symmetries. Expressing SAS-6 with analogous mutations in human cells resulted in nine-fold symmetric centrioles that exhibited impaired length and organization. Together, our data suggest that the self-assembly properties of SAS-6 instruct cartwheel symmetry, and lead us to propose a model in which the cartwheel and the microtubule wall assemble in an interdependent manner to establish the native architecture of centrioles.

  20. Sas-4 proteins are required during basal body duplication in Paramecium

    Science.gov (United States)

    Gogendeau, Delphine; Hurbain, Ilse; Raposo, Graca; Cohen, Jean; Koll, France; Basto, Renata

    2011-01-01

    Centrioles and basal bodies are structurally related organelles composed of nine microtubule (MT) triplets. Studies performed in Caenorhabditis elegans embryos have shown that centriole duplication takes place in a sequential way, in which different proteins are recruited in a specific order to assemble a procentriole. ZYG-1 initiates centriole duplication by triggering the recruitment of a complex of SAS-5 and SAS-6, which then recruits the final player, SAS-4, to allow the incorporation of MT singlets. It is thought that a similar mechanism (that also involves additional proteins) is present in other animal cells, but it remains to be investigated whether the same players and their ascribed functions are conserved during basal body duplication in cells that exclusively contain basal bodies. To investigate this question, we have used the multiciliated protist Paramecium tetraurelia. Here we show that in the absence of PtSas4, two types of defects in basal body duplication can be identified. In the majority of cases, the germinative disk and cartwheel, the first structures assembled during duplication, are not detected. In addition, if daughter basal bodies were formed, they invariably had defects in MT recruitment. Our results suggest that PtSas4 has a broader function than its animal orthologues. PMID:21289083

  1. SAS-6 assembly templated by the lumen of cartwheel-less centrioles precedes centriole duplication.

    Science.gov (United States)

    Fong, Chii Shyang; Kim, Minhee; Yang, T Tony; Liao, Jung-Chi; Tsou, Meng-Fu Bryan

    2014-07-28

    Centrioles are 9-fold symmetric structures duplicating once per cell cycle. Duplication involves self-oligomerization of the centriolar protein SAS-6, but how the 9-fold symmetry is invariantly established remains unclear. Here, we found that SAS-6 assembly can be shaped by preexisting (or mother) centrioles. During S phase, SAS-6 molecules are first recruited to the proximal lumen of the mother centriole, adopting a cartwheel-like organization through interactions with the luminal wall, rather than via their self-oligomerization activity. The removal or release of luminal SAS-6 requires Plk4 and the cartwheel protein STIL. Abolishing either the recruitment or the removal of luminal SAS-6 hinders SAS-6 (or centriole) assembly at the outside wall of mother centrioles. After duplication, the lumen of engaged mother centrioles becomes inaccessible to SAS-6, correlating with a block for reduplication. These results lead to a proposed model that centrioles may duplicate via a template-based process to preserve their geometry and copy number. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Investigating spousal concordance of diabetes through statistical analysis and data mining.

    Directory of Open Access Journals (Sweden)

    Jong-Yi Wang

    Spousal clustering of diabetes merits attention. Whether old-age vulnerability or a shared family environment determines the concordance of diabetes is also uncertain. This study investigated the spousal concordance of diabetes and compared the risk of diabetes concordance between couples and noncouples by using nationally representative data. A total of 22,572 individuals identified from the 2002-2013 National Health Insurance Research Database of Taiwan constituted 5,643 couples and 5,643 noncouples through 1:1 dual propensity score matching (PSM). Factors associated with concordance in both spouses with diabetes were analyzed at the individual level. The risk of diabetes concordance between couples and noncouples was compared at the couple level. Logistic regression was the main statistical method. Statistical data were analyzed using SAS 9.4. The C&RT and Apriori data mining algorithms in IBM SPSS Modeler 13 served as a supplement to the statistics. High odds of the spousal concordance of diabetes were associated with old age, middle levels of urbanization, and high comorbidities (all P < 0.05). The dual PSM analysis revealed that the risk of diabetes concordance was significantly higher in couples (5.19%) than in noncouples (0.09%; OR = 61.743, P < 0.0001). A high concordance rate of diabetes in couples may indicate the influences of assortative mating and a shared environment. Diabetes in a spouse implicates its risk in the partner. Family-based diabetes care that emphasizes the screening of couples at risk of diabetes by using the identified risk factors is suggested in prospective clinical practice interventions.
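
    The couple-level comparison above reduces to an odds ratio on a 2x2 table. A minimal Python sketch of that computation, using counts reconstructed from the reported rates (5.19% of 5,643 couples vs. 0.09% of 5,643 noncouples; these are illustrative reconstructions, not the study's raw data):

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Odds ratio of an event in group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# Counts reconstructed from the reported concordance rates (illustrative).
couples_concordant = round(0.0519 * 5643)     # ~293 of 5,643 couples
noncouples_concordant = round(0.0009 * 5643)  # ~5 of 5,643 noncouples

or_est = odds_ratio(couples_concordant, 5643, noncouples_concordant, 5643)
print(round(or_est, 1))  # close to the reported OR = 61.743
```

    Note the reconstructed counts reproduce the reported odds ratio to within rounding, which is all this sketch is meant to show.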

  3. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  4. Classical Methods of Statistics With Applications in Fusion-Oriented Plasma Physics

    CERN Document Server

    Kardaun, Otto J W F

    2005-01-01

    Classical Methods of Statistics is a blend of theory and practical statistical methods written for graduate students and researchers interested in applications to plasma physics and its experimental aspects. It can also fruitfully be used by students majoring in probability theory and statistics. In the first part, the mathematical framework and some of the history of the subject are described. Many exercises help readers to understand the underlying concepts. In the second part, two case studies are presented exemplifying discriminant analysis and multivariate profile analysis. The introductions of these case studies outline contextual magnetic plasma fusion research. In the third part, an overview of statistical software is given and, in particular, SAS and S-PLUS are discussed. In the last chapter, several datasets with guided exercises, predominantly from the ASDEX Upgrade tokamak, are included and their physical background is concisely described. The book concludes with a list of essential keyword transl...

  5. The Prospects of SAS Interferometry for Detection and Classification (SAS Interferometrie voor Detectie en Classificatie)

    Science.gov (United States)

    2008-10-01

    TNO report TNO-DV 2008 A176, October 2008. Authors: dr. R. van Vossen, B.A.J. Quesson, dr.ir. J.C. Sabel. Report classification: unclassified. Summary: This report presents an overview of the theory and implementation of interferometric SAS processing at TNO. The implementation of the theory in software has been tested on two types of data, simulated and measured. Chapter 3 presents results obtained with simulated data; Chapter 4 ...

  6. Statistical Study to Check the Conformity of Aggregate in Kirkuk City to Requirement of Iraqi Specification

    OpenAIRE

    Ammar Saleem Khazaal; Nizar N Ismeel; Abdel fattah K. Hussein

    2018-01-01

    This research reviews a statistical study to check the conformity of the aggregates (coarse and fine) used in Kirkuk city to the requirements of the Iraqi specifications. The sieve analysis data (215 samples) of aggregates obtained from the National Central Construction Laboratory and the Technical College Construction Laboratory in Kirkuk city were analyzed using the statistical program SAS. The results showed that 5%, 17%, and 18% of fine aggregate samples are passing sieve sizes 10 mm,...

  7. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations

  8. New applications of statistical tools in plant pathology.

    Science.gov (United States)

    Garrett, K A; Madden, L V; Hughes, G; Pfender, W F

    2004-09-01

    The series of papers introduced by this one address a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to appropriately deal with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and discrete data with generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication, the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.
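
    The abstract mentions model selection via Akaike's information criterion; the criterion itself is a one-liner, AIC = 2k - 2 ln L. A Python sketch with made-up log-likelihoods (the candidate model names and numbers are hypothetical, chosen only to show how an overdispersion-aware model can win despite its extra parameter):

```python
def aic(log_likelihood, n_params):
    """Akaike's information criterion: AIC = 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits to overdispersed disease-incidence counts.
candidates = {
    "binomial":      aic(log_likelihood=-412.3, n_params=2),
    "beta-binomial": aic(log_likelihood=-398.7, n_params=3),
}
best = min(candidates, key=candidates.get)
print(best)  # beta-binomial: lower AIC despite one extra parameter
```

    The same comparison works across any set of fitted models, provided the log-likelihoods are computed on the same data.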

  9. Genetic analysis of female fertility traits in South African Holstein cattle

    African Journals Online (AJOL)

    Bobby

    ARC-Livestock Business Division, P/Bag X2, Irene 0062, South Africa. ... Descriptive statistics of all traits were computed using the Proc Means procedure of the Statistical Analysis System (SAS) ... of Australian Holstein-Friesian cattle. Anim.

  10. Agent Based Model in SAS Environment for Rail Transit System Alignment Determination

    Directory of Open Access Journals (Sweden)

    I Made Indradjaja Brunner

    2018-04-01

    A transit system has been proposed for the urban area of Honolulu. One consideration to be determined is the alignment of the transit system. The decision to set the transit alignment will influence which areas will be served, who will benefit, and who will be impacted. Input for the decision is usually gathered through public meetings, where community members are shown a number of maps with pre-set routes. That approach can lead to a rather subjective decision by the community members. This paper discusses the utilization of a grid map in determining the best alignment for a rail transit system in Honolulu, Hawaii, using a more objective approach based on various data derived from thematic maps. Overlaid maps are aggregated into a uniform 0.1-square-mile vector-based grid map system in a GIS environment. The large dataset in the GIS environment is analyzed and manipulated using SAS software. The SAS procedure is applied to select the location of the alignment using a rational and deterministic approach. Grid cells that are superior to the others are selected based on several predefined criteria. The location of the dominant cells indicates a possible transit alignment. The SAS procedure is designed to allow a transient vector called the GUIDE (Grid Unit with Intelligent Directional Expertise) agent to analyze several cells in its vicinity and to move towards the cell with the highest value. Each time the agent lands on a cell, it leaves a mark. The chain of those marks shows the location of the transit alignment. This study shows that the combination of ArcGIS and SAS allows a robust analysis of spatial data and manipulation of its datasets, which can be used to run a simulation mimicking agent-based modelling. This study also opens up further study possibilities by increasing the number of factors analyzed by the agent, as well as by creating a composite value from multiple factors.
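
    The GUIDE agent described above is, at its core, a greedy walker on a scored grid: inspect neighbouring cells, move to the best one, and stop when nothing nearby is superior. A small Python sketch of that idea (the grid scores and the 4-neighbour move rule are illustrative assumptions, not the Honolulu dataset or the study's actual SAS procedure):

```python
def trace_alignment(grid, start):
    """Greedy agent: repeatedly move to the highest-valued unvisited
    4-connected neighbour, marking each visited cell; stop when no
    neighbour improves on the current cell."""
    rows, cols = len(grid), len(grid[0])
    path = [start]
    visited = {start}
    r, c = start
    while True:
        neighbours = [(r + dr, c + dc)
                      for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= r + dr < rows and 0 <= c + dc < cols
                      and (r + dr, c + dc) not in visited]
        if not neighbours:
            break
        nr, nc = max(neighbours, key=lambda p: grid[p[0]][p[1]])
        if grid[nr][nc] <= grid[r][c]:
            break  # no superior cell nearby: the alignment ends here
        r, c = nr, nc
        visited.add((r, c))
        path.append((r, c))
    return path

# Toy suitability scores for a 3x4 grid of cells.
scores = [[1, 2, 3, 2],
          [0, 4, 5, 1],
          [2, 3, 6, 7]]
print(trace_alignment(scores, (0, 0)))
```

    The chain of visited cells plays the role of the marks left by the agent; a fuller implementation would score each cell as a composite of several thematic-map factors, as the study suggests.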

  11. Want independent validation and assurance? Ask for a SAS-70.

    Science.gov (United States)

    Boutin, Christopher C

    2008-08-01

    The AICPA's Statement on Auditing Standards No. 70, Service Organizations, addresses CPA audits of service providers conducted to verify that a provider has adequate controls over its operations. Hospitals should request a SAS-70, the report produced by such an audit, from all of their third-party service providers. SAS-70s can be issued for a specific date or for a six-month period, and they typically consist of three sections: a CPA opinion, a description of controls, and information about the design of the controls.

  12. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  13. Airbus A320 NEO Base Maintenance Concept SAS Operations

    OpenAIRE

    Dafgård, Oscar

    2017-01-01

    This thesis was carried out together with SAS to investigate whether a base maintenance concept with a 36-month interval would be more cost-effective than a 24-month one for their new Airbus A320 NEO fleet. SAS, like every other airline, utilizes its aircraft in its own specific way. Since no two airlines operate in the same way, the maintenance of a given aircraft type cannot be managed in the same way either. The maintenance programme therefore needed to be reviewed in order to be optimized ...

  14. SAS wants to merge Estonian Air with the Latvian company airBaltic / Andres Eilart

    Index Scriptorium Estoniae

    Eilart, Andres

    2007-01-01

    SAS plans to create a new airline by merging Estonian Air with the Latvian company airBaltic. In the author's assessment, the scrapping of Estonian Air's expansion plans and SAS's investments in airBaltic indicate that, in the course of the merger, the Latvian company will "swallow" Estonian Air.

  15. SAS tries to push the state to sell the airline's shares / Lemmi Kann

    Index Scriptorium Estoniae

    Kann, Lemmi

    2007-01-01

    SAS's goal is to obtain a majority stake in both Estonian Air and airBaltic, in order to consolidate its position in the Baltic states. Diagram: Estonian Air's financial indicators and owners. See also: SAS has pursued Estonian Air for a long time. Commentary by Raivo Vare.

  16. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  17. Aging-Resistant Functionalized LDH–SAS/Nitrile-Butadiene Rubber Composites: Preparation and Study of Aging Kinetics/Anti-Aging Mechanism.

    Science.gov (United States)

    Li, Tianxiang; Shi, Zhengren; He, Xianru; Jiang, Ping; Lu, Xiaobin; Zhang, Rui; Wang, Xin

    2018-05-18

    With the aim of improving the anti-aging properties of nitrile-butadiene rubber (NBR), a functional organic filler, namely LDH–SAS, prepared by intercalating 4-amino-benzenesulfonic acid monosodium salt (SAS) into layered double hydroxides (LDHs) through anion exchange, was added to nitrile-butadiene rubber (NBR), giving the NBR/LDH–SAS composites. Successful preparation of LDH–SAS was confirmed by XRD, TGA and FTIR. LDH–SAS was well dispersed in the NBR matrix, owing to its strong interaction with the nitrile group of NBR. The obtained NBR/LDH–SAS composites exhibited excellent thermo-oxidative aging resistance as shown by TGA-DSC. Further investigation by ATR-FTIR indicated that SAS can capture the radical groups, even during the aging process, which largely accounts for the improved aging resistance.

  18. Aging-Resistant Functionalized LDH–SAS/Nitrile-Butadiene Rubber Composites: Preparation and Study of Aging Kinetics/Anti-Aging Mechanism

    Science.gov (United States)

    Li, Tianxiang; Shi, Zhengren; He, Xianru; Jiang, Ping; Lu, Xiaobin; Zhang, Rui

    2018-01-01

    With the aim of improving the anti-aging properties of nitrile-butadiene rubber (NBR), a functional organic filler, namely LDH–SAS, prepared by intercalating 4-amino-benzenesulfonic acid monosodium salt (SAS) into layered double hydroxides (LDHs) through anion exchange, was added to nitrile-butadiene rubber (NBR), giving the NBR/LDH–SAS composites. Successful preparation of LDH–SAS was confirmed by XRD, TGA and FTIR. LDH–SAS was well dispersed in the NBR matrix, owing to its strong interaction with the nitrile group of NBR. The obtained NBR/LDH–SAS composites exhibited excellent thermo-oxidative aging resistance as shown by TGA-DSC. Further investigation by ATR-FTIR indicated that SAS can capture the radical groups, even during the aging process, which largely accounts for the improved aging resistance. PMID:29783656

  19. Rweb:Web-based Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Jeff Banfield

    1999-03-01

    Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW accessible data sets, so you may run Rweb on your own data. Rweb can provide a four-window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support Javascript. There is also a set of point-and-click modules under development for use in introductory statistics courses.

  20. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus...... and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge...... efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications....

  1. Anatomy of the TAMA SAS seismic attenuation system

    International Nuclear Information System (INIS)

    Marka, Szabolcs; Takamori, Akiteru; Ando, Masaki; Bertolini, Alessandro; Cella, Giancarlo; DeSalvo, Riccardo; Fukushima, Mitsuhiro; Iida, Yukiyoshi; Jacquier, Florian; Kawamura, Seiji; Nishi, Yuhiko; Numata, Kenji; Sannibale, Virginio; Somiya, Kentaro; Takahashi, Ryutaro; Tariq, Hareem; Tsubono, Kimio; Ugas, Jose; Viboud, Nicolas; Wang Chenyang; Yamamoto, Hiroaki; Yoda, Tatsuo

    2002-01-01

    The TAMA SAS seismic attenuation system was developed to provide the extremely high level of seismic isolation required by the next generation of interferometric gravitational wave detectors to achieve the desired sensitivity at low frequencies. Our aim was to provide good performance at frequencies above ∼10 Hz, while utilizing only passive subsystems in the sensitive frequency band of the TAMA interferometric gravitational wave detectors. The only active feedback is relegated below 6 Hz, where it is used to damp the rigid-body resonances of the attenuation chain. Simulations, based on subsystem performance characterizations, indicate that the system can achieve an rms mirror residual motion of a few tens of nanometres. We will give a brief overview of the subsystems and point out some of the characterization results supporting our claims of achieved performance. SAS is a passive, UHV-compatible and low-cost system. It is likely that extremely sensitive experiments in other fields will also profit from our study.
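
    The passive-attenuation claim rests on a standard result: well above its resonance frequency f0, each pendulum-like stage suppresses transmitted motion by roughly (f0/f)^2, so the stages of a chain multiply. A rough Python sketch under that idealization (the stage frequencies are hypothetical; real chains have cross-couplings and saturation floors that this ignores):

```python
def chain_attenuation(f, stage_resonances):
    """Idealized high-frequency attenuation of a chain of passive stages:
    well above each resonance f0, a stage contributes a factor ~ (f0/f)^2."""
    h = 1.0
    for f0 in stage_resonances:
        h *= (f0 / f) ** 2
    return h

# Hypothetical 3-stage chain with 0.5 Hz modes, evaluated at 10 Hz.
print(chain_attenuation(10.0, [0.5, 0.5, 0.5]))
```

    Even this crude estimate shows why a few passive stages can reach many orders of magnitude of isolation at 10 Hz while leaving active damping to the sub-6-Hz rigid-body modes.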

  2. Modification of the SAS4A Safety Analysis Code for Integration with the ADAPT Discrete Dynamic Event Tree Framework.

    Energy Technology Data Exchange (ETDEWEB)

    Jankovsky, Zachary Kyle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Denman, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
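
    The control flow of a discrete dynamic event tree driver is conceptually simple: at each branching time, fork the running simulation once per sampled parameter value and carry the branch probability along. A toy Python sketch of that loop (the `simulate` stand-in and the branch probabilities are invented for illustration; this is not the ADAPT or SAS4A interface):

```python
def run_ddet(simulate, branch_points):
    """Tiny DDET driver: at each branching time, fork every live branch
    once per listed parameter value, multiplying branch probabilities."""
    frontier = [({"t": 0.0}, 1.0)]  # (state, probability) pairs
    for t, options in branch_points:
        nxt = []
        for state, p in frontier:
            for value, p_branch in options:
                s = simulate(dict(state), t, value)
                nxt.append((s, p * p_branch))
        frontier = nxt
    return frontier

def simulate(state, t, value):
    # Stand-in for advancing a transient: accumulate a 'damage' score.
    state["t"] = t
    state["damage"] = state.get("damage", 0.0) + value
    return state

# Two binary branchings -> four end states whose probabilities sum to 1.
leaves = run_ddet(simulate, [(10.0, [(0.0, 0.9), (1.0, 0.1)]),
                             (20.0, [(0.0, 0.5), (2.0, 0.5)])])
print(len(leaves), sum(p for _, p in leaves))
```

    The need to copy and restart the simulator state mid-transient is exactly the capability the report says SAS4A lacked as received.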

  3. C-C1-04: How to Win Friends and Influence People with the SAS Output Delivery System

    Science.gov (United States)

    Tolbert, William

    2010-01-01

    Background and Aims: Long-time SAS users remember the days when SAS output was embarrassingly ugly. Version 7 saw the introduction of the Output Delivery System (ODS). ODS has matured into a very capable subsystem that gives users powerful reporting options. This presentation will highlight useful features and outline a macro-based system for handling multiple ODS destinations simultaneously. Nowadays there is no excuse for ugly SAS output! When building reports, SAS users should think about the needs of those using the reports. Some people just want to review frequency tables, and are happy to do so on a monitor. Others want to be able to print data for review in a meeting. And, there are always those that want to work with the data in a spreadsheet. Consider the ideal formats for each of the users outlined above. For the casual data browser, HTML output is ideal. For printing, PDF is preferred. And for the additional analysis, Excel is a popular option. With ODS, we can meet all of these needs. Methods: Because ODS permits opening multiple output destinations simultaneously, a single procedure can be used to generate data in HTML, PDF, and Excel at once. The presentation will demonstrate the following: basic ODS syntax for HTML, PDF, and Excel output; a custom HTML table of contents; using the ExcelXP tagset for multi-tab spreadsheets; a custom macro for managing multiple ODS destinations simultaneously; simple PROC Template code for easy customization; and techniques for consistent output from multiple platforms. Results: The techniques outlined here have been well-received in a variety of business reporting environments. Conclusions: The SAS ODS provides a wide array of reporting options. Don’t limit yourself to just one type of output.
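
    The point of ODS here is that one procedure can feed several open destinations at once. A loose Python analogue of that pattern, rendering a single frequency table to both HTML and CSV "destinations" in one pass (illustrative only; it mimics the multi-destination idea, not the ODS API):

```python
import csv
import io

def report(freq, destinations):
    """Write one frequency table to every open 'destination' at once,
    loosely mimicking SAS ODS with several destinations open."""
    out = {}
    if "html" in destinations:
        rows = "".join(f"<tr><td>{k}</td><td>{v}</td></tr>"
                       for k, v in freq.items())
        out["html"] = ("<table><tr><th>Level</th><th>Count</th></tr>"
                       f"{rows}</table>")
    if "csv" in destinations:
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["Level", "Count"])
        writer.writerows(freq.items())
        out["csv"] = buf.getvalue()
    return out

reports = report({"A": 12, "B": 30}, ["html", "csv"])
print(sorted(reports))  # ['csv', 'html']
```

    As with ODS, the table is computed once and each consumer gets it in the format they prefer.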

  4. Cloning, overexpression, purification, crystallization and preliminary X-ray diffraction analysis of an inositol monophosphatase family protein (SAS2203) from Staphylococcus aureus MSSA476

    International Nuclear Information System (INIS)

    Bhattacharyya, Sudipta; Dutta, Debajyoti; Ghosh, Ananta Kumar; Das, Amit Kumar

    2011-01-01

    The cloning, overexpression, purification, crystallization and preliminary X-ray diffraction analysis of an inositol monophosphatase family protein (SAS2203) from S. aureus MSSA476 is reported. The gene product of the sas2203 ORF of Staphylococcus aureus MSSA476 encodes a protein of 30 kDa molecular weight with a high sequence resemblance (29% identity) to tetrameric inositol monophosphatase from Thermotoga maritima. The protein was cloned, expressed, purified to homogeneity and crystallized. Crystals appeared in several conditions and good diffraction-quality crystals were obtained from 0.2 M Li₂SO₄, 20% PEG 3350, 0.1 M HEPES pH 7.0 using the sitting-drop vapour-diffusion method. A complete diffraction data set was collected to 2.6 Å resolution using a Rigaku MicroMax-007 HF Cu Kα X-ray generator and a Rigaku R-AXIS IV++ detector. The diffraction data were consistent with the orthorhombic space group P2₁2₁2₁, with unit-cell parameters a = 49.98, b = 68.35, c = 143.79 Å, α = β = γ = 90°, and the crystal contained two molecules in the asymmetric unit.

  5. An implementation of SAS® in an environmental information system

    International Nuclear Information System (INIS)

    James, T.; Zygmunt, B.C.

    1994-01-01

    This paper describes a software environmental database information system that uses SAS® to process data and ORACLE® as the relational database management system (RDBMS). The hardware includes a network of UNIX-based servers and workstations. The relational database consists of large tables containing environmental measurement data, as well as other smaller tables with reference, metadata and internal administrative information. The data come in a variety of formats and must be converted to conform to the system's standards. SAS/ACCESS® and PROC SQL are used extensively in the data processing.
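
    The SAS/ACCESS-plus-PROC SQL pattern, querying an RDBMS table and summarizing it in SQL, can be sketched with Python's built-in sqlite3 module (the schema, site codes and values below are invented stand-ins for the environmental measurement tables described above):

```python
import sqlite3

# A throwaway in-memory table standing in for a large measurement table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurements (site TEXT, analyte TEXT, value REAL)")
con.executemany("INSERT INTO measurements VALUES (?, ?, ?)",
                [("W-01", "nitrate", 3.2),
                 ("W-01", "nitrate", 4.1),
                 ("W-02", "nitrate", 1.7)])

# The kind of per-site summary a PROC SQL step would produce.
rows = con.execute("""SELECT site, COUNT(*), AVG(value)
                      FROM measurements
                      GROUP BY site ORDER BY site""").fetchall()
for site, n, mean in rows:
    print(site, n, round(mean, 2))
```

    The SQL itself is essentially what a PROC SQL step would contain; only the host environment differs.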

  6. C-C1-04: How to Win Friends and Influence People with the SAS Output Delivery System

    OpenAIRE

    Tolbert, William

    2010-01-01

    Background and Aims: Long-time SAS users remember the days when SAS output was embarrassingly ugly. Version 7 saw the introduction of the Output Delivery System (ODS). ODS has matured into a very capable subsystem that gives users powerful reporting options. This presentation will highlight useful features and outline a macro-based system for handling multiple ODS destinations simultaneously. Nowadays there is no excuse for ugly SAS output! When building reports, SAS users should think about ...

  7. Social Anxiety Scale for Adolescents (SAS-A): Measuring Social Anxiety among Finnish Adolescents

    Science.gov (United States)

    Ranta, Klaus; Junttila, Niina; Laakkonen, Eero; Uhmavaara, Anni; La Greca, Annette M.; Niemi, Paivi M.

    2012-01-01

    The aim of this study was to investigate symptoms of social anxiety and the psychometric properties of the "Social Anxiety Scale for Adolescents" (SAS-A) among Finnish adolescents, 13-16 years of age. Study 1 (n = 867) examined the distribution of SAS-A scores according to gender and age, and the internal consistency and factor structure…

  8. Does age matter? Controls on the spatial organization of age and life expectancy in hillslopes, and implications for transport parameterization using rSAS

    Science.gov (United States)

    Kim, M.; Harman, C. J.; Troch, P. A. A.

    2017-12-01

    Hillslopes have been extensively explored as a natural fundamental unit for spatially-integrated hydrologic models. Much of this attention has focused on their use in predicting the quantity of discharge, but hillslope-based models can potentially be used to predict the composition of discharge (in terms of age and chemistry) if they can be parameterized in terms of measurable physical properties. Here we present advances in the use of rank StorAge Selection (rSAS) functions to parameterize transport through hillslopes. These functions provide a mapping between the distribution of water ages in storage and in outfluxes, in terms of a probability distribution over storage. It has previously been shown that rSAS functions are related to the relative partitioning and arrangement of flow pathways (and variabilities in that arrangement), while separating out the effect of changes in the overall rate of fluxes in and out. This suggests that rSAS functions should have a connection to the internal organization of flow paths in a hillslope. Using a combination of numerical modeling and theoretical analysis we examined: first, the controls of physical properties on the internal spatial organization of age (time since entry) and life expectancy (time to exit), and the emergent transit time distributions and rSAS functions; second, the possible parameterization of the rSAS function using those physical properties. The numerical modeling results showed a clear dependence of the rSAS function forms on the physical properties, and relations between the internal organization and the rSAS functions. For different rates of exponential decline of saturated hydraulic conductivity with depth, the spatial organization of life expectancy varied dramatically and determined the rSAS function forms, while the organization of age showed fewer qualitative differences. Analytical solutions predicting this spatial organization and the resulting rSAS functions were derived for simplified systems. These ...

  9. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  10. Adaptive Tests of Significance Using Permutations of Residuals with R and SAS

    CERN Document Server

    O'Gorman, Thomas W

    2012-01-01

    Provides the tools needed to successfully perform adaptive tests across a broad range of datasets Adaptive Tests of Significance Using Permutations of Residuals with R and SAS illustrates the power of adaptive tests and showcases their ability to adjust the testing method to suit a particular set of data. The book utilizes state-of-the-art software to demonstrate the practicality and benefits for data analysis in various fields of study. Beginning with an introduction, the book moves on to explore the underlying concepts of adaptive tests, including:Smoothing methods and normalizing transforma

  11. ODM Data Analysis-A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data.

    Science.gov (United States)

    Brix, Tobias Johannes; Bruland, Philipp; Sarfraz, Saad; Ernsting, Jan; Neuhaus, Philipp; Storck, Michael; Doods, Justin; Ständer, Sonja; Dugas, Martin

    2018-01-01

    A required step for presenting results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow to accomplish this task is to export the clinical data from the electronic data capture system used and import it into statistical software like SAS software or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement and evaluate an open source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data. The system requires clinical data in the CDISC Operational Data Model format. After a file is uploaded, its syntax and the data type conformity of the collected data are validated. The completeness of the study data is determined and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality. The system is implemented as an open source web application (available at https://odmanalysis.uni-muenster.de) and also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is only stored in the application while the calculations are performed, which is compliant with data protection requirements. Analysis times are below half an hour, even for larger studies with over 6000 subjects. Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.
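
    The two core steps described here — type/completeness validation followed by per-item descriptive statistics — can be sketched as follows (a toy illustration with hypothetical item names, not the ODM Data Analysis code, which parses CDISC ODM XML):

```python
from statistics import mean, stdev

# hypothetical mini-schema and study records (the real tool reads ODM files)
schema = {"age": int, "weight_kg": float, "sex": str}
records = [
    {"age": 34, "weight_kg": 70.2, "sex": "F"},
    {"age": 58, "weight_kg": 81.0, "sex": "M"},
    {"age": 41, "sex": "F"},                      # weight missing
]

def validate(records, schema):
    """Return per-item completeness and a list of type violations."""
    completeness, violations = {}, []
    for item, typ in schema.items():
        present = [r for r in records if item in r]
        completeness[item] = len(present) / len(records)
        violations += [(item, r) for r in present if not isinstance(r[item], typ)]
    return completeness, violations

def describe(records, item):
    """Basic descriptive statistics for one numeric item."""
    values = [r[item] for r in records if item in r]
    return {"n": len(values), "mean": mean(values), "sd": stdev(values)}

completeness, violations = validate(records, schema)
print(completeness["weight_kg"], len(violations), describe(records, "age")["n"])
```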

  12. Clinical SAS programming in India: A study of industry needs versus wants.

    Science.gov (United States)

    Ananthakrishnan, Nithiyanandhan

    2014-07-01

    The clinical SAS (www.sas.com) programming industry in India has seen rapid growth in the last decade, and the trend seems set to continue for the next couple of years due to the cost advantage and the availability of skilled labor. On one side, the industry's needs are focused on shorter execution times, high margins, segmented tasks and the delivery of high-quality output with minimal oversight. On the other side, due to the increased demand for skilled resources, the wants of the programmers have shifted toward diversified exposure, unsustainable wage inflation driven by multiple opportunities, and generally high expectations around career progression. If the industry's needs do not match the programmers' wants, or vice versa, then the current year-on-year growth may start to slow or even go into decline. This paper is intended to identify the gap between wants and needs and puts forward some suggestions, for both sides, on ways to change the equation to benefit all. Questionnaires on similar themes were created for managers and programmers working in the clinical SAS programming industry and administered online to collect their perspectives. Their views are compared for each theme and presented as results. Two surveys were created in www.surveymonkey.com. Management: https://www.surveymonkey.com/s/SAS_India_managment_needvswant_survey. Programmer: https://www.surveymonkey.com/s/SAS_India_programmer_needvswant_survey. Bar charts and pie charts were used to show the segmentation of the collected data. In conclusion, the paper seeks to highlight the future industry direction and the skill set that existing programmers need to have in order to sustain the momentum, remain competitive, and contribute to the future pipeline and the development of the profession in India.

  13. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    Science.gov (United States)

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: first, a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; second, a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, where the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation approaches, considering a uniform or the nonparametric maximum likelihood estimation (NPMLE) distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 AACR. ©2016 American Association for Cancer Research.
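
    The three sensitivity-analysis assignments can be sketched in a few lines (an illustrative helper, not the paper's SAS/R code):

```python
def assign_progression(intervals, rule="midpoint"):
    """Assign a single progression time inside each censoring interval (lo, hi).

    lo = last exam without progression, hi = first exam showing progression.
    """
    pick = {"midpoint": lambda lo, hi: (lo + hi) / 2,
            "upper":    lambda lo, hi: hi,    # the standard analysis
            "lower":    lambda lo, hi: lo}[rule]
    return [pick(lo, hi) for lo, hi in intervals]

scans = [(3.0, 6.0), (6.0, 9.0), (0.0, 3.0)]   # months between radiologic exams
print(assign_progression(scans, "midpoint"))    # [4.5, 7.5, 1.5]
```

    Comparing survival estimates across the three rules shows how sensitive the progression-free survival curve is to where, within the exam interval, progression is assumed to have occurred.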

  14. Centriolar SAS-7 acts upstream of SPD-2 to regulate centriole assembly and pericentriolar material formation

    Science.gov (United States)

    Sugioka, Kenji; Hamill, Danielle R; Lowry, Joshua B; McNeely, Marie E; Enrick, Molly; Richter, Alyssa C; Kiebler, Lauren E; Priess, James R; Bowerman, Bruce

    2017-01-01

    The centriole/basal body is a eukaryotic organelle that plays essential roles in cell division and signaling. Among five known core centriole proteins, SPD-2/Cep192 is the first recruited to the site of daughter centriole formation and regulates the centriolar localization of the other components in C. elegans and in humans. However, the molecular basis for SPD-2 centriolar localization remains unknown. Here, we describe a new centriole component, the coiled-coil protein SAS-7, as a regulator of centriole duplication, assembly and elongation. Intriguingly, our genetic data suggest that SAS-7 is required for daughter centrioles to become competent for duplication, and for mother centrioles to maintain this competence. We also show that SAS-7 binds SPD-2 and regulates SPD-2 centriolar recruitment, while SAS-7 centriolar localization is SPD-2-independent. Furthermore, pericentriolar material (PCM) formation is abnormal in sas-7 mutants, and the PCM-dependent induction of cell polarity that defines the anterior-posterior body axis frequently fails. We conclude that SAS-7 functions at the earliest step in centriole duplication yet identified and plays important roles in the orchestration of centriole and PCM assembly. DOI: http://dx.doi.org/10.7554/eLife.20353.001 PMID:28092264

  15. Archive of Census Related Products (ACRP): 1980 SAS Transport Files

    Data.gov (United States)

    National Aeronautics and Space Administration — The 1980 SAS Transport Files portion of the Archive of Census Related Products (ACRP) contains housing and population demographics from the 1980 Summary Tape File...

  16. The Development of Student’s Activity Sheets (SAS) Based on Multiple Intelligences and Problem-Solving Skills Using Simple Science Tools

    Science.gov (United States)

    Wardani, D. S.; Kirana, T.; Ibrahim, M.

    2018-01-01

    The aim of this research is to produce SAS based on MI and problem-solving skills using simple science tools that are suitable to be used by elementary school students. The feasibility of SAS is evaluated based on its validity, practicality, and effectiveness. The completion of Lesson Plan (LP) implementation and the students' activities are the indicators of SAS practicality. The effectiveness of SAS is measured by indicators of increased learning outcomes and problem-solving skills. The development of SAS follows the 4-D (define, design, develop, and disseminate) phases. However, this study was carried out only up to the third stage (develop). The written SAS was then validated through expert evaluation by two science experts before it was tested on the target students. The try-out of SAS used a one-group pre-test and post-test design. The result of this research shows that SAS is valid, with a “good” category. In addition, SAS is considered practical, as seen from the increase in student activity at each meeting and in LP implementation. Moreover, it was considered effective due to the significant difference between the pre-test and post-test results of the learning outcomes and problem-solving skill tests. Therefore, SAS is feasible to be used in learning.

  17. Statistics and finance an introduction

    CERN Document Server

    Ruppert, David

    2004-01-01

    This textbook emphasizes the applications of statistics and probability to finance. Students are assumed to have had a prior course in statistics, but no background in finance or economics. The basics of probability and statistics are reviewed and more advanced topics in statistics, such as regression, ARMA and GARCH models, the bootstrap, and nonparametric regression using splines, are introduced as needed. The book covers the classical methods of finance such as portfolio theory, CAPM, and the Black-Scholes formula, and it introduces the somewhat newer area of behavioral finance. Applications and use of MATLAB and SAS software are stressed. The book will serve as a text in courses aimed at advanced undergraduates and masters students in statistics, engineering, and applied mathematics as well as quantitatively oriented MBA students. Those in the finance industry wishing to know more statistics could also use it for self-study. David Ruppert is the Andrew Schultz, Jr. Professor of Engineering, School of Oper...

  18. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to less well-known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper, and Tiku. Thanks to the component-based design and the usage of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. The Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit, and the code validation.
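
    As an illustration of the simplest of these tests, the two-sample Kolmogorov-Smirnov statistic can be computed from the empirical CDFs alone (a Python sketch, not the Toolkit's implementation):

```python
from bisect import bisect_right

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: sup |F_x(v) - F_y(v)|."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in xs + ys:                        # the sup is attained at a data point
        fx = bisect_right(xs, v) / len(xs)   # empirical CDF of x at v
        fy = bisect_right(ys, v) / len(ys)
        d = max(d, abs(fx - fy))
    return d

print(ks_statistic([1, 2, 3], [1, 2, 3]))   # 0.0 for identical samples
print(ks_statistic([1, 2], [5, 6]))         # 1.0 for disjoint samples
```

    The more powerful tests named above (Anderson-Darling, Cramer-von Mises) replace this single supremum with weighted integrals of the CDF difference, which is why they are more sensitive in the distribution tails.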

  19. Statistical considerations on safety analysis

    International Nuclear Information System (INIS)

    Pal, L.; Makai, M.

    2004-01-01

    The authors have investigated the statistical methods applied to the safety analysis of nuclear reactors and arrived at alarming conclusions: a series of calculations with the generally appreciated safety code ATHLET was carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, the authors came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Another formerly unnoticed conclusion of the previous ATHLET calculations is that certain innocent-looking parameters (like the wall roughness factor, the number of bubbles per unit volume, the number of droplets per unit volume) can considerably influence such output parameters as water levels. The authors are concerned with the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, the authors suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined because measured data have errors and calculated data are often obtained from a more or less accurate model. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all the possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which the

  20. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly-regarded `Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  1. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  2. Unsteady Simulations of the Flow in a Channel Flow and a Ventilated Room Using the SST-SAS Model

    DEFF Research Database (Denmark)

    Davidson, Lars; Nielsen, Peter V.

    The SAS (Scale-Adaptive Simulation) model was invented by Menter and his co-workers. The idea behind the SST-SAS model is to add an additional production term - the SAS term - in the ω equation which is sensitive to resolved (i.e. unsteady) fluctuations. In regions where the flow is on the limit...

  3. PS3-21: Extracting Utilization Data from Clarity into VDW Using Oracle and SAS

    Science.gov (United States)

    Chimmula, Srivardhan

    2013-01-01

    Background/Aims The purpose of the presentation is to demonstrate how we use SAS and Oracle to load the VDW_Utilization, VDW_DX, and VDW_PX tables from Clarity at the Kaiser Permanente Northern California (KPNC) Division of Research (DOR) site. Methods DOR uses the best of Oracle PL/SQL and SAS capabilities in building Extract, Transform and Load (ETL) processes. These processes extract patient encounter, diagnosis, and procedure data from Teradata-based Clarity. The data is then transformed to fit HMORN's VDW definitions of the tables. This data is then loaded into the Oracle-based VDW tables on DOR's research database, and finally a copy of each table is also created as a SAS dataset. Results DOR builds robust and efficient ETL processes that refresh the VDW Utilization table on a monthly basis, processing millions of records/observations. The ETL processes have the capability to identify daily changes in Clarity and update the VDW tables on a daily basis. Conclusions KPNC DOR combines the best of both the Oracle and SAS worlds to build ETL processes that load the data into the VDW Utilization tables efficiently.
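
    A miniature stand-in for one such extract-transform-load step (illustrative column names and codes, not the actual Clarity or VDW schemas) using SQLite in place of Oracle/Teradata:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# "extract": a Clarity-like source table of encounters
con.execute("CREATE TABLE clarity_enc (pat_id TEXT, enc_date TEXT, enc_type_c INT)")
con.executemany("INSERT INTO clarity_enc VALUES (?, ?, ?)",
                [("p1", "2012-01-05", 101), ("p2", "2012-02-11", 102)])

# "transform" and "load": recode the source encounter type into a VDW-style
# category (codes here are hypothetical, not the real Clarity/VDW mapping)
con.execute("""
    CREATE TABLE vdw_utilization AS
    SELECT pat_id            AS mrn,
           enc_date          AS adate,
           CASE enc_type_c WHEN 101 THEN 'AV' ELSE 'IP' END AS enctype
    FROM clarity_enc
""")
rows = con.execute("SELECT mrn, enctype FROM vdw_utilization ORDER BY mrn").fetchall()
print(rows)   # [('p1', 'AV'), ('p2', 'IP')]
```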

  4. A SAS-macro for estimation of the cumulative incidence using Poisson regression

    DEFF Research Database (Denmark)

    Waltoft, Berit Lindum

    2009-01-01

    the hazard rates, and the hazard rates are often estimated by the Cox regression. This procedure may not be suitable for large studies due to limited computer resources. Instead one uses Poisson regression, which approximates the Cox regression. Rosthøj et al. presented a SAS-macro for the estimation...... of the cumulative incidences based on the Cox regression. I present the functional form of the probabilities and variances when using piecewise constant hazard rates and a SAS-macro for the estimation using Poisson regression. The use of the macro is demonstrated through examples and compared to the macro presented...

  5. SEPARATION ANXIETY SYNDROME (SAS) IN DOGS FROM FERNANDOPOLIS, SP, REFERRED TO UNICASTELO VETERINARY HOSPITAL SÍNDROME DA ANSIEDADE DE SEPARAÇÃO (SAS) EM CÃES ATENDIDOS NO HOSPITAL VETERINÁRIO DA UNICASTELO, FERNANDÓPOLIS, SP

    Directory of Open Access Journals (Sweden)

    Adriana Alonso Novais

    2010-04-01

    Full Text Available The separation anxiety syndrome (SAS) is defined by a group of altered behaviors shown by dogs when they are left alone, constituting one of the most common behavior problems in this species. The basic clinical signs of SAS are the following: distress vocalization (whining, barking, howling), destructiveness and house soiling. SAS reduces the animal's quality of life and is a frequent cause of abandonment and euthanasia of these dogs. The goal of this research was to verify the occurrence of SAS in dogs from Fernandopolis, SP, referred to the veterinary hospital of Unicastelo in the period between December 2007 and December 2008. Seventy-five animals were studied, comprising 30 (40%) adult males, 9 (12%) young males, 30 (40%) adult females and 6 (8%) young females. The dogs were evaluated through data given by the owners, according to a behavior questionnaire. From the general studied population, 35 dogs (47%) showed distress vocalization, 29 (39%) showed micturition at inappropriate places, 17 (23%) showed defecation at inappropriate places and 22 (29%) showed destructiveness during periods of the owner's absence. From the obtained results we may conclude that SAS occurred in 68% of the studied dogs.

    KEY WORDS: Dogs, animal behavior, behavior disturbances, SAS.

    The separation anxiety syndrome (SAS) is defined as the set of behaviors exhibited by dogs when they are left alone. It is considered one of the most common behavioral problems of the species. The basic clinical signs of SAS are excessive vocalization, destruction of objects, and defecation and urination in inappropriate places, impairing the animals' quality of life. As one of the causes of abandonment and euthanasia of these animals, SAS was investigated in dogs seen at the Veterinary Hospital of Unicastelo in Fernandópolis, SP, from December 2007 to December 2008, through a survey of 75

  6. Application of descriptive statistics in analysis of experimental data

    OpenAIRE

    Mirilović Milorad; Pejin Ivana

    2008-01-01

    Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass phenomena. In fact, statistics comprise a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...

  7. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.
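
    The first day's material — descriptive statistics and two-sample inference — can be illustrated with a Welch t statistic computed from first principles (a sketch for orientation, not course material):

```python
from statistics import mean, variance

def welch_t(x, y):
    """Two-sample t statistic allowing unequal variances (Welch)."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    return (mean(x) - mean(y)) / (vx + vy) ** 0.5

# hypothetical measurements from two experimental groups
a = [5.1, 4.9, 5.3, 5.0]
b = [4.2, 4.4, 4.1, 4.5]
print(round(welch_t(a, b), 2))   # 6.2
```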

  8. Competent statistical programmer: Need of business process outsourcing industry

    Science.gov (United States)

    Khan, Imran

    2014-01-01

    Over the last two decades, Business Process Outsourcing (BPO) has evolved into a mature practice. India is seen as a preferred destination for pharmaceutical outsourcing because of its cost advantage. Among biometrics outsourcing services, statistical programming and analysis require a very niche skill set for service delivery. The demand and supply ratios are imbalanced due to a high churn rate and a limited supply of competent programmers. The industry is moving from task delivery to ownership and accountability. The paradigm shift from outsourcing to consulting is triggering the need for competent statistical programmers. Programmers should be trained in technical, analytical, problem-solving, decision-making and soft skills, as customer expectations are changing from task delivery to accountability for the project. This paper highlights the common issues the SAS programming service industry is facing and the skills programmers need to develop to cope with these changes. PMID:24987578

  9. Competent statistical programmer: Need of business process outsourcing industry.

    Science.gov (United States)

    Khan, Imran

    2014-07-01

    Over the last two decades, Business Process Outsourcing (BPO) has evolved into a mature practice. India is seen as a preferred destination for pharmaceutical outsourcing because of its cost advantage. Among biometrics outsourcing services, statistical programming and analysis require a very niche skill set for service delivery. The demand and supply ratios are imbalanced due to a high churn rate and a limited supply of competent programmers. The industry is moving from task delivery to ownership and accountability. The paradigm shift from outsourcing to consulting is triggering the need for competent statistical programmers. Programmers should be trained in technical, analytical, problem-solving, decision-making and soft skills, as customer expectations are changing from task delivery to accountability for the project. This paper highlights the common issues the SAS programming service industry is facing and the skills programmers need to develop to cope with these changes.

  10. Competent statistical programmer: Need of business process outsourcing industry

    Directory of Open Access Journals (Sweden)

    Imran Khan

    2014-01-01

    Full Text Available Over the last two decades, Business Process Outsourcing (BPO) has evolved into a mature practice. India is seen as a preferred destination for pharmaceutical outsourcing because of its cost advantage. Among biometrics outsourcing services, statistical programming and analysis require a very niche skill set for service delivery. The demand and supply ratios are imbalanced due to a high churn rate and a limited supply of competent programmers. The industry is moving from task delivery to ownership and accountability. The paradigm shift from outsourcing to consulting is triggering the need for competent statistical programmers. Programmers should be trained in technical, analytical, problem-solving, decision-making and soft skills, as customer expectations are changing from task delivery to accountability for the project. This paper highlights the common issues the SAS programming service industry is facing and the skills programmers need to develop to cope with these changes.

  11. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro

  12. Using SAS PROC MCMC for Item Response Theory Models

    Science.gov (United States)

    Ames, Allison J.; Samonte, Kelli

    2015-01-01

    Interest in using Bayesian methods for estimating item response theory models has grown at a remarkable rate in recent years. This attentiveness to Bayesian estimation has also inspired a growth in available software such as WinBUGS, R packages, BMIRT, MPLUS, and SAS PROC MCMC. This article intends to provide an accessible overview of Bayesian…
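
    A minimal sketch of the idea behind MCMC estimation for IRT: a random-walk Metropolis sampler for a single examinee's ability under a Rasch-type model (hypothetical data and prior; real PROC MCMC analyses sample item parameters too):

```python
import math, random

random.seed(7)

def loglik(theta, b, resp):
    """Rasch-type log-likelihood of responses resp to items with difficulties b."""
    ll = 0.0
    for bi, r in zip(b, resp):
        p = 1.0 / (1.0 + math.exp(-(theta - bi)))
        ll += math.log(p if r else 1.0 - p)
    return ll

# hypothetical data: 20 items, examinee answers the easier items correctly
b = [-2.0, -1.0, 0.0, 1.0, 2.0] * 4
resp = [1 if bi < 0.5 else 0 for bi in b]

# random-walk Metropolis with a N(0, 2^2) prior on ability theta
theta, draws = 0.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.5)
    logr = (loglik(prop, b, resp) - prop ** 2 / 8) \
         - (loglik(theta, b, resp) - theta ** 2 / 8)
    if math.log(random.random()) < logr:
        theta = prop
    draws.append(theta)

post_mean = sum(draws[1000:]) / len(draws[1000:])   # discard burn-in
print(round(post_mean, 2))
```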

  13. The Music Therapy Session Assessment Scale (MT-SAS): Validation of a new tool for music therapy process evaluation.

    Science.gov (United States)

    Raglio, Alfredo; Gnesi, Marco; Monti, Maria Cristina; Oasi, Osmano; Gianotti, Marta; Attardo, Lapo; Gontero, Giulia; Morotti, Lara; Boffelli, Sara; Imbriani, Chiara; Montomoli, Cristina; Imbriani, Marcello

    2017-11-01

    Music therapy (MT) interventions are aimed at creating and developing a relationship between patient and therapist. However, there is a lack of validated observational instruments to consistently evaluate the MT process. The purpose of this study was the validation of Music Therapy Session Assessment Scale (MT-SAS), designed to assess the relationship between therapist and patient during active MT sessions. Videotapes of a single 30-min session per patient were considered. A pilot study on the videotapes of 10 patients was carried out to help refine the items, define the scoring system and improve inter-rater reliability among the five raters. Then, a validation study on 100 patients with different clinical conditions was carried out. The Italian MT-SAS was used throughout the process, although we also provide an English translation. The final scale consisted of 7 binary items accounting for eye contact, countenance, and nonverbal and sound-music communication. In the pilot study, raters were found to share an acceptable level of agreement in their assessments. Explorative factorial analysis disclosed a single homogeneous factor including 6 items (thus supporting an ordinal total score), with only the item about eye contact being unrelated to the others. Moreover, the existence of 2 different archetypal profiles of attuned and disattuned behaviours was highlighted through multiple correspondence analysis. As suggested by the consistent results of 2 different analyses, MT-SAS is a reliable tool that globally evaluates sonorous-musical and nonverbal behaviours related to emotional attunement and empathetic relationship between patient and therapist during active MT sessions. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Statistical analysis of dynamic parameters of the core

    International Nuclear Information System (INIS)

    Ionov, V.S.

    2007-01-01

    Transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored by statistical analysis tools. The transients have sufficient duration, a few channels for chamber currents and reactivity, and some channels for technological parameters. From these values the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and the values of the measured dynamic parameters were restored as a result of the analysis. The following mathematical means of statistical analysis were used: approximation (A), filtration (F), rejection (R), estimation of the parameters of descriptive statistics (DSP), correlation characteristics (KK), regression analysis (KP), prognosis (P), and statistical criteria (SC). The calculation procedures were implemented in the MATLAB language. The sources of methodical and statistical errors are presented: inadequacy of the model, precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the technique for processing the registered data, etc. Examples of the results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered (Authors)
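
    One such regression step can be sketched for the inverse period: for an exponential transient N(t) = N0·exp(t/T), the least-squares slope of ln N versus t estimates 1/T (an illustrative reconstruction, not the authors' MATLAB procedures):

```python
import math

def inverse_period(times, counts):
    """Least-squares slope of ln(counts) vs time, i.e. the inverse period 1/T."""
    ly = [math.log(c) for c in counts]
    n = len(times)
    mx, my = sum(times) / n, sum(ly) / n
    sxy = sum((t - mx) * (y - my) for t, y in zip(times, ly))
    sxx = sum((t - mx) ** 2 for t in times)
    return sxy / sxx

# synthetic chamber-current transient N(t) = 100 * exp(0.2 t)
ts = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ns = [100.0 * math.exp(0.2 * t) for t in ts]
print(round(inverse_period(ts, ns), 6))   # 0.2
```

    With noisy channel data, the filtration and rejection steps listed above would precede this fit to suppress outliers before the regression.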

  15. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Full Text Available Parameter estimation with confidence intervals and statistical hypothesis testing are used in statistical analysis to draw conclusions about a population from a sample. The case study presented in this paper aims to highlight the importance of the sample size used in the study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. Whereas statistical hypothesis testing gives only a "yes" or "no" answer to certain questions, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and qualifies findings as "marginally significant" or "almost significant" (p very close to 0.05).
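    The case study's central point, that confidence intervals expose the uncertainty of small samples in a way a bare p-value does not, can be illustrated with a minimal sketch. It uses normal-approximation intervals from the Python standard library; the sample mean, SD and sizes are made up:

    ```python
    from statistics import NormalDist

    def mean_ci(mean, sd, n, level=0.95):
        """Normal-approximation confidence interval for a sample mean."""
        z = NormalDist().inv_cdf(0.5 + level / 2)
        half = z * sd / n ** 0.5
        return mean - half, mean + half

    # Same sample mean and SD, two different sample sizes
    lo10, hi10 = mean_ci(50.0, 8.0, 10)
    lo100, hi100 = mean_ci(50.0, 8.0, 100)
    ```

    With identical point estimates, the n = 10 interval is sqrt(10) ≈ 3.16 times wider than the n = 100 one, information a lone p-value would hide.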

  16. Collecting operational event data for statistical analysis

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-09-01

    This report gives guidance for collecting operational data to be used for statistical analysis, especially analysis of event counts. It discusses how to define the purpose of the study, the unit (system, component, etc.) to be studied, events to be counted, and demand or exposure time. Examples are given of classification systems for events in the data sources. A checklist summarizes the essential steps in data collection for statistical analysis

  17. Statistical models for thermal ageing of steel materials in nuclear power plants

    International Nuclear Information System (INIS)

    Persoz, M.

    1996-01-01

    Some categories of steel materials in nuclear power plants may be subjected to thermal ageing, whose extent depends on the steel chemical composition and the ageing parameters, i.e. temperature and duration. This ageing affects the 'impact strength' of the materials, which is a mechanical property. In order to assess the residual lifetime of these components, a probabilistic study has been launched, which takes into account the scatter over the input parameters of the mechanical model. Predictive formulae for estimating the impact strength of aged materials are important input data of the model. A data base has been created with impact strength results obtained from a laboratory ageing program, and statistical treatments have been undertaken. Two kinds of model have been developed, with non-linear regression methods (PROC NLIN, available in SAS/STAT). The first one, using a hyperbolic tangent function, is partly based on physical considerations, and the second one, of an exponential type, is purely statistically built. The difficulties consist in selecting the significant parameters and attributing initial values to the coefficients, which is a requirement of the NLIN procedure. This global statistical analysis has led to general models that are functions of the chemical variables and the ageing parameters. These models are at least as precise as local models that had been developed earlier for some specific values of ageing temperature and ageing duration. This paper describes the data and the methodology used to build the models and analyses the results given by the SAS system. (author)
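    The abstract names SAS PROC NLIN but does not give the exact model form. As a hedged illustration of the same kind of fit, the sketch below uses a hypothetical hyperbolic-tangent ageing curve and SciPy's `curve_fit`; the model form, parameter values and data are all invented. Note that, as with PROC NLIN, starting values must be supplied:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical sigmoid ageing model: impact strength falls from an
    # initial plateau k0 to a final plateau k0 - dk as a tanh function
    # of log ageing time.
    def tanh_model(log_t, k0, dk, t_mid, width):
        return k0 - dk / 2 * (1 + np.tanh((log_t - t_mid) / width))

    rng = np.random.default_rng(0)
    log_t = np.linspace(0, 5, 60)
    true = tanh_model(log_t, 120.0, 60.0, 2.5, 0.8)
    y = true + rng.normal(0, 1.0, log_t.size)

    # Initial values for the coefficients, as the NLIN procedure also requires
    p0 = [100.0, 50.0, 2.0, 1.0]
    params, _ = curve_fit(tanh_model, log_t, y, p0=p0)
    ```

    Poorly chosen starting values can send a nonlinear least-squares fit to a wrong local minimum, which is exactly the difficulty the abstract mentions.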

  18. Multilevel models applications using SAS

    CERN Document Server

    Wang, Jichuan; Fisher, James F

    2011-01-01

    This book covers a broad range of topics about multilevel modeling. The goal is to help readers to understand the basic concepts, theoretical frameworks, and application methods of multilevel modeling. It is at a level also accessible to non-mathematicians, focusing on the methods and applications of various multilevel models and using the widely used statistical software SAS®. Examples are drawn from analysis of real-world research data.

  19. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2013-01-01

    Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional datasheets and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...

  20. Coupled MCNP - SAS-SFR calculations for sodium fast reactor core at steady-state - 15460

    International Nuclear Information System (INIS)

    Ponomarev, A.; Travleev, A.; Pfrang, W.; Sanchez, V.

    2015-01-01

    The prediction of core parameters at steady state is the first step when studying core accident transient behaviour. At this step thermal-hydraulics (TH) and core geometry parameters are calculated corresponding to initial operating conditions. In this study we present the coupling of the SAS-SFR code to the Monte Carlo neutron transport code MCNP at steady state, together with an application to the European Sodium Fast Reactor (ESFR). The SAS-SFR code employs a multi-channel core representation where each channel represents subassemblies with similar power, thermal-hydraulic and pin-mechanics conditions. For every axial node of every channel the individual geometry and material composition parameters are calculated in accordance with power and cooling conditions. This requires supplying the SAS-SFR code with nodal power values, which should be calculated by a neutron physics code given realistic core parameters. In the conventional approach the neutron physics model employs core-averaged TH and geometry data (fuel temperature, coolant density, core axial and radial expansion). In this study we introduce a new approach coupling the MCNP neutron physics models and the SAS-SFR models, so that the calculation of power can be improved by using distributed core parameters (TH and geometry) taken from SAS-SFR. The MCNP code is capable of describing cores with distributed TH parameters and even of modelling non-uniform axial expansion of fuel subassemblies. In this way, core TH and geometrical data calculated by SAS-SFR are taken into account accurately in the neutronics model. The coupling is implemented by data exchange between the two codes with the help of processing routines managed by a driver routine. Currently it is model-specific and realized for the ESFR 'Reference Oxide' core. The Beginning-Of-Life core state is considered with a 10-channel representation for fuel subassemblies. For this model several sets of coupled calculations are performed, in which different

  1. Methods for statistical data analysis of multivariate observations

    CERN Document Server

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of inte

  2. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  3. SAS6-like protein in Plasmodium indicates that conoid-associated apical complex proteins persist in invasive stages within the mosquito vector.

    Science.gov (United States)

    Wall, Richard J; Roques, Magali; Katris, Nicholas J; Koreny, Ludek; Stanway, Rebecca R; Brady, Declan; Waller, Ross F; Tewari, Rita

    2016-06-24

    The SAS6-like (SAS6L) protein, a truncated paralogue of the ubiquitous basal body/centriole protein SAS6, has been characterised recently as a flagellum protein in trypanosomatids, but associated with the conoid in apicomplexan Toxoplasma. The conoid has been suggested to derive from flagella parts, but is thought to have been lost from some apicomplexans including the malaria-causing genus Plasmodium. Presence of SAS6L in Plasmodium, therefore, suggested a possible role in flagella assembly in male gametes, the only flagellated stage. Here, we have studied the expression and role of SAS6L throughout the Plasmodium life cycle using the rodent malaria model P. berghei. Contrary to a hypothesised role in flagella, SAS6L was absent during gamete flagellum formation. Instead, SAS6L was restricted to the apical complex in ookinetes and sporozoites, the extracellular invasive stages that develop within the mosquito vector. In these stages SAS6L forms an apical ring, as we show is also the case in Toxoplasma tachyzoites. The SAS6L ring was not apparent in blood-stage invasive merozoites, indicating that the apical complex is differentiated between the different invasive forms. Overall this study indicates that a conoid-associated apical complex protein and ring structure is persistent in Plasmodium in a stage-specific manner.

  4. Síndrome de apnea del sueño (SAS) Sleep apnea syndrome

    Directory of Open Access Journals (Sweden)

    Camilo José Borrego Abello

    1994-03-01

    Full Text Available The sleep apnea syndrome (SAS) is described, covering its history, signs and symptoms, its different forms (obstructive, central and mixed apneas), its complications, mainly cardiovascular and cerebrovascular, and its forms of treatment. Emphasis is placed on the diagnostic value of the polysomnogram, which has made it possible to attribute to SAS symptoms previously considered nonspecific and to quantify their severity. The various therapeutic measures, local and general, are described, stressing the benefits obtained with positive-pressure breathing devices. These allow non-invasive treatments that abolish all of the symptoms and avoid the increased risks of cardiovascular disorders and of occupational and traffic accidents. This serious syndrome affects a large group of the population, so its importance is beyond doubt.

    Different aspects of the sleep apnea syndrome (SAS) are described, including history, clinical manifestations, clinical forms (obstructive, central and mixed), cardiovascular, cerebrovascular and other complications, and treatment. With the use of the polysomnogram it has been possible to define non-specific symptoms as due to SAS and to quantify their seriousness. Different therapeutic approaches are described, both local and systemic, with emphasis on the benefits obtained from the use of positive pressure breathing machines, which control every manifestation of the syndrome and avoid the increased cardiovascular risks as well as work and traffic accidents. This syndrome is important in terms of frequency and of increased death risk.

  5. De novo centriole formation in human cells is error-prone and does not require SAS-6 self-assembly.

    Science.gov (United States)

    Wang, Won-Jing; Acehan, Devrim; Kao, Chien-Han; Jane, Wann-Neng; Uryu, Kunihiro; Tsou, Meng-Fu Bryan

    2015-11-26

    Vertebrate centrioles normally propagate through duplication, but in the absence of preexisting centrioles, de novo synthesis can occur. Consistently, centriole formation is thought to strictly rely on self-assembly, involving self-oligomerization of the centriolar protein SAS-6. Here, through reconstitution of de novo synthesis in human cells, we surprisingly found that normal looking centrioles capable of duplication and ciliation can arise in the absence of SAS-6 self-oligomerization. Moreover, whereas canonically duplicated centrioles always form correctly, de novo centrioles are prone to structural errors, even in the presence of SAS-6 self-oligomerization. These results indicate that centriole biogenesis does not strictly depend on SAS-6 self-assembly, and may require preexisting centrioles to ensure structural accuracy, fundamentally deviating from the current paradigm.

  6. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  7. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  8. Statistical hot spot analysis of reactor cores

    International Nuclear Information System (INIS)

    Schaefer, H.

    1974-05-01

    This report is an introduction to statistical hot spot analysis. After the definition of the term 'hot spot' a statistical analysis is outlined. The mathematical method is presented; in particular the formula for the probability of no hot spots in a reactor core is evaluated. A discussion of the boundary conditions of a statistical hot spot analysis is given (technological limits, nominal situation, uncertainties). The application of the hot spot analysis to the linear power of pellets and the temperature rise in cooling channels is demonstrated with respect to the test zone of KNK II. Basic values, such as the probability of no hot spots, hot spot potential, expected hot spot diagram and cumulative distribution function of hot spots, are discussed. It is shown that the risk of hot channels can be dispersed equally over all subassemblies by an adequate choice of the nominal temperature distribution in the core
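    The central quantity, the probability of no hot spots in the core, reduces under an independence assumption to a product over channels. A minimal sketch (the channel count and exceedance probabilities are illustrative only, not from the report):

    ```python
    import math

    def prob_no_hot_spot(exceed_probs):
        """Probability that no channel exceeds its limit, assuming the
        channels behave independently."""
        p = 1.0
        for q in exceed_probs:
            p *= (1.0 - q)
        return p

    # 1000 identical channels, each with a 1e-4 chance of exceeding the limit
    p_none = prob_no_hot_spot([1e-4] * 1000)
    # For small q this is close to exp(-sum(q)) = exp(-0.1) ~ 0.905
    ```

    The complement, 1 - p_none, is the hot spot risk that the report's nominal temperature distribution is chosen to spread evenly over the subassemblies.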

  9. The statistical analysis of anisotropies

    International Nuclear Information System (INIS)

    Webster, A.

    1977-01-01

    One of the many uses to which a radio survey may be put is an analysis of the distribution of the radio sources on the celestial sphere to find out whether they are bunched into clusters or lie in preferred regions of space. There are many methods of testing for clustering in point processes and, since they are not all equally good, this contribution is presented as a brief guide to what seem to be the best of them. The radio sources certainly do not show very strong clustering and may well be entirely unclustered, so if a statistical method is to be useful it must be both powerful and flexible. A statistic is powerful in this context if it can efficiently distinguish a weakly clustered distribution of sources from an unclustered one, and it is flexible if it can be applied in a way which avoids mistaking defects in the survey for true peculiarities in the distribution of sources. The paper divides clustering statistics into two classes: number density statistics and log N/log S statistics. (Auth.)
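    A simple example of a number-density statistic of the kind discussed is the index of dispersion: the variance-to-mean ratio of source counts in equal cells, close to 1 for an unclustered Poisson-like distribution and markedly larger for a clustered one. A sketch with simulated source counts (all numbers invented):

    ```python
    import random

    def index_of_dispersion(counts):
        """Variance-to-mean ratio of cell counts: ~1 for an unclustered
        (Poisson-like) process, noticeably >1 for a clustered one."""
        n = len(counts)
        mean = sum(counts) / n
        var = sum((c - mean) ** 2 for c in counts) / (n - 1)
        return var / mean

    random.seed(42)

    # Unclustered: 2000 sources scattered uniformly over 100 cells
    cells = [0] * 100
    for _ in range(2000):
        cells[random.randrange(100)] += 1
    d_uniform = index_of_dispersion(cells)

    # Clustered: the same 2000 sources dropped in clumps of 10
    cells = [0] * 100
    for _ in range(200):
        cells[random.randrange(100)] += 10
    d_clustered = index_of_dispersion(cells)
    ```

    This statistic is powerful in the paper's sense (weak clustering inflates it above 1), though a real survey analysis would also have to correct for nonuniform sky coverage, the flexibility requirement discussed above.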

  10. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting of the research findings. Statistical analysis gives meaning to otherwise meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.
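    As a small illustration of the measures of central tendency mentioned above, the sketch below (with invented data) shows why the median is often preferred to the mean for skewed samples:

    ```python
    from statistics import mean, median

    # A right-skewed sample (e.g. hypothetical hospital stays in days):
    # one extreme value pulls the mean well above the median.
    stays = [2, 3, 3, 4, 4, 5, 5, 6, 7, 61]

    m = mean(stays)     # 10.0 -- sensitive to the outlier
    md = median(stays)  # 4.5  -- robust summary for skewed data
    ```

    The same contrast motivates the article's parametric versus non-parametric distinction: tests built on means assume roughly symmetric data, while rank-based tests do not.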

  11. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. It differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of this paper is an example showing how to use these tools together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  12. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  13. Repositioning the substrate activity screening (SAS) approach as a fragment-based method for identification of weak binders.

    Science.gov (United States)

    Gladysz, Rafaela; Cleenewerck, Matthias; Joossens, Jurgen; Lambeir, Anne-Marie; Augustyns, Koen; Van der Veken, Pieter

    2014-10-13

    Fragment-based drug discovery (FBDD) has evolved into an established approach for "hit" identification. Typically, most applications of FBDD depend on specialised cost- and time-intensive biophysical techniques. The substrate activity screening (SAS) approach has been proposed as a relatively cheap and straightforward alternative for identification of fragments for enzyme inhibitors. We have investigated SAS for the discovery of inhibitors of oncology target urokinase (uPA). Although our results support the key hypotheses of SAS, we also encountered a number of unreported limitations. In response, we propose an efficient modified methodology: "MSAS" (modified substrate activity screening). MSAS circumvents the limitations of SAS and broadens its scope by providing additional fragments and more coherent SAR data. As well as presenting and validating MSAS, this study expands existing SAR knowledge for the S1 pocket of uPA and reports new reversible and irreversible uPA inhibitor scaffolds. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Using Business Analysis Software in a Business Intelligence Course

    Science.gov (United States)

    Elizondo, Juan; Parzinger, Monica J.; Welch, Orion J.

    2011-01-01

    This paper presents an example of a project used in an undergraduate business intelligence class which integrates concepts from statistics, marketing, and information systems disciplines. SAS Enterprise Miner software is used as the foundation for predictive analysis and data mining. The course culminates with a competition and the project is used…

  15. Embedding SAS approach into conjugate gradient algorithms for asymmetric 3D elasticity problems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chu; Warsi, N.A. [Clark Atlanta Univ., GA (United States); Sameh, A. [Univ. of Minnesota, Minneapolis, MN (United States)

    1996-12-31

    In this paper, we present two strategies to embed the SAS (symmetric-and-antisymmetric) scheme into conjugate gradient (CG) algorithms to make solving 3D elasticity problems, with or without global reflexive symmetry, more efficient. The SAS approach is physically a domain decomposition scheme that takes advantage of reflexive symmetry of discretized physical problems, and algebraically a matrix transformation method that exploits special reflexivity properties of the matrix resulting from discretization. In addition to offering large-grain parallelism, which is valuable in a multiprocessing environment, the SAS scheme also has the potential for reducing arithmetic operations in the numerical solution of a reasonably wide class of scientific and engineering problems. This approach can be applied directly to problems that have global reflexive symmetry, yielding smaller and independent subproblems to solve, or indirectly to problems with partial symmetry, resulting in loosely coupled subproblems. The decomposition is achieved by separating the reflexive subspace from the antireflexive one, possessed by a special class of matrices A, A {element_of} C{sup n x n} that satisfy the relation A = PAP where P is a reflection matrix (symmetric signed permutation matrix).
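    The matrix-transformation side of the SAS scheme can be made concrete: for a matrix satisfying A = PAP with a reflection matrix P, changing basis to the symmetric (+1) and antisymmetric (-1) eigenspaces of P block-diagonalises A, splitting one n x n problem into two independent n/2 x n/2 subproblems. A small NumPy sketch (the particular reflection and test matrix are invented for the demonstration):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 6  # even, for a clean reflection about the centre

    # Reflection matrix P: here the antidiagonal identity (P @ P = I)
    P = np.fliplr(np.eye(n))

    # Build a matrix with reflexive symmetry A = P A P
    S = rng.normal(size=(n, n))
    A = S + P @ S @ P  # satisfies P @ A @ P == A by construction

    # Orthonormal bases of the symmetric (+1) and antisymmetric (-1)
    # eigenspaces of P
    I = np.eye(n)
    h = n // 2
    Q_plus = (I[:, :h] + P @ I[:, :h]) / np.sqrt(2)   # reflexive half
    Q_minus = (I[:, :h] - P @ I[:, :h]) / np.sqrt(2)  # antireflexive half
    X = np.hstack([Q_plus, Q_minus])

    # The similarity transform block-diagonalises A: two independent
    # n/2 x n/2 subproblems instead of one n x n problem
    B = X.T @ A @ X
    ```

    The two diagonal blocks of B can then be handed to separate CG solves, which is the large-grain parallelism the paper exploits.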

  16. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  17. Computational procedures for probing interactions in OLS and logistic regression: SPSS and SAS implementations.

    Science.gov (United States)

    Hayes, Andrew F; Matthes, Jörg

    2009-08-01

    Researchers often hypothesize moderated effects, in which the effect of an independent variable on an outcome variable depends on the value of a moderator variable. Such an effect reveals itself statistically as an interaction between the independent and moderator variables in a model of the outcome variable. When an interaction is found, it is important to probe the interaction, for theories and hypotheses often predict not just interaction but a specific pattern of effects of the focal independent variable as a function of the moderator. This article describes the familiar pick-a-point approach and the much less familiar Johnson-Neyman technique for probing interactions in linear models and introduces macros for SPSS and SAS to simplify the computations and facilitate the probing of interactions in ordinary least squares and logistic regression. A script version of the SPSS macro is also available for users who prefer a point-and-click user interface rather than command syntax.
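    The pick-a-point approach described above can be sketched without the SPSS/SAS macros: fit an OLS model with an interaction term, then evaluate the simple slope of the focal predictor, with its standard error from the coefficient covariance matrix, at chosen moderator values. All data and coefficients below are simulated for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    x = rng.normal(size=n)  # focal independent variable
    m = rng.normal(size=n)  # moderator
    y = 0.5 + 0.4 * x + 0.2 * m + 0.6 * x * m + rng.normal(0, 1, n)

    # OLS with an interaction term: y = b0 + b1*x + b2*m + b3*x*m
    X = np.column_stack([np.ones(n), x, m, x * m])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)

    def simple_slope(m_val):
        """Pick-a-point: effect of x on y at a chosen moderator value,
        with its standard error."""
        slope = b[1] + b[3] * m_val
        se = np.sqrt(cov[1, 1] + m_val ** 2 * cov[3, 3]
                     + 2 * m_val * cov[1, 3])
        return slope, se

    # Probe at the moderator's mean and +/- 1 SD, a common convention
    points = (m.mean() - m.std(), m.mean(), m.mean() + m.std())
    slopes = [simple_slope(p) for p in points]
    ```

    The Johnson-Neyman technique inverts this computation, solving for the moderator values at which slope/se crosses the critical t value; the article's macros automate that step.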

  18. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2017-01-01

    The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked, to improve the readabili...

  19. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...

  20. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob

  1. Escala de sociotropía-autonomía (SAS: propiedades psicométricas de la adaptación a Colombia

    Directory of Open Access Journals (Sweden)

    Ronald Alberto Toro Tobar

    2014-07-01

    Full Text Available This article presents the results of a study whose aim was to adapt and validate the Sociotropy-Autonomy Scale (SAS) of Clark & Beck, which assesses two dimensions of cognitive personality: interpersonal orientation, and attitudes towards achievement and personal goals. The sample consisted of 460 participants between 15 and 71 years of age, in education and from different socioeconomic strata. The results showed internal consistency indices for the SAS Scale: total (.85), SAS Sociotropy (.82), and lower for SAS Autonomy (.61). These results are in line with previous research on this instrument; it is therefore considered reliable and valid for the sociocultural context, and could also support research in cognitive psychotherapy and in clinical processes based on the diathesis-stress model. Abstract This paper presents the results of a study whose aim was to adapt and validate the Sociotropy-Autonomy Scale (SAS) of Clark & Beck. This scale assesses two dimensions of cognitive personality: interpersonal orientation and attitudes of achievement and personal goals. The sample consisted of 460 participants between 15 and 71 years enrolled from different socioeconomic levels. The results showed internal consistency in the SAS Scale: total (.85), SAS Sociotropy (.82) and lower for SAS Autonomy (.61). These results are relevant to research on this instrument. It is considered a reliable and valid instrument for the socio-cultural context; it could also support research in cognitive psychotherapy and clinical processes based on the diathesis-stress model.
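    The internal-consistency indices reported above are Cronbach's alpha values; for reference, the statistic can be computed as sketched below (the item scores are invented toy data, not the study's):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha for a list of item-score columns
        (items[j][i] = score of respondent i on item j)."""
        k = len(items)
        n = len(items[0])

        def variance(xs):
            mu = sum(xs) / len(xs)
            return sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)

        item_vars = sum(variance(col) for col in items)
        totals = [sum(col[i] for col in items) for i in range(n)]
        return k / (k - 1) * (1 - item_vars / variance(totals))

    # Toy data: 3 items, 5 respondents, deliberately consistent answers
    items = [
        [1, 2, 3, 4, 5],
        [2, 2, 3, 4, 4],
        [1, 3, 3, 5, 5],
    ]
    alpha = cronbach_alpha(items)
    ```

    Alpha near 1 indicates that the items move together, as for the total scale (.85) above; a value like the Autonomy subscale's .61 signals weaker item homogeneity.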

  2. Handling missing data in cluster randomized trials: A demonstration of multiple imputation with PAN through SAS

    Directory of Open Access Journals (Sweden)

    Jiangxiu Zhou

    2014-09-01

    Full Text Available The purpose of this study is to demonstrate a way of dealing with missing data in cluster randomized trials by doing multiple imputation (MI) with the PAN package in R through SAS. The procedure for doing MI with PAN through SAS is demonstrated in detail in order for researchers to be able to use this procedure with their own data. An illustration of the technique with empirical data was also included. In this illustration the PAN results were compared with pairwise deletion and three types of MI: (1) Normal Model (NM-MI) ignoring the cluster structure; (2) NM-MI with dummy-coded cluster variables (fixed cluster structure); and (3) a hybrid NM-MI which imputes half the time ignoring the cluster structure, and the other half including the dummy-coded cluster variables. The empirical analysis showed that using PAN and the other strategies produced comparable parameter estimates. However, the dummy-coded MI overestimated the intraclass correlation, whereas MI ignoring the cluster structure and the hybrid MI underestimated the intraclass correlation. When compared with PAN, the p-value and standard error for the treatment effect were higher with dummy-coded MI, and lower with MI ignoring the cluster structure, the hybrid MI approach, and pairwise deletion. Previous studies have shown that NM-MI is not appropriate for handling missing data in clustered randomized trials. This approach, in addition to the pairwise deletion approach, leads to a biased intraclass correlation and faulty statistical conclusions. Imputation in clustered randomized trials should be performed with PAN. We have demonstrated an easy way of using PAN through SAS.

  3. Cosmic radiation and airline pilots. Exposure patterns of Norwegian SAS-pilots 1960 to 1994

    International Nuclear Information System (INIS)

    Tveten, U.

    1997-02-01

    The work presented in this report is part of a Norwegian epidemiological project carried out in cooperation between Institutt for Energiteknikk (IFE), the Norwegian Cancer Registry (NCR) and the Norwegian Radiation Protection Authority (NRPA), and partially financed by the Norwegian Research Council. Originating from the Norwegian project, a number of similar projects have been started or are in the planning stage in several European countries. The present report lays the ground for estimation of individual exposure histories to cosmic radiation of pilots employed by the Scandinavian Airline System (SAS). The results presented in this report (radiation dose rates for the different types of aircraft in the different years) will, in a later stage of the project, be used to estimate the individual radiation exposure histories. The major sources of information used as the basis for this work are the collection of old SAS timetables held in the SAS Museum at Fornebu Airport in Oslo, and information provided by members of the pilots' associations.

  4. Cosmic radiation and airline pilots. Exposure patterns of Norwegian SAS-pilots 1960 to 1994

    Energy Technology Data Exchange (ETDEWEB)

    Tveten, U.

    1997-02-01

    The work presented in this report is part of a Norwegian epidemiological project carried out in cooperation between Institutt for Energiteknikk (IFE), the Norwegian Cancer Registry (NCR) and the Norwegian Radiation Protection Authority (NRPA), and partially financed by the Norwegian Research Council. Originating from the Norwegian project, a number of similar projects have been started or are in the planning stage in several European countries. The present report lays the ground for estimation of individual exposure histories to cosmic radiation of pilots employed by the Scandinavian Airline System (SAS). The results presented in this report (radiation dose rates for the different types of aircraft in the different years) will, in a later stage of the project, be used to estimate the individual radiation exposure histories. The major sources of information used as the basis for this work are the collection of old SAS timetables held in the SAS Museum at Fornebu Airport in Oslo, and information provided by members of the pilots' associations.

  5. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    Science.gov (United States)

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
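
    Of the statistics compared, the pairwise-nonoverlap family is the easiest to state: the A-versus-B component of Tau-U counts, over every (baseline, treatment) pair of data points, how many pairs improved minus how many deteriorated, scaled by the total number of pairs. The Python sketch below implements only this simple AvsB component, without Tau-U's baseline-trend correction, and uses invented phase data:

```python
def tau_ab(baseline, treatment):
    """AvsB nonoverlap Tau: (improving pairs - deteriorating pairs) / all pairs."""
    pos = sum(b > a for a in baseline for b in treatment)  # treatment point higher
    neg = sum(b < a for a in baseline for b in treatment)  # treatment point lower
    return (pos - neg) / (len(baseline) * len(treatment))

# Invented phases with some overlap between baseline and treatment levels.
print(round(tau_ab([2, 3, 5], [4, 5, 6]), 2))  # -> 0.67
```

    Complete nonoverlap gives 1.0; values near 0 indicate heavy overlap, which is one reason raw Tau values and dichotomized judgments of a functional relation can disagree.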

  6. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  7. SAS2: Guide sur la recherche collaborative et l'engagement social

    International Development Research Centre (IDRC) Digital Library (Canada)

    Contrary to this way of seeing things, SAS2 promotes "social anchoring" and the ..... community organizations, the media, philanthropic foundations, ...... Pay attention to the differences that can influence the way in which ...

  8. Propiedades psicométricas de la versión española de la Sexual Assertiveness Scale (SAS)

    OpenAIRE

    Sierra Freire, Juan Carlos; Vallejo-Medina, Pablo; Santos-Iglesias, Pablo

    2011-01-01

    Sexual assertiveness refers to people's ability to initiate sexual activity, to refuse unwanted sexual activity, and to use contraceptive methods, thereby developing healthy behaviours. The Sexual Assertiveness Scale (SAS) is an 18-item scale that assesses three dimensions: Initiation, Refusal, and Pregnancy-STI Prevention (E-ETS). In this study, 853 people completed the SAS together with a battery of related instruments. The mean score of the items has ...

  9. Statistical Data Processing with R – Metadata Driven Approach

    Directory of Open Access Journals (Sweden)

    Rudi SELJAK

    2016-06-01

    Full Text Available In recent years the Statistical Office of the Republic of Slovenia has put a lot of effort into re-designing its statistical process. We replaced the classical stove-pipe oriented production system with general software solutions based on the metadata driven approach. This means that one general program code, parametrized with process metadata, is used for data processing for a particular survey. Currently, the general program code is entirely based on SAS macros, but in the future we would like to explore how successfully the statistical software R can be used for this approach. The paper describes the metadata driven principle for data validation, the generic software solution, and the main issues connected with the use of the statistical software R for this approach.
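
    The metadata driven principle described here — one generic program parametrized per survey — can be sketched in a few lines: the validation rules live in a metadata table, and a single routine applies them to any record. The Python illustration below is hypothetical (variable names and rules are invented; the office's actual implementation is SAS macros):

```python
# Process metadata: validation rules per survey variable, kept as data,
# so one generic validator serves every survey.
RULES = {
    "age":    {"type": int,   "min": 0,   "max": 120},
    "income": {"type": float, "min": 0.0, "max": 1e7},
}

def validate(record, rules=RULES):
    """Return (variable, message) pairs for every rule violation in a record."""
    errors = []
    for var, rule in rules.items():
        value = record.get(var)
        if not isinstance(value, rule["type"]):
            errors.append((var, "wrong type"))
        elif not rule["min"] <= value <= rule["max"]:
            errors.append((var, "out of range"))
    return errors

print(validate({"age": 200, "income": 50000.0}))  # -> [('age', 'out of range')]
```

    Changing a survey then means editing the metadata table, not the program code — the point of the stove-pipe replacement described above.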

  10. Genome sequencing and annotation of Proteus sp. SAS71

    Directory of Open Access Journals (Sweden)

    Samy Selim

    2015-12-01

    Full Text Available We report the draft genome sequence of Proteus sp. strain SAS71, isolated from a water spring in the Aljouf region, Saudi Arabia. The draft genome size is 3,037,704 bp with a G + C content of 39.3%, and it contains 6 rRNA sequences (single copies of 5S, 16S & 23S rRNA). The genome sequence can be accessed at DDBJ/EMBL/GenBank under the accession no. LDIU00000000.

  11. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. Rich varieties of advanced and recent statistical models are mostly available in open source software (one of them is R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and shiny), so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made an interface in the form of an e-tutorial for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare, in order to find the most appropriate model for the data.
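
    Among the computer-intensive methods mentioned (bootstrap, MCMC), a pairs bootstrap for a regression slope is the simplest to sketch: resample (x, y) pairs with replacement, refit the slope each time, and read a percentile confidence interval off the sorted replicate slopes. This is an illustrative Python version on invented data (the cited work itself uses R and shiny):

```python
import random
from statistics import mean

def slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def bootstrap_slope_ci(xs, ys, reps=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for the slope, resampling (x, y) pairs."""
    rng, n, slopes = random.Random(seed), len(xs), []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        slopes.append(slope([xs[i] for i in idx], [ys[i] for i in idx]))
    slopes.sort()
    return slopes[int(reps * alpha / 2)], slopes[int(reps * (1 - alpha / 2)) - 1]

# Invented data: y is roughly 2x with small alternating noise.
xs = list(range(10))
ys = [2 * x + 0.5 * (-1) ** x for x in xs]
lo, hi = bootstrap_slope_ci(xs, ys)
```

    The interval here should bracket a slope near 2; the percentile method is the plainest of several bootstrap CI constructions.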

  12. Application of Ontology Technology in Health Statistic Data Analysis.

    Science.gov (United States)

    Guo, Minjiang; Hu, Hongpu; Lei, Xingyun

    2017-01-01

    Research Purpose: to establish a health management ontology for the analysis of health statistic data. Proposed Methods: this paper established a health management ontology based on the analysis of the concepts in the China Health Statistics Yearbook, and used Protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source, heterogeneous health system management data and for enhancement of management efficiency.

  13. Model development of SAS4A and investigation on the initiating phase consequences in LMFRs related with material motion

    International Nuclear Information System (INIS)

    Niwa, H.

    1994-01-01

    This paper focuses on an analytical aspect of the initiating phase scenario and consequences of a postulated core disruptive accident in liquid-metal-cooled fast breeder reactors. The analytical code SAS4A has been developed at Argonne National Laboratory and introduced to PNC, where improvement and validation efforts have been performed for the mixed-oxide version of SAS4A. This paper first briefly describes recent development of SAS4A's material-motion-related models. A fission gas mass transfer model and a solid fuel chunk jamming model have been developed, introduced into SAS4A, and validated using CABRI-2 E13 experimental data. Secondly, an investigation of the mechanism of energetics in the initiating phase of an unprotected loss-of-flow accident has identified major control parameters which are intimately related to core design parameters and material motion phenomena. (author)

  14. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  15. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  16. Relap4/SAS/Mod5 - A version of Relap4/Mod 5 adapted to IPEN/CNEN - SP computer center

    International Nuclear Information System (INIS)

    Sabundjian, G.

    1988-04-01

    In order to improve the safety of nuclear reactor power plants, several computer codes have been developed in the area of thermal-hydraulic accident analysis. Among the publicly available codes, RELAP4, developed by Aerojet Nuclear Company, has been the most popular one, and it has produced satisfactory results when compared to most of the available experimental data. The purposes of the present work are: optimization of RELAP4 output and messages by writing this information to temporary records, and display of RELAP4 results in graphical form through the printer. The sample problem consists of a simplified model of a 150 MW(e) PWR whose primary circuit is simulated by 6 volumes, 8 junctions and 1 heat slab. This new version of RELAP4 (named RELAP4/SAS/MOD5) has produced results which show that the above-mentioned purposes have been achieved. Obviously, the graphical output of RELAP4/SAS/MOD5 aids the user's interpretation of results. (author) [pt

  17. Quality assurance management plan (QAPP) special analytical support (SAS)

    Energy Technology Data Exchange (ETDEWEB)

    LOCKREM, L.L.

    1999-05-20

    It is the policy of Special Analytical Support (SAS) that the analytical aspects of all environmental data generated and processed in the laboratory, subject to the Environmental Protection Agency (EPA), U.S. Department of Energy or other project specific requirements, be of known and acceptable quality. It is the intention of this QAPP to establish and assure that an effective quality controlled management system is maintained in order to meet the quality requirements of the intended use(s) of the data.

  18. Quality assurance management plan (QAPP) special analytical support (SAS)

    International Nuclear Information System (INIS)

    LOCKREM, L.L.

    1999-01-01

    It is the policy of Special Analytical Support (SAS) that the analytical aspects of all environmental data generated and processed in the laboratory, subject to the Environmental Protection Agency (EPA), U.S. Department of Energy or other project specific requirements, be of known and acceptable quality. It is the intention of this QAPP to establish and assure that an effective quality controlled management system is maintained in order to meet the quality requirements of the intended use(s) of the data

  19. Comparison of ArcGIS and SAS Geostatistical Analyst to Estimate Population-Weighted Monthly Temperature for US Counties.

    Science.gov (United States)

    Xiaopeng, Qi; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang

    Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by both packages were positively correlated (adjusted R2 range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both methods, from ArcGIS and SAS, are reliable for US county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.
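
    Interpolating station temperatures to county population centroids is done in both packages by cokriging; a far simpler stand-in that conveys the idea of distance-weighted spatial interpolation is inverse-distance weighting (IDW). The Python sketch below uses invented station coordinates and is not the cokriging-with-elevation model the study used:

```python
import math

def idw(stations, point, power=2):
    """Inverse-distance-weighted estimate at `point` from (x, y, value) stations."""
    num = den = 0.0
    for x, y, t in stations:
        d = math.hypot(x - point[0], y - point[1])
        if d == 0:
            return t            # exactly at a station: use its value
        w = d ** -power         # nearer stations get larger weights
        num += w * t
        den += w
    return num / den

# Invented stations either side of a centroid; the midpoint gets equal weights.
stations = [(0, 0, 10.0), (1, 0, 14.0)]
print(idw(stations, (0.5, 0)))  # -> 12.0
```

    Unlike kriging, IDW ignores spatial covariance structure and covariates such as elevation, which is precisely what the geostatistical models compared above add.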

  20. TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION

    Directory of Open Access Journals (Sweden)

    А. А. Vershinina

    2014-01-01

    Full Text Available This scientific article presents a technique for the statistical analysis of the investment appeal of a region for foreign direct investment. The definition of the technique is given, the stages of the analysis are described, and the mathematical-statistical tools are considered.

  1. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  2. Regression-based statistical mediation and moderation analysis in clinical research: Observations, recommendations, and implementation.

    Science.gov (United States)

    Hayes, Andrew F; Rockwood, Nicholas J

    2017-11-01

    There have been numerous treatments in the clinical research literature about various design, analysis, and interpretation considerations when testing hypotheses about mechanisms and contingencies of effects, popularly known as mediation and moderation analysis. In this paper we address the practice of mediation and moderation analysis using linear regression in the pages of Behaviour Research and Therapy and offer some observations and recommendations, debunk some popular myths, describe some new advances, and provide an example of mediation, moderation, and their integration as conditional process analysis using the PROCESS macro for SPSS and SAS. Our goal is to nudge clinical researchers away from historically significant but increasingly old school approaches toward modifications, revisions, and extensions that characterize more modern thinking about the analysis of the mechanisms and contingencies of effects. Copyright © 2016 Elsevier Ltd. All rights reserved.
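
    The core of a simple mediation model is the indirect effect a·b: a is the slope from regressing the mediator M on X, and b is M's coefficient when Y is regressed on X and M together. The minimal Python sketch below solves the centered normal equations directly, on invented data; it is illustrative only (the paper's own examples use the PROCESS macro for SPSS and SAS, with bootstrap inference):

```python
from statistics import mean

def indirect_effect(x, m, y):
    """Indirect effect a*b: a from m ~ x, b = m's coefficient in y ~ x + m."""
    cx = [v - mean(x) for v in x]          # center so intercepts drop out
    cm = [v - mean(m) for v in m]
    cy = [v - mean(y) for v in y]
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    a = dot(cx, cm) / dot(cx, cx)          # slope of m on x
    det = dot(cx, cx) * dot(cm, cm) - dot(cx, cm) ** 2
    b = (dot(cx, cx) * dot(cm, cy) - dot(cx, cm) * dot(cx, cy)) / det
    return a * b

# Invented data in which m carries x's effect on y (a = 2, b = 3).
x = [1, 2, 3, 4]
m = [2.1, 3.9, 5.9, 8.1]        # roughly 2*x
y = [3 * v for v in m]          # exactly 3*m
print(round(indirect_effect(x, m, y), 2))  # -> 6.0
```

    In practice the indirect effect's sampling distribution is skewed, which is why modern treatments (as the paper recommends) prefer bootstrap confidence intervals over the normal-theory Sobel test.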

  3. Semiclassical analysis, Witten Laplacians, and statistical mechanics

    CERN Document Server

    Helffer, Bernard

    2002-01-01

    This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S

  4. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

    Full Text Available Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR<0.001 and 0.001 … Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.
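
    FDR thresholds such as those quoted are typically obtained with the Benjamini-Hochberg step-up procedure: sort the m p-values, find the largest rank k with p(k) ≤ k·q/m, and reject the k smallest. The abstract does not say which FDR procedure was used, so the Python sketch below is a generic illustration on invented p-values:

```python
def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: indices of hypotheses rejected at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])   # indices by ascending p
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:                   # step-up comparison
            k = rank                                   # largest rank passing
    return sorted(order[:k])

# Invented p-values for ten tests.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(bh_reject(pvals, q=0.05))  # -> [0, 1]
```

    Note the step-up logic: once the largest passing rank k is found, all k smallest p-values are rejected, even those that individually fail their own threshold.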

  5. A statistical approach to plasma profile analysis

    International Nuclear Information System (INIS)

    Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.

    1990-05-01

    A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous approach to the empirical testing of plasma profile invariance. (orig.)

  6. Study designs, use of statistical tests, and statistical analysis software choice in 2015: Results from two Pakistani monthly Medline indexed journals.

    Science.gov (United States)

    Shaikh, Masood Ali

    2017-09-01

    Assessment of research articles in terms of study designs used, statistical tests applied and the use of statistical analysis programmes help determine research activity profile and trends in the country. In this descriptive study, all original articles published by Journal of Pakistan Medical Association (JPMA) and Journal of the College of Physicians and Surgeons Pakistan (JCPSP), in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and the use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and use of statistical software programme SPSS to be the most common study design, inferential statistical analysis, and statistical analysis software programmes, respectively. These results echo previously published assessment of these two journals for the year 2014.

  7. Statistical analysis of brake squeal noise

    Science.gov (United States)

    Oberst, S.; Lai, J. C. S.

    2011-06-01

    Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.

  8. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  9. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional-octave smoothing, as with 1/3-octave analysis, may lead to RTF simplifications that can be useful for several audio applications, such as room compensation, room modeling, and auralisation. The aim of this work is to identify the relationship of optimal response and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical
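
    Fractional-octave smoothing averages the RTF within bands whose edges are a fixed ratio from the centre frequency; for 1/3-octave bands in the base-2 convention, centres step by a factor of 2^(1/3) and each band's edges sit a factor of 2^(1/6) below and above the centre. A small Python sketch of the band geometry (the smoothing itself is just an average of |RTF| within each band):

```python
def third_octave_band(fc):
    """Lower and upper edges of the 1/3-octave band centred on fc (base-2)."""
    return fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)

# Centres step by 2**(1/3); seven bands around 1 kHz span 500 Hz to 2 kHz.
centres = [1000 * 2 ** (n / 3) for n in range(-3, 4)]
lo, hi = third_octave_band(1000.0)
print(round(lo, 1), round(hi, 1))  # -> 890.9 1122.5
```

    Standardised band definitions (e.g., base-10 in IEC 61260) differ slightly from this base-2 sketch, so the edges above are illustrative.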

  10. A statistical analysis of the body condition of cows from two veterinary stations in Zimbabwe

    International Nuclear Information System (INIS)

    Saporu, F.W.O.

    2003-12-01

    The improvement of livestock production is important for Zimbabwe's agriculturally based economy. This paper examines the relationship between the body condition and metabolic parameters of female cows, for a better understanding of traditional livestock farming in Zimbabwe. The data analysed are part of the baseline data on the improvement of livestock production, collected from two sites, Chinamora and Bulawayo. Body condition is indexed by body score. Thirty-five variables are examined. The variable selection method employed is stepwise regression. The regression model assumptions of normality and independent observations are checked using a normal probability plot and the Durbin-Watson statistic for autocorrelation of residuals. Collinearity and outlier problems are examined using eigenanalysis and influence statistics. The effects of some factors, such as site (which relates to livestock management), parity, and season (categorized by the quality of forage available for grazing) are also studied. The data are analysed using the SAS statistical package on a personal computer. The results show that only about four variables substantially influence the relationship at each of the two sites considered. For the better-managed site, Bulawayo, these are PCV, calcium and WBC. Strongyles, progesterone level, phosphate and HB are obtained in Chinamora. A negative correlation coefficient corresponds to strongyles only; that is, the effect of strongyles is to reduce the value of body score. For the other variables, an improvement in their respective values will bring about improved body condition. Site difference is identified as a factor affecting the relationship, which emphasizes the role of good management in livestock production. Parity and season are also identified. Only two interactions are significant: site-season and a progesterone level-season interaction. The latter is obtained only at the Chinamora site, and it can be deduced that the cyclic cows are exposed to the risk of losing their

  11. Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)

    Science.gov (United States)

    2004-12-01

    The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  12. Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)

    Science.gov (United States)

    2005-12-01

    The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  13. Living Together v. Living Well Together: A Normative Examination of the SAS Case

    Directory of Open Access Journals (Sweden)

    Lori G. Beaman

    2016-04-01

    Full Text Available The European Court of Human Rights decision in SAS v. France illustrates how a policy and national mantra that ostensibly aims to enhance inclusiveness, 'living together', is legally deployed in a manner that may have the opposite effect. In essence, despite acknowledging the sincerity of SAS's religious practice of wearing the niqab, and her agency in making the decision to do so, the Court focuses on radicalism and women's oppression amongst Muslims. Taking the notion of living together as the beginning point, the paper explores the normative assumptions underlying this notion as illustrated in the judgment of the Court. An alternative approach, drawing on the work of Derrida for the notion of 'living well together', will be proposed and its implications for social inclusion explicated. The paper's aim is to move beyond the specific example of SAS and France to argue that the SAS pattern of identifying particular values as 'national values', the deployment of those values through law, policy and public discourse, and their exclusionary effects is playing out in a number of Western democracies, including Canada, the country with which the author is most familiar. Because of this widespread dissemination of values and their framing as representative of who 'we' are, there is a pressing need to consider the potentially alienating effects of a specific manifestation of 'living together' and an alternative model of 'living well together'.

  14. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  15. Statistical analysis of long term spatial and temporal trends of ...

    Indian Academy of Sciences (India)

    Statistical analysis of long term spatial and temporal trends of temperature ... CGCM3; HadCM3; modified Mann–Kendall test; statistical analysis; Sutlej basin. ... Water Resources Systems Division, National Institute of Hydrology, Roorkee 247 ...

  16. AnovArray: a set of SAS macros for the analysis of variance of gene expression data

    Directory of Open Access Journals (Sweden)

    Renard Jean-Paul

    2005-06-01

    Full Text Available Abstract Background Analysis of variance is a powerful approach to identify differentially expressed genes in a complex experimental design for microarray and macroarray data. The advantage of the ANOVA model is the possibility to evaluate multiple sources of variation in an experiment. Results AnovArray is a package implementing ANOVA for gene expression data using SAS® statistical software. The originality of the package is (1) to quantify the different sources of variation on all genes together, (2) to provide a quality control of the model, (3) to propose two models for a gene's variance estimation and (4) to perform a correction for multiple comparisons. Conclusion AnovArray is freely available at http://www-mig.jouy.inra.fr/stat/AnovArray and requires only SAS® statistical software.
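    The AnovArray macros themselves are SAS® software and are not reproduced here; as a rough illustration of the underlying idea only, the following Python sketch runs a gene-wise one-way ANOVA on synthetic expression data and applies a hand-rolled Benjamini-Hochberg correction for multiple comparisons (the data, effect size, and 0.05 threshold are all assumptions for the example):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# 100 genes x 6 arrays: two conditions with 3 replicates each;
# the first 10 genes get a simulated expression shift
data = rng.normal(size=(100, 6))
data[:10, 3:] += 3.0

# Per-gene one-way ANOVA (with two groups this reduces to an F test)
pvals = np.array([f_oneway(row[:3], row[3:]).pvalue for row in data])

# Benjamini-Hochberg correction for multiple comparisons
m = len(pvals)
order = np.argsort(pvals)
scaled = pvals[order] * m / (np.arange(m) + 1)
qvals = np.minimum.accumulate(scaled[::-1])[::-1]
significant = np.empty(m, dtype=bool)
significant[order] = qvals < 0.05
print(significant.sum(), "genes flagged")
```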

  17. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  18. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades, the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution in dependence of data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  19. SAS-4 is recruited to a dynamic structure in newly forming centrioles that is stabilized by the gamma-tubulin-mediated addition of centriolar microtubules.

    Science.gov (United States)

    Dammermann, Alexander; Maddox, Paul S; Desai, Arshad; Oegema, Karen

    2008-02-25

    Centrioles are surrounded by pericentriolar material (PCM), which is proposed to promote new centriole assembly by concentrating gamma-tubulin. Here, we quantitatively monitor new centriole assembly in living Caenorhabditis elegans embryos, focusing on the conserved components SAS-4 and SAS-6. We show that SAS-4 and SAS-6 are coordinately recruited to the site of new centriole assembly and reach their maximum levels during S phase. Centriolar SAS-6 is subsequently reduced by a mechanism intrinsic to the early assembly pathway that does not require progression into mitosis. Centriolar SAS-4 remains in dynamic equilibrium with the cytoplasmic pool until late prophase, when it is stably incorporated in a step that requires gamma-tubulin and microtubule assembly. These results indicate that gamma-tubulin in the PCM stabilizes the nascent daughter centriole by promoting microtubule addition to its outer wall. Such a mechanism may help restrict new centriole assembly to the vicinity of preexisting parent centrioles that recruit PCM.

  20. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  1. Development and validation of logistic prognostic models by predefined SAS-macros

    Directory of Open Access Journals (Sweden)

    Ziegler, Christoph

    2006-02-01

    Full Text Available In medical decision making about therapies or diagnostic procedures, prognoses of the course or severity of a disease play a relevant role. Besides the subjective judgement of the clinician, mathematical models can help provide such prognoses. Such models are mostly multivariate regression models; in the case of a dichotomous outcome, the logistic model is applied as the standard model. In this paper we describe SAS macros for the development of such a model, for examination of its prognostic performance, and for model validation. The rationale for this developmental approach to prognostic modelling and the description of the macros can only be given briefly in this paper; much more detail is given in. These 14 SAS macros are a tool for setting up the whole process of deriving a prognostic model. In particular, the possibility of validating the model with a standardized software tool provides an opportunity that is not generally exploited in published prognostic models. This can therefore help to develop new models with good prognostic performance for use in medical applications.
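    The SAS macros themselves are not shown in the abstract; purely as a language-neutral sketch of the kind of workflow they automate, the following Python code fits a logistic model by maximum likelihood (gradient ascent) and scores its prognostic performance with the c statistic (area under the ROC curve). The predictors, coefficients, and sample size are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 500
X = rng.normal(size=(n, 3))                  # three hypothetical predictors
true_beta = np.array([1.0, -0.8, 0.5])
prob = 1 / (1 + np.exp(-(X @ true_beta - 0.2)))
y = rng.binomial(1, prob)                    # dichotomous outcome

# Fit logistic regression by gradient ascent on the log-likelihood
Xd = np.hstack([np.ones((n, 1)), X])         # add intercept column
beta = np.zeros(4)
for _ in range(2000):
    pred = 1 / (1 + np.exp(-(Xd @ beta)))
    beta += 0.01 * Xd.T @ (y - pred) / n

# Prognostic performance: c statistic (AUC) via pairwise comparison
scores = Xd @ beta
pos, neg = scores[y == 1], scores[y == 0]
auc = (pos[:, None] > neg[None, :]).mean() + 0.5 * (pos[:, None] == neg[None, :]).mean()
print("c statistic:", round(auc, 3))
```

A real validation step would additionally check calibration and apply bootstrap or split-sample validation, which is what standardized tooling makes routine.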

  2. Statistical evaluation of vibration analysis techniques

    Science.gov (United States)

    Milner, G. Martin; Miller, Patrice S.

    1987-01-01

    An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.

  3. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  4. Non-linear Growth Models in Mplus and SAS

    Science.gov (United States)

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
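    The Mplus and SAS NLMIXED syntax from the paper is not reproduced here; as an illustrative sketch of the core idea, a three-parameter logistic growth function can be fit to simulated achievement-like data in Python with scipy's curve_fit (the functional form, parameter values, and noise level are assumptions for the example):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, asym, rate, midpoint):
    # Three-parameter logistic growth curve
    return asym / (1.0 + np.exp(-rate * (t - midpoint)))

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 40)
y = logistic(t, 100.0, 1.2, 5.0) + rng.normal(scale=2.0, size=t.size)

p0 = [y.max(), 1.0, np.median(t)]      # crude but serviceable starting values
params, cov = curve_fit(logistic, t, y, p0=p0)
print("asymptote %.1f, rate %.2f, midpoint %.2f" % tuple(params))
```

Gompertz and Richards curves fit the same way by swapping the model function; as the paper notes, starting values and parameter interpretability are the main practical concerns.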

  5. Statistical analysis on extreme wave height

    Digital Repository Service at National Institute of Oceanography (India)

    Teena, N.V.; SanilKumar, V.; Sudheesh, K.; Sajeev, R.

    (Abstract fragments only; the extracted text cites WAFO (2000), a MATLAB toolbox for analysis of random waves and loads, Lund University, Sweden, homepage http://www.maths.lth.se/matstat/wafo/, and Table 1: Statistical results of data and fitted distribution for cumulative distribution...)

  6. Statistical Analysis of Zebrafish Locomotor Response.

    Science.gov (United States)

    Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai

    2015-01-01

    Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
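    The study's own analysis was of zebrafish VMR profiles; as a generic illustration of the test it uses, a two-sample Hotelling's T-squared test comparing multivariate activity profiles can be sketched in Python (the data below are synthetic, and the group sizes and dimensionality are assumptions):

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_t2(x, y):
    """Two-sample Hotelling's T-squared test for equal mean vectors."""
    n1, p = x.shape
    n2, _ = y.shape
    diff = x.mean(axis=0) - y.mean(axis=0)
    # Pooled covariance matrix
    s = ((n1 - 1) * np.cov(x, rowvar=False)
         + (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s, diff)
    # Convert T^2 to an F statistic for the p-value
    f_stat = t2 * (n1 + n2 - p - 1) / (p * (n1 + n2 - 2))
    pval = f_dist.sf(f_stat, p, n1 + n2 - p - 1)
    return t2, pval

rng = np.random.default_rng(2)
a = rng.normal(size=(20, 5))             # e.g., 5 time bins of activity, 20 larvae
b = rng.normal(loc=1.0, size=(20, 5))    # shifted mean profile
t2, p = hotelling_t2(a, b)
print(t2, p)
```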

  7. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-...
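    The U-to-Z normalization step described above can be sketched in a few lines of plain Python; in the running version, this function would simply be applied to each moving time window. This is the standard normal approximation without a tie correction, not the abstract's exact procedure:

```python
import math

def mann_whitney_z(x, y):
    """Mann-Whitney U statistic normalized to a Z score
    (normal approximation, no tie correction)."""
    n1, n2 = len(x), len(y)
    # U = number of (x_i, y_j) pairs with x_i > y_j, ties counted as 1/2
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean_u) / sd_u

z = mann_whitney_z([5, 6, 7, 8, 9], [1, 2, 3, 4, 10])
print(z)
```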

  8. Technical support for a proposed decay heat guide using SAS2H/ORIGEN-S data

    International Nuclear Information System (INIS)

    Hermann, O.W.; Parks, C.V.; Renier, J.P.

    1994-09-01

    Major revisions are proposed to the current US Nuclear Regulatory Commission decay heat rate guide entitled "Regulatory Guide 3.54, Spent Fuel Heat Generation in an Independent Spent Fuel Storage Installation," using a new data base produced by the SAS2H analysis sequence of the SCALE-4 system. The data base for the proposed guide revision has been significantly improved by increasing the number and range of parameters that generally characterize pressurized-water-reactor (PWR) and boiling-water-reactor (BWR) spent fuel assemblies. Using generic PWR and BWR assembly models, calculations were performed with each model for six different burnups at each of three separate specific powers to produce heat rates at 20 cooling times in the range of 1 to 110 y. The proposed procedure specifies proper interpolation formulae for the tabulated heat generation rates. Adjustment formulae for the interpolated values are provided to account for differences in initial 235U enrichment and changes in the specific power of a cycle from the average value. Finally, safety factor formulae were derived as a function of burnup, cooling time, and type of reactor. The proposed guide revision was designed to be easier to use. Also, the complete data base and guide procedure is incorporated into an interactive code called LWRARC which can be executed on a personal computer. The report shows adequate comparisons of heat rates computed by SAS2H/ORIGEN-S and measurements for 10 BWR and 10 PWR fuel assemblies. The average differences of the computed minus the measured heat rates of fuel assemblies were -0.7 ± 2.6% for the BWR and 1.5 ± 1.3% for the PWR. In addition, a detailed analysis of the proposed procedure indicated the method and equations to be valid.
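    The guide's actual interpolation and adjustment formulae are not given in the abstract. Purely as a generic sketch of interpolating a tabulated heat rate in burnup and cooling time, the following Python fragment interpolates linearly in burnup and in the logarithm of cooling time; the table values, units, and the choice of log-time interpolation are all hypothetical:

```python
import numpy as np

# Hypothetical table: heat rate (W/kgU, illustrative numbers only),
# indexed by burnup (GWd/MTU) and cooling time (years)
burnups = np.array([20.0, 30.0, 40.0])
cool_times = np.array([1.0, 5.0, 10.0, 50.0])
heat = np.array([[1.5, 0.9, 0.6, 0.2],
                 [2.0, 1.2, 0.8, 0.3],
                 [2.5, 1.5, 1.0, 0.4]])

def interp_heat(bu, ct):
    # Interpolate linearly in log(cooling time) along each burnup row,
    # then linearly in burnup between rows.
    per_row = np.array([np.interp(np.log(ct), np.log(cool_times), row)
                        for row in heat])
    return float(np.interp(bu, burnups, per_row))

print(interp_heat(25.0, 2.0))
```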

  9. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before

  10. Design and implementation of reliability evaluation of SAS hard disk based on RAID card

    Science.gov (United States)

    Ren, Shaohua; Han, Sen

    2015-10-01

    Because of its huge advantage in storage, RAID technology has been widely used. However, a question associated with this technology is that a hard disk behind a RAID card cannot be queried directly by the operating system. How to read the self-information and log data of such a hard disk has therefore been a problem, while these data are necessary for hard disk reliability testing. Traditional methods can read this information only from SATA hard disks, not from SAS hard disks. In this paper, we provide a method that uses the LSI RAID card's application programming interface to communicate with the RAID card and analyze the feedback data, solving the problem and yielding the information necessary to assess SAS hard disk reliability.

  11. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. 
We highlight the utility of this new framework for combustion

  12. Statistical learning methods in high-energy and astrophysics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2004-11-21

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  13. Statistical learning methods in high-energy and astrophysics analysis

    International Nuclear Information System (INIS)

    Zimmermann, J.; Kiesling, C.

    2004-01-01

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application

  14. The fuzzy approach to statistical analysis

    NARCIS (Netherlands)

    Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.

    2006-01-01

    For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes. These namely are: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;

  15. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test, ANOVA mean comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
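    The paper's own survey data are not available; as a generic illustration of the tests named above, the following Python sketch runs a Kolmogorov-Smirnov normality check, a t-test, and a one-way ANOVA on synthetic survey scores (group means, spread, and sample sizes are assumptions; the LSD post-hoc step is omitted):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
g1 = rng.normal(3.8, 0.6, 40)   # synthetic survey scores, group 1
g2 = rng.normal(3.5, 0.6, 40)   # group 2
g3 = rng.normal(3.1, 0.6, 40)   # group 3

# Kolmogorov-Smirnov test of standardized group-1 scores against N(0, 1)
ks = stats.kstest((g1 - g1.mean()) / g1.std(ddof=1), "norm")
# Student's t-test comparing two group means
t = stats.ttest_ind(g1, g2)
# One-way ANOVA across all three groups
anova = stats.f_oneway(g1, g2, g3)
print(ks.pvalue, t.pvalue, anova.pvalue)
```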

  16. Foundation of statistical energy analysis in vibroacoustics

    CERN Document Server

    Le Bot, A

    2015-01-01

    This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.

  17. Composites reinforcement by rods a SAS study

    CERN Document Server

    Urban, V; Pyckhout-Hintzen, W; Richter, D; Straube, E

    2002-01-01

    The mechanical properties of composites are governed by size, shape and dispersion degree of so-called reinforcing particles. Polymeric fillers based on thermodynamically driven microphase separation of block copolymers offer the opportunity to study a model system of controlled rod-like filler particles. We chose a triblock copolymer (PBPSPB) and carried out SAS measurements with both X-rays and neutrons, in order to characterize separately the hard phase and the cross-linked PB matrix. The properties of the material depend strongly on the way that stress is carried and transferred between the soft matrix and the hard fibers. The failure of the strain-amplification concept and the change of topological contributions to the free energy and scattering factor have to be addressed. In this respect the composite shows a similarity to a two-network system, i.e. interpenetrating rubber and rod-like filler networks. (orig.)

  18. Statistical Analysis of Big Data on Pharmacogenomics

    Science.gov (United States)

    Fan, Jianqing; Liu, Han

    2013-01-01

    This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905

  19. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  20. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  1. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures

  2. An analysis of the number of parking bays and checkout counters for a supermarket using SAS simulation studio

    Science.gov (United States)

    Kar, Leow Soo

    2014-07-01

    Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of the day (morning, afternoon and evening) and with the day of the week (weekdays or weekends), the transport mode of arriving customers (by car or other means), the mode of payment (cash or credit card), customer shopping pattern (leisurely, normal, exact) or choice of checkout counters (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times of the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.
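    The seize/release logic of the Resource Pool block can be mimicked outside SAS Simulation Studio; the following minimal Python event loop treats parking bays as a counted resource, with cars balking when no bay is free. The arrival rate, shopping duration, bay count, and simulated horizon are all hypothetical stand-ins for the paper's empirical inputs:

```python
import heapq
import random

random.seed(5)
N_BAYS = 50
SIM_HOURS = 12.0

t = 0.0
free_bays = N_BAYS
departures = []          # min-heap of departure times for parked cars
arrived = served = balked = 0

while t < SIM_HOURS:
    t += random.expovariate(40.0)        # ~40 car arrivals per hour
    # Release bays for cars that have left by time t
    while departures and departures[0] <= t:
        heapq.heappop(departures)
        free_bays += 1
    arrived += 1
    if free_bays > 0:                    # seize a bay (Resource Pool analogue)
        free_bays -= 1
        served += 1
        heapq.heappush(departures, t + random.expovariate(1.0))  # ~1 h shopping
    else:
        balked += 1                      # no bay: car leaves without entering
print(arrived, served, balked)
```

Varying N_BAYS across runs gives the balking rate as a function of parking capacity, which is the kind of sensitivity analysis the paper performs with full fidelity in SAS Simulation Studio.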

  3. Using Pre-Statistical Analysis to Streamline Monitoring Assessments

    International Nuclear Information System (INIS)

    Reed, J.K.

    1999-01-01

    A variety of statistical methods exist to aid evaluation of groundwater quality and subsequent decision making in regulatory programs. These methods are applied because of large temporal and spatial extrapolations commonly applied to these data. In short, statistical conclusions often serve as a surrogate for knowledge. However, facilities with mature monitoring programs that have generated abundant data have inherently less uncertainty because of the sheer quantity of analytical results. In these cases, statistical tests can be less important, and "expert" data analysis should assume an important screening role. The WSRC Environmental Protection Department, working with the General Separations Area BSRI Environmental Restoration project team, has developed a method for an Integrated Hydrogeological Analysis (IHA) of historical water quality data from the F and H Seepage Basins groundwater remediation project. The IHA combines common-sense analytical techniques and a GIS presentation that force direct interactive evaluation of the data. The IHA can perform multiple data analysis tasks required by the RCRA permit. These include: (1) development of a groundwater quality baseline prior to remediation startup, (2) targeting of constituents for removal from the RCRA GWPS, (3) targeting of constituents for removal from the UIC permit, (4) targeting of constituents for reduced monitoring, (5) targeting of monitoring wells not producing representative samples, (6) reduction in statistical evaluation, and (7) identification of contamination from other facilities

  4. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrasts. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined into a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment, and the methods are compared to conjunction analysis approaches for test statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
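For illustration, if one assumes (a simplification the article itself does not necessarily make) that the voxelwise posterior probabilities in two PPMs are independent, the logical combinations the authors describe reduce to elementary probability rules:

```python
def p_and(p, q):
    """Conjunction of two independent events."""
    return p * q

def p_or(p, q):
    """Disjunction via inclusion-exclusion."""
    return p + q - p * q

def p_implies(p, q):
    """Implication: P(A -> B) = P(not A or B)."""
    return p_or(1.0 - p, q)

# Two toy posterior probability "maps" (one probability per voxel)
ppm_a = [0.95, 0.40, 0.80]
ppm_b = [0.90, 0.85, 0.10]

conj = [p_and(a, b) for a, b in zip(ppm_a, ppm_b)]      # P(A AND B) per voxel
disj = [p_or(a, b) for a, b in zip(ppm_a, ppm_b)]       # P(A OR B) per voxel
imp  = [p_implies(a, b) for a, b in zip(ppm_a, ppm_b)]  # P(A -> B) per voxel
print(conj)
```

In the article's actual setting the joint posterior comes from the GLM rather than an independence assumption; the sketch only shows how conjunction, disjunction and implication map onto probabilities.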

  5. Statistical applications for chemistry, manufacturing and controls (CMC) in the pharmaceutical industry

    CERN Document Server

    Burdick, Richard K; Pfahler, Lori B; Quiroz, Jorge; Sidor, Leslie; Vukovinsky, Kimberly; Zhang, Lanju

    2017-01-01

    This book examines statistical techniques that are critically important to Chemistry, Manufacturing, and Control (CMC) activities. Statistical methods are presented with a focus on applications unique to the CMC in the pharmaceutical industry. The target audience consists of statisticians and other scientists who are responsible for performing statistical analyses within a CMC environment. Basic statistical concepts are addressed in Chapter 2 followed by applications to specific topics related to development and manufacturing. The mathematical level assumes an elementary understanding of statistical methods. The ability to use Excel or statistical packages such as Minitab, JMP, SAS, or R will provide more value to the reader. The motivation for this book came from an American Association of Pharmaceutical Scientists (AAPS) short course on statistical methods applied to CMC applications presented by four of the authors. One of the course participants asked us for a good reference book, and the only book recomm...

  6. Adaptation and Validation of the Sexual Assertiveness Scale (SAS) in a Sample of Male Drug Users.

    Science.gov (United States)

    Vallejo-Medina, Pablo; Sierra, Juan Carlos

    2015-04-21

    The aim of the present study was to adapt and validate the Sexual Assertiveness Scale (SAS) in a sample of male drug users. A sample of 326 male drug users and 322 non-clinical males was selected by cluster sampling and convenience sampling, respectively. Results showed that the scale had good psychometric properties and adequate internal consistency reliability (Initiation = .66, Refusal = .74 and STD-P = .79). An evaluation of the invariance showed strong factor equivalence between both samples. A high and moderate effect of Differential Item Functioning was only found in items 1 and 14 (ΔR² Nagelkerke = .076 and .037, respectively). We strongly recommend not using item 1 if the goal is to compare the scores of both groups; otherwise the comparison will be biased. Correlations obtained between the CSFQ-14, the safe-sex ratio, and the SAS subscales were significant (CI = 95%) and indicated good concurrent validity. Scores of male drug users were similar to those of non-clinical males. Therefore, the adaptation of the SAS to drug users provides enough guarantees for reliable and valid use in both clinical practice and research, although care should be taken with item 1.

  7. Regression analysis for the social sciences

    CERN Document Server

    Gordon, Rachel A

    2010-01-01

    The book provides graduate students in the social sciences with the basic skills that they need to estimate, interpret, present, and publish basic regression models using contemporary standards. Key features of the book include: interweaving the teaching of statistical concepts with examples developed for the course from publicly available social science data or drawn from the literature; thorough integration of teaching statistical theory with teaching data processing and analysis; teaching of both SAS and Stata "side-by-side"; and use of chapter exercises in which students practice programming and interpretation on the same data set, and course exercises in which students can choose their own research questions and data set.

  8. Business plan for Publitis S.A.S: technology solutions

    OpenAIRE

    Robledo Ceballos, Carlos Alberto; Rodríguez Velasco, Salvador

    2011-01-01

    ABSTRACT: PubliTICs is a privately constituted company created under the regulations issued for Simplified Stock Companies (Sociedades por Acciones Simplificadas), whose main purpose is the provision of Information and Communication Technology services in the Colombian market. PubliTICs S.A.S will enter into a Technological and Commercial Ally agreement with Empresas Municipales de Cali EMCALI EICE ESP to provide enterprise technology solution services, taking advantage of the infrastructure...

  9. The role of the community manager at Toluna SAS: an interactive online community

    OpenAIRE

    Faria, Filipa Maria Ferreira Prego de

    2014-01-01

    Nowadays, most companies are connected to one of the most powerful tools in the world: the internet. With the growth of this phenomenon, applications have emerged to make better use of it. Companies are beginning to take advantage of Web 2.0, which envisions using the internet as a social platform. The work presented here aims to provide an in-depth analysis of the online company Toluna SAS and to explain the emergence of the professional responsible for this social platform, the Community Man...

  10. The IAEA Transport Safety Appraisal Service (TranSAS)

    International Nuclear Information System (INIS)

    Dicke, G.J.

    2004-01-01

    Representatives of all Member States of the IAEA meet once a year in September at the General Conference in Vienna, Austria, to consider and approve the Agency's programme and budget and to address matters brought before it by the Board of Governors, the Director General, or Member States. In September 1998 the General Conference adopted resolution GC(42)/RES/13 on the Safety of Transport of Radioactive Materials. In adopting that resolution the General Conference recognized that compliance with regulations that take account of the IAEA Regulations for the Safe Transport of Radioactive Material (the IAEA Transport Regulations) is providing a high level of safety during the transport of radioactive material. Good compliance requires that the regulations are implemented effectively. The General Conference therefore requested the IAEA Secretariat to provide a service for carrying out, at the request of any State, an appraisal of the implementation of the Transport Regulations by that State. In response to this request the Director General of the IAEA offered such an appraisal service to all States in letter J1.01.Circ, dated 10 December 1998. The first Transport Safety Appraisal Service (TranSAS) was undertaken and completed at the request of Slovenia in 1999. A report on the results of that appraisal was published and released for general distribution in the early fall of 1999. In each of the General Conferences since 1998, resolutions focused on transport safety have commended the Secretariat for establishing the TranSAS, commended those States that have requested the appraisal service, and encouraged other States to avail themselves of this service (see GC(43)/RES/11, GC(44)/RES/17, GC(45)/RES/10, GC(46)/RES/9 and GC(47)/RES/7). Six appraisals have been carried out to date, as follows: Slovenia, Brazil, United Kingdom, Turkey, Panama and France.
This presentation provides an overview of the Transport Safety Appraisal Service and summarizes the major findings from the

  11. The IAEA Transport Safety Appraisal Service (TranSAS)

    Energy Technology Data Exchange (ETDEWEB)

    Dicke, G.J. [International Atomic Energy Agency, Vienna (Austria)

    2004-07-01

    Representatives of all Member States of the IAEA meet once a year in September at the General Conference in Vienna, Austria, to consider and approve the Agency's programme and budget and to address matters brought before it by the Board of Governors, the Director General, or Member States. In September 1998 the General Conference adopted resolution GC(42)/RES/13 on the Safety of Transport of Radioactive Materials. In adopting that resolution the General Conference recognized that compliance with regulations that take account of the IAEA Regulations for the Safe Transport of Radioactive Material (the IAEA Transport Regulations) is providing a high level of safety during the transport of radioactive material. Good compliance requires that the regulations are implemented effectively. The General Conference therefore requested the IAEA Secretariat to provide a service for carrying out, at the request of any State, an appraisal of the implementation of the Transport Regulations by that State. In response to this request the Director General of the IAEA offered such an appraisal service to all States in letter J1.01.Circ, dated 10 December 1998. The first Transport Safety Appraisal Service (TranSAS) was undertaken and completed at the request of Slovenia in 1999. A report on the results of that appraisal was published and released for general distribution in the early fall of 1999. In each of the General Conferences since 1998, resolutions focused on transport safety have commended the Secretariat for establishing the TranSAS, commended those States that have requested the appraisal service, and encouraged other States to avail themselves of this service (see GC(43)/RES/11, GC(44)/RES/17, GC(45)/RES/10, GC(46)/RES/9 and GC(47)/RES/7). Six appraisals have been carried out to date, as follows: Slovenia, Brazil, United Kingdom, Turkey, Panama and France. This presentation provides an overview of the Transport Safety Appraisal Service and summarizes the major findings from

  12. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  13. Modelling the detachment dependence on strike point location in the small angle slot divertor (SAS) with SOLPS

    Science.gov (United States)

    Casali, Livia; Covele, Brent; Guo, Houyang

    2017-10-01

    The new Small Angle Slot (SAS) divertor in DIII-D is characterized by a shallow-angle target enclosed by a slot structure about the strike point (SP). SOLPS modelling results of SAS have demonstrated the utility of divertor closure in widening the range of acceptable densities for adequate heat handling. An extensive database of runs has been built to study the detachment dependence on SP location in SAS. Density scans show that lower Te occurs at lower upstream density when the SP is at the critical location in the slot. The cooling front spreads across the entire target at higher densities, in agreement with experimental Langmuir probe measurements. A localized increase of the atomic and molecular density takes place near the SP, which reduces the incident power density at the target and facilitates detachment at lower upstream density. Systematic scans of variables such as power, transport, and viscosity have been carried out to assess the detachment sensitivity; therein, a positive role of the viscosity is found. This work was supported by DOE Contract Number DE-FC02-04ER54698.

  14. Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business

    Czech Academy of Sciences Publication Activity Database

    Novák, J.; Sůvová, H.; Vondráček, Jiří

    2002-01-01

    Roč. 48, č. 1 (2002), s. 9-12 ISSN 0139-570X Institutional research plan: AV0Z1030915 Keywords : financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research

  15. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  16. A Critical Analysis of U.S. Army Accessions through Socioeconomic Consideration between 1970 and 1984.

    Science.gov (United States)

    1985-06-01

    Naval Postgraduate School, Monterey, California 93943. ...determine the socioeconomic representativeness of the Army's enlistees in that particular year. In addition, the socioeconomic overview of Republic of... accomplished with the use of the Statistical Analysis System (SAS), an integrated computer system for data analysis.

  17. Statistical analysis of environmental data

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Bowman, K.O.; Miller, F.L. Jr.

    1975-10-01

    This report summarizes the analyses of data obtained by the Radiological Hygiene Branch of the Tennessee Valley Authority from samples taken around the Browns Ferry Nuclear Plant located in Northern Alabama. The data collection was begun in 1968 and a wide variety of types of samples have been gathered on a regular basis. The statistical analysis of environmental data involving very low levels of radioactivity is discussed. Applications of computer calculations for data processing are described.

  18. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  19. Statistical Power Analysis with Missing Data A Structural Equation Modeling Approach

    CERN Document Server

    Davey, Adam

    2009-01-01

    Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread applications. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions, such as: how missing data affect the statistical power of a study; how much power is likely with different amounts and types...
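For contrast with the book's SEM-and-missing-data framework, the familiar complete-data case can be sketched: the approximate power of a two-sided two-sample test under a normal approximation. The function name and the chosen effect size and sample size below are illustrative, not taken from the book:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sample_power(effect_size, n_per_group, z_alpha=1.959963984540054):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05.
    effect_size is Cohen's d; the normal approximation ignores the
    t-distribution correction, which matters for small samples."""
    ncp = effect_size * math.sqrt(n_per_group / 2.0)  # noncentrality
    return (1.0 - norm_cdf(z_alpha - ncp)) + norm_cdf(-z_alpha - ncp)

power = two_sample_power(effect_size=0.5, n_per_group=64)
print(round(power, 3))  # roughly 0.8 for d = 0.5 with 64 per group
```

Missing data effectively shrink the usable information below n per group, which is why the power questions the book poses cannot be answered by this complete-data formula alone.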

  20. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull... fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors... for timber are investigated...
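A minimal sketch of the kind of distribution comparison the abstract describes: fit Normal and Lognormal models by maximum likelihood and compare their log-likelihoods. The Weibull fits and the tail-fitting procedure of the study are omitted, and the strength values below are invented for illustration:

```python
import math

def normal_loglik(x):
    """Maximized Gaussian log-likelihood (MLE mean and variance plugged in)."""
    n = len(x)
    mu = sum(x) / n
    var = sum((v - mu) ** 2 for v in x) / n  # MLE (biased) variance
    return -0.5 * n * (math.log(2.0 * math.pi * var) + 1.0)

def lognormal_loglik(x):
    """Maximized Lognormal log-likelihood: a Gaussian fit to the log-data
    plus the change-of-variables (Jacobian) term -sum(log x)."""
    logs = [math.log(v) for v in x]
    return normal_loglik(logs) - sum(logs)

strengths = [28.1, 31.4, 25.9, 35.2, 29.8, 33.0, 27.5, 30.6]  # MPa, illustrative
ll_norm, ll_lognorm = normal_loglik(strengths), lognormal_loglik(strengths)
print(ll_norm, ll_lognorm)  # the larger value indicates the better ML fit
```

For strength data the decisive comparison in practice is often in the lower tail, which is exactly why the study emphasizes tail fits rather than whole-sample likelihoods.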

  1. Identification of Business Practices for Continuous Improvement at Plusagro S.A.S.

    OpenAIRE

    Martín Correal, Alvaro Steven

    2014-01-01

    This project will be carried out at the company Plusagro S.A.S., which seeks to substantially improve the company's organizational culture so that it can develop to the fullest the competitive advantages it finds in the market. One of these major advantages is that it holds exclusivity with its main supplier, Kursan. It therefore becomes very important to know that there are other brands in the agricultural products market that Plusagro S.A.S. serves, but it has been found by mutual agree...

  2. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  3. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
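The abstract's notion of quantifying the discrepancy between an observed empirical distribution and a theoretical "ideal" one can be illustrated with the Kullback-Leibler divergence; whether VTK's engine computes this particular divergence is not stated in the abstract, so the choice here is illustrative:

```python
import math

def kl_divergence(p_emp, q_theory):
    """Discrete KL divergence D(P || Q): non-negative, zero iff P == Q.
    Terms with p == 0 contribute nothing by convention."""
    return sum(p * math.log(p / q) for p, q in zip(p_emp, q_theory) if p > 0)

observed = [0.45, 0.35, 0.20]   # empirical bin frequencies
uniform = [1/3, 1/3, 1/3]       # theoretical "ideal" distribution
d = kl_divergence(observed, uniform)
print(round(d, 4))
```

Unlike a true metric, KL divergence is asymmetric (D(P||Q) != D(Q||P) in general), which is consistent with the abstract's careful wording "akin to measuring a distance".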

  4. Determining hyporheic storage using the rSAS model in urban restored streams.

    Science.gov (United States)

    Stoll, E.; Putnam, S. M.; Cosans, C.; Harman, C. J.

    2017-12-01

    One aim of stream restoration is to increase the connectivity of the stream with the hyporheic zone, which is important for processes like denitrification. This study analyzed transects of different restoration techniques in an urban stream, Stony Run in Baltimore, Maryland. The extent of the hyporheic zone was determined using a combination of salt slug injection tracer studies to determine the breakthrough curves and the rank StorAge Selection (rSAS) model. Previous studies using salt tracer injections have often focused on the shape of the breakthrough curve and the transit time distributions of streams to infer indices correlated with hyporheic zone storage. This study uses the rSAS model to determine the volume of storage that must be turning over to produce the breakthrough curve. This study looked at transects of two different restoration techniques, one with floodplain rehabilitation and one without. Both transects had cross vanes and pool-and-riffle systems and differed only in the steepness of the banks surrounding the stream. The utility and accuracy of the rSAS method was found to be heavily dependent on accurate flow rates. To avoid potential skew in the results, normalized, relatively flow-rate-independent metrics of storage were compared among transects to reduce error resulting from the flow rate. The results suggested that stream water was retained for longer in a larger storage volume in the transect that did not have floodplain rehabilitation. When compared to the storage of a natural stream with similar geomorphologic characteristics, the restored transect without floodplain rehabilitation had a larger storage volume than the natural stream. The restored transect with floodplain rehabilitation not only had a smaller storage volume than the restored section without rehabilitation, but also had a smaller storage volume than the natural stream with similar bank slopes.
This suggests that the floodplain restoration does not significantly contribute to
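For contrast with the rSAS storage-volume approach, the conventional moment-based summary that earlier tracer studies relied on, the mean transit time from the first temporal moment of the breakthrough curve, can be sketched as follows; the tracer data below are invented for illustration:

```python
def mean_transit_time(times, conc):
    """Mean transit time as the ratio of the first to the zeroth temporal
    moment of a tracer breakthrough curve (trapezoidal integration)."""
    def trapz(y):
        return sum(0.5 * (y[i] + y[i + 1]) * (times[i + 1] - times[i])
                   for i in range(len(times) - 1))
    m0 = trapz(conc)                                   # zeroth moment (tracer mass)
    m1 = trapz([t * c for t, c in zip(times, conc)])   # first moment
    return m1 / m0

times = [0, 1, 2, 3, 4, 5, 6]   # minutes since slug injection
conc = [0, 2, 8, 10, 6, 2, 0]   # salt concentration above background
mtt = mean_transit_time(times, conc)
print(mtt)
```

This single summary statistic is what the abstract says earlier studies used to infer storage indices; the rSAS model instead constrains the full storage volume turning over to produce the curve.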

  5. Developments in statistical analysis in quantitative genetics

    DEFF Research Database (Denmark)

    Sorensen, Daniel

    2009-01-01

    A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and by Markov chain Monte Carlo (McMC). In this overview a number of specific areas are chosen to illustrate the enormous flexibility that McMC has provided for fitting models and exploring features of data that were previously inaccessible. The selected areas are inferences of the trajectories over time of genetic means and variances, models for the analysis of categorical and count data, the statistical genetics of a model postulating that environmental variance is partly under genetic control, and a short discussion of models that incorporate massive genetic marker information. We provide an overview...

  6. On the Statistical Validation of Technical Analysis

    Directory of Open Access Journals (Sweden)

    Rosane Riera Freire

    2007-06-01

    Full Text Available. Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without accounting for (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data - various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities' returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely: triangle, rectangle, and head and shoulders.

  7. Cerebral scintigraphy by 99mTc-HMPAO in sleep apnea syndromes (SAS) during the wakeful state

    International Nuclear Information System (INIS)

    Tainturier, C.; Benamor, M.; Hausser-Hauw, C.; Rakotonanahary, D.; Fleury, B.

    1997-01-01

    The SAS is associated with cerebral hemodynamic modifications and with a high frequency of cerebrovascular accidents. The aim of this study was to assess, during the wakeful state, cerebral hemodynamics in 14 patients afflicted with SAS of various intensity (Apnea Index = 5-120/h). 555 MBq of 99mTc-HMPAO were injected in patients kept awake. The images were obtained 20 minutes after injection by means of a double-head camera equipped with fan-beam collimators. They were interpreted visually by two independent readers. Anomalies of cerebral fixation were observed in 12/14 patients. They were small foci of diffuse hypofixation with a 'riddled' aspect (4 cases), foci of bi-temporal hypofixation with a right- or left-hemispheric predominance (6 cases), or right fronto-temporal hypofixations (2 cases). Cerebral fixation anomalies have thus been reported in SAS. Ficker et al (1997) have shown frontal hypoperfusion during sleep in 5/14 apneic patients, reversible under continuous positive airway pressure (CPAP). In conclusion, anomalies of cerebral fixation exist in SAS patients, even in the wakeful state. Questions about hypoperfusion, pre-lacunar syndrome, and atrophy remain. A follow-up of this study is planned after CPAP treatment to determine the hemodynamic or anatomic origin of the anomalies and their reversibility.

  8. Cosmic radiation and airline pilots. Exposure patterns of Norwegian SAS-pilots 1960 to 1994. Revised Version

    International Nuclear Information System (INIS)

    Tveten, U.

    1999-02-01

    The present report is a revised version of an earlier report (IFE/KR/E-96/008). The revision has been carried out because a completely new version of the computational tool has recently been released; all calculations have been redone. The work presented in this report is part of a Norwegian epidemiological project, carried out in cooperation between the Institute for Energy Technology (IFE), the Norwegian Cancer Registry (NCR) and the Norwegian Radiation Protection Authority (NRPA). Originating from the Norwegian project, similar projects have been started in a number of European countries. The present report lays the groundwork for estimation of individual exposure histories to cosmic radiation of pilots employed by the Scandinavian Airlines System (SAS). The results presented in this report (radiation dose rates for the different types of aircraft in the different years) were calculated with the most recent computer program for this purpose, CARI-5E from the United States Civil Aviation Authority. The other major sources of information used as a basis for this work are the collection of old SAS timetables found at the SAS Museum at Fornebu Airport in Oslo and information provided by members of the Pilots Association in Norway

  9. Cosmic radiation and airline pilots. Exposure patterns of Norwegian SAS-pilots 1960 to 1994. Revised Version

    Energy Technology Data Exchange (ETDEWEB)

    Tveten, U

    1999-02-01

    The present report is a revised version of an earlier report (IFE/KR/E-96/008). The revision has been carried out because a completely new version of the computational tool has recently been released; all calculations have been redone. The work presented in this report is part of a Norwegian epidemiological project, carried out in cooperation between the Institute for Energy Technology (IFE), the Norwegian Cancer Registry (NCR) and the Norwegian Radiation Protection Authority (NRPA). Originating from the Norwegian project, similar projects have been started in a number of European countries. The present report lays the groundwork for estimation of individual exposure histories to cosmic radiation of pilots employed by the Scandinavian Airlines System (SAS). The results presented in this report (radiation dose rates for the different types of aircraft in the different years) were calculated with the most recent computer program for this purpose, CARI-5E from the United States Civil Aviation Authority. The other major sources of information used as a basis for this work are the collection of old SAS timetables found at the SAS Museum at Fornebu Airport in Oslo and information provided by members of the Pilots Association in Norway.

  10. Data management and statistical analysis for environmental assessment

    International Nuclear Information System (INIS)

    Wendelberger, J.R.; McVittie, T.I.

    1995-01-01

    Data management and statistical analysis for environmental assessment are important issues on the interface of computer science and statistics. Data collection for environmental decision making can generate large quantities of various types of data. A database/GIS system developed is described which provides efficient data storage as well as visualization tools which may be integrated into the data analysis process. FIMAD is a living database and GIS system. The system has changed and developed over time to meet the needs of the Los Alamos National Laboratory Restoration Program. The system provides a repository for data which may be accessed by different individuals for different purposes. The database structure is driven by the large amount and varied types of data required for environmental assessment. The integration of the database with the GIS system provides the foundation for powerful visualization and analysis capabilities

  11. Failure rate analysis using GLIMMIX

    International Nuclear Information System (INIS)

    Moore, L.M.; Hemphill, G.M.; Martz, H.F.

    1998-01-01

    This paper illustrates the use of a recently developed SAS macro, GLIMMIX, for implementing an analysis suggested by Wolfinger and O'Connell (1993) for modeling failure count data with random as well as fixed factor effects. Interest in this software tool arose from consideration of modernizing the Failure Rate Analysis Code (FRAC), developed at Los Alamos National Laboratory in the early 1980s by Martz, Beckman and McInteer (1982). FRAC is a FORTRAN program developed to analyze Poisson-distributed failure count data as a log-linear model, possibly with random as well as fixed effects. These statistical modeling assumptions are a special case of generalized linear mixed models, identified as GLMM in the current statistics literature. In the nearly 15 years since FRAC was developed, there have been considerable advances in computing capability, statistical methodology and available statistical software tools, making it worthwhile to consider the task of modernizing FRAC. In this paper, the approaches to GLMM estimation implemented in GLIMMIX and in FRAC are described and a comparison of results for the two approaches is made with data on catastrophic time-dependent pump failures from a report by Martz and Whiteman (1984). Additionally, statistical and graphical model diagnostics are suggested and illustrated with the GLIMMIX analysis results.
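The simplest special case of the Poisson failure-count models fitted by FRAC and GLIMMIX, a single constant failure rate with no fixed or random effects, has a closed-form MLE. The sketch below uses hypothetical pump data; real analyses of this kind add log-linear factor effects and random effects on top of this baseline:

```python
import math

def poisson_rate_mle(failures, exposures):
    """MLE of a constant failure rate under a Poisson model with exposure
    times: lambda_hat = total failures / total exposure."""
    lam = sum(failures) / sum(exposures)
    # Log-likelihood at the MLE (dropping the constant factorial terms)
    loglik = sum(k * math.log(lam * t) - lam * t
                 for k, t in zip(failures, exposures))
    return lam, loglik

# Hypothetical pump data: failure counts and operating hours per system
failures = [3, 1, 4, 0, 2]
exposures = [1000.0, 800.0, 1200.0, 500.0, 900.0]
lam, ll = poisson_rate_mle(failures, exposures)
print(lam)  # estimated failures per operating hour
```

A log-linear model generalizes this by writing log(lambda) as a linear combination of fixed (and, in the GLMM case, random) effects, which is what requires the iterative estimation GLIMMIX provides.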

  12. Compliance strategy for statistically based neutron overpower protection safety analysis methodology

    International Nuclear Information System (INIS)

    Holliday, E.; Phan, B.; Nainer, O.

    2009-01-01

    The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)
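
    In practice, Extreme Value Statistics often amounts to fitting an extreme-value law to a sample of extreme outcomes. The sketch below is a minimal illustration of that idea, a method-of-moments fit of the Gumbel distribution; it is not the OPG/Bruce Power NOP methodology itself, and the function names are ours.

```python
import math

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(sample):
    """Method-of-moments fit of a Gumbel (extreme-value) distribution:
    beta from the sample variance, mu from the sample mean."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((v - mean) ** 2 for v in sample) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi     # scale
    mu = mean - EULER_GAMMA * beta            # location
    return mu, beta

def gumbel_cdf(x, mu, beta):
    """P(X <= x) for the fitted extreme-value law."""
    return math.exp(-math.exp(-(x - mu) / beta))
```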

  13. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  14. Cerebral scintigraphy with 99mTc-HMPAO in sleep apnea syndromes (SAS) during the wakeful state; Scintigraphie cerebrale au Tc99m-HMPAO dans les syndromes d'apnees du sommeil (SAS) pendant l'etat de veille

    Energy Technology Data Exchange (ETDEWEB)

    Tainturier, C.; Benamor, M.; Hausser-Hauw, C.; Rakotonanahary, D.; Fleury, B. [CMC FOCH 92150 SURESNES (France)]

    1997-12-31

    The SAS is associated with cerebral hemodynamic modifications and with a high frequency of cerebrovascular accidents. The aim of this study was to assess, during the wakeful state, cerebral hemodynamics in 14 patients suffering from SAS of various intensity (Apnea Index = 5-120/h). 555 MBq of 99mTc-HMPAO were injected into patients kept awake. The images were obtained 20 minutes after injection by means of a double-head camera equipped with fan-beam collimators. They were interpreted visually by two independent readers. Anomalies of cerebral fixation were observed in 12/14 patients: small foci of diffuse hypofixation with a 'riddled' aspect (4 cases), foci of bitemporal hypofixation with a right- or left-hemispheric predominance (6 cases), or right fronto-temporal hypofixation (2 cases). Cerebral fixation anomalies were thus found in the SAS patients. Ficker et al. (1997) showed in-sleep frontal hypoperfusion in 5/14 apneic patients, reversible under continuous positive airway pressure (CPAP). In conclusion, anomalies of cerebral fixation exist in patients with SAS, even in the wakeful state. Questions about hypoperfusion, pre-lacunar syndrome and atrophy remain open. A follow-up of this study is planned after CPAP treatment to determine the hemodynamic or anatomic origin of the anomalies and their reversibility.

  15. Diagnosis checking of statistical analysis in RCTs indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-11-01

    Statistical analysis is essential for reporting the results of randomized controlled trials (RCTs), as well as for evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether the assumptions of that analysis are valid. Our objective was to review all RCTs published in journals indexed in PubMed during December 2014, to provide a complete picture of how RCTs handle the assumptions of statistical analysis. We reviewed all RCTs published in December 2014 that appeared in journals indexed in PubMed using the Cochrane highly sensitive search strategy. The 2014 impact factors of the journals were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were reviewed. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (27.2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted, with only 20%, 8.6% and 7% checked for the generalized linear model, Cox proportional hazards model and multilevel model, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of CONSORT guidelines) were not associated with the reporting of diagnosis checking. Diagnosis checking of statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of a diagnosis of assumptions. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.
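
    As a crude illustration of what checking the normality assumption can look like, the sketch below screens a sample by its skewness and excess kurtosis. This is not one of the formal tests (e.g. Shapiro-Wilk) the review counted; the threshold and names are ours.

```python
def normality_screen(sample, tol=1.0):
    """Flag samples whose skewness or excess kurtosis exceeds `tol`.
    A rough screen for the normality assumption, illustrative only."""
    n = len(sample)
    mean = sum(sample) / n
    # central moments (population form)
    m2 = sum((v - mean) ** 2 for v in sample) / n
    m3 = sum((v - mean) ** 3 for v in sample) / n
    m4 = sum((v - mean) ** 4 for v in sample) / n
    skew = m3 / m2 ** 1.5
    ex_kurt = m4 / m2 ** 2 - 3.0   # 0 for a normal distribution
    return {"skewness": skew, "excess_kurtosis": ex_kurt,
            "looks_normal": abs(skew) < tol and abs(ex_kurt) < tol}
```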

  16. Development and psychometric properties of the Suicidality: Treatment Occurring in Paediatrics (STOP) Suicidality Assessment Scale (STOP-SAS) in children and adolescents.

    Science.gov (United States)

    Flamarique, I; Santosh, P; Zuddas, A; Arango, C; Purper-Ouakil, D; Hoekstra, P J; Coghill, D; Schulze, U; Dittmann, R W; Buitelaar, J K; Lievesley, K; Frongia, R; Llorente, C; Méndez, I; Sala, R; Fiori, F; Castro-Fornieles, J

    2016-12-13

    To create a self-reported, internet-based questionnaire for the assessment of suicide risk in children and adolescents. As part of the EU project 'Suicidality: Treatment Occurring in Paediatrics' (STOP project), we developed web-based Patient Reported Outcome Measures (PROMs) for children and adolescents and for proxy reports by parents and clinicians in order to assess suicidality. Based on a literature review, expert panels and focus groups of patients, we developed the items of the STOP Suicidality Assessment Scale (STOP-SAS) in Spanish and English, translated it into four more languages, and optimized it for web-based presentation using the HealthTracker™ platform. Of the total 19 questions developed for the STOP-SAS, four questions that assess low-level suicidality were identified as screening questions (three of them for use with children, and all four for use with adolescents, parents and clinicians). A total of 395 adolescents, 110 children, 637 parents and 716 clinicians completed the questionnaire using the HealthTracker™, allowing us to evaluate the internal consistency and convergent validity of the STOP-SAS with the clinician-rated Columbia Suicide Severity Rating Scale (C-SSRS). Validity was also assessed with the receiver operating characteristic (ROC) area of the STOP-SAS with the C-SSRS. The STOP-SAS comprises 19 items in its adolescent, parent, and clinician versions, and 14 items in its children's version. Good internal consistency was found for the adolescent (Cronbach's alpha: 0.965), child (Cronbach's alpha: 0.922), parent (Cronbach's alpha: 0.951) and clinician (Cronbach's alpha: 0.955) versions. A strong correlation was found between the STOP-SAS and the C-SSRS for adolescents (r = 0.670), parents (r = 0.548), clinicians (r = 0.863) and children (r = 0.654). The ROC area was good for the clinicians' (0.917), adolescents' (0.834) and parents' (0.756) versions but only fair (0.683) for the children's version. The STOP-SAS is a comprehensive, web-based measure of suicidality.
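
    The internal-consistency index reported above, Cronbach's alpha, has a short closed form. A minimal sketch (the function name and data layout are ours):

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` is a list of per-item score lists,
    one inner list per questionnaire item, aligned across respondents:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents
    def var(xs):                        # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1.0 - item_var / var(totals))
```

    Perfectly redundant items give alpha = 1; weakly related items pull it toward 0.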

  17. A κ-generalized statistical mechanics approach to income analysis

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2009-02-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.

  18. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful
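
    The κ-generalized framework rests on the κ-exponential, which reduces to the ordinary exponential as κ → 0 and decays as a power law for large arguments; this is precisely what lets one distribution cover both the bulk of incomes and the Pareto tail. A minimal sketch follows; the survival-function parameterization shown is illustrative, and the paper should be consulted for the exact form.

```python
import math

def exp_kappa(x, kappa):
    """kappa-exponential: (sqrt(1 + k^2 x^2) + k x)^(1/k).
    Reduces to exp(x) as kappa -> 0; power-law tails for large |x|."""
    if kappa == 0.0:
        return math.exp(x)
    return (math.sqrt(1.0 + kappa ** 2 * x ** 2) + kappa * x) ** (1.0 / kappa)

def kappa_survival(x, alpha, beta, kappa):
    """Complementary CDF P(X > x) = exp_kappa(-beta * x**alpha),
    an illustrative parameterization of the kappa-generalized model."""
    return exp_kappa(-beta * x ** alpha, kappa)
```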

  19. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    Science.gov (United States)

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808

  20. Social Anxiety Scale for Adolescents (SAS-A): Psychometric properties in a Spanish-speaking population

    Directory of Open Access Journals (Sweden)

    José Olivares

    2005-01-01

    Full Text Available The aim of this instrumental study was to examine the factor structure and psychometric properties of the Social Anxiety Scale for Adolescents (SAS-A). Participants were 2407 adolescents (1263 boys and 1144 girls), with a mean age of 15 years, from nine secondary schools in the region of Murcia. The results support the three-factor structure proposed by the scale's authors (FNE, SAD-New, SAD-General). Significant inter-scale correlations and high levels of internal consistency were found for the subscales, as well as effects of sex on the SAS-A total score and its subscales, with girls obtaining the higher scores. Significant age differences were found only for the FNE subscale, and no interaction effects between the two factors were observed. These findings appear to support the use of the SAS-A with Spanish-speaking adolescent populations.

  1. Riik võib lüüa SAS-i esmaspäevaks Estonian Airi omanikeringist välja (The state may oust SAS from Estonian Air's circle of owners by Monday) / Erik Müürsepp, Mikk Salu

    Index Scriptorium Estoniae

    Müürsepp, Erik

    2008-01-01

    SAS will take part in providing additional capital to Estonian Air only if the Estonian state sells it its stake in the airline. Presents the positions of Prime Minister Andrus Ansip and Minister of Economic Affairs and Communications Juhan Parts.

  2. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    Science.gov (United States)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistical data analysis application that can be accessed through mobile devices, making it easier for users to reach statistical tools. The application covers various topics in basic statistics together with parametric statistical data analysis. Its output is parametric statistical analysis that students, lecturers, and other users who need statistical calculations can obtain quickly and interpret easily. The Android application is developed in the Java programming language; the server side uses PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology is the Waterfall model, with stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lectures and to make statistical analysis on mobile devices easier for students to understand.
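
    As an example of the kind of parametric output such an app might expose, here is a two-sample Welch t statistic. This is a hypothetical illustration, in Python rather than the authors' Java/PHP stack, and not their implementation.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and its approximate degrees of
    freedom (Welch-Satterthwaite). Does not assume equal variances."""
    def mean_var(xs):
        m = sum(xs) / len(xs)
        v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
        return m, v
    ma, va = mean_var(a)
    mb, vb = mean_var(b)
    se2 = va / len(a) + vb / len(b)          # squared standard error
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / len(a)) ** 2 / (len(a) - 1)
                     + (vb / len(b)) ** 2 / (len(b) - 1))
    return t, df
```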

  3. Statistical analysis of metallicity in spiral galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Galeotti, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale]

    1981-04-01

    A principal component analysis of metallicity and other integral properties of 33 spiral galaxies is presented; the parameters involved are: morphological type, diameter, luminosity and metallicity. From the statistical analysis it is concluded that the sample has only two significant dimensions, and additional tests, involving different parameters, show similar results. Thus it seems that only type and luminosity are independent variables, with the other integral properties of spiral galaxies correlated with them.
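
    A minimal sketch of principal component analysis as used here: power iteration on the sample covariance matrix to extract the first component and its variance. This is illustrative only; the paper does not describe its computation, and the names are ours.

```python
import math

def principal_component(data, iters=200):
    """First principal component (unit vector) and its eigenvalue for
    row-observations `data`, via power iteration on the covariance matrix."""
    n, p = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(p)]
    centered = [[row[j] - means[j] for j in range(p)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(p)]
           for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]          # renormalize each step
    eigval = sum(v[i] * sum(cov[i][j] * v[j] for j in range(p))
                 for i in range(p))        # Rayleigh quotient
    return v, eigval
```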

  4. A survival analysis on critical components of nuclear power plants

    International Nuclear Information System (INIS)

    Durbec, V.; Pitner, P.; Riffard, T.

    1995-06-01

    Some tubes of heat exchangers of nuclear power plants may be affected by Primary Water Stress Corrosion Cracking (PWSCC) in highly stressed areas. These defects can shorten the lifetime of the component and lead to its replacement. In order to reduce the risk of cracking, a preventive remedial operation called shot peening was applied on the French reactors between 1985 and 1988. To assess and investigate the effects of shot peening, a statistical analysis was carried out on the tube degradation results obtained from the in-service inspections that are regularly conducted using non-destructive tests. The statistical method used is based on the Cox proportional hazards model, a powerful tool in the analysis of survival data, implemented in PROC PHREG, recently made available in SAS/STAT. This technique has a number of major advantages, including the ability to deal with censored failure-time data and with the complication of time-dependent covariates. The paper focuses on the modelling and on a presentation of the results given by SAS. They provide estimates of how the relative risk of degradation changes after peening and indicate for which values of the analyzed prognostic factors the treatment is likely to be most beneficial. (authors). 2 refs., 3 figs., 6 tabs
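
    The Cox model is fitted by maximizing a partial likelihood that accommodates censored failure times, which is what makes it suited to inspection data like these. A sketch of that partial log-likelihood for a single covariate with no tied event times (a simplified form of what PROC PHREG maximizes; the names are ours):

```python
import math

def cox_partial_loglik(beta, times, events, covariates):
    """Cox proportional-hazards partial log-likelihood, one covariate,
    no ties. events[i] is 1 for an observed failure, 0 for a censored
    observation; censored subjects contribute only through risk sets."""
    ll = 0.0
    for i in range(len(times)):
        if events[i]:
            # risk set: everyone still under observation at times[i]
            risk = [j for j in range(len(times)) if times[j] >= times[i]]
            denom = sum(math.exp(beta * covariates[j]) for j in risk)
            ll += beta * covariates[i] - math.log(denom)
    return ll
```

    Maximizing this function over beta (e.g. by Newton's method) gives the log relative risk; PROC PHREG also handles ties and time-dependent covariates.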

  5. Statistical Analysis of Protein Ensembles

    Science.gov (United States)

    Máté, Gabriell; Heermann, Dieter

    2014-04-01

    As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially since the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics that studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the sets of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
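
    The simplest instance of the barcode idea is zero-dimensional persistence: track when connected components of a point cloud merge as the distance scale grows. A sketch via union-find (illustrative only; real pipelines such as the authors' also track higher-dimensional features like loops):

```python
import math

def h0_barcode(points):
    """Zero-dimensional persistence barcode of a point cloud under the
    distance filtration: each point is born at scale 0, and a component
    dies when it merges with another (single-linkage via union-find)."""
    n = len(points)
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))
    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    bars = []
    for d, i, j in edges:             # process edges by increasing length
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            bars.append((0.0, d))     # a component dies at scale d
    bars.append((0.0, math.inf))      # one component persists forever
    return sorted(bars, key=lambda b: b[1])
```

    Two clusters far apart produce one long bar; comparing bar-length sets is the statistical step the abstract describes.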

  6. State analysis of BOP using statistical and heuristic methods

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Chang, Soon Heung

    2003-01-01

    Under the deregulation environment, the performance enhancement of the balance of plant (BOP) in nuclear power plants is being highlighted. To analyze the performance level of the BOP, the performance test procedures provided by an authorized institution such as ASME are used. However, plant investigation showed that the requirements of the performance test procedures regarding the reliability and number of sensors were difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommend a statistical linear regression model, obtained by analyzing correlations among BOP parameters, as the reference state analysis method. Its advantages are that its derivation is not heuristic, its model uncertainty can be calculated, and it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under normal as well as abnormal system states. Additionally, a neural network model was recommended, since the statistical model cannot be applied to the validation of all of the sensors and is sensitive to outliers, i.e., signals lying outside the statistical distribution. Because there are many sensors in the BOP that need to be validated, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and to enhance training accuracy. The outlier-localization capability of WA enhanced the robustness of the neural network. The trained neural network restored the degraded signals to values within ±3% of the true signals.
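
    A minimal sketch of the reference idea: regress one sensor on a correlated one and flag samples whose relative residual exceeds a roughly 3% band. Single-regressor and illustrative only; the paper's model uses many BOP parameters, and the names are ours.

```python
def validate_sensor(reference, target, tol=0.03):
    """Least-squares line predicting `target` sensor readings from a
    correlated `reference` sensor; flags samples whose relative residual
    exceeds `tol` (here a ~3% band, echoing the abstract's error level)."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(target) / n
    sxx = sum((x - mx) ** 2 for x in reference)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, target))
    slope = sxy / sxx
    intercept = my - slope * mx
    flags = [abs(y - (intercept + slope * x)) > tol * abs(y)
             for x, y in zip(reference, target)]
    return slope, intercept, flags
```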

  7. Precision Statistical Analysis of Images Based on Brightness Distribution

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2017-07-01

    Full Text Available Studying the content of images is an important topic through which reasonable and accurate analyses of images are generated. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life; these media, crowded with images, have brought image analysis to prominence as a research area. In this paper, the implemented system proceeds through several steps to compute the statistical measures of standard deviation and mean for both color and grey images, and the final step compares the results obtained in the different cases of the test phase. The statistical parameters are implemented to characterize the content of an image and its texture: standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean value via the implementation of the system. The major issue addressed in the work is brightness distribution via statistical measures under different types of lighting.
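
    The measures the paper relies on are straightforward to state. A sketch over a flat list of grey-level values, with the correlation measure the abstract also mentions (function names are ours):

```python
import math

def brightness_stats(pixels):
    """Mean and (population) standard deviation of grey-level values,
    the two measures used to characterize image content and texture."""
    n = len(pixels)
    mean = sum(pixels) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in pixels) / n)
    return mean, std

def correlation(a, b):
    """Pearson correlation between two equally sized images (flattened),
    used to compare intensity distributions."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den
```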

  8. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and associated p-value, giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant differences between groups were detected, demonstrating the value of the Fisher approach for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
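
    A sketch of the descriptive side of the Fisher framework: the mean direction, resultant length R, and the standard concentration estimate kappa ≈ (n−1)/(n−R) for unit vectors on the sphere. Inferential steps such as Watson's F test are omitted, and the names are ours.

```python
import math

def fisher_mean_direction(vectors):
    """Mean direction, resultant length R, and concentration estimate for
    a sample of 3D unit vectors (Fisher statistics, descriptive part).
    Tightly clustered directions give R close to n and large kappa."""
    n = len(vectors)
    sx = sum(v[0] for v in vectors)
    sy = sum(v[1] for v in vectors)
    sz = sum(v[2] for v in vectors)
    R = math.sqrt(sx * sx + sy * sy + sz * sz)   # resultant length
    mean_dir = (sx / R, sy / R, sz / R)
    kappa = (n - 1) / (n - R) if n > R else float("inf")
    return mean_dir, R, kappa
```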

  9. Statistical analysis of RHIC beam position monitors performance

    Science.gov (United States)

    Calaga, R.; Tomás, R.

    2004-04-01

    A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.

  10. Statistical analysis of RHIC beam position monitors performance

    Directory of Open Access Journals (Sweden)

    R. Calaga

    2004-04-01

    Full Text Available A detailed statistical analysis of beam position monitors (BPM performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
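
    A sketch of the Fourier side of such BPM diagnostics: a direct DFT that returns the dominant non-zero frequency in a turn-by-turn signal. This O(n²) form is illustrative only; production analyses use FFTs and SVD as the abstract describes, and the names are ours.

```python
import cmath
import math

def dominant_frequency(signal):
    """Return (frequency, magnitude) of the strongest non-zero DFT bin
    of a real turn-by-turn signal; frequency is in cycles per turn."""
    n = len(signal)
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2 + 1):           # skip the DC bin
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / n, best_mag
```

    A BPM whose spectrum lacks the expected betatron line, or is dominated by noise, is a candidate malfunction.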

  11. Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis

    Science.gov (United States)

    Reston, Enriqueta; Krishnan, Saras; Idris, Noraini

    2014-01-01

    This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…

  12. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  13. Selected papers on analysis, probability, and statistics

    CERN Document Server

    Nomizu, Katsumi

    1994-01-01

    This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.

  14. Assessment of SFR reactor safety issues: Part II: Analysis results of ULOF transients imposed on a variety of different innovative core designs with SAS-SFR

    Energy Technology Data Exchange (ETDEWEB)

    Kruessmann, R., E-mail: regina.kruessmann@kit.edu [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology INR, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Ponomarev, A.; Pfrang, W.; Struwe, D. [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology INR, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Champigny, J.; Carluec, B. [AREVA, 10, rue J. Récamier, 69456 Lyon Cedex 06 (France); Schmitt, D.; Verwaerde, D. [EDF R&D, 1 avenue du général de Gaulle, 92140 Clamart (France)

    2015-04-15

    Highlights: • Comparison of different core designs for a sodium-cooled fast reactor. • Safety assessment with the code system SAS-SFR. • Unprotected Loss of Flow (ULOF) scenario. • Sodium boiling and core melting cannot be avoided. • A net negative Na void effect provides more grace time prior to local SA destruction. - Abstract: In the framework of cooperation agreements between KIT-INR and AREVA SAS NP as well as between KIT-INR and EDF R&D in the years 2008–2013, severe transient behavior in sodium-cooled fast reactors (SFRs) was evaluated. In Part I of this contribution, the efficiency of newly conceived prevention and mitigation measures was investigated for unprotected loss-of-flow (ULOF), unprotected loss-of-heat-sink (ULOHS) and unprotected transient-overpower (UTOP) transients. In this second part, consequence analyses were performed for the initiation phase of different unprotected loss-of-flow (ULOF) scenarios imposed on a variety of different core design options of SFRs. The code system SAS-SFR was used for this purpose. Results of analyses for cases postulating unavailability of prevention measures such as shut-down systems and passive and/or active additional devices show that entering into an energetic power excursion as a consequence of the initiation phase of a ULOF cannot be avoided for those core designs with a cumulative void reactivity feedback larger than zero. However, even for core designs aiming at values of the void reactivity less than zero, it is difficult to find system design characteristics which prevent the transient from entering into partial core destruction. Further studies of the transient core and system behavior would require codes dedicated to specific aspects of transition phase analyses and of in-vessel material relocation analyses.

  15. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because every person must be able to manage and use information from all over the world, which can now be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. This skill can be developed through various levels of education. However, the skill remains low because many people, students included, assume that statistics is just counting and using formulas. Students also still have negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample comprised 32 mathematics education students who had taken the descriptive statistics course. The mean score on the misconception test was 49.7 (standard deviation 10.6), whereas the mean score on the statistical reasoning skill test was 51.8 (standard deviation 8.5). If a minimum score of 65 is taken as the standard for achieving course competence, the students' mean scores are lower than the standard. The misconception results emphasize which subtopics should be reconsidered; based on the assessment, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concepts to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  16. Analysis of half diallel mating designs I: a practical analysis procedure for ANOVA approximation.

    Science.gov (United States)

    G.R. Johnson; J.N. King

    1998-01-01

    Procedures to analyze half-diallel mating designs using the SAS statistical package are presented. The procedure requires two runs of PROC VARCOMP and results in estimates of additive and non-additive genetic variation. The procedures described can be modified to work with most statistical software packages that can compute variance component estimates. The...

  17. Comparative analysis of positive and negative attitudes toward statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers want to know their students' attitudes toward statistics during the statistics course. A positive attitude toward statistics is vital, because it encourages students to take an interest in the course and to master its core content. Students who have negative attitudes toward statistics feel discouraged, especially in group assignments, are at risk of failure, are often highly emotional, and struggle to move forward. This study therefore investigates students' attitudes towards learning statistics. Six latent constructs measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes Toward Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP); the respondents were students from different faculties taking the applied statistics course. The analysis found the questionnaire acceptable, and the relationships among the constructs were proposed and investigated. Students showed full effort to master the statistics course, found the course enjoyable, were confident in their intellectual capacity, and held more positive than negative attitudes towards learning statistics. In conclusion, positive attitudes toward statistics were mostly exhibited in the affect, cognitive competence, value, interest and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.

  18. Vapor Pressure Data Analysis and Statistics

    Science.gov (United States)

    2016-12-01

    VAPOR PRESSURE DATA ANALYSIS AND STATISTICS, ECBC-TR-1422, Ann Brozena, Research and Technology Directorate. ...near 8, 2000, and 200, respectively. The A (or a) value is directly related to vapor pressure and will be greater for high vapor pressure materials... (10) where n is the number of data points, Yi is the natural logarithm of the ith experimental vapor pressure value, and Xi is the...

  19. Statistical analysis of planktic foraminifera of the surface Continental ...

    African Journals Online (AJOL)

    Planktic foraminiferal assemblage recorded from selected samples obtained from shallow continental shelf sediments off southwestern Nigeria were subjected to statistical analysis. The Principal Component Analysis (PCA) was used to determine variants of planktic parameters. Values obtained for these parameters were ...

  20. Imaging mass spectrometry statistical analysis.

    Science.gov (United States)

    Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A

    2012-08-30

    Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the reporting of many data analysis routines, combined with inadequate training and accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Comparison of SAS3A and MELT-III predictions for a transient overpower hypothetical accident

    International Nuclear Information System (INIS)

    Wilburn, N.P.

    1976-01-01

    A comparison is made of the predictions of the two major codes SAS3A and MELT-III for the hypothetical unprotected transient overpower accident in the FFTF. The predictions of temperatures, fuel restructuring, fuel melting, reactivity feedbacks, and core power are compared

  2. Applied Behavior Analysis and Statistical Process Control?

    Science.gov (United States)

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  3. Statistical analysis of JET disruptions

    International Nuclear Information System (INIS)

    Tanga, A.; Johnson, M.F.

    1991-07-01

    In the operation of JET, as of any tokamak, many discharges are terminated by a major disruption. The disruptive termination of a discharge is usually an unwanted event which may cause damage to the structure of the vessel. In a reactor, disruptions are potentially a very serious problem, hence the importance of studying them and devising methods to avoid them. Statistical information has been collected about the disruptions which have occurred at JET over a long span of operations. The analysis is focused on the operational aspects of the disruptions rather than on the underlying physics. (Author)

  4. Analysis of Large Genomic Data in Silico: The EPICNorfolk Study of Obesity

    DEFF Research Database (Denmark)

    Zhao, Jing Hua; Luan, Jian'an; Tan, Qihua

    In human genetics, large-scale data are now available with advances in genotyping technologies and international collaborative projects. Our ongoing study of obesity involves Affymetrix 500k genechips on approximately 7000 individuals from the European Prospective Investigation of Cancer (EPIC) Norfolk study. Although the scale of our data is well beyond the ability of many software systems, we have successfully performed the analysis using the statistical analysis system (SAS) software. Our implementation trades memory with computing time and requires moderate hardware configuration. By using...

  5. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic

  6. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Patel, Bimal; Heising, C.D.

    1997-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are under control specification limits and four parameters are not in the state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (Author)
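The Shewhart X-bar and R charts named in this record can be sketched in a few lines of plain Python. The subgroup measurements below are illustrative values, not plant data; A2, D3, and D4 are the standard tabulated control-chart constants for rational subgroups of size 5.

```python
# Shewhart X-bar and R control limits for rational subgroups of size 5,
# using the standard tabulated constants A2 = 0.577, D3 = 0, D4 = 2.114.
# The measurements are illustrative, not real RCP process data.
subgroups = [
    [5.02, 4.98, 5.01, 5.00, 4.99],
    [5.03, 5.00, 4.97, 5.01, 5.02],
    [4.99, 5.01, 5.00, 4.98, 5.00],
]
A2, D3, D4 = 0.577, 0.0, 2.114

xbars = [sum(g) / len(g) for g in subgroups]          # subgroup means
ranges = [max(g) - min(g) for g in subgroups]         # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                     # grand mean (center line)
rbar = sum(ranges) / len(ranges)                      # mean range (center line)

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
print(f"X-bar chart: CL={xbarbar:.3f} LCL={lcl_x:.3f} UCL={ucl_x:.3f}")
print(f"R chart:     CL={rbar:.3f} LCL={lcl_r:.3f} UCL={ucl_r:.3f}")
```

A parameter is flagged as out of statistical control when a subgroup mean or range falls outside these limits, which is the early-warning logic the abstract describes.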

  7. Research and Development of Statistical Analysis Software System of Maize Seedling Experiment

    OpenAIRE

    Hui Cao

    2014-01-01

    In this study, software engineering measures were used to develop a software system for the statistics and analysis of maize seedling experiments. During development, the B/S structure software design method was used and a set of statistical indicators for maize seedling evaluation was established. The experimental results indicated that this software system could carry out quality statistics and analysis for maize seedlings very well. The development of this software system explored a...

  8. Statistical trend analysis methods for temporal phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
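Of the four non-parametric tests listed, the Cox-Stuart test is the simplest to sketch: pair each observation in the first half of the series with its counterpart in the second half and test whether increases outnumber decreases under a binomial null. A minimal pure-Python version (the event-rate series is illustrative, not from the report):

```python
import math

def cox_stuart_test(series):
    """Cox-Stuart trend test: pair first-half and second-half observations,
    count positive differences, and return (pairs, positives, two-sided
    binomial p-value) under H0: P(increase) = 1/2."""
    n = len(series)
    half = n // 2
    # When n is odd the middle observation is dropped.
    pairs = zip(series[:half], series[n - half:])
    signs = [b - a for a, b in pairs if b != a]   # ties are discarded
    m, pos = len(signs), sum(1 for s in signs if s > 0)
    k = max(pos, m - pos)
    p_one = sum(math.comb(m, i) for i in range(k, m + 1)) / 2 ** m
    return m, pos, min(1.0, 2 * p_one)

# A clearly increasing yearly event-count series:
m, pos, p = cox_stuart_test([1, 2, 2, 3, 5, 6, 8, 9, 12, 15])
print(m, pos, round(p, 4))  # → 5 5 0.0625
```

With only five pairs the smallest attainable two-sided p-value is 2/32 = 0.0625, which illustrates why the report stresses the low power of such tests for rare events.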

  9. Statistical trend analysis methods for temporal phenomena

    International Nuclear Information System (INIS)

    Lehtinen, E.; Pulkkinen, U.; Poern, K.

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods

  10. StOCNET : Software for the statistical analysis of social networks

    NARCIS (Netherlands)

    Huisman, M.; van Duijn, M.A.J.

    2003-01-01

    StOCNET is an open software system in a Windows environment for the advanced statistical analysis of social networks. It provides a platform to make a number of recently developed and therefore not (yet) standard statistical methods available to a wider audience. A flexible user interface utilizing

  11. AutoBayes: A System for Generating Data Analysis Programs from Statistical Models

    OpenAIRE

    Fischer, Bernd; Schumann, Johann

    2003-01-01

    Data analysis is an important scientific task which is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis...

  12. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...

  13. Statistical analysis and interpolation of compositional data in materials science.

    Science.gov (United States)

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
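One standard specialized tool for compositional data (not named explicitly in this abstract, but central to the CDA framework it describes) is Aitchison's centered log-ratio (clr) transform, which maps a composition from the simplex into ordinary Euclidean space where means, distances, and correlations are well defined. A minimal sketch with an illustrative three-part composition:

```python
import math

def closure(x):
    """Rescale a composition so its parts sum to 1."""
    s = sum(x)
    return [v / s for v in x]

def clr(x):
    """Centered log-ratio transform: log of each part divided by the
    geometric mean of all parts. The transformed values sum to zero and
    live in Euclidean space, so conventional statistics apply."""
    logs = [math.log(v) for v in x]
    g = sum(logs) / len(logs)        # log of the geometric mean
    return [lv - g for lv in logs]

comp = closure([20.0, 30.0, 50.0])   # e.g. atomic concentrations in percent
z = clr(comp)
print([round(v, 4) for v in z])      # → [-0.4406, -0.0351, 0.4757]
```

Note the constant-sum constraint survives the transform as a zero-sum constraint, which is why clr coordinates are invariant to rescaling the parent composition.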

  14. An Application of Multivariate Statistical Analysis for Query-Driven Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Gosink, Luke J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Garth, Christoph [Univ. of California, Davis, CA (United States); Anderson, John C. [Univ. of California, Davis, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Joy, Kenneth I. [Univ. of California, Davis, CA (United States)

    2011-03-01

    Driven by the ability to generate ever-larger, increasingly complex data, there is an urgent need in the scientific community for scalable analysis methods that can rapidly identify salient trends in scientific data. Query-Driven Visualization (QDV) strategies are among the small subset of techniques that can address both large and highly complex datasets. This paper extends the utility of QDV strategies with a statistics-based framework that integrates non-parametric distribution estimation techniques with a new segmentation strategy to visually identify statistically significant trends and features within the solution space of a query. In this framework, query distribution estimates help users to interactively explore their query's solution and visually identify the regions where the combined behavior of constrained variables is most important, statistically, to their inquiry. Our new segmentation strategy extends the distribution estimation analysis by visually conveying the individual importance of each variable to these regions of high statistical significance. We demonstrate the analysis benefits these two strategies provide and show how they may be used to facilitate the refinement of constraints over variables expressed in a user's query. We apply our method to datasets from two different scientific domains to demonstrate its broad applicability.

  15. Distribution of sasX, pvl, and qacA/B genes in epidemic methicillin-resistant Staphylococcus aureus strains isolated from East China

    Directory of Open Access Journals (Sweden)

    Kong H

    2018-01-01

    Full Text Available Haishen Kong,1,2 Lingmei Fang,3 Rujin Jiang,4 Jixiang Tong2 1State Key Laboratory for Diagnosis and Treatment of Infectious Diseases, Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China; 2Key Laboratory of Clinical In Vitro Diagnostic Techniques of Zhejiang Province, Department of Laboratory Medicine, First Affiliated Hospital, College of Medicine, Zhejiang University, Hangzhou, China; 3Clinical Laboratory, Chunan First People’s Hospital, Zhejiang Province People’s Hospital Chunan Branch, Hangzhou, China; 4Clinical Laboratory, Yuhang Hospital of Traditional Chinese Medicine, Hangzhou, China Background: Methicillin-resistant Staphylococcus aureus (MRSA is a major nosocomial pathogen. Various virulence and antiseptic-resistant factors increase the pathogenicity of MRSA strains and allow for increased infection rates.Purpose: The purpose of this study was to investigate the prevalence and distribution of virulence-associated and antiseptic-resistant genes from epidemic MRSA strains isolated from East China.Methods: A newly designed multiplex PCR assay was used to assess whether the virulence-associated genes sasX and pvl and the chlorhexidine tolerance gene qacA/B were present in 189 clinical isolates of MRSA. Multilocus sequence typing (MLST and Staphylococcal protein A (spa typing of these isolates were also performed. The frequency of these genes in isolates with epidemic sequence types (STs was investigated. Results: Twenty STs and 36 spa types with five epidemic clones (ST5-t311, ST59-t437, ST5-t002, ST239-t030, and ST239-t037 were identified. The prevalence of sasX, pvl, and qacA/B in all isolates was 5.8%, 10.1%, and 20.1%, respectively. The prevalences of these genes in isolates with ST5, ST59, ST239, and other ST genetic backgrounds were all significantly different (P<0.001. Isolates that had the highest frequency of sas

  16. Explorations in Statistics: The Analysis of Ratios and Normalized Data

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized--or standardized--data. As researchers, we compute a ratio--a numerator divided by a denominator--to compute a…

  17. Statistical Energy Analysis (SEA) and Energy Finite Element Analysis (EFEA) Predictions for a Floor-Equipped Composite Cylinder

    Science.gov (United States)

    Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.

    2011-01-01

    Comet Enflow is a commercially available, high-frequency vibroacoustic analysis software package founded on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). EFEA was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) predictions and experimental results. The SEA predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.

  18. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is

  19. Statistical trend analysis methodology for rare failures in changing technical systems

    International Nuclear Information System (INIS)

    Ott, K.O.; Hoffmann, H.J.

    1983-07-01

    A methodology for a statistical trend analysis (STA) in failure rates is presented. It applies primarily to relatively rare events in changing technologies or components. The formulation is more general and the assumptions are less restrictive than in a previously published version. Relations of the statistical analysis and probabilistic assessment (PRA) are discussed in terms of categorization of decisions for action following particular failure events. The significance of tentatively identified trends is explored. In addition to statistical tests for trend significance, a combination of STA and PRA results quantifying the trend complement is proposed. The STA approach is compared with other concepts for trend characterization. (orig.)

  20. EFFECT OF NPK FERTILIZER ON FRUIT YIELD AND YIELD ...

    African Journals Online (AJOL)

    2013-06-03

    Jun 3, 2013 ... peasant farmers in Nigeria. With the increased ... did not significantly (p=0.05) increase the fruit yield nor the seed yield. Key words: NPK fertilizer, Fruit ..... SAS (Statistical Analysis System) Version 9.1. SAS Institute Inc., Cary, ...

  1. Analysis of thrips distribution: application of spatial statistics and Kriging

    Science.gov (United States)

    John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard

    1991-01-01

    Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...

  2. Statistical wind analysis for near-space applications

    Science.gov (United States)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data; the 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
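The 50%, 95%, and 99% winds mentioned above come from inverting the Weibull cumulative distribution function F(v) = 1 - exp(-(v/c)^k). A minimal sketch; the shape k = 1.8 and scale c = 12 m/s are hypothetical fit values for one altitude band, not parameters from the paper:

```python
import math

def weibull_quantile(p, shape_k, scale_c):
    """Invert the Weibull CDF F(v) = 1 - exp(-(v/c)^k) to get the wind
    speed not exceeded with probability p."""
    return scale_c * (-math.log(1.0 - p)) ** (1.0 / shape_k)

# Hypothetical fit for one altitude band: k = 1.8, c = 12 m/s.
k, c = 1.8, 12.0
for p in (0.50, 0.95, 0.99):
    print(f"{int(p * 100)}% wind: {weibull_quantile(p, k, c):.1f} m/s")
```

The gap between the median and the 99% wind (here roughly 9.8 versus 28.0 m/s) illustrates the paper's point that sizing an airship's power system from the mean and standard deviation alone can underestimate operational conditions.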

  3. Analysis of photon statistics with Silicon Photomultiplier

    International Nuclear Information System (INIS)

    D'Ascenzo, N.; Saveliev, V.; Wang, L.; Xie, Q.

    2015-01-01

    The Silicon Photomultiplier (SiPM) is a novel silicon-based photodetector, which represents the modern perspective of low photon flux detection. The aim of this paper is to provide an introduction on the statistical analysis methods needed to understand and estimate in quantitative way the correct features and description of the response of the SiPM to a coherent source of light
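For a coherent (Poissonian) light source like the one this record describes, one common SiPM analysis step is estimating the mean photon number from the pedestal (zero-count) fraction alone, since P(0) = exp(-mu). The sketch below simulates counts with an illustrative true mean of 2.0; it is a generic demonstration of the estimator, not the paper's actual analysis chain:

```python
import math
import random

random.seed(3)

def poisson(mu):
    """Draw one Poisson variate via Knuth's multiplication method."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Simulated photoelectron counts per light pulse; mu = 2.0 is illustrative.
counts = [poisson(2.0) for _ in range(20000)]

# Pedestal method: P(0) = exp(-mu)  =>  mu = -ln P(0).
p0 = counts.count(0) / len(counts)
mu_hat = -math.log(p0)
print(f"pedestal fraction {p0:.4f}, estimated mean {mu_hat:.3f}")
```

The pedestal estimator is attractive in practice because the zero-photoelectron peak is unaffected by crosstalk and afterpulsing, which distort the higher peaks of the SiPM charge spectrum.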

  4. Development of statistical analysis code for meteorological data (W-View)

    International Nuclear Information System (INIS)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori

    2003-03-01

    A computer code (W-View: Weather View) was developed to analyze the meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactor' (Nuclear Safety Commission on January 28, 1982; revised on March 29, 2001). The code gives statistical meteorological data to assess the public dose in case of normal operation and severe accident to get the license of nuclear reactor operation. This code was revised from the original code used in a large office computer code to enable a personal computer user to analyze the meteorological data simply and conveniently and to make the statistical data tables and figures of meteorology. (author)

  5. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1998-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach to plant maintenance and control, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R-charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are under control specification limits and four parameters are not in the state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (author)

  6. Development and application of the automated Monte Carlo biasing procedure in SAS4

    International Nuclear Information System (INIS)

    Tang, J.S.; Broadhead, B.L.

    1995-01-01

    An automated approach for biasing Monte Carlo shielding calculations is described. In particular, adjoint fluxes from a one-dimensional discrete-ordinates calculation are used to generate biasing parameters for a three-dimensional Monte Carlo calculation. The automated procedure consisting of cross-section processing, adjoint flux determination, biasing parameter generation, and the initiation of a MORSE-SGC/S Monte Carlo calculation has been implemented in the SAS4 module of the SCALE computer code system. (author)

  7. Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers

    Science.gov (United States)

    Keiffer, Greggory L.; Lane, Forrest C.

    2016-01-01

    Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
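The matching step of PSA described above can be sketched independently of how the scores are estimated. In the toy example below the propensity scores are hypothetical, precomputed values; in practice they would come from a logistic regression of treatment assignment on the covariates.

```python
# Greedy 1:1 nearest-neighbor matching on hypothetical, precomputed
# propensity scores, followed by an average-treatment-effect-on-the-treated
# (ATT) estimate over the matched pairs. All numbers are illustrative.
treated  = [(0.62, 14.0), (0.55, 12.5), (0.71, 15.1)]       # (score, outcome)
controls = [(0.30, 9.0), (0.58, 12.0), (0.52, 11.4),
            (0.69, 14.0), (0.81, 15.5)]

available = list(controls)
pairs = []
for score_t, y_t in treated:
    # Match each treated unit to the closest remaining control (no reuse).
    match = min(available, key=lambda c: abs(c[0] - score_t))
    available.remove(match)
    pairs.append((y_t, match[1]))

att = sum(y_t - y_c for y_t, y_c in pairs) / len(pairs)
print(f"matched pairs: {len(pairs)}, ATT estimate: {att:.2f}")
```

Matching without replacement, as here, trades some bias for independence of the matched pairs; calipers on the score distance are a common refinement when close controls are scarce.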

  8. Simulation Experiments in Practice: Statistical Design and Regression Analysis

    OpenAIRE

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...
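The DOE-plus-regression approach Kleijnen advocates can be sketched with the smallest useful design, a 2^2 factorial in coded -1/+1 levels. Because the design matrix is orthogonal, the least-squares coefficients reduce to signed averages of the response. The responses below are illustrative numbers, not data from the article:

```python
# A 2^2 factorial design (coded levels -1/+1) fitted by least squares.
# Orthogonality makes each coefficient a signed average of the responses.
design = [(-1, -1), (1, -1), (-1, 1), (1, 1)]   # all factor combinations
y      = [10.0, 14.0, 12.0, 18.0]               # illustrative I/O responses

n = len(y)
b0  = sum(y) / n                                             # intercept
b1  = sum(x1 * yi for (x1, _), yi in zip(design, y)) / n     # factor 1 effect
b2  = sum(x2 * yi for (_, x2), yi in zip(design, y)) / n     # factor 2 effect
b12 = sum(x1 * x2 * yi for (x1, x2), yi in zip(design, y)) / n  # interaction
print(f"y = {b0:.1f} + {b1:.1f}*x1 + {b2:.1f}*x2 + {b12:.1f}*x1*x2")
```

The interaction coefficient b12 is exactly the information a one-factor-at-a-time experiment cannot deliver, which is the article's core argument for DOE.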

  9. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via two-factor completely randomized design and comparison of data means is carried out with Duncan's multiple-range test. Statistical analysis of experimental data show that temperature and weight fraction have a reasonable impact on the thermal ...

  10. Longitudinal data analysis a handbook of modern statistical methods

    CERN Document Server

    Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert

    2008-01-01

    Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint

  11. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  12. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  13. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  14. Quantitative analysis and IBM SPSS statistics a guide for business and finance

    CERN Document Server

    Aljandali, Abdulkader

    2016-01-01

This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting, while Part II will present multivariate statistical methods and more advanced forecasting methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. Just a small subset of users of the package includes the major clearing banks, the BBC, British Gas, British Airway...

  15. SAS4A and FPIN2X validation for slow ramp TOP accidents: experiments TS-1 and TS-2

    International Nuclear Information System (INIS)

    Hill, D.J.

    1986-01-01

The purpose of this paper is to present further results in the series of experimental analyses being performed using SAS4A and FPIN2X in order to provide a systematic validation of these codes. The two experiments discussed here, TS-1 and TS-2, were performed by Westinghouse Hanford/Hanford Engineering Development Laboratory (WHC/HEDL) in the Transient Reactor Test (TREAT) Facility. They were slow-ramp transient overpower (TOP), single-pin experiments in flowing sodium loops at an equivalent Fast Flux Test Facility (FFTF) ramp rate of ∼5 cents/s. The good agreement found here adds significantly to the experimental database that provides the foundation for SAS4A and FPIN2X validation. It also shows that prefailure internal fuel motion is a phenomenon that has to be correctly accounted for, not only as a potential inherent safety mechanism, but also before any accurate prediction of fuel failure and subsequent fuel motion and the associated reactivity effects can be made. This is also true for metal-fueled pins. This capability is provided by PINACLE, which is being incorporated into SAS4A.

  16. What type of statistical model to choose for the analysis of radioimmunoassays

    International Nuclear Information System (INIS)

    Huet, S.

    1984-01-01

The current techniques used for statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to make the response curve linear to avoid complicated computations. The present article shows that this practice has considerable effects (often neglected) on the statistical assumptions which must be formulated. A stricter analysis is proposed by applying the four-parameter logistic model. The advantages of this method are: the statistical assumptions formulated are based on observed data, and the model can be applied to almost all radioimmunoassays. [fr]
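The four-parameter logistic (4PL) model mentioned above has a closed form and a closed-form inverse, so no linearization is needed. A minimal Python sketch with hypothetical assay parameters (not taken from the article):

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic response:
    a = response at zero dose, d = response at infinite dose,
    c = inflection point (ED50), b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def inverse_four_pl(y, a, b, c, d):
    """Recover the concentration that produced response y
    by algebraically inverting the 4PL curve."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

# Round trip: a concentration maps to a response and back.
a, b, c, d = 2.0, 1.2, 50.0, 0.05   # hypothetical parameters
x = 20.0
y = four_pl(x, a, b, c, d)
x_back = inverse_four_pl(y, a, b, c, d)
```

In practice the parameters are fitted to the standard curve (e.g. by weighted nonlinear least squares) before unknowns are read off via the inverse.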

  17. Computerized statistical analysis with bootstrap method in nuclear medicine

    International Nuclear Information System (INIS)

    Zoccarato, O.; Sardina, M.; Zatta, G.; De Agostini, A.; Barbesti, S.; Mana, O.; Tarolo, G.L.

    1988-01-01

Statistical analysis of data samples involves some hypotheses about the features of the data themselves. The accuracy of these hypotheses can influence the results of statistical inference. Among the new methods of computer-aided statistical analysis, the bootstrap method appears to be one of the most powerful, thanks to its ability to reproduce many artificial samples starting from a single original sample and because it works without hypotheses about the data distribution. The authors applied the bootstrap method to two typical situations of a Nuclear Medicine Department. The determination of the normal range of serum ferritin, as assessed by radioimmunoassay and defined by the mean value ± 2 standard deviations, starting from an experimental sample of small dimension, shows an unacceptable lower limit (plasma ferritin levels below zero). On the contrary, the results obtained by elaborating 5000 bootstrap samples yield an interval of values (10.95 ng/ml - 72.87 ng/ml) corresponding to the normal ranges commonly reported. Moreover, the authors applied the bootstrap method in evaluating the possible error associated with the correlation coefficient determined between left ventricular ejection fraction (LVEF) values obtained by first-pass radionuclide angiocardiography with 99mTc and 195mAu. The results obtained indicate a high degree of statistical correlation and give the range of r² values to be considered acceptable for this type of study.
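The contrast the authors draw (mean ± 2 SD versus resampling) can be sketched with a percentile bootstrap. This stdlib sketch resamples the mean of a hypothetical, right-skewed ferritin sample for illustration; the paper's exact resampled statistic may differ:

```python
import random
import statistics

def bootstrap_interval(sample, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap interval for the sample mean.

    Resamples the original sample with replacement n_boot times and
    takes the empirical alpha/2 and 1-alpha/2 percentiles of the
    resampled means. No distributional assumption is made, and the
    limits can never leave the range of the observed data."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(sample, k=len(sample)))
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical small, right-skewed ferritin sample (ng/ml).
ferritin = [8, 10, 12, 15, 18, 22, 30, 45, 60, 110]
lo, hi = bootstrap_interval(ferritin)
```

Unlike mean ± 2 SD on a skewed sample, the percentile limits are bounded by the observed values, so a negative lower reference limit cannot occur for positive-valued data.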

  18. Software for statistical data analysis used in Higgs searches

    International Nuclear Information System (INIS)

    Gumpert, Christian; Moneta, Lorenzo; Cranmer, Kyle; Kreiss, Sven; Verkerke, Wouter

    2014-01-01

The analysis and interpretation of data collected by the Large Hadron Collider (LHC) requires advanced statistical tools in order to quantify the agreement between observation and theoretical models. RooStats is a project providing a statistical framework for data analysis with the focus on discoveries, confidence intervals and combination of different measurements in both Bayesian and frequentist approaches. It employs the RooFit data modelling language where mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. RooStats and RooFit rely on the persistency technology of the ROOT framework. The usage of a common data format enables the concept of digital publishing of complicated likelihood functions. The statistical tools have been developed in close collaboration with the LHC experiments to ensure their applicability to real-life use cases. Numerous physics results have been produced using the RooStats tools, with the discovery of the Higgs boson by the ATLAS and CMS experiments being certainly the most popular among them. We will discuss tools currently used by LHC experiments to set exclusion limits, to derive confidence intervals and to estimate discovery significances based on frequentist statistics and the asymptotic behaviour of likelihood functions. Furthermore, new developments in RooStats and performance optimisation necessary to cope with complex models depending on more than 1000 variables will be reviewed.

  19. PRECISE - pregabalin in addition to usual care: Statistical analysis plan

    NARCIS (Netherlands)

    S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)

    2016-01-01

    textabstractBackground: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double

  20. Statistical margin to DNB safety analysis approach for LOFT

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1982-01-01

A method was developed and used for LOFT thermal safety analysis to estimate the statistical margin to DNB for the hot rod, and to base safety analysis on desired DNB probability limits. This method is an advanced approach using response surface analysis methods, a very efficient experimental design, and a 2nd-order response surface equation with a 2nd-order error propagation analysis to define the MDNBR probability density function. Calculations for limiting transients were used in the response surface analysis, thereby including transient interactions and trip uncertainties in the MDNBR probability density function.

  1. Multivariate statistical analysis of atom probe tomography data

    International Nuclear Information System (INIS)

    Parish, Chad M.; Miller, Michael K.

    2010-01-01

    The application of spectrum imaging multivariate statistical analysis methods, specifically principal component analysis (PCA), to atom probe tomography (APT) data has been investigated. The mathematical method of analysis is described and the results for two example datasets are analyzed and presented. The first dataset is from the analysis of a PM 2000 Fe-Cr-Al-Ti steel containing two different ultrafine precipitate populations. PCA properly describes the matrix and precipitate phases in a simple and intuitive manner. A second APT example is from the analysis of an irradiated reactor pressure vessel steel. Fine, nm-scale Cu-enriched precipitates having a core-shell structure were identified and qualitatively described by PCA. Advantages, disadvantages, and future prospects for implementing these data analysis methodologies for APT datasets, particularly with regard to quantitative analysis, are also discussed.
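The PCA step described above reduces to an eigendecomposition of the data covariance matrix. A minimal NumPy sketch on hypothetical two-phase composition data (toy values standing in for APT voxel compositions, not the paper's data):

```python
import numpy as np

def pca(data):
    """PCA via eigendecomposition of the covariance matrix.
    Returns eigenvalues (descending) and matching eigenvectors
    as columns."""
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]
    return evals[order], evecs[:, order]

# Toy stand-in for voxel compositions from two phases: matrix-like
# points and precipitate-like points separated along one direction.
comps = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.2],
                  [10.0, 1.0], [11.0, 1.1], [12.0, 1.2]])
evals, evecs = pca(comps)
ratio = evals[0] / evals.sum()
```

When the chemistry varies chiefly along one composition direction, the first component captures essentially all of the variance, which is what makes the matrix/precipitate separation "simple and intuitive" in a PCA score plot.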

  2. Development of statistical analysis code for meteorological data (W-View)

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    A computer code (W-View: Weather View) was developed to analyze the meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactor' (Nuclear Safety Commission on January 28, 1982; revised on March 29, 2001). The code gives statistical meteorological data to assess the public dose in case of normal operation and severe accident to get the license of nuclear reactor operation. This code was revised from the original code used in a large office computer code to enable a personal computer user to analyze the meteorological data simply and conveniently and to make the statistical data tables and figures of meteorology. (author)

3. Cross-cultural adaptation of the Sport Anxiety Scale-2 (SAS-2) for the Brazilian context

    Directory of Open Access Journals (Sweden)

    Viviane Vedovato Silva-Rocha

Objective: To present the process of cross-cultural adaptation of the Sport Anxiety Scale-2 (SAS-2) for the Brazilian context. Method: The following stages were used: translation into Brazilian Portuguese by independent translators, elaboration of a synthesis version, back-translation, evaluation by experts, and a pretest with the target population. Results: All stages of cross-cultural adaptation were completed, and good concordance between experts (≥ 80%) was obtained for the majority of items evaluated. Suggested adjustments were compiled into the consensus version by the two authors, and the resulting material was considered adequate in the pretest (thus no further changes were needed). Titled "Escala de Ansiedade Esportiva-2," the final version was considered by the main author of the original scale to be the official version in Brazilian Portuguese. Conclusions: With all steps of the cross-cultural adaptation process fulfilled, the SAS-2 is now available in Brazilian Portuguese to be tested for its psychometric qualities.

  4. CORSSA: Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.

    2011-12-01

Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology, especially to those aspects with great impact on public policy, statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles, with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include: introductions to both CORSSA and statistical seismology; basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.

  5. Recent advances in statistical energy analysis

    Science.gov (United States)

    Heron, K. H.

    1992-01-01

Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a function of the modal formulation than a necessary element of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. The conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.

  6. Statistical analysis of tourism destination competitiveness

    Directory of Open Access Journals (Sweden)

    Attilio Gardini

    2013-05-01

The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds, and we develop a statistical test of the model on sample data based on Italian tourist destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October-December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to the share of inclusion, because the share of inclusion is determined by the brand image, while the selection of the effective holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points to a risk of market-share reduction for the Italian tourism system in the global market. We also find a snowball effect which favours the most popular destinations, mainly in the northern Italian regions.

  7. Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia

    International Nuclear Information System (INIS)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge; Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis

    2015-01-01

Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. Ten raters analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA were high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
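The Fleiss' kappa values quoted above measure chance-corrected agreement among multiple raters. A stdlib Python sketch of the statistic, applied to hypothetical ratings rather than the study's data:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for agreement among n raters over N subjects.

    `ratings` is an N x k matrix of counts: ratings[i][j] is the
    number of raters assigning subject i to category j (each row
    sums to the number of raters n)."""
    N = len(ratings)
    n = sum(ratings[0])
    k = len(ratings[0])
    # Overall proportion of assignments falling in each category.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-subject observed agreement.
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N                  # mean observed agreement
    P_e = sum(pj * pj for pj in p)      # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement: five raters all pick the same category per subject.
perfect = [[5, 0, 0], [0, 5, 0], [0, 0, 5], [5, 0, 0]]
kappa = fleiss_kappa(perfect)
```

Kappa of 1 corresponds to perfect agreement; values around 0.56 (the visual analysis above) are conventionally read as "moderate" and 0.76-0.88 as "substantial".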

  8. Australasian Resuscitation In Sepsis Evaluation trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-10-01

The Australasian Resuscitation In Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the ED with severe sepsis. In keeping with current practice, and taking into consideration aspects of trial design and reporting specific to non-pharmacological interventions, this document outlines the principles and methods for analysing and reporting the trial results. The document was prepared prior to completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee, and prior to completion of the two related international studies. The statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. The data collected by the research team as specified in the study protocol, and detailed in the study case report form, were reviewed. Information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation and other related therapies, and other relevant data are described with appropriate comparisons between groups. The primary, secondary and tertiary outcomes for the study are defined, with description of the planned statistical analyses. A statistical analysis plan was developed, along with a trial profile, mock-up tables and figures. A plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies, along with adverse events, is described. The primary, secondary and tertiary outcomes are described along with identification of subgroups to be analysed. A statistical analysis plan for the ARISE study has been developed, and is available in the public domain, prior to the completion of recruitment into the

  9. Development of the JFT-2M data analysis software system on the mainframe computer

    International Nuclear Information System (INIS)

    Matsuda, Toshiaki; Amagai, Akira; Suda, Shuji; Maemura, Katsumi; Hata, Ken-ichiro.

    1990-11-01

We developed a software system on the FACOM mainframe computer to analyze JFT-2M experimental data archived by the JFT-2M data acquisition system. This allows us to reduce and distribute the CPU load of the data acquisition system, and to analyze JFT-2M experimental data using complicated computational codes on the raw data, such as equilibrium calculation and transport analysis, as well as useful software packages such as the SAS statistics package on the mainframe. (author)

  10. OECD benchmark A of MOX-fueled PWR unit cells using SAS2H, TRITON and MOCUP

    International Nuclear Information System (INIS)

    Ganda, F.; Greenspan, A.

    2005-01-01

Three code systems are tested by applying them to calculate the OECD PWR MOX unit cell benchmark A. The codes tested are the SAS2H code sequence of the SCALE5 code package using a 44-group library, MOCUP (MCNP4C + ORIGEN2), and the new TRITON depletion sequence of SCALE5 using 238-group cross sections generated using CENTRM with continuous-energy cross sections. The burnup-dependent k∞ and actinide concentrations calculated by all three code systems were found to be in good agreement with the OECD benchmark average results. Limited results were calculated also with the WIMS-ANL code package. WIMS-ANL was found to significantly under-predict k∞ as well as the concentration of 242Pu, consistently with the predictions of the WIMS-LWR reported by two of the OECD benchmark participants. Additionally, SAS2H is benchmarked against MOCUP for a hydride-fuel-containing unit cell, giving very satisfactory agreement. (authors)

  11. Death and Disability in Patients with Sleep Apnea - A Meta-analysis

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Maria Inês Pires, E-mail: pinespines@gmail.com; Pereira, Telmo; Caseiro, Paulo [Instituto Politécnico de Coimbra - ESTESC - Departamento de Fisiologia Clínica, Coimbra (Portugal)

    2015-01-15

Several studies have been attempting to ascertain the risks of Sleep Apnea Syndrome (SAS) and its morbidity and mortality. The main objective was to verify whether SAS increases the risk of death; the secondary objective was to evaluate its morbidity in relation to cardiovascular disease and the number of days hospitalized. A systematic review and a meta-analysis were performed of the published literature. The research focused on studies comparing the number of deaths in patients with untreated SAS and in patients without SAS. The meta-analysis was based on 13 articles, corresponding to a total of 13394 participants divided into two groups (non-SAS = 6631; SAS = 6763). The meta-analysis revealed a clear association of SAS with the occurrence of fatal events, where the presence of SAS corresponded to a 61% higher risk of total mortality (OR = 1.61; CI: 1.43-1.81; p < 0.00001), while the risk of death from cardiac causes was 2.52 times higher in these patients (OR = 2.52; CI: 1.80-3.52; p < 0.00001). Similar results were obtained for mortality from other causes (OR = 1.68; CI: 1.08-2.61; p = 0.02). Comparable results were obtained for the remaining outcomes: non-fatal cardiovascular events were more frequent in the SAS group (OR = 2.46; CI: 1.80-3.36; p < 0.00001), and the average number of days hospitalized was also higher in the SAS group (IV = 18.09; CI: 13.34-22.84; p < 0.00001). The results show that untreated SAS significantly increases the risk of death, cardiovascular events and the average number of days hospitalized.
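Pooled odds ratios like those above come from standard meta-analytic pooling of per-study effects. A minimal fixed-effect (inverse-variance) sketch in Python, using hypothetical 2×2 tables rather than the study's data:

```python
import math

def pooled_or(tables):
    """Inverse-variance fixed-effect pooling of odds ratios.

    Each table is (a, b, c, d) = (exposed events, exposed non-events,
    control events, control non-events). Pooling is done on the log
    scale; Woolf's formula gives the variance of each log OR."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf's variance
        w = 1.0 / var                          # inverse-variance weight
        num += w * log_or
        den += w
    pooled_log = num / den
    se = math.sqrt(1.0 / den)
    ci = (math.exp(pooled_log - 1.96 * se),
          math.exp(pooled_log + 1.96 * se))
    return math.exp(pooled_log), ci

# Two hypothetical studies with identical 2x2 tables; the pooled OR
# then equals each study's own OR, with a narrower interval.
pooled, ci = pooled_or([(30, 70, 20, 80), (30, 70, 20, 80)])
```

A random-effects model (e.g. DerSimonian-Laird) would additionally widen the interval to reflect between-study heterogeneity; the sketch above shows only the fixed-effect case.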

  12. Death and Disability in Patients with Sleep Apnea - A Meta-analysis

    International Nuclear Information System (INIS)

    Fonseca, Maria Inês Pires; Pereira, Telmo; Caseiro, Paulo

    2015-01-01

Several studies have been attempting to ascertain the risks of Sleep Apnea Syndrome (SAS) and its morbidity and mortality. The main objective was to verify whether SAS increases the risk of death; the secondary objective was to evaluate its morbidity in relation to cardiovascular disease and the number of days hospitalized. A systematic review and a meta-analysis were performed of the published literature. The research focused on studies comparing the number of deaths in patients with untreated SAS and in patients without SAS. The meta-analysis was based on 13 articles, corresponding to a total of 13394 participants divided into two groups (non-SAS = 6631; SAS = 6763). The meta-analysis revealed a clear association of SAS with the occurrence of fatal events, where the presence of SAS corresponded to a 61% higher risk of total mortality (OR = 1.61; CI: 1.43-1.81; p < 0.00001), while the risk of death from cardiac causes was 2.52 times higher in these patients (OR = 2.52; CI: 1.80-3.52; p < 0.00001). Similar results were obtained for mortality from other causes (OR = 1.68; CI: 1.08-2.61; p = 0.02). Comparable results were obtained for the remaining outcomes: non-fatal cardiovascular events were more frequent in the SAS group (OR = 2.46; CI: 1.80-3.36; p < 0.00001), and the average number of days hospitalized was also higher in the SAS group (IV = 18.09; CI: 13.34-22.84; p < 0.00001). The results show that untreated SAS significantly increases the risk of death, cardiovascular events and the average number of days hospitalized.

  13. Measuring the Success of an Academic Development Programme: A Statistical Analysis

    Science.gov (United States)

    Smith, L. C.

    2009-01-01

    This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…

  14. Analysis of Variance in Statistical Image Processing

    Science.gov (United States)

    Kurz, Ludwik; Hafed Benteftifa, M.

    1997-04-01

    A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
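The ANOVA-based detection the book describes ultimately rests on the one-way F statistic: between-region variance compared with within-region variance. A stdlib Python sketch on two hypothetical image regions (not an example from the book):

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square divided
    by within-group mean square. In image terms, `groups` could be
    pixel intensities from regions on either side of a candidate edge."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (signal: differing region means).
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (noise inside each region).
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Two "image regions" with equal means give F = 0 (no edge);
# shifting one region's intensities raises F sharply.
flat = one_way_anova_f([[10, 12, 11, 9], [11, 9, 10, 12]])
edge = one_way_anova_f([[10, 12, 11, 9], [31, 29, 30, 32]])
```

Comparing F against an F-distribution quantile then turns "is there an edge/line here?" into a formal hypothesis test, which is the core idea behind the detection algorithms the book develops.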

  15. Study of relationship between MUF correlation and detection sensitivity of statistical analysis

    International Nuclear Information System (INIS)

    Tamura, Toshiaki; Ihara, Hitoshi; Yamamoto, Yoichi; Ikawa, Koji

    1989-11-01

Various kinds of statistical analysis have been proposed for NRTA (Near Real Time Materials Accountancy), which was devised to satisfy the timeliness goal, one of the IAEA detection goals. It is presumed that statistical analysis results will differ between rigorous error propagation (with MUF correlation) and simplified error propagation (without MUF correlation). Therefore, measurement simulation and decision analysis were performed using a flow simulation of an 800 MTHM/Y model reprocessing plant, and the relationship between MUF correlation and the detection sensitivity and false-alarm rate of statistical analysis was studied. The specific character of materials accountancy for the 800 MTHM/Y model reprocessing plant was captured by this simulation. It also became clear that MUF correlation decreases not only the false-alarm rate but also the detection probability for protracted loss in the case of the CUMUF test and Page's test applied to NRTA. (author)
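Page's test, named above, is a CUSUM-type procedure well suited to detecting protracted losses that individual MUF tests would miss. A minimal stdlib sketch on a standardized MUF sequence; the parameters k and h are illustrative, not the study's settings:

```python
def pages_test(mufs, k=0.5, h=4.0):
    """One-sided Page's (CUSUM) test on a standardized MUF sequence.

    S_t = max(0, S_{t-1} + x_t - k); an alarm is raised when S_t > h.
    k is the reference (allowance) value and h the decision threshold,
    both here in standard-deviation units (illustrative settings)."""
    s = 0.0
    alarms = []
    for x in mufs:
        s = max(0.0, s + x - k)
        alarms.append(s > h)
    return alarms

# No loss: standardized MUFs fluctuate around zero -> no alarm.
quiet = pages_test([0.1, -0.3, 0.2, 0.0, -0.1, 0.3])
# Protracted loss: a persistent positive shift accumulates -> alarm.
loss = pages_test([1.5] * 8)
```

Because the statistic accumulates small, persistent shifts, its sensitivity depends on how the MUF sequence is standardized, which is exactly where the with/without MUF-correlation error propagation studied above enters.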

  16. An alternative to the SAS2H/ORIGEN-S sequence to account for water-density effects in BWR systems

    International Nuclear Information System (INIS)

    Leal, L.C.; Hermann, O.W.; Ryman, J.C.; Broadhead, B.L.

    1996-01-01

A scheme to generate one-group problem-dependent cross-section libraries for point-depletion calculations with the ORIGEN-S code was developed as an alternative to the SAS2H sequence of the SCALE code system. The methodology, named Automatic Rapid Processing (ARP), generates libraries by interpolating in SAS2H precomputed cross-section libraries. The method has been used to generate ORIGEN-S cross-section libraries on a personal computer, resulting in a great reduction of computer time without a sacrifice of accuracy relative to the corresponding SAS2H calculations. The ARP scheme generates ORIGEN-S libraries by interpolating in burnup and enrichment for PWR assemblies. The intent of this work is to describe a procedure which extends the application of the ARP methodology to BWR assemblies by including the axial water-density effects in the generation of the ORIGEN-S cross-section libraries. The axial liquid-to-steam change of state in BWR systems leads to a variation in the water density and to significant cross-section changes as a function of the water density. To account for the axial water-density changes in a SAS2H calculation, the water density is entered explicitly in the generation of the one-group ORIGEN-S cross-section libraries generated from the SCALE 27-group library. In its original version, ARP does not account for the effects of water-density variation in ORIGEN-S cross-section library generation, and, therefore, its application is restricted to systems for which the impact of this parameter is negligible. To update the ARP methodology to account for the water-density effect, a detailed study of the cross-section change with this parameter was performed with an 8 × 8 (General Electric) BWR assembly

  17. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  18. Survival analysis models and applications

    CERN Document Server

    Liu, Xian

    2012-01-01

    Survival analysis concerns sequential occurrences of events governed by probabilistic laws. Recent decades have witnessed many applications of survival analysis in various disciplines. This book introduces both classic survival models and theories along with newly developed techniques. Readers will learn how to perform analysis of survival data by following numerous empirical illustrations in SAS. Survival Analysis: Models and Applications: Presents basic techniques before leading on to some of the most advanced topics in survival analysis. Assumes only a minimal knowledge of SAS whilst enabling

  19. Point defect characterization in HAADF-STEM images using multivariate statistical analysis

    International Nuclear Information System (INIS)

    Sarahan, Michael C.; Chi, Miaofang; Masiel, Daniel J.; Browning, Nigel D.

    2011-01-01

    Quantitative analysis of point defects is demonstrated through the use of multivariate statistical analysis. This analysis consists of principal component analysis for dimensional estimation and reduction, followed by independent component analysis to obtain physically meaningful, statistically independent factor images. Results from these analyses are presented in the form of factor images and scores. Factor images show characteristic intensity variations corresponding to physical structure changes, while scores relate how much those variations are present in the original data. The application of this technique is demonstrated on a set of experimental images of dislocation cores along a low-angle tilt grain boundary in strontium titanate. A relationship between chemical composition and lattice strain is highlighted in the analysis results, with picometer-scale shifts in several columns measurable from compositional changes in a separate column. -- Research Highlights: → Multivariate analysis of HAADF-STEM images. → Distinct structural variations among SrTiO3 dislocation cores. → Picometer atomic column shifts correlated with atomic column population changes.
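The dimensional-estimation step described above can be illustrated with a small numpy sketch. The synthetic image stack, the 99% explained-variance criterion, and all sizes below are invented for illustration; the paper's actual pipeline follows PCA with independent component analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stack of 100 noisy "images" (64 pixels each) built from
# 3 underlying factor images -- stand-ins for the HAADF-STEM frames
factors = rng.normal(0.0, 1.0, (3, 64))
scores = rng.normal(0.0, 1.0, (100, 3))
stack = scores @ factors + rng.normal(0.0, 0.05, (100, 64))

# PCA via SVD for dimensional estimation and reduction
x = stack - stack.mean(axis=0)
u, s, vt = np.linalg.svd(x, full_matrices=False)
explained = s**2 / np.sum(s**2)

# Keep the components needed to explain 99% of the variance
n_components = int(np.searchsorted(np.cumsum(explained), 0.99) + 1)

# Dimension-reduced scores that would be passed on to ICA in the full pipeline
reduced = u[:, :n_components] * s[:n_components]
```

With three true factors well above the noise floor, the variance criterion recovers the correct dimensionality before any ICA step is applied.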

  20. STATCAT, Statistical Analysis of Parametric and Non-Parametric Data

    International Nuclear Information System (INIS)

    David, Hugh

    1990-01-01

    1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required

  1. Recent progress on the R and D program of the seismic attenuation system (SAS) proposed for the advanced gravitational wave detector, LIGO II

    International Nuclear Information System (INIS)

    Bertolini, A.; Cella, G.; Chenyang, W.; Salvo, R. de; Kovalik, J.; Marka, S.; Sannibale, V.; Takamori, A.; Tariq, H.; Viboud, N.

    2001-01-01

    High-performance Seismic Isolation Systems in gravitational wave interferometers are needed not only to increase the sensitivity of the detectors but also to guarantee long periods of stable operation. SAS is essentially a system which produces the required in-band seismic isolation by use of passive mechanical filters and actively reduces the out of band seismic noise using inertial damping. The passive isolation is achieved for all the 6 degrees of freedom, with an Inverted Pendulum and a chain of single wire pendula whose masses are the Geometrical Anti-Spring Filters (GASF). The active control is applied to reduce mainly the noise below 4 Hz and to damp the resonances of the chain acting from the inverted pendulum table. Here we present a brief overview of SAS and recent results achieved from the full scale SAS prototype

  2. FADTTS: functional analysis of diffusion tensor tract statistics.

    Science.gov (United States)

    Zhu, Hongtu; Kong, Linglong; Li, Runze; Styner, Martin; Gerig, Guido; Lin, Weili; Gilmore, John H

    2011-06-01

    The aim of this paper is to present a functional analysis of a diffusion tensor tract statistics (FADTTS) pipeline for delineating the association between multiple diffusion properties along major white matter fiber bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these white matter tract properties in various diffusion tensor imaging studies. The FADTTS integrates five statistical tools: (i) a multivariate varying coefficient model for allowing the varying coefficient functions in terms of arc length to characterize the varying associations between fiber bundle diffusion properties and a set of covariates, (ii) a weighted least squares estimation of the varying coefficient functions, (iii) a functional principal component analysis to delineate the structure of the variability in fiber bundle diffusion properties, (iv) a global test statistic to test hypotheses of interest, and (v) a simultaneous confidence band to quantify the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FADTTS. We apply FADTTS to investigate the development of white matter diffusivities along the splenium of the corpus callosum tract and the right internal capsule tract in a clinical study of neurodevelopment. FADTTS can be used to facilitate the understanding of normal brain development, the neural bases of neuropsychiatric disorders, and the joint effects of environmental and genetic factors on white matter fiber bundles. The advantages of FADTTS compared with the other existing approaches are that they are capable of modeling the structured inter-subject variability, testing the joint effects, and constructing their simultaneous confidence bands. However, FADTTS is not crucial for estimation and reduces to the functional analysis method for the single measure. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Statistical process control methods allow the analysis and improvement of anesthesia care.

    Science.gov (United States)

    Fasting, Sigurd; Gisvold, Sven E

    2003-10-01

    Quality aspects of the anesthetic process are reflected in the rate of intraoperative adverse events. The purpose of this report is to illustrate how the quality of the anesthesia process can be analyzed using statistical process control methods, and exemplify how this analysis can be used for quality improvement. We prospectively recorded anesthesia-related data from all anesthetics for five years. The data included intraoperative adverse events, which were graded into four levels, according to severity. We selected four adverse events, representing important quality and safety aspects, for statistical process control analysis. These were: inadequate regional anesthesia, difficult emergence from general anesthesia, intubation difficulties and drug errors. We analyzed the underlying process using 'p-charts' for statistical process control. In 65,170 anesthetics we recorded adverse events in 18.3%; mostly of lesser severity. Control charts were used to define statistically the predictable normal variation in problem rate, and then used as a basis for analysis of the selected problems with the following results: Inadequate plexus anesthesia: stable process, but unacceptably high failure rate; Difficult emergence: unstable process, because of quality improvement efforts; Intubation difficulties: stable process, rate acceptable; Medication errors: methodology not suited because of low rate of errors. By applying statistical process control methods to the analysis of adverse events, we have exemplified how this allows us to determine if a process is stable, whether an intervention is required, and if quality improvement efforts have the desired effect.
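As a sketch of the p-chart construction the report describes (a centre line with 3-sigma limits defining the predictable variation in a proportion), the snippet below computes control limits for a monthly adverse-event rate. The monthly counts are simulated, not the study's 65,170 anesthetics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monthly data: anesthetics performed and adverse events per month
n = np.full(24, 500)                     # cases per month
events = rng.binomial(n, 0.18)           # adverse events (assumed true rate 18%)

p = events / n                           # monthly proportions
p_bar = events.sum() / n.sum()           # centre line: pooled proportion

# p-chart control limits: p_bar +/- 3 * sqrt(p_bar * (1 - p_bar) / n_i)
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, None)

# A process is "in control" if every monthly rate falls within the limits
in_control = bool(np.all((p >= lcl) & (p <= ucl)))
```

Points outside the limits (or systematic runs) would signal special-cause variation, i.e. a process change worth investigating, which is how the report distinguishes stable from unstable adverse-event processes.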

  4. Effect of the absolute statistic on gene-sampling gene-set analysis methods.

    Science.gov (United States)

    Nam, Dougu

    2017-06-01

    Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, the simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated in terms of power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
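A minimal numpy sketch of the gene-sampling idea, with and without the absolute statistic, follows. The per-gene scores, the set membership, and the `gene_sampling_pvalue` helper are all invented for illustration and are not the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-gene scores (e.g., t-statistics) for 1000 genes,
# with one up-regulated block of 20 genes forming the set of interest
gene_stat = rng.normal(0.0, 1.0, 1000)
gene_stat[:20] += 2.0
gene_set = np.arange(20)

def gene_sampling_pvalue(stats, members, n_perm=5000, absolute=False):
    """P-value of the mean set statistic against random gene sets of equal size."""
    s = np.abs(stats) if absolute else stats
    observed = s[members].mean()
    null = np.array([
        s[rng.choice(len(s), size=len(members), replace=False)].mean()
        for _ in range(n_perm)
    ])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

p_signed = gene_sampling_pvalue(gene_stat, gene_set)
p_abs = gene_sampling_pvalue(gene_stat, gene_set, absolute=True)
```

Taking `np.abs` before averaging is the "absolute statistic" variant the paper investigates; in this one-sided toy example both variants flag the shifted set.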

  5. An improved method for statistical analysis of raw accelerator mass spectrometry data

    International Nuclear Information System (INIS)

    Gutjahr, A.; Phillips, F.; Kubik, P.W.; Elmore, D.

    1987-01-01

    Hierarchical statistical analysis is an appropriate method for statistical treatment of raw accelerator mass spectrometry (AMS) data. Using Monte Carlo simulations we show that this method yields more accurate estimates of isotope ratios and analytical uncertainty than the generally used propagation of errors approach. The hierarchical analysis is also useful in design of experiments because it can be used to identify sources of variability. 8 refs., 2 figs
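The contrast between simple propagation of errors and a hierarchical treatment can be sketched numerically. The run structure, the variance values, and the one-way random-effects ANOVA estimator below are illustrative assumptions, not the paper's actual Monte Carlo design.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AMS data: k runs of m measurement cycles of an isotope ratio,
# with genuine run-to-run variability on top of within-run (counting) noise
k, m = 8, 10
run_effect = rng.normal(0.0, 0.05, k)             # between-run scatter
cycles = 1.0 + run_effect[:, None] + rng.normal(0.0, 0.02, (k, m))

run_means = cycles.mean(axis=1)
grand_mean = cycles.mean()

# One-way random-effects ANOVA separates the two variance components
ms_within = np.sum((cycles - run_means[:, None]) ** 2) / (k * (m - 1))
ms_between = m * np.sum((run_means - grand_mean) ** 2) / (k - 1)
var_between = max(0.0, (ms_between - ms_within) / m)

# Propagation-of-errors uncertainty uses the within-run component only ...
se_naive = np.sqrt(ms_within / (k * m))
# ... while the hierarchical uncertainty of the grand mean includes both
se_hier = np.sqrt(ms_between / (k * m))
```

When real run-to-run scatter exists, `se_hier` exceeds `se_naive`, which is the sense in which a purely propagated uncertainty understates the analytical uncertainty; decomposing the variance also identifies which level of the hierarchy dominates.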

  6. Statistical Image Analysis of Tomograms with Application to Fibre Geometry Characterisation

    DEFF Research Database (Denmark)

    Emerson, Monica Jane

    The goal of this thesis is to develop statistical image analysis tools to characterise the micro-structure of complex materials used in energy technologies, with a strong focus on fibre composites. These quantification tools are based on extracting geometrical parameters defining structures from 2D...... with high resolution both in space and time to observe fast micro-structural changes. This thesis demonstrates that statistical image analysis combined with X-ray CT opens up numerous possibilities for understanding the behaviour of fibre composites under real life conditions. Besides enabling...

  7. The art of data analysis how to answer almost any question using basic statistics

    CERN Document Server

    Jarman, Kristin H

    2013-01-01

    A friendly and accessible approach to applying statistics in the real world. With an emphasis on critical thinking, The Art of Data Analysis: How to Answer Almost Any Question Using Basic Statistics presents fun and unique examples, guides readers through the entire data collection and analysis process, and introduces basic statistical concepts along the way. Leaving proofs and complicated mathematics behind, the author portrays the more engaging side of statistics and emphasizes its role as a problem-solving tool. In addition, light-hearted case studies

  8. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins, which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, as well as methods for data preprocessing are covered.

  9. Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis

    DEFF Research Database (Denmark)

    Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow i) to determine natural groups or clusters of control strategies with a similar behaviour, ii) to find and interpret hidden, complex and causal relation features in the data set and iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation...

  10. SAS Macros for Calculation of Population Attributable Fraction in a Cohort Study Design

    Directory of Open Access Journals (Sweden)

    Maarit A. Laaksonen

    2011-08-01

    The population attributable fraction (PAF) is a useful measure for quantifying the impact of exposure to certain risk factors on a particular outcome at the population level. Recently, new model-based methods for the estimation of PAF and its confidence interval for different types of outcomes in a cohort study design have been proposed. In this paper, we introduce SAS macros implementing these methods and illustrate their application with a data example on the impact of different risk factors on type 2 diabetes incidence.
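The macros estimate model-based PAFs with confidence intervals in SAS; as a minimal illustration of the quantity itself (not of the macros), Levin's classical formula can be computed directly. The exposure prevalence and relative risk below are made-up numbers.

```python
def population_attributable_fraction(p_exposed, relative_risk):
    """Levin's formula: PAF = p * (RR - 1) / (1 + p * (RR - 1))."""
    excess = p_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# e.g. 30% exposure prevalence and RR = 2 gives PAF = 0.3 / 1.3, about 23%:
# the fraction of outcome incidence removable by eliminating the exposure
paf = population_attributable_fraction(0.30, 2.0)
```

The model-based cohort methods the paper implements generalize this idea to adjusted, possibly time-to-event outcomes, but the interpretation of the resulting fraction is the same.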

  11. The statistical analysis techniques to support the NGNP fuel performance experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.

    2013-10-15

    This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.

  12. Statistical Challenges of Big Data Analysis in Medicine

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2015-01-01

    Roč. 3, č. 1 (2015), s. 24-27 ISSN 1805-8698 R&D Projects: GA ČR GA13-23940S Grant - others:CESNET Development Fund(CZ) 494/2013 Institutional support: RVO:67985807 Keywords : big data * variable selection * classification * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research http://www.ijbh.org/ijbh2015-1.pdf

  13. Statistical Analysis of Hypercalcaemia Data related to Transferability

    DEFF Research Database (Denmark)

    Frølich, Anne; Nielsen, Bo Friis

    2005-01-01

    In this report we describe statistical analysis related to a study of hypercalcaemia carried out in the Copenhagen area in the ten-year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters at conferences during the late eighties and early nineties. In this report we give a more detailed description of many of the analyses and provide some new results, primarily from simultaneous studies of several databases.

  14. RISA: Remote Interface for Science Analysis

    Science.gov (United States)

    Gabriel, C.; Ibarra, A.; de La Calle, I.; Salgado, J.; Osuna, P.; Tapiador, D.

    2008-08-01

    The Scientific Analysis System (SAS) is the package for interactive and pipeline data reduction of all XMM-Newton data. Freely distributed by ESA to run under many different operating systems, the SAS has been used by almost every one of the 1600 refereed scientific publications obtained so far from the mission. We are developing RISA, the Remote Interface for Science Analysis, which makes it possible to run SAS through fully configurable web-service workflows, enabling observers to access and analyse data making use of all of the existing SAS functionalities, without any installation or download of software or data. The workflows run primarily, but not exclusively, on the ESAC Grid, which offers scalable processing resources directly connected to the XMM-Newton Science Archive. A first project-internal version of RISA was issued in May 2007; a public release is expected within the year.

  15. Statistical analysis of questionnaires a unified approach based on R and Stata

    CERN Document Server

    Bartolucci, Francesco; Gnaldi, Michela

    2015-01-01

    Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing.The book covers the foundations of classical test theory (CTT), test reliability, va

  16. Reducing bias in the analysis of counting statistics data

    International Nuclear Information System (INIS)

    Hammersley, A.P.; Antoniadis, A.

    1997-01-01

    In the analysis of counting statistics data it is common practice to estimate the variance of the measured data points as the data points themselves. This practice introduces a bias into the results of further analysis which may be significant, and under certain circumstances lead to false conclusions. In the case of normal weighted least squares fitting this bias is quantified and methods to avoid it are proposed. (orig.)
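The bias can be demonstrated numerically: weighting each Poisson data point by the inverse of its own observed value turns the weighted mean into a harmonic mean, which underestimates the true rate. The simulation below is a sketch of that effect with invented numbers, not the paper's quantification.

```python
import numpy as np

rng = np.random.default_rng(3)

mu = 10.0                                  # true mean count per channel
counts = rng.poisson(mu, (2000, 50))       # 2000 experiments, 50 channels each
counts = np.maximum(counts, 1)             # avoid zero weights (common ad hoc fix)

# Common practice: estimate each point's variance by the point itself (w = 1/y).
# The weighted mean then collapses to the harmonic mean of the counts.
w = 1.0 / counts
mean_biased = (w * counts).sum(axis=1) / w.sum(axis=1)

# With equal true variances, the unweighted mean is the unbiased choice
mean_plain = counts.mean(axis=1)

bias_biased = mean_biased.mean() - mu      # systematically negative
bias_plain = mean_plain.mean() - mu        # scatters around zero
```

The same mechanism biases weighted least squares fits: low-fluctuating points get spuriously large weights, pulling the fit downward, which is why the paper proposes weighting schemes not derived from the data points themselves.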

  17. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
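The article's meta-analysis workflow runs in R with the authors' syntax; as a language-neutral sketch of the fixed- and random-effects averaging step it describes, the snippet below uses invented d values and variances (not results from the article).

```python
import numpy as np

# Hypothetical effect sizes (d) and their variances from five SCD studies
d = np.array([0.1, 0.9, 0.3, 1.2, 0.5])
v = np.array([0.04, 0.09, 0.05, 0.12, 0.06])

# Fixed-effect average: inverse-variance weighting
w = 1.0 / v
d_fixed = np.sum(w * d) / np.sum(w)
se_fixed = np.sqrt(1.0 / np.sum(w))

# Random-effects average: DerSimonian-Laird between-study variance tau^2
q = np.sum(w * (d - d_fixed) ** 2)         # heterogeneity statistic
df = len(d) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = 1.0 / (v + tau2)
d_random = np.sum(w_re * d) / np.sum(w_re)
se_random = np.sqrt(1.0 / np.sum(w_re))
```

When the studies are heterogeneous (tau^2 > 0), the random-effects standard error is larger than the fixed-effect one, which is the behaviour the article's R-based meta-analysis of SCD d-statistics relies on.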

  19. Atomistic modelling of scattering data in the Collaborative Computational Project for Small Angle Scattering (CCP-SAS).

    Science.gov (United States)

    Perkins, Stephen J; Wright, David W; Zhang, Hailiang; Brookes, Emre H; Chen, Jianhan; Irving, Thomas C; Krueger, Susan; Barlow, David J; Edler, Karen J; Scott, David J; Terrill, Nicholas J; King, Stephen M; Butler, Paul D; Curtis, Joseph E

    2016-12-01

    The capabilities of current computer simulations provide a unique opportunity to model small-angle scattering (SAS) data at the atomistic level, and to include other structural constraints ranging from molecular and atomistic energetics to crystallography, electron microscopy and NMR. This extends the capabilities of solution scattering and provides deeper insights into the physics and chemistry of the systems studied. Realizing this potential, however, requires integrating the experimental data with a new generation of modelling software. To achieve this, the CCP-SAS collaboration (http://www.ccpsas.org/) is developing open-source, high-throughput and user-friendly software for the atomistic and coarse-grained molecular modelling of scattering data. Robust state-of-the-art molecular simulation engines and molecular dynamics and Monte Carlo force fields provide constraints to the solution structure inferred from the small-angle scattering data, which incorporates the known physical chemistry of the system. The implementation of this software suite involves a tiered approach in which GenApp provides the deployment infrastructure for running applications on both standard and high-performance computing hardware, and SASSIE provides a workflow framework into which modules can be plugged to prepare structures, carry out simulations, calculate theoretical scattering data and compare results with experimental data. GenApp produces the accessible web-based front end termed SASSIE-web, and GenApp and SASSIE also make community SAS codes available. Applications are illustrated by case studies: (i) inter-domain flexibility in two- to six-domain proteins as exemplified by HIV-1 Gag, MASP and ubiquitin; (ii) the hinge conformation in human IgG2 and IgA1 antibodies; (iii) the complex formed between a hexameric protein Hfq and mRNA; and (iv) synthetic 'bottlebrush' polymers.

  20. Bayesian statistics applied to neutron activation data for reactor flux spectrum analysis

    International Nuclear Information System (INIS)

    Chiesa, Davide; Previtali, Ezio; Sisti, Monica

    2014-01-01

    Highlights: • Bayesian statistics to analyze the neutron flux spectrum from activation data. • Rigorous statistical approach for accurate evaluation of the neutron flux groups. • Cross section and activation data uncertainties included for the problem solution. • Flexible methodology applied to analyze different nuclear reactor flux spectra. • The results are in good agreement with the MCNP simulations of neutron fluxes. - Abstract: In this paper, we present a statistical method, based on Bayesian statistics, to analyze the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation experiment performed at the TRIGA Mark II reactor of Pavia University (Italy) in four irradiation positions characterized by different neutron spectra. In order to evaluate the neutron flux spectrum, subdivided into energy groups, a system of linear equations, containing the group effective cross sections and the activation rate data, has to be solved. However, since the system's coefficients are experimental data affected by uncertainties, a rigorous statistical approach is fundamental for an accurate evaluation of the neutron flux groups. For this purpose, we applied Bayesian statistical analysis, which allows the uncertainties of the coefficients and the a priori information about the neutron flux to be included. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, was used to define and solve the statistical model of the problem. The first analysis involved the determination of the thermal, resonance-intermediate and fast flux components, and the dependence of the results on the choice of prior distribution was investigated to confirm the reliability of the Bayesian analysis. After that, the main resonances of the activation cross sections were analyzed to implement multi-group models with finer energy subdivisions that would allow one to determine the
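The paper solves the activation-rate system with a hierarchical MCMC model; as a much simpler hedged sketch of why the coefficient uncertainties matter, the snippet below propagates assumed cross-section and rate uncertainties through a toy three-group system by plain Monte Carlo. All numbers are invented and do not correspond to the TRIGA measurements.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy 3-group problem: activation rates R = Sigma @ phi for three isotopes
sigma_true = np.array([[5.0, 1.0, 0.1],
                       [0.5, 3.0, 0.2],
                       [0.1, 0.5, 2.0]])      # group effective cross sections
phi_true = np.array([2.0, 1.0, 0.5])          # thermal / intermediate / fast flux
rates = sigma_true @ phi_true

# Both the cross sections and the measured rates carry uncertainties;
# propagate them by perturbing, solving, and collecting the flux samples
n_samples = 4000
samples = np.empty((n_samples, 3))
for i in range(n_samples):
    sig = sigma_true * rng.normal(1.0, 0.02, sigma_true.shape)  # 2% on sigma
    r = rates * rng.normal(1.0, 0.03, rates.shape)              # 3% on rates
    samples[i] = np.linalg.solve(sig, r)

phi_mean = samples.mean(axis=0)               # group flux estimates
phi_sd = samples.std(axis=0, ddof=1)          # propagated group uncertainties
```

The Bayesian hierarchical treatment in the paper goes further by also folding in prior information on the flux shape and by handling many more groups than equations, but the flux-group uncertainties it reports play the same role as `phi_sd` here.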

  1. Reactor noise analysis by statistical pattern recognition methods

    International Nuclear Information System (INIS)

    Howington, L.C.; Gonzalez, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis is presented. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, updating, and data compacting capabilities. System design emphasizes control of the false-alarm rate. Its abilities to learn normal patterns, to recognize deviations from these patterns, and to reduce the dimensionality of data with minimum error were evaluated by experiments at the Oak Ridge National Laboratory (ORNL) High-Flux Isotope Reactor. Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the pattern recognition system

  2. ORNL-SAS: Versatile software for calculation of small-angle x-ray and neutron scattering intensity profiles from arbitrary structures

    International Nuclear Information System (INIS)

    Heller, William T; Tjioe, Elina

    2007-01-01

    ORNL-SAS is software for calculating solution small-angle scattering intensity profiles from any structure provided in the Protein Data Bank format and can also compare the results with experimental data

  3. Fibrinogen species as resolved by HPLC-SAXS data processing within the UltraScan Solution Modeler (US-SOMO) enhanced SAS module.

    Science.gov (United States)

    Brookes, Emre; Pérez, Javier; Cardinali, Barbara; Profumo, Aldo; Vachette, Patrice; Rocco, Mattia

    2013-12-01

    Fibrinogen is a large heterogeneous aggregation/degradation-prone protein playing a central role in blood coagulation and associated pathologies, whose structure is not completely resolved. When a high-molecular-weight fraction was analyzed by size-exclusion high-performance liquid chromatography/small-angle X-ray scattering (HPLC-SAXS), several composite peaks were apparent and because of the stickiness of fibrinogen the analysis was complicated by severe capillary fouling. Novel SAS analysis tools developed as a part of the UltraScan Solution Modeler (US-SOMO; http://somo.uthscsa.edu/), an open-source suite of utilities with advanced graphical user interfaces whose initial goal was the hydrodynamic modeling of biomacromolecules, were implemented and applied to this problem. They include the correction of baseline drift due to the accumulation of material on the SAXS capillary walls, and the Gaussian decomposition of non-baseline-resolved HPLC-SAXS elution peaks. It was thus possible to resolve at least two species co-eluting under the fibrinogen main monomer peak, probably resulting from in-column degradation, and two others under an oligomers peak. The overall and cross-sectional radii of gyration, molecular mass and mass/length ratio of all species were determined using the manual or semi-automated procedures available within the US-SOMO SAS module. Differences between monomeric species and linear and sideways oligomers were thus identified and rationalized. This new US-SOMO version additionally contains several computational and graphical tools, implementing functionalities such as the mapping of residues contributing to particular regions of P(r), and an advanced module for the comparison of primary I(q) versus q data with model curves computed from atomic level structures or bead models. It should be of great help in multi-resolution studies involving hydrodynamics, solution scattering and crystallographic/NMR data.

  4. Business plan for the creation of the company Itech S.A.S. (Plan de negocio para la creación de la empresa Itech S.A.S.)

    OpenAIRE

    2012-01-01

    ITEH S.A.S will offer consulting and advisory services on the use and management of information and communication technologies to SMEs in the services sector, specifically in the legal subsector, with the aim of supporting their business processes and strengthening their competitiveness.

  5. Data analysis using the Gnu R system for statistical computation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, James; /Fermilab

    2011-07-01

    R is a language and system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R: it has become the standard language for developing statistical techniques; it is actively developed by a large and growing global user community; it is open-source software; it is highly portable (Linux, OS X and Windows); it has a built-in documentation system; it produces high-quality graphics; and it is easily extensible, with over four thousand extension packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.
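    The chi-square minimization mentioned above has a closed form in the simplest case of fitting a constant (e.g., a correlator plateau) to data with independent errors. The following sketch, written in Python rather than R and using invented toy numbers, illustrates the idea:

```python
import math

def chi2_const_fit(y, sigma):
    """Weighted least-squares fit of a constant c to data y with errors sigma.
    Minimizing chi^2 = sum((y_i - c)^2 / sigma_i^2) gives the closed form
    c = sum(y_i / sigma_i^2) / sum(1 / sigma_i^2)."""
    w = [1.0 / s ** 2 for s in sigma]
    c = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    err = math.sqrt(1.0 / sum(w))  # 1-sigma error on the fitted constant
    chi2 = sum(wi * (yi - c) ** 2 for wi, yi in zip(w, y))
    return c, err, chi2

# Toy "plateau" values with uniform 0.01 errors (invented numbers):
c, err, chi2 = chi2_const_fit([0.52, 0.50, 0.49, 0.51], [0.01] * 4)
```

Comparing chi2 to the number of degrees of freedom (here, 3) gives the usual goodness-of-fit check.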

  6. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    International Nuclear Information System (INIS)

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and for the reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR (Departure from Nucleate Boiling Ratio) criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by the statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search for a realistic value for the SDL is the objective of the statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with the strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)

  7. Analytical and statistical analysis of elemental composition of lichens

    International Nuclear Information System (INIS)

    Calvelo, S.; Baccala, N.; Bubach, D.; Arribere, M.A.; Riberio Guevara, S.

    1997-01-01

    The elemental composition of lichens from remote southern South America regions has been studied with analytical and statistical techniques to determine whether the values obtained reflect species, growth forms or habitat characteristics. The enrichment factors are calculated, discriminated by species and collection site, and compared with data available in the literature. The elemental concentrations are standardized and compared for different species. The information was statistically processed; a cluster analysis was performed using the first three principal axes of the PCA, and the three groups formed are presented. Their relationship with the species, collection sites and the lichen growth forms is interpreted. (author)

  8. The Fusion of Financial Analysis and Seismology: Statistical Methods from Financial Market Analysis Applied to Earthquake Data

    Science.gov (United States)

    Ohyanagi, S.; Dileonardo, C.

    2013-12-01

    As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using the candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located offshore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spreading of the candlesticks prior to the occurrence of events greater than Mw 6.0. A second pattern shows convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and agree with the characteristics of both the candlestick chart and the Bollinger Band analysis. These results show a high correlation between patterns in earthquake occurrence and trend analysis by these two statistical methods, and support the application of these financial analysis methods to the analysis of earthquake occurrence.
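    The Bollinger Band construction referred to above is simply a rolling mean plus/minus k standard deviations; band "convergence" corresponds to a narrowing width. A minimal sketch (the magnitude series and window length are invented for illustration, not taken from the study):

```python
import math

def bollinger_bands(series, window=20, k=2.0):
    """Rolling mean +/- k population standard deviations over `window` points.
    Returns (mid, upper, lower) lists aligned to the input; entries before a
    full window are None."""
    mid, upper, lower = [], [], []
    for i in range(len(series)):
        if i + 1 < window:
            mid.append(None); upper.append(None); lower.append(None)
            continue
        win = series[i + 1 - window : i + 1]
        m = sum(win) / window
        sd = math.sqrt(sum((x - m) ** 2 for x in win) / window)
        mid.append(m); upper.append(m + k * sd); lower.append(m - k * sd)
    return mid, upper, lower

# Invented magnitude series; the final large event widens the band.
mags = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.3, 4.2, 4.0, 4.5,
        4.2, 4.1, 4.3, 4.2, 4.4, 4.3, 4.1, 4.2, 4.3, 4.2, 6.1]
mid, up, lo = bollinger_bands(mags, window=20)
width = [u - l for u, l in zip(up, lo) if u is not None]
```

Tracking `width` over time is the quantity whose convergence the authors interpret as a precursor signal.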

  9. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-frictional force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is composed of a random sequence alternating between stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables: for example, the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to make a parametric analysis of the statistical model of the stick-slip process.

  10. Introduction to applied statistical signal analysis guide to biomedical and electrical engineering applications

    CERN Document Server

    Shiavi, Richard

    2007-01-01

    Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this background, the reader will coast through the practical introduction and move on to signal analysis techniques, commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical

  11. Visual and statistical analysis of {sup 18}F-FDG PET in primary progressive aphasia

    Energy Technology Data Exchange (ETDEWEB)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge [Hospital Clinico San Carlos, Department of Neurology, Madrid (Spain); Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis [San Carlos Health Research Institute (IdISSC) Complutense University of Madrid, Department of Nuclear Medicine, Hospital Clinico San Carlos, Madrid (Spain)

    2015-05-01

    Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. Ten raters analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA were high, averaging 87.8 % and 89.9 % for visual analysis and 96.9 % and 90.9 % for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)

  12. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems, such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
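    At its core, statistical model checking estimates the probability that a property holds by running many independent simulations and bounding the estimation error. The following is a generic Hoeffding-bound sampling loop, not PVeStA's actual algorithm, and the "model" is a made-up exponential failure time:

```python
import math
import random

def required_samples(eps, delta):
    """Hoeffding bound: n >= ln(2/delta) / (2*eps^2) i.i.d. runs guarantee
    P(|estimate - true probability| > eps) <= delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))

def estimate_probability(run_once, eps=0.01, delta=0.05, seed=42):
    """Monte Carlo estimate of the probability that a property holds,
    with additive error eps at confidence level 1 - delta."""
    rng = random.Random(seed)
    n = required_samples(eps, delta)
    hits = sum(run_once(rng) for _ in range(n))
    return hits / n, n

# Hypothetical model: the property holds iff an exponential "failure time"
# (rate 1) exceeds a deadline of 0.5; the true probability is exp(-0.5).
p_hat, n = estimate_probability(lambda rng: rng.expovariate(1.0) > 0.5)
```

Because each simulation run is independent, this loop parallelizes trivially across workers, which is the source of the speedups reported for PVeStA.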

  13. Statistical analysis of extreme values from insurance, finance, hydrology and other fields

    CERN Document Server

    Reiss, Rolf-Dieter

    1997-01-01

    The statistical analysis of extreme data is important for various disciplines, including hydrology, insurance, finance, engineering and environmental sciences. This book provides a self-contained introduction to the parametric modeling, exploratory analysis and statistical inference for extreme values. The entire text of this third edition has been thoroughly updated and rearranged to meet the new requirements. Additional sections and chapters, elaborated on more than 100 pages, are particularly concerned with topics like dependencies, the conditional analysis and the multivariate modeling of extreme data. Parts I–III about the basic extreme value methodology remain largely unchanged, yet notable are, e.g., the new sections about "An Overview of Reduced-Bias Estimation" (co-authored by M.I. Gomes), "The Spectral Decomposition Methodology", and "About Tail Independence" (co-authored by M. Frick), and the new chapter about "Extreme Value Statistics of Dependent Random Variables" (co-authored ...

  14. Download this PDF file

    African Journals Online (AJOL)

    RAGHAVENDRA

    2015-03-17

    Mar 17, 2015 ... are important in the design and implementation of indigenous programs. The indigenous .... through observation, administering a structured questionnaire ... Descriptive Statistics: Statistical analysis system (SAS) version 9.2 ...

  15. Power flow as a complement to statistical energy analysis and finite element analysis

    Science.gov (United States)

    Cuschieri, J. M.

    1987-01-01

    Present methods of analysis of the structural response and the structure-borne transmission of vibrational energy use either finite element (FE) techniques or statistical energy analysis (SEA) methods. The FE methods are a very useful tool at low frequencies, where the number of resonances involved in the analysis is rather small. On the other hand, SEA methods can predict with acceptable accuracy the response and energy transmission between coupled structures at relatively high frequencies, where the structural modal density is high and a statistical approach is the appropriate solution. In the mid-frequency range, a relatively large number of resonances exist, which makes the finite element method too costly, while SEA methods can only predict an average level. In this mid-frequency range a possible alternative is to use power flow techniques, where the input and flow of vibrational energy to excited and coupled structural components can be expressed in terms of input and transfer mobilities. This power flow technique can be extended from low to high frequencies and can be integrated with established FE models at low frequencies and SEA models at high frequencies as a verification of the method. This method of structural analysis using power flow and mobility methods, and its integration with SEA and FE analysis, is applied to the case of two thin beams joined together at right angles.

  16. Usual Dietary Intakes: SAS Macros for Estimating Ratios of Two Dietary Components that are Consumed Nearly Every Day

    Science.gov (United States)

    The following SAS macros can be used to create a bivariate distribution of usual intake of two dietary components that are consumed nearly every day and to calculate percentiles of the population distribution of the ratio of usual intakes.

  17. SAS and SPSS macros to calculate standardized Cronbach's alpha using the upper bound of the phi coefficient for dichotomous items.

    Science.gov (United States)

    Sun, Wei; Chou, Chih-Ping; Stacy, Alan W; Ma, Huiyan; Unger, Jennifer; Gallaher, Peggy

    2007-02-01

    Cronbach's alpha is widely used in social science research to estimate the internal consistency reliability of a measurement scale. However, when items are not strictly parallel, the Cronbach's alpha coefficient provides a lower-bound estimate of true reliability, and this estimate may be further biased downward when items are dichotomous. The estimation of standardized Cronbach's alpha for a scale with dichotomous items can be improved by using the upper bound of the coefficient phi. SAS and SPSS macros have been developed in this article to obtain standardized Cronbach's alpha via this method. The simulation analysis showed that Cronbach's alpha from upper-bound phi might be appropriate for estimating the real reliability when standardized Cronbach's alpha is problematic.
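    The adjustment described, dividing each inter-item phi coefficient by its marginal-dependent upper bound before computing standardized alpha, can be sketched as follows. This is an illustrative Python reimplementation, not the authors' SAS/SPSS macros, and the item responses are invented:

```python
import math
from itertools import combinations

def phi_over_phimax(x, y):
    """Phi coefficient of two 0/1 item vectors, divided by its upper bound
    phi_max, which depends on the item marginals."""
    n = len(x)
    p1, p2 = sum(x) / n, sum(y) / n
    p11 = sum(a and b for a, b in zip(x, y)) / n
    phi = (p11 - p1 * p2) / math.sqrt(p1 * (1 - p1) * p2 * (1 - p2))
    lo, hi = sorted((p1, p2))
    phi_max = math.sqrt(lo * (1 - hi) / (hi * (1 - lo)))
    return phi / phi_max

def standardized_alpha(items):
    """Standardized Cronbach's alpha: k * rbar / (1 + (k - 1) * rbar),
    with rbar the mean (adjusted) inter-item correlation."""
    k = len(items)
    rs = [phi_over_phimax(a, b) for a, b in combinations(items, 2)]
    rbar = sum(rs) / len(rs)
    return k * rbar / (1 + (k - 1) * rbar)

# Invented responses of 8 subjects to 3 dichotomous items:
items = [
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1, 1, 0, 0],
]
alpha = standardized_alpha(items)
```

Without the phi_max division, the same formula applied to raw phi coefficients gives the (downward-biased) standardized alpha the article warns about.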

  18. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  19. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. Highlights: The paper discusses the validation of creep rupture models derived from statistical analysis. It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  20. Statistical experimental design for saltstone mixtures

    International Nuclear Information System (INIS)

    Harris, S.P.; Postles, R.L.

    1992-01-01

    The authors used a mixture experimental design for determining a window of operability for a process at the U.S. Department of Energy, Savannah River Site, Defense Waste Processing Facility (DWPF). The high-level radioactive waste at the Savannah River Site is stored in large underground carbon steel tanks. The waste consists of a supernate layer and a sludge layer. Cesium-137 will be removed from the supernate by precipitation and filtration. After further processing, the supernate layer will be fixed as a grout for disposal in concrete vaults. The remaining precipitate will be processed at the DWPF with treated waste tank sludge and glass-making chemicals into borosilicate glass. The leach-rate properties of the supernate grout formed from various mixes were evaluated; distribution coefficients for NO3 and chromium were used as a measure of leach rate. Various mixes of cement, Ca(OH)2, salt, slag, and fly ash were used. These constituents comprise the whole mix; thus, a mixture experimental design was used. The regression procedure (PROC REG) in SAS was used to produce analysis of variance (ANOVA) statistics. In addition, detailed model diagnostics are readily available for identifying suspicious observations. For convenience, trilinear contour (TLC) plots, a standard graphics tool for examining mixture response surfaces, of the fitted model were produced using ECHIP.

  1. The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.

    Science.gov (United States)

    Kruschke, John K; Liddell, Torrin M

    2018-02-01

    In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
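    A Bayesian credible interval of the kind advocated here can be read directly off the posterior distribution. As a minimal sketch, using a grid approximation for a binomial proportion under a uniform prior (the data are invented):

```python
def credible_interval(k, n, cred=0.95, grid=20000):
    """Equal-tailed credible interval for a binomial proportion under a
    uniform Beta(1, 1) prior, via a grid approximation of the
    Beta(k + 1, n - k + 1) posterior."""
    thetas = [(i + 0.5) / grid for i in range(grid)]
    weights = [t ** k * (1.0 - t) ** (n - k) for t in thetas]
    total = sum(weights)
    lo_p, hi_p = (1.0 - cred) / 2.0, 1.0 - (1.0 - cred) / 2.0
    acc, lo, hi = 0.0, None, None
    for t, w in zip(thetas, weights):
        acc += w / total  # accumulate the normalized posterior CDF
        if lo is None and acc >= lo_p:
            lo = t
        if hi is None and acc >= hi_p:
            hi = t
    return lo, hi

# Invented data: 7 successes in 24 trials.
lo, hi = credible_interval(7, 24)
```

Unlike a frequentist confidence interval, (lo, hi) has a direct probability interpretation: the posterior probability that the proportion lies inside it is 95%.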

  2. Statistical analysis of solar proton events

    Directory of Open Access Journals (Sweden)

    V. Kurt

    2004-06-01

    A new catalogue of 253 solar proton events (SPEs) with energy >10 MeV and peak intensity >10 protons/(cm2 s sr) (pfu) at the Earth's orbit for three complete 11-year solar cycles (1970-2002) is given. A statistical analysis of this data set of SPEs and their associated flares that occurred during this time period is presented. It is outlined that 231 of these proton events are flare-related and only 22 of them are not associated with Hα flares. It is also noteworthy that 42 of these events are registered as Ground Level Enhancements (GLEs) in neutron monitors. The longitudinal distribution of the associated flares shows that a great number of these events are connected with west flares. This analysis enables one to understand the long-term dependence of the SPEs and the related flare characteristics on the solar cycle, which is useful for space weather prediction.

  3. STATISTICAL ANALYSIS OF THE HEAVY NEUTRAL ATOMS MEASURED BY IBEX

    International Nuclear Information System (INIS)

    Park, Jeewoo; Kucharek, Harald; Möbius, Eberhard; Galli, André; Livadiotis, George; Fuselier, Steve A.; McComas, David J.

    2015-01-01

    We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years from 2009 to 2011. The interstellar neutral (ISN) O and Ne gas flow was found in the first-year heavy neutral map at 601 eV, and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background from areas where the heavy neutral signal is statistically significant. They also allow the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O and Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.

  4. Explorations in statistics: the analysis of ratios and normalized data.

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-09-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques (these include analysis of covariance) are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
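    The peril described, that a ratio is only meaningful when the numerator-denominator relationship is a line through the origin, is easy to demonstrate numerically. In this sketch with invented data, the relationship is exactly linear but has a nonzero intercept, so the ratio drifts with the denominator while regression recovers the true slope:

```python
def ols(x, y):
    """Ordinary least-squares slope and intercept (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Invented data: y = 10 + 2x exactly. The relationship is linear but NOT
# through the origin, so the ratio y/x varies with x even though the
# underlying relationship is identical across observations.
x = [5.0, 10.0, 20.0, 40.0]
y = [10.0 + 2.0 * xi for xi in x]
ratios = [yi / xi for yi, xi in zip(y, x)]
a, b = ols(x, y)
```

The ratios range from 4.0 down to 2.25 purely because the denominator changes, while regression returns the stable intercept 10 and slope 2.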

  5. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

    This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts, and different methodologies are used, namely the likelihood ratio criterion, and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.

  6. Perceptual and statistical analysis of cardiac phase and amplitude images

    International Nuclear Information System (INIS)

    Houston, A.; Craig, A.

    1991-01-01

    A perceptual experiment was conducted using cardiac phase and amplitude images. Estimates of statistical parameters were derived from the images and the diagnostic potential of human and statistical decisions compared. Five methods were used to generate the images from 75 gated cardiac studies, 39 of which were classified as pathological. The images were presented to 12 observers experienced in nuclear medicine. The observers rated the images using a five-category scale based on their confidence that an abnormality was present. Circular and linear statistics were used to analyse phase and amplitude image data, respectively. Estimates of mean, standard deviation (SD), skewness, kurtosis and the first term of the spatial correlation function were evaluated in the region of the left ventricle. A receiver operating characteristic analysis was performed on both sets of data and the human and statistical decisions compared. For phase images, circular SD was shown to discriminate better between normal and abnormal than experienced observers, but no single statistic discriminated as well as the human observer for amplitude images. (orig.)
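    Circular statistics such as the circular SD used for the phase images are computed from the mean resultant vector of the phase angles rather than from ordinary moments. A minimal sketch with invented phase samples:

```python
import math

def circular_sd(angles_deg):
    """Circular standard deviation sqrt(-2 ln R) (in degrees), where R is
    the length of the mean resultant vector of the phase angles."""
    n = len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    r = math.hypot(c, s)  # mean resultant length, 0 <= r <= 1
    return math.degrees(math.sqrt(-2.0 * math.log(r)))

# Invented phase samples: angles near 0/360 wrap around, which is exactly
# why a linear SD would be misleading here.
tight = circular_sd([358, 2, 1, 359, 0])      # nearly synchronous phases
spread = circular_sd([310, 350, 20, 60, 90])  # dyssynchronous phases
```

Note that a linear SD of the first sample would be huge (values near 0 and 359 mixed), while the circular SD correctly reports a tight distribution.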

  7. Statistical analysis of the count and profitability of air conditioners.

    Science.gov (United States)

    Rady, El Houssainy A; Mohamed, Salah M; Abd Elmegaly, Alaa A

    2018-08-01

    This article presents a statistical analysis of the number and profitability of air conditioners in an Egyptian company. The hypothesis of a common distribution across the levels of each categorical variable was tested using the Kruskal-Wallis test.

  8. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.

  9. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages

    Science.gov (United States)

    Kim, Yoonsang; Emery, Sherry

    2013-01-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods’ performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages—SAS GLIMMIX Laplace and SuperMix Gaussian quadrature—perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes. PMID:24288415

  10. Logistic Regression with Multiple Random Effects: A Simulation Study of Estimation Methods and Statistical Packages.

    Science.gov (United States)

    Kim, Yoonsang; Choi, Young-Ku; Emery, Sherry

    2013-08-01

    Several statistical packages are capable of estimating generalized linear mixed models and these packages provide one or more of three estimation methods: penalized quasi-likelihood, Laplace, and Gauss-Hermite. Many studies have investigated these methods' performance for the mixed-effects logistic regression model. However, the authors focused on models with one or two random effects and assumed a simple covariance structure between them, which may not be realistic. When there are multiple correlated random effects in a model, the computation becomes intensive, and often an algorithm fails to converge. Moreover, in our analysis of smoking status and exposure to anti-tobacco advertisements, we have observed that when a model included multiple random effects, parameter estimates varied considerably from one statistical package to another even when using the same estimation method. This article presents a comprehensive review of the advantages and disadvantages of each estimation method. In addition, we compare the performances of the three methods across statistical packages via simulation, which involves two- and three-level logistic regression models with at least three correlated random effects. We apply our findings to a real dataset. Our results suggest that two packages, SAS GLIMMIX Laplace and SuperMix Gaussian quadrature, perform well in terms of accuracy, precision, convergence rates, and computing speed. We also discuss the strengths and weaknesses of the two packages in regard to sample sizes.

  11. Comparison of the SASSYS/SAS4A radial core expansion reactivity feedback model and the empirical correlation for FFTF

    International Nuclear Information System (INIS)

    Wigeland, R.A.

    1987-01-01

    The present emphasis on inherent safety for LMR designs has resulted in a need to represent the various reactivity feedback mechanisms as accurately as possible. The dominant negative reactivity feedback has been found to result from radial expansion of the core for most postulated ATWS events. For this reason, a more detailed model for calculating the reactivity feedback from radial core expansion has recently been developed for use with the SASSYS/SAS4A Code System. The purpose of this summary is to present an extension to the model so that it is more suitable for handling a core restraint design as used in FFTF, and to compare the SASSYS/SAS4A results using this model to the empirical correlation presently being used to account for radial core expansion reactivity feedback for FFTF

  12. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    Science.gov (United States)

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  13. Statistical Analysis of the Exchange Rate of Bitcoin.

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
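
    The analysis above works on log-returns of the exchange rate rather than on prices. As a minimal illustration of that first step (not the authors' code; the function names and any price data are invented), the transformation and the excess-kurtosis check that motivates heavy-tailed fits can be sketched as:

```python
import math

def log_returns(prices):
    """Log-returns of a price series: r_t = ln(p_t / p_{t-1})."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def summary(returns):
    """Mean, standard deviation and excess kurtosis of the log-returns.
    Excess kurtosis well above 0 (the normal value) is the usual sign
    that heavy-tailed families such as the generalized hyperbolic will
    fit better than the normal distribution."""
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    kurt = sum((r - mean) ** 4 for r in returns) / (n * var ** 2) - 3.0
    return mean, math.sqrt(var), kurt
```

    Fitting and ranking the fifteen parametric families themselves requires maximum-likelihood routines for each distribution, which is beyond a sketch like this.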

  14. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)

    2005-09-26

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained the expression of the mean square displacement and effective exponent as functions of time and percolation probability by statistical analysis and made a comparison with simulations. We got a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.
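
    The simulation described above, growing self-avoiding walks on a diluted lattice and tracking the mean square displacement, can be sketched in miniature. This is a toy version, not the authors' code: the lattice size, occupation probability and walker count below are arbitrary choices.

```python
import random

def grow_saw(L=101, p=0.8, steps=50, rng=None):
    """Grow one self-avoiding walk on an L x L site-percolation lattice
    (each site open with probability p, decided lazily). Returns the
    squared displacement from the start after each step; the walk stops
    early if it becomes trapped."""
    rng = rng or random.Random()
    state = {}
    def is_open(site):
        if site not in state:
            state[site] = rng.random() < p
        return state[site]
    x0 = y0 = L // 2
    x, y = x0, y0
    visited = {(x, y)}
    r2 = []
    for _ in range(steps):
        moves = [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if (x + dx, y + dy) not in visited and is_open((x + dx, y + dy))]
        if not moves:
            break  # the growing walk is trapped
        x, y = rng.choice(moves)
        visited.add((x, y))
        r2.append((x - x0) ** 2 + (y - y0) ** 2)
    return r2

def mean_square_displacement(n_walkers=200, **kwargs):
    """Average r^2(t) over independent walkers; the average at time t
    runs over the walkers that survived at least t + 1 steps."""
    rng = random.Random(0)
    runs = [grow_saw(rng=rng, **kwargs) for _ in range(n_walkers)]
    horizon = max(len(r) for r in runs)
    return [sum(r[t] for r in runs if len(r) > t) /
            sum(1 for r in runs if len(r) > t) for t in range(horizon)]
```

    Plotting the mean square displacement against time on log-log axes, for several occupation probabilities, is the usual way to read off the effective exponent the abstract refers to.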

  15. General specifications for the development of a USL NASA PC R and D statistical analysis support package

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros

    1984-01-01

    The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed to be a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.

  16. A method for statistical steady state thermal analysis of reactor cores

    International Nuclear Information System (INIS)

    Whetton, P.A.

    1981-01-01

    In a previous publication the author presented a method for undertaking statistical steady state thermal analyses of reactor cores. The present paper extends the technique to an assessment of confidence limits for the resulting probability functions which define the probability that a given thermal response value will be exceeded in a reactor core. Establishing such confidence limits is considered an integral part of any statistical thermal analysis and essential if such analyses are to be considered in any regulatory process. In certain applications the use of a best estimate probability function may be justifiable but it is recognised that a demonstrably conservative probability function is required for any regulatory considerations. (orig.)

  17. A statistical test for outlier identification in data envelopment analysis

    Directory of Open Access Journals (Sweden)

    Morteza Khodabin

    2010-09-01

    Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these "deterministic" frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
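
    The deletion procedure described above can be sketched in miniature. A real DEA application solves a linear program per decision-making unit; to keep the sketch short it uses the single-input, single-output case, where the CCR efficiency reduces to a ratio. The data and function names are invented.

```python
def efficiencies(inputs, outputs):
    """CCR efficiency in the single-input, single-output case: each
    unit's output/input ratio, scaled by the best ratio in the sample."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def pared_efficiencies(inputs, outputs):
    """Delete each observation once, re-solve on the reduced sample, and
    collect the resulting efficiency distributions. A unit whose removal
    shifts the remaining scores sharply is a potential outlier."""
    results = []
    for k in range(len(inputs)):
        ins = inputs[:k] + inputs[k + 1:]
        outs = outputs[:k] + outputs[k + 1:]
        results.append(efficiencies(ins, outs))
    return results
```

    For three units with equal inputs and outputs 1, 2 and 4, the full-sample scores are 0.25, 0.5 and 1.0; deleting the frontier-defining third unit rescales the others to 0.5 and 1.0, and it is exactly this sensitivity that the pared test exploits.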

  18. Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis

    Science.gov (United States)

    Konrad, T. G.; Kropfli, R. A.

    1975-01-01

    Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.

  19. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Science.gov (United States)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-06-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  20. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    International Nuclear Information System (INIS)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-01-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  1. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Energy Technology Data Exchange (ETDEWEB)

    Glascock, M. D.; Neff, H. [University of Missouri, Research Reactor Center (United States); Vaughn, K. J. [Pacific Lutheran University, Department of Anthropology (United States)

    2004-06-15

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  2. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship between variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine was found
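
    The report does not spell out the model beyond evaluating a correlation between the two series, so as a generic illustration only (invented function name and data), the simplest such check is a Pearson correlation:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series,
    e.g. monthly precipitation totals and monthly capture counts."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

    A value near zero, as the report's conclusion implies, means the linear association between the two series is negligible.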

  3. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...

  4. Enhancing the solubility and bioavailability of poorly water-soluble drugs using supercritical antisolvent (SAS) process.

    Science.gov (United States)

    Abuzar, Sharif Md; Hyun, Sang-Min; Kim, Jun-Hee; Park, Hee Jun; Kim, Min-Soo; Park, Jeong-Sook; Hwang, Sung-Joo

    2018-03-01

    Poor water solubility and poor bioavailability are problems with many pharmaceuticals. Increasing surface area by micronization is an effective strategy to overcome these problems, but conventional techniques often utilize solvents and harsh processing, which restricts their use. Newer, green technologies, such as supercritical fluid (SCF)-assisted particle formation, can produce solvent-free products under relatively mild conditions, offering many advantages over conventional methods. The antisolvent properties of the SCFs used for microparticle and nanoparticle formation have generated great interest in recent years, because the kinetics of the precipitation process and morphologies of the particles can be accurately controlled. The characteristics of the supercritical antisolvent (SAS) technique make it an ideal tool for enhancing the solubility and bioavailability of poorly water-soluble drugs. This review article focuses on SCFs and their properties, as well as the fundamentals of overcoming poorly water-soluble drug properties by micronization, crystal morphology control, and formation of composite solid dispersion nanoparticles with polymers and/or surfactants. This article also presents an overview of the main aspects of the SAS-assisted particle precipitation process, its mechanism, and parameters, as well as our own experiences, recent advances, and trends in development. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Statistical analysis of the BOIL program in RSYST-III

    International Nuclear Information System (INIS)

    Beck, W.; Hausch, H.J.

    1978-11-01

    The paper describes a statistical analysis in the RSYST-III program system. Using the example of the BOIL program, it is shown how the effects of inaccurate input data on the output data can be discovered. The existing possibilities of data generation, data handling, and data evaluation are outlined. (orig.) [de]
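
    The kind of question this analysis answers, how inaccuracies in the input data propagate to the output, can be illustrated with a generic Monte Carlo sketch. This is not the RSYST-III/BOIL implementation; the model function and noise levels below are invented.

```python
import random

def propagate(model, nominal, sigmas, n=20000, seed=0):
    """Propagate independent Gaussian input uncertainties through a model
    by Monte Carlo sampling; returns the output mean and standard deviation."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        perturbed = [v + rng.gauss(0.0, s) for v, s in zip(nominal, sigmas)]
        outputs.append(model(perturbed))
    mean = sum(outputs) / n
    var = sum((o - mean) ** 2 for o in outputs) / (n - 1)
    return mean, var ** 0.5
```

    For a linear model the result can be checked analytically: summing two inputs with standard deviations 3 and 4 gives an output standard deviation of 5.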

  6. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    International Nuclear Information System (INIS)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T.

    1993-01-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs

  7. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    Energy Technology Data Exchange (ETDEWEB)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T. (University of Santiago, Santiago (Spain). Faculty of Mathematics, Dept. of Statistics and Operations Research)

    1993-07-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs.

  8. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    Science.gov (United States)

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
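
    The 7Q10 statistic mentioned above is a two-step computation: annual minima of the 7-day moving-average flow, then the flow with a 10-year recurrence. SWToolbox's own frequency analysis fits a probability distribution to the annual minima; the sketch below (not SWToolbox code) substitutes a simple Weibull plotting-position estimate, and the data layout is invented.

```python
def seven_day_minima(daily_flows_by_year):
    """Annual minimum of the 7-day moving-average flow, one value per
    year (each year given as a list of daily discharges)."""
    minima = []
    for flows in daily_flows_by_year:
        window_avgs = [sum(flows[i:i + 7]) / 7 for i in range(len(flows) - 6)]
        minima.append(min(window_avgs))
    return minima

def low_flow_quantile(annual_minima, return_period=10):
    """T-year low flow via the Weibull plotting position: the observed
    value whose nonexceedance probability k/(n+1) is closest to 1/T."""
    ordered = sorted(annual_minima)
    n = len(ordered)
    target = 1.0 / return_period
    k = min(range(n), key=lambda i: abs((i + 1) / (n + 1) - target))
    return ordered[k]
```

    With a long record the plotting-position estimate and a fitted distribution usually agree closely; for short records the fitted distribution is preferred, which is why tools such as SWToolbox provide it.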

  9. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Science.gov (United States)

    2011-01-01

    This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practice with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids. PMID:21711932

  10. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Science.gov (United States)

    Sergis, Antonis; Hardalupas, Yannis

    2011-05-01

    This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practice with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids.

  11. Anomalous heat transfer modes of nanofluids: a review based on statistical analysis

    Directory of Open Access Journals (Sweden)

    Sergis Antonis

    2011-01-01

    Full Text Available This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practice with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids.

  12. Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks

    Directory of Open Access Journals (Sweden)

    Luciano Pivoto Specht

    2007-03-01

    Full Text Available It is of great importance to know a binder's viscosity in order to carry out handling, mixing, application, and asphalt-mix compaction in highway surfacing. This paper presents the results of viscosity measurements on asphalt-rubber binders prepared in the laboratory. The binders were prepared varying the rubber content, rubber particle size, and the duration and temperature of mixing, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for predicting the binders' viscosity. Comparison between the experimental data and the results simulated with the generated models showed better performance for the neural network models than for the statistical models. The results indicated that rubber content and mixing duration have the major influence on the observed viscosity over the considered range of parameter variation.

  13. Common pitfalls in statistical analysis: Odds versus risk

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh; Pramesh, C. S.

    2015-01-01

    In biomedical research, we are often interested in quantifying the relationship between an exposure and an outcome. “Odds” and “Risk” are the most common terms which are used as measures of association between variables. In this article, which is the fourth in the series of common pitfalls in statistical analysis, we explain the meaning of risk and odds and the difference between the two. PMID:26623395
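
    The distinction drawn above can be made concrete with a small 2x2-table calculation (the counts below are invented for illustration):

```python
def risk_and_odds_ratios(events_a, total_a, events_b, total_b):
    """Risk is events/total; odds is events/non-events. Returns the
    risk ratio and odds ratio comparing group A with group B."""
    risk_a, risk_b = events_a / total_a, events_b / total_b
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return risk_a / risk_b, odds_a / odds_b

# 30/100 events among the exposed versus 10/100 among controls:
rr, orr = risk_and_odds_ratios(30, 100, 10, 100)
# The risk ratio is 3.0, while the odds ratio is 27/7 (about 3.86):
# the odds ratio overstates the risk ratio unless the outcome is rare.
```

    With rare outcomes (say 3/100 versus 1/100) the two measures nearly coincide, which is why odds ratios from case-control studies are often read as approximate risk ratios.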

  14. Statistical Analysis of the Exchange Rate of Bitcoin.

    Directory of Open Access Journals (Sweden)

    Jeffrey Chu

    Full Text Available Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.

  15. Statistical Analysis of the Exchange Rate of Bitcoin

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate. PMID:26222702

  16. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
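
    The computation the article teaches, one-way ANOVA from group sizes, means and standard deviations alone, follows from the standard sum-of-squares decomposition. A generic sketch (not the article's spreadsheet):

```python
def anova_from_summary(ns, means, sds):
    """One-way ANOVA F statistic computed from per-group sample sizes,
    means and standard deviations; no raw observations are needed."""
    k, N = len(ns), sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / N
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_between, df_within = k - 1, N - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within
```

    With two groups of three observations, means 0 and 2 and unit standard deviations, this gives F = 6 on (1, 4) degrees of freedom; the p-value then comes from the F distribution, available in Excel as F.DIST.RT.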

  17. The Australasian Resuscitation in Sepsis Evaluation (ARISE) trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony P; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-09-01

    The Australasian Resuscitation in Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the emergency department with severe sepsis. In keeping with current practice, and considering aspects of trial design and reporting specific to non-pharmacological interventions, our plan outlines the principles and methods for analysing and reporting the trial results. The document is prepared before completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and before completion of the two related international studies. Our statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. We reviewed the data collected by the research team as specified in the study protocol and detailed in the study case report form. We describe information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation, other related therapies and other relevant data with appropriate comparisons between groups. We define the primary, secondary and tertiary outcomes for the study, with description of the planned statistical analyses. We have developed a statistical analysis plan with a trial profile, mock-up tables and figures. We describe a plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies and adverse events. We describe the primary, secondary and tertiary outcomes with identification of subgroups to be analysed. We have developed a statistical analysis plan for the ARISE study, available in the public domain, before the completion of recruitment into the study. This will minimise analytical bias and

  18. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
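
    The report describes the UCL95% as a function of the number of samples, the average, and the standard deviation; that description matches the standard one-sided Student-t upper confidence limit, sketched below as an assumption rather than the report's exact formula. The caller supplies the t critical value (about 2.015 one-sided at 95% for five degrees of freedom, i.e. six samples).

```python
import math

def ucl95(values, t_crit):
    """One-sided upper 95% confidence limit on the mean, assuming the
    standard Student-t form: UCL95 = xbar + t * s / sqrt(n), with t the
    one-sided 95% critical value for n - 1 degrees of freedom."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum((v - xbar) ** 2 for v in values) / (n - 1))
    return xbar + t_crit * s / math.sqrt(n)
```

    As the report notes, the limit tightens as the number of samples grows and widens with the spread of the analytical results.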

  19. Vector-field statistics for the analysis of time varying clinical gait data.

    Science.gov (United States)

    Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A

    2017-01-01

    In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analysis of these data is commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures in the event statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration of practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. For tester 1 and tester 2, they were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that in principle, vector-field statistics corroborate what a team of clinical gait experts would classify as practically meaningful pre- versus post-intervention time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established. Future research is recommended to assess the usefulness of vector-field analyses

  20. Introduction to statistics and data analysis with exercises, solutions and applications in R

    CERN Document Server

    Heumann, Christian; Shalabh

    2016-01-01

    This introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. In the experimental sciences and interdisciplinary research, data analysis has become an integral part of any scientific study. Issues such as judging the credibility of data, analyzing the data, evaluating the reliability of the obtained results and finally drawing the correct and appropriate conclusions from the results are vital. The text is primarily intended for undergraduate students in disciplines like business administration, the social sciences, medicine, politics, macroeconomics, etc. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R as well as supplementary material that will enable the reader to quickly adapt all methods to their own applications.

  1. Methodology of comparative statistical analysis of Russian industry based on cluster analysis

    Directory of Open Access Journals (Sweden)

    Sergey S. Shishulin

    2017-01-01

    The article is devoted to exploring the possibilities of applying multidimensional statistical analysis to the study of industrial production, on the basis of comparing its growth rates and structure with those of other developed and developing countries. The purpose of this article is to determine the optimal set of statistical methods, and the results of applying them to industrial production data, that would best support analysis of the results. The data include indicators such as output, gross value added, the number of employed, and other indicators from the system of national accounts and operational business statistics. The objects of observation are the industries of the countries of the Customs Union, the United States, Japan and Europe in 2005-2015. The research tools range from the simplest methods of transformation and graphical and tabular visualization of data to methods of multivariate statistical analysis. In particular, using a specialized software package (SPSS), principal component analysis, discriminant analysis, hierarchical cluster analysis (Ward's method) and k-means were applied. Applying principal component analysis to the initial data makes it possible to reduce the initial space of industrial production data substantially and effectively. For example, in analyzing the structure of industrial production, fifteen industries were reduced to three basic, well-interpreted factors: relatively extractive industries (with a low degree of processing), high-tech industries, and consumer goods (medium-technology sectors). At the same time, comparing the results of cluster analysis applied to the initial data and to the data obtained from the principal component analysis established that clustering industrial production data on the basis of the new factors significantly improves the clustering results. As a result of analyzing the parameters of
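
    The pipeline the abstract describes (principal components first, clustering on the reduced factors) can be sketched in a few lines. The sketch uses synthetic country-by-branch data and plain NumPy rather than SPSS; every number below is illustrative, not the article's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the article's data: rows are countries, columns are
# output shares of 15 industrial branches. All values are synthetic.
n_countries, n_branches = 30, 15
X = rng.normal(0.0, 1.0, (n_countries, n_branches))
X[:10, :5] += 3.0      # a "resource-heavy" group
X[10:20, 5:10] += 3.0  # a "high-tech" group

# Step 1 -- principal components: centre the data and keep the first
# three factors, mirroring the article's reduction from 15 branches to
# three interpretable factors.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :3] * S[:3]   # country coordinates on the 3 factors

# Step 2 -- k-means (plain Lloyd's algorithm) on the factor scores.
def kmeans(pts, k, n_iter=100, rng=None):
    rng = rng or np.random.default_rng()
    centres = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((pts[:, None, :] - centres[None]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        centres = np.array([pts[labels == j].mean(axis=0)
                            if np.any(labels == j) else centres[j]
                            for j in range(k)])
    return labels

labels = kmeans(scores, k=3, rng=rng)
print(labels)
```

    Clustering on the three factor scores rather than the fifteen raw branches is exactly the design choice the article reports as improving the clustering results.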

  2. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and the final survey. At each stage, collecting data relevant enough to draw the needed conclusions is a considerable challenge. In particular, two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorisation and optimisation of the materials to be removed, and the final survey to demonstrate compliance with clearance levels. The latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. A more complex evaluation methodology, however, has to be implemented for the initial radiological characterisation, both for the sampling design and for the data analysis. The geostatistical framework is an efficient way to satisfy these radiological characterisation requirements, providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of spatial continuity in the radiological contamination. Geostatistics thus provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). In this way, the radiological characterisation of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis possible provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. Combined with

  3. Cdk1 Phosphorylates Drosophila Sas-4 to Recruit Polo to Daughter Centrioles and Convert Them to Centrosomes.

    Science.gov (United States)

    Novak, Zsofia A; Wainman, Alan; Gartenmann, Lisa; Raff, Jordan W

    2016-06-20

    Centrosomes and cilia are organized by a centriole pair comprising an older mother and a younger daughter. Centriole numbers are tightly regulated, and daughter centrioles (which assemble in S phase) cannot themselves duplicate or organize centrosomes until they have passed through mitosis. It is unclear how this mitotic "centriole conversion" is regulated, but it requires Plk1/Polo kinase. Here we show that in flies, Cdk1 phosphorylates the conserved centriole protein Sas-4 during mitosis. This creates a Polo-docking site that helps recruit Polo to daughter centrioles and is required for the subsequent recruitment of Asterless (Asl), a protein essential for centriole duplication and mitotic centrosome assembly. Point mutations in Sas-4 that prevent Cdk1 phosphorylation or Polo docking do not block centriole disengagement during mitosis, but block efficient centriole conversion and lead to embryonic lethality. These observations can explain why daughter centrioles have to pass through mitosis before they can duplicate and organize a centrosome. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  4. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system

  5. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1975-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system. 19 references

  6. RESEARCH OF THE DATA BANK OF STATISTICAL ANALYSIS OF THE ADVERTISING MARKET

    Directory of Open Access Journals (Sweden)

    Ekaterina F. Devochkina

    2014-01-01

    The article describes the process of compiling statistical accounts of the Russian advertising market. The author reviews the forms of state statistical accounting used in different years, noting their distinctive features and shortcomings. The article also analyses alternative sources of numerical information on the Russian advertising market.

  7. Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014

    CERN Document Server

    Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina

    2016-01-01

    This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...

  8. Halo statistics analysis within medium volume cosmological N-body simulation

    Directory of Open Access Journals (Sweden)

    Martinović N.

    2015-01-01

    In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well-known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. Visualization of the simulation is presented as well. At redshifts z = 0-7 the halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 - 10^14 M⊙/h. [Project 176021: 'Visible and invisible matter in nearby galaxies: theory and observations']
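
    Recovering an exponent like n = 2.4 from merger-rate measurements is a one-line log-log regression. The sketch below uses synthetic points generated from that power law; the simulation's actual merger rates are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative merger-rate points assumed to follow R(z) = R0 * (1+z)^n
# with n = 2.4 (the exponent reported above) plus mild log-normal
# scatter. Both R0 and the scatter level are invented.
z = np.linspace(0.1, 6.0, 20)
R0, n_true = 0.05, 2.4
rate = R0 * (1 + z) ** n_true * rng.lognormal(0.0, 0.05, z.size)

# The power law is linear in log space:
#   log R = log R0 + n * log(1 + z),
# so ordinary least squares on the logs recovers the exponent n.
slope, intercept = np.polyfit(np.log(1 + z), np.log(rate), 1)
print(round(slope, 2))   # recovered exponent, close to the true 2.4
```
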

  9. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  10. The SCALE analysis sequence for LWR fuel depletion

    International Nuclear Information System (INIS)

    Hermann, O.W.; Parks, C.V.

    1991-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs

  11. Application of Snowfall and Wind Statistics to Snow Transport Modeling for Snowdrift Control in Minnesota.

    Science.gov (United States)

    Shulski, Martha D.; Seeley, Mark W.

    2004-11-01

    Models were utilized to determine the snow accumulation season (SAS) and to quantify windblown snow for the purpose of snowdrift control for locations in Minnesota. The models require mean monthly temperature, snowfall, density of snow, and wind frequency distribution statistics. Temperature and precipitation data were obtained from local cooperative observing sites, and wind data came from Automated Surface Observing System (ASOS)/Automated Weather Observing System (AWOS) sites in the region. The temperature-based algorithm used to define the SAS reveals a geographic variability in the starting and ending dates of the season, which is determined by latitude and elevation. Mean seasonal snowfall shows a geographic distribution that is affected by topography and proximity to Lake Superior. Mean snowfall density also exhibits variability, with lower-density snow events displaced to higher-latitude positions. Seasonal wind frequencies show a strong bimodal distribution with peaks from the northwest and southeast vector directions, with an exception for locations in close proximity to the Lake Superior shoreline. In addition, western and south-central Minnesota see a considerably higher frequency of wind speeds above the mean snow transport threshold of 7 m s⁻¹. As such, this area is more conducive to higher potential snow transport totals. Snow relocation coefficients in this area are in the range of 0.4-0.9, and, according to the empirical models used in this analysis, this range implies that actual snow transport is 40%-90% of the total potential in south-central and western areas of the state.

  12. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out regarding: the determination of production dynamics; the evaluation of short-run investment volume; the development of turnover; wage evolution; employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on the data sources offered by the World Bank, the World Institute for Resources and the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performance. At the end of the paper there is a case study on the situation of Romania, for which all these indicators were used.

  13. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-03-01

    Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out regarding: the determination of production dynamics; the evaluation of short-run investment volume; the development of turnover; wage evolution; employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on the data sources offered by the World Bank, the World Institute for Resources and the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performance. At the end of the paper there is a case study on the situation of Romania, for which all these indicators were used.

  14. Statistical analysis of proteomics, metabolomics, and lipidomics data using mass spectrometry

    CERN Document Server

    Mertens, Bart

    2017-01-01

    This book presents an overview of computational and statistical design and analysis of mass spectrometry-based proteomics, metabolomics, and lipidomics data. This contributed volume provides an introduction to the special aspects of statistical design and analysis with mass spectrometry data for the new omic sciences. The text discusses common aspects of design and analysis between and across all (or most) forms of mass spectrometry, while also providing special examples of application with the most common forms of mass spectrometry. Also covered are applications of computational mass spectrometry not only in clinical study but also in the interpretation of omics data in plant biology studies. Omics research fields are expected to revolutionize biomolecular research by the ability to simultaneously profile many compounds within either patient blood, urine, tissue, or other biological samples. Mass spectrometry is one of the key analytical techniques used in these new omic sciences. Liquid chromatography mass ...

  15. The Proposal of Co-Branding Strategy PT. XYZ and SAS in Automotive Sector in SPAIN Market to Increase PT. XYZ Reputation in International Market

    OpenAIRE

    Putra A, Freggy Griyatta; Nasution, Reza Ashari

    2012-01-01

    The development of the lubricant market in the Asia-Pacific, Africa, the Middle East, and South America is an opportunity for PT. XYZ in the future. PT. XYZ wants to improve its brand image through co-branding with the SAS company, one of the local oil companies in Spain. The study also analyzes the co-branding strategy, through a joint venture of PT. XYZ with SAS, to improve the company's brand image in the international market. The conceptual framework of this research started from the goal of PT....

  16. Three-Dimensional Assembly Tolerance Analysis Based on the Jacobian-Torsor Statistical Model

    Directory of Open Access Journals (Sweden)

    Peng Heping

    2017-01-01

    The unified Jacobian-Torsor model has been developed for deterministic (worst-case) tolerance analysis. This paper presents a comprehensive model for performing statistical tolerance analysis by integrating the unified Jacobian-Torsor model and Monte Carlo simulation. In this model, an assembly is subdivided into surfaces, and Small Displacement Torsor (SDT) parameters are used to express the relative position between any two surfaces of the assembly. A 3D dimension chain can then be created by using a surface graph of the assembly, and the unified Jacobian-Torsor model is developed based on the effect of each functional element on the overall functional requirements of the product. Finally, Monte Carlo simulation is implemented for the statistical tolerance analysis. A numerical example is given to demonstrate the capability of the proposed method in handling three-dimensional assembly tolerance analysis.
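
    The worst-case versus statistical contrast at the heart of the paper can be illustrated with a one-dimensional stand-in for the torsor chain. The dimensions, tolerances, and sensitivities below are invented, and the full 3D Jacobian-Torsor bookkeeping is reduced to a simple linear stack:

```python
import numpy as np

rng = np.random.default_rng(3)

# One-dimensional stand-in for the Jacobian-Torsor chain: the functional
# requirement is a linear stack FR = d1 + d2 - d3 of three part
# dimensions (the Jacobian likewise reduces each torsor contribution to
# a linear sensitivity). Nominals and tolerances are invented.
nominal = np.array([40.0, 25.0, 60.0])   # mm
tol = np.array([0.05, 0.04, 0.06])       # +/- mm, interpreted as 3-sigma
sens = np.array([1.0, 1.0, -1.0])        # linearised sensitivities

n = 200_000
dims = rng.normal(nominal, tol / 3.0, size=(n, 3))
fr = dims @ sens                          # Monte Carlo samples of FR

# Deterministic worst case simply adds the tolerances; the Monte Carlo
# 3-sigma spread approaches the root-sum-square instead, which is why
# statistical tolerance analysis gives tighter, more realistic limits.
worst = float(np.abs(sens) @ tol)         # 0.15 mm
stat3 = float(3.0 * fr.std())             # ~sqrt(0.05**2 + 0.04**2 + 0.06**2)
print(round(worst, 3), round(stat3, 3))
```
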

  17. Statistical methods for data analysis in particle physics

    CERN Document Server

    AUTHOR|(CDS)2070643

    2015-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as a reminder of advanced undergraduate studies, but also with a view to clearly distinguishing the frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits, since many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as eventually to distinguish between competing hypotheses or to rule some of them out altogether. Many worked examples help newcomers to the field and graduate students understand the pitfalls of applying theoretical concepts to actual data.

  18. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    Science.gov (United States)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterise the normal, deficit, excess and seasonal rainfall of the selected circle headquarters. Further, the various plotting-position formulae available are used to evaluate the return periods of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The scientific results and the analysis paved the way to determining the proper onset and withdrawal of the monsoon, which is useful for land preparation and sowing.
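
    The basic quantities mentioned above (mean, standard deviation, coefficient of variation, and plotting-position return periods) can be computed directly. The sketch below uses a synthetic 30-year series and the Weibull plotting position, one of the several formulae such studies compare; the Trichy gauge data are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 30-year annual rainfall series in mm; these values are
# illustrative stand-ins, not the actual gauge record.
annual = rng.normal(900.0, 180.0, 30).round(1)

mean = annual.mean()
sd = annual.std(ddof=1)
cv = 100.0 * sd / mean          # coefficient of variation, in percent

# Weibull plotting position: rank the annual totals in descending order;
# exceedance probability P = m / (N + 1), return period T = 1 / P.
ranked = np.sort(annual)[::-1]
m = np.arange(1, ranked.size + 1)
T = (ranked.size + 1) / m       # years

# The largest annual total in a 30-year record gets T = 31 years.
print(round(mean, 1), round(cv, 1), round(T[0], 1))
```
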

  19. Statistical strategies to reveal potential vibrational markers for in vivo analysis by confocal Raman spectroscopy

    Science.gov (United States)

    Oliveira Mendes, Thiago de; Pinto, Liliane Pereira; Santos, Laurita dos; Tippavajhala, Vamshi Krishna; Téllez Soto, Claudio Alberto; Martin, Airton Abrahão

    2016-07-01

    The analysis of biological systems by spectroscopic techniques involves the evaluation of hundreds to thousands of variables. Hence, different statistical approaches are used to elucidate regions that discriminate classes of samples and to propose new vibrational markers for explaining various phenomena like disease monitoring, mechanisms of action of drugs, food, and so on. However, the technical statistics are not always widely discussed in applied sciences. In this context, this work presents a detailed discussion including the various steps necessary for proper statistical analysis. It includes univariate parametric and nonparametric tests, as well as multivariate unsupervised and supervised approaches. The main objective of this study is to promote proper understanding of the application of various statistical tools in these spectroscopic methods used for the analysis of biological samples. The discussion of these methods is performed on a set of in vivo confocal Raman spectra of human skin analysis that aims to identify skin aging markers. In the Appendix, a complete routine of data analysis is executed in a free software that can be used by the scientific community involved in these studies.

  20. A method for statistical steady state thermal analysis of reactor cores

    International Nuclear Information System (INIS)

    Whetton, P.A.

    1980-01-01

    This paper presents a method for performing a statistical steady state thermal analysis of a reactor core. The technique is only outlined here since detailed thermal equations are dependent on the core geometry. The method has been applied to a pressurised water reactor core and the results are presented for illustration purposes. Random hypothetical cores are generated using the Monte-Carlo method. The technique shows that by splitting the parameters into two types, denoted core-wise and in-core, the Monte Carlo method may be used inexpensively. The idea of using extremal statistics to characterise the low probability events (i.e. the tails of a distribution) is introduced together with a method of forming the final probability distribution. After establishing an acceptable probability of exceeding a thermal design criterion, the final probability distribution may be used to determine the corresponding thermal response value. If statistical and deterministic (i.e. conservative) thermal response values are compared, information on the degree of pessimism in the deterministic method of analysis may be inferred and the restrictive performance limitations imposed by this method relieved. (orig.)
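
    The core-wise/in-core parameter split and the use of extremal statistics can be mimicked with a toy Monte Carlo. The "thermal model" below is a placeholder linear response with invented sigmas, not the paper's thermal equations (which, as noted, depend on the core geometry):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy version of the two-type split described above: "core-wise"
# parameters are sampled once per hypothetical core, "in-core"
# parameters once per channel. All numbers are invented placeholders.
n_cores, n_channels = 2_000, 500
t_nominal = 620.0                         # nominal response, made up

core_wise = rng.normal(0.0, 5.0, size=(n_cores, 1))
in_core = rng.normal(0.0, 8.0, size=(n_cores, n_channels))
temps = t_nominal + core_wise + in_core

# Extremal statistic per hypothetical core: the hottest channel. The
# distribution of these maxima characterises the low-probability tail,
# from which a 95th-percentile thermal response value is read off.
hottest = temps.max(axis=1)
t95 = float(np.quantile(hottest, 0.95))

# A deterministic (conservative) analysis stacks worst-case allowances
# instead, e.g. 3 sigma on every effect simultaneously:
t_det = t_nominal + 3.0 * 5.0 + 3.0 * 8.0   # = 659.0
print(round(t95, 1), t_det)
```

    Comparing the statistical value with the deterministic one quantifies the pessimism of the conservative method, which is the comparison the abstract proposes.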

  1. Two-condition within-participant statistical mediation analysis: A path-analytic framework.

    Science.gov (United States)

    Montoya, Amanda K; Hayes, Andrew F

    2017-03-01

    Researchers interested in testing mediation often use designs where participants are measured on a dependent variable Y and a mediator M in both of 2 different circumstances. The dominant approach to assessing mediation in such a design, proposed by Judd, Kenny, and McClelland (2001), relies on a series of hypothesis tests about components of the mediation model and is not based on an estimate of or formal inference about the indirect effect. In this article we recast Judd et al.'s approach in the path-analytic framework that is now commonly used in between-participant mediation analysis. By so doing, it is apparent how to estimate the indirect effect of a within-participant manipulation on some outcome through a mediator as the product of paths of influence. This path-analytic approach eliminates the need for discrete hypothesis tests about components of the model to support a claim of mediation, as Judd et al.'s method requires, because it relies only on an inference about the product of paths-the indirect effect. We generalize methods of inference for the indirect effect widely used in between-participant designs to this within-participant version of mediation analysis, including bootstrap confidence intervals and Monte Carlo confidence intervals. Using this path-analytic approach, we extend the method to models with multiple mediators operating in parallel and serially and discuss the comparison of indirect effects in these more complex models. We offer macros and code for SPSS, SAS, and Mplus that conduct these analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
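
    For readers without SPSS, SAS, or Mplus, the path-analytic indirect effect with a percentile bootstrap can be sketched in plain Python. The model follows the two-condition within-participant setup described above (the mean M difference as the a path; the Y difference regressed on the M difference with the mean-centred M average as covariate), but the data and effect sizes are simulated:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated two-condition within-participant data: 60 participants each
# provide mediator M and outcome Y under both conditions. The condition
# shifts M by 0.8 and M drives Y with slope 0.5, so the true indirect
# effect is 0.8 * 0.5 = 0.4. All numbers are illustrative.
n = 60
m1 = rng.normal(0.0, 1.0, n)
m2 = m1 + 0.8 + rng.normal(0.0, 0.5, n)
y1 = 0.5 * m1 + rng.normal(0.0, 0.5, n)
y2 = 0.5 * m2 + rng.normal(0.0, 0.5, n)

def indirect(m1, m2, y1, y2):
    """a*b indirect effect for the within-participant design above.

    a: mean condition effect on the mediator. b: coefficient on the M
    difference when the Y difference is regressed on it together with
    the mean-centred M average (the path-analytic setup).
    """
    a = (m2 - m1).mean()
    avg = (m1 + m2) / 2.0
    X = np.column_stack([np.ones_like(m1), m2 - m1, avg - avg.mean()])
    coef, *_ = np.linalg.lstsq(X, y2 - y1, rcond=None)
    return a * coef[1]

ab = indirect(m1, m2, y1, y2)

# Percentile bootstrap CI: resample participants with replacement, so
# the inference is about the product of paths, not its components.
boot = np.empty(5_000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect(m1[idx], m2[idx], y1[idx], y2[idx])
lo, hi = np.quantile(boot, [0.025, 0.975])
print(round(float(ab), 3), (round(float(lo), 3), round(float(hi), 3)))
```
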

  2. Statistical analysis of first period of operation of FTU Tokamak

    International Nuclear Information System (INIS)

    Crisanti, F.; Apruzzese, G.; Frigione, D.; Kroegler, H.; Lovisetto, L.; Mazzitelli, G.; Podda, S.

    1996-09-01

    On the FTU Tokamak, plasma physics operations started on 20/4/90. The first plasma had a plasma current Ip = 0.75 MA for about a second. The experimental phase lasted until 7/7/94, when a long shutdown began in order to install the toroidal limiter on the inner side of the vacuum vessel. In these four years of operation, plasma experiments were successfully carried out, e.g. experiments with single and multiple pellet injections; full current drive up to Ip = 300 kA obtained using waves at the Lower Hybrid frequency; and analysis of ohmic plasma parameters with different materials (from low-Z silicon to high-Z tungsten) as the plasma-facing element. In this work a statistical analysis of the full period of operation is presented. Moreover, a comparison with statistical data from other Tokamaks is attempted.

  3. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  4. Using R and RStudio for data management, statistical analysis and graphics

    CERN Document Server

    Horton, Nicholas J

    2015-01-01

    This is the second edition of the popular book on using R for statistical analysis and graphics. The authors, who run a popular blog supplementing their books, have focused on adding many new examples to this new edition. These examples are presented primarily in new chapters based on the following themes: simulation, probability, statistics, mathematics/computing, and graphics. The authors have also added many other updates, including a discussion of RStudio-a very popular development environment for R.

  5. Statistical analysis of absorptive laser damage in dielectric thin films

    International Nuclear Information System (INIS)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-01-01

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze time-to-damage and intensity-to-damage statistics obtained by irradiating thin-film-coated samples of SiO2, ZrO2, and Al2O3 with tightly focused laser beams. The data used are furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained.
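The Weibull-plot regression behind abstracts like this one is easy to reproduce. The sketch below uses synthetic time-to-damage data (the real data came from Milam's measurements, which are not available here): linearize the Weibull CDF and fit a straight line, whose slope is the shape parameter and whose correlation coefficient measures fit quality:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time-to-damage data (arbitrary units), standing in for real measurements.
t = np.sort(rng.weibull(2.0, 50) * 30.0)

# Median-rank estimate of the empirical CDF, then linearize:
# F(t) = 1 - exp(-(t/eta)^beta)  =>  ln(-ln(1-F)) = beta*ln t - beta*ln eta
n = len(t)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
x, y = np.log(t), np.log(-np.log(1.0 - F))

beta, intercept = np.polyfit(x, y, 1)   # slope = Weibull shape parameter
eta = np.exp(-intercept / beta)         # scale parameter
r = np.corrcoef(x, y)[0, 1]             # correlation coefficient of the fit
print(f"shape beta = {beta:.2f}, scale eta = {eta:.1f}, r = {r:.3f}")
```

For genuinely Weibull-distributed data, correlation coefficients above 0.9 on this plot are typical, matching the abstract's observation.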

  6. Statistical analysis of failure time in stress corrosion cracking of fuel tube in light water reactor

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi

    1991-01-01

    This report shows how the stress corrosion cracking life of fuel cladding tubes is evaluated by applying statistical techniques to results obtained with several testing methods. The statistical distribution of the limiting values of constant-load stress corrosion cracking life, the statistical analysis based on a probabilistic interpretation of constant-load stress corrosion cracking life, and the statistical analysis of stress corrosion cracking life from the slow strain rate test (SSRT) method are described. (K.I.)

  7. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results have a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)
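The kind of error analysis described here (checking that averages of Metropolis output are normally distributed and estimating errors despite autocorrelation) can be illustrated on a toy target. This sketch samples a 1-D Gaussian rather than SU(3) link variables, which is a deliberate simplification; the blocked error estimate at the end is the standard technique:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Metropolis sampler for a 1-D Gaussian target (a stand-in for the SU(3)
# update, which proposes group-element changes link by link).
def metropolis(n_steps, step=1.0):
    x, chain = 0.0, np.empty(n_steps)
    for i in range(n_steps):
        prop = x + rng.uniform(-step, step)
        # accept with probability min(1, p(prop)/p(x)) for p(x) ~ exp(-x^2/2)
        if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):
            x = prop
        chain[i] = x
    return chain

chain = metropolis(50_000)

# Blocked (binned) error analysis: block averages are nearly independent,
# so their scatter gives an autocorrelation-corrected error on the mean.
blocks = chain.reshape(100, -1).mean(axis=1)
print(f"mean = {blocks.mean():.3f} +/- {blocks.std(ddof=1) / np.sqrt(len(blocks)):.3f}")
```

Comparing the naive standard error with the blocked one is a quick diagnostic of how strongly the measurement strategy affects the quoted uncertainty.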

  8. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    EI-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper analyses some statistical distributions used in reliability in order to reach the best-fit distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC), whereas the Exponential distribution is found to be the best fit for modeling the failure rate.

  9. Statistical mechanical analysis of LMFBR fuel cladding tubes

    International Nuclear Information System (INIS)

    Poncelet, J.-P.; Pay, A.

    1977-01-01

    The most important design requirement on fuel pin cladding for LMFBRs is its mechanical integrity. Disruptive factors include internal pressure from mixed-oxide fuel fission gas release, thermal stresses and high-temperature creep, neutron-induced differential void swelling as a source of stress in the cladding, irradiation creep of the stainless steel material, and corrosion by fission products. Under irradiation these load-restraining mechanisms are accentuated by stainless steel embrittlement and strength alterations. To account for the numerous uncertainties involved in the analysis by theoretical models and computer codes, statistical tools such as Monte Carlo simulation methods are unavoidable. Thanks to these techniques, uncertainties in nominal characteristics, material properties and environmental conditions can be linked in a correct way and used for a more accurate conceptual design. First, a thermal creep damage index is set up through a sufficiently sophisticated physical analysis of the cladding, including arbitrary time dependence of power and neutron flux as well as effects of sodium temperature, burnup and steel mechanical behavior. Although this strain-limit approach implies a more general but time-consuming model, in return the net output is improved: e.g. clad temperature, stress and strain maxima may be easily assessed. A full spectrum of variables is statistically treated to account for their probability distributions. The creep damage probability may thus be obtained and can contribute to a quantitative estimation of fuel failure probability.
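The Monte Carlo propagation of input uncertainties into a damage probability, as described in this abstract, can be sketched generically. All distributions and the damage index below are invented for illustration and bear no relation to the paper's actual models or design data:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical input distributions (purely illustrative values):
# clad temperature (K), hoop stress (MPa), and an uncertain strength limit.
temp   = rng.normal(900.0, 15.0, N)
stress = rng.normal(120.0, 10.0, N)
limit  = rng.normal(1.0, 0.08, N)

# Toy damage index: higher temperature and stress -> more creep damage.
damage = (stress / 150.0) * np.exp((temp - 900.0) / 100.0)

# Probability that the damage index exceeds the (uncertain) limit.
p_fail = np.mean(damage > limit)
print(f"estimated damage probability = {p_fail:.4f}")
```

The strength of the approach is that correlations and arbitrary distributions for the inputs can be accommodated simply by changing how the samples are drawn.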

  10. A robust statistical method for association-based eQTL analysis.

    Directory of Open Access Journals (Sweden)

    Ning Jiang

    It has been well established that the theoretical kernel of the recently surging genome-wide association study (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for this influence, either by predicting the structure parameters or by correcting inflation in the test statistic due to the stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS arising from population structure by incorporating a control marker into testing for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for a varying effect of population stratification on different regions of the genome under study. The utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers improved statistical power for detecting genuine genetic association in subpopulations and effective control of spurious associations stemming from population structure when compared with two other popularly implemented methods in the GWAS literature.

  11. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  12. Constitution of an incident database suited to statistical analysis and examples

    International Nuclear Information System (INIS)

    Verpeaux, J.L.

    1990-01-01

    The Nuclear Protection and Safety Institute (IPSN) has set up and is developing an incident database, which is used for the management and analysis of incidents encountered in French PWR plants. IPSN has already carried out several statistical analyses of incidents or safety-significant events, and is improving its database on the basis of the experience gained from these various studies. A description of the analysis method and of the developed database is presented.

  13. A new statistic for the analysis of circular data in gamma-ray astronomy

    Science.gov (United States)

    Protheroe, R. J.

    1985-01-01

    A new statistic is proposed for the analysis of circular data. The statistic is designed specifically for situations where a test of uniformity is required which is powerful against alternatives in which a small fraction of the observations is grouped in a small range of directions, or phases.
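The abstract's motivating scenario, a small fraction of events clustered in phase, is exactly where the classic Rayleigh test of circular uniformity loses power. The sketch below applies the Rayleigh test (not the paper's new statistic, which is not specified here) to synthetic phase data of that kind, using the standard large-sample p-value approximation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Phases of 200 events: mostly uniform, with a small clustered fraction --
# the alternative the proposed statistic is designed to detect (synthetic data).
phases = np.concatenate([rng.uniform(0, 2 * np.pi, 190),
                         rng.normal(1.0, 0.05, 10) % (2 * np.pi)])

# Classic Rayleigh test of uniformity for comparison:
n = len(phases)
R2 = np.sum(np.cos(phases))**2 + np.sum(np.sin(phases))**2
z = R2 / n
p = np.exp(-z)   # large-n approximation to the Rayleigh p-value
print(f"Rayleigh z = {z:.2f}, approx p = {p:.3f}")
```

With only 5% of events clustered, the Rayleigh test typically returns an unremarkable p-value, which illustrates why a statistic targeted at narrow-phase grouping is worth having.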

  14. Statistical Compilation of the ICT Sector and Policy Analysis | CRDI ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  16. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...

  17. Dataset on statistical analysis of editorial board composition of Hindawi journals indexed in Emerging sources citation index

    Directory of Open Access Journals (Sweden)

    Hilary I. Okagbue

    2018-04-01

    This data article contains the statistical analysis of the total, percentage and distribution of the editorial board composition of 111 Hindawi journals indexed in the Emerging Sources Citation Index (ESCI) across the continents. The reliability of the data was shown using correlation, goodness-of-fit tests, analysis of variance and statistical variability tests. Keywords: Hindawi, Bibliometrics, Data analysis, ESCI, Random, Smart campus, Web of science, Ranking analytics, Statistics

  18. Statistical analysis of the determinations of the Sun's Galactocentric distance

    Science.gov (United States)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
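The metrological treatment this abstract calls for, combining heterogeneous measurements with formal errors, boils down to an inverse-variance weighted mean plus a consistency check. The R0 values below are a small invented subset for illustration (the paper analyzes 53 published measurements):

```python
import numpy as np

# Hypothetical published R0 estimates (kpc) with formal errors;
# the actual study uses 53 measurements from the literature.
R0  = np.array([8.0, 8.3, 7.9, 8.4, 8.2])
err = np.array([0.4, 0.2, 0.3, 0.4, 0.25])

# Inverse-variance weighted mean and its formal error.
w = 1.0 / err**2
mean = np.sum(w * R0) / np.sum(w)
sigma = np.sqrt(1.0 / np.sum(w))

# Reduced chi-square checks internal consistency: values near 1 mean the
# scatter is compatible with the quoted formal errors.
chi2_red = np.sum(w * (R0 - mean) ** 2) / (len(R0) - 1)
print(f"R0 = {mean:.2f} +/- {sigma:.2f} kpc, reduced chi2 = {chi2_red:.2f}")
```

A reduced chi-square well below 1 can also signal overestimated formal errors, which is itself a metrologically interesting finding.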

  19. Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco

    Science.gov (United States)

    Bounoua, Z.; Mechaqrane, A.

    2018-05-01

    An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements. We then eliminated the erroneous values, which are generally due to measurement or cosine-effect errors. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.
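The quality-check-then-average pipeline described here can be sketched on synthetic hourly data. The sinusoidal profile and bounds below are illustrative assumptions, not the paper's actual Fez measurements or quality-control criteria:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly GHI (W/m^2) for 30 days x 24 hours -- illustrative only.
hours = np.tile(np.arange(24), 30)
ghi = np.clip(800.0 * np.sin(np.pi * (hours - 6) / 12), 0, None)
ghi += rng.normal(0, 20, ghi.size)

# Basic quality check: flag physically impossible values before averaging
# (upper bound: the solar constant).
valid = (ghi >= 0) & (ghi <= 1367)
ghi = np.where(valid, ghi, 0.0)

# Daily totals in kWh/m^2/day (hourly W/m^2 -> Wh, sum over the day, /1000).
daily = ghi.reshape(30, 24).sum(axis=1) / 1000.0
print(f"mean daily GHI = {daily.mean():.2f} kWh/m2/day")
```

Real quality-control procedures also screen for sensor drift and cosine-response errors near sunrise and sunset, which a simple range check like this does not catch.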

  20. Statistical and machine learning approaches for network analysis

    CERN Document Server

    Dehmer, Matthias

    2012-01-01

    Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation