WorldWideScience

Sample records for subsection statistical analyses

  1. Needle Terpenes as Chemotaxonomic Markers in Pinus: Subsections Pinus and Pinaster.

    Science.gov (United States)

    Mitić, Zorica S; Jovanović, Snežana Č; Zlatković, Bojan K; Nikolić, Biljana M; Stojanović, Gordana S; Marin, Petar D

    2017-05-01

The chemical compositions of the needle essential oils of 27 taxa from section Pinus (20 taxa of subsection Pinus and 7 of subsection Pinaster) were compared in order to determine the chemotaxonomic significance of terpenes at the infrageneric level. According to analysis of variance, six of the 31 terpene characters studied showed highly significant differences between the two subsections. Agglomerative hierarchical cluster analysis separated eight groups: representatives of subsect. Pinaster were distributed across the first seven groups of the dendrogram, together with P. nigra subsp. laricio and P. merkusii from subsect. Pinus, whereas the eighth group included the majority of the members of subsect. Pinus. Our findings, based on terpene characters, complement those obtained from morphological, biochemical, and molecular parameters studied over the past two decades, and confirm that terpenes are good markers at the infrageneric level. © 2017 Wiley-VHCA AG, Zurich, Switzerland.
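The two statistical steps this abstract names, per-character analysis of variance and agglomerative hierarchical clustering, can be sketched with SciPy. The terpene profiles below are invented stand-ins with the study's group sizes (20 + 7 taxa), not its data.

```python
# Hedged sketch: ANOVA per terpene character, then UPGMA clustering of taxa.
import numpy as np
from scipy import stats
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Invented terpene profiles: rows = taxa, columns = terpene characters (%).
subsect_pinus = rng.normal(loc=10.0, scale=1.0, size=(20, 5))
subsect_pinaster = rng.normal(loc=14.0, scale=1.0, size=(7, 5))

# One-way ANOVA (here a two-group F-test) for each terpene character.
for j in range(5):
    f_stat, p_val = stats.f_oneway(subsect_pinus[:, j], subsect_pinaster[:, j])
    print(f"character {j}: F={f_stat:.1f}, p={p_val:.3g}")

# Agglomerative hierarchical clustering (UPGMA) of all 27 taxa.
profiles = np.vstack([subsect_pinus, subsect_pinaster])
tree = linkage(profiles, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)
```

With well-separated group means, the cluster labels recover the two subsections.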

  2. Ecological units of the Northern Region: Subsections

    Science.gov (United States)

    John A. Nesser; Gary L. Ford; C. Lee Maynard; Debbie Dumroese

    1997-01-01

    Ecological units are described at the subsection level of the Forest Service National Hierarchical Framework of Ecological Units. A total of 91 subsections are delineated on the 1996 map "Ecological Units of the Northern Region: Subsections," based on physical and biological criteria. This document consists of descriptions of the climate, geomorphology,...

  3. 22 CFR 505.13 - General exemptions (Subsection (j)).

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true General exemptions (Subsection (j)). 505.13... exemptions (Subsection (j)). (a) General exemptions are available for systems of records which are maintained by the Central Intelligence Agency (Subsection (j)(1)), or maintained by an agency which performs as...

  4. The taxonomy of the European species of Hebeloma section Denudata subsections Hiemalia, Echinospora subsect. nov. and Clepsydroida subsect. nov. and five new species

    DEFF Research Database (Denmark)

    Eberhardt, Ursula; Beker, Henry J.; Vesterholt, Jan Hansen

    2016-01-01

    Hebeloma section Denudata includes the majority of the taxa commonly referred to as the Hebeloma crustuliniforme complex. In a recent paper we described in detail H. subsection Denudata and fifteen European species recognised within this subsection, using morphological and molecular methods. In t...

  5. Studies in Coprinus III — Coprinus section Veliformes. Subdivision and revision of subsection Nivei emend

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    1993-01-01

    Coprinus section Veliformes is defined and delimited to comprise four subsections: subsection Micacei, subsection Domestici, subsection Nivei, and subsection Narcotici, subsection nov. A key to the subsections is given. Subsection Nivei is emended, including also most taxa of subsection Flocculosi

  6. Type studies in Coprinus subsection Lanatuli

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    2001-01-01

As a prelude to a monograph of the genus Coprinus, types were studied of a number of species said to belong to Coprinus subsection Lanatuli (Coprinus alnivorus, C. alutaceivelatus, C. ammophilae, C. arachnoideus, C. asterophoroides, C. brunneistragulatus, C. bubalinus, C. citrinovelatus, C.

  7. Studies in Coprinus IV — Coprinus section coprinus. Subdivision and revision of subsection Alachuani

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    1997-01-01

    Coprinus section Coprinus is defined and delimited to comprise four subsections: Atramentarii, Coprinus, Lanatuli and Alachuani. A key to the subsections is given as well as a key to the species of subsection Alachuani known from the Netherlands or to be expected in the Netherlands on account of

  8. Studies in Coprinus V — Coprinus section Coprinus. Revision of subsection Lanatuli Sing

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    1999-01-01

    A key is given to the species of subsection Lanatuli known from the Netherlands or to be expected in the Netherlands on account of records from neighbouring countries. For a key to the subsections in Coprinus section Coprinus see Uljé & Noordel., Persoonia 16 (1997) 267. Coprinus bicornis and C.

  9. A Coupled Model for Work Roll Thermal Contour with Subsectional Cooling in Aluminum Strip Cold Rolling

    Directory of Open Access Journals (Sweden)

    Shao Jian

    2014-10-01

Little attention has been given to evaluating the control ability of subsectional cooling under complicated working conditions. In this paper, heat generation was calculated using the finite difference method, taking strip hardening, work roll elastic deformation and the elastic recovery of the strip into account. The mean convective heat transfer coefficient on the work roll surface was simulated with FLUENT. The calculation model used an alternating-direction finite difference scheme, which improved its stability and computing speed. The simulation results show that the control ability of subsectional cooling differs between rolling passes, while positive and negative control abilities are roughly equal within the same pass. Increases in rolled length, header working pressure and friction coefficient improve the control ability of subsectional cooling, whereas increasing rolling speed reduces it. At the beginning of a pass, before the work roll surface has reached a stable temperature, the control ability of subsectional cooling is mainly affected by rolled length. The effects of the mean convective heat transfer coefficient and the friction coefficient are linear. When the rolling speed exceeds 500 m/min, the control ability of subsectional cooling becomes stable.
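A minimal sketch of the kind of finite-difference heat calculation the abstract describes, reduced to 1-D explicit transient conduction through the roll shell. The material constant, geometry and fixed-temperature boundaries are assumptions for illustration; the paper's alternating-direction 2-D model is far more detailed.

```python
# Hedged sketch: explicit 1-D finite-difference heat conduction.
import numpy as np

alpha = 1.2e-5        # thermal diffusivity of steel, m^2/s (assumed)
L, n = 0.05, 51       # assumed shell thickness (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha   # stable explicit step: dt <= dx^2 / (2*alpha)

T = np.full(n, 30.0)  # initial roll temperature, deg C (assumed)
for _ in range(2000):
    T_new = T.copy()
    # interior nodes: central-difference update of the heat equation
    T_new[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    T_new[0] = 90.0    # contact side held at strip temperature (assumed)
    T_new[-1] = 30.0   # cooled side held at coolant temperature (assumed)
    T = T_new

print(T[n // 2])      # mid-shell temperature approaches the linear profile
```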

  10. Statistical and extra-statistical considerations in differential item functioning analyses

    Directory of Open Access Journals (Sweden)

    G. K. Huysamen

    2004-10-01

This article briefly describes the main procedures for performing differential item functioning (DIF) analyses and points out some of the statistical and extra-statistical implications of these methods. Research findings on the sources of DIF, including those associated with translated tests, are reviewed. As DIF analyses are oblivious to correlations between a test and relevant criteria, the elimination of differentially functioning items does not necessarily improve predictive validity or reduce any predictive bias. The implications of the results of past DIF research for test development in the multilingual and multicultural South African society are considered.
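One of the standard DIF procedures the article alludes to is the Mantel-Haenszel analysis, in which reference- and focal-group responses to an item are compared within matched total-score levels. The score-level tables below are invented; `alpha_mh` near 1 indicates no DIF, and the ETS delta scale re-expresses it.

```python
# Hedged sketch: Mantel-Haenszel common odds ratio for one item.
import numpy as np

# For each matched score level: [[ref correct, ref wrong],
#                                [focal correct, focal wrong]] (invented counts).
tables = np.array([
    [[30, 20], [25, 25]],
    [[40, 10], [35, 15]],
    [[45,  5], [42,  8]],
], dtype=float)

num = den = 0.0
for (a, b), (c, d) in tables:
    n = a + b + c + d
    num += a * d / n   # Mantel-Haenszel common odds-ratio estimate
    den += b * c / n

alpha_mh = num / den
# ETS delta scale: negative values flag an item favouring the reference group.
delta_mh = -2.35 * np.log(alpha_mh)
print(alpha_mh, delta_mh)
```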

  11. 29 CFR 71.50 - General exemptions pursuant to subsection (j) of the Privacy Act.

    Science.gov (United States)

    2010-07-01

    ... (Investigative Case Tracking Systems/Audit Information Reporting Systems, USDOL/OIG), a system of records... ACCESS TO RECORDS UNDER THE PRIVACY ACT OF 1974 Exemption of Records Systems Under the Privacy Act § 71.50 General exemptions pursuant to subsection (j) of the Privacy Act. (a) The following systems of...

  12. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank-sum, Kruskal-Wallis and Friedman tests in the non-parametric category. SOCR Analyses also includes several hypothesis-test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model as well as general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
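The sample-comparison tests the abstract lists (parametric t-test; non-parametric Wilcoxon rank-sum and Kruskal-Wallis) have direct SciPy counterparts; the simulated data below are illustrative only.

```python
# Hedged sketch: the three sample-comparison tests on simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1.0, 200)   # control-like sample
b = rng.normal(0.8, 1.0, 200)   # shifted sample (effect size d = 0.8)
c = rng.normal(0.4, 1.0, 200)   # intermediate sample

t_stat, t_p = stats.ttest_ind(a, b)   # two-sample t-test (parametric)
w_stat, w_p = stats.ranksums(a, b)    # Wilcoxon rank-sum test
h_stat, h_p = stats.kruskal(a, b, c)  # Kruskal-Wallis test (3 groups)

print(f"t-test p={t_p:.2e}, rank-sum p={w_p:.2e}, Kruskal-Wallis p={h_p:.2e}")
```

With this large a shift and sample size, all three tests reject the null hypothesis.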

  13. 29 CFR 71.52 - Specific exemptions pursuant to subsection (k)(5) of the Privacy Act.

    Science.gov (United States)

    2010-07-01

    ... (Investigative Case Tracking Systems/Audit Information Reporting Systems, USDOL/OIG), a system of records... AND ACCESS TO RECORDS UNDER THE PRIVACY ACT OF 1974 Exemption of Records Systems Under the Privacy Act § 71.52 Specific exemptions pursuant to subsection (k)(5) of the Privacy Act. (a) The following systems...

  14. 29 CFR 71.51 - Specific exemptions pursuant to subsection (k)(2) of the Privacy Act.

    Science.gov (United States)

    2010-07-01

    ... (Investigative Case Tracking Systems/Audit Information Reporting Systems, USDOL/OIG), a system of records... AND ACCESS TO RECORDS UNDER THE PRIVACY ACT OF 1974 Exemption of Records Systems Under the Privacy Act § 71.51 Specific exemptions pursuant to subsection (k)(2) of the Privacy Act. (a) The following systems...

  15. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been shown that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a relatively new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Examples in the non-parametric category are the Wilcoxon rank-sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis-testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and utilization of SOCR Analyses.

  16. Team-Based Learning in a Subsection of a Veterinary Course as Compared to Standard Lectures

    Science.gov (United States)

    Malone, Erin; Spieth, Amie

    2012-01-01

Team-Based Learning (TBL) maximizes class time for student practice on complex problems using peer learning in an instructor-guided format. Generally, entire courses are structured using the comprehensive guidelines of TBL. We used TBL in a subsection of a veterinary course to determine whether it remained effective in this format. One section of the…

  17. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Udey, Ruth Norma [Michigan State Univ., East Lansing, MI (United States)

    2013-01-01

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  18. Utility experience in code updating of equipment built to 1974 code, Section 3, Subsection NF

    International Nuclear Information System (INIS)

    Rao, K.R.; Deshpande, N.

    1990-01-01

This paper addresses changes to ASME Code Subsection NF and reconciles the differences between the updated codes and the as-built construction code, ASME Section III, 1974, to which several nuclear plants have been built. Since Section III is revised every three years and replacement parts complying with the construction code are invariably not available from the plant stock inventory, parts must be procured from vendors who comply with the requirements of the latest codes. Aspects of the ASME Code which reflect Subsection NF are identified and compared with later Code editions and addenda, up to and including the 1974 ASME Code used as the basis for plant qualification. The concern of the regulatory agencies is that adopting later code allowables and provisions could reduce the safety margins of the construction code. Areas of concern are highlighted, and the specific changes in later codes are identified whose adoption would not sacrifice the intended safety margins of the codes to which the plants are licensed.

  19. A simple device for subsectioning aqueous sediments from the box or spade corer

    Digital Repository Service at National Institute of Oceanography (India)

    Valsangkar, A.B.

, combination of any sub-section interval is possible; however, care must be taken in selecting the appropriate size of rings. Results and discussion: The proposed apparatus/device was successfully tested and used for the first time during the 38th... PVC liners (12.5 cm OD and 50 cm length), and 2 PVC liners (10 cm OD and 50 cm length) (Figure 1). The sub-cores were carefully removed from the box by inserting a PVC disk at the bottom. The sub-core was then placed on top of the device...

  20. The complete nucleotide sequences of the five genetically distinct plastid genomes of Oenothera, subsection Oenothera: I. sequence evaluation and plastome evolution.

    Science.gov (United States)

    Greiner, Stephan; Wang, Xi; Rauwolf, Uwe; Silber, Martina V; Mayer, Klaus; Meurer, Jörg; Haberer, Georg; Herrmann, Reinhold G

    2008-04-01

    The flowering plant genus Oenothera is uniquely suited for studying molecular mechanisms of speciation. It assembles an intriguing combination of genetic features, including permanent translocation heterozygosity, biparental transmission of plastids, and a general interfertility of well-defined species. This allows an exchange of plastids and nuclei between species often resulting in plastome-genome incompatibility. For evaluation of its molecular determinants we present the complete nucleotide sequences of the five basic, genetically distinguishable plastid chromosomes of subsection Oenothera (=Euoenothera) of the genus, which are associated in distinct combinations with six basic genomes. Sizes of the chromosomes range from 163 365 bp (plastome IV) to 165 728 bp (plastome I), display between 96.3% and 98.6% sequence similarity and encode a total of 113 unique genes. Plastome diversification is caused by an abundance of nucleotide substitutions, small insertions, deletions and repetitions. The five plastomes deviate from the general ancestral design of plastid chromosomes of vascular plants by a subsection-specific 56 kb inversion within the large single-copy segment. This inversion disrupted operon structures and predates the divergence of the subsection presumably 1 My ago. Phylogenetic relationships suggest plastomes I-III in one clade, while plastome IV appears to be closest to the common ancestor.

  2. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

The present paper reviews two motivations for conducting "what if" analyses using Excel and "R" to understand statistical significance tests through the lens of sample size. "What if" analyses can be used to teach students what statistical significance tests really do, and in applied research either prospectively to estimate what sample size…
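A minimal "what if" sketch in the spirit described above: hold an assumed effect size fixed and recompute the two-sample t-test p-value for hypothetical per-group sample sizes, showing that significance is as much a function of n as of the effect itself.

```python
# Hedged sketch: "what if" the same effect had been observed with other n?
from scipy import stats

d = 0.3  # assumed standardized mean difference (Cohen's d), held fixed

for n in (10, 50, 100, 500):           # hypothetical per-group sizes
    t = d * (n / 2) ** 0.5             # two-sample t with equal n: t = d*sqrt(n/2)
    p = 2 * stats.t.sf(abs(t), df=2 * n - 2)
    print(f"n={n:4d}  t={t:5.2f}  p={p:.4f}")
```

The same d = 0.3 is "non-significant" at n = 10 per group and highly significant at n = 500.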

  3. Statistical analyses of digital collections: Using a large corpus of systematic reviews to study non-citations

    DEFF Research Database (Denmark)

    Frandsen, Tove Faber; Nicolaisen, Jeppe

    2017-01-01

Statistical analysis of digital material makes it possible to detect patterns in big data that we would otherwise be unable to detect. This paper seeks to exemplify this by statistically analysing a large corpus of references in systematic reviews. The aim...

  4. A study on technical issues of materials and design bases in ASME section III subsection NH code

    International Nuclear Information System (INIS)

    Lee, Hyeong Yeon; Kim, Jong Bum; Yoo, Bong

    2000-12-01

In this study, ORNL's evaluation report on the technical issues of the elevated-temperature design guideline, ASME Code Section III Subsection NH, was analysed, and a brief evaluation procedure for creep-fatigue damage is presented. ORNL published the report in 1993, reviewing the areas where code rules or regulatory guides may be lacking or inadequate to ensure safe operation over the expected life cycles of liquid metal reactor systems. Historically, ASME Code Case N-47 was changed substantially in the 1989 edition, which incorporated stress relaxation behavior into the creep damage evaluation. The 1992 version of CC N-47 was subsequently upgraded to Subsection NH in the 1995 edition, which is identical to the 1992 edition of CC N-47 except for a few material data. This report raises the technical and regulatory issues that prevent guaranteeing the safe and reliable operation of the ALMR, which received conceptual design certification from the NRC; twenty-three technical issues were raised and resolution methodologies proposed. In addition, the status of items approved by the ASME subgroup on elevated temperature design for the revision of the most recent, 1998 edition of ASME NH is described.

  5. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
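G*Power itself is a standalone program, but the power of the bivariate correlation test it covers can be approximated with the usual Fisher z-transformation. The function below is a simplified sketch of that textbook approximation, not G*Power's exact routine.

```python
# Hedged sketch: approximate power of the two-sided test of H0: rho = 0.
import math
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    """Fisher-z approximation to the power of a bivariate correlation test."""
    z_r = math.atanh(r)                 # Fisher z-transform of the true rho
    se = 1.0 / math.sqrt(n - 3)         # standard error of z under normality
    z_crit = norm.ppf(1 - alpha / 2)
    # probability that |z/se| exceeds the critical value under H1
    return norm.sf(z_crit - z_r / se) + norm.cdf(-z_crit - z_r / se)

print(f"power(r=0.3, n=84) = {corr_power(0.3, 84):.3f}")
```

For r = 0.3 and n = 84 this lands near the conventional 80% power target.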

  6. Critical examination of valve design according to sub-section B 3500 of the French design and construction rules for mechanical components of PWR nuclear islands (RCCM January 1985 edition)

    International Nuclear Information System (INIS)

    Huet, J.; Dabrowski, J.F.; Maurin, N.; Quere, M.

    1989-01-01

    The RCCM design rules for class 1 valves (sub-section B 3500), which apply to Pressurized Water Reactor components, are reviewed and the agreement between the principle of sub-section B 3200 and the rules of sub-section B 3600 concerning piping design is examined. The equations used in the code are examined with respect to their origin, their justification and their conservatism. The applicability of the body shape rules and their relationship with fatigue strength reduction factors are also discussed. A parametric analysis is presented to show the influence of transition radii under different loads, including thermal shock. The study specifically examines the stress concentration factors obtained from a 3D finite element analysis of a tee shaped valve under mechanical loading (pressure, bending, etc.). The objective of this study is to understand and improve the existing rules so that ultimately a program can be written which will allow the stress calculations to be confirmed quickly using a microcomputer. (author)

  7. Statistical analyses in the study of solar wind-magnetosphere coupling

    International Nuclear Information System (INIS)

    Baker, D.N.

    1985-01-01

    Statistical analyses provide a valuable method for establishing initially the existence (or lack of existence) of a relationship between diverse data sets. Statistical methods also allow one to make quantitative assessments of the strengths of observed relationships. This paper reviews the essential techniques and underlying statistical bases for the use of correlative methods in solar wind-magnetosphere coupling studies. Techniques of visual correlation and time-lagged linear cross-correlation analysis are emphasized, but methods of multiple regression, superposed epoch analysis, and linear prediction filtering are also described briefly. The long history of correlation analysis in the area of solar wind-magnetosphere coupling is reviewed with the assessments organized according to data averaging time scales (minutes to years). It is concluded that these statistical methods can be very useful first steps, but that case studies and various advanced analysis methods should be employed to understand fully the average response of the magnetosphere to solar wind input. It is clear that many workers have not always recognized underlying assumptions of statistical methods and thus the significance of correlation results can be in doubt. Long-term averages (greater than or equal to 1 hour) can reveal gross relationships, but only when dealing with high-resolution data (1 to 10 min) can one reach conclusions pertinent to magnetospheric response time scales and substorm onset mechanisms
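Time-lagged linear cross-correlation, the technique the review emphasizes, can be sketched on synthetic series in which the "response" lags the "driver" by a known number of samples; scanning the lag recovers that delay.

```python
# Hedged sketch: recover a known lag by scanning the cross-correlation.
import numpy as np

rng = np.random.default_rng(1)
n, true_lag = 500, 7
x = rng.normal(size=n)                              # "driver" series
y = np.roll(x, true_lag) + 0.3 * rng.normal(size=n) # delayed response + noise
y[:true_lag] = rng.normal(size=true_lag)            # scrub wrap-around samples

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag)."""
    return np.corrcoef(x[:n - lag], y[lag:])[0, 1]

corrs = [lagged_corr(x, y, k) for k in range(20)]
best = int(np.argmax(corrs))
print(best, round(corrs[best], 2))
```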

  8. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures attempt to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. Two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
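The first two procedures in the list above (linear relationships via correlation coefficients, monotonic relationships via rank correlation coefficients) can be contrasted on a deliberately nonlinear but monotonic example, where the rank correlation detects the relationship that the linear coefficient understates. The data are invented.

```python
# Hedged sketch: Pearson vs Spearman on a monotonic, nonlinear scatterplot.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 1, 300)   # sampled input factor
y = np.exp(8 * x)            # strongly nonlinear but strictly monotonic output

pearson, _ = stats.pearsonr(x, y)    # understates the relationship
spearman, _ = stats.spearmanr(x, y)  # rank correlation picks up the trend fully
print(f"Pearson r = {pearson:.2f}, Spearman rho = {spearman:.2f}")
```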

  9. Statistical analyses of extreme food habits

    International Nuclear Information System (INIS)

    Breuninger, M.; Neuhaeuser-Berthold, M.

    2000-01-01

    This report is a summary of the results of the project ''Statistical analyses of extreme food habits'', which was ordered from the National Office for Radiation Protection as a contribution to the amendment of the ''General Administrative Regulation to paragraph 45 of the Decree on Radiation Protection: determination of the radiation exposition by emission of radioactive substances from facilities of nuclear technology''. Its aim is to show if the calculation of the radiation ingested by 95% of the population by food intake, like it is planned in a provisional draft, overestimates the true exposure. If such an overestimation exists, the dimension of it should be determined. It was possible to prove the existence of this overestimation but its dimension could only roughly be estimated. To identify the real extent of it, it is necessary to include the specific activities of the nuclides, which were not available for this investigation. In addition to this the report shows how the amounts of food consumption of different groups of foods influence each other and which connections between these amounts should be taken into account, in order to estimate the radiation exposition as precise as possible. (orig.) [de

  10. Hydrometeorological and statistical analyses of heavy rainfall in Midwestern USA

    Science.gov (United States)

    Thorndahl, S.; Smith, J. A.; Krajewski, W. F.

    2012-04-01

During the last two decades the midwestern states of the United States have been heavily afflicted by flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which raises the question of whether it is possible to derive general characteristics of the space-time structures of these heavy storms. This is important for understanding hydrometeorological features, e.g. how storms evolve and with what frequency we can expect extreme storms to occur. In the literature, most studies of extreme rainfall are based on point measurements (rain gauges). However, with high-resolution, high-quality radar observation periods now exceeding two decades, long-term spatio-temporal statistical analyses of extremes are possible. This makes it possible to link return periods to distributed rainfall estimates and to study the precipitation structures which cause floods. Performing such statistical frequency analyses on radar observations, however, introduces challenges in converting radar reflectivity observations to "true" rainfall that do not arise in traditional analyses of rain gauge data. It is, for example, difficult to distinguish reflectivity from high-intensity rain from reflectivity from other hydrometeors such as hail, especially with the single-polarization radars used in this study. Furthermore, reflectivity from the bright band (melting layer) should be discarded and anomalous propagation corrected in order to produce valid statistics of extreme radar rainfall. Other challenges include combining observations from several radars into one mosaic, bias correction against rain gauges, range correction, Z-R relationships, etc.
The present study analyzes radar rainfall observations from 1996 to 2011 based on the American NEXRAD network of radars over an area covering parts of Iowa, Wisconsin, Illinois, and
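Converting reflectivity to rain rate, one of the challenges the abstract lists, is conventionally done with a Z-R power law Z = aR^b. The Marshall-Palmer coefficients used below (a = 200, b = 1.6) are a textbook default, not necessarily those applied in this study.

```python
# Hedged sketch: Z-R conversion from radar reflectivity (dBZ) to rain rate.
def rain_rate(dbz, a=200.0, b=1.6):
    """Rain rate R in mm/h from reflectivity in dBZ, via Z = a * R**b."""
    z = 10.0 ** (dbz / 10.0)        # dBZ -> linear reflectivity (mm^6/m^3)
    return (z / a) ** (1.0 / b)

for dbz in (20, 35, 50):
    print(f"{dbz} dBZ -> {rain_rate(dbz):.1f} mm/h")
```

The steep power law is why hail contamination (very high Z) inflates rainfall estimates so badly.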

  11. Regulatory Safety Issues in the Structural Design Criteria of ASME Section III Subsection NH and for Very High Temperatures for VHTR & GEN IV

    Energy Technology Data Exchange (ETDEWEB)

    William J. O’Donnell; Donald S. Griffin

    2007-05-07

The objective of this task is to identify issues relevant to ASME Section III, Subsection NH [1], and related Code Cases that must be resolved for licensing purposes for VHTGRs (Very High Temperature Gas Reactor concepts such as those of PBMR, Areva, and GA), and to identify the material models, design criteria, and analysis methods that need to be added to the ASME Code to cover the unresolved safety issues. Subsection NH was originally developed to provide structural design criteria and limits for elevated-temperature design of Liquid Metal Fast Breeder Reactor (LMFBR) systems and some gas-cooled systems. The U.S. Nuclear Regulatory Commission (NRC) and its Advisory Committee on Reactor Safeguards (ACRS) reviewed the design limits and procedures in the process of reviewing the Clinch River Breeder Reactor (CRBR) for a construction permit in the late 1970s and early 1980s, and identified issues that needed resolution. In the years since then, the NRC and various contractors have evaluated the applicability of the ASME Code and Code Cases to high-temperature reactor designs such as the VHTGRs, and identified issues that need to be resolved to provide a regulatory basis for licensing. This report describes: (1) NRC and ACRS safety concerns raised during the licensing process of CRBR, (2) how some of these issues are addressed by the current Subsection NH of the ASME Code, and (3) the material models, design criteria, and analysis methods that need to be added to the ASME Code and Code Cases to cover unresolved regulatory issues for very high temperature service.

  12. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability based on a code package consisting of best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license reactor safety analyses and core reload designs based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
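The non-parametric order statistic method mentioned above is typically based on Wilks' tolerance-limit formula. As an illustrative sketch (not TE's actual BESUAM implementation), the following computes the minimum number of code runs needed so that the largest sampled output bounds a target quantile with a target confidence:

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the `coverage`
    quantile with probability >= `confidence` (first-order Wilks):
    1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

# One-sided 95%/95% criterion commonly used in BEPU-style analyses.
n95_95 = wilks_sample_size()
```

With the one-sided 95%/95% criterion this reproduces the familiar 59-run requirement.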

  13. Enactment of KEPIC MNH Based on 2007 ASME BPVC Section III Division 1, Subsection NH: Class 1 Components in Elevated Temperature Service

    International Nuclear Information System (INIS)

    Koo, Gyeong Hoi; Kim, J. B.; Lee, H. Y.; Park, C. G.

    2008-11-01

This report is a draft of an enactment of KEPIC MNH based on the 2007 ASME Boiler and Pressure Vessel Code, Section III, Division 1, Subsection NH for Class 1 Components in Elevated Temperature Service, and contains ASME Article NH-3000 (Design), the mandatory Appendix I-14, and non-mandatory Appendices T and X.

  14. Enactment of KEPIC MNH Based on 2007 ASME BPVC Section III Division 1, Subsection NH: Class 1 Components in Elevated Temperature Service

    Energy Technology Data Exchange (ETDEWEB)

Koo, Gyeong Hoi; Kim, J. B.; Lee, H. Y.; Park, C. G.

    2008-11-15

This report is a draft of an enactment of KEPIC MNH based on the 2007 ASME Boiler and Pressure Vessel Code, Section III, Division 1, Subsection NH for Class 1 Components in Elevated Temperature Service, and contains ASME Article NH-3000 (Design), the mandatory Appendix I-14, and non-mandatory Appendices T and X.

  15. Regulatory Safety Issues in the Structural Design Criteria of ASME Section III Subsection NH and for Very High Temperatures for VHTR and GEN IV

    International Nuclear Information System (INIS)

    O'Donnell, William J.; Griffin, Donald S.

    2007-01-01

The objective of this task is to identify issues relevant to ASME Section III, Subsection NH [1], and related Code Cases that must be resolved for licensing purposes for VHTGRs (Very High Temperature Gas Reactor concepts such as those of PBMR, Areva, and GA); and to identify the material models, design criteria, and analysis methods that need to be added to the ASME Code to cover the unresolved safety issues. Subsection NH was originally developed to provide structural design criteria and limits for elevated-temperature design of Liquid Metal Fast Breeder Reactor (LMFBR) systems and some gas-cooled systems. The U.S. Nuclear Regulatory Commission (NRC) and its Advisory Committee on Reactor Safeguards (ACRS) reviewed the design limits and procedures in the process of reviewing the Clinch River Breeder Reactor (CRBR) for a construction permit in the late 1970s and early 1980s, and identified issues that needed resolution. In the years since then, the NRC and various contractors have evaluated the applicability of the ASME Code and Code Cases to high-temperature reactor designs such as the VHTGRs, and identified issues that need to be resolved to provide a regulatory basis for licensing. This report describes: (1) NRC and ACRS safety concerns raised during the licensing process of CRBR; (2) how some of these issues are addressed by the current Subsection NH of the ASME Code; and (3) the material models, design criteria, and analysis methods that need to be added to the ASME Code and Code Cases to cover unresolved regulatory issues for very high temperature service.

16. Statistical analyses of the data on occupational radiation exposure at JPDR

    International Nuclear Information System (INIS)

    Kato, Shohei; Anazawa, Yutaka; Matsuno, Kenji; Furuta, Toshishiro; Akiyama, Isamu

    1980-01-01

In the statistical analyses of the data on occupational radiation exposure at JPDR, the following statistical features were found. (1) The individual doses followed a log-normal distribution. (2) In the distribution of doses from a single job in the controlled area, the mean of the log dose (μ) depended on the exposure rate r (mR/h), while σ correlated with the nature of the job and was normally distributed; these relations were μ = 0.48 ln r − 0.24 and σ = 1.2 ± 0.58. (3) For data containing different groups, the distribution of doses showed a polygonal line on log-normal probability paper. (4) Under dose limitation, the distribution of doses showed an asymptotic curve along the limit on log-normal probability paper. (author)
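The log-normal behaviour described in this record is easy to reproduce numerically. A minimal sketch, using the abstract's fitted relation μ = 0.48 ln r − 0.24 with a hypothetical exposure rate (all numbers illustrative, not JPDR data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical exposure rate r in mR/h and the fitted relation from the
# abstract: mean of log-dose mu = 0.48*ln(r) - 0.24, with sigma ~ 1.2.
r = 50.0
mu = 0.48 * np.log(r) - 0.24
sigma = 1.2

doses = rng.lognormal(mean=mu, sigma=sigma, size=10_000)

# Under a log-normal model, log(doses) is normal with parameters (mu, sigma);
# the sample moments of the logs recover them.
mu_hat = np.log(doses).mean()
sigma_hat = np.log(doses).std()
```

Plotting the sorted doses on log-normal probability paper would give the straight line the abstract describes; a mixture of groups would give the polygonal line.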

  17. Statistical Reporting Errors and Collaboration on Statistical Analyses in Psychological Science.

    Science.gov (United States)

    Veldkamp, Coosje L S; Nuijten, Michèle B; Dominguez-Alvarez, Linda; van Assen, Marcel A L M; Wicherts, Jelte M

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this 'co-piloting' currently occurs in psychology, we surveyed the authors of 697 articles published in six top psychology journals and asked them whether they had collaborated on four aspects of analyzing data and reporting results, and whether the described data had been shared between the authors. We acquired responses for 49.6% of the articles and found that co-piloting on statistical analysis and reporting results is quite uncommon among psychologists, while data sharing among co-authors seems reasonably but not completely standard. We then used an automated procedure to study the prevalence of statistical reporting errors in the articles in our sample and examined the relationship between reporting errors and co-piloting. Overall, 63% of the articles contained at least one p-value that was inconsistent with the reported test statistic and the accompanying degrees of freedom, and 20% of the articles contained at least one p-value that was inconsistent to such a degree that it may have affected decisions about statistical significance. Overall, the probability that a given p-value was inconsistent was over 10%. Co-piloting was not found to be associated with reporting errors.
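The automated consistency check this record describes can be sketched for the common case of a t test: recompute the p-value implied by the reported statistic and its degrees of freedom, and flag gross mismatches. This is a simplified, hypothetical version of what such tools automate, with made-up numbers:

```python
from scipy import stats

def p_consistent(test_stat, df, reported_p, tol=0.01, two_sided=True):
    """Recompute the p-value implied by a reported t statistic and its
    degrees of freedom, and flag inconsistencies beyond `tol`."""
    p = stats.t.sf(abs(test_stat), df)
    if two_sided:
        p *= 2
    return abs(p - reported_p) <= tol

ok = p_consistent(2.20, 30, 0.036)   # t(30) = 2.20 implies p near 0.036
bad = p_consistent(2.20, 30, 0.36)   # off by a factor of ten: flagged
```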

  18. Statistical analyses of conserved features of genomic islands in bacteria.

    Science.gov (United States)

    Guo, F-B; Xia, Z-K; Wei, W; Zhao, H-L

    2014-03-17

We performed statistical analyses of five conserved features of genomic islands of bacteria. Analyses were made based on 104 known genomic islands, which were identified by comparative methods. Four of these features, namely sequence size, abnormal G+C content, flanking tRNA gene, and embedded mobility gene, are frequently investigated. One relatively new feature, G+C homogeneity, was also investigated. Among the 104 known genomic islands, 88.5% were found to fall in the typical length of 10-200 kb and 80.8% had G+C deviations with absolute values larger than 2%. For the 88 genomic islands whose hosts have been sequenced and annotated, 52.3% were found to have flanking tRNA genes and 64.7% had embedded mobility genes. For the homogeneity feature, 85% had an h homogeneity index less than 0.1, indicating that their G+C content is relatively uniform. Taking all five features into account, 87.5% of the 88 genomic islands had three of them. Only one genomic island had only one conserved feature and none of the genomic islands had zero features. These statistical results should help to understand the general structure of known genomic islands. We found that larger genomic islands tend to have relatively small absolute G+C deviations. For example, the absolute G+C deviations of 9 genomic islands longer than 100,000 bp were all less than 5%. This is a novel but reasonable result, given that larger genomic islands should face greater restrictions on their G+C content in order to maintain the stable G+C content of the recipient genome.
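As a toy illustration of the abnormal G+C criterion used in this record (the 2% threshold comes from the abstract; the sequence and the host-genome average below are made up):

```python
def gc_content(seq):
    """Fraction of G and C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count('G') + seq.count('C')) / len(seq)

# Hypothetical island sequence vs. an assumed host-genome average; the
# abstract's criterion flags absolute G+C deviations larger than 2
# percentage points.
island_gc = gc_content("GGCCGGATCGCGTACGGC" * 10)
host_gc = 0.50
deviation = abs(island_gc - host_gc)
flagged = deviation > 0.02
```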

  19. Methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    Mazumdar, M.; Marshall, J.A.; Chay, S.C.; Gay, R.

    1976-07-01

In February 1975, Westinghouse Electric Corporation, under contract to the Electric Power Research Institute, started a one-year program to develop methodology for the statistical evaluation of nuclear-safety-related engineering analyses. The objectives of the program were to develop an understanding of the relative efficiencies of various computational methods which can be used to compute probability distributions of output variables due to input parameter uncertainties in analyses of design basis events for nuclear reactors, and to develop methods for obtaining reasonably accurate estimates of these probability distributions at an economically feasible level. A series of tasks was set up to accomplish these objectives. Two of the tasks were to investigate the relative efficiencies and accuracies of various Monte Carlo and analytical techniques for obtaining such estimates for a simple thermal-hydraulic problem whose output variable of interest is given by a closed-form relationship of the input variables, and to repeat the above study on a thermal-hydraulic problem in which the relationship between the predicted variable and the inputs is described by a short-running computer program. The purpose of the present report is to document the results of the investigations completed under these tasks, giving the rationale for the choices of techniques and problems, and to present interim conclusions.
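A Monte Carlo estimate of an output distribution for a closed-form relationship, of the kind compared in that program, can be sketched as follows (the response function and input uncertainties are invented for illustration, not taken from the report):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical closed-form response y = a * b**2 / c with uncertain inputs.
def response(a, b, c):
    return a * b**2 / c

n = 50_000
a = rng.normal(1.0, 0.05, n)   # illustrative input uncertainties
b = rng.normal(2.0, 0.10, n)
c = rng.normal(4.0, 0.20, n)

y = response(a, b, c)

# Empirical output distribution: the mean and a one-sided 95th percentile,
# the kind of quantity compared across Monte Carlo and analytic methods.
y_mean = y.mean()
y_p95 = np.percentile(y, 95)
```

For a genuinely closed-form response like this one, the Monte Carlo result can be checked against an analytic moment expansion, which is exactly the kind of efficiency/accuracy comparison the tasks describe.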

20. Taxonomic revision of the tropical African group of Carex subsect. Elatae (sect. Spirostachyae, Cyperaceae)

    Directory of Open Access Journals (Sweden)

    Escudero, Marcial

    2011-12-01

Full Text Available The tropical African monophyletic group of Carex subsect. Elatae (sect. Spirostachyae) is distributed in continental tropical Africa, Madagascar, the Mascarene archipelago, and Bioko Island (32 km off the coast of West Africa, in the Gulf of Guinea). The first monographic treatment of this Carex group, as well as of the tribe Cariceae, was published by Kükenthal (as sect. Elatae Kük.). Recently, the first molecular (nrDNA, cpDNA) phylogeny of Carex sect. Elatae has been published, which also included the species of sect. Spirostachyae. In the resulting consensus trees, most species of sect. Elatae were embedded within core Spirostachyae, and so this section was joined with sect. Spirostachyae as subsect. Elatae. Within subsect. Elatae, several groups were described, one of which was termed the “tropical African group”. Here we present a taxonomic revision of this group, based on more than 280 vouchers from 29 herbaria as well as on field trips in tropical Africa. In the revision, we recognise 12 species (16 taxa) within the tropical African group, somewhat modifying our previous view, in which 10 species (12 taxa) were listed. One new species from Tanzania is included in this treatment, C. uluguruensis Luceño & M. Escudero. Several new combinations are made, C. cyrtosaccus is treated as a synonym of C. vallis-rosetto and, finally, the binomial C. greenwayi has been recognised.

  1. Statistical reporting errors and collaboration on statistical analyses in psychological science

    NARCIS (Netherlands)

    Veldkamp, C.L.S.; Nuijten, M.B.; Dominguez Alvarez, L.; van Assen, M.A.L.M.; Wicherts, J.M.

    2014-01-01

    Statistical analysis is error prone. A best practice for researchers using statistics would therefore be to share data among co-authors, allowing double-checking of executed tasks just as co-pilots do in aviation. To document the extent to which this ‘co-piloting’ currently occurs in psychology, we

  2. A weighted U-statistic for genetic association analyses of sequencing data.

    Science.gov (United States)

    Wei, Changshuai; Li, Ming; He, Zihuai; Vsevolozhskaya, Olga; Schaid, Daniel J; Lu, Qing

    2014-12-01

With advancements in next-generation sequencing technology, a massive amount of sequencing data is generated, which offers a great opportunity to comprehensively investigate the role of rare variants in the genetic etiology of complex diseases. Nevertheless, the high-dimensional sequencing data poses a great challenge for statistical analysis. The association analyses based on traditional statistical methods suffer substantial power loss because of the low frequency of genetic variants and the extremely high dimensionality of the data. We developed a Weighted U Sequencing test, referred to as WU-SEQ, for the high-dimensional association analysis of sequencing data. Based on a nonparametric U-statistic, WU-SEQ makes no assumption of the underlying disease model and phenotype distribution, and can be applied to a variety of phenotypes. Through simulation studies and an empirical study, we showed that WU-SEQ outperformed a commonly used sequence kernel association test (SKAT) method when the underlying assumptions were violated (e.g., the phenotype followed a heavy-tailed distribution). Even when the assumptions were satisfied, WU-SEQ still attained comparable performance to SKAT. Finally, we applied WU-SEQ to sequencing data from the Dallas Heart Study (DHS), and detected an association between ANGPTL4 and very low density lipoprotein cholesterol. © 2014 WILEY PERIODICALS, INC.

  3. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    Full Text Available The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.

  4. Applied statistics a handbook of BMDP analyses

    CERN Document Server

    Snell, E J

    1987-01-01

    This handbook is a realization of a long term goal of BMDP Statistical Software. As the software supporting statistical analysis has grown in breadth and depth to the point where it can serve many of the needs of accomplished statisticians it can also serve as an essential support to those needing to expand their knowledge of statistical applications. Statisticians should not be handicapped by heavy computation or by the lack of needed options. When Applied Statistics, Principle and Examples by Cox and Snell appeared we at BMDP were impressed with the scope of the applications discussed and felt that many statisticians eager to expand their capabilities in handling such problems could profit from having the solutions carried further, to get them started and guided to a more advanced level in problem solving. Who would be better to undertake that task than the authors of Applied Statistics? A year or two later discussions with David Cox and Joyce Snell at Imperial College indicated that a wedding of the proble...

  5. Statistical reliability analyses of two wood plastic composite extrusion processes

    International Nuclear Information System (INIS)

    Crookston, Kevin A.; Mark Young, Timothy; Harper, David; Guess, Frank M.

    2011-01-01

    Estimates of the reliability of wood plastic composites (WPC) are explored for two industrial extrusion lines. The goal of the paper is to use parametric and non-parametric analyses to examine potential differences in the WPC metrics of reliability for the two extrusion lines that may be helpful for use by the practitioner. A parametric analysis of the extrusion lines reveals some similarities and disparities in the best models; however, a non-parametric analysis reveals unique and insightful differences between Kaplan-Meier survival curves for the modulus of elasticity (MOE) and modulus of rupture (MOR) of the WPC industrial data. The distinctive non-parametric comparisons indicate the source of the differences in strength between the 10.2% and 48.0% fractiles [3,183-3,517 MPa] for MOE and for MOR between the 2.0% and 95.1% fractiles [18.9-25.7 MPa]. Distribution fitting as related to selection of the proper statistical methods is discussed with relevance to estimating the reliability of WPC. The ability to detect statistical differences in the product reliability of WPC between extrusion processes may benefit WPC producers in improving product reliability and safety of this widely used house-decking product. The approach can be applied to many other safety and complex system lifetime comparisons.
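The non-parametric comparison in this record rests on Kaplan-Meier estimates, which can be computed directly; a self-contained sketch (the failure data below are invented, not the WPC measurements):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate. `events` is 1 for an observed
    failure and 0 for a right-censored observation. Returns a list of
    (time, survival probability) pairs at each observed failure time."""
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    curve = []
    s = 1.0
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)                  # units still at risk
        d = np.sum((times == t) & (events == 1))      # failures at time t
        s *= 1.0 - d / at_risk
        curve.append((t, s))
    return curve

# Toy MOR-style failure data (made up, in MPa) with one censored unit.
curve = kaplan_meier([19.1, 20.5, 21.0, 23.2, 25.7], [1, 1, 0, 1, 1])
```

Computing one such curve per extrusion line and comparing where they separate is the kind of fractile-by-fractile contrast the abstract reports.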

  6. POLLEN AND SEED SURFACE MORFOLOGY IN SOME REPRESENTATIVES OF THE GENUS RHODODENDRON SUBSECT. RHODORASTRUM (ERICACEAE IN THE RUSSIAN FAR EAST

    Directory of Open Access Journals (Sweden)

    I. M. Koksheeva

    2015-05-01

Full Text Available A comparative study of pollen and seed morphology of three species of Rhododendron L. subsect. Rhodorastrum (Maxim.) Cullen (Rh. dauricum L., Rh. mucronulatum Turcz., Rh. sichotense Pojark.) was performed. Results of a discriminant analysis of the full set of morphometric characters of pollen and seeds proved the distinctness of all three species from each other. Differences in pollen are observed in the type of sculpture (granulate, rugulate, microrugulate) and in the diameter of tetrads. The coefficient of elongation of the exotesta cells is established as a valuable morphometric character.

  7. 31 CFR Appendix A to Subpart H of... - Notice for Purposes of Subsection 314(b) of the USA Patriot Act and 31 CFR 103.110

    Science.gov (United States)

    2010-07-01

Appendix A to Subpart H of Part 103—Notice for Purposes of Subsection 314(b) of the USA Patriot Act and 31 CFR 103.110 (Money and Finance: Treasury; Financial Recordkeeping and Reporting of Currency and Foreign Transactions).

  8. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    Science.gov (United States)

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a lower score; only a few studies performed statistical analyses and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  9. Characteristics of electrostatic solitary waves observed in the plasma sheet boundary: Statistical analyses

    Directory of Open Access Journals (Sweden)

    H. Kojima

    1999-01-01

Full Text Available We present the characteristics of the Electrostatic Solitary Waves (ESW) observed by the Geotail spacecraft in the plasma sheet boundary layer, based on statistical analyses. We also discuss the results with reference to a model of ESW generation due to electron beams, which has been proposed on the basis of computer simulations. In this generation model, the nonlinear evolution of Langmuir waves excited by electron bump-on-tail instabilities leads to the formation of isolated electrostatic potential structures corresponding to "electron holes" in phase space. The statistical analyses of the Geotail data, which we conducted under the assumption that the polarity of ESW potentials is positive, show that most ESW propagate in the same direction as the electron beams observed simultaneously by the plasma instrument. Further, we find that the ESW potential energy is much smaller than the background electron thermal energy, and that the ESW potential widths are typically shorter than 60 times the local electron Debye length when we assume that the ESW potentials travel at the same velocity as the electron beams. These results are very consistent with the ESW generation model in which the nonlinear evolution of the electron bump-on-tail instability leads to the formation of electron holes in phase space.

  10. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    CERN Document Server

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  11. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 2: robustness of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from the analyses include: (i) Type I errors are unavoidable, (ii) Type II errors can occur when inappropriate analysis procedures are used, (iii) physical explanations should always be sought for why statistical procedures identify variables as being important, and (iv) the identification of important variables tends to be stable for independent Latin hypercube samples.
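The first three pattern-detection steps in the procedure above can be illustrated on synthetic data: a monotonic but nonlinear relation is picked up more strongly by the rank correlation than by the linear correlation, and a Kruskal-Wallis test across bins of the input detects the trend in central tendency (the data and bin choices are illustrative, not from the two-phase flow model):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = x**3 + rng.normal(0, 0.01, 200)   # monotonic but nonlinear relation

r_lin, _ = stats.pearsonr(x, y)       # (i) linear pattern
r_rank, _ = stats.spearmanr(x, y)     # (ii) monotonic pattern

# (iii) trend in central tendency: Kruskal-Wallis across x-quartile bins.
bins = np.quantile(x, [0.25, 0.5, 0.75])
groups = [y[np.digitize(x, bins) == k] for k in range(4)]
kw_stat, kw_p = stats.kruskal(*groups)
```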

  12. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    Science.gov (United States)

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  13. Teachers' Perceptions of Computer Use in Education in the TRNC Schools

    Science.gov (United States)

    Silman, Fatos; Gundogdu, Kerim

    2007-01-01

    The aim of this study is to examine the perceptions of the classroom teachers on the computer use in the TRNC schools. A questionnaire was applied to 84 classroom teachers in 5 schools in Nicosia, the capital city of the TRNC. The answers to the first part of the questionnaire with five subsections were analysed. Descriptive statistics were used…

  14. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses

    OpenAIRE

    Buttigieg, Pier Luigi; Ramette, Alban Nicolas

    2014-01-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynami...

  15. Statistical analyses of scatterplots to identify important factors in large-scale simulations, 1: Review and comparison of techniques

    International Nuclear Information System (INIS)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-01-01

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked

  16. The Relationship Between Radiative Forcing and Temperature. What Do Statistical Analyses of the Instrumental Temperature Record Measure?

    International Nuclear Information System (INIS)

    Kaufmann, R.K.; Kauppi, H.; Stock, J.H.

    2006-01-01

Comparing statistical estimates of the long-run temperature effect of doubled CO2 with those generated by climate models begs the question: is the long-run temperature effect of doubled CO2 estimated from the instrumental temperature record using statistical techniques consistent with the transient climate response, the equilibrium climate sensitivity, or the effective climate sensitivity? Here, we attempt to answer the question of what statistical analyses of the observational record measure by using these same statistical techniques to estimate the temperature effect of a doubling in the atmospheric concentration of carbon dioxide from seventeen simulations run for the Coupled Model Intercomparison Project 2 (CMIP2). The results indicate that the temperature effect estimated by the statistical methodology is consistent with the transient climate response, and that this consistency is relatively unaffected by sample size or the increase in radiative forcing in the sample.
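The statistical approach at issue can be caricatured with a one-line regression: regress temperature on radiative forcing and scale the slope by the forcing of doubled CO2. All numbers below are invented for illustration (F_2x ≈ 3.7 W/m² is the standard doubling forcing; the series are synthetic, not the instrumental record):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic temperature anomaly responding to forcing F with sensitivity
# lambda_true (deg C per W/m^2), plus observational noise.
lambda_true, F_2x = 0.5, 3.7
F = np.linspace(0.0, 2.5, 120)                 # hypothetical forcing series
T = lambda_true * F + rng.normal(0, 0.1, F.size)

lam_hat = np.polyfit(F, T, 1)[0]               # OLS slope (deg C per W/m^2)
dT_2x = lam_hat * F_2x                         # implied effect of doubled CO2
```

Whether such an estimate corresponds to the transient or the equilibrium response is precisely the question the paper addresses with the CMIP2 simulations.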

  17. Comparison of elevated temperature design codes of ASME Subsection NH and RCC-MRx

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyeong-Yeon, E-mail: hylee@kaeri.re.kr

    2016-11-15

    Highlights: • Comparison of elevated temperature design (ETD) codes was made. • Material properties and evaluation procedures were compared. • Two heat-resistant materials of Grade 91 steel and austenitic stainless steel 316 are the target materials in the present study. • Application of the ETD codes to Generation IV reactor components and a comparison of the conservatism was conducted. - Abstract: The elevated temperature design (ETD) codes are used for the design evaluation of Generation IV (Gen IV) reactor systems such as sodium-cooled fast reactor (SFR), lead-cooled fast reactor (LFR), and very high temperature reactor (VHTR). In the present study, ETD code comparisons were made in terms of the material properties and design evaluation procedures for the recent versions of the two major ETD codes, ASME Section III Subsection NH and RCC-MRx. Conservatism in the design evaluation procedures was quantified and compared based on the evaluation results for SFR components as per the two ETD codes. The target materials are austenitic stainless steel 316 and Mod.9Cr-1Mo steel, which are the major two materials in a Gen IV SFR. The differences in the design evaluation procedures as well as the material properties in the two ETD codes are highlighted.

  18. Statistical analyses of variability/reproducibility of environmentally assisted cyclic crack growth rate data utilizing JAERI Material Performance Database (JMPD)

    International Nuclear Information System (INIS)

    Tsuji, Hirokazu; Yokoyama, Norio; Nakajima, Hajime; Kondo, Tatsuo

    1993-05-01

    Statistical analyses were conducted using the cyclic crack growth rate data for pressure vessel steels stored in the JAERI Material Performance Database (JMPD), and comparisons were made of the variability and/or reproducibility between data obtained by ΔK-increasing and by ΔK-constant type tests. Based on the results of the statistical analyses, it was concluded that ΔK-constant type tests are generally superior to the commonly used ΔK-increasing type from the viewpoint of variability and/or reproducibility of the data. This tendency was more pronounced in tests conducted in simulated LWR primary coolants than in those conducted in air. (author)

  19. Statistical analyses to support guidelines for marine avian sampling. Final report

    Science.gov (United States)

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of the Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
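
    The Monte Carlo significance test mentioned in step 4 can be sketched roughly as follows; the negative binomial null model, its parameters, the survey counts, and the lease-block setup are all invented for illustration and are not the report's actual values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed null model: counts per survey in a lease block follow a
# negative binomial with mean mu and dispersion k (values invented).
mu, k = 2.0, 0.5
n_surveys = 100

# Observed counts for one candidate "hotspot" block, drawn here from a
# distribution with triple the null mean (synthetic data).
observed = rng.negative_binomial(n=k, p=k / (k + 3.0 * mu), size=n_surveys)
obs_mean = observed.mean()

# Monte Carlo: simulate the block mean under the null many times and
# ask how often a null mean is at least as extreme as the observed one.
n_sim = 10_000
null_means = rng.negative_binomial(
    n=k, p=k / (k + mu), size=(n_sim, n_surveys)
).mean(axis=1)
p_value = (np.sum(null_means >= obs_mean) + 1) / (n_sim + 1)
print(f"observed mean {obs_mean:.2f}, hotspot p-value {p_value:.4f}")
```

    The `(x + 1) / (n + 1)` form avoids reporting an impossible p-value of zero from a finite simulation.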

  20. Multivariate statistical analyses demonstrate unique host immune responses to single and dual lentiviral infection.

    Directory of Open Access Journals (Sweden)

    Sunando Roy

    2009-10-01

    Full Text Available Feline immunodeficiency virus (FIV) and human immunodeficiency virus (HIV) are recently identified lentiviruses that cause progressive immune decline and ultimately death in infected cats and humans. It is of great interest to understand how to prevent immune system collapse caused by these lentiviruses. We recently described that disease caused by a virulent FIV strain in cats can be attenuated if animals are first infected with a feline immunodeficiency virus derived from a wild cougar. The detailed temporal tracking of cat immunological parameters in response to two viral infections resulted in high-dimensional datasets containing variables that exhibit strong co-variation. Initial analyses of these complex data using univariate statistical techniques did not account for interactions among immunological response variables and therefore potentially obscured significant effects between infection state and immunological parameters. Here, we apply a suite of multivariate statistical tools, including Principal Component Analysis, MANOVA and Linear Discriminant Analysis, to temporal immunological data resulting from FIV superinfection in domestic cats. We investigated the co-variation among immunological responses, the differences in immune parameters among four groups of five cats each (uninfected, single and dual infected animals), and the "immune profiles" that discriminate among them over the first four weeks following superinfection. Dual infected cats mount an immune response by 24 days post superinfection that is characterized by elevated levels of CD8 and CD25 cells and increased expression of IL-4, IFN-gamma, and FAS. This profile discriminates dual infected cats from cats infected with FIV alone, which show high IL-10 and lower numbers of CD8 and CD25 cells. Multivariate statistical analyses demonstrate both the dynamic nature of the immune response to FIV single and dual infection and the development of a unique immunological profile in dual
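
    The multivariate workflow named in the abstract (PCA to summarise co-variation, LDA to find discriminating "immune profiles") can be sketched with synthetic data; the group structure mimics the study's four groups of five cats, but the variable count, effect sizes, and all numbers below are assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic stand-in: 4 groups of 5 cats, each described by 6 correlated
# immune variables (think CD8, CD25, IL-10, ...). Group mean shifts are
# invented purely for illustration.
n_per_group, n_vars = 5, 6
group_means = rng.normal(0.0, 2.0, size=(4, n_vars))
X = np.vstack([
    mean + rng.normal(0.0, 1.0, size=(n_per_group, n_vars))
    for mean in group_means
])
y = np.repeat(np.arange(4), n_per_group)   # 0=uninfected ... 3=dual

# PCA summarises the co-variation among immune variables.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# LDA finds the axes ("immune profiles") that best separate the groups.
lda = LinearDiscriminantAnalysis()
accuracy = lda.fit(X, y).score(X, y)
print(f"PC1+PC2 variance explained: {pca.explained_variance_ratio_.sum():.2f}")
print(f"LDA training accuracy: {accuracy:.2f}")
```

    With only five animals per group, training accuracy overstates performance; the study's temporal design is what makes the profiles interpretable rather than this toy fit.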

  1. A simple and robust statistical framework for planning, analysing and interpreting faecal egg count reduction test (FECRT) studies

    DEFF Research Database (Denmark)

    Denwood, M.J.; McKendrick, I.J.; Matthews, L.

    Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has...... been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power...... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...

  2. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Full Text Available Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
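
    The core of the intervals' method, turning a stress field into variables that each hold the percentage of area within a fixed stress interval, can be sketched as follows; the stress distributions, element areas, and interval limits are invented stand-ins for real finite element output, after which the resulting variables could feed any multivariate method.

```python
import numpy as np

rng = np.random.default_rng(7)

def interval_variables(stress, area, n_intervals=10, upper=50.0):
    """Percentage of total area falling in each fixed stress interval.

    Values above `upper` are accumulated into the last interval, so the
    percentages always sum to 100.
    """
    edges = np.linspace(0.0, upper, n_intervals + 1)
    idx = np.clip(np.digitize(stress, edges) - 1, 0, n_intervals - 1)
    pct = np.array([area[idx == i].sum() for i in range(n_intervals)])
    return 100.0 * pct / area.sum()

# Two hypothetical mandible models with different stress regimes
# (gamma-distributed von Mises stresses, uniform element areas).
stress_a = rng.gamma(shape=2.0, scale=5.0, size=1_000)   # lower stresses
stress_b = rng.gamma(shape=4.0, scale=5.0, size=1_000)   # higher stresses
area = np.full(1_000, 1.0)

vars_a = interval_variables(stress_a, area)
vars_b = interval_variables(stress_b, area)
print(np.round(vars_a, 1))
print(np.round(vars_b, 1))
```

    Because each model is reduced to the same fixed-length vector of area percentages, models with different mesh sizes become directly comparable, which is the mesh-independence property the abstract emphasises.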

  3. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  4. Description of comprehensive pump test change to ASME OM code, subsection ISTB

    International Nuclear Information System (INIS)

    Hartley, R.S.

    1994-01-01

    The American Society of Mechanical Engineers (ASME) Operations and Maintenance (OM) Main Committee and Board on Nuclear Codes and Standards (BNCS) recently approved changes to ASME OM Code-1990, Subsection ISTB, Inservice Testing of Pumps in Light-Water Reactor Power Plants. The changes will be included in the 1994 addenda to ISTB. The changes, designated as the comprehensive pump test, incorporate a new, improved philosophy for testing safety-related pumps in nuclear power plants. An important philosophical difference between the "old code" inservice testing (IST) requirements and these changes is that the changes concentrate on less frequent, more meaningful testing while minimizing damaging and uninformative low-flow testing. The comprehensive pump test change establishes a more involved biannual test for all pumps and significantly reduces the rigor of the quarterly test for standby pumps. The increased rigor and cost of the biannual comprehensive tests are offset by the reduced cost of testing and potential damage to the standby pumps, which comprise a large portion of the safety-related pumps at most plants. This paper provides background on the pump testing requirements, discusses potential industry benefits of the change, describes the development of the comprehensive pump test, and gives examples and reasons for many of the specific changes. This paper also describes additional changes to ISTB that will be included in the 1994 addenda that are associated with, but not part of, the comprehensive pump test

  5. "Who Was 'Shadow'?" The Computer Knows: Applying Grammar-Program Statistics in Content Analyses to Solve Mysteries about Authorship.

    Science.gov (United States)

    Ellis, Barbara G.; Dick, Steven J.

    1996-01-01

    Employs the statistics-documentation portion of a word-processing program's grammar-check feature together with qualitative analyses to determine that Henry Watterson, long-time editor of the "Louisville Courier-Journal," was probably the South's famed Civil War correspondent "Shadow." (TB)

  6. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε2a3/(1 + a2m2)2, where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which leads the way to characterizing a wider wavenumber range of spectra, including the corner wavenumber.

  7. Statistical contact angle analyses; "slow moving" drops on a horizontal silicon-oxide surface.

    Science.gov (United States)

    Schmitt, M; Grub, J; Heib, F

    2015-06-01

    Sessile drop experiments on horizontal surfaces are commonly used to characterise surface properties in science and in industry. The advancing angle and the receding angle are measurable on every solid. Especially on horizontal surfaces, even the notions themselves are critically questioned by some authors. Building a standard, reproducible and valid method of measuring and defining specific (advancing/receding) contact angles is an important challenge of surface science. Recently we have developed three approaches (sigmoid fitting, and independent and dependent statistical analyses) which are practicable for the determination of specific angles/slopes when inclining the sample surface. These approaches lead to contact angle data that are independent of the "user skills" and subjectivity of the operator, which is also urgently needed to evaluate dynamic measurements of contact angles. We show in this contribution that the slightly modified procedures are also applicable for finding specific angles in experiments on horizontal surfaces. As an example, droplets on a flat, freshly cleaned silicon-oxide surface (wafer) are dynamically measured by the sessile drop technique while the volume of the liquid is increased/decreased. The triple points, the time, and the contact angles during the advancing and receding of the drop, obtained by high-precision drop shape analysis, are statistically analysed. As stated in the previous contribution, the procedure is called "slow movement" analysis due to the small distance covered and the dominance of data points with low velocity. Even the smallest variations in velocity, such as the minimal advancing motion during the withdrawal of the liquid, are identifiable, which confirms the flatness and chemical homogeneity of the sample surface and the high sensitivity of the presented approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Statistical analyses of incidents on onshore gas transmission pipelines based on PHMSA database

    International Nuclear Information System (INIS)

    Lam, Chio; Zhou, Wenxing

    2016-01-01

    This article reports statistical analyses of the mileage and pipe-related incidents data corresponding to the onshore gas transmission pipelines in the US between 2002 and 2013 collected by the Pipeline Hazardous Material Safety Administration of the US Department of Transportation. The analysis indicates that there are approximately 480,000 km of gas transmission pipelines in the US, approximately 60% of them more than 45 years old as of 2013. Eighty percent of the pipelines are Class 1 pipelines, and about 20% of the pipelines are Classes 2 and 3 pipelines. It is found that the third-party excavation, external corrosion, material failure and internal corrosion are the four leading failure causes, responsible for more than 75% of the total incidents. The 12-year average rate of rupture equals 3.1 × 10⁻⁵ per km-year due to all failure causes combined. External corrosion is the leading cause for ruptures: the 12-year average rupture rate due to external corrosion equals 1.0 × 10⁻⁵ per km-year and is twice the rupture rate due to the third-party excavation or material failure. The study provides insights into the current state of gas transmission pipelines in the US and baseline failure statistics for the quantitative risk assessments of such pipelines. - Highlights: • Analyze PHMSA pipeline mileage and incident data between 2002 and 2013. • Focus on gas transmission pipelines. • Leading causes for pipeline failures are identified. • Provide baseline failure statistics for risk assessments of gas transmission pipelines.
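
    The baseline rates quoted above reduce to simple count-per-exposure arithmetic. In the sketch below the rupture count is a hypothetical number back-calculated to reproduce the abstract's 3.1 × 10⁻⁵ figure; it is not taken from the PHMSA database.

```python
# Incident rate per km-year = incidents / (pipeline length * years).
length_km = 480_000.0        # approximate onshore gas transmission mileage
years = 12.0                 # 2002-2013 observation window
exposure_km_years = length_km * years

ruptures_all_causes = 179    # hypothetical count, chosen to match ~3.1e-5
rate = ruptures_all_causes / exposure_km_years
print(f"rupture rate: {rate:.2e} per km-year")
```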

  9. Cost and quality effectiveness of objective-based and statistically-based quality control for volatile organic compounds analyses of gases

    International Nuclear Information System (INIS)

    Bennett, J.T.; Crowder, C.A.; Connolly, M.J.

    1994-01-01

    Gas samples from drums of radioactive waste at the Department of Energy (DOE) Idaho National Engineering Laboratory are being characterized for 29 volatile organic compounds to determine the feasibility of storing the waste in DOE's Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. Quality requirements for the gas chromatography (GC) and GC/mass spectrometry chemical methods used to analyze the waste are specified in the Quality Assurance Program Plan for the WIPP Experimental Waste Characterization Program. Quality requirements consist of both objective criteria (data quality objectives, DQOs) and statistical criteria (process control). The DQOs apply to routine sample analyses, while the statistical criteria serve to determine and monitor precision and accuracy (P&A) of the analysis methods and are also used to assign upper confidence limits to measurement results close to action levels. After over two years and more than 1000 sample analyses, there are two general conclusions concerning the two approaches to quality control: (1) Objective criteria (e.g., ± 25% precision, ± 30% accuracy) based on customer needs and the criteria usually prescribed for similar EPA-approved methods are consistently attained during routine analyses. (2) Statistical criteria based on short-term method performance are almost an order of magnitude more stringent than objective criteria and are difficult to satisfy following the same routine laboratory procedures that satisfy the objective criteria. A more cost-effective and representative approach to establishing statistical method performance criteria would be either to utilize a moving average of P&A from control samples over a several-month time period, or to determine within-sample variation by one-way analysis of variance of several months' replicate sample analysis results, or both. Confidence intervals for results near action levels could also be determined by replicate analysis of the sample in
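
    The moving-average alternative suggested in the abstract can be sketched as follows; the monthly recovery values, the window length, and the 3-sigma limits are illustrative assumptions, not the WIPP program's actual control data or criteria.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly recovery (%) of a control sample over two years;
# target of 100% and spread of 8% are invented for illustration.
recoveries = rng.normal(loc=100.0, scale=8.0, size=24)

# Moving average over a several-month window smooths short-term method
# performance before control limits are set, as the abstract proposes.
window = 6
kernel = np.ones(window) / window
moving_avg = np.convolve(recoveries, kernel, mode="valid")

# Control limits from the spread of the windowed means.
center = moving_avg.mean()
sigma = moving_avg.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma
print(f"center {center:.1f}%, limits [{lcl:.1f}%, {ucl:.1f}%]")
```

    Because the windowed means vary far less than individual analyses, limits built this way are wider relative to single-run noise, which is the relaxation the authors argue for.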

  10. PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.

  11. Introduction to Statistics

    Directory of Open Access Journals (Sweden)

    Mirjam Nielen

    2017-01-01

    Full Text Available Always wondered why research papers often present rather complicated statistical analyses? Or wondered how to properly analyse the results of a pragmatic trial from your own practice? This talk will give an overview of basic statistical principles and focus on the why of statistics, rather than on the how. This is a podcast of Mirjam's talk at the Veterinary Evidence Today conference, Edinburgh, November 2, 2016.

  12. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    Science.gov (United States)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley
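
    The random-forest modelling step described above can be sketched with synthetic valley segments; the predictor names, their relationships to slope asymmetry, and all coefficients below are invented for illustration and are not the study's mapped measures.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(11)

# Synthetic valley segments with three asymmetry predictors (names and
# relationships are hypothetical stand-ins for the ~300 mapped measures).
n = 500
drainage_slope_asym = rng.normal(0, 1, n)
veg_cover_asym = rng.normal(0, 1, n)
soil_depth_asym = rng.normal(0, 1, n)
X = np.column_stack([drainage_slope_asym, veg_cover_asym, soil_depth_asym])

# Target: valley slope asymmetry, driven here mostly by drainage-network
# slope, echoing the abstract's preliminary result.
y = 2.0 * drainage_slope_asym + 0.3 * veg_cover_asym + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = dict(zip(
    ["drainage_slope", "veg_cover", "soil_depth"],
    model.feature_importances_,
))
print(importances)
```

    Feature importances are how such a model ranks predictors, though, as the abstract notes, autocorrelated slope-related measures can dominate and may need to be excluded.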

  13. Influence of peer review on the reporting of primary outcome(s) and statistical analyses of randomised trials.

    Science.gov (United States)

    Hopewell, Sally; Witt, Claudia M; Linde, Klaus; Icke, Katja; Adedire, Olubusola; Kirtley, Shona; Altman, Douglas G

    2018-01-11

    Selective reporting of outcomes in clinical trials is a serious problem. We aimed to investigate the influence of the peer review process within biomedical journals on the reporting of primary outcome(s) and statistical analyses within reports of randomised trials. Each month, PubMed (May 2014 to April 2015) was searched to identify primary reports of randomised trials published in six high-impact general and 12 high-impact specialty journals. The corresponding author of each trial was invited to complete an online survey asking about changes made to their manuscript as part of the peer review process. Our main outcomes were to assess: (1) the nature and extent of changes made as part of the peer review process to the reporting of the primary outcome(s) and/or primary statistical analysis; (2) how often authors followed these requests; and (3) whether this was related to specific journal or trial characteristics. Of the 893 corresponding authors invited to take part in the online survey, 258 (29%) responded. The majority of trials were multicentre (n = 191; 74%); the median sample size was 325 (IQR 138 to 1010). The primary outcome was clearly defined in 92% (n = 238), and of these the direction of treatment effect was statistically significant in 49%. Most respondents indicated (on a 1-10 Likert scale) that they were satisfied with the overall handling (mean 8.6, SD 1.5) and quality of peer review (mean 8.5, SD 1.5) of their manuscript. Only 3% (n = 8) said that the editor or peer reviewers had asked them to change or clarify the trial's primary outcome. However, 27% (n = 69) reported that they were asked to change or clarify the statistical analysis of the primary outcome; most had fulfilled the request, the main motivation being to improve the statistical methods (n = 38; 55%) or to avoid rejection (n = 30; 44%). Overall, there was little association between authors being asked to make this change and the type of journal, intervention, significance of the

  14. A weighted U statistic for association analyses considering genetic heterogeneity.

    Science.gov (United States)

    Wei, Changshuai; Elston, Robert C; Lu, Qing

    2016-07-20

    Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity-weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments dataset. The genome-wide analysis of nearly one million genetic markers took 7 h, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. Copyright © 2016 John Wiley & Sons, Ltd.

  15. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  16. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    International Nuclear Information System (INIS)

    Edjabou, Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn; Petersen, Claus; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2015-01-01

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both, waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single
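
    The "statistically similar" comparison between municipalities can be sketched as a one-way ANOVA; the composition percentages below are synthetic values generated around the reported 42 ± 5% food waste figure, not the study's data, and the test used here is a generic choice rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic food-waste percentages for 10 sub-areas in each of three
# municipalities, drawn around the reported 42 +/- 5% figure.
muni_a = rng.normal(42, 5, size=10)
muni_b = rng.normal(42, 5, size=10)
muni_c = rng.normal(42, 5, size=10)

# One-way ANOVA: are mean compositions different between municipalities?
# A large p-value is consistent with "statistically similar" composition.
f_stat, p_value = stats.f_oneway(muni_a, muni_b, muni_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```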

  17. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Edjabou, Maklawe Essonanawe, E-mail: vine@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Petersen, Claus [Econet AS, Omøgade 8, 2.sal, 2100 Copenhagen (Denmark); Scheutz, Charlotte; Astrup, Thomas Fruergaard [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark)

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level (tiered approach) facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, e.g. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both, waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  18. Testing Genetic Pleiotropy with GWAS Summary Statistics for Marginal and Conditional Analyses.

    Science.gov (United States)

    Deng, Yangqing; Pan, Wei

    2017-12-01

There is growing interest in testing genetic pleiotropy, which is when a single genetic variant influences multiple traits. Several methods have been proposed; however, these methods have some limitations. First, all the proposed methods are based on the use of individual-level genotype and phenotype data; in contrast, for logistical and other reasons, typically only summary statistics of univariate SNP-trait associations are available, based on meta- or mega-analyzed large genome-wide association study (GWAS) data. Second, existing tests are based on marginal pleiotropy, which cannot distinguish between direct and indirect associations of a single genetic variant with multiple traits due to correlations among the traits. Hence, it is useful to consider conditional analysis, in which a subset of traits is adjusted for another subset of traits. For example, in spite of substantial lowering of low-density lipoprotein cholesterol (LDL) with statin therapy, some patients still maintain high residual cardiovascular risk, and, for these patients, it might be helpful to reduce their triglyceride (TG) level. For this purpose, in order to identify new therapeutic targets, it would be useful to identify genetic variants with pleiotropic effects on LDL and TG after adjusting the latter for LDL; otherwise, a pleiotropic effect of a genetic variant detected by a marginal model could simply be due to its association with LDL only, given the well-known correlation between the two types of lipids. Here, we develop a new pleiotropy testing procedure based only on GWAS summary statistics that can be applied for both marginal and conditional analysis. Although the main technical development is based on published union-intersection testing methods, care is needed in specifying conditional models to avoid invalid statistical estimation and inference. In addition to the previously used likelihood ratio test, we also propose using generalized estimating equations under the
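The marginal union-intersection idea behind such a test can be sketched from summary statistics alone: the variant counts as pleiotropic only if it is associated with every trait, so the combined p-value is the maximum of the per-trait p-values. A minimal illustration with hypothetical betas and standard errors (the paper's conditional models and GEE machinery are not reproduced here):

```python
import math

def two_sided_p(z):
    """Two-sided normal p-value from a z-score."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def pleiotropy_iut(betas, ses):
    """Intersection-union test from GWAS summary statistics:
    reject 'no pleiotropy' only if every per-trait test rejects,
    i.e. the combined p-value is max of the per-trait p-values."""
    pvals = [two_sided_p(b / s) for b, s in zip(betas, ses)]
    return max(pvals), pvals

# Hypothetical (beta, SE) pairs for one SNP on two traits
p_both, _ = pleiotropy_iut([0.25, 0.20], [0.05, 0.05])  # strong on both traits
p_one, _ = pleiotropy_iut([0.25, 0.02], [0.05, 0.05])   # strong on one trait only
```

With effects on both traits the combined p-value stays small; an effect on only one trait drives it toward the larger (non-significant) p-value.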

  19. Temporal scaling and spatial statistical analyses of groundwater level fluctuations

    Science.gov (United States)

    Sun, H.; Yuan, L., Sr.; Zhang, Y.

    2017-12-01

Natural dynamics such as groundwater level fluctuations can exhibit multifractionality and/or multifractality, likely due to multi-scale aquifer heterogeneity and controlling factors, whose statistics require efficient quantification methods. This study explores multifractionality and non-Gaussian properties in groundwater dynamics expressed by time series of daily level fluctuations at three wells located in the lower Mississippi valley, after removing the seasonal cycle, in the temporal scaling and spatial statistical analysis. First, using the time-scale multifractional analysis, a systematic statistical method is developed to analyze groundwater level fluctuations quantified by the time-scale local Hurst exponent (TS-LHE). Results show that the TS-LHE does not remain constant, implying that the fractal-scaling behavior changes with time and location. Hence, we can distinguish the potentially location-dependent scaling feature, which may characterize the hydrologic dynamic system. Second, spatial statistical analysis shows that the increment of groundwater level fluctuations exhibits a heavy-tailed, non-Gaussian distribution, which can be better quantified by a Lévy stable distribution. Monte Carlo simulations of the fluctuation process also show that the linear fractional stable motion model can depict well the transient dynamics (i.e., fractal non-Gaussian property) of groundwater level, while fractional Brownian motion is inadequate to describe natural processes with anomalous dynamics. Analysis of temporal scaling and spatial statistics therefore may provide useful information and quantification for further understanding the nature of complex dynamics in hydrology.
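The temporal-scaling idea behind a Hurst-type exponent can be illustrated on a synthetic series. This sketch assumes increment variance scales as tau**(2H) and uses ordinary Brownian motion (H = 0.5 by construction) as a stand-in for a groundwater record; it is not the paper's TS-LHE procedure:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for a daily groundwater level series: ordinary
# Brownian motion, whose Hurst exponent is 0.5 by construction.
level = np.cumsum(rng.standard_normal(5000))

# For (multi)fractional motions, Var[X(t+tau) - X(t)] ~ tau**(2H),
# so the log-log slope of increment variance against lag gives 2H.
lags = np.array([1, 2, 4, 8, 16, 32, 64])
variances = np.array([np.var(level[lag:] - level[:-lag]) for lag in lags])
slope, _ = np.polyfit(np.log(lags), np.log(variances), 1)
hurst = slope / 2.0
```

A local (windowed) version of the same regression is the basic ingredient of time-scale local Hurst exponent estimates.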

  20. Review of Statistical Analyses Resulting from Performance of HLDWD-DWPF-005

    International Nuclear Information System (INIS)

    Beck, R.S.

    1997-01-01

The Engineering Department at the Defense Waste Processing Facility (DWPF) has reviewed two reports from the Statistical Consulting Section (SCS) involving the statistical analysis of test results for analysis of small sample inserts (references 1 and 2). The test results cover two proposed analytical methods, a room temperature hydrofluoric acid preparation (Cold Chem) and a sodium peroxide/sodium hydroxide fusion modified for insert samples (Modified Fusion). The reports support implementation of the proposed small sample containers and analytical methods at DWPF. Hydragard sampler valve performance was typical of previous results (reference 3). Using an element from each major feed stream, lithium from the frit and iron from the sludge, the sampler was determined to deliver a uniform mixture in either sample container. The lithium to iron ratios were equivalent for the standard 15 ml vial and the 3 ml insert. The proposed methods provide analyses equivalent to the current methods. The biases associated with the proposed methods on a vitrified basis are less than 5% for major elements. The sum of oxides for the proposed method compares favorably with the sum of oxides for the conventional methods. However, the average sum of oxides for the Cold Chem method was 94.3%, which is below the minimum required recovery of 95%. Both proposed methods, Cold Chem and Modified Fusion, will initially be required to provide an accurate analysis which will routinely meet the 95% and 105% average sum of oxides limits for the Product Composition Control System (PCCS). Issues to be resolved during phased implementation are as follows: (1) Determine the calcine/vitrification factor for radioactive feed; (2) Evaluate covariance matrix change against process operating ranges to determine optimum sample size; (3) Evaluate sources for low sum of oxides; and (4) Improve remote operability of production versions of equipment and instruments for installation in 221-S. The specifics of

  1. Exploratory study on a statistical method to analyse time resolved data obtained during nanomaterial exposure measurements

    International Nuclear Information System (INIS)

    Clerc, F; Njiki-Menga, G-H; Witschger, O

    2013-01-01

Most of the measurement strategies that are suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring, in real time, airborne particle concentrations (according to different metrics). Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the search for an appropriate and robust method continues. In this context, this exploratory study investigates a statistical method to analyse time resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, particle number concentration data from a workplace study that investigated the potential for exposure via inhalation from cleanout operations by sandpapering of a reactor producing nanocomposite thin films have been used. In this workplace study, the background issue has been addressed through the near-field and far-field approaches, and several size integrated and time resolved devices have been used. The analysis of the results presented here focuses only on data obtained with two handheld condensation particle counters. While one was measuring at the source of the released particles, the other one was measuring in parallel far-field. The Bayesian probabilistic approach allows a probabilistic modelling of data series, and the observed task is modelled in the form of probability distributions. The probability distributions issuing from time resolved data obtained at the source can be compared with the probability distributions issuing from the time resolved data obtained far-field, leading in a
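A toy version of the near-field/far-field comparison might model the counts from each counter as Poisson with a conjugate Gamma prior, then compare the posterior rate distributions directly. The counts and prior parameters below are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical particle counts per sampling interval from two handheld CPCs
near_field = np.array([52, 61, 48, 57, 63, 55])   # at the source
far_field = np.array([19, 24, 17, 22, 20, 18])    # background

def posterior_rate_samples(counts, a0=1.0, b0=1e-3, n=20000):
    """Gamma(a0, b0) prior + Poisson likelihood gives a Gamma posterior
    on the mean count rate; draw Monte Carlo samples from it."""
    return rng.gamma(a0 + counts.sum(), 1.0 / (b0 + counts.size), size=n)

near = posterior_rate_samples(near_field)
far = posterior_rate_samples(far_field)
# Posterior probability that the source raises concentrations above background
p_elevated = np.mean(near > far)
```

Comparing whole posterior distributions, rather than single summary numbers, is the essence of the probabilistic modelling the abstract describes.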

  2. A multi-criteria evaluation system for marine litter pollution based on statistical analyses of OSPAR beach litter monitoring time series.

    Science.gov (United States)

    Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael

    2013-12-01

    During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Multiple and asymmetrical origin of polyploid dog rose hybrids (Rosa L. sect. Caninae (DC.) Ser.) involving unreduced gametes.

    Science.gov (United States)

    Herklotz, V; Ritz, C M

    2017-08-01

Polyploidy and hybridization are important factors for generating diversity in plants. The species-rich dog roses (Rosa sect. Caninae) originated by allopolyploidy and are characterized by unbalanced meiosis producing polyploid egg cells (usually 4x) and haploid sperm cells (1x). In extant natural stands species hybridize spontaneously, but the extent of natural hybridization is unknown. The aim of the study was to document the frequency of reciprocal hybridization between the subsections Rubigineae and Caninae with special reference to the contribution of unreduced egg cells (5x) producing 6x offspring after fertilization with reduced (1x) sperm cells. We tested whether hybrids arose by independent multiple events or via a single or few incidences followed by a subsequent spread of hybrids. Population genetics of 45 mixed stands of dog roses across central and south-eastern Europe were analysed using microsatellite markers and flow cytometry. Hybrids were recognized by the presence of diagnostic alleles, and multivariate statistics were used to display the relationships between parental species and hybrids. Among plants classified to subsect. Rubigineae, 32% hybridogenic individuals were detected, but only 8% hybrids were found in plants assigned to subsect. Caninae. This bias between reciprocal crossings was accompanied by a higher ploidy level in Rubigineae hybrids, which originated more frequently by unreduced egg cells. Genetic patterns of hybrids were strongly geographically structured, supporting their independent origin. The biased crossing barriers between subsections are explained by the facilitated production of unreduced gametes in subsect. Rubigineae. Unreduced egg cells probably provide the highly homologous chromosome sets required for correct chromosome pairing in hybrids. Furthermore, the higher frequency of Rubigineae hybrids is probably influenced by abundance effects because the plants of subsect. Caninae are much more abundant

  4. The significance of the environmental 'cross-section' clause of article 130r section 2 subsection 2 of the EEC treaty for the realization of the Single European Market

    International Nuclear Information System (INIS)

    Breier, A.S.

    1992-01-01

The completion of the single European Market on 31.12.1992 has not only led to economic advantages within the European Community. The author exemplifies the detrimental environmental effects brought about by the framing of tax, energy, goods traffic and aviation policies for the purposes of the Single Market in accordance with the EEC Treaty. On viewing the factual material the author comes to the conclusion that at least the one-sided concepts of the two areas of traffic policy are incompatible with the environmental ''cross-section'' clause of Article 130r Section 2 Subsection 2 of the EEC Treaty. (orig.)

  5. Using statistical inference for decision making in best estimate analyses

    International Nuclear Information System (INIS)

    Sermer, P.; Weaver, K.; Hoppe, F.; Olive, C.; Quach, D.

    2008-01-01

    For broad classes of safety analysis problems, one needs to make decisions when faced with randomly varying quantities which are also subject to errors. The means for doing this involves a statistical approach which takes into account the nature of the physical problems, and the statistical constraints they impose. We describe the methodology for doing this which has been developed at Nuclear Safety Solutions, and we draw some comparisons to other methods which are commonly used in Canada and internationally. Our methodology has the advantages of being robust and accurate and compares favourably to other best estimate methods. (author)

  6. Statistical Diversions

    Science.gov (United States)

    Petocz, Peter; Sowey, Eric

    2012-01-01

    The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…
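The cost of data snooping can be demonstrated with a short simulation: testing whichever of several comparisons looks most extreme, instead of the single one planned in advance, inflates the false positive rate far above the nominal 5%. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_groups, n_obs, z_crit = 2000, 10, 50, 1.96

false_pos_planned = 0   # test only the pre-specified first group
false_pos_snooped = 0   # test whichever group looks most extreme after peeking

for _ in range(n_sims):
    # All groups are pure noise, so every "significant effect" is a false positive
    z = rng.standard_normal((n_groups, n_obs)).mean(axis=1) * np.sqrt(n_obs)
    false_pos_planned += abs(z[0]) > z_crit
    false_pos_snooped += np.abs(z).max() > z_crit

rate_planned = false_pos_planned / n_sims   # near the nominal 0.05
rate_snooped = false_pos_snooped / n_sims   # near 1 - 0.95**10, about 0.40
```

This is exactly why the column's "fundamental precept" requires the analysis scheme to be fixed before the data are inspected.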

  7. Essentials of Excel, Excel VBA, SAS and Minitab for statistical and financial analyses

    CERN Document Server

    Lee, Cheng-Few; Chang, Jow-Ran; Tai, Tzu

    2016-01-01

    This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, MINITAB, and SAS. Every chapter in this textbook engages the reader with data of individual stock, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and MINITAB. Of those, we look at MINITAB and SAS in this textbook. One of the main reasons to use MINITAB is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities...

  8. Update and Improve Subsection NH - Alternative Simplified Creep-Fatigue Design Methods

    International Nuclear Information System (INIS)

    Asayama, Tai

    2009-01-01

This report describes the results of an investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to Update and Improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods. Five newly proposed promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) an approach for pressure vessel application, (4) a hybrid method of time fraction and ductility exhaustion, and (5) a simplified model test approach. The outlines of these methods are presented first, and the predictability of experimental results by these methods is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not ready for application) predicted experimental results fairly accurately. On the other hand, predicted creep-fatigue life in long-term regions showed considerable differences among the methodologies. These differences come from the concepts each method is based on. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that deserve serious consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer whole new advantages, including robustness and simplicity, which are definitely attractive, but this approach is yet to be validated for implementation at this point. Therefore, this report recommends the following two steps as a course of improvement of NH based on newly proposed creep-fatigue evaluation

  9. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  10. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    Science.gov (United States)

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  11. Cancer Statistics Animator

    Science.gov (United States)

    This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. Provides access to incidence, mortality, and survival. Select the type of statistic, variables, format, and then extract the statistics in a delimited format for further analyses.

  12. Perceived Statistical Knowledge Level and Self-Reported Statistical Practice Among Academic Psychologists

    Directory of Open Access Journals (Sweden)

    Laura Badenes-Ribera

    2018-06-01

Introduction: Publications arguing against the null hypothesis significance testing (NHST) procedure and in favor of good statistical practices have increased. The most frequently mentioned alternatives to NHST are effect size statistics (ES), confidence intervals (CIs), and meta-analyses. A recent survey conducted in Spain found that academic psychologists have poor knowledge about effect size statistics, confidence intervals, and graphic displays for meta-analyses, which might lead to a misinterpretation of the results. In addition, it also found that, although the use of ES is becoming generalized, the same thing is not true for CIs. Finally, academics with greater knowledge about ES statistics presented a profile closer to good statistical practice and research design. Our main purpose was to analyze the extension of these results to a different geographical area through a replication study. Methods: For this purpose, we elaborated an on-line survey that included the same items as the original research, and we asked academic psychologists to indicate their level of knowledge about ES, CIs, and meta-analyses, and how they use them. The sample consisted of 159 Italian academic psychologists (54.09% women; mean age 47.65 years). The mean number of years in the position of professor was 12.90 (SD = 10.21). Results: As in the original research, the results showed that, although the use of effect size estimates is becoming generalized, an under-reporting of CIs for ES persists. The most frequent ES statistics mentioned were Cohen's d and R2/η2, which can have outliers or show non-normality or violate statistical assumptions. In addition, academics showed poor knowledge about meta-analytic displays (e.g., forest plots and funnel plots) and quality checklists for studies. Finally, academics with higher-level knowledge about ES statistics seem to have a profile closer to good statistical practices. Conclusions: Changing statistical practice is not
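For readers less familiar with the effect size statistics discussed, Cohen's d with an approximate confidence interval can be computed from group summaries alone. This sketch uses the standard normal-approximation standard error for d; the group values are hypothetical:

```python
import math

def cohens_d_ci(m1, m2, s1, s2, n1, n2, z=1.96):
    """Cohen's d from two group summaries, with an approximate 95% CI
    (normal approximation to the sampling distribution of d)."""
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical groups: means 10 and 8, common SD 2, 50 subjects each
d, (lo, hi) = cohens_d_ci(m1=10.0, m2=8.0, s1=2.0, s2=2.0, n1=50, n2=50)
```

Reporting the interval alongside d is precisely the "ES plus CI" practice the survey found to be under-used.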

  13. Detailed statistical contact angle analyses; "slow moving" drops on inclining silicon-oxide surfaces.

    Science.gov (United States)

    Schmitt, M; Groß, K; Grub, J; Heib, F

    2015-06-01

Contact angle determination by the sessile drop technique is essential to characterise surface properties in science and in industry. Different specific angles can be observed on every solid, correlated with the advancing or the receding of the triple line. Different procedures and definitions for the determination of specific angles exist, which are often not comprehensible or reproducible. Therefore one of the most important tasks in this area is to establish standard, reproducible and valid methods for determining advancing/receding contact angles. This contribution introduces novel techniques to analyse dynamic contact angle measurements (sessile drop) in detail which are applicable for axisymmetric and non-axisymmetric drops. Not only the recently presented fit solution by sigmoid function and the independent analysis of the different parameters (inclination, contact angle, velocity of the triple point) but also the dependent analysis will be explained in detail for the first time. These approaches lead to contact angle data, and to specific contact angles, that are independent of "user skills" and the subjectivity of the operator. As an example, the motion behaviour of droplets on flat silicon-oxide surfaces after different surface treatments is dynamically measured by the sessile drop technique while inclining the sample plate. The triple points, the inclination angles, the downhill (advancing motion) and the uphill angles (receding motion) obtained by high-precision drop shape analysis are independently and dependently statistically analysed. Due to the small distance covered, the dependent analyses are well suited to contact angle determination and are characterised by small deviations of the computed values.
In addition to the detailed introduction of these novel analytical approaches and the fit solution, special motion relations for drops on inclined surfaces and detailed relations concerning the reactivity of the freshly cleaned silicon wafer surface resulting in acceleration

  14. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    Science.gov (United States)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
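The transformations described can be sketched in a few lines of NumPy. This is an illustrative reimplementation (grid-search Box-Cox for slope, arctangent with an assumed spread divisor for curvature), not the authors' ArcGIS script, and the synthetic data are hypothetical:

```python
import numpy as np

def skewness(x):
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean() ** 1.5

def boxcox_mle(x, lambdas=np.linspace(-2, 2, 401)):
    """Pick the Box-Cox lambda maximising the profile log-likelihood
    -n/2 * log(var(y_lambda)) + (lambda - 1) * sum(log x)."""
    logx = np.log(x)
    best_lam, best_llf = None, -np.inf
    for lam in lambdas:
        y = logx if abs(lam) < 1e-12 else (x**lam - 1) / lam
        llf = -0.5 * x.size * np.log(y.var()) + (lam - 1) * logx.sum()
        if llf > best_llf:
            best_lam, best_llf = lam, llf
    y = logx if abs(best_lam) < 1e-12 else (x**best_lam - 1) / best_lam
    return best_lam, y

rng = np.random.default_rng(7)
slope = rng.lognormal(mean=1.0, sigma=0.8, size=2000)  # synthetic long-tailed slope gradients
lam, slope_t = boxcox_mle(slope)

curvature = rng.standard_cauchy(2000)        # synthetic heavy-tailed curvatures
curvature_t = np.arctan(curvature / 10.0)    # divisor tunes spread; illustrative value
```

The Box-Cox step shrinks the long right tail of the slope distribution, and the arctangent bounds curvature values symmetrically, which is what makes the subsequent parametric statistics defensible.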

  15. Statistical approaches in published ophthalmic clinical science papers: a comparison to statistical practice two decades ago.

    Science.gov (United States)

    Zhang, Harrison G; Ying, Gui-Shuang

    2018-02-09

The aim of this study is to evaluate the current practice of statistical analysis of eye data in clinical science papers published in the British Journal of Ophthalmology (BJO) and to determine whether the practice of statistical analysis has improved in the past two decades. All clinical science papers (n=125) published in BJO in January-June 2017 were reviewed for their statistical analysis approaches for analysing the primary ocular measure. We compared our findings to the results from a previous paper that reviewed BJO papers in 1995. Of 112 papers eligible for analysis, half of the studies analysed the data at an individual level because of the nature of observation, 16 (14%) studies analysed data from one eye only, 36 (32%) studies analysed data from both eyes at the ocular level, one study (1%) analysed the overall summary of ocular findings per individual and three (3%) studies used a paired comparison. Among studies with data available from both eyes, 50 (89%) of 56 papers in 2017 did not analyse data from both eyes or ignored the intereye correlation, compared with 60 (90%) of 67 papers in 1995 (P=0.96). Among studies that analysed data from both eyes at an ocular level, 33 (92%) of 36 studies completely ignored the intereye correlation in 2017, compared with 16 (89%) of 18 studies in 1995 (P=0.40). A majority of studies did not analyse the data properly when data from both eyes were available. The practice of statistical analysis did not improve in the past two decades. Collaborative efforts should be made in the vision research community to improve the practice of statistical analysis for ocular data. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
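Why ignoring intereye correlation matters can be shown with a small simulation: treating two correlated eyes as independent observations understates the standard error relative to a person-level analysis. The variance components below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n_people = 500

# Each person contributes two eyes; eyes of the same person share a
# person-level effect, so they are positively correlated (high ICC here).
person_effect = rng.standard_normal(n_people)
eyes = person_effect[:, None] + 0.5 * rng.standard_normal((n_people, 2))

# Naive analysis: treat all 1000 eyes as if they were independent
se_naive = eyes.std(ddof=1) / np.sqrt(eyes.size)

# Person-level analysis: average the two eyes first, then n = people
person_means = eyes.mean(axis=1)
se_person = person_means.std(ddof=1) / np.sqrt(n_people)
```

The naive standard error is smaller than the correct person-level one, so confidence intervals are too narrow and p-values too optimistic, which is the error the review documents.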

  16. Influence of Immersion Conditions on The Tensile Strength of Recycled Kevlar®/Polyester/Low-Melting-Point Polyester Nonwoven Geotextiles through Applying Statistical Analyses

    Directory of Open Access Journals (Sweden)

    Jing-Chzi Hsieh

    2016-05-01

The recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET) nonwoven geotextiles are immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength is then tested according to various immersion periods at various temperatures, in order to determine their chemical durability. For the purpose of analyzing the possible factors that influence the mechanical properties of geotextiles under diverse environmental conditions, the experimental results and statistical analyses are incorporated in this study. Influences of the content of recycled Kevlar® fibers, implementation of thermal treatment, and immersion periods on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles are therefore examined, after which their influential levels are statistically determined by performing multiple regression analyses. According to the results, the tensile strength of nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and applying thermal treatment.

  17. Lies, damn lies and statistics

    International Nuclear Information System (INIS)

    Jones, M.D.

    2001-01-01

Statistics are widely employed within archaeological research. This is becoming increasingly so as user-friendly statistical packages make ever more sophisticated analyses available to non-statisticians. However, all statistical techniques are based on underlying assumptions of which the end user may be unaware. If statistical analyses are applied in ignorance of the underlying assumptions, there is the potential for highly erroneous inferences to be drawn. This does happen within archaeology, and here it is illustrated with the example of 'date pooling', a technique that has been widely misused in archaeological research. This misuse may have given rise to an inevitable and predictable misinterpretation of New Zealand's archaeological record. (author). 10 refs., 6 figs., 1 tab
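Date pooling of the kind criticised here is only defensible when the determinations being pooled are statistically consistent, which a Ward & Wilson style chi-square check makes explicit. The dates, errors, and critical values below are illustrative:

```python
import numpy as np

def pooled_date(dates, errors, chi2_crit):
    """Inverse-variance pooled mean with a consistency check: pool only if
    T = sum(((d_i - pooled) / sigma_i)**2) stays below the chi-square
    critical value for n-1 degrees of freedom (Ward & Wilson style)."""
    dates, errors = np.asarray(dates, float), np.asarray(errors, float)
    w = 1.0 / errors**2
    pooled = (w * dates).sum() / w.sum()
    t_stat = (((dates - pooled) / errors) ** 2).sum()
    return pooled, t_stat, t_stat < chi2_crit

# Consistent determinations (years BP); chi-square 5% critical value, 2 df = 5.991
_, t_ok, consistent = pooled_date([3000, 3020, 2990], [30, 30, 30], 5.991)
# Inconsistent determinations; chi-square 5% critical value, 1 df = 3.841
_, t_bad, inconsistent_ok = pooled_date([3000, 3300], [30, 30], 3.841)
```

Pooling the second pair anyway, as the misuse described here amounts to, would average two dates that the data themselves say cannot share a true age.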

  18. Statistical parametric mapping and statistical probabilistic anatomical mapping analyses of basal/acetazolamide Tc-99m ECD brain SPECT for efficacy assessment of endovascular stent placement for middle cerebral artery stenosis

    International Nuclear Information System (INIS)

    Lee, Tae-Hong; Kim, Seong-Jang; Kim, In-Ju; Kim, Yong-Ki; Kim, Dong-Soo; Park, Kyung-Pil

    2007-01-01

Statistical parametric mapping (SPM) and statistical probabilistic anatomical mapping (SPAM) were applied to basal/acetazolamide Tc-99m ECD brain perfusion SPECT images in patients with middle cerebral artery (MCA) stenosis to assess the efficacy of endovascular stenting of the MCA. Enrolled in the study were 11 patients (8 men and 3 women, mean age 54.2 ± 6.2 years) who had undergone endovascular stent placement for MCA stenosis. Using SPM and SPAM analyses, we compared the number of significant voxels and cerebral counts in basal and acetazolamide SPECT images before and after stenting, and assessed the perfusion changes and cerebral vascular reserve index (CVRI). The numbers of hypoperfusion voxels in SPECT images were decreased from 10,083 ± 8,326 to 4,531 ± 5,091 in basal images (P = 0.0317) and from 13,398 ± 14,222 to 7,699 ± 10,199 in acetazolamide images (P = 0.0142) after MCA stenting. On SPAM analysis, the increases in cerebral counts were significant in acetazolamide images (90.9 ± 2.2 to 93.5 ± 2.3, P = 0.0098) but not in basal images (91 ± 2.7 to 92 ± 2.6, P = 0.1602). The CVRI also showed a statistically significant increase from before stenting (median 0.32; 95% CI -2.19-2.37) to after stenting (median 1.59; 95% CI -0.85-4.16; P = 0.0068). This study revealed the usefulness of voxel-based analysis of basal/acetazolamide brain perfusion SPECT after MCA stent placement. This study showed that SPM and SPAM analyses of basal/acetazolamide Tc-99m brain SPECT could be used to evaluate the short-term hemodynamic efficacy of successful MCA stent placement. (orig.)

  19. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...
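Comparing candidate distribution families by maximum likelihood, as in the record above, can be sketched with the standard library alone. The Weibull fits require iterative optimisation and are omitted here, and the strength values are simulated, not the study's data:

```python
import math
import random
import statistics

def normal_loglik(data):
    """Gaussian log-likelihood evaluated at the MLE (mean, population sd)."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def lognormal_loglik(data):
    """Lognormal log-likelihood = normal log-likelihood of the logs
    minus the Jacobian term sum(log x)."""
    logs = [math.log(x) for x in data]
    return normal_loglik(logs) - sum(logs)

random.seed(1)
# Simulated right-skewed "strengths" (MPa-like magnitudes), drawn lognormal
strengths = [math.exp(random.gauss(3.6, 0.5)) for _ in range(500)]
print(lognormal_loglik(strengths) > normal_loglik(strengths))
```

For genuinely skewed data like these, the lognormal fit attains the higher log-likelihood; the same comparison extends to Weibull fits once their MLEs are computed.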

  20. [Statistics for statistics?--Thoughts about psychological tools].

    Science.gov (United States)

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

    Statistical methods occupy a prominent place in psychologists' training. Known as difficult to understand and hard to learn, these contents are feared by students, and those who do not aspire to a research career at a university quickly forget the drilled material. Furthermore, because at first glance it does not apply to work with patients and other target groups, the methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only as a way of commanding respect from other professions, namely physicians. For their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics deals with numbers, while psychotherapy deals with subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented within psychotherapeutic and psychological research. To this end, we analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, the backbone of research, presumes a high degree of statistical education. To ignore statistics means to ignore research, and ultimately to surrender one's own professional work to arbitrariness.

  1. Dispensing processes impact apparent biological activity as determined by computational and statistical analyses.

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    Full Text Available Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research.
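The claim above that the two dispensing methods yield IC50 values "with no correlation or ranking" is exactly what a rank correlation quantifies. A small sketch; the IC50 values are invented for illustration, and this simple `rank` helper does not average ties:

```python
def rank(values):
    # positions 1..n in sorted order; ties are not averaged in this sketch
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for pos, idx in enumerate(order):
        ranks[idx] = pos + 1.0
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical IC50 values (uM) for five compounds under each protocol
acoustic = [0.5, 1.2, 8.0, 30.0, 0.9]
tip_based = [40.0, 3.0, 0.7, 5.0, 60.0]
print(spearman(acoustic, tip_based))   # → -0.6 (rankings disagree)
```

A rho near zero (or negative, as here) means the two protocols would prioritise different compounds from the same screen.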

  2. Modern applied statistics with S-plus

    CERN Document Server

    Venables, W N

    1994-01-01

    S-Plus is a powerful environment for statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-Plus to perform statistical analyses and provides both an introduction to the use of S-Plus and a course in modern statistical methods. The aim of the book is to show how to use S-Plus as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-Plus, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets.

  3. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  4. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  5. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  6. Fundamental data analyses for measurement control

    International Nuclear Information System (INIS)

    Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.

    1987-02-01

    A set of measurement control data analyses was selected for use by analysts responsible for maintaining the measurement quality of nuclear materials accounting instrumentation. The analyses consist of control charts for bias and precision and of statistical tests used as analytic supplements to the control charts. They provide the desired detection sensitivity and yet can be interpreted locally, quickly, and easily. The control charts provide for visual inspection of data and enable an alert reviewer to spot problems, possibly before statistical tests detect them. The statistical tests are useful for automating the detection of departures from the controlled state or from the underlying assumptions (such as normality). 8 refs., 3 figs., 5 tabs
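A control chart of the kind described reduces to a centre line and ±3σ limits, with points outside the limits flagged for review. A minimal sketch with illustrative bias measurements; a production chart would base σ on historical calibration data rather than on the plotted points themselves:

```python
import statistics

def control_limits(baseline, k=3.0):
    """Centre line and +-k*sigma limits estimated from baseline measurements."""
    center = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return center - k * sigma, center, center + k * sigma

def out_of_control(measurements, lcl, ucl):
    """Indices of points falling outside the control limits."""
    return [i for i, m in enumerate(measurements) if not lcl <= m <= ucl]

# Illustrative instrument-bias measurements from an in-control period
baseline = [0.0, 0.1, -0.1, 0.2, -0.2, 0.05, -0.05, 0.15, -0.15, 0.0]
lcl, center, ucl = control_limits(baseline)
print(out_of_control([0.1, 0.5, -0.05], lcl, ucl))   # → [1]
```

The statistical tests the record mentions (e.g. for normality or trend) would run alongside this visual rule to automate detection.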

  7. Otolith patterns of rockfishes from the northeastern Pacific.

    Science.gov (United States)

    Tuset, Victor M; Imondi, Ralph; Aguado, Guillermo; Otero-Ferrer, José L; Santschi, Linda; Lombarte, Antoni; Love, Milton

    2015-04-01

    Sagitta otolith shape was analysed in twenty sympatric rockfishes off the southern California coast (Northeastern Pacific). The variation in shape was quantified using canonical variate analysis based on a fifth-level wavelet decomposition of the otolith contour. We selected wavelets because this representation allows the identification of zones or single morphological points along the contour. The entire otolith, along with four subsections (anterior, ventral, posterodorsal, and anterodorsal) with morphological meaning, was examined. Multivariate analyses (MANOVA) showed significant differences among rockfishes in the contours of whole otolith morphology and of the corresponding subsections. Four patterns were found: fusiform, oblong, and two types of elliptic. A redundancy analysis indicated that the anterior and anterodorsal subsections contribute most to defining the entire otolith shape. Complementarily, the eco-morphological study indicated that depth distribution and prey-capture strategies were correlated with otolith shape, especially with the anterodorsal zone. © 2014 Wiley Periodicals, Inc.

  8. Does environmental data collection need statistics?

    NARCIS (Netherlands)

    Pulles, M.P.J.

    1998-01-01

    The term 'statistics' with reference to environmental science and policymaking might mean different things: the development of statistical methodology, the methodology developed by statisticians to interpret and analyse such data, or the statistical data that are needed to understand environmental

  9. Reporting characteristics of meta-analyses in orthodontics: methodological assessment and statistical recommendations.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Papadopoulos, Moschos A; Athanasiou, Athanasios E

    2014-02-01

    Ideally, meta-analyses (MAs) should consolidate the characteristics of orthodontic research in order to produce an evidence-based answer. However, severe flaws are frequently observed in most of them. The aim of this study was to evaluate the statistical methods, the methodology, and the quality characteristics of orthodontic MAs, and to assess their reporting quality in recent years. Electronic databases were searched for MAs (with or without a proper systematic review) in the field of orthodontics, indexed up to 2011. The AMSTAR tool was used for quality assessment of the included articles. Data were analyzed with Student's t-test, one-way ANOVA, and generalized linear modelling. Risk ratios with 95% confidence intervals were calculated to represent changes over the years in the reporting of key items associated with quality. A total of 80 MAs with 1086 primary studies were included in this evaluation. Using the AMSTAR tool, 25 (27.3%) of the MAs were found to be of low quality, 37 (46.3%) of medium quality, and 18 (22.5%) of high quality. Specific characteristics like explicit protocol definition, extensive searches, and quality assessment of included trials were associated with a higher AMSTAR score. Model selection and dealing with heterogeneity or publication bias were often problematic in the identified reviews. The number of published orthodontic MAs is constantly increasing, while their overall quality ranges from low to medium. Although the number of medium- and high-quality MAs seems lately to be rising, several other aspects need improvement to increase their overall quality.
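The risk ratios with 95% confidence intervals used in the study above follow the standard log-scale formula. A sketch with made-up counts (say, an item reported in a out of n1 reviews in one period versus b out of n2 in another):

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of a/n1 versus b/n2 with a z-based CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(rr)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Hypothetical: an item reported in 30/50 recent MAs vs 20/50 older MAs
rr, lo, hi = risk_ratio_ci(30, 50, 20, 50)
print(rr, lo, hi)
```

The interval is symmetric on the log scale, which is why it is asymmetric around the ratio itself; a CI excluding 1 indicates a real change in reporting.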

  10. Statistics for NAEG: past efforts, new results, and future plans

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Simpson, J.C.; Kinnison, R.R.; Engel, D.W.

    1983-06-01

    A brief review of Nevada Applied Ecology Group (NAEG) objectives is followed by a summary of past statistical analyses conducted by Pacific Northwest Laboratory for the NAEG. Estimates of spatial pattern of radionuclides and other statistical analyses at NS's 201, 219 and 221 are reviewed as background for new analyses presented in this paper. Suggested NAEG activities and statistical analyses needed for the projected termination date of NAEG studies in March 1986 are given

  11. Radiation Protection Ordinance 1989. Supplement with Radiation Protection Register Ordinance, general administration regulation pursuant to Sect. 45 Radiation Protection Ordinance, general administration regulation pursuant to Sect. 62 sub-sect. 2 (radiation passport)

    International Nuclear Information System (INIS)

    Veith, H.M.

    1990-01-01

    The addendum contains regulations issued supplementary to the Radiation Protection Ordinance: the Radiation Protection Register Ordinance as of April 3, 1990, including the law on the setting up of a Federal Office on Radiation Protection; the general administration regulation pursuant to Sect. 45 Radiation Protection Ordinance as of February 21, 1990; the general administration regulation pursuant to Sect. 62 sub-sect. 2 Radiation Protection Ordinance as of May 3, 1990 (AVV Radiation Passport). The volume contains, apart from the legal texts, the appropriate decision by the Bundesrat and the official explanation from the Bundestag publications, as well as a comprehensive introduction into the new legal matter. (orig.)

  12. Report on the FY17 Development of Computer Program for ASME Section III, Division 5, Subsection HB, Subpart B Rules

    Energy Technology Data Exchange (ETDEWEB)

    Swindeman, M. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Jetter, R. I. [Argonne National Lab. (ANL), Argonne, IL (United States); Sham, T. -L. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-01

    One of the objectives of the high temperature design methodology activities is to develop and validate both improvements and the basic features of ASME Boiler and Pressure Vessel Code, Section III, Rules for Construction of Nuclear Facility Components, Division 5, High Temperature Reactors, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to aid assessment procedures for components under specified loading conditions in accordance with the elevated temperature design requirements for Division 5 Class A components. There are many features and alternative paths of varying complexity in HBB. The initial focus of this computer program is a basic path through the various options for a single reference material, 316H stainless steel. However, the computer program is being structured for eventual incorporation of all of the features and permitted materials of HBB. This report first provides a description of the overall computer program, particular challenges in developing numerical procedures for the assessment, and the overall approach to computer program development. This is followed by a more comprehensive appendix, which is the draft computer program manual. The strain limits rules have been implemented in the computer program. The evaluation of creep-fatigue damage will be implemented in future work.

  13. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of the thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach ... based on differences of statistical measures within a section and the same measures between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...

  14. Additional methodology development for statistical evaluation of reactor safety analyses

    International Nuclear Information System (INIS)

    Marshall, J.A.; Shore, R.W.; Chay, S.C.; Mazumdar, M.

    1977-03-01

    The project described is motivated by the desire for methods to quantify uncertainties and to identify conservatisms in nuclear power plant safety analysis. The report examines statistical methods useful for assessing the probability distribution of output response from complex nuclear computer codes, considers sensitivity analysis and several other topics, and also sets the path for using the developed methods for realistic assessment of the design basis accident

  15. Stress analyses for reactor pressure vessels by the example of a product line '69 boiling water reactor

    International Nuclear Information System (INIS)

    Mkrtchyan, Lilit; Schau, Henry; Wolf, Werner; Holzer, Wieland; Wernicke, Robert; Trieglaff, Ralf

    2011-01-01

    The reactor pressure vessels (RPV) of boiling water reactors (BWR) belonging to the product line '69 have unusually designed heads. The spherical cap-shaped bottom head of the vessel is welded directly to the support flange of the lower shell course. This unusual construction has led repeatedly to controversial discussions concerning the limits and admissibility of stress intensities arising in the junction of the bottom head to the cylindrical shell. In the present paper, stress analyses for the design conditions are performed with the finite element method in order to determine and categorize the occurring stresses. The procedure of stress classification in accordance with the guidelines of German KTA 3201.2 and Section III of the ASME Code (Subsection NB) is described and subsequently demonstrated by the example of a typical BWR vessel. The accomplished investigations yield allowable stress intensities in the considered area. Additionally, limit load analyses are carried out to verify the obtained results. Complementary studies, performed for a torispherical head, prove that the determined maximum peak stresses in the junction between the bottom head and the cylindrical shell are not unusual also for pressure vessels with regular bottom head constructions. (orig.)

  16. Statistical analyses of the performance of Macedonian investment and pension funds

    Directory of Open Access Journals (Sweden)

    Petar Taleski

    2015-10-01

    Full Text Available The foundation of the post-modern portfolio theory is creating a portfolio based on a desired target return. This specifically applies to the performance of investment and pension funds that provide a rate of return meeting payment requirements from investment funds. A desired target return is the goal of an investment or pension fund. It is the primary benchmark used to measure performance and for dynamic monitoring and evaluation of the risk–return ratio of investment funds. The analysis in this paper is based on monthly returns of Macedonian investment and pension funds (June 2011 - June 2014). The analysis utilizes basic but highly informative statistical characteristic moments like skewness, kurtosis, Jarque–Bera, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above-mentioned and other types of statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling), to draw relevant conclusions regarding the risks and characteristic moments in Macedonian investment and pension funds. Pension funds are the second largest segment of the financial system and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest growing segment of the financial system. Statistical analysis has shown that, in the analyzed period, pension funds delivered a significantly positive volatility-adjusted risk premium, more so than investment funds.
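The characteristic moments named above — skewness, kurtosis and the Jarque–Bera statistic built from them — are straightforward to compute from a return series. A stdlib-only sketch; the return figures are invented, not fund data:

```python
def moments(returns):
    """Sample skewness, raw kurtosis (3 for a normal) and Jarque-Bera statistic."""
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n
    m3 = sum((r - mean) ** 3 for r in returns) / n
    m4 = sum((r - mean) ** 4 for r in returns) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2
    jb = n / 6 * (skew ** 2 + (kurt - 3) ** 2 / 4)  # large jb => non-normal
    return skew, kurt, jb

# Invented monthly returns for illustration
monthly_returns = [0.012, -0.004, 0.021, 0.008, -0.015, 0.005, 0.018, -0.002]
print(moments(monthly_returns))
```

A large Jarque–Bera value (compared against a chi-squared distribution with 2 degrees of freedom) rejects normality, which is why fund analyses pair it with downside measures such as Sortino.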

  17. Statistical power of intervention analyses: simulation and empirical application to treated lumber prices

    Science.gov (United States)

    Jeffrey P. Prestemon

    2009-01-01

    Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...

  18. Notices about using elementary statistics in psychology

    OpenAIRE

    松田, 文子; 三宅, 幹子; 橋本, 優花里; 山崎, 理央; 森田, 愛子; 小嶋, 佳子

    2003-01-01

    Improper uses of elementary statistics that were often observed in beginners' manuscripts and papers were collected and better ways were suggested. This paper consists of three parts: About descriptive statistics, multivariate analyses, and statistical tests.

  19. Stress analyses for reactor pressure vessels by the example of a product line '69 boiling water reactor

    Energy Technology Data Exchange (ETDEWEB)

    Mkrtchyan, Lilit; Schau, Henry [TUEV SUED Energietechnik GmbH, Mannheim (Germany). Abt. Strukturverhalten; Wolf, Werner; Holzer, Wieland [TUEV SUED Industrie Service GmbH, Muenchen (Germany). Abt. Behaelter und Turbosatz; Wernicke, Robert; Trieglaff, Ralf [TUEV NORD SysTec GmbH und Co. KG, Hamburg (Germany). Abt. Festigkeit und Konstruktion

    2011-08-15

    The reactor pressure vessels (RPV) of boiling water reactors (BWR) belonging to the product line '69 have unusually designed heads. The spherical cap-shaped bottom head of the vessel is welded directly to the support flange of the lower shell course. This unusual construction has led repeatedly to controversial discussions concerning the limits and admissibility of stress intensities arising in the junction of the bottom head to the cylindrical shell. In the present paper, stress analyses for the design conditions are performed with the finite element method in order to determine and categorize the occurring stresses. The procedure of stress classification in accordance with the guidelines of German KTA 3201.2 and Section III of the ASME Code (Subsection NB) is described and subsequently demonstrated by the example of a typical BWR vessel. The accomplished investigations yield allowable stress intensities in the considered area. Additionally, limit load analyses are carried out to verify the obtained results. Complementary studies, performed for a torispherical head, prove that the determined maximum peak stresses in the junction between the bottom head and the cylindrical shell are not unusual also for pressure vessels with regular bottom head constructions. (orig.)

  20. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
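The usual between-groups d that the single-case statistic above is designed to match is simply the difference in group means divided by the pooled standard deviation. A minimal sketch with toy scores, not the paper's data:

```python
import math
import statistics

def cohens_d(group1, group2):
    """Standardised mean difference using the pooled (Bessel-corrected) sd."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (statistics.fmean(group1) - statistics.fmean(group2)) / pooled_sd

print(cohens_d([2, 4, 6], [1, 3, 5]))   # → 0.5
```

Because d is unit-free, effects measured on different outcome scales can be pooled in a meta-analysis, which is the property the single-case version aims to preserve.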

  1. Modern applied statistics with s-plus

    CERN Document Server

    Venables, W N

    1997-01-01

    S-PLUS is a powerful environment for the statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-PLUS to perform statistical analyses and provides both an introduction to the use of S-PLUS and a course in modern statistical methods. S-PLUS is available for both Windows and UNIX workstations, and both versions are covered in depth. The aim of the book is to show how to use S-PLUS as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-PLUS, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets. Many of the methods discussed are state-of-the-art approaches to topics such as linear and non-linear regression models, robust a...

  2. Errors in statistical decision making Chapter 2 in Applied Statistics in Agricultural, Biological, and Environmental Sciences

    Science.gov (United States)

    Agronomic and Environmental research experiments result in data that are analyzed using statistical methods. These data are unavoidably accompanied by uncertainty. Decisions about hypotheses, based on statistical analyses of these data are therefore subject to error. This error is of three types,...

  3. Authigenic oxide Neodymium Isotopic composition as a proxy of seawater: applying multivariate statistical analyses.

    Science.gov (United States)

    McKinley, C. C.; Scudder, R.; Thomas, D. J.

    2016-12-01

    The Neodymium Isotopic composition (Nd IC) of oxide coatings has been applied as a tracer of water mass composition and used to address fundamental questions about past ocean conditions. The leached authigenic oxide coating from marine sediment is widely assumed to reflect the dissolved trace metal composition of the bottom water interacting with sediment at the seafloor. However, recent studies have shown that readily reducible sediment components, in addition to trace metal fluxes from the pore water, are incorporated into the bottom water, influencing the trace metal composition of leached oxide coatings. This challenges the prevailing application of the authigenic oxide Nd IC as a proxy of seawater composition. Therefore, it is important to identify the component end-members that create sediments of different lithology and determine if, or how, they might contribute to the Nd IC of oxide coatings. To investigate lithologic influence on the results of sequential leaching, we selected two sites with complete bulk sediment statistical characterization. Site U1370 in the South Pacific Gyre is predominantly composed of rhyolite (~60%) and has a distinguishable (~10%) Fe-Mn oxyhydroxide component (Dunlea et al., 2015). Site 1149 near the Izu-Bonin Arc is predominantly composed of dispersed ash (~20-50%) and eolian dust from Asia (~50-80%) (Scudder et al., 2014). We perform a two-step leaching procedure: a 14 mL leach of 0.02 M hydroxylamine hydrochloride (HH) in 20% acetic acid buffered to pH 4 for one hour, targeting metals bound to the Fe- and Mn-oxide fractions, and a second HH leach for 12 hours, designed to remove any remaining oxides from the residual component. We analyze all three resulting fractions for a large suite of major, trace and rare earth elements; a subset of the samples are also analyzed for Nd IC. We use multivariate statistical analyses of the resulting geochemical data to identify how each component of the sediment partitions across the sequential

  4. Intuitive introductory statistics

    CERN Document Server

    Wolfe, Douglas A

    2017-01-01

    This textbook is designed to give an engaging introduction to statistics and the art of data analysis. The unique scope includes, but also goes beyond, classical methodology associated with the normal distribution. What if the normal model is not valid for a particular data set? This cutting-edge approach provides the alternatives. It is an introduction to the world and possibilities of statistics that uses exercises, computer analyses, and simulations throughout the core lessons. These elementary statistical methods are intuitive. Counting and ranking features prominently in the text. Nonparametric methods, for instance, are often based on counts and ranks and are very easy to integrate into an introductory course. The ease of computation with advanced calculators and statistical software, both of which factor into this text, allows important techniques to be introduced earlier in the study of statistics. This book's novel scope also includes measuring symmetry with Walsh averages, finding a nonp...

  5. 47 CFR 1.363 - Introduction of statistical data.

    Science.gov (United States)

    2010-10-01

    47 Telecommunication 1 (2010-10-01) ... Proceedings, Evidence § 1.363 Introduction of statistical data. (a) All statistical studies, offered in... analyses, and experiments, and those parts of other studies involving statistical methodology shall be...

  6. The Use of Statistical Process Control Tools for Analysing Financial Statements

    Directory of Open Access Journals (Sweden)

    Niezgoda Janusz

    2017-06-01

    Full Text Available This article presents a proposed application of one type of modified Shewhart control chart to the monitoring of changes in the aggregated level of financial ratios. The x̅ control chart has been used as the basis of the analysis; the variable examined in that chart is normally the arithmetic mean of a sample. The author proposes to substitute it with a synthetic measure determined from selected ratios. As the ratios mentioned above are expressed in different units and are of different character, the author applies standardisation. The results of selected comparative analyses are presented for both bankrupt and non-bankrupt firms. They indicate the possibility of using control charts as an auxiliary tool in financial analyses.
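The standardisation step the article relies on — putting differently scaled ratios on a common footing before combining them into one synthetic measure — can be sketched as follows. The ratio names, the example figures, and the simple averaging rule are assumptions for illustration, not the article's exact construction:

```python
import statistics

def standardise(series):
    """z-scores: zero mean, unit (population) standard deviation."""
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series)
    return [(x - mu) / sd for x in series]

def synthetic_measure(ratio_columns):
    """Average the standardised ratios, period by period."""
    z_cols = [standardise(col) for col in ratio_columns]
    periods = len(z_cols[0])
    return [sum(col[t] for col in z_cols) / len(z_cols) for t in range(periods)]

# Hypothetical quarterly liquidity and profitability ratios for one firm
liquidity = [1.8, 1.6, 1.1, 0.7]
profitability = [0.09, 0.07, 0.02, -0.04]
print(synthetic_measure([liquidity, profitability]))   # steadily declining series
```

The resulting series is then plotted on an x̅-style chart; a sustained drift below the lower control limit is the early-warning signal the article is after.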

  7. Inferring the origin of rare fruit distillates from compositional data using multivariate statistical analyses and the identification of new flavour constituents.

    Science.gov (United States)

    Mihajilov-Krstev, Tatjana M; Denić, Marija S; Zlatković, Bojan K; Stankov-Jovanović, Vesna P; Mitić, Violeta D; Stojanović, Gordana S; Radulović, Niko S

    2015-04-01

    In Serbia, delicatessen fruit alcoholic drinks are produced from autochthonous fruit-bearing species such as cornelian cherry, blackberry, elderberry, wild strawberry, European wild apple, European blueberry and blackthorn fruits. There are no chemical data on many of these and herein we analysed volatile minor constituents of these rare fruit distillates. Our second goal was to determine possible chemical markers of these distillates through a statistical/multivariate treatment of the herein obtained and previously reported data. Detailed chemical analyses revealed a complex volatile profile of all studied fruit distillates with 371 identified compounds. A number of constituents were recognised as marker compounds for a particular distillate. Moreover, 33 of them represent newly detected flavour constituents in alcoholic beverages or, in general, in foodstuffs. With the aid of multivariate analyses, these volatile profiles were successfully exploited to infer the origin of raw materials used in the production of these spirits. It was also shown that all fruit distillates possessed weak antimicrobial properties. It seems that the aroma of these highly esteemed wild-fruit spirits depends on the subtle balance of various minor volatile compounds, whereby some of them are specific to a certain type of fruit distillate and enable their mutual distinction. © 2014 Society of Chemical Industry.
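Inferring the origin of a sample from its compositional profile, as the multivariate treatment above does, can be illustrated with a nearest-centroid classifier — a deliberately simplified stand-in for the article's methods, with invented marker abundances and using two of the fruit types it names:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def nearest_fruit(sample, centroids):
    """Assign a volatile profile to the fruit with the closest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda name: dist(sample, centroids[name]))

# Invented relative abundances of three marker volatiles per known distillate
profiles = {
    "cornelian cherry": [[8.0, 1.0, 0.5], [7.5, 1.2, 0.4]],
    "blackberry": [[1.0, 6.0, 2.0], [1.3, 5.5, 2.2]],
}
centroids = {name: centroid(vs) for name, vs in profiles.items()}
print(nearest_fruit([7.8, 1.1, 0.6], centroids))   # → cornelian cherry
```

Real studies of this kind use richer multivariate methods (PCA, discriminant analysis) on hundreds of constituents, but the underlying idea — distance to a class profile — is the same.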

  8. Elementary Statistics Tables

    CERN Document Server

    Neave, Henry R

    2012-01-01

    This book, designed for students taking a basic introductory course in statistical analysis, is far more than just a book of tables. Each table is accompanied by a careful but concise explanation and useful worked examples. Requiring little mathematical background, Elementary Statistics Tables is thus not just a reference book but a positive and user-friendly teaching and learning aid. The new edition contains a new and comprehensive "teach-yourself" section on a simple but powerful approach, now well known in parts of industry but less so in academia, to analysing and interpreting process data.

  9. Statistical analysis and interpretation of prenatal diagnostic imaging studies, Part 2: descriptive and inferential statistical methods.

    Science.gov (United States)

    Tuuli, Methodius G; Odibo, Anthony O

    2011-08-01

    The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.

  10. First-Generation Transgenic Plants and Statistics

    NARCIS (Netherlands)

    Nap, Jan-Peter; Keizer, Paul; Jansen, Ritsert

    1993-01-01

    The statistical analyses of populations of first-generation transgenic plants are commonly based on mean and variance and generally require a test of normality. Since in many cases the assumptions of normality are not met, analyses can result in erroneous conclusions. Transformation of data to

  11. DESIGNING ENVIRONMENTAL MONITORING DATABASES FOR STATISTIC ASSESSMENT

    Science.gov (United States)

    Databases designed for statistical analyses have characteristics that distinguish them from databases intended for general use. EMAP uses a probabilistic sampling design to collect data to produce statistical assessments of environmental conditions. In addition to supporting the ...

  12. Consumer Loyalty and Loyalty Programs: a topographic examination of the scientific literature using bibliometrics, spatial statistics and network analyses

    Directory of Open Access Journals (Sweden)

    Viviane Moura Rocha

    2015-04-01

    Full Text Available This paper presents a topographic analysis of the fields of consumer loyalty and loyalty programs, vastly studied in the last decades and still relevant in the marketing literature. After the identification of 250 scientific papers published in the last ten years in indexed journals, a subset of 76 was chosen and their 3223 references were extracted. The journals in which these papers were published, their keywords, abstracts, authors, institutions of origin and citation patterns were identified and analyzed using bibliometrics, spatial statistics techniques and network analyses. The results allow the identification of the central components of the field, as well as its main authors, journals, institutions and the countries that intermediate the diffusion of knowledge, which contributes to the understanding of the constitution of the field by researchers and students.

  13. Multivariate statistical methods a first course

    CERN Document Server

    Marcoulides, George A

    2014-01-01

    Multivariate statistics refer to an assortment of statistical methods that have been developed to handle situations in which multiple variables or measures are involved. Any analysis of more than two variables or measures can loosely be considered a multivariate statistical analysis. An introductory text for students learning multivariate statistical methods for the first time, this book keeps mathematical details to a minimum while conveying the basic principles. One of the principal strategies used throughout the book--in addition to the presentation of actual data analyses--is poin

  14. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  15. Statistical criteria for characterizing irradiance time series.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Ellis, Abraham; Hansen, Clifford W.

    2010-10-01

    We propose and examine several statistical criteria for characterizing time series of solar irradiance. Time series of irradiance are used in analyses that seek to quantify the performance of photovoltaic (PV) power systems over time. Time series of irradiance are either measured or are simulated using models. Simulations of irradiance are often calibrated to or generated from statistics for observed irradiance and simulations are validated by comparing the simulation output to the observed irradiance. Criteria used in this comparison should derive from the context of the analyses in which the simulated irradiance is to be used. We examine three statistics that characterize time series and their use as criteria for comparing time series. We demonstrate these statistics using observed irradiance data recorded in August 2007 in Las Vegas, Nevada, and in June 2009 in Albuquerque, New Mexico.

  16. Statistic analyses of the color experience according to the age of the observer.

    Science.gov (United States)

    Hunjet, Anica; Parac-Osterman, Durdica; Vucaj, Edita

    2013-04-01

    Psychological experience of color is an actual state of communication between the environment and color, and it depends on the light source, the viewing angle and, in particular, on the observer and his or her health condition. Hering's theory, or the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to three separate chromatic domains (red, green and purple-blue) but produce a signal based on the principle of opposed pairs of colors. Support for this theory comes from the fact that certain disorders of color vision, which include blindness to certain colors, cause blindness to pairs of opponent colors. This paper presents the experience of blue and yellow tones according to the age of the observer. To test for statistically significant differences in the color experience according to the color of the background, the following statistical tests were used: the Mann-Whitney U test, the Kruskal-Wallis ANOVA and the median test. The differences were shown to be statistically significant for older observers (older than 35 years).
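For readers unfamiliar with the nonparametric tests named in this record, a minimal pure-Python sketch of the Mann-Whitney U statistic (with average ranks for ties) is given below; the data are invented, and in practice a library routine such as scipy.stats.mannwhitneyu would also supply the p-value:

```python
def ranks(values):
    """Ranks 1..n, with tied values receiving their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(a, b):
    """U statistic: min(U1, U2), from the rank sum of sample a."""
    r = ranks(list(a) + list(b))
    r1 = sum(r[:len(a)])
    u1 = r1 - len(a) * (len(a) + 1) / 2
    u2 = len(a) * len(b) - u1
    return min(u1, u2)

# Invented example: complete separation between groups gives U = 0.
u = mann_whitney_u([1, 2, 3], [4, 5, 6])
```

The Kruskal-Wallis test extends the same rank-based idea to more than two groups.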

  17. Statistical analysis of management data

    CERN Document Server

    Gatignon, Hubert

    2013-01-01

    This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.

  18. Practical Statistics for Environmental and Biological Scientists

    CERN Document Server

    Townend, John

    2012-01-01

    All students and researchers in environmental and biological sciences require statistical methods at some stage of their work. Many have a preconception that statistics are difficult and unpleasant and find that the textbooks available are difficult to understand. Practical Statistics for Environmental and Biological Scientists provides a concise, user-friendly, non-technical introduction to statistics. The book covers planning and designing an experiment, how to analyse and present data, and the limitations and assumptions of each statistical method. The text does not refer to a specific comp

  19. Statistical limitations in functional neuroimaging. I. Non-inferential methods and statistical models.

    Science.gov (United States)

    Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P

    1999-01-01

    Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149

  20. The semi-empirical low-level background statistics

    International Nuclear Information System (INIS)

    Tran Manh Toan; Nguyen Trieu Tu

    1992-01-01

    A semi-empirical low-level background statistic is proposed. It can be applied to evaluate the sensitivity of low-background systems and to analyse the statistical error and the 'rejection' and 'accordance' criteria for processing low-level experimental data. (author). 5 refs, 1 figs

  1. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    …having strengths and weaknesses that must be considered for the individual needs. These are reviewed in this chapter. Equally critical for generating highly confident output datasets is the application of sound statistical criteria to limit the inclusion of incorrect peptide identifications from database searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here.

  2. Using R-Project for Free Statistical Analysis in Extension Research

    Science.gov (United States)

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  3. Statistical analysis and data management

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and the problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship was found between the variance in small mammal captures on Dr. Gennaro's 90 m x 90 m grid and precipitation records from the Duval Potash Mine.

  4. Design and implementation of a modular program system for the carrying-through of statistical analyses

    International Nuclear Information System (INIS)

    Beck, W.

    1984-01-01

    The complexity of computer programs for the solution of scientific and technical problems gives rise to a number of questions. Typical questions concern the strengths and weaknesses of computer programs, the propagation of uncertainties among the input data, the sensitivity of the output data to the input data, and the substitution of complex models by simpler ones that provide equivalent results in certain ranges. Such questions have general practical significance; answers may in principle be found by statistical methods based on the Monte Carlo method. In this report suitable statistical methods are chosen, described and evaluated. They are implemented in the modular program system STAR, which is a component of the program system RSYST. The design of STAR takes into account users with different levels of knowledge of data processing and statistics, the variety of statistical methods and of generating and evaluating procedures, the processing of large data sets in complex structures, the coupling to other components of RSYST and to programs outside RSYST, and the requirement that the system be easy to modify and extend. Four examples are given, which demonstrate the application of STAR. (orig.)

  5. Statistical analyses of the magnet data for the advanced photon source storage ring magnets

    International Nuclear Information System (INIS)

    Kim, S.H.; Carnegie, D.W.; Doose, C.; Hogrefe, R.; Kim, K.; Merl, R.

    1995-01-01

    The statistics of the measured magnetic data for the 80 dipole, 400 quadrupole, and 280 sextupole magnets of conventional resistive design for the APS storage ring are summarized. In order to accommodate the vacuum chamber, the curved dipole has a C-type cross section, and the quadrupole and sextupole cross sections have 180 degrees and 120 degrees symmetries, respectively. The data statistics include the integrated main fields, multipole coefficients, magnetic and mechanical axes, and roll angles of the main fields. The average and rms values of the measured magnet data meet the storage ring requirements.

  6. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.

  7. Statistical Analyses of Second Indoor Bio-Release Field Evaluation Study at Idaho National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Pulsipher, Brent A.; Matzke, Brett D.

    2009-12-17

    number of zeros. Using QQ plots these data characteristics show a lack of normality from the data after contamination. Normality is improved when looking at log(CFU/cm2). Variance component analysis (VCA) and analysis of variance (ANOVA) were used to estimate the amount of variance due to each source and to determine which sources of variability were statistically significant. In general, the sampling methods interacted with the across event variability and with the across room variability. For this reason, it was decided to do analyses for each sampling method, individually. The between event variability and between room variability were significant for each method, except for the between event variability for the swabs. For both the wipes and vacuums, the within room standard deviation was much larger (26.9 for wipes and 7.086 for vacuums) than the between event standard deviation (6.552 for wipes and 1.348 for vacuums) and the between room standard deviation (6.783 for wipes and 1.040 for vacuums). Swabs between room standard deviation was 0.151, while both the within room and between event standard deviations are less than 0.10 (all measurements in CFU/cm2).

  8. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. It differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of this paper is an example showing how to use the two together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  9. Critical analysis of adsorption data statistically

    Science.gov (United States)

    Kaushal, Achla; Singh, S. K.

    2017-10-01

    Experimental data can be presented, computed and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of experimental data. In the present study, adsorption was carried out to remove zinc ions from a contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, the paired t test and the Chi-square test, to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of the calculated and tabulated values of t and χ2 showed the results to be in favour of the data collected from the experiment, and this has been shown on probability charts. The K value obtained for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725, both for adsorption on mango leaf powder.
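The hypothesis tests mentioned in this abstract reduce to short formulas. The sketch below (with invented numbers, not the study's adsorption data) computes the paired t statistic and the chi-square statistic, which would then be compared against tabulated critical values:

```python
from math import sqrt
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), where d = x - y."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))

def chi_square(observed, expected):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Invented before/after measurements and invented observed/expected counts.
t_stat = paired_t([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
x2_stat = chi_square([10, 10], [5, 15])
```

If the computed statistic exceeds the tabulated value for the chosen significance level and degrees of freedom, the null hypothesis is rejected.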

  10. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary-data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse-variance weighted (IVW) approach, which assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I2GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I2GX), the stronger the dilution. When, additionally, all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect is underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects.
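The proposed statistic adapts the usual meta-analysis I2, which is derived from Cochran's Q. A hedged sketch, using the generic Higgins-Thompson form I2 = max(0, (Q - (k - 1)) / Q) applied to inverse-variance-weighted SNP-exposure estimates (the paper's exact construction may differ), might look like:

```python
def i_squared(estimates, std_errors):
    """Higgins-Thompson I^2 from a weighted Q statistic.

    Applied to SNP-exposure association estimates, this mirrors the idea
    behind the I2GX regression-dilution diagnostic described above.
    """
    w = [1.0 / se ** 2 for se in std_errors]          # inverse-variance weights
    gbar = sum(wi * g for wi, g in zip(w, estimates)) / sum(w)
    q = sum(wi * (g - gbar) ** 2 for wi, g in zip(w, estimates))
    k = len(estimates)
    return max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
```

Values near 1 indicate precisely estimated, mutually distinct SNP-exposure effects (little expected dilution); low values warn that the MR-Egger causal estimate will be attenuated towards the null.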

  11. Statistical methods for analysing the relationship between bank profitability and liquidity

    OpenAIRE

    Boguslaw Guzik

    2006-01-01

    The article analyses the most popular methods for the empirical estimation of the relationship between bank profitability and liquidity. Owing to the fact that profitability depends on various factors (both economic and non-economic), a simple correlation coefficient, two-dimensional (profitability/liquidity) graphs or models where profitability depends only on a liquidity variable do not provide good and reliable results. Quite good results can be obtained only when multifactorial profitability…

  12. Statistical Literacy in the Data Science Workplace

    Science.gov (United States)

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  13. Building Effective Collaboration in a Comprehensive Approach (Etablissement d’une collaboration efficace selon une approche globale)

    Science.gov (United States)

    2017-11-01

    contribute to these identified needs. 1.3 REPORT OVERVIEW: In this introductory chapter we have briefly introduced the complexity of security and… The response was distributed over fifteen civil organisations and fourteen military (sub)sections (n = 47 and 73, respectively). Statistical tests indicated… which provided an indication of their engagement in coordination. Subsequently, we used statistical multiple regressions to test for relationships.

  14. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI)

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non–expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI’s robustness and sensitivity in capturing useful data relating to the students’ conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. PMID:26903497

  15. Entropy statistics and information theory

    NARCIS (Netherlands)

    Frenken, K.; Hanusch, H.; Pyka, A.

    2007-01-01

    Entropy measures provide important tools to indicate variety in distributions at particular moments in time (e.g., market shares) and to analyse evolutionary processes over time (e.g., technical change). Importantly, entropy statistics are suitable to decomposition analysis, which renders the

  16. Hydrogeologic characterization and evolution of the 'excavation damaged zone' by statistical analyses of pressure signals: application to galleries excavated at the clay-stone sites of Mont Terri (Ga98) and Tournemire (Ga03)

    International Nuclear Information System (INIS)

    Fatmi, H.; Ababou, R.; Matray, J.M.; Joly, C.

    2010-01-01

    Document available in extended abstract form only. This paper presents methods of statistical analysis and interpretation of hydrogeological signals in clayey formations, e.g., pore water pressure and atmospheric pressure. The purpose of these analyses is to characterize the hydraulic behaviour of this type of formation in the case of a deep repository of Mid-Level/High-Level and Long-lived radioactive wastes, and to study the evolution of the geologic formation and its EDZ (Excavation Damaged Zone) during the excavation of galleries. We focus on galleries Ga98 and Ga03 at the sites of Mont Terri (Jura, Switzerland) and Tournemire (Aveyron, France), through data collected in the BPP-1 and PH2 boreholes, respectively. The Mont Terri site, crossing the Aalenian Opalinus clay-stone, is an underground laboratory managed by an international consortium, namely the Mont Terri project (Switzerland). The Tournemire site, crossing the Toarcian clay-stone, is an underground research facility managed by IRSN (France). We have analysed pore water and atmospheric pressure signals at these sites, sometimes in correlation with other data. The methods of analysis are based on the theory of stationary random signals (correlation functions, Fourier spectra, transfer functions, envelopes) and on multi-resolution wavelet analysis (adapted to non-stationary and evolutionary signals). These methods are also combined with filtering techniques, and they can be used for single signals as well as pairs of signals (cross-analyses). The objective of this work is to exploit pressure measurements in selected boreholes from the two compacted clay sites in order to: evaluate phenomena affecting the measurements (earth tides, barometric pressures, etc.); estimate hydraulic properties (specific storage, etc.) of the clay-stones prior to excavation works and compare them with those estimated by pulse or slug tests on shorter time scales; and analyze the effects of drift excavation on pore pressures.

  17. The effects of clinical and statistical heterogeneity on the predictive values of results from meta-analyses

    NARCIS (Netherlands)

    Melsen, W G; Rovers, M M; Bonten, M J M; Bootsma, M C J|info:eu-repo/dai/nl/304830305

    Variance between studies in a meta-analysis will exist. This heterogeneity may be of clinical, methodological or statistical origin. The last of these is quantified by the I(2)-statistic. We investigated, using simulated studies, the accuracy of I(2) in the assessment of heterogeneity and the

  18. "Are cognitive interventions effective in Alzheimer's disease? A controlled meta- analysis of the effects of bias": Correction to Oltra-Cucarella et al. (2016).

    Science.gov (United States)

    2016-07-01

    Reports an error in "Are Cognitive Interventions Effective in Alzheimer's Disease? A Controlled Meta-Analysis of the Effects of Bias" by Javier Oltra-Cucarella, Rubén Pérez-Elvira, Raul Espert and Anita Sohn McCormick (Neuropsychology, Advanced Online Publication, Apr 7, 2016, np). In the article the first sentence of the third paragraph of the Source of bias subsection in the Statistical Analysis subsection of the Correlational Meta-Analysis section should read "For the control condition bias, three comparison groups were differentiated: (a) a structured cognitive intervention, (b) a placebo control condition, and (c) a pharma control condition without cognitive intervention or no treatment at all." (The following abstract of the original article appeared in record 2016-16656-001.) There is limited evidence about the efficacy of cognitive interventions for Alzheimer's disease (AD). However, aside from the methodological quality of the studies analyzed, the methodology used in previous meta-analyses is itself a risk of bias as different types of effect sizes (ESs) were calculated and combined. This study aimed at examining the results of nonpharmacological interventions for AD with an adequate control of statistical methods and to demonstrate a different approach to meta-analysis. ESs were calculated with the independent groups pre/post design. Average ESs for separate outcomes were calculated and moderator analyses were performed so as to offer an overview of the effects of bias. Eighty-seven outcomes from 19 studies (n = 812) were meta-analyzed. ESs were small on average for cognitive and functional outcomes after intervention. Moderator analyses showed no effect of control of bias, although ESs were different from zero only in some circumstances (e.g., memory outcomes in randomized studies). Cognitive interventions showed no more efficacy than placebo interventions, and functional ESs were consistently low across conditions. cognitive interventions delivered

  19. The disagreeable behaviour of the kappa statistic.

    Science.gov (United States)

    Flight, Laura; Julious, Steven A

    2015-01-01

    It is often of interest to measure the agreement between a number of raters when an outcome is nominal or ordinal. The kappa statistic is used as a measure of agreement. The statistic is highly sensitive to the distribution of the marginal totals and can produce unreliable results. Other statistics such as the proportion of concordance, maximum attainable kappa and prevalence and bias adjusted kappa should be considered to indicate how well the kappa statistic represents agreement in the data. Each kappa should be considered and interpreted based on the context of the data being analysed. Copyright © 2014 John Wiley & Sons, Ltd.
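The kappa statistic's sensitivity to the marginal totals, and the prevalence-and-bias-adjusted alternative (PABAK) mentioned above, can be illustrated with a minimal sketch for two raters and a binary outcome (the function name and 2x2 cell layout are assumptions for illustration, not from the paper):

```python
def kappa_stats(a, b, c, d):
    """Cohen's kappa and PABAK for a 2x2 agreement table:
    a = both raters say 'yes', d = both say 'no',
    b and c = the two kinds of disagreement."""
    n = a + b + c + d
    po = (a + d) / n                          # observed agreement
    p_yes = ((a + b) / n) * ((a + c) / n)     # chance 'yes'-'yes'
    p_no = ((c + d) / n) * ((b + d) / n)      # chance 'no'-'no'
    pe = p_yes + p_no                         # chance agreement
    kappa = (po - pe) / (1 - pe)
    pabak = 2 * po - 1                        # marginal-free adjustment
    return kappa, pabak
```

With identical observed agreement (80%), a balanced table such as (40, 10, 10, 40) gives kappa = 0.6, while a highly skewed table such as (80, 10, 10, 0) gives a negative kappa; PABAK is 0.6 in both cases, which is exactly the "disagreeable behaviour" the paper discusses.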

  20. Dispersal of potato cyst nematodes measured using historical and spatial statistical analyses.

    Science.gov (United States)

    Banks, N C; Hodda, M; Singh, S K; Matveeva, E M

    2012-06-01

    Rates and modes of dispersal of potato cyst nematodes (PCNs) were investigated. Analysis of records from eight countries suggested that PCNs spread a mean distance of 5.3 km/year radially from the site of first detection, and spread 212 km over ≈40 years before detection. Data from four countries with more detailed histories of invasion were analyzed further, using distance from first detection, distance from previous detection, distance from nearest detection, straight line distance, and road distance. Linear distance from first detection was significantly related to the time since the first detection. Estimated rate of spread was 5.7 km/year, and did not differ statistically between countries. Time between the first detection and estimated introduction date varied between 0 and 20 years, and differed among countries. Road distances from nearest and first detection were statistically significantly related to time, and gave slightly higher estimates for rate of spread of 6.0 and 7.9 km/year, respectively. These results indicate that the original site of introduction of PCNs may act as a source for subsequent spread and that this may occur at a relatively constant rate over time regardless of whether this distance is measured by road or by a straight line. The implications of this constant radial rate of dispersal for biosecurity and pest management are discussed, along with the effects of control strategies.
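The radial spread rate reported above is the slope of spread distance regressed on time since first detection; a minimal ordinary-least-squares sketch (hypothetical function name, illustrative data matching the reported 5.3 km/year) looks like:

```python
def spread_rate(years, distances_km):
    """Ordinary least-squares slope of spread distance against
    time since first detection; the slope estimates the radial
    rate of spread in km/year."""
    n = len(years)
    mx = sum(years) / n
    my = sum(distances_km) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, distances_km))
    sxx = sum((x - mx) ** 2 for x in years)
    return sxy / sxx
```

A roughly constant slope regardless of whether distance is measured by road or straight line is what supports the paper's conclusion of a constant radial dispersal rate.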

  1. Statistical methods in spatial genetics

    DEFF Research Database (Denmark)

    Guillot, Gilles; Leblois, Raphael; Coulon, Aurelie

    2009-01-01

    The joint analysis of spatial and genetic data is rapidly becoming the norm in population genetics. More and more studies explicitly describe and quantify the spatial organization of genetic variation and try to relate it to underlying ecological processes. As it has become increasingly difficult to keep abreast with the latest methodological developments, we review the statistical toolbox available to analyse population genetic data in a spatially explicit framework. We mostly focus on statistical concepts but also discuss practical aspects of the analytical methods, highlighting not only...

  2. Usage statistics and demonstrator services

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    An understanding of the use of repositories and their contents is clearly desirable for authors and repository managers alike, as well as those who are analysing the state of scholarly communications. A number of individual initiatives have produced statistics of various kinds for individual repositories, but the real challenge is to produce statistics that can be collected and compared transparently on a global scale. This presentation details the steps to be taken to address these issues and attain this capability.

  3. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    Science.gov (United States)

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show, first, that instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  4. Multivariate statistical methods and data mining in particle physics (4/4)

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.

  5. Multivariate statistical methods and data mining in particle physics (2/4)

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.

  6. Multivariate statistical methods and data mining in particle physics (1/4)

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.

  7. The complete nucleotide sequences of the 5 genetically distinct plastid genomes of Oenothera, subsection Oenothera: II. A microevolutionary view using bioinformatics and formal genetic data.

    Science.gov (United States)

    Greiner, Stephan; Wang, Xi; Herrmann, Reinhold G; Rauwolf, Uwe; Mayer, Klaus; Haberer, Georg; Meurer, Jörg

    2008-09-01

    A unique combination of genetic features and a rich stock of information make the flowering plant genus Oenothera an appealing model to explore the molecular basis of speciation processes including nucleus-organelle coevolution. From representative species, we have recently reported complete nucleotide sequences of the 5 basic and genetically distinguishable plastid chromosomes of subsection Oenothera (I-V). In nature, Oenothera plastid genomes are associated with 6 distinct, either homozygous or heterozygous, diploid nuclear genotypes of the 3 basic genomes A, B, or C. Artificially produced plastome-genome combinations that do not occur naturally often display interspecific plastome-genome incompatibility (PGI). In this study, we compare formal genetic data available from all 30 plastome-genome combinations with sequence differences between the plastomes to uncover potential determinants for interspecific PGI. Consistent with an active role in speciation, a remarkable number of genes have high Ka/Ks ratios. Different from the Solanacean cybrid model Atropa/tobacco, RNA editing seems not to be relevant for PGIs in Oenothera. However, predominantly sequence polymorphisms in intergenic segments are proposed as possible sources for PGI. A single locus, the bidirectional promoter region between psbB and clpP, is suggested to contribute to compartmental PGI in the interspecific AB hybrid containing plastome I (AB-I), consistent with its perturbed photosystem II activity.

  8. Region-of-interest analyses of one-dimensional biomechanical trajectories: bridging 0D and 1D theory, augmenting statistical power

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2016-11-01

    One-dimensional (1D) kinematic, force, and EMG trajectories are often analyzed using zero-dimensional (0D) metrics like local extrema. Recently whole-trajectory 1D methods have emerged in the literature as alternatives. Since 0D and 1D methods can yield qualitatively different results, the two approaches may appear to be theoretically distinct. The purposes of this paper were (a) to clarify that 0D and 1D approaches are actually just special cases of a more general region-of-interest (ROI) analysis framework, and (b) to demonstrate how ROIs can augment statistical power. We first simulated millions of smooth, random 1D datasets to validate theoretical predictions of the 0D, 1D and ROI approaches and to emphasize how ROIs provide a continuous bridge between 0D and 1D results. We then analyzed a variety of public datasets to demonstrate potential effects of ROIs on biomechanical conclusions. Results showed, first, that a priori ROI particulars can qualitatively affect the biomechanical conclusions that emerge from analyses and, second, that ROIs derived from exploratory/pilot analyses can detect smaller biomechanical effects than are detectable using full 1D methods. We recommend regarding ROIs, like data filtering particulars and Type I error rate, as parameters which can affect hypothesis testing results, and thus as sensitivity analysis tools to ensure arbitrary decisions do not influence scientific interpretations. Last, we describe open-source Python and MATLAB implementations of 1D ROI analysis for arbitrary experimental designs ranging from one-sample t tests to MANOVA.

  9. Football goal distributions and extremal statistics

    Science.gov (United States)

    Greenhough, J.; Birch, P. C.; Chapman, S. C.; Rowlands, G.

    2002-12-01

    We analyse the distributions of the number of goals scored by home teams, away teams, and the total scored in the match, in domestic football games from 169 countries between 1999 and 2001. The probability density functions (PDFs) of goals scored are too heavy-tailed to be fitted over their entire ranges by Poisson or negative binomial distributions which would be expected for uncorrelated processes. Log-normal distributions cannot include zero scores and here we find that the PDFs are consistent with those arising from extremal statistics. In addition, we show that it is sufficient to model English top division and FA Cup matches in the seasons of 1970/71-2000/01 on Poisson or negative binomial distributions, as reported in analyses of earlier seasons, and that these are not consistent with extremal statistics.
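Whether an empirical goal distribution is "too heavy-tailed" for a Poisson model can be checked by comparing observed tail frequencies with the Poisson tail probability; a minimal sketch (the lambda value in the usage note is illustrative, not taken from the paper):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson(lam) random variable."""
    return lam ** k * exp(-lam) / factorial(k)

def poisson_tail(k, lam):
    """P(X >= k): the expected relative frequency of matches with
    k goals or more if goals arrived as an uncorrelated (Poisson)
    process. Heavy-tailed empirical data exceed this value."""
    return 1 - sum(poisson_pmf(i, lam) for i in range(k))
```

For example, with a mean of 1.5 goals per match, a Poisson model puts the probability of 8 or more goals for one team below one in a thousand, so observing such scores noticeably more often is evidence of a heavier-than-Poisson tail.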

  10. Analysing the spatial patterns of livestock anthrax in Kazakhstan in relation to environmental factors: a comparison of local (Gi*) and morphology cluster statistics

    Directory of Open Access Journals (Sweden)

    Ian T. Kracalik

    2012-11-01

    We compared a local clustering and a cluster morphology statistic using anthrax outbreaks in large (cattle) and small (sheep and goats) domestic ruminants across Kazakhstan. The Getis-Ord (Gi*) statistic and a multidirectional optimal ecotope algorithm (AMOEBA) were compared using 1st, 2nd and 3rd order Rook contiguity matrices. Multivariate statistical tests were used to evaluate the environmental signatures between clusters and non-clusters from the AMOEBA and Gi* tests. A logistic regression was used to define a risk surface for anthrax outbreaks and to compare agreement between clustering methodologies. Tests revealed differences in the spatial distribution of clusters as well as the total number of clusters in large ruminants for AMOEBA (n = 149) and for small ruminants (n = 9). In contrast, Gi* revealed fewer large ruminant clusters (n = 122) and more small ruminant clusters (n = 61). Significant environmental differences were found between groups using the Kruskal-Wallis and Mann-Whitney U tests. Logistic regression was used to model the presence/absence of anthrax outbreaks and define a risk surface for large ruminants to compare with cluster analyses. The model predicted 32.2% of the landscape as high risk. Approximately 75% of AMOEBA clusters corresponded to predicted high risk, compared with ~64% of Gi* clusters. In general, AMOEBA predicted more irregularly shaped clusters of outbreaks in both livestock groups, while Gi* tended to predict larger, circular clusters. Here we provide an evaluation of both tests and a discussion of the use of each to detect environmental conditions associated with anthrax outbreak clusters in domestic livestock. These findings illustrate important differences in spatial statistical methods for defining local clusters and highlight the importance of selecting appropriate levels of data aggregation.
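The Getis-Ord Gi* statistic named above can be sketched directly from its standard z-score form. This is a generic illustration for one focal location with binary contiguity weights (including the self-weight, which is the "*" in Gi*), not the authors' implementation; the function name and toy data are hypothetical.

```python
from math import sqrt

def getis_ord_gstar(values, weights_row):
    """Getis-Ord Gi* for one location: a z-score comparing the
    locally weighted sum (the location and its neighbours) with
    its expectation under spatial randomness. `weights_row` holds
    the binary contiguity weights w_ij for the focal location,
    with the self-weight w_ii = 1 included."""
    n = len(values)
    xbar = sum(values) / n
    s = sqrt(sum(v * v for v in values) / n - xbar ** 2)
    w_sum = sum(weights_row)
    wx_sum = sum(w * x for w, x in zip(weights_row, values))
    w2_sum = sum(w * w for w in weights_row)
    denom = s * sqrt((n * w2_sum - w_sum ** 2) / (n - 1))
    return (wx_sum - xbar * w_sum) / denom
```

A focal location surrounded by high values yields a large positive Gi* (a hot spot); one surrounded by low values yields a negative Gi* (a cold spot).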

  11. Research Pearls: The Significance of Statistics and Perils of Pooling. Part 3: Pearls and Pitfalls of Meta-analyses and Systematic Reviews.

    Science.gov (United States)

    Harris, Joshua D; Brand, Jefferson C; Cote, Mark P; Dhawan, Aman

    2017-08-01

    Within the health care environment, there has been a recent and appropriate trend towards emphasizing the value of care provision. Reduced cost and higher quality improve the value of care. Quality is a challenging, heterogeneous, variably defined concept. At the core of quality is the patient's outcome, quantified by a vast assortment of subjective and objective outcome measures. There has been a recent evolution towards evidence-based medicine in health care, clearly elucidating the role of high-quality evidence across groups of patients and studies. Synthetic studies, such as systematic reviews and meta-analyses, are at the top of the evidence-based medicine hierarchy. Thus, these investigations may be the best potential source of guiding diagnostic, therapeutic, prognostic, and economic medical decision making. Systematic reviews critically appraise and synthesize the best available evidence to provide a conclusion statement (a "take-home point") in response to a specific answerable clinical question. A meta-analysis uses statistical methods to quantitatively combine data from single studies. Meta-analyses should be performed with high-methodological-quality, homogeneous studies (Level I or II evidence, i.e., randomized studies) to minimize confounding variable bias. When it is known that the literature is inadequate or a recent systematic review has already been performed with a demonstration of insufficient data, then a new systematic review does not add anything meaningful to the literature. PROSPERO registration and PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines assist authors in the design and conduct of systematic reviews and should always be used. Complete transparency of the conduct of the review permits reproducibility and improves fidelity of the conclusions. Pooling of data from overly dissimilar investigations should be avoided. This particularly applies to Level IV evidence, that is, noncomparative investigations.

  12. Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.

    Science.gov (United States)

    Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V

    2018-04-01

    A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
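The simplest nonoverlap building block behind single-case statistics such as Tau-U compares every baseline point with every treatment point. The sketch below shows only this basic A-versus-B component, without Tau-U's baseline-trend correction; the function name and data are hypothetical.

```python
def tau_ab(baseline, treatment):
    """Basic A-vs-B nonoverlap index: every baseline point is
    paired with every treatment point; pairs in the improved
    direction count +1, deteriorated pairs -1, ties 0, and the
    sum is normalized by the number of pairs."""
    pos = neg = 0
    for a in baseline:
        for b in treatment:
            if b > a:
                pos += 1
            elif b < a:
                neg += 1
    return (pos - neg) / (len(baseline) * len(treatment))
```

Complete separation of the phases gives +1 (or -1 in the deteriorating direction), while fully overlapping phases give 0, which is one reason such indices can disagree with visual analysis when level changes are partial.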

  13. Analysis of Norwegian bio energy statistics. Quality improvement proposals; Analyse av norsk bioenergistatistikk. Forslag til kvalitetsheving

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report assesses the current model and presentation form of the bio energy statistics. It proposes revisions and enhancements of both data collection and data presentation. In the context of market developments, both for energy in general and for bio energy in particular, and of government targets, good bio energy statistics form the basis for following up the objectives and measures. (eb)

  14. Statistical ecology comes of age

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T.; Morgan, Byron J. T.; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M.; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M.; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-01-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1–4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data. PMID:25540151

  15. Statistical ecology comes of age.

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  16. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    Directory of Open Access Journals (Sweden)

    Michael Robert Cunningham

    2016-10-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger, Wood, Stiff, and Chatzisarantis, 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter, Kofler, Forster, and McCullough’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim-and-fill and funnel plot asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test (PET) and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. meta-analysis results actually indicate that there is a real depletion effect – contrary to their title.

  17. Computed statistics at streamgages, and methods for estimating low-flow frequency statistics and development of regional regression equations for estimating low-flow frequency statistics at ungaged locations in Missouri

    Science.gov (United States)

    Southard, Rodney E.

    2013-01-01

    estimates on one of these streams can be calculated at an ungaged location that has a drainage area that is between 40 percent of the drainage area of the farthest upstream streamgage and within 150 percent of the drainage area of the farthest downstream streamgage along the stream of interest. The second method may be used on any stream with a streamgage that has operated for 10 years or longer and for which anthropogenic effects have not changed the low-flow characteristics at the ungaged location since collection of the streamflow data. A ratio of drainage area of the stream at the ungaged location to the drainage area of the stream at the streamgage was computed to estimate the statistic at the ungaged location. The range of applicability is between 40- and 150-percent of the drainage area of the streamgage, and the ungaged location must be located on the same stream as the streamgage. The third method uses regional regression equations to estimate selected low-flow frequency statistics for unregulated streams in Missouri. This report presents regression equations to estimate frequency statistics for the 10-year recurrence interval and for the N-day durations of 1, 2, 3, 7, 10, 30, and 60 days. Basin and climatic characteristics were computed using geographic information system software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses based on existing digital geospatial data and previous studies. Spatial analyses for geographical bias in the predictive accuracy of the regional regression equations defined three low-flow regions with the State representing the three major physiographic provinces in Missouri. Region 1 includes the Central Lowlands, Region 2 includes the Ozark Plateaus, and Region 3 includes the Mississippi Alluvial Plain. A total of 207 streamgages were used in the regression analyses for the regional equations. Of the 207 U.S. 
Geological Survey streamgages, 77 were
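The second method described above, the drainage-area-ratio transfer with its 40- to 150-percent range of applicability, can be sketched as follows (the function name and units are hypothetical; the range check mirrors the report's stated limits):

```python
def dar_estimate(q_gage, area_gage, area_ungaged):
    """Drainage-area-ratio transfer of a low-flow statistic from a
    streamgage to an ungaged site on the same stream: scale the
    gaged statistic by the ratio of drainage areas. Applicability
    is limited to ungaged drainage areas between 40% and 150% of
    the gaged drainage area."""
    ratio = area_ungaged / area_gage
    if not 0.4 <= ratio <= 1.5:
        raise ValueError("outside the 40-150 percent range of applicability")
    return q_gage * ratio
```

For example, a site draining half the area of its upstream gage is assigned half the gaged low-flow statistic, while a site outside the ratio limits must instead be handled with the regional regression equations.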

  18. Statistical learning and prejudice.

    Science.gov (United States)

    Madison, Guy; Ullén, Fredrik

    2012-12-01

    Human behavior is guided by evolutionarily shaped brain mechanisms that make statistical predictions based on limited information. Such mechanisms are important for facilitating interpersonal relationships, avoiding dangers, and seizing opportunities in social interaction. We thus suggest that it is essential for analyses of prejudice and prejudice reduction to take the predictive accuracy and adaptivity of the studied prejudices into account.

  19. Biostatistics and epidemiology a primer for health and biomedical professionals

    CERN Document Server

    Wassertheil-Smoller, Sylvia

    2015-01-01

    Since the publication of the first edition, Biostatistics and Epidemiology has attracted loyal readers from across specialty areas in the biomedical community. Not only does this textbook teach foundations of epidemiological design and statistical methods, but it also includes topics applicable to new areas of research. Areas covered in the fourth edition include a new chapter on risk prediction, risk reclassification and evaluation of biomarkers, new material on propensity analyses, and a vastly expanded chapter on genetic epidemiology, which is particularly relevant to those who wish to understand the epidemiological and statistical aspects of scientific articles in this rapidly advancing field. Biostatistics and Epidemiology was written to be accessible for readers without backgrounds in mathematics. It provides clear explanations of underlying principles, as well as practical guidelines of "how to do it" and "how to interpret it," a philosophical explanation of the logic of science, subsections that ...

  20. The Abad composite (SE Spain): a Messinian reference section for the Mediterranean and the APTS

    NARCIS (Netherlands)

    Sierro, F.J.; Hilgen, F.J.; Krijgsman, W.; Flores, J.A.

    2000-01-01

    A high-resolution integrated stratigraphy is presented for the Abad marls of the Sorbas and Nijar basins in SE Spain (preevaporitic Messinian of the Western Mediterranean). Detailed cyclostratigraphic and biostratigraphic analyses of partially overlapping subsections were needed to overcome

  1. Experimental design techniques in statistical practice a practical software-based approach

    CERN Document Server

    Gardiner, W P

    1998-01-01

    Provides an introduction to the diverse subject area of experimental design, with many practical and applicable exercises to help the reader understand, present and analyse the data. The pragmatic approach offers technical training for use of designs and teaches statistical and non-statistical skills in design and analysis of project studies throughout science and industry. Discusses one-factor designs and blocking designs, factorial experimental designs, Taguchi methods and response surface methods, among other topics.

  2. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple-comparison test are discussed. (author)

  3. Decision-making in probability and statistics Chilean curriculum

    DEFF Research Database (Denmark)

    Elicer, Raimundo

    2018-01-01

    Probability and statistics have become prominent subjects in school mathematics curricula. As an exemplary case, I investigate the role of decision making in the justification for probability and statistics in the current Chilean upper secondary mathematics curriculum. For addressing this concern, I draw upon Fairclough’s model for Critical Discourse Analysis to analyse selected texts as examples of discourse practices. The texts are interconnected with politically driven ideas of stochastics “for all”, the notion of statistical literacy coined by statisticians’ communities, schooling...

  4. A statistical evaluation of asbestos air concentrations

    International Nuclear Information System (INIS)

    Lange, J.H.

    1999-01-01

    Both area and personal air samples collected during an asbestos abatement project were matched and statistically analysed. Among the many parameters studied were fibre concentrations and their variability. Mean values for area and personal samples were 0.005 and 0.024 f cm-3 of air, respectively. Summary values for area and personal samples suggest that exposures are low, with no single exposure value exceeding the current OSHA TWA value of 0.1 f cm-3 of air. Within- and between-worker analysis suggests that these data are homogeneous. Comparison of within- and between-worker values suggests that the exposure source and variability for abatement are more related to the process than individual practices. This supports the importance of control measures for abatement. Study results also suggest that area and personal samples are not statistically related, that is, there is no association observed for these two sampling methods when data are analysed by correlation or regression analysis. Personal samples were statistically higher in concentration than area samples. Area sampling cannot be used as a surrogate exposure for asbestos abatement workers. (author)
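The correlation analysis used to test whether matched area and personal samples are associated can be sketched with a plain Pearson correlation (the function name and data are illustrative, not the study's measurements):

```python
def pearson_r(x, y):
    """Pearson correlation between matched sample pairs, e.g.
    area and personal fibre concentrations; a value near zero
    indicates no linear association, supporting the finding that
    area samples cannot serve as a surrogate for personal
    exposure."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```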

  5. Process Flow Features as a Host-Based Event Knowledge Representation

    Science.gov (United States)

    2012-06-14

sequences of packets. The time window concept of observing past packets was used to model history or memory. Having memory of past packets can increase the...network statistics (IPv4 statistics and IPv6 statistics), and IP routing table information. 3.3.2 Data Preparation. The following subsections outline...vol. 7, no. 1, pp. 3–35, 1999. 13. R. Kemmerer and G. Vigna, “Intrusion detection: a brief history and overview,” Computer, vol. 35, no. 4, pp. 27–30

  6. Development of the Statistical Reasoning in Biology Concept Inventory (SRBCI).

    Science.gov (United States)

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2016-01-01

    We followed established best practices in concept inventory design and developed a 12-item inventory to assess student ability in statistical reasoning in biology (Statistical Reasoning in Biology Concept Inventory [SRBCI]). It is important to assess student thinking in this conceptual area, because it is a fundamental requirement of being statistically literate and associated skills are needed in almost all walks of life. Despite this, previous work shows that non-expert-like thinking in statistical reasoning is common, even after instruction. As science educators, our goal should be to move students along a novice-to-expert spectrum, which could be achieved with growing experience in statistical reasoning. We used item response theory analyses (the one-parameter Rasch model and associated analyses) to assess responses gathered from biology students in two populations at a large research university in Canada in order to test SRBCI's robustness and sensitivity in capturing useful data relating to the students' conceptual ability in statistical reasoning. Our analyses indicated that SRBCI is a unidimensional construct, with items that vary widely in difficulty and provide useful information about such student ability. SRBCI should be useful as a diagnostic tool in a variety of biology settings and as a means of measuring the success of teaching interventions designed to improve statistical reasoning skills. © 2016 T. Deane et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
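The SRBCI analysis cited above uses the one-parameter (Rasch) item response model. A minimal sketch of that model's item characteristic curve, with a deliberately crude difficulty estimate (real Rasch analyses use conditional or marginal maximum likelihood in dedicated software; the proportion-correct value below is made up):

```python
import math

def rasch_p(theta, b):
    """One-parameter (Rasch) probability that a respondent with
    ability theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def difficulty_from_p(p_correct):
    """Crude item-difficulty estimate from the observed proportion
    correct, assuming the average ability is 0. Real Rasch software
    estimates this jointly with abilities instead."""
    return -math.log(p_correct / (1.0 - p_correct))

# A hypothetical item answered correctly by 73% of respondents:
b = difficulty_from_p(0.73)
print(f"estimated difficulty = {b:.2f}")   # negative => relatively easy item
print(f"P(correct | theta=0) = {rasch_p(0.0, b):.2f}")
```

The defining Rasch property, useful for sanity checks, is that a respondent whose ability equals the item difficulty answers correctly with probability exactly one half.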

  7. Statistics and Corporate Environmental Management: Relations and Problems

    DEFF Research Database (Denmark)

    Madsen, Henning; Ulhøi, John Parm

    1997-01-01

Statistical methods have long been used to analyse the macroeconomic consequences of environmentally damaging activities, political actions to control, prevent, or reduce these damages, and environmental problems in the natural environment. Up to now, however, they have had a limited and not very specific use in corporate environmental management systems. This paper will address some of the special problems related to the use of statistical techniques in corporate environmental management systems. One important aspect of this is the interaction of internal decisions and activities with conditions in the external environment. The nature and extent of the practical use of quantitative techniques in corporate environmental management systems is discussed on the basis of a number of company surveys in four European countries.

  8. Selection, rejection and optimisation of pyrolytic graphite (PG) crystal analysers for use on the new IRIS graphite analyser bank

    International Nuclear Information System (INIS)

    Marshall, P.J.; Sivia, D.S.; Adams, M.A.; Telling, M.T.F.

    2000-01-01

This report discusses design problems incurred by equipping the IRIS high-resolution inelastic spectrometer at the ISIS pulsed neutron source, UK, with a new 4212-piece pyrolytic graphite crystal analyser array. Of the 4212 graphite pieces required, approximately 2500 will be newly purchased PG crystals, with the remainder comprising the currently installed graphite analysers. The quality of the new analyser pieces, with respect to manufacturing specifications, is assessed, as is the optimum arrangement of new PG pieces amongst old to circumvent degradation of the spectrometer's current angular resolution. Techniques employed to achieve these criteria include accurate calliper measurements, FORTRAN programming and statistical analysis. (author)

  9. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  10. Determination of the minimum size of a statistical representative volume element from a fibre-reinforced composite based on point pattern statistics

    DEFF Research Database (Denmark)

    Hansen, Jens Zangenberg; Brøndsted, Povl

    2013-01-01

In a previous study, Trias et al. [1] determined the minimum size of a statistical representative volume element (SRVE) of a unidirectional fibre-reinforced composite primarily based on numerical analyses of the stress/strain field. In continuation of this, the present study determines the minimum size of an SRVE based on a statistical analysis of the spatial statistics of the fibre packing patterns found in genuine laminates, and those generated numerically using a microstructure generator. © 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  11. Replication unreliability in psychology: elusive phenomena or elusive statistical power?

    Directory of Open Access Journals (Sweden)

    Patrizio E Tressoldi

    2012-07-01

The focus of this paper is to analyse whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Applying Null Hypothesis Statistical Testing (NHST), still the most widely used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as examples the results of meta-analyses related to four different controversial phenomena (subliminal semantic priming, the incubation effect for problem solving, unconscious thought theory, and non-local perception), it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size of the typical study is low or very low. The low power in most studies undermines the use of NHST to study phenomena with moderate or low effect sizes. We conclude by providing some suggestions on how to increase statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or refute the reality of a phenomenon with a small effect size.
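The power calculations this record is concerned with can be approximated in a few lines. The sketch below uses the standard normal approximation to a two-sided, two-sample t-test at alpha = 0.05; the effect size and sample sizes are the textbook illustration, not values from the paper's meta-analyses:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(d, n_per_group):
    """Approximate power of a two-sided two-sample test (alpha = 0.05)
    to detect standardized effect size d with n subjects per group,
    using the normal approximation to the t-test."""
    z_crit = 1.959963984540054          # Phi^{-1}(0.975)
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality of the test statistic
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# A "medium" effect (d = 0.5) needs roughly 64 subjects per group
# to reach the conventional 80% power:
print(f"power = {power_two_sample(0.5, 64):.2f}")
```

Running the same function with a small effect size (d = 0.2) and typical sample sizes shows exactly the problem the abstract describes: power well below 50%.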

  12. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial

  13. Research design and statistical methods in Indian medical journals: a retrospective survey.

    Science.gov (United States)

    Hassan, Shabbeer; Yellur, Rajashree; Subramani, Pooventhan; Adiga, Poornima; Gokhale, Manoj; Iyer, Manasa S; Mayya, Shreemathi S

    2015-01-01

Good quality medical research generally requires not only expertise in the chosen medical field of interest but also a sound knowledge of statistical methodology. The number of medical research articles published in Indian medical journals has increased quite substantially in the past decade. The aim of this study was to collate all evidence on study design quality and statistical analyses used in selected leading Indian medical journals. Ten leading Indian medical journals were selected based on impact factors, and all original research articles published in 2003 (N = 588) and 2013 (N = 774) were categorized and reviewed. A validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation of the articles. Main outcomes considered in the present study were study design types and their frequencies, the proportion of errors/defects in study design, statistical analyses, and implementation of the CONSORT checklist in RCTs (randomized clinical trials). From 2003 to 2013, the proportion of erroneous statistical analyses did not decrease (χ2 = 0.592, Φ = 0.027, p = 0.4418): 25% (80/320) in 2003 compared to 22.6% (111/490) in 2013. Compared with 2003, significant improvement was seen in 2013: the proportion of papers using statistical tests increased significantly (χ2 = 26.96, Φ = 0.16), and the proportion of errors/defects in study design decreased significantly (χ2 = 16.783, Φ = 0.12). The proportion of randomized clinical trial designs has remained very low (7.3%, 43/588), with the majority showing some errors (41 papers, 95.3%). The majority of the published studies were retrospective in nature both in 2003 [79.1% (465/588)] and in 2013 [78.2% (605/774)]. Major decreases in error proportions were observed in results presentation (χ2 = 24.477, Φ = 0.17). Indian medical research seems to have made no major progress regarding the use of correct statistical analyses, but errors/defects in study designs have decreased significantly. Randomized clinical trials are quite rarely published and have a high proportion of
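The χ2 and Φ statistics this record reports for the comparison of erroneous analyses (80 of 320 papers in 2003 vs. 111 of 490 in 2013) can be reproduced directly from the counts, as a sketch of the underlying 2x2 test:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared (without continuity correction) and the phi
    coefficient for a 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))
    return chi2, math.sqrt(chi2 / n)

# Erroneous analyses: 80 of 320 papers (2003) vs 111 of 490 papers (2013).
chi2, phi = chi2_2x2(80, 320 - 80, 111, 490 - 111)
print(f"chi2 = {chi2:.3f}, phi = {phi:.3f}")  # matches the reported 0.592 / 0.027
```

The tiny Φ value makes the abstract's point numerically: the 2003-to-2013 change in the error proportion is negligible.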

  14. Wind Statistics from a Forested Landscape

    DEFF Research Database (Denmark)

    Arnqvist, Johan; Segalini, Antonio; Dellwik, Ebba

    2015-01-01

An analysis and interpretation of measurements from a 138-m tall tower located in a forested landscape is presented. Measurement errors and statistical uncertainties are carefully evaluated to ensure high data quality. A 40° wide wind-direction sector is selected as the most representative for large-scale forest conditions, and from that sector first-, second- and third-order statistics, as well as analyses regarding the characteristic length scale, the flux-profile relationship and surface roughness are presented for a wide range of stability conditions. The results are discussed...

  15. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu

    2014-01-15

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.

  16. Simultaneous assessment of phase chemistry, phase abundance and bulk chemistry with statistical electron probe micro-analyses: Application to cement clinkers

    International Nuclear Information System (INIS)

    Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef

    2014-01-01

    According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers

  17. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Rochelle E. Tractenberg

    2016-12-01

Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards “big” data: while automated analyses can exploit massive amounts of data, the interpretation—and possibly more importantly, the replication—of results are challenging without adequate statistical literacy. The second trend is that science and scientific publishing are struggling with insufficient/inappropriate statistical reasoning in writing, reviewing, and editing. This paper describes a model for statistical literacy (SL) and its development that can support modern scientific practice. An established curriculum development and evaluation tool—the Mastery Rubric—is integrated with a new, developmental, model of statistical literacy that reflects the complexity of reasoning and habits of mind that scientists need to cultivate in order to recognize, choose, and interpret statistical methods. This developmental model provides actionable evidence, and explicit opportunities for consequential assessment that serves students, instructors, developers/reviewers/accreditors of a curriculum, and institutions. By supporting the enrichment, rather than increasing the amount, of statistical training in the basic and life sciences, this approach supports curriculum development, evaluation, and delivery to promote statistical literacy for students and a collective quantitative proficiency more broadly.

  18. Sunspot activity and influenza pandemics: a statistical assessment of the purported association.

    Science.gov (United States)

    Towers, S

    2017-10-01

Since 1978, a series of papers in the literature have claimed to find a significant association between sunspot activity and the timing of influenza pandemics. This paper examines these analyses, and attempts to recreate the three most recent statistical analyses by Ertel (1994), Tapping et al. (2001), and Yeung (2006), which all have purported to find a significant relationship between sunspot numbers and pandemic influenza. As will be discussed, each analysis had errors in the data. In addition, in each analysis arbitrary selections or assumptions were also made, and the authors did not assess the robustness of their analyses to changes in those arbitrary assumptions. Varying the arbitrary assumptions to other, equally valid, assumptions negates the claims of significance. Indeed, an arbitrary selection made in one of the analyses appears to have resulted in almost maximal apparent significance; changing it only slightly yields a null result. This analysis applies statistically rigorous methodology to examine the purported sunspot/pandemic link, using more statistically powerful un-binned analysis methods, rather than relying on arbitrarily binned data. The analyses are repeated using both the Wolf and Group sunspot numbers. In all cases, no statistically significant evidence of any association was found. However, while the focus of this particular analysis was on the purported relationship of influenza pandemics to sunspot activity, the faults found in the past analyses are common pitfalls: inattention to analysis reproducibility and robustness assessment are common problems in the sciences that are unfortunately not noted often enough in review.
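One assumption-light way to test an association like this, in the spirit of the un-binned methods the record advocates, is a permutation test: no arbitrary binning, and the null distribution comes from reshuffling the data itself. The sketch below uses made-up sunspot numbers, not the Wolf or Group series:

```python
import random

def perm_test_mean_diff(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in means between
    samples x and y; returns the permutation p-value."""
    rng = random.Random(seed)
    pooled = x + y
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        px, py = pooled[:len(x)], pooled[len(x):]
        if abs(sum(px) / len(px) - sum(py) / len(py)) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical sunspot numbers in "pandemic" vs. other years (made up):
pandemic = [60.0, 45.0, 80.0, 30.0]
other = [55.0, 70.0, 20.0, 90.0, 40.0, 65.0, 35.0, 75.0]
print(f"p = {perm_test_mean_diff(pandemic, other):.3f}")
```

Because the only modelling choice is the test statistic itself, robustness checks reduce to rerunning the same function under alternative, equally valid data selections, which is exactly the exercise the paper says the earlier analyses skipped.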

  19. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    DEFF Research Database (Denmark)

    Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona

    2015-01-01

Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total 17 tonnes of waste were sorted into 10-50 waste fractions, organised according to a three-level (tiered) approach facilitating comparison of the waste data between individual sub-areas with different fractionation (waste

  20. New applications of statistical tools in plant pathology.

    Science.gov (United States)

    Garrett, K A; Madden, L V; Hughes, G; Pfender, W F

    2004-09-01

The series of papers introduced by this one addresses a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to appropriately deal with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and discrete data with generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication, the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.

  1. Statistical aspects of the cleanup of Enewetak Atoll

    International Nuclear Information System (INIS)

    Giacomini, J.J.; Miller, F.L. Jr.

    1981-01-01

    The Desert Research Institute participated in the Enewetak Atoll Radiological Cleanup by providing data-base management and statistical analysis support for the Department of Energy team. The data-base management responsibilities included both design and implementation of a system for recording (in machine-retrievable form) all radiological measurements made during the cleanup, excluding personnel dosimetry. Statistical analyses were performed throughout the cleanup and were used to guide excavation activities

  2. Robust statistics for deterministic and stochastic gravitational waves in non-Gaussian noise. II. Bayesian analyses

    International Nuclear Information System (INIS)

    Allen, Bruce; Creighton, Jolien D.E.; Flanagan, Eanna E.; Romano, Joseph D.

    2003-01-01

    In a previous paper (paper I), we derived a set of near-optimal signal detection techniques for gravitational wave detectors whose noise probability distributions contain non-Gaussian tails. The methods modify standard methods by truncating or clipping sample values which lie in those non-Gaussian tails. The methods were derived, in the frequentist framework, by minimizing false alarm probabilities at fixed false detection probability in the limit of weak signals. For stochastic signals, the resulting statistic consisted of a sum of an autocorrelation term and a cross-correlation term; it was necessary to discard 'by hand' the autocorrelation term in order to arrive at the correct, generalized cross-correlation statistic. In the present paper, we present an alternative derivation of the same signal detection techniques from within the Bayesian framework. We compute, for both deterministic and stochastic signals, the probability that a signal is present in the data, in the limit where the signal-to-noise ratio squared per frequency bin is small, where the signal is nevertheless strong enough to be detected (integrated signal-to-noise ratio large compared to 1), and where the total probability in the non-Gaussian tail part of the noise distribution is small. We show that, for each model considered, the resulting probability is to a good approximation a monotonic function of the detection statistic derived in paper I. Moreover, for stochastic signals, the new Bayesian derivation automatically eliminates the problematic autocorrelation term

  3. A statistical evaluation of asbestos air concentrations

    Energy Technology Data Exchange (ETDEWEB)

    Lange, J.H. [Envirosafe Training and Consultants, Pittsburgh, PA (United States)

    1999-07-01

Both area and personal air samples collected during an asbestos abatement project were matched and statistically analysed. Among the many parameters studied were fibre concentrations and their variability. Mean values for area and personal samples were 0.005 and 0.024 f cm⁻³ of air, respectively. Summary values for area and personal samples suggest that exposures are low, with no single exposure value exceeding the current OSHA TWA value of 0.1 f cm⁻³ of air. Within- and between-worker analysis suggests that these data are homogeneous. Comparison of within- and between-worker values suggests that the exposure source and variability for abatement are more related to the process than individual practices. This supports the importance of control measures for abatement. Study results also suggest that area and personal samples are not statistically related, that is, there is no association observed for these two sampling methods when data are analysed by correlation or regression analysis. Personal samples were statistically higher in concentration than area samples. Area sampling cannot be used as a surrogate exposure for asbestos abatement workers. (author)

  4. Response surface use in safety analyses

    International Nuclear Information System (INIS)

    Prosek, A.

    1999-01-01

When thousands of complex computer code runs related to nuclear safety are needed for statistical analysis, a response surface is used to replace the computer code. The main purpose of the study was to develop and demonstrate a tool called the optimal statistical estimator (OSE), intended for response surface generation of complex and non-linear phenomena. The performance of the optimal statistical estimator was tested against the results of 59 different RELAP5/MOD3.2 code calculations of a small-break loss-of-coolant accident in a two-loop pressurized water reactor. The results showed that the OSE adequately predicted the response surface for the peak cladding temperature. Some good characteristics of the OSE, such as monotonic behaviour between two neighbouring points and independence of the number of output parameters, suggest that the OSE can be used for response surface generation of any safety or system parameter in thermal-hydraulic safety analyses. (author)
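The OSE itself is not spelled out in this record, but the general idea of a response surface (fit a cheap surrogate to a handful of expensive code runs, then evaluate the surrogate instead of the code) can be sketched with an ordinary least-squares quadratic fit. The data below are invented stand-ins for code outputs, not RELAP5 results:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 by solving the 3x3
    normal equations with Gaussian elimination."""
    # Build X^T X and X^T y for the design matrix columns [1, x, x^2].
    pw = [sum(x ** k for x in xs) for k in range(5)]
    A = [[pw[i + j] for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    coeff = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeff[r] = (b[r] - sum(A[r][c] * coeff[c] for c in range(r + 1, 3))) / A[r][r]
    return coeff

# Pretend these are outputs (e.g. peak cladding temperature) from a few code runs:
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0 + 0.5 * x - 0.1 * x * x for x in xs]   # noiseless, for the demo
print(fit_quadratic(xs, ys))  # ~[2.0, 0.5, -0.1]
```

Once fitted, the polynomial can be evaluated millions of times for a statistical (e.g. Monte Carlo) analysis at negligible cost, which is the point of replacing the code with a surface.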

  5. Statistical Analysis Of Reconnaissance Geochemical Data From ...

    African Journals Online (AJOL)

    , Co, Mo, Hg, Sb, Tl, Sc, Cr, Ni, La, W, V, U, Th, Bi, Sr and Ga in 56 stream sediment samples collected from Orle drainage system were subjected to univariate and multivariate statistical analyses. The univariate methods used include ...

  6. Economic Analyses of Ware Yam Production in Orlu Agricultural ...

    African Journals Online (AJOL)

    Economic Analyses of Ware Yam Production in Orlu Agricultural Zone of Imo State. ... International Journal of Agriculture and Rural Development ... statistics, gross margin analysis, marginal analysis and multiple regression analysis. Results ...

  7. 77 FR 2345 - Advisory Council on Transportation Statistics; Request for Nominations

    Science.gov (United States)

    2012-01-17

    ... and Innovative Technology Administration (RITA), Bureau of Transportation Statistics (BTS), DOT... BTS solicits nominations for interested persons to serve on the ACTS. The ACTS is composed of BTS... transportation statistics and analyses collected, supported, or disseminated by the BTS and DOT. DATES...

  8. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

Crowded transportation hubs such as metro stations are thought of as ideal places for the development and spread of epidemics. However, owing to their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such settings, and disease-spreading dynamics in them were less explored in previous studies. Given the heterogeneity and dynamic nature of human interactions, a growing number of studies have demonstrated the importance of contact distance and length of contact in transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd-simulation data. Specifically, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, a Weibull distribution fitted the histogram values of individual-based exposure in each case very well. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd densities of the environments. The results obtained in this paper can serve as a reference for epidemic studies in complex and confined transportation hubs and help refine existing disease-spreading models.
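The reported linear relationship between contact rate and average crowd density is an ordinary least-squares fit at heart. A minimal sketch, with invented (density, contact-rate) pairs standing in for the simulation outputs:

```python
def linear_fit(x, y):
    """Ordinary least-squares intercept a and slope b for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical (density, mean contact rate) pairs from simulation runs:
density  = [0.5, 1.0, 1.5, 2.0, 2.5]    # persons / m^2
contacts = [2.1, 4.0, 5.9, 8.1, 9.9]    # contacts / minute
a, b = linear_fit(density, contacts)
print(f"contact_rate ~ {a:.2f} + {b:.2f} * density")
```

With a relationship like this in hand, an epidemic model can translate a predicted crowd density directly into an expected contact rate, which is how such fits feed disease-spreading models.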

  9. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
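
The contrast between the deterministic (Taylor series) and statistical (simulation) approaches to uncertainty analysis can be illustrated on a toy model. The function and input moments below are invented for this sketch and merely stand in for a real performance assessment model:

```python
import numpy as np

# Toy stand-in for a performance model: y = k * grad^2
def model(k, grad):
    return k * grad**2

mu_k, mu_g = 2.0, 3.0      # input means (assumed)
sd_k, sd_g = 0.1, 0.2      # input standard deviations (assumed, independent inputs)

# Deterministic approach: first-order Taylor propagation via partial derivatives
df_dk = mu_g**2            # dy/dk evaluated at the mean
df_dg = 2 * mu_k * mu_g    # dy/dgrad evaluated at the mean
var_taylor = (df_dk * sd_k)**2 + (df_dg * sd_g)**2

# Statistical approach: Monte Carlo simulation through the same model
rng = np.random.default_rng(0)
samples = model(rng.normal(mu_k, sd_k, 100_000), rng.normal(mu_g, sd_g, 100_000))
print(f"Taylor variance={var_taylor:.3f}, Monte Carlo variance={samples.var():.3f}")
```

For this mildly nonlinear model the two estimates agree closely; stronger nonlinearity widens the gap, which is one reason the two families of methods have complementary strengths.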

  10. Categorization of the trophic status of a hydroelectric power plant reservoir in the Brazilian Amazon by statistical analyses and fuzzy approaches.

    Science.gov (United States)

    da Costa Lobato, Tarcísio; Hauser-Davis, Rachel Ann; de Oliveira, Terezinha Ferreira; Maciel, Marinalva Cardoso; Tavares, Maria Regina Madruga; da Silveira, Antônio Morais; Saraiva, Augusto Cesar Fonseca

    2015-02-15

    The Amazon area has been increasingly suffering from anthropogenic impacts, especially due to the construction of hydroelectric power plant reservoirs. The analysis and categorization of the trophic status of these reservoirs are useful indicators of man-made changes in the environment. In this context, the present study aimed to categorize the trophic status of a hydroelectric power plant reservoir located in the Brazilian Amazon by constructing a novel Water Quality Index (WQI) and Trophic State Index (TSI) for the reservoir using major ion concentrations and physico-chemical water parameters determined in the area, taking into account the sampling locations and the local hydrological regimes. After applying statistical analyses (factor analysis and cluster analysis) to these indicators and establishing a fuzzy-system rule base, the results obtained by the proposed method were compared to the generally applied Carlson TSI and to a modified Lamparelli TSI specific for tropical regions. The categorization of the trophic status by the proposed fuzzy method was shown to be more reliable, since it takes into account the specificities of the study area, while the Carlson and Lamparelli TSI do not and thus tend to over- or underestimate the trophic status of these ecosystems. The statistical techniques proposed and applied in the present study are therefore relevant to environmental management and policy decision-making processes, aiding in the identification of the ecological status of water bodies. With this, it is possible to identify which factors should be further investigated and/or adjusted in order to attempt the recovery of degraded water bodies. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    Science.gov (United States)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards T2 test). Randomization tests can be used where distributional assumptions such as multivariate normality underlying parametric tests are doubtful. Compositional data may be transformed and scaled before being subjected to multivariate statistical procedures including calculation of distance matrices, hierarchical cluster analysis, and PCA. Such transformations may make the assumption of multivariate normality more appropriate. A sequential procedure using Mahalanobis distance and the Hotelling two-sample T2 test is illustrated using glass major element data from trachytic to phonolitic Kenyan tephras. All these methods require a broad range of high-quality compositional data which can be used to compare 'unknowns' with reference (training) sets that are sufficiently complete to account for all possible correlatives, including tephras with heterogeneous glasses that contain multiple compositional groups. Currently, incomplete databases are tending to limit correlation efficacy. The development of an open, online global database to facilitate progress towards integrated, high
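
The Mahalanobis-distance comparison of an "unknown" shard against a reference (training) set, as discussed above, can be sketched as follows. The compositions and covariance below are invented for illustration and are not real tephra data:

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(1)
# Hypothetical reference (training) set: SiO2 and K2O wt% for shards of a known tephra
ref = rng.multivariate_normal([62.0, 4.5], [[1.0, 0.3], [0.3, 0.2]], size=50)

mean = ref.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))

unknown = np.array([62.4, 4.6])          # composition of an 'unknown' shard
d = mahalanobis(unknown, mean, cov_inv)

# For large reference sets, the squared distance is approximately chi-square
# with p degrees of freedom, giving a rough compatibility p-value
p_val = stats.chi2.sf(d**2, df=2)
print(f"D={d:.2f}, p={p_val:.3f}")
```

A small p-value would argue against correlating the unknown shard with this reference tephra; the Hotelling two-sample T2 test extends the same idea to comparing two whole groups of shards.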

  12. Statistical analysis on hollow and core-shell structured vanadium oxide microspheres as cathode materials for Lithium ion batteries

    Directory of Open Access Journals (Sweden)

    Xing Liang

    2018-06-01

    Full Text Available In this data article, the statistical analyses of vanadium oxide microsphere cathode materials are presented for the research article entitled “Statistical analyses on hollow and core-shell structured vanadium oxides microspheres as cathode materials for Lithium ion batteries” (Liang et al., 2017 [1]). This article shows the statistical analyses of the N2 adsorption-desorption isotherms and the morphology of vanadium oxide microspheres as cathode materials for LIBs. Keywords: Adsorption-desorption isotherm, Pore size distribution, SEM images, TEM images

  13. Cross section analyses in MiniBooNE and SciBooNE experiments

    OpenAIRE

    Katori, Teppei

    2013-01-01

    The MiniBooNE experiment (2002-2012) and the SciBooNE experiment (2007-2008) are modern high-statistics neutrino experiments, and they developed many new ideas in neutrino cross section analyses. In this note, I discuss selected topics of these analyses.

  14. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting of the research findings. Statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.
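
The parametric/non-parametric distinction mentioned above can be made concrete with a short SciPy example. The two groups below are invented data; a t-test assumes approximate normality, while its Mann-Whitney counterpart does not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Two hypothetical study groups (all numbers invented for illustration)
control = rng.normal(120, 15, size=40)   # e.g. systolic blood pressure, control
treated = rng.normal(135, 15, size=40)   # e.g. systolic blood pressure, treated

# Descriptive statistics: measures of central tendency and dispersion
print(f"control: mean={control.mean():.1f}, median={np.median(control):.1f}, "
      f"SD={control.std(ddof=1):.1f}")

# Parametric test (assumes approximately normal data)
t_stat, t_p = stats.ttest_ind(control, treated)

# Non-parametric counterpart (no normality assumption)
u_stat, u_p = stats.mannwhitneyu(control, treated)
print(f"t-test p={t_p:.4f}, Mann-Whitney p={u_p:.4f}")
```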

  15. Fulltext PDF

    Indian Academy of Sciences (India)

    the colors or words must fit together in a harmonious way. Beauty is the first test, there is no permanent place for ugly mathematics". So, beauty is inherent in ... and fun of creative learning in mathematics. This CD named 'T A TVA' has 48 softwares, arranged under different subsections such as graphs, symmetry, statistics ...

  16. Use Of R in Statistics Lithuania

    Directory of Open Access Journals (Sweden)

    Tomas Rudys

    2016-06-01

    Full Text Available Recently, R has become more and more popular among official statistics offices. It can be used not only for research purposes but also for the production of official statistics. Statistics Lithuania recently started an analysis of where R can be used and whether it could replace other statistical programming languages or systems. For this purpose a working group was established. In this paper we present an overview of the current situation regarding the implementation of R at Statistics Lithuania, some problems we are dealing with, and some future plans. At present, R is used mainly for research purposes. Looking forward, a short course on basic R was prepared, and at the moment we are starting to use R for data analysis, data manipulation from Oracle databases, preparation of some reports, data editing, and survey estimation. On the other hand, we have encountered some problems when working with big data sets, and also with survey sampling, as there are surveys with complex sampling designs. We are also analysing the running of R on our servers in order to be able to use more random access memory (RAM). Despite these problems, we are trying to use R in more fields in the production of official statistics.

  17. Descriptive and inferential statistical methods used in burns research.

    Science.gov (United States)

    Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars

    2010-05-01

    Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-squared test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals.

  18. Statistics for X-chromosome associations.

    Science.gov (United States)

    Özbek, Umut; Lin, Hui-Min; Lin, Yan; Weeks, Daniel E; Chen, Wei; Shaffer, John R; Purcell, Shaun M; Feingold, Eleanor

    2018-06-13

    In a genome-wide association study (GWAS), association between genotype and phenotype at autosomal loci is generally tested by regression models. However, X-chromosome data are often excluded from published analyses of autosomes because of the difference between males and females in number of X chromosomes. Failure to analyze X-chromosome data at all is obviously less than ideal, and can lead to missed discoveries. Even when X-chromosome data are included, they are often analyzed with suboptimal statistics. Several mathematically sensible statistics for X-chromosome association have been proposed. The optimality of these statistics, however, is based on very specific simple genetic models. In addition, while previous simulation studies of these statistics have been informative, they have focused on single-marker tests and have not considered the types of error that occur even under the null hypothesis when the entire X chromosome is scanned. In this study, we comprehensively tested several X-chromosome association statistics using simulation studies that include the entire chromosome. We also considered a wide range of trait models for sex differences and phenotypic effects of X inactivation. We found that models that do not incorporate a sex effect can have large type I error in some cases. We also found that many of the best statistics perform well even when there are modest deviations, such as trait variance differences between the sexes or small sex differences in allele frequencies, from assumptions. © 2018 WILEY PERIODICALS, INC.

  19. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2 parameter Weibull and 3-parameter Weibull...... fits to the data available, especially if tail fits are used whereas the Log Normal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors...... for timber are investigated....
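
The distribution comparison described above can be sketched with SciPy, fitting Normal, Lognormal, and Weibull models to the same sample and reporting the 5th-percentile (characteristic) value that matters for structural reliability. The strength data below are synthetic, not the timber data of the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic bending-strength sample in MPa; a real analysis would use test data
strengths = stats.weibull_min.rvs(c=4.0, scale=40.0, size=300, random_state=rng)

results = {}
for name, dist in [("normal", stats.norm), ("lognormal", stats.lognorm),
                   ("weibull", stats.weibull_min)]:
    params = dist.fit(strengths)
    ks = stats.kstest(strengths, dist.name, args=params)
    # 5th-percentile (characteristic) value, the tail quantity of structural interest
    x05 = dist.ppf(0.05, *params)
    results[name] = (ks.pvalue, x05)
    print(f"{name:9s} KS p={ks.pvalue:.3f}  5th percentile={x05:.1f} MPa")
```

Comparing the fitted 5th percentiles (rather than only overall goodness of fit) reflects the point of the record: tail behaviour, not the body of the distribution, drives reliability levels and partial safety factors.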

  20. Statistical inference for the lifetime performance index based on generalised order statistics from exponential distribution

    Science.gov (United States)

    Vali Ahmadi, Mohammad; Doostparast, Mahdi; Ahmadi, Jafar

    2015-04-01

    In manufacturing industries, the lifetime of an item is usually characterised by a random variable X and considered to be satisfactory if X exceeds a given lower lifetime limit L. The probability of a satisfactory item is then ηL := P(X ≥ L), called the conforming rate. In industrial companies, however, the lifetime performance index, proposed by Montgomery and denoted by CL, is widely used as a process capability index instead of the conforming rate. Assuming a parametric model for the random variable X, we show that there is a connection between the conforming rate and the lifetime performance index. Consequently, the statistical inferences about ηL and CL are equivalent. Hence, we restrict ourselves to statistical inference for CL based on generalised order statistics, which contain several ordered data models such as usual order statistics, progressively Type-II censored data and records. Various point and interval estimators for the parameter CL are obtained and optimal critical regions for the hypothesis testing problems concerning CL are proposed. Finally, two real data-sets on the lifetimes of insulating fluid and ball bearings, due to Nelson (1982) and Caroni (2002), respectively, and a simulated sample are analysed.
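
For the exponential case, the connection between the conforming rate and the index can be written out explicitly. This is a worked numerical illustration of the standard relation for exponential lifetimes (the mean lifetime and limit below are invented numbers):

```python
import math

# For an exponential lifetime with mean theta, Montgomery's index reduces to
# C_L = (mu - L) / sigma = 1 - L/theta (mean and sd of an exponential coincide),
# so the conforming rate is eta_L = P(X >= L) = exp(-L/theta) = exp(C_L - 1):
# a one-to-one correspondence, which is why inference on one is inference on the other.

theta, L = 5.0, 1.0                  # hypothetical mean lifetime and lower limit
C_L = 1 - L / theta
eta = math.exp(C_L - 1)

assert abs(eta - math.exp(-L / theta)) < 1e-12   # the two routes agree
print(f"C_L={C_L:.2f}, conforming rate={eta:.3f}")
```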

  1. Using DEWIS and R for Multi-Staged Statistics e-Assessments

    Science.gov (United States)

    Gwynllyw, D. Rhys; Weir, Iain S.; Henderson, Karen L.

    2016-01-01

    We demonstrate how the DEWIS e-Assessment system may use embedded R code to facilitate the assessment of students' ability to perform involved statistical analyses. The R code has been written to emulate SPSS output and thus the statistical results for each bespoke data set can be generated efficiently and accurately using standard R routines.…

  2. A proposal for the measurement of graphical statistics effectiveness: Does it enhance or interfere with statistical reasoning?

    International Nuclear Information System (INIS)

    Agus, M; Penna, M P; Peró-Cebollero, M; Guàrdia-Olmos, J

    2015-01-01

    Numerous studies have examined students' difficulties in understanding notions related to statistical problems. Some authors observed that the presentation of distinct visual representations could improve statistical reasoning, supporting the principle of graphical facilitation. But other researchers disagree with this viewpoint, emphasising the impediments related to the use of illustrations that could overload the cognitive system with irrelevant data. In this work we aim at comparing probabilistic statistical reasoning across two different formats of problem presentation: graphical and verbal-numerical. We conceived and presented five pairs of homologous simple problems in verbal-numerical and graphical formats to 311 undergraduate Psychology students (n=156 in Italy and n=155 in Spain) without statistical expertise. The purpose of our work was to evaluate the effect of graphical facilitation on probabilistic statistical reasoning. Each undergraduate solved each pair of problems in the two formats, in different problem presentation orders and sequences. Data analyses highlighted that the effect of graphical facilitation is infrequent in psychology undergraduates. This effect is related to many factors (such as knowledge, abilities, attitudes, and anxiety); moreover, it might be considered the result of interaction between individual and task characteristics.

  3. Direct Learning of Systematics-Aware Summary Statistics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Complex machine learning tools, such as deep neural networks and gradient boosting algorithms, are increasingly being used to construct powerful discriminative features for High Energy Physics analyses. These methods are typically trained with simulated or auxiliary data samples by optimising some classification or regression surrogate objective. The learned feature representations are then used to build a sample-based statistical model to perform inference (e.g. interval estimation or hypothesis testing) over a set of parameters of interest. However, the effectiveness of the mentioned approach can be reduced by the presence of known uncertainties that cause differences between training and experimental data, included in the statistical model via nuisance parameters. This work presents an end-to-end algorithm, which leverages on existing deep learning technologies but directly aims to produce inference-optimal sample-summary statistics. By including the statistical model and a differentiable approximation of ...

  4. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrast. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined to a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment and the methods are compared to conjunction analysis approaches for test-statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test-statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
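
Under an independence assumption across maps, the propositional-logic combinations of posterior probability maps reduce to simple voxelwise arithmetic. This sketch uses random toy maps (not fMRI data) and assumes independence, which the paper's full method does not require:

```python
import numpy as np

rng = np.random.default_rng(5)
# Two hypothetical posterior probability maps (one probability per voxel),
# assumed independent across sessions for this sketch
ppm1 = rng.uniform(0, 1, size=(4, 4))
ppm2 = rng.uniform(0, 1, size=(4, 4))

conjunction = ppm1 * ppm2                    # P(A and B)
disjunction = ppm1 + ppm2 - ppm1 * ppm2      # P(A or B)
implication = 1 - ppm1 + ppm1 * ppm2         # P(A -> B) = P(not A) + P(A and B)

print("voxels with P(A and B) > 0.5:", int((conjunction > 0.5).sum()))
```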

  5. Statistical learning in high energy and astrophysics

    International Nuclear Information System (INIS)

    Zimmermann, J.

    2005-01-01

    This thesis studies the performance of statistical learning methods in high energy and astrophysics where they have become a standard tool in physics analysis. They are used to perform complex classification or regression by intelligent pattern recognition. This kind of artificial intelligence is achieved by the principle ''learning from examples'': The examples describe the relationship between detector events and their classification. The application of statistical learning methods is either motivated by the lack of knowledge about this relationship or by tight time restrictions. In the first case learning from examples is the only possibility since no theory is available which would allow to build an algorithm in the classical way. In the second case a classical algorithm exists but is too slow to cope with the time restrictions. It is therefore replaced by a pattern recognition machine which implements a fast statistical learning method. But even in applications where some kind of classical algorithm had done a good job, statistical learning methods convinced by their remarkable performance. This thesis gives an introduction to statistical learning methods and how they are applied correctly in physics analysis. Their flexibility and high performance will be discussed by showing intriguing results from high energy and astrophysics. These include the development of highly efficient triggers, powerful purification of event samples and exact reconstruction of hidden event parameters. The presented studies also show typical problems in the application of statistical learning methods. They should be only second choice in all cases where an algorithm based on prior knowledge exists. Some examples in physics analyses are found where these methods are not used in the right way leading either to wrong predictions or bad performance. Physicists also often hesitate to profit from these methods because they fear that statistical learning methods cannot be controlled in a

  6. Statistical learning in high energy and astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J.

    2005-06-16

    This thesis studies the performance of statistical learning methods in high energy and astrophysics where they have become a standard tool in physics analysis. They are used to perform complex classification or regression by intelligent pattern recognition. This kind of artificial intelligence is achieved by the principle ''learning from examples'': The examples describe the relationship between detector events and their classification. The application of statistical learning methods is either motivated by the lack of knowledge about this relationship or by tight time restrictions. In the first case learning from examples is the only possibility since no theory is available which would allow to build an algorithm in the classical way. In the second case a classical algorithm exists but is too slow to cope with the time restrictions. It is therefore replaced by a pattern recognition machine which implements a fast statistical learning method. But even in applications where some kind of classical algorithm had done a good job, statistical learning methods convinced by their remarkable performance. This thesis gives an introduction to statistical learning methods and how they are applied correctly in physics analysis. Their flexibility and high performance will be discussed by showing intriguing results from high energy and astrophysics. These include the development of highly efficient triggers, powerful purification of event samples and exact reconstruction of hidden event parameters. The presented studies also show typical problems in the application of statistical learning methods. They should be only second choice in all cases where an algorithm based on prior knowledge exists. Some examples in physics analyses are found where these methods are not used in the right way leading either to wrong predictions or bad performance. Physicists also often hesitate to profit from these methods because they fear that statistical learning methods cannot

  7. Homeopathy: meta-analyses of pooled clinical data.

    Science.gov (United States)

    Hahn, Robert G

    2013-01-01

    In the first decade of the evidence-based era, which began in the mid-1990s, meta-analyses were used to scrutinize homeopathy for evidence of beneficial effects in medical conditions. In this review, meta-analyses including pooled data from placebo-controlled clinical trials of homeopathy and the aftermath in the form of debate articles were analyzed. In 1997 Klaus Linde and co-workers identified 89 clinical trials that showed an overall odds ratio of 2.45 in favor of homeopathy over placebo. There was a trend toward smaller benefit from studies of the highest quality, but the 10 trials with the highest Jadad score still showed homeopathy had a statistically significant effect. These results challenged academics to perform alternative analyses that, to demonstrate the lack of effect, relied on extensive exclusion of studies, often to the degree that conclusions were based on only 5-10% of the material, or on virtual data. The ultimate argument against homeopathy is the 'funnel plot' published by Aijing Shang's research group in 2005. However, the funnel plot is flawed when applied to a mixture of diseases, because studies with expected strong treatment effects are, for ethical reasons, powered lower than studies with expected weak or unclear treatment effects. To conclude that homeopathy lacks clinical effect, more than 90% of the available clinical trials had to be disregarded. Alternatively, flawed statistical methods had to be applied. Future meta-analyses should focus on the use of homeopathy in specific diseases or groups of diseases instead of pooling data from all clinical trials. © 2013 S. Karger GmbH, Freiburg.

  8. Systematic reviews of anesthesiologic interventions reported as statistically significant

    DEFF Research Database (Denmark)

    Imberger, Georgina; Gluud, Christian; Boylan, John

    2015-01-01

    statistically significant meta-analyses of anesthesiologic interventions, we used TSA to estimate power and imprecision in the context of sparse data and repeated updates. METHODS: We conducted a search to identify all systematic reviews with meta-analyses that investigated an intervention that may......: From 11,870 titles, we found 682 systematic reviews that investigated anesthesiologic interventions. In the 50 sampled meta-analyses, the median number of trials included was 8 (interquartile range [IQR], 5-14), the median number of participants was 964 (IQR, 523-1736), and the median number...

  9. Analysis and Calculation of the Fluid Flow and the Temperature Field by Finite Element Modeling

    Science.gov (United States)

    Dhamodaran, M.; Jegadeesan, S.; Kumar, R. Praveen

    2018-04-01

    This paper presents a fundamental and accurate approach to the numerical analysis of fluid flow and heat transfer inside a channel. In this study, the Finite Element Method is used to analyze the channel, which is divided into small subsections. The small subsections are discretized using a higher number of domain elements and the corresponding number of nodes. MATLAB codes were developed for use in the analysis. Simulation results showed that the fluid flow and temperature analyses are influenced significantly by changes in the entrance velocity. Also, there is an apparent effect on the temperature field due to the presence of an energy source in the middle of the domain. In this paper, the characteristics of flow analysis and heat analysis in a channel have been investigated.
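
The assemble-and-solve pattern behind such finite element analyses can be shown on the simplest relevant case: 1D steady heat conduction with a uniform source. This minimal sketch (in Python rather than the MATLAB used in the paper) uses linear elements and an invented conductivity and source strength:

```python
import numpy as np

# 1D steady heat conduction: -k u'' = q on [0, 1], u(0) = u(1) = 0,
# discretized with linear finite elements
k, q, n_el = 1.0, 10.0, 8      # conductivity, source strength, element count (assumed)
n = n_el + 1                   # number of nodes
h = 1.0 / n_el                 # element length

K = np.zeros((n, n))
F = np.zeros(n)
for e in range(n_el):
    # element stiffness matrix and consistent load vector for linear shape functions
    Ke = (k / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    Fe = q * h / 2.0 * np.ones(2)
    idx = [e, e + 1]
    K[np.ix_(idx, idx)] += Ke
    F[idx] += Fe

# apply Dirichlet boundary conditions u(0) = u(1) = 0
K[0, :] = K[-1, :] = 0.0
K[0, 0] = K[-1, -1] = 1.0
F[0] = F[-1] = 0.0

u = np.linalg.solve(K, F)
x = np.linspace(0.0, 1.0, n)
exact = q / (2.0 * k) * x * (1.0 - x)   # analytic solution for comparison
print("max nodal error:", np.abs(u - exact).max())
```

For this problem linear elements with a consistent load vector reproduce the analytic solution exactly at the nodes, which makes it a convenient self-check before moving to 2D flow and temperature fields.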

  10. Understanding the statistics of small risks

    International Nuclear Information System (INIS)

    Siddall, E.

    1983-10-01

    Monte Carlo analyses are used to show what inferences can and cannot be drawn when either a very small number of accidents result from a considerable exposure or where a very small number of people, down to a single individual, are exposed to small added risks. The distinction between relative and absolute uncertainty is illustrated. No new statistical principles are involved
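
The point about small-count inference can be illustrated with an exact Poisson confidence interval: even a seemingly informative observation pins the underlying rate down only to within an order of magnitude. The counts and exposure below are invented:

```python
from scipy import stats

# Suppose 2 accidents are observed over 10,000 unit-years of exposure
# (hypothetical numbers chosen for illustration)
observed, exposure = 2, 10_000

# Exact (Garwood) 95% confidence interval for the underlying Poisson rate
lower = stats.chi2.ppf(0.025, 2 * observed) / 2 / exposure
upper = stats.chi2.ppf(0.975, 2 * (observed + 1)) / 2 / exposure
print(f"rate consistent with the data: {lower:.2e} to {upper:.2e} per unit-year")
```

The interval spans roughly a factor of thirty, illustrating the distinction between the point estimate and the large absolute uncertainty that small counts carry.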

  11. Vector-field statistics for the analysis of time varying clinical gait data.

    Science.gov (United States)

    Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A

    2017-01-01

    In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analysis of these data is commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures in the event that statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration of practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. Tester 1 and tester 2 were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed, respectively. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that, in principle, vector-field statistics corroborate what a team of clinical gait experts would classify as practically meaningful pre- versus post-treatment time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established.
Future research is recommended to assess the usefulness of vector-field analyses
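
The core idea of 1D waveform inference can be sketched with a self-contained permutation test: compute a paired t-statistic at every instant of the gait cycle and control the family-wise error with the permutation distribution of the maximum |t|. This is a from-scratch sketch of the idea (not the spm1d package the field typically uses), and all waveforms below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 101, 12          # time points per gait cycle, subjects (hypothetical)

# Synthetic knee-flexion waveforms pre/post an intervention (degrees);
# the post curves carry an extra bump late in the cycle
t = np.linspace(0, 1, T)
pre = 40 * np.sin(np.pi * t) + rng.normal(0, 3, (n, T))
post = (40 * np.sin(np.pi * t) + 10 * np.exp(-((t - 0.7) / 0.1) ** 2)
        + rng.normal(0, 3, (n, T)))

def paired_t_curve(a, b):
    d = b - a                                   # paired differences at each instant
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(len(d)))

t_obs = paired_t_curve(pre, post)

# Permutation distribution of the maximum |t| over the whole waveform controls
# the family-wise error rate: the idea behind SPM-style 1D inference
max_t = []
for _ in range(500):
    signs = rng.choice([-1, 1], size=(n, 1))    # random sign flips per subject
    d = (post - pre) * signs
    max_t.append(np.abs(d.mean(0) / (d.std(0, ddof=1) / np.sqrt(n))).max())
threshold = np.quantile(max_t, 0.95)

n_sig = int(np.sum(np.abs(t_obs) > threshold))
print(f"critical |t|={threshold:.2f}, suprathreshold time points: {n_sig}")
```

Testing the whole waveform against a single field-wide threshold is what removes the need to pre-select 0D scalars, which is exactly the type I/II error problem the record describes.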

  12. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  13. SPSS for applied sciences basic statistical testing

    CERN Document Server

    Davis, Cole

    2013-01-01

    This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required.The author does not use mathematical form

  14. Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis

    Science.gov (United States)

    Konrad, T. G.; Kropfli, R. A.

    1975-01-01

    Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.

  15. Statistics and Corporate Environmental Management: Relations and Problems

    DEFF Research Database (Denmark)

    Madsen, Henning; Ulhøi, John Parm

    1997-01-01

    Statistical methods have long been used to analyse the macroeconomic consequences of environmentally damaging activities, political actions to control, prevent, or reduce these damages, and environmental problems in the natural environment. Up to now, however, they have had a limited and not very specific use in corporate environmental management systems. This paper will address some of the special problems related to the use of statistical techniques in corporate environmental management systems. One important aspect of this is the interaction of internal decisions and activities with conditions...

  16. Bayesian analyses of seasonal runoff forecasts

    Science.gov (United States)

    Krzysztofowicz, R.; Reese, S.

    1991-12-01

    Forecasts of seasonal snowmelt runoff volume provide indispensable information for rational decision making by water project operators, irrigation district managers, and farmers in the western United States. Bayesian statistical models and communication frames have been researched in order to enhance the forecast information disseminated to the users, and to characterize forecast skill from the decision maker's point of view. Four products are presented: (i) a Bayesian Processor of Forecasts, which provides a statistical filter for calibrating the forecasts, and a procedure for estimating the posterior probability distribution of the seasonal runoff; (ii) the Bayesian Correlation Score, a new measure of forecast skill, which is related monotonically to the ex ante economic value of forecasts for decision making; (iii) a statistical predictor of monthly cumulative runoffs within the snowmelt season, conditional on the total seasonal runoff forecast; and (iv) a framing of the forecast message that conveys the uncertainty associated with the forecast estimates to the users. All analyses are illustrated with numerical examples of forecasts for six gauging stations from the period 1971-1988.

  17. Long-Term Propagation Statistics and Availability Performance Assessment for Simulated Terrestrial Hybrid FSO/RF System

    Directory of Open Access Journals (Sweden)

    Fiser Ondrej

    2011-01-01

    Long-term monthly and annual statistics of the attenuation of electromagnetic waves that have been obtained from 6 years of measurements on a free space optical path, 853 meters long, with a wavelength of 850 nm and on a precisely parallel radio path with a frequency of 58 GHz are presented. All the attenuation events observed are systematically classified according to the hydrometeor type causing the particular event. Monthly and yearly propagation statistics on the free space optical path and radio path are obtained. The influence of individual hydrometeors on attenuation is analysed. The obtained propagation statistics are compared to the calculated statistics using ITU-R models. The calculated attenuation statistics both at 850 nm and 58 GHz underestimate the measured statistics for higher attenuation levels. The availability performance of a simulated hybrid FSO/RF system is analysed based on the measured data.

  18. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via two-factor completely randomized design and comparison of data means is carried out with Duncan's multiple-range test. Statistical analysis of experimental data show that temperature and weight fraction have a reasonable impact on the thermal ...

  19. Inferential Statistics in "Language Teaching Research": A Review and Ways Forward

    Science.gov (United States)

    Lindstromberg, Seth

    2016-01-01

    This article reviews all (quasi)experimental studies appearing in the first 19 volumes (1997-2015) of "Language Teaching Research" (LTR). Specifically, it provides an overview of how statistical analyses were conducted in these studies and of how the analyses were reported. The overall conclusion is that there has been a tight adherence…

  20. A Framework for Assessing High School Students' Statistical Reasoning.

    Science.gov (United States)

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter included describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  1. RCC-M: Design and construction rules for mechanical components of PWR nuclear islands

    International Nuclear Information System (INIS)

    2017-01-01

    AFCEN's RCC-M code concerns the mechanical components designed and manufactured for pressurized water reactors (PWR). It applies to pressure equipment in nuclear islands in safety classes 1, 2 and 3, and certain non-pressure components, such as vessel internals, supporting structures for safety class components, storage tanks and containment penetrations. RCC-M covers the following technical subjects: sizing and design, choice of materials and procurement. Fabrication and control, including: associated qualification requirements (procedures, welders and operators, etc.), control methods to be implemented, acceptance criteria for detected defects, documentation associated with the different activities covered, and quality assurance. The design, manufacture and inspection rules defined in RCC-M leverage the results of the research and development work pioneered in France, Europe and worldwide, and which have been successfully used by industry to design and build PWR nuclear islands. AFCEN's rules incorporate the resulting feedback. Use: France's last 16 nuclear units (P'4 and N4); 4 CP1 reactors in South Africa (2) and Korea (2); 44 M310 (4), CPR-1000 (28), CPR-600 (6), HPR-1000 (4) and EPR (2) reactors in service or undergoing construction in China; 4 EPR reactors in Europe: Finland (1), France (1) and UK (2). Content: Section I - nuclear island components, subsection 'A': general rules, subsection 'B': class 1 components, subsection 'C': class 2 components, subsection 'D': class 3 components, subsection 'E': small components, subsection 'G': core support structures, subsection 'H': supports, subsection 'J': low pressure or atmospheric storage tanks, subsection 'P': containment penetration, subsection 'Q': qualification of active mechanical components, subsection 'Z': technical appendices; section II - materials; section III - examination

  2. Monitoring and Evaluation; Statistical Support for Life-cycle Studies, 2003 Annual Report.

    Energy Technology Data Exchange (ETDEWEB)

    Skalski, John

    2003-12-01

    This report summarizes the statistical analysis and consulting activities performed under Contract No. 00004134, Project No. 199105100 funded by Bonneville Power Administration during 2003. These efforts are focused on providing real-time predictions of outmigration timing, assessment of life-history performance measures, evaluation of status and trends in recovery, and guidance on the design and analysis of Columbia Basin fish and wildlife monitoring and evaluation studies. The overall objective of the project is to provide BPA and the rest of the fisheries community with statistical guidance on design, analysis, and interpretation of monitoring data, which will lead to improved monitoring and evaluation of salmonid mitigation programs in the Columbia/Snake River Basin. This overall goal is being accomplished by making fisheries data readily available for public scrutiny, providing statistical guidance on the design and analyses of studies by hands-on support and written documents, and providing real-time analyses of tagging results during the smolt outmigration for review by decision makers. For a decade, this project has been providing in-season projections of smolt outmigration timing to assist in spill management. As many as 50 different fish stocks at 8 different hydroprojects are tracked in real-time to predict the 'percent of run to date' and 'date to specific percentile'. The project also conducts added-value analyses of historical tagging data to understand relationships between fish responses, environmental factors, and anthropogenic effects. The statistical analysis of historical tagging data crosses agency lines in order to assimilate information on salmon population dynamics irrespective of origin. The lessons learned from past studies are used to improve the design and analyses of future monitoring and evaluation efforts. Through these efforts, the project attempts to provide the fisheries community with reliable analyses

  3. Statistical Survey of Non-Formal Education

    Directory of Open Access Journals (Sweden)

    Ondřej Nývlt

    2012-12-01

    ...focused on a programme within a regular education system. Labour market flexibility and new requirements on employees create a new domain of education called non-formal education. Is there a reliable statistical source with a good methodological definition for the Czech Republic? The Labour Force Survey (LFS) has been the basic statistical source for time comparison of non-formal education for the last ten years. Furthermore, a special Adult Education Survey (AES) in 2011 was focused on individual components of non-formal education in a detailed way. In general, the goal of the EU is to use data from both internationally comparable surveys for analyses of particular fields of lifelong learning, in such a way that annual LFS data can be enlarged by detailed information from the AES at five-year intervals. This article describes the reliability of statistical data about non-formal education. This analysis is usually connected with sampling and non-sampling errors.

  4. Microvariability in AGNs: study of different statistical methods - I. Observational analysis

    Science.gov (United States)

    Zibecchi, L.; Andruchow, I.; Cellone, S. A.; Carpintero, D. D.; Romero, G. E.; Combi, J. A.

    2017-05-01

    We present the results of a study of different statistical methods currently used in the literature to analyse the (micro)variability of active galactic nuclei (AGNs) from ground-based optical observations. In particular, we focus on the comparison between the results obtained by applying the so-called C and F statistics, which are based on the ratio of standard deviations and variances, respectively. The motivation for this is that the implementation of these methods leads to different and contradictory results, making the variability classification of the light curves of a certain source dependent on the statistics implemented. For this purpose, we re-analyse the results on an AGN sample observed along several sessions with the 2.15 m 'Jorge Sahade' telescope (CASLEO), San Juan, Argentina. For each AGN, we constructed the nightly differential light curves. We thus obtained a total of 78 light curves for 39 AGNs, and we then applied the statistical tests mentioned above, in order to re-classify the variability state of these light curves and in an attempt to find a suitable statistical methodology to study photometric (micro)variations. We conclude that, although the C criterion is not a proper statistical test, it could still be a suitable parameter to detect variability and that its application allows us to get more reliable variability results, in contrast with the F test.
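The two criteria compared in this abstract have compact textbook definitions: C is the ratio of the standard deviations of the target and comparison differential light curves, and F is the corresponding variance ratio referred to an F distribution. A minimal sketch under these standard definitions (synthetic light curves, not the authors' code):

```python
import numpy as np
from scipy import stats

def c_statistic(target_diff, comparison_diff):
    """C criterion: ratio of standard deviations of the two differential light curves."""
    return np.std(target_diff, ddof=1) / np.std(comparison_diff, ddof=1)

def f_test(target_diff, comparison_diff):
    """F statistic: ratio of variances, with a p-value from the F distribution."""
    n1, n2 = len(target_diff), len(comparison_diff)
    f = np.var(target_diff, ddof=1) / np.var(comparison_diff, ddof=1)
    p = stats.f.sf(f, n1 - 1, n2 - 1)
    return f, p

rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 0.01, 50)      # comparison-star scatter (mag)
variable = rng.normal(0.0, 0.03, 50)   # target with intrinsic variability

C = c_statistic(variable, quiet)       # C > 2.576 is often taken as variable
F, p = f_test(variable, quiet)
```

Note that C, being a ratio of standard deviations rather than a test statistic with a known null distribution, is used as a threshold criterion rather than a formal hypothesis test, which is the distinction the paper examines.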

  5. Automated material accounting statistics system (AMASS)

    International Nuclear Information System (INIS)

    Messinger, M.; Lumb, R.F.; Tingey, F.H.

    1981-01-01

    In this paper the modeling and statistical analysis of measurement and process data for nuclear material accountability is readdressed under a more general framework than that provided in the literature. The result of this effort is a computer program (AMASS) which uses the algorithms and equations of this paper to accomplish the analyses indicated. The actual application of the method to process data is emphasized

  6. A statistical analysis of electrical cerebral activity; Contribution a l'etude de l'analyse statistique de l'activite electrique cerebrale

    Energy Technology Data Exchange (ETDEWEB)

    Bassant, Marie-Helene

    1971-01-15

    The aim of this work was to study the statistical properties of the amplitude of the electroencephalographic signal. The experimental method is described (implantation of electrodes, acquisition and treatment of data). The program of the mathematical analysis is given (calculation of probability density functions, study of stationarity) and the validity of the tests discussed. The results concerned ten rabbits. Sequences of 40 s of EEG were sampled at very short intervals (500 μs). The probability density functions established for different brain structures (especially the dorsal hippocampus) and areas were compared during sleep, arousal and visual stimulus. Using a Χ{sup 2} test, it was found that the Gaussian distribution assumption was rejected in 96.7 per cent of the cases. For a given physiological state, there was no mathematical reason to reject the assumption of stationarity (in 96 per cent of the cases). (author)

  7. Statistical analysis of environmental data

    International Nuclear Information System (INIS)

    Beauchamp, J.J.; Bowman, K.O.; Miller, F.L. Jr.

    1975-10-01

    This report summarizes the analyses of data obtained by the Radiological Hygiene Branch of the Tennessee Valley Authority from samples taken around the Browns Ferry Nuclear Plant located in Northern Alabama. The data collection was begun in 1968 and a wide variety of types of samples have been gathered on a regular basis. The statistical analysis of environmental data involving very low levels of radioactivity is discussed. Applications of computer calculations for data processing are described

  8. Euphorbia L. subsect. Esula (Boiss. in DC. Pax in the Iberian Peninsula. Leaf surface, chromosome numbers and taxonomic treatment

    Directory of Open Access Journals (Sweden)

    Molero, Julià

    1992-12-01

    We present a taxonomic study of the representatives of Euphorbia subsect. Esula in the Iberian Peninsula. It is preceded by a first section on the study of the leaf surface and a second section on chromosome numbers.
    The section on the leaf surface is based on a study of the leaves of 45 populations of Iberian and European taxa of the subsection using a light microscope and SEM. The characters analyzed are cell shape, morphology of the cells and stomata (primary and secondary sculpture) and epicuticular waxes (tertiary sculpture). Some microcharacters of the leaf surface proved particularly useful for taxonomical purposes. Thus the basic type of stoma and the distribution model of the stomata on the two sides of the leaf are characters which make it possible to separate taxa as closely related as E. esula L. subsp. esula and E. esula L. subsp. orientalis (Boiss. in DC.) Molero & Rovira. The morphological type of the epicuticular waxes also enables us to differentiate between E. graminifolia Vill. and the E. esula aggr., and to distinguish subsp. bolosii Molero & Rovira from the remaining subspecies of E. nevadensis Boiss. & Reuter.
    Cytogenetic investigation reveals the presence of only the diploid cytotype (2n=10) in E. cyparissias L. and E. esula L. subsp. esula in the Iberian Peninsula. We describe for the first time in E. nevadensis s.l. a polyploid complex with a base of x = 10, in which the diploid level (2n=20) is present in all subspecies; the tetraploid level (2n=40) is present in E. nevadensis subsp. nevadensis and the hexaploid level (2n=60) is found in E. nevadensis subsp. bolosii. Chromosome number is not a parameter that can be used for taxonomic purposes. In E. nevadensis, cytogenetic differentiation has followed its own course, with no apparent relationship to the process of morphological

  9. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using exact analytical results for the ideal gas, it is shown that in the canonical ensemble, in the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e. the thermodynamic potential of the statistical ensemble is a homogeneous function of first degree in its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamic relations as those stemming from the Boltzmann-Gibbs statistics in this limit.
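For reference, the Rényi entropy underlying these statistics, and the Boltzmann-Gibbs limit discussed in the abstract, can be written as follows (these are the standard definitions, not reproduced from the paper itself):

```latex
S_q = \frac{1}{1-q}\,\ln \sum_i p_i^{\,q}, \qquad
\lim_{q \to 1} S_q = -\sum_i p_i \ln p_i = S_{\mathrm{BG}}
```

The $q \to 1$ limit follows from l'Hôpital's rule applied to the first expression, which is why equivalence with Boltzmann-Gibbs statistics is recovered in that limit.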

  10. Quantile regression for the statistical analysis of immunological data with many non-detects

    NARCIS (Netherlands)

    Eilers, P.H.C.; Roder, E.; Savelkoul, H.F.J.; Wijk, van R.G.

    2012-01-01

    Background Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical
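The appeal of quantile-based methods for non-detects can be sketched in a few lines: any quantile above the non-detect fraction is unchanged no matter what value the non-detects are coded as, whereas the mean is not. A toy illustration with invented data (not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
detection_limit = 0.5

# Hypothetical immunological measurements; ~25% fall below the detection limit
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)
observed = true.copy()
nondetect = observed < detection_limit

# Two arbitrary codings of the non-detects
coded_zero = np.where(nondetect, 0.0, observed)
coded_half = np.where(nondetect, detection_limit / 2, observed)

# Upper quantiles are identical regardless of the coding; the mean is not
q75_zero = np.quantile(coded_zero, 0.75)
q75_half = np.quantile(coded_half, 0.75)
mean_gap = coded_half.mean() - coded_zero.mean()
```

This invariance is what makes quantile regression applicable where ANOVA and ordinary regression, which depend on the mean, break down.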

  11. Development of a statistical shape model of multi-organ and its performance evaluation

    International Nuclear Information System (INIS)

    Nakada, Misaki; Shimizu, Akinobu; Kobatake, Hidefumi; Nawano, Shigeru

    2010-01-01

    Existing statistical shape modeling methods for a single organ cannot take into account the correlation between neighboring organs. This study focuses on a level set distribution model and proposes two modeling methods for multiple organs that can take this correlation into account. The first method combines the level set functions of multiple organs into a vector. Subsequently, it analyses the distribution of the vectors of a training dataset by principal component analysis and builds a statistical shape model of the multiple organs. The second method constructs a statistical shape model for each organ independently and assembles the component scores of the different organs in a training dataset into a vector. It then analyses the distribution of these vectors to build a statistical shape model of multiple organs. This paper shows the results of applying the proposed methods, trained on 15 abdominal CT volumes, to 8 unknown CT volumes. (author)
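The first modeling step described above — stacking per-organ level set functions into one vector and applying PCA — can be sketched with plain numpy. Shapes and sizes here are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Training set: 15 cases, two "organs", each a flattened level set function
n_cases, n_voxels = 15, 1000
organ_a = rng.normal(size=(n_cases, n_voxels))
organ_b = 0.8 * organ_a + 0.2 * rng.normal(size=(n_cases, n_voxels))  # correlated neighbour

# Method 1: concatenate level sets so the PCA captures inter-organ correlation
X = np.hstack([organ_a, organ_b])
mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)

modes = Vt                      # principal modes of the joint shape model
explained = s**2 / np.sum(s**2)  # fraction of variance per mode
```

Because the two organs are generated with strong correlation, the leading joint modes absorb most of the variance, which is precisely the information a per-organ model would miss.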

  12. Which statistics should tropical biologists learn?

    Science.gov (United States)

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity on the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection; mediocre or bad experimental design and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during one year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.
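Most of the twelve procedures listed above are one-liners in standard free software, which supports the abstract's point about selection and interpretation mattering more than mathematical derivation. A brief sketch with invented toy field data (scipy and numpy, assumed here as the "reliable and friendly freeware"):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy field data: leaf sizes at two sites, plus a paired humidity covariate
site_a = rng.normal(10.0, 2.0, 30)
site_b = rng.normal(12.0, 2.0, 30)
humidity = 0.5 * site_a + rng.normal(0.0, 1.0, 30)

t, p_t = stats.ttest_ind(site_a, site_b)         # Student's t test
u, p_u = stats.mannwhitneyu(site_a, site_b)      # Mann-Whitney U test
rho, p_rho = stats.spearmanr(site_a, humidity)   # Spearman's rank correlation

# Shannon's diversity index from species counts
counts = np.array([50, 30, 15, 5])
p = counts / counts.sum()
shannon_h = -np.sum(p * np.log(p))
```

The biological judgment — which of these is appropriate for a given design, and what the result means — is the part the proposed one-semester course would focus on.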

  13. HLA region excluded by linkage analyses of early onset periodontitis

    Energy Technology Data Exchange (ETDEWEB)

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

    Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD <-2.0) hypotheses of disease-locus heterogeneity, including models where up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to more conclusively address these issues with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.

  14. Robustness assessments are needed to reduce bias in meta-analyses that include zero-event randomized trials

    DEFF Research Database (Denmark)

    Keus, F; Wetterslev, J; Gluud, C

    2009-01-01

    OBJECTIVES: Meta-analysis of randomized trials with binary data can use a variety of statistical methods. Zero-event trials may create analytic problems. We explored how different methods may impact inferences from meta-analyses containing zero-event trials. METHODS: Five levels of statistical methods are identified for meta-analysis with zero-event trials, leading to numerous data analyses. We used the binary outcomes from our Cochrane review of randomized trials of laparoscopic vs. small-incision cholecystectomy for patients with symptomatic cholecystolithiasis to illustrate the influence of statistical method on inference. RESULTS: In seven meta-analyses of seven outcomes from 15 trials, there were zero-event trials in 0 to 71.4% of the trials. We found inconsistency in significance in one of seven outcomes (14%; 95% confidence limit 0.4%-57.9%). There was also considerable variability...

  15. Quality control and conduct of genome-wide association meta-analyses

    DEFF Research Database (Denmark)

    Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C

    2014-01-01

    Rigorous organization and quality control (QC) are necessary to facilitate successful genome-wide association meta-analyses (GWAMAs) of statistics aggregated across multiple genome-wide association studies. This protocol provides guidelines for (i) organizational aspects of GWAMAs, and for (ii) QC...

  16. Statistics for Learning Genetics

    Science.gov (United States)

    Charles, Abigail Sheena

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing statistically-based genetics problems. This issue is at the emerging edge of modern college-level genetics instruction, and this study attempts to identify key theoretical components for creating a specialized biological statistics curriculum. The goal of this curriculum will be to prepare biology students with the skills for assimilating quantitatively-based genetic processes, increasingly at the forefront of modern genetics. To fulfill this, two college level classes at two universities were surveyed. One university was located in the northeastern US and the other in the West Indies. There was a sample size of 42 students and a supplementary interview was administered to a select 9 students. Interviews were also administered to professors in the field in order to gain insight into the teaching of statistics in genetics. Key findings indicated that students had very little to no background in statistics (55%). Although students did perform well on exams with 60% of the population receiving an A or B grade, 77% of them did not offer good explanations on a probability question associated with the normal distribution provided in the survey. The scope and presentation of the applicable statistics/mathematics in some of the most used textbooks in genetics teaching, as well as genetics syllabi used by instructors do not help the issue. It was found that the text books, often times, either did not give effective explanations for students, or completely left out certain topics. The omission of certain statistical/mathematical oriented topics was seen to be also true with the genetics syllabi reviewed for this study. Nonetheless

  17. Evaluation and application of summary statistic imputation to discover new height-associated loci.

    Science.gov (United States)

    Rüeger, Sina; McDaid, Aaron; Kutalik, Zoltán

    2018-05-01

    As most of the heritability of complex traits is attributed to common and low frequency genetic variants, imputing them by combining genotyping chips and large sequenced reference panels is the most cost-effective approach to discover the genetic basis of these traits. Association summary statistics from genome-wide meta-analyses are available for hundreds of traits. Updating these to ever-increasing reference panels is very cumbersome as it requires reimputation of the genetic data, rerunning the association scan, and meta-analysing the results. A much more efficient method is to directly impute the summary statistics, termed as summary statistics imputation, which we improved to accommodate variable sample size across SNVs. Its performance relative to genotype imputation and practical utility has not yet been fully investigated. To this end, we compared the two approaches on real (genotyped and imputed) data from 120K samples from the UK Biobank and show that, genotype imputation boasts a 3- to 5-fold lower root-mean-square error, and better distinguishes true associations from null ones: We observed the largest differences in power for variants with low minor allele frequency and low imputation quality. For fixed false positive rates of 0.001, 0.01, 0.05, using summary statistics imputation yielded a decrease in statistical power by 9, 43 and 35%, respectively. To test its capacity to discover novel associations, we applied summary statistics imputation to the GIANT height meta-analysis summary statistics covering HapMap variants, and identified 34 novel loci, 19 of which replicated using data in the UK Biobank. Additionally, we successfully replicated 55 out of the 111 variants published in an exome chip study. Our study demonstrates that summary statistics imputation is a very efficient and cost-effective way to identify and fine-map trait-associated loci. 
Moreover, the ability to impute summary statistics is important for follow-up analyses, such as Mendelian
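At its core, summary statistics imputation treats the Z scores as multivariate Gaussian and imputes an untyped variant from typed neighbours via the conditional mean, using an LD (correlation) matrix from a reference panel. A minimal numpy sketch of this standard conditional-Gaussian formula (not the authors' software; the ridge term and toy LD values are illustrative assumptions):

```python
import numpy as np

def impute_z(z_typed, ld_tt, ld_ut, lam=1e-3):
    """Impute the Z score of an untyped SNV from typed neighbours.

    z_typed : Z scores at typed SNVs
    ld_tt   : LD (correlation) matrix among the typed SNVs
    ld_ut   : LD between the untyped SNV and each typed SNV
    lam     : ridge regularisation for a finite, noisy reference panel
    """
    ld_reg = ld_tt + lam * np.eye(len(z_typed))
    return ld_ut @ np.linalg.solve(ld_reg, z_typed)

# Toy example: the untyped SNV is in perfect LD with the first typed SNV,
# so its imputed Z should essentially reproduce that SNV's Z score
z_typed = np.array([4.0, 1.0, 0.5])
ld_tt = np.eye(3)
ld_ut = np.array([1.0, 0.0, 0.0])

z_hat = impute_z(z_typed, ld_tt, ld_ut)
```

The quality of the imputation degrades as the LD between the untyped variant and its typed neighbours weakens, which is consistent with the power losses at low imputation quality reported in the abstract.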

  18. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...
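The idea of data-driven subgroup detection can be caricatured in a few lines: scan a partitioning covariate for the split that maximises the difference in estimated treatment effect between the two child groups. This is a deliberately simplified stand-in (invented data, a plain grid search) for the model-based tests the paper develops:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
age = rng.uniform(20, 80, n)
treat = rng.integers(0, 2, n)
# Hypothetical truth: treatment only helps patients younger than ~50
outcome = 0.5 * treat * (age < 50) + rng.normal(0.0, 0.2, n)

def treatment_effect(mask):
    """Difference in mean outcome, treated minus control, within a subgroup."""
    return outcome[mask & (treat == 1)].mean() - outcome[mask & (treat == 0)].mean()

# Grid search over candidate age cut-points for the largest effect difference
best_split, best_gap = None, -np.inf
for cut in np.quantile(age, np.linspace(0.1, 0.9, 17)):
    gap = abs(treatment_effect(age < cut) - treatment_effect(age >= cut))
    if gap > best_gap:
        best_split, best_gap = cut, gap
```

A real model-based recursive partitioning procedure replaces this ad hoc gap with formal parameter-instability tests inside a fitted model, and recurses on each child group.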

  19. Statistical analysis of the potassium concentration obtained through

    International Nuclear Information System (INIS)

    Pereira, Joao Eduardo da Silva; Silva, Jose Luiz Silverio da; Pires, Carlos Alberto da Fonseca; Strieder, Adelir Jose

    2007-01-01

    The present work was developed in outcrops of the Santa Maria region, Rio Grande do Sul State, southern Brazil. Statistical evaluations were applied to different rock types. The possibility of distinguishing different geologic units, sedimentary and volcanic (acid and basic types), by means of statistical analyses of airborne gamma-ray spectrometry, integrating potassium radiation emission data with geological and geochemical data, is discussed. The survey was carried out in 1973 by the Geological Survey of Brazil/Companhia de Pesquisas de Recursos Minerais. The Camaqua Project evaluated the behavior of potassium concentrations, generating XYZ Geosoft 1997 format files, a grid, a thematic map and a digital thematic map for the total area. Using this database, the integration of statistical analyses was tested for sedimentary formations belonging to the Depressao Central do Rio Grande do Sul and for volcanic rocks from the Planalto da Serra Geral at the border of the Parana Basin. A univariate statistical model was used: the mean, the standard error of the mean, and the confidence limits were estimated. Tukey's test was used to compare mean values. The results allowed criteria to be created to distinguish geological formations based on their potassium content. The back-calibration technique was employed to transform K radiation to percentage. In this context it was possible to define characteristic values of radioactive potassium emissions and their confidence ranges in relation to geologic formations. The potassium variable, when evaluated in relation to the geographic Universal Transverse Mercator coordinate system, showed a spatial relation following a second-order polynomial model, with one determination coefficient. The Statistica 7.1 software (General Linear Models), as used by the Statistics Department of the Federal University of Santa Maria, Brazil, was employed. (author)

  20. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.
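    As an illustration of the parameter estimation described above, here is a minimal ordinary-least-squares sketch; the numbers are invented for the example, not macroeconomic data from the article:

```python
import numpy as np

# Minimal OLS sketch for y = a + b*x + e, using the closed-form estimates.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # e.g. an explanatory macro indicator
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # e.g. the dependent variable

b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)  # slope estimate
a = y.mean() - b * x.mean()                          # intercept estimate

residuals = y - (a + b * x)
# R^2: share of variance explained by the regression
r2 = 1 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(b, 3), round(a, 3), round(r2, 3))
```

A test of homoscedasticity would then examine whether the residuals' variance is constant across fitted values, as discussed in the article.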

  1. Reporting the results of meta-analyses: a plea for incorporating clinical relevance referring to an example.

    Science.gov (United States)

    Bartels, Ronald H M A; Donk, Roland D; Verhagen, Wim I M; Hosman, Allard J F; Verbeek, André L M

    2017-11-01

    The results of meta-analyses are frequently reported, but understanding and interpreting them is difficult for both clinicians and patients. Statistical significance is reported without reference to values that imply clinical relevance. This study aimed to use the minimal clinically important difference (MCID) to rate the clinical relevance of a meta-analysis. This study is a review of meta-analyses relating to a specific topic, the clinical results of cervical arthroplasty. The outcome measure used in the study was the MCID. We performed an extensive literature search of a series of meta-analyses evaluating a similar subject as an example. We searched Pubmed and Embase through August 9, 2016, and found articles concerning meta-analyses of the clinical outcome of cervical arthroplasty compared with that of anterior cervical discectomy with fusion in cases of cervical degenerative disease. We evaluated the analyses for statistical significance and their relation to the MCID. The MCID was defined based on results in similar patient groups and a similar disease entity reported in the literature. We identified 21 meta-analyses, only one of which referred to the MCID. However, the researchers used an inappropriate measurement scale and, therefore, an incorrect MCID. The majority of the conclusions were based on statistical results without mention of clinical relevance. The majority of the articles we reviewed drew conclusions based on statistical differences instead of clinical relevance. We recommend introducing the concept of the MCID when reporting the results of a meta-analysis, as well as mentioning the explicit scale of the analyzed measurement. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Progressive statistics for studies in sports medicine and exercise science.

    Science.gov (United States)

    Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri

    2009-01-01

    Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.

  3. The practical impact of differential item functioning analyses in a health-related quality of life instrument

    DEFF Research Database (Denmark)

    Scott, Neil W; Fayers, Peter M; Aaronson, Neil K

    2009-01-01

    Differential item functioning (DIF) analyses are commonly used to evaluate health-related quality of life (HRQoL) instruments. There is, however, a lack of consensus as to how to assess the practical impact of statistically significant DIF results.

  4. An innovative statistical approach for analysing non-continuous variables in environmental monitoring: assessing temporal trends of TBT pollution.

    Science.gov (United States)

    Santos, José António; Galante-Oliveira, Susana; Barroso, Carlos

    2011-03-01

    The current work presents an innovative statistical approach to model ordinal variables in environmental monitoring studies. An ordinal variable has values that can only be compared as "less", "equal" or "greater", and it is not possible to have information about the size of the difference between two particular values. The ordinal variable under study is the vas deferens sequence (VDS) used in imposex (superimposition of male sexual characters onto prosobranch females) field assessment programmes for monitoring tributyltin (TBT) pollution. The statistical methodology presented here is the ordered logit regression model. It assumes that the VDS is an ordinal variable whose values correspond to a process of imposex development that can be considered continuous in both biological and statistical senses and can be described by a latent, non-observable continuous variable. This model was applied to the case study of Nucella lapillus imposex monitoring surveys conducted on the Portuguese coast between 2003 and 2008 to evaluate the temporal evolution of TBT pollution in this country. In order to produce more reliable conclusions, the proposed model includes covariates that may influence the imposex response besides TBT (e.g. the shell size). The model also provides an analysis of the environmental risk associated with TBT pollution by estimating the probability of the occurrence of females with VDS ≥ 2 in each year, according to OSPAR criteria. We consider that the proposed application of this statistical methodology has great potential in environmental monitoring whenever there is the need to model variables that can only be assessed through an ordinal scale of values.
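    The probability estimates an ordered logit (proportional-odds) model yields, including an OSPAR-style risk P(VDS ≥ 2), can be sketched as follows. The cutpoints and linear predictor below are hypothetical values for illustration, not the fitted estimates from the study:

```python
import math

def ordered_logit_probs(eta, cutpoints):
    """Category probabilities under a proportional-odds (ordered logit) model.

    P(Y <= k) = logistic(c_k - eta); categories are 0..len(cutpoints).
    """
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(c - eta) for c in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

# Hypothetical fitted values: cutpoints between VDS stages 0|1, 1|2, 2|3, 3|4,
# and a linear predictor eta combining year and shell-size effects.
cutpoints = [-1.0, 0.5, 2.0, 3.5]
eta = 1.2
p = ordered_logit_probs(eta, cutpoints)
risk_vds_ge_2 = sum(p[2:])   # OSPAR-style risk: P(VDS >= 2)
print(round(sum(p), 6), round(risk_vds_ge_2, 3))
```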

  5. Conceptual and statistical problems associated with the use of diversity indices in ecology.

    Science.gov (United States)

    Barrantes, Gilbert; Sandoval, Luis

    2009-09-01

    Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all the information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on the environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses associated with changes in species richness across localities, or changes in the abundance of one or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved to be robust for standardizing all samples to a common size. Even the simplest method, such as reporting the number of species per taxonomic category, possibly provides more information than a diversity index value.
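    The rarefaction method mentioned above can be sketched with the classic hypergeometric estimator of expected species richness in a subsample; the abundance vectors are invented for illustration:

```python
from math import comb

def rarefied_richness(counts, n):
    """Expected number of species in a random subsample of n individuals
    (classic hypergeometric rarefaction):
    E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)]."""
    N = sum(counts)
    return sum(1 - comb(N - Ni, n) / comb(N, n) for Ni in counts)

# Hypothetical abundance vectors for two communities of unequal sample size.
site_a = [50, 30, 10, 5, 3, 1, 1]   # 100 individuals, 7 species
site_b = [20, 15, 5]                # 40 individuals, 3 species

# Standardize both to the smaller sample size before comparing richness.
n = min(sum(site_a), sum(site_b))
print(round(rarefied_richness(site_a, n), 2), rarefied_richness(site_b, n))
```

At the common size n, site_b trivially retains all 3 species, while site_a's expected richness drops below its raw count of 7, which is the comparison a diversity index alone cannot give.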

  6. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  7. Investigating salt frost scaling by using statistical methods

    DEFF Research Database (Denmark)

    Hasholt, Marianne Tange; Clemmensen, Line Katrine Harder

    2010-01-01

    A large data set comprising data for 118 concrete mixes on mix design, air void structure, and the outcome of freeze/thaw testing according to SS 13 72 44 has been analysed by use of statistical methods. The results show that with regard to mix composition, the most important parameter...

  8. Faculty Library and Its Users (Interactively and Statistically

    Directory of Open Access Journals (Sweden)

    Rozalija Marinković

    2011-12-01

    Full Text Available According to popular opinion, statistics is the sum of incorrect data. However, much can be learned and concluded from statistical reports. In our example of recording statistics in the library of the Faculty of Economics in Osijek, the following records are kept: user categories, the frequency of borrowing, the structure of borrowing, use of the reading room, use of the database, etc. Based on these data and statistical processing, the following can be analysed: the total number of users, the number of users by year of study, by the content of used professional and scientific literature, by the use of serial publications in the reading room, or by work on computers and database usage. All these data are precious for determining the procurement policy of the library, observing the frequency of use of particular categories of professional and scientific literature (textbooks, handbooks, professional and scientific monographs, databases) and determining operation guidelines for library staff.

  9. Statistical analysis of manufacturing defects on fatigue life of wind turbine casted Component

    DEFF Research Database (Denmark)

    Rafsanjani, Hesam Mirzaei; Sørensen, John Dalsgaard; Mukherjee, Krishnendu

    2014-01-01

    Wind turbine components experience heavily variable loads during their lifetime, and fatigue failure is a main failure mode of cast components during their design working life. The fatigue life is highly dependent on the microstructure (grain size and graphite form and size), number, type, location... and size of defects in the cast components and is therefore rather uncertain and needs to be described by stochastic models. Uncertainties related to such defects influence prediction of the fatigue strengths and are therefore important in modelling and assessment of the reliability of wind turbine... for the fatigue life, namely LogNormal and Weibull distributions. The statistical analyses are performed using the Maximum Likelihood Method and the statistical uncertainty is estimated. Further, stochastic models for the fatigue life obtained from the statistical analyses are used for illustration to assess...

  10. Nuclear modification factor using Tsallis non-extensive statistics

    Energy Technology Data Exchange (ETDEWEB)

    Tripathy, Sushanta; Garg, Prakhar; Kumar, Prateek; Sahoo, Raghunath [Indian Institute of Technology Indore, Discipline of Physics, School of Basic Sciences, Simrol (India); Bhattacharyya, Trambak; Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)

    2016-09-15

    The nuclear modification factor is derived using Tsallis non-extensive statistics in the relaxation time approximation. The variation of the nuclear modification factor with transverse momentum for different values of the non-extensive parameter, q, is also observed. The experimental data from RHIC and LHC are analysed in the framework of Tsallis non-extensive statistics in a relaxation time approximation. It is shown that the proposed approach explains the R_AA of all particles over a wide range of transverse momentum but does not seem to describe the rise in R_AA at very high transverse momenta. (orig.)
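    The q-exponential at the heart of Tsallis non-extensive fits can be sketched as follows; the temperature, q and mass values are illustrative choices, not the fitted parameters of the paper, and the spectrum shape shown is one of several conventions used in the literature:

```python
import math

def q_exp(x, q):
    """Tsallis q-exponential; reduces to the Boltzmann factor exp(-x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(-x)
    return (1.0 + (q - 1.0) * x) ** (-1.0 / (q - 1.0))

# Illustrative parameters (GeV): temperature, non-extensivity, pion mass.
T, q, m = 0.07, 1.1, 0.14

def spectrum(pt):
    """Unnormalized Tsallis-like transverse-momentum spectrum shape."""
    mt = math.sqrt(pt * pt + m * m)   # transverse mass
    return q_exp(mt / T, q)

# q -> 1 recovers the exponential; q > 1 gives the power-law tail at high pT.
print(abs(q_exp(2.0, 1.0001) - math.exp(-2.0)) < 1e-3)
```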

  11. Big Data as a Source for Official Statistics

    Directory of Open Access Journals (Sweden)

    Daas Piet J.H.

    2015-06-01

    Full Text Available More and more data are being produced by an increasing number of electronic devices physically surrounding us and on the internet. The large amount of data and the high frequency at which they are produced have resulted in the introduction of the term ‘Big Data’. Because these data reflect many different aspects of our daily lives and because of their abundance and availability, Big Data sources are very interesting from an official statistics point of view. This article discusses the exploration of both opportunities and challenges for official statistics associated with the application of Big Data. Experiences gained with analyses of large amounts of Dutch traffic loop detection records and Dutch social media messages are described to illustrate the topics characteristic of the statistical analysis and use of Big Data.

  12. Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia

    International Nuclear Information System (INIS)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge; Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis

    2015-01-01

    Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. Ten raters analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA were high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)

  13. Mathematical background and attitudes toward statistics in a sample of Spanish college students.

    Science.gov (United States)

    Carmona, José; Martínez, Rafael J; Sánchez, Manuel

    2005-08-01

    To examine the relation of mathematical background and initial attitudes toward statistics of Spanish college students in the social sciences, the Survey of Attitudes Toward Statistics was given to 827 students. Multivariate analyses tested the effects of two indicators of mathematical background (amount of exposure and achievement in previous courses) on the four subscales. The analysis suggested that grades in previous courses are more related to initial attitudes toward statistics than the number of mathematics courses taken. Mathematical background was related to students' affective responses to statistics but not to their valuing of statistics. Implications for possible research are discussed.

  14. Chemical data and statistical analyses from a uranium hydrogeochemical survey of the Rio Ojo Caliente drainage basin, New Mexico. Part I. Water

    International Nuclear Information System (INIS)

    Wenrich-Verbeek, K.J.; Suits, V.J.

    1979-01-01

    This report presents the chemical analyses and statistical evaluation of 62 water samples collected in the north-central part of New Mexico near Rio Ojo Caliente. Both spring and surface-water samples were taken throughout the Rio Ojo Caliente drainage basin above and a few miles below the town of La Madera. A high U concentration (15 μg/l) found in the water of the Rio Ojo Caliente near La Madera, Rio Arriba County, New Mexico, during a regional sampling-technique study in August 1975 by the senior author, was investigated further in May 1976 to determine whether stream waters could be effectively used to trace the source of a U anomaly. A detailed study of the tributaries to the Rio Ojo Caliente, involving 29 samples, was conducted during a moderate discharge period, May 1976, so that small tributaries would contain water. This study isolated Canada de la Cueva as the tributary contributing the anomalous U, so that in May 1977, an extremely low discharge period due to the 1977 drought, an additional 33 samples were taken to further define the anomalous area. 6 references, 3 figures, 6 tables

  15. Analysing the Relevance of Experience Partitions to the Prediction of Players’ Self-Reports of Affect

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez; Yannakakis, Georgios N.

    2011-01-01

    A common practice in modeling affect from physiological signals consists of reducing the signals to a set of statistical features that feed predictors of self-reported emotions. This paper analyses the impact of various time-windows, used for the extraction of physiological features, to the accur...

  16. Analysis and classification of ECG-waves and rhythms using circular statistics and vector strength

    Directory of Open Access Journals (Sweden)

    Janßen Jan-Dirk

    2017-09-01

    Full Text Available The most common way to analyse heart rhythm is to calculate the RR-interval and the heart rate variability. For further evaluation, descriptive statistics are often used. Here we introduce a new and more natural heart rhythm analysis tool that is based on circular statistics and vector strength. Vector strength is a tool to measure the periodicity or lack of periodicity of a signal. We divide the signal into non-overlapping window segments and project the detected R-waves around the unit circle using the complex exponential function and the median RR-interval. In addition, we calculate the vector strength and apply circular statistics as well as an angular histogram to the R-wave vectors. This approach enables an intuitive visualization and analysis of rhythmicity. Our results show that ECG-waves and rhythms can be easily visualized, analysed and classified by circular statistics and vector strength.
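    The projection-and-vector-strength computation described above can be sketched as follows; the R-wave times are synthetic, and the single-window case is shown:

```python
import cmath
import statistics

def vector_strength(event_times, period):
    """Vector strength: project events onto the unit circle at the given
    period and take the length of the mean resultant vector.
    1.0 = perfectly periodic, ~0 = no periodicity at that period."""
    phasors = [cmath.exp(2j * cmath.pi * t / period) for t in event_times]
    return abs(sum(phasors) / len(phasors))

# Synthetic R-wave times in seconds; the reference period is the median
# RR-interval, as in the approach described above.
r_times = [0.0, 0.8, 1.6, 2.41, 3.2, 4.0, 4.79, 5.6]
rr = [b - a for a, b in zip(r_times, r_times[1:])]
period = statistics.median(rr)
vs = vector_strength(r_times, period)
print(round(vs, 3))   # close to 1: a highly regular rhythm
```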

  17. Statistical separability and the impossibility of the superluminal quantum communication

    International Nuclear Information System (INIS)

    Zhang Qiren

    2004-01-01

    The authors analyse the relation and the difference between the quantum correlation of two points in space and the communication between them. The statistical separability of two points in space is defined and proven. From this statistical separability, the authors prove that superluminal quantum communication between different points is impossible. To emphasize the compatibility between quantum theory and relativity, the authors write the von Neumann equation of density operator evolution in the multi-time form. (author)

  18. Polygenic scores via penalized regression on summary statistics.

    Science.gov (United States)

    Mak, Timothy Shin Heng; Porsch, Robert Milan; Choi, Shing Wan; Zhou, Xueya; Sham, Pak Chung

    2017-09-01

    Polygenic scores (PGS) summarize the genetic contribution of a person's genotype to a disease or phenotype. They can be used to group participants into different risk categories for diseases, and are also used as covariates in epidemiological analyses. A number of possible ways of calculating PGS have been proposed, and recently there is much interest in methods that incorporate information available in published summary statistics. As there is no inherent information on linkage disequilibrium (LD) in summary statistics, a pertinent question is how we can use LD information available elsewhere to supplement such analyses. To answer this question, we propose a method for constructing PGS using summary statistics and a reference panel in a penalized regression framework, which we call lassosum. We also propose a general method for choosing the value of the tuning parameter in the absence of validation data. In our simulations, we showed that pseudovalidation often resulted in prediction accuracy that is comparable to using a dataset with validation phenotype and was clearly superior to the conservative option of setting the tuning parameter of lassosum to its lowest value. We also showed that lassosum achieved better prediction accuracy than simple clumping and P-value thresholding in almost all scenarios. It was also substantially faster and more accurate than the recently proposed LDpred. © 2017 WILEY PERIODICALS, INC.
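    The core idea, lasso-style penalized regression driven only by summary-level correlations and a reference LD matrix, can be sketched with plain coordinate descent. This is a minimal sketch of the idea, not the exact lassosum algorithm (which adds a further shrinkage parameter s and pseudovalidation); the numbers are toy values:

```python
import numpy as np

def summary_lasso(r, R, lam, iters=100):
    """Coordinate-descent lasso using only summary-level quantities:
    r ~ X'y/n (marginal SNP-phenotype correlations from summary statistics)
    and R ~ X'X/n (LD correlation matrix from a reference panel)."""
    beta = np.zeros(len(r))
    for _ in range(iters):
        for j in range(len(r)):
            # partial residual correlation for SNP j, excluding its own effect
            rho = r[j] - R[j] @ beta + R[j, j] * beta[j]
            # soft-thresholding update
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / R[j, j]
    return beta

# Toy example with two correlated SNPs (illustrative numbers only).
R = np.array([[1.0, 0.5], [0.5, 1.0]])
r = np.array([0.20, 0.12])
beta = summary_lasso(r, R, lam=0.05)
print(beta.round(3))
```

With these toy values the penalty drives the weaker, LD-correlated SNP to exactly zero, which is the sparsity property that distinguishes this approach from simple clumping and P-value thresholding.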

  19. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin


  20. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    Full Text Available This article aims to challenge official statistics providers and statistics educators to ponder how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens' access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding official statistics.

  1. Age and gender effects on normal regional cerebral blood flow studied using two different voxel-based statistical analyses

    International Nuclear Information System (INIS)

    Pirson, A.S.; George, J.; Krug, B.; Vander Borght, T.; Van Laere, K.; Jamart, J.; D'Asseler, Y.; Minoshima, S.

    2009-01-01

    Fully automated analysis programs have been applied more and more to aid in the reading of regional cerebral blood flow SPECT studies. They are increasingly based on the comparison of the patient study with a normal database. In this study, we evaluate the ability of three-dimensional stereotactic surface projection (3D-SSP) to isolate the effects of age and gender in a previously studied normal population. The results were also compared with those obtained using Statistical Parametric Mapping (SPM99). Methods: Eighty-nine 99mTc-ECD SPECT studies performed in carefully screened healthy volunteers (46 females, 43 males; age 20-81 years) were analysed using 3D-SSP. A multivariate analysis based on the general linear model was performed with regions as intra-subject factor, gender as inter-subject factor and age as covariate. Results: Both age and gender had a significant interaction effect with regional tracer uptake. An age-related decline (p < 0.001) was found in the anterior cingulate gyrus, left frontal association cortex and left insula. Bilateral occipital association and left primary visual cortical uptake showed a significant relative increase with age (p < 0.001). Concerning the gender effect, women showed higher uptake (p < 0.01) in the parietal and right sensorimotor cortices. An age-by-gender interaction (p < 0.01) was found only in the left medial frontal cortex. The results were consistent with those obtained with SPM99. Conclusion: 3D-SSP analysis of normal rCBF variability is consistent with the literature and other automated voxel-based techniques, which highlight the effects of both age and gender. (authors)

  2. Application of multivariate statistical techniques in microbial ecology.

    Science.gov (United States)

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological data sets. In particular, a noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.

  3. Accounting for undetected compounds in statistical analyses of mass spectrometry 'omic studies.

    Science.gov (United States)

    Taylor, Sandra L; Leiserowitz, Gary S; Kim, Kyoungmi

    2013-12-01

    Mass spectrometry is an important high-throughput technique for profiling small molecular compounds in biological samples and is widely used to identify potential diagnostic and prognostic compounds associated with disease. Commonly, the data generated by mass spectrometry have many missing values, which result when a compound is absent from a sample or is present but at a concentration below the detection limit. Several strategies are available for statistically analyzing data with missing values. The accelerated failure time (AFT) model assumes all missing values result from censoring below a detection limit. Under a mixture model, missing values can result from a combination of censoring and the absence of a compound. We compare the power and estimation of a mixture model to those of an AFT model. Based on simulated data, we found the AFT model to have greater power to detect differences in means and point-mass proportions between groups. However, the AFT model yielded biased estimates, with the bias increasing as the proportion of observations in the point mass increased, while estimates were unbiased with the mixture model except when all missing observations came from censoring. These findings suggest using the AFT model for hypothesis testing and the mixture model for estimation. We demonstrated this approach through application to glycomics data of serum samples from women with ovarian cancer and matched controls.
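    The two missingness assumptions can be contrasted with minimal log-likelihood sketches, assuming a normal error distribution for simplicity; all numbers are illustrative, and this is a sketch of the modelling idea rather than the paper's implementation:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def censored_normal_loglik(observed, n_missing, mu, sigma, limit):
    """AFT-style assumption: every missing value is censored below the
    detection limit."""
    ll = sum(-0.5 * ((x - mu) / sigma) ** 2
             - math.log(sigma * math.sqrt(2 * math.pi)) for x in observed)
    ll += n_missing * math.log(norm_cdf((limit - mu) / sigma))
    return ll

def mixture_loglik(observed, n_missing, mu, sigma, limit, p_absent):
    """Mixture assumption: a missing value is either truly absent
    (probability p_absent) or present but censored below the limit."""
    ll = sum(math.log(1 - p_absent)
             - 0.5 * ((x - mu) / sigma) ** 2
             - math.log(sigma * math.sqrt(2 * math.pi)) for x in observed)
    miss_prob = p_absent + (1 - p_absent) * norm_cdf((limit - mu) / sigma)
    ll += n_missing * math.log(miss_prob)
    return ll

# Illustrative numbers only: 4 observed intensities, 3 missing values.
obs = [2.3, 2.9, 3.1, 3.6]
print(censored_normal_loglik(obs, 3, mu=3.0, sigma=0.5, limit=2.0))
print(mixture_loglik(obs, 3, mu=3.0, sigma=0.5, limit=2.0, p_absent=0.3))
```

Setting p_absent = 0 collapses the mixture to the pure-censoring model, which mirrors the paper's observation that the AFT model is the special case where all missing observations come from censoring.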

  4. Kidney function changes with aging in adults: comparison between cross-sectional and longitudinal data analyses in renal function assessment.

    Science.gov (United States)

    Chung, Sang M; Lee, David J; Hand, Austin; Young, Philip; Vaidyanathan, Jayabharathi; Sahajwalla, Chandrahas

    2015-12-01

    The study evaluated whether the renal function decline rate per year with age in adults varies based on two primary statistical analyses: cross-sectional (CS), using one observation per subject, and longitudinal (LT), using multiple observations per subject over time. A total of 16,628 records (3946 subjects; age range 30-92 years) of creatinine clearance and relevant demographic data were used. On average, four samples per subject were collected for up to 2364 days (mean: 793 days). A simple linear regression and random coefficient models were selected for the CS and LT analyses, respectively. The renal function decline rates were 1.33 and 0.95 ml/min/year for the CS and LT analyses, respectively, and were slower when the repeated individual measurements were considered. The study confirms that the rates differ depending on the statistical analysis, and that a statistically robust longitudinal model with a proper sampling design provides reliable individual as well as population estimates of the renal function decline rate per year with age in adults. In conclusion, our findings indicate that one should be cautious in interpreting the renal function decline rate with aging, because its estimation is highly dependent on the statistical analysis. From our analyses, a population longitudinal analysis (e.g. a random coefficient model) is recommended if individualization is critical, such as a dose adjustment based on renal function during chronic therapy. Copyright © 2015 John Wiley & Sons, Ltd.
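The gap between the two estimates is easy to reproduce. In this hypothetical simulation (all numbers are invented to echo the reported 1.33 vs 0.95 ml/min/year, and a mean of per-subject slopes stands in for a full random coefficient model), a birth-cohort effect on baseline renal function steepens the cross-sectional slope relative to the true within-subject decline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: every subject's true within-subject decline is
# 0.95 ml/min/year, but earlier-born subjects also start from a lower
# baseline, which steepens the apparent cross-sectional slope (~1.33).
n_subjects = 500
age0 = rng.uniform(30, 85, n_subjects)                     # age at first visit
baseline = 160 - 1.33 * age0 + rng.normal(0, 5, n_subjects)

ages, crcl = [], []
for i in range(n_subjects):
    visits = age0[i] + np.arange(4) * 0.7                  # 4 visits over ~2 years
    y = baseline[i] - 0.95 * (visits - age0[i]) + rng.normal(0, 2, 4)
    ages.append(visits); crcl.append(y)
ages, crcl = np.concatenate(ages), np.concatenate(crcl)

# Cross-sectional (CS): one observation per subject, simple linear regression
first = np.arange(0, 4 * n_subjects, 4)
cs_slope = np.polyfit(ages[first], crcl[first], 1)[0]

# Longitudinal (LT): mean of per-subject slopes, a minimal stand-in for a
# random coefficient model that uses all repeated measurements
lt_slope = np.mean([np.polyfit(ages[4*i:4*i+4], crcl[4*i:4*i+4], 1)[0]
                    for i in range(n_subjects)])

print(f"CS decline: {-cs_slope:.2f} ml/min/year; LT decline: {-lt_slope:.2f} ml/min/year")
```

The CS fit mixes the between-subject cohort effect into the age slope, while the LT estimate isolates the within-subject change, which is the quantity relevant for individual dose adjustment.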

  5. International Conference on Mathematical Sciences and Statistics 2013 : Selected Papers

    CERN Document Server

    Leong, Wah; Eshkuvatov, Zainidin

    2014-01-01

    This volume is devoted to the most recent discoveries in mathematics and statistics. It also serves as a platform for knowledge and information exchange between experts from industrial and academic sectors. The book covers a wide range of topics, including mathematical analyses, probability, statistics, algebra, geometry, mathematical physics, wave propagation, stochastic processes, ordinary and partial differential equations, boundary value problems, linear operators, cybernetics, and number theory and function theory. It is a valuable resource for pure and applied mathematicians, statisticians, engineers and scientists.

  6. Publication of statistically significant research findings in prosthodontics & implant dentistry in the context of other dental specialties.

    Science.gov (United States)

    Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos

    2015-10-01

    To assess the hypothesis that there is excessive reporting of statistically significant studies published in prosthodontic and implantology journals, which could indicate selective publication. The last 30 issues of 9 journals in prosthodontics and implant dentistry were hand-searched for articles with statistical analyses. The percentages of significant and non-significant results were tabulated by parameter of interest. Univariable/multivariable logistic regression analyses were applied to identify possible predictors of reporting statistically significant findings. The results of this study were compared with those of similar studies in dentistry using random-effects meta-analyses. Of the 2323 included studies, 71% reported statistically significant results, with the proportion of significant results ranging from 47% to 86%. Multivariable modeling identified geographical area and involvement of a statistician as predictors of statistically significant results. Compared to interventional studies, the odds that in vitro and observational studies would report statistically significant results were increased by 1.20 times (OR: 2.20, 95% CI: 1.66-2.92) and 0.35 times (OR: 1.35, 95% CI: 1.05-1.73), respectively. The probability of statistically significant results from randomized controlled trials was significantly lower compared to other study designs (difference: 30%, 95% CI: 11-49%). Likewise, the probability of statistically significant results in prosthodontics and implant dentistry was lower compared to other dental specialties, but this result did not reach statistical significance (P>0.05). The majority of studies identified in the fields of prosthodontics and implant dentistry presented statistically significant results. The same trend existed in publications of other specialties in dentistry. Copyright © 2015 Elsevier Ltd. All rights reserved.
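Odds ratios and Wald confidence intervals like those quoted can be computed directly from a 2x2 table. The counts below are hypothetical and not taken from the study; they only illustrate the arithmetic:

```python
import numpy as np

# Hypothetical 2x2 table: rows = study design, columns = significant / non-significant
a, b = 130, 70     # e.g. in vitro studies: significant, non-significant
c, d = 100, 120    # e.g. interventional studies: significant, non-significant

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)             # SE of log(OR)
lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

A CI that excludes 1 (as here) corresponds to a statistically significant difference in the odds of reporting significant results between the two designs.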

  7. Australasian Resuscitation In Sepsis Evaluation trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-10-01

    The Australasian Resuscitation In Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the ED with severe sepsis. In keeping with current practice, and taking into consideration aspects of trial design and reporting specific to non-pharmacologic interventions, this document outlines the principles and methods for analysing and reporting the trial results. The document was prepared prior to completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and prior to completion of the two related international studies. The statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. The data collected by the research team as specified in the study protocol, and detailed in the study case report form, were reviewed. Information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation and other related therapies, and other relevant data are described, with appropriate comparisons between groups. The primary, secondary and tertiary outcomes for the study are defined, with description of the planned statistical analyses. A statistical analysis plan was developed, along with a trial profile, mock-up tables and figures. A plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies, along with adverse events, is described. The primary, secondary and tertiary outcomes are described, along with identification of subgroups to be analysed. A statistical analysis plan for the ARISE study has been developed, and is available in the public domain, prior to the completion of recruitment into the ARISE study.

  8. Analyses of statistical transformations of raw data describing free proline concentration in sugar beet exposed to drought

    Directory of Open Access Journals (Sweden)

    Putnik-Delić Marina I.

    2010-01-01

    Full Text Available Eleven sugar beet genotypes were tested for their capacity to tolerate drought. Plants were grown in semi-controlled conditions, in the greenhouse, and watered daily. After 90 days, water deficit was imposed by the cessation of watering, while the control plants continued to be watered up to 80% of FWC. Five days later, the concentration of free proline in leaves was determined. Analysis was done in three replications. Statistical analysis was performed using STATISTICA 9.0, Minitab 15, and R 2.11.1. Differences between genotypes were statistically processed by Duncan's test. Because of the non-normality of the data distribution and the heterogeneity of variances in different groups, two types of transformations of the raw data were applied. For this type of data, the Johnson transformation was more effective in eliminating non-normality than the Box-Cox transformation. Based on both transformations, it may be concluded that in all genotypes except genotype 10, the concentration of free proline differs significantly between the drought treatment and the control.
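A normalizing-transformation step of the kind compared here can be sketched with scipy. The Johnson family has no direct one-call scipy equivalent, so only Box-Cox is shown, and the proline-like concentrations are simulated, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical right-skewed free-proline concentrations (arbitrary units)
raw = rng.lognormal(mean=2.0, sigma=0.6, size=60)

transformed, lam = stats.boxcox(raw)   # Box-Cox requires strictly positive data
w_raw, p_raw = stats.shapiro(raw)      # Shapiro-Wilk normality test before ...
w_bc, p_bc = stats.shapiro(transformed)  # ... and after transformation
print(f"Shapiro-Wilk P: raw = {p_raw:.4f}, Box-Cox = {p_bc:.4f} (lambda = {lam:.2f})")
```

A transformation is judged successful when the post-transformation normality test no longer rejects, after which parametric comparisons such as Duncan's test are defensible.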

  9. Statistical evaluation of cleanup: How should it be done?

    International Nuclear Information System (INIS)

    Gilbert, R.O.

    1993-02-01

    This paper discusses statistical issues that must be addressed when conducting statistical tests for the purpose of evaluating if a site has been remediated to guideline values or standards. The importance of using the Data Quality Objectives (DQO) process to plan and design the sampling plan is emphasized. Other topics discussed are: (1) accounting for the uncertainty of cleanup standards when conducting statistical tests, (2) determining the number of samples and measurements needed to attain specified DQOs, (3) considering whether the appropriate testing philosophy in a given situation is "guilty until proven innocent" or "innocent until proven guilty" when selecting a statistical test for evaluating the attainment of standards, (4) conducting tests using data sets that contain measurements that have been reported by the laboratory as less than the minimum detectable activity, and (5) selecting statistical tests that are appropriate for risk-based or background-based standards. A recent draft report by Berger that provides guidance on sampling plans and data analyses for final status surveys at US Nuclear Regulatory Commission licensed facilities serves as a focal point for discussion.

  10. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples
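The "PCA first, then classify" workflow the authors recommend can be illustrated on synthetic data. Everything below is invented (dimensions, effect sizes, class structure), and a nearest-centroid rule in PC space is used as a lightweight stand-in for LDA or PLS-DA:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical "spectra": two classes, 40 samples each, 200 peak intensities;
# the classes differ only in the first five peaks
n, p = 40, 200
class_a = rng.normal(0.0, 1.0, (n, p)); class_a[:, :5] += 3.0
class_b = rng.normal(0.0, 1.0, (n, p))
X = np.vstack([class_a, class_b])
y = np.array([0] * n + [1] * n)

# PCA via SVD of the mean-centred matrix; sample scores = U * S
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * S[:2]            # projection onto the first two PCs

# Nearest-centroid classification in PC space (stand-in for LDA/PLS-DA)
c0, c1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1)
        < np.linalg.norm(scores - c0, axis=1)).astype(int)
print(f"training accuracy in 2-D PC space: {(pred == y).mean():.2f}")
```

Inspecting the PC loadings (rows of `Vt`) then reveals which peaks drive the grouping, which is the "molecular basis for the groupings" role the abstract attributes to PCA.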

  11. Statistical analysis of brake squeal noise

    Science.gov (United States)

    Oberst, S.; Lai, J. C. S.

    2011-06-01

    Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.

  12. Japanese standard method for safety evaluation using best estimate code based on uncertainty and scaling analyses with statistical approach

    International Nuclear Information System (INIS)

    Mizokami, Shinya; Hotta, Akitoshi; Kudo, Yoshiro; Yonehara, Tadashi; Watada, Masayuki; Sakaba, Hiroshi

    2009-01-01

    Current licensing practice in Japan consists of using conservative boundary and initial conditions (BIC), assumptions and analytical codes. The safety analyses for licensing purposes are inherently deterministic. Therefore, conservative BIC and assumptions, such as single failure, must be employed for the analyses. However, using conservative analytical codes is not considered essential. The standard committee of the Atomic Energy Society of Japan (AESJ) drew up the standard for using best estimate codes for safety analyses in 2008, after three years of discussions reflecting recent domestic and international findings. (author)

  13. Visual and statistical analysis of {sup 18}F-FDG PET in primary progressive aphasia

    Energy Technology Data Exchange (ETDEWEB)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge [Hospital Clinico San Carlos, Department of Neurology, Madrid (Spain); Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis [San Carlos Health Research Institute (IdISSC) Complutense University of Madrid, Department of Nuclear Medicine, Hospital Clinico San Carlos, Madrid (Spain)

    2015-05-01

    Diagnosing progressive primary aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. There were 10 raters who analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA was high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. In cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
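Inter-rater agreement of the kind reported here (Fleiss' kappa across several raters) takes only a few lines to compute. The rating table below is hypothetical, not the study's data; it shows five raters assigning six scans to three diagnostic categories:

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories); each row sums to the number of raters."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # raters per item (assumed constant)
    p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
    p_i = ((counts ** 2).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    p_bar, p_e = p_i.mean(), (p_j ** 2).sum()    # observed vs chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical ratings: 6 scans, 5 raters, 3 diagnostic categories
ratings = np.array([
    [5, 0, 0],
    [4, 1, 0],
    [0, 5, 0],
    [1, 4, 0],
    [0, 0, 5],
    [0, 1, 4],
])
# Values of 0.61-0.80 are conventionally read as "substantial" agreement
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```

Running the same computation on the visual and statistical readings separately is how a kappa contrast like 0.568 vs 0.756-0.881 would be obtained.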

  14. 32 CFR 322.7 - Exempt systems of records.

    Science.gov (United States)

    2010-07-01

    ...) Reasons: (i) From subsection (c)(3) and (d) when access to accounting disclosures and access to or...) From subsection (c)(3) because the release of the disclosure accounting would place the subject of an...). (4) Reasons: (i) From subsection (c)(3) and (d) when access to accounting disclosures and access to...

  15. 26 CFR 1.9005 - Statutory provisions; section 2 of the Act of September 26, 1961 (Pub. L. 87-321, 75 Stat. 683).

    Science.gov (United States)

    2010-04-01

    ... journal or other industry publication. (b) Years to which applicable. An election made under subsection (c... day for making an election under subsection (c). An election by a taxpayer under subsection (c) shall... Stat. 683; 26 U.S.C. 613 note) [T.D. 6583, 26 FR 12077, Dec. 16, 1961] ...

  16. Trends in statistical methods in articles published in Archives of Plastic Surgery between 2012 and 2017.

    Science.gov (United States)

    Han, Kyunghwa; Jung, Inkyung

    2018-05-01

    This review article presents an assessment of trends in statistical methods and an evaluation of their appropriateness in articles published in the Archives of Plastic Surgery (APS) from 2012 to 2017. We reviewed 388 original articles published in APS between 2012 and 2017. We categorized the articles that used statistical methods according to the type of statistical method, the number of statistical methods, and the type of statistical software used. We checked whether there were errors in the description of statistical methods and results. A total of 230 articles (59.3%) published in APS between 2012 and 2017 used one or more statistical methods. Within these articles, there were 261 applications of statistical methods with continuous or ordinal outcomes, and 139 applications of statistical methods with categorical outcomes. The Pearson chi-square test (17.4%) and the Mann-Whitney U test (14.4%) were the most frequently used methods. Errors in describing statistical methods and results were found in 133 of the 230 articles (57.8%). Inadequate description of P-values was the most common error (39.1%). Among the 230 articles that used statistical methods, 71.7% provided details about the statistical software programs used for the analyses. SPSS was predominantly used in the articles that presented statistical analyses. We found that the use of statistical methods in APS has increased over the last 6 years. It seems that researchers have been paying more attention to the proper use of statistics in recent years. It is expected that these positive trends will continue in APS.
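The two most frequent methods found in the review can be run with scipy as follows. All numbers are hypothetical, chosen only to show a categorical comparison (chi-square) next to a two-group comparison of non-normal scores (Mann-Whitney U):

```python
import numpy as np
from scipy import stats

# Hypothetical 2x2 table for a categorical outcome (complication yes/no by group)
table = np.array([[12, 38],
                  [25, 25]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Hypothetical ordinal scores for two independent groups
scores_a = [72, 85, 78, 90, 66, 81, 74]
scores_b = [55, 61, 70, 58, 65, 60, 52]
u, p_mwu = stats.mannwhitneyu(scores_a, scores_b, alternative="two-sided")

print(f"Pearson chi-square: chi2 = {chi2:.2f}, df = {dof}, P = {p_chi2:.3f}")
print(f"Mann-Whitney U: U = {u}, P = {p_mwu:.4f}")
```

Note that `chi2_contingency` applies Yates' continuity correction for 2x2 tables by default, one of the reporting details the review found frequently omitted.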

  17. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students' Statistical Reasoning and Quantitative Literacy Skills.

    Science.gov (United States)

    Olimpo, Jeffrey T; Pevey, Ryan S; McCabe, Thomas M

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students' reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce.

  18. Statistical Data Editing in Scientific Articles.

    Science.gov (United States)

    Habibzadeh, Farrokh

    2017-07-01

    Scientific journals are important scholarly forums for sharing research findings. Editors have important roles in safeguarding standards of scientific publication and should be familiar with correct presentation of results, among other core competencies. Editors do not have access to the raw data and should thus rely on clues in the submitted manuscripts. To identify probable errors, they should look for inconsistencies in presented results. Common statistical problems that can be picked up by a knowledgeable manuscript editor are discussed in this article. Manuscripts should contain a detailed section on the statistical analyses of the data. Numbers should be reported with appropriate precision. The standard error of the mean (SEM) should not be reported as an index of data dispersion. Mean (standard deviation [SD]) and median (interquartile range [IQR]) should be used for describing normally and non-normally distributed data, respectively. If possible, it is better to report the 95% confidence interval (CI) for statistics, at least for the main outcome variables. P values should be presented, and interpreted with caution, if there is a hypothesis. To advance the knowledge and skills of their members, associations of journal editors should develop training courses on basic statistics and research methodology for non-experts. This would in turn improve research reporting and safeguard the body of scientific evidence. © 2017 The Korean Academy of Medical Sciences.
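The reporting rules summarized above translate directly into code. A minimal sketch with simulated variables (the distributions are invented for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
symmetric = rng.normal(120, 15, 200)    # e.g. a roughly normal measurement
skewed = rng.lognormal(1.0, 0.9, 200)   # e.g. a right-skewed measurement

# Normally distributed data: report mean (SD)
print(f"mean (SD): {symmetric.mean():.1f} ({symmetric.std(ddof=1):.1f})")

# Non-normally distributed data: report median (IQR)
q1, med, q3 = np.percentile(skewed, [25, 50, 75])
print(f"median (IQR): {med:.1f} ({q1:.1f}-{q3:.1f})")

# SEM = SD / sqrt(n) describes the precision of the mean, not data dispersion,
# which is why reporting it as a spread measure understates variability
sem = stats.sem(symmetric)
print(f"SEM = {sem:.2f} vs SD = {symmetric.std(ddof=1):.2f}")
```

Because the SEM shrinks with sample size while the SD does not, quoting "mean ± SEM" makes data look far less variable than they are, the exact error the article warns against.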

  19. Statistical applications for chemistry, manufacturing and controls (CMC) in the pharmaceutical industry

    CERN Document Server

    Burdick, Richard K; Pfahler, Lori B; Quiroz, Jorge; Sidor, Leslie; Vukovinsky, Kimberly; Zhang, Lanju

    2017-01-01

    This book examines statistical techniques that are critically important to Chemistry, Manufacturing, and Control (CMC) activities. Statistical methods are presented with a focus on applications unique to the CMC in the pharmaceutical industry. The target audience consists of statisticians and other scientists who are responsible for performing statistical analyses within a CMC environment. Basic statistical concepts are addressed in Chapter 2 followed by applications to specific topics related to development and manufacturing. The mathematical level assumes an elementary understanding of statistical methods. The ability to use Excel or statistical packages such as Minitab, JMP, SAS, or R will provide more value to the reader. The motivation for this book came from an American Association of Pharmaceutical Scientists (AAPS) short course on statistical methods applied to CMC applications presented by four of the authors. One of the course participants asked us for a good reference book, and the only book recomm...

  20. Anti-schistosomal intervention targets identified by lifecycle transcriptomic analyses.

    Directory of Open Access Journals (Sweden)

    Jennifer M Fitzpatrick

    2009-11-01

    Full Text Available Novel methods to identify anthelmintic drug and vaccine targets are urgently needed, especially for those parasite species currently being controlled by singular, often limited strategies. A clearer understanding of the transcriptional components underpinning helminth development will enable identification of exploitable molecules essential for successful parasite/host interactions. Towards this end, we present a combinatorial, bioinformatics-led approach, employing both statistical and network analyses of transcriptomic data, for identifying new immunoprophylactic and therapeutic lead targets to combat schistosomiasis. Utilisation of a Schistosoma mansoni oligonucleotide DNA microarray consisting of 37,632 elements enabled gene expression profiling from 15 distinct parasite lifecycle stages, spanning three unique ecological niches. Statistical approaches of data analysis revealed differential expression of 973 gene products that minimally describe the three major characteristics of schistosome development: asexual processes within intermediate snail hosts, sexual maturation within definitive vertebrate hosts and sexual dimorphism amongst adult male and female worms. Furthermore, we identified a group of 338 constitutively expressed schistosome gene products (including 41 transcripts sharing no sequence similarity outside the Platyhelminthes), which are likely to be essential for schistosome lifecycle progression. While highly informative, statistics-led bioinformatics mining of the transcriptional dataset has limitations, including the inability to identify higher order relationships between differentially expressed transcripts and lifecycle stages. Network analysis, coupled to Gene Ontology enrichment investigations, facilitated a re-examination of the dataset and identified 387 clusters (containing 12,132 gene products displaying novel examples of developmentally regulated classes (including 294 schistosomula and/or adult transcripts with no

  1. Steam Generator Group Project. Progress report on data acquisition/statistical analysis

    International Nuclear Information System (INIS)

    Doctor, P.G.; Buchanan, J.A.; McIntyre, J.M.; Hof, P.J.; Ercanbrack, S.S.

    1984-01-01

    A major task of the Steam Generator Group Project (SGGP) is to establish the reliability of the eddy current inservice inspections of PWR steam generator tubing, by comparing the eddy current data to the actual physical condition of the tubes via destructive analyses. This report describes the plans for the computer systems needed to acquire, store and analyze the diverse data to be collected during the project. The real-time acquisition of the baseline eddy current inspection data will be handled using a specially designed data acquisition computer system based on a Digital Equipment Corporation (DEC) PDP-11/44. The data will be archived in digital form for use after the project is completed. Data base management and statistical analyses will be done on a DEC VAX-11/780. Color graphics will be heavily used to summarize the data and the results of the analyses. The report describes the data that will be taken during the project and the statistical methods that will be used to analyze the data. 7 figures, 2 tables

  2. Statistics and Dynamics in the Large-scale Structure of the Universe

    International Nuclear Information System (INIS)

    Matsubara, Takahiko

    2006-01-01

    In cosmology, observations and theories are related to each other by statistics in most cases. Especially, statistical methods play central roles in analyzing fluctuations in the universe, which are seeds of the present structure of the universe. The confrontation of the statistics and dynamics is one of the key methods to unveil the structure and evolution of the universe. I will review some of the major statistical methods in cosmology, in connection with linear and nonlinear dynamics of the large-scale structure of the universe. The present status of analyses of the observational data such as the Sloan Digital Sky Survey, and the future prospects to constrain the nature of exotic components of the universe such as the dark energy will be presented

  3. Applications of Infrared and Raman Spectroscopies to Probiotic Investigation

    Science.gov (United States)

    Santos, Mauricio I.; Gerbino, Esteban; Tymczyszyn, Elizabeth; Gomez-Zavaglia, Andrea

    2015-01-01

    In this review, we overview the most important contributions of vibrational spectroscopy based techniques in the study of probiotics and lactic acid bacteria. First, we briefly introduce the fundamentals of these techniques, together with the main multivariate analytical tools used for spectral interpretation. Then, four main groups of applications are reported: (a) bacterial taxonomy (Subsection 4.1); (b) bacterial preservation (Subsection 4.2); (c) monitoring processes involving lactic acid bacteria and probiotics (Subsection 4.3); (d) imaging-based applications (Subsection 4.4). A final conclusion, underlying the potentialities of these techniques, is presented. PMID:28231205

  4. 47th Scientific Meeting of the Italian Statistical Society

    CERN Document Server

    Moreno, Elías; Racugno, Walter

    2016-01-01

    This book brings together selected peer-reviewed contributions from various research fields in statistics, and highlights the diverse approaches and analyses related to real-life phenomena. Major topics covered in this volume include, but are not limited to, Bayesian inference, likelihood approach, pseudo-likelihoods, regression, time series, and data analysis as well as applications in the life and social sciences. The software packages used in the papers are made available by the authors. This book is a result of the 47th Scientific Meeting of the Italian Statistical Society, held at the University of Cagliari, Italy, in 2014.

  5. The statistical chopper in the time-of-flight technique

    International Nuclear Information System (INIS)

    Albuquerque Vieira, J. de.

    1975-12-01

    A detailed study of the 'statistical' chopper and of the method of analysis of the data obtained by this technique is made. The study includes: the basic ideas behind correlation methods applied in time-of-flight techniques; comparisons with the conventional chopper made by an analysis of statistical errors; the development of a FORTRAN computer programme to analyse experimental results; the presentation of related fields of work to demonstrate the potential of this method; and suggestions for future study, together with the criteria for a time-of-flight experiment using the method being studied.
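The correlation idea can be demonstrated numerically: modulate a hypothetical time-of-flight spectrum with a pseudo-random (maximal-length) chopper sequence, then recover it by circular cross-correlation with the ±1 version of that sequence. Everything here (sequence length, peak shape, noise level) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# 7-bit LFSR m-sequence, period N = 127 (feedback taps 7 and 6,
# i.e. the primitive polynomial x^7 + x^6 + 1)
N = 127
state = np.ones(7, dtype=int)
s = np.empty(N, dtype=int)               # chopper open (1) / closed (0)
for i in range(N):
    s[i] = state[-1]
    fb = state[-1] ^ state[-2]
    state = np.concatenate(([fb], state[:-1]))

# Hypothetical time-of-flight spectrum: one Gaussian peak at channel 40
t = np.arange(N)
f = 100.0 * np.exp(-0.5 * ((t - 40) / 4.0) ** 2)

# Detector counts: circular convolution of the open/closed sequence with the
# spectrum (many overlapping pulses), plus detector noise
idx = (t[:, None] - t[None, :]) % N      # idx[k, j] = (k - j) mod N
y = (s[idx] * f).sum(axis=1) + rng.normal(0.0, 1.0, N)

# Recovery: circular cross-correlation with the +/-1 sequence; for an
# m-sequence this inverts the modulation exactly, up to the factor (N + 1) / 2
a = 2 * s - 1
f_hat = np.array([np.dot(y, np.roll(a, k)) for k in range(N)]) * 2.0 / (N + 1)
print(f"peak recovered at channel {np.argmax(f_hat)} (true peak at 40)")
```

The statistical gain comes from the chopper being open roughly half the time instead of transmitting one narrow pulse per period, while the near-delta autocorrelation of the m-sequence keeps the deconvolution trivial.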

  6. Excel 2016 for engineering statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching business statistics effectively. Similar to the previously published Excel 2010 for Business Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical business problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in business courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Business Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each ch...

  7. Excel 2016 for business statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching business statistics effectively. Similar to the previously published Excel 2010 for Business Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical business problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in business courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Business Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each ch...

  8. Excel 2016 for marketing statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This is the first book to show the capabilities of Microsoft Excel in teaching marketing statistics effectively. It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical marketing problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in marketing courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Marketing Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader t...

  9. Excel 2013 for engineering statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2015-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach engineering statistics effectively.  It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical engineering problems.  If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in engineering courses.  Its powerful computational ability and graphical functions make learning statistics much easier than in years past.  However, Excel 2013 for Engineering Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs...

  10. Statistical thermodynamics

    International Nuclear Information System (INIS)

    Lim, Gyeong Hui

    2008-03-01

    This book consists of 15 chapters, covering: basic concepts and the meaning of statistical thermodynamics, Maxwell–Boltzmann statistics, ensembles, thermodynamic functions and fluctuations, the statistical dynamics of independent-particle systems, ideal molecular systems, chemical equilibrium and chemical reaction rates in ideal gas mixtures, classical statistical thermodynamics, the ideal lattice model, lattice statistics and non-ideal lattice models, imperfect-gas theory of liquids, the theory of solutions, the statistical thermodynamics of interfaces, the statistical thermodynamics of high-polymer systems, and quantum statistics.

  11. Environmental restoration and statistics: Issues and needs

    International Nuclear Information System (INIS)

    Gilbert, R.O.

    1991-10-01

    Statisticians have a vital role to play in environmental restoration (ER) activities. One facet of that role is to point out where additional work is needed to develop statistical sampling plans and data analyses that meet the needs of ER. This paper is an attempt to show where statistics fits into the ER process. The statistician, as a member of the ER planning team, works collaboratively with the team to develop the site characterization sampling design, so that data of the quality and quantity required by the specified data quality objectives (DQOs) are obtained. At the same time, the statistician works with the rest of the planning team to design and implement, when appropriate, the observational approach to streamline the ER process and reduce costs. The statistician will also provide the expertise needed to select or develop appropriate tools for statistical analysis that are suited for problems that are common to waste-site data. These data problems include highly heterogeneous waste forms, large variability in concentrations over space, correlated data, data that do not have a normal (Gaussian) distribution, and measurements below detection limits. Other problems include environmental transport and risk models that yield highly uncertain predictions, and the need to effectively communicate to the public highly technical information, such as sampling plans, site characterization data, statistical analysis results, and risk estimates. Even though some statistical analysis methods are available "off the shelf" for use in ER, these problems require the development of additional statistical tools, as discussed in this paper. (29 refs)

  12. Isotopic safeguards statistics

    International Nuclear Information System (INIS)

    Timmerman, C.L.; Stewart, K.B.

    1978-06-01

    The methods and results of our statistical analysis of isotopic data using isotopic safeguards techniques are illustrated using example data from the Yankee Rowe reactor. The statistical methods used in this analysis are paired comparison and regression analyses. A paired comparison results when a sample from a batch is analyzed by two different laboratories. Paired comparison techniques can be used with regression analysis to detect and identify outlier batches. The second analysis tool, linear regression, involves comparing various regression approaches. These approaches use two basic types of models: the intercept model (y = α + βx) and the initial point model [y − y₀ = β(x − x₀)]. The intercept model fits strictly the exposure or burnup values of isotopic functions, while the initial point model utilizes the exposure values plus the initial or fabricator's data values in the regression analysis. Two fitting methods are applied to each of these models: (1) the usual least-squares fitting approach, where x is measured without error, and (2) Deming's approach, which uses the variance estimates obtained from the paired comparison results and considers that both x and y are measured with error. The Yankee Rowe data were first measured by Nuclear Fuel Services (NFS) and remeasured by Nuclear Audit and Testing Company (NATCO). The isotopic function illustrated is the ratio of Pu/U versus ²³⁵D (where ²³⁵D is the amount of depleted ²³⁵U expressed in weight percent), using actual measured values. Statistical results using the Yankee Rowe data indicate the attractiveness of Deming's regression model over the usual approach by simple comparison of the given regression variances with the random variance from the paired comparison results.
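As a generic illustration of the two fitting methods compared above, the sketch below contrasts an ordinary least-squares slope (x assumed error-free) with a Deming slope (both x and y measured with error). The data and the variance ratio `delta` are invented for the example; in the study itself the ratio would come from the paired-comparison variance estimates.

```python
import math

def ols_slope(x, y):
    """Ordinary least squares: x is assumed to be measured without error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def deming_slope(x, y, delta=1.0):
    """Deming regression: both x and y measured with error.
    delta is the ratio of the y-error variance to the x-error variance."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - my) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)
    return (syy - delta * sxx
            + math.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy ** 2)) / (2 * sxy)

# Toy burnup-style data: y roughly 0.5 + 2x with small scatter in both axes
x = [1.0, 2.1, 2.9, 4.2, 5.0]
y = [2.4, 4.8, 6.4, 8.9, 10.6]
print(ols_slope(x, y), deming_slope(x, y))
```

With `delta = 1` the Deming fit reduces to orthogonal regression; unlike ordinary least squares, it does not attenuate the slope when x itself carries measurement error.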

  13. Methodological difficulties of conducting agroecological studies from a statistical perspective

    DEFF Research Database (Denmark)

    Bianconi, A.; Dalgaard, Tommy; Manly, Bryan F J

    2013-01-01

    Statistical methods for analysing agroecological data might not be able to help agroecologists to solve all of the current problems concerning crop and animal husbandry, but such methods could well help agroecologists to assess, tackle, and resolve several agroecological issues in a more reliable and accurate manner. Therefore, our goal in this paper is to discuss the importance of statistical tools for alternative agronomic approaches, because alternative approaches, such as organic farming, should not only be promoted by encouraging farmers to deploy agroecological techniques, but also by providing...

  14. The Australasian Resuscitation in Sepsis Evaluation (ARISE) trial statistical analysis plan.

    Science.gov (United States)

    Delaney, Anthony P; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve

    2013-09-01

    The Australasian Resuscitation in Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the emergency department with severe sepsis. In keeping with current practice, and considering aspects of trial design and reporting specific to non-pharmacological interventions, our plan outlines the principles and methods for analysing and reporting the trial results. The document is prepared before completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and before completion of the two related international studies. Our statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. We reviewed the data collected by the research team as specified in the study protocol and detailed in the study case report form. We describe information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation, other related therapies and other relevant data with appropriate comparisons between groups. We define the primary, secondary and tertiary outcomes for the study, with description of the planned statistical analyses. We have developed a statistical analysis plan with a trial profile, mock-up tables and figures. We describe a plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies and adverse events. We describe the primary, secondary and tertiary outcomes with identification of subgroups to be analysed. We have developed a statistical analysis plan for the ARISE study, available in the public domain, before the completion of recruitment into the study. This will minimise analytical bias and...

  15. Incorporating an Interactive Statistics Workshop into an Introductory Biology Course-Based Undergraduate Research Experience (CURE) Enhances Students’ Statistical Reasoning and Quantitative Literacy Skills †

    Science.gov (United States)

    Olimpo, Jeffrey T.; Pevey, Ryan S.; McCabe, Thomas M.

    2018-01-01

    Course-based undergraduate research experiences (CUREs) provide an avenue for student participation in authentic scientific opportunities. Within the context of such coursework, students are often expected to collect, analyze, and evaluate data obtained from their own investigations. Yet, limited research has been conducted that examines mechanisms for supporting students in these endeavors. In this article, we discuss the development and evaluation of an interactive statistics workshop that was expressly designed to provide students with an open platform for graduate teaching assistant (GTA)-mentored data processing, statistical testing, and synthesis of their own research findings. Mixed methods analyses of pre/post-intervention survey data indicated a statistically significant increase in students’ reasoning and quantitative literacy abilities in the domain, as well as enhancement of student self-reported confidence in and knowledge of the application of various statistical metrics to real-world contexts. Collectively, these data reify an important role for scaffolded instruction in statistics in preparing emergent scientists to be data-savvy researchers in a globally expansive STEM workforce. PMID:29904549

  16. Using Data Mining to Teach Applied Statistics and Correlation

    Science.gov (United States)

    Hartnett, Jessica L.

    2016-01-01

    This article describes two class activities that introduce the concept of data mining and very basic data mining analyses. Assessment data suggest that students learned some of the conceptual basics of data mining, understood some of the ethical concerns related to the practice, and were able to perform correlations via the Statistical Package for…

  17. Statistical universals reveal the structures and functions of human music.

    Science.gov (United States)

    Savage, Patrick E; Brown, Steven; Sakai, Emi; Currie, Thomas E

    2015-07-21

    Music has been called "the universal language of mankind." Although contemporary theories of music evolution often invoke various musical universals, the existence of such universals has been disputed for decades and has never been empirically demonstrated. Here we combine a music-classification scheme with statistical analyses, including phylogenetic comparative methods, to examine a well-sampled global set of 304 music recordings. Our analyses reveal no absolute universals but strong support for many statistical universals that are consistent across all nine geographic regions sampled. These universals include 18 musical features that are common individually as well as a network of 10 features that are commonly associated with one another. They span not only features related to pitch and rhythm that are often cited as putative universals but also rarely cited domains including performance style and social context. These cross-cultural structural regularities of human music may relate to roles in facilitating group coordination and cohesion, as exemplified by the universal tendency to sing, play percussion instruments, and dance to simple, repetitive music in groups. Our findings highlight the need for scientists studying music evolution to expand the range of musical cultures and musical features under consideration. The statistical universals we identified represent important candidates for future investigation.

  18. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
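As a concrete sketch of method (ii): uniformization evaluates the transition matrix P(t) = exp(Qt) as a Poisson-weighted sum of powers of the stochastic matrix R = I + Q/Λ, with Λ at least the largest exit rate. The two-state example below is illustrative only (not code from the paper) and checks the truncated series against the known closed form.

```python
import math

def uniformization(Q, t, terms=50):
    """Approximate P(t) = exp(Qt) for a 2-state chain via uniformization:
    P(t) = sum_k Pois(k; L*t) * R^k, where R = I + Q/L."""
    L = max(abs(Q[0][0]), abs(Q[1][1]))  # uniformization rate Lambda
    R = [[Q[i][j] / L + (1.0 if i == j else 0.0) for j in range(2)]
         for i in range(2)]
    P = [[0.0, 0.0], [0.0, 0.0]]
    Rk = [[1.0, 0.0], [0.0, 1.0]]        # running power R^k, starting at R^0 = I
    w = math.exp(-L * t)                 # Poisson weight for k = 0
    for k in range(terms):
        for i in range(2):
            for j in range(2):
                P[i][j] += w * Rk[i][j]
        Rk = [[sum(Rk[i][m] * R[m][j] for m in range(2)) for j in range(2)]
              for i in range(2)]
        w *= L * t / (k + 1)             # next Poisson weight
    return P

# Two-state chain with rates a (0 -> 1) and b (1 -> 0); closed form:
# P01(t) = a / (a + b) * (1 - exp(-(a + b) * t))
a, b, t = 1.5, 0.5, 0.8
P = uniformization([[-a, a], [b, -b]], t)
exact01 = a / (a + b) * (1 - math.exp(-(a + b) * t))
print(P[0][1], exact01)
```

The same Poisson-weighting idea extends to the summary statistics themselves, which is what makes the method attractive for discretely observed chains.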

  19. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology; process technology and know-how; quality and productivity improvement; production planning and control; shop-floor management; product design and development; supplier relationship management; and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA, and relationships between agile components are tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to become agile.

  20. Statistical analysis of the determinations of the Sun's Galactocentric distance

    Science.gov (United States)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
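One standard metrological treatment of such a collection of measurements is the inverse-variance weighted mean, in which each published value is weighted by its formal error. The sketch below uses invented toy values, not the 53 estimates analysed in the paper.

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its formal uncertainty."""
    weights = [1.0 / s ** 2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, math.sqrt(1.0 / wsum)

# Toy R0-like measurements (kpc) with formal errors -- illustrative only
R0 = [8.4, 8.0, 8.3, 7.9, 8.2]
sig = [0.4, 0.5, 0.3, 0.6, 0.35]
mean, err = weighted_mean(R0, sig)
print(mean, err)
```

Note that the combined formal error is smaller than any individual error; assessing whether the scatter of the values is consistent with those errors is exactly the kind of internal-consistency check the paper performs.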

  1. Introduction to nonparametric statistics for the biological sciences using R

    CERN Document Server

    MacFarland, Thomas W

    2016-01-01

    This book contains a rich set of tools for nonparametric analyses, and the purpose of this supplemental text is to provide guidance to students and professional researchers on how R is used for nonparametric data analysis in the biological sciences: To introduce when nonparametric approaches to data analysis are appropriate To introduce the leading nonparametric tests commonly used in biostatistics and how R is used to generate appropriate statistics for each test To introduce common figures typically associated with nonparametric data analysis and how R is used to generate appropriate figures in support of each data set The book focuses on how R is used to distinguish between data that could be classified as nonparametric as opposed to data that could be classified as parametric, with both approaches to data classification covered extensively. Following an introductory lesson on nonparametric statistics for the biological sciences, the book is organized into eight self-contained lessons on various analyses a...

  2. Mortality variation across Australia: descriptive data for states and territories, and statistical divisions.

    Science.gov (United States)

    Wilkinson, D; Hiller, J; Moss, J; Ryan, P; Worsley, T

    2000-06-01

    To describe variation in all-cause and selected cause-specific mortality rates across Australia. Mortality and population data for 1997 were obtained from the Australian Bureau of Statistics. All-cause and selected cause-specific mortality rates were calculated and directly standardised to the 1997 Australian population in 5-year age groups. Selected major causes of death included cancer, coronary artery disease, cerebrovascular disease, diabetes, accidents and suicide. Rates are reported by statistical division, and by State and Territory. All-cause age-standardised mortality was 6.98 per 1000 in 1997, and this varied 2-fold from a low in the statistical division of Pilbara, Western Australia (5.78, 95% confidence interval 5.06-6.56), to a high in the Northern Territory excluding Darwin (11.30, 10.67-11.98). Similar, statistically significant mortality variation was observed for the major killers, with larger variation for suicide (0.6-3.8 per 10,000). Less marked variation was observed when analysed by State and Territory, but the Northern Territory consistently has the highest age-standardised mortality rates. Analysed by statistical division, substantial mortality gradients exist across Australia, suggesting an inequitable distribution of the determinants of health. Further research is required to better understand this heterogeneity.
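Direct standardisation, as used above, weights each area's age-specific death rates by a fixed standard population so that areas with different age structures can be compared. A minimal sketch with hypothetical numbers (two age bands only; the function and figures are illustrative, not the study's data):

```python
def direct_standardised_rate(deaths, pops, std_pop):
    """Directly age-standardised rate per 1000 person-years:
    each age-specific rate is weighted by the standard population's share."""
    total_std = sum(std_pop)
    rate = sum((d / p) * s for d, p, s in zip(deaths, pops, std_pop)) / total_std
    return 1000.0 * rate

# Hypothetical area: deaths and person-years in two age bands,
# standardised to a reference population (e.g. the 1997 Australian population)
deaths = [30, 200]              # young band, old band
pops = [40000, 20000]           # person-years at risk in the area
std_pop = [700000, 300000]      # standard population in the same bands
print(direct_standardised_rate(deaths, pops, std_pop))
```

Because the weights are fixed, two areas' standardised rates differ only through their age-specific rates, not through their age structures.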

  3. Establishment, maintenance and application of failure statistics as a basis for availability optimization

    International Nuclear Information System (INIS)

    Poll, H.

    1989-01-01

    The purpose of failure statistics is to obtain indications of weak points due to operation and design. The present failure statistics of Rheinisch-Westfaelisches Elektrizitaetswerk (RWE) are based on reductions in the availability of power station units. If damage or trouble occurs with a unit, data are recorded in order to calculate the unavailability and to describe the occurrence, the extent, and the removal of the damage. Following a survey of the most important data, a short explanation is given of the updating of the failure statistics, and some problems of this job are mentioned. Finally, some examples are given of how failure statistics can be used for analyses. (orig.)

  4. Applications of Infrared and Raman Spectroscopies to Probiotic Investigation

    Directory of Open Access Journals (Sweden)

    Mauricio I. Santos

    2015-07-01

    In this review, we give an overview of the most important contributions of vibrational-spectroscopy-based techniques to the study of probiotics and lactic acid bacteria. First, we briefly introduce the fundamentals of these techniques, together with the main multivariate analytical tools used for spectral interpretation. Then, four main groups of applications are reported: (a) bacterial taxonomy (Subsection 4.1); (b) bacterial preservation (Subsection 4.2); (c) monitoring of processes involving lactic acid bacteria and probiotics (Subsection 4.3); (d) imaging-based applications (Subsection 4.4). A final conclusion, underlining the potential of these techniques, is presented.

  5. Calculating statistical distributions from operator relations: The statistical distributions of various intermediate statistics

    International Nuclear Information System (INIS)

    Dai, Wu-Sheng; Xie, Mi

    2013-01-01

    In this paper, we give a general discussion of the calculation of the statistical distribution from a given operator relation of creation, annihilation, and number operators. Our result shows that as long as the relation between the number operator and the creation and annihilation operators can be expressed as a†b = Λ(N) or N = Λ⁻¹(a†b), where N, a†, and b denote the number, creation, and annihilation operators, i.e., N is a function of the quadratic product of the creation and annihilation operators, the corresponding statistical distribution is the Gentile distribution, a statistical distribution in which the maximum occupation number is an arbitrary integer. As examples, we discuss the statistical distributions corresponding to various operator relations. In particular, besides the Bose–Einstein and Fermi–Dirac cases, we discuss the statistical distributions for various schemes of intermediate statistics, especially various q-deformation schemes. Our result shows that the statistical distributions corresponding to various q-deformation schemes are various Gentile distributions with different maximum occupation numbers, which are determined by the deformation parameter q. This result shows that the results given in much of the literature on the q-deformation distribution are inaccurate or incomplete. Highlights: a general discussion of calculating the statistical distribution from relations of creation, annihilation, and number operators; a systematic study of the statistical distributions corresponding to various q-deformation schemes; an argument that many results on q-deformation distributions in the literature are inaccurate or incomplete.
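For the Gentile distribution mentioned above, the mean occupation number of a state holding at most q particles has the closed form ⟨n⟩ = 1/(eˣ − 1) − (q+1)/(e^((q+1)x) − 1), with x = β(ε − μ); q = 1 recovers Fermi–Dirac and q → ∞ recovers Bose–Einstein. A quick numerical check of those limits (a generic sketch, not code from the paper):

```python
import math

def gentile(x, q):
    """Mean occupation for Gentile statistics with maximum occupation q;
    x = beta * (epsilon - mu), taken positive so the geometric sums converge."""
    return 1.0 / (math.exp(x) - 1.0) - (q + 1) / (math.exp((q + 1) * x) - 1.0)

def fermi_dirac(x):
    return 1.0 / (math.exp(x) + 1.0)

def bose_einstein(x):
    return 1.0 / (math.exp(x) - 1.0)

x = 0.7
print(gentile(x, 1), fermi_dirac(x))      # q = 1 recovers Fermi-Dirac
print(gentile(x, 100), bose_einstein(x))  # large q approaches Bose-Einstein
```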

  6. [Methods, challenges and opportunities for big data analyses of microbiome].

    Science.gov (United States)

    Sheng, Hua-Fang; Zhou, Hong-Wei

    2015-07-01

    Microbiome is a novel research field related to a variety of chronic inflammatory diseases. Technically, there are two major approaches to the analysis of the microbiome: metataxonomics, by sequencing of the 16S rRNA variable tags, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analysis further includes gene annotation and functional analyses. With the development of sequencing techniques, the cost of sequencing will decrease, and big-data analyses will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for the future exploitation of these data. Meanwhile, the information content of these data, and functional verification with culture-dependent and culture-independent experiments, remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promises rich research opportunities.

  7. Bibliography of integral charged particle nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Burrows, T.W.; Burt, J.S.

    1977-03-01

    This bibliography is divided into three main sections covering experimental, theoretical, and review references. The review section also includes compilation and evaluation references. Each section contains two subsections. The main subsection contains all references satisfying the criteria noted above and the second subsection is devoted to isotope production. The main subsections are ordered by increasing Z and A of the incident particle, then by increasing Z and A of the target nucleus. Within this order, the entries are ordered by residual nucleus and quantity (e.g., sigma(E)). Finally, the entries are ordered by outgoing particles or processes. All entries which have the same target, reaction, and quantity are grouped under a common heading with the most recent reference first. As noted above the second subsection is devoted to isotope production and is limited in the information it carries. Only those references which contain data on a definite residual nucleus or group of nuclei (e.g., fission fragments) are included in these subsections. Entries within these second subsections are ordered by increasing Z and A of the isotope produced and then by quantity. All references containing data on the same isotope production and quantity are grouped together. All lines within a group are ordered by increasing Z and A of the target and then of the incident particle. The final ordering is by increasing minimum energy.
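The multi-level ordering described above is essentially a lexicographic sort on a tuple of keys, with the year descending within each group ("most recent reference first"). The sketch below uses invented field names and toy entries purely to illustrate the sort key, not the bibliography's actual record format.

```python
# Each entry is a dict; field names (zi/ai = incident particle Z/A,
# zt/at = target Z/A) are illustrative only.
entries = [
    {"zi": 2, "ai": 4, "zt": 26, "at": 56, "quantity": "sigma(E)", "year": 1975},
    {"zi": 1, "ai": 1, "zt": 26, "at": 56, "quantity": "sigma(E)", "year": 1976},
    {"zi": 1, "ai": 1, "zt": 13, "at": 27, "quantity": "sigma(E)", "year": 1974},
    {"zi": 1, "ai": 1, "zt": 13, "at": 27, "quantity": "sigma(E)", "year": 1977},
]

# Order by Z, A of the incident particle, then Z, A of the target, then
# quantity; within a group, most recent reference first (descending year).
entries.sort(key=lambda e: (e["zi"], e["ai"], e["zt"], e["at"],
                            e["quantity"], -e["year"]))
print([(e["zt"], e["year"]) for e in entries])
```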

  8. Bibliography of integral charged particle nuclear data

    International Nuclear Information System (INIS)

    Burrows, T.W.; Burt, J.S.

    1977-03-01

    This bibliography is divided into three main sections covering experimental, theoretical, and review references. The review section also includes compilation and evaluation references. Each section contains two subsections. The main subsection contains all references satisfying the criteria noted above and the second subsection is devoted to isotope production. The main subsections are ordered by increasing Z and A of the incident particle, then by increasing Z and A of the target nucleus. Within this order, the entries are ordered by residual nucleus and quantity (e.g., sigma(E)). Finally, the entries are ordered by outgoing particles or processes. All entries which have the same target, reaction, and quantity are grouped under a common heading with the most recent reference first. As noted above the second subsection is devoted to isotope production and is limited in the information it carries. Only those references which contain data on a definite residual nucleus or group of nuclei (e.g., fission fragments) are included in these subsections. Entries within these second subsections are ordered by increasing Z and A of the isotope produced and then by quantity. All references containing data on the same isotope production and quantity are grouped together. All lines within a group are ordered by increasing Z and A of the target and then of the incident particle. The final ordering is by increasing minimum energy

  9. Eulerian vs. Lagrangian analyses of pedestrian dynamics asymmetries in a staircase landing

    NARCIS (Netherlands)

    Corbetta, A.; Lee, C.-M.; Muntean, A.; Toschi, F.

    2016-01-01

    Real-life, out-of-laboratory measurements of pedestrian movements allow extensive and fully resolved statistical analyses. However, data acquisition in real life is subject to the wide heterogeneity that characterizes crowd flows over time. Disparate flow conditions, such as co-flows and...

  10. Frame vs. trajectory analyses of pedestrian dynamics asymmetries in a staircase landing

    NARCIS (Netherlands)

    Corbetta, A.; Lee, C.-M.; Muntean, A.; Toschi, F.

    Real-life, out-of-laboratory measurements of pedestrian walking dynamics allow extensive and fully resolved statistical analyses. However, data acquisition in real life is subject to the randomness and heterogeneity that characterizes crowd flows over time. In a typical real-life location,...

  11. From sexless to sexy: Why it is time for human genetics to consider and report analyses of sex.

    Science.gov (United States)

    Powers, Matthew S; Smith, Phillip H; McKee, Sherry A; Ehringer, Marissa A

    2017-01-01

    Science has come a long way with regard to the consideration of sex differences in clinical and preclinical research, but one field remains behind the curve: human statistical genetics. The goal of this commentary is to raise awareness and discussion about how best to consider and evaluate possible sex effects in the context of large-scale human genetic studies. Over the course of this commentary, we reinforce the importance of interpreting genetic results in the context of biological sex, establish evidence that sex differences are not being considered in human statistical genetics, and discuss how best to conduct and report such analyses. Our recommendation is to run analyses stratified by sex, regardless of the sample size or the result, and to report the findings. Summary statistics from stratified analyses are helpful for meta-analyses, and patterns of sex-dependent associations may be hidden in a combined dataset. In the age of declining sequencing costs, large consortia efforts, and a number of useful control samples, it is now time for the field of human genetics to appropriately include sex in the design, analysis, and reporting of results.
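The stratified-then-combined workflow recommended above can be sketched as a fixed-effect inverse-variance meta-analysis of the per-sex effect estimates, plus a simple z-statistic for sex heterogeneity. The effect sizes and standard errors below are toy numbers, not real association results.

```python
import math

def inverse_variance_meta(betas, ses):
    """Fixed-effect meta-analysis of per-stratum effect estimates."""
    weights = [1.0 / se ** 2 for se in ses]
    wsum = sum(weights)
    beta = sum(w * b for w, b in zip(weights, betas)) / wsum
    se = math.sqrt(1.0 / wsum)
    return beta, se

# Toy per-stratum association results for one variant
beta_female, se_female = 0.12, 0.04
beta_male, se_male = 0.02, 0.05

beta, se = inverse_variance_meta([beta_female, beta_male], [se_female, se_male])
# Simple z-test for a difference in effect between the sex strata
z_diff = (beta_female - beta_male) / math.sqrt(se_female ** 2 + se_male ** 2)
print(beta, se, z_diff)
```

Reporting the stratum-level summary statistics, not just the combined estimate, is what makes such heterogeneity checks possible downstream.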

  12. Perceptual and statistical analysis of cardiac phase and amplitude images

    International Nuclear Information System (INIS)

    Houston, A.; Craig, A.

    1991-01-01

    A perceptual experiment was conducted using cardiac phase and amplitude images. Estimates of statistical parameters were derived from the images, and the diagnostic potential of human and statistical decisions compared. Five methods were used to generate the images from 75 gated cardiac studies, 39 of which were classified as pathological. The images were presented to 12 observers experienced in nuclear medicine. The observers rated the images using a five-category scale based on their confidence that an abnormality was present. Circular and linear statistics were used to analyse the phase and amplitude image data, respectively. Estimates of the mean, standard deviation (SD), skewness, kurtosis and the first term of the spatial correlation function were evaluated in the region of the left ventricle. A receiver operating characteristic analysis was performed on both sets of data and the human and statistical decisions compared. For phase images, circular SD was shown to discriminate better between normal and abnormal than experienced observers, but no single statistic discriminated as well as the human observer for amplitude images. (orig.)
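The circular SD used for the phase images follows from the mean resultant length R̄ of the phase angles, SD = √(−2 ln R̄): it is small when phases cluster tightly and grows with dispersion. A minimal sketch with invented phase values, not the study's data:

```python
import math

def circular_sd(angles_deg):
    """Circular standard deviation (radians) from the mean resultant length."""
    n = len(angles_deg)
    c = sum(math.cos(math.radians(a)) for a in angles_deg) / n
    s = sum(math.sin(math.radians(a)) for a in angles_deg) / n
    R = math.hypot(c, s)                 # mean resultant length, 0 <= R <= 1
    return math.sqrt(-2.0 * math.log(R))

# Tightly clustered phases (e.g. a synchronously contracting ventricle)
tight = [358, 2, 5, 355, 1]
# Widely dispersed phases (dyssynchrony)
wide = [0, 90, 180, 250, 310]
print(circular_sd(tight), circular_sd(wide))
```

Unlike a linear SD, this statistic handles the wrap-around at 0°/360° correctly, which is why circular statistics are the natural choice for phase data.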

  13. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  14. Statistical modelling of traffic safety development

    DEFF Research Database (Denmark)

    Christens, Peter

    2004-01-01

    there were 6861 injury traffic accidents reported by the police, resulting in 4519 minor injuries, 3946 serious injuries, and 431 fatalities. The general purpose of the research was to improve the insight into aggregated road safety methodology in Denmark. The aim was to analyse advanced statistical methods......, that were designed to study developments over time, including effects of interventions. This aim has been achieved by investigating variations in aggregated Danish traffic accident series and by applying state-of-the-art methodologies to specific case studies. The thesis comprises an introduction...

  15. Stochastics introduction to probability and statistics

    CERN Document Server

    Georgii, Hans-Otto

    2012-01-01

    This second revised and extended edition presents the fundamental ideas and results of both probability theory and statistics, and comprises the material of a one-year course. It is addressed to students with an interest in the mathematical side of stochastics. Stochastic concepts, models and methods are motivated by examples and developed and analysed systematically. Some measure theory is included, but this is done at an elementary level that is in accordance with the introductory character of the book. A large number of problems offer applications and supplements to the text.

  16. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Devereaux, P J; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Results from apparently conclusive meta-analyses may be false. A limited number of events from a few small trials and the associated random error may be under-recognized sources of spurious findings. The information size (IS, i.e. number of participants) required for a reliable......-analyses after each included trial and evaluated their results using a conventional statistical criterion (alpha = 0.05) and two-sided Lan-DeMets monitoring boundaries. We examined the proportion of false positive results and important inaccuracies in estimates of treatment effects that resulted from the two...... approaches. RESULTS: Using the random-effects model and final data, 12 of the meta-analyses yielded P > alpha = 0.05, and 21 yielded P < alpha = 0.05. The monitoring boundaries eliminated all false positives. Important inaccuracies in estimates were observed in 6 out of 21 meta-analyses using the conventional...
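The monitoring-boundary idea can be sketched numerically. The code below computes the O'Brien-Fleming-type alpha-spending function used in Lan-DeMets designs and derives crude per-look boundaries from the cumulative spending; exact Lan-DeMets boundaries require numerical integration over the joint distribution of the interim test statistics, so treat this as illustrative only (the five information fractions are invented).

```python
import numpy as np
from scipy import stats

alpha = 0.05

def obf_spending(t):
    """O'Brien-Fleming-type alpha-spending function alpha*(t) (Lan-DeMets)."""
    z = stats.norm.ppf(1 - alpha / 2)
    return 2 * (1 - stats.norm.cdf(z / np.sqrt(t)))

# Information fractions at five looks (fraction of the required IS accrued).
t = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
spent = obf_spending(t)                  # cumulative alpha spent by each look

# Crude per-look boundaries from the cumulative spending; the exact boundaries
# are slightly different because repeated looks are correlated.
z_bound = stats.norm.ppf(1 - spent / 2)
for ti, si, zi in zip(t, spent, z_bound):
    print(f"t={ti:.1f}: cumulative alpha={si:.5f}, z boundary ~ {zi:.2f}")
```

Note how almost no alpha is spent early (the z boundary starts above 4), which is exactly what protects a meta-analysis with few accumulated events from the spurious P < 0.05 findings described above.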

  17. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Directory of Open Access Journals (Sweden)

    Brady T West

    Full Text Available Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT, which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data.

  18. How Big of a Problem is Analytic Error in Secondary Analyses of Survey Data?

    Science.gov (United States)

    West, Brady T.; Sakshaug, Joseph W.; Aurelien, Guy Alain S.

    2016-01-01

    Secondary analyses of survey data collected from large probability samples of persons or establishments further scientific progress in many fields. The complex design features of these samples improve data collection efficiency, but also require analysts to account for these features when conducting analysis. Unfortunately, many secondary analysts from fields outside of statistics, biostatistics, and survey methodology do not have adequate training in this area, and as a result may apply incorrect statistical methods when analyzing these survey data sets. This in turn could lead to the publication of incorrect inferences based on the survey data that effectively negate the resources dedicated to these surveys. In this article, we build on the results of a preliminary meta-analysis of 100 peer-reviewed journal articles presenting analyses of data from a variety of national health surveys, which suggested that analytic errors may be extremely prevalent in these types of investigations. We first perform a meta-analysis of a stratified random sample of 145 additional research products analyzing survey data from the Scientists and Engineers Statistical Data System (SESTAT), which describes features of the U.S. Science and Engineering workforce, and examine trends in the prevalence of analytic error across the decades used to stratify the sample. We once again find that analytic errors appear to be quite prevalent in these studies. Next, we present several example analyses of real SESTAT data, and demonstrate that a failure to perform these analyses correctly can result in substantially biased estimates with standard errors that do not adequately reflect complex sample design features. Collectively, the results of this investigation suggest that reviewers of this type of research need to pay much closer attention to the analytic methods employed by researchers attempting to publish or present secondary analyses of survey data. PMID:27355817
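The consequence of ignoring complex sample design features can be demonstrated in a few lines. The sketch below uses an invented two-stratum design (not SESTAT data): it compares a naive unweighted mean with a design-weighted estimate and a linearized standard error for the weighted ratio estimator, under a with-replacement approximation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy survey: an oversampled stratum (weight 1) and an undersampled one
# (weight 4) whose members have systematically higher incomes.
n1, n2 = 800, 200
y = np.concatenate([rng.normal(30, 5, n1), rng.normal(60, 5, n2)])
w = np.concatenate([np.full(n1, 1.0), np.full(n2, 4.0)])

naive_mean = y.mean()                        # ignores the design: biased here
weighted_mean = np.sum(w * y) / np.sum(w)    # design-based weighted estimate

# Linearized SE of the weighted ratio estimator (with-replacement approx.).
n = len(y)
u = w * (y - weighted_mean) / np.sum(w)
se_weighted = np.sqrt(n / (n - 1) * np.sum(u**2))

print(f"naive={naive_mean:.2f}, weighted={weighted_mean:.2f} (SE {se_weighted:.2f})")
```

Because sampling probabilities correlate with the outcome, the naive mean is biased downward by several units — precisely the kind of analytic error the meta-analysis found to be prevalent.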

  19. GALEX-SDSS CATALOGS FOR STATISTICAL STUDIES

    International Nuclear Information System (INIS)

    Budavari, Tamas; Heinis, Sebastien; Szalay, Alexander S.; Nieto-Santisteban, Maria; Bianchi, Luciana; Gupchup, Jayant; Shiao, Bernie; Smith, Myron; Chang Ruixiang; Kauffmann, Guinevere; Morrissey, Patrick; Wyder, Ted K.; Martin, D. Christopher; Barlow, Tom A.; Forster, Karl; Friedman, Peter G.; Schiminovich, David; Milliard, Bruno; Donas, Jose; Seibert, Mark

    2009-01-01

    We present a detailed study of the Galaxy Evolution Explorer's (GALEX) photometric catalogs with special focus on the statistical properties of the All-sky and Medium Imaging Surveys. We introduce the concept of primaries to resolve the issue of multiple detections and follow a geometric approach to define clean catalogs with well understood selection functions. We cross-identify the GALEX sources (GR2+3) with Sloan Digital Sky Survey (SDSS; DR6) observations, which indirectly provides an invaluable insight into the astrometric model of the UV sources and allows us to revise the band merging strategy. We derive the formal description of the GALEX footprints as well as their intersections with the SDSS coverage along with analytic calculations of their areal coverage. The crossmatch catalogs are made available for the public. We conclude by illustrating the implementation of typical selection criteria in SQL for catalog subsets geared toward statistical analyses, e.g., correlation and luminosity function studies.
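Selection criteria of the kind the paper expresses in SQL can be prototyped against a toy catalog with sqlite3. The schema, column names, and magnitude cut below are purely illustrative; the real GALEX/SDSS crossmatch tables differ.

```python
import sqlite3

# Hypothetical mini-catalog; the real GALEX/SDSS crossmatch schema differs,
# so the table and column names here are illustrative only.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE xmatch (
    objid INTEGER PRIMARY KEY, nuv_mag REAL, fuv_mag REAL,
    r_mag REAL, is_primary INTEGER)""")
rows = [
    (1, 20.1, 21.0, 18.2, 1),
    (2, 23.5, 24.0, 19.0, 1),   # too faint in NUV for the cut below
    (3, 19.8, 20.5, 17.9, 0),   # duplicate detection, not a primary
    (4, 21.0, 21.9, 18.5, 1),
]
con.executemany("INSERT INTO xmatch VALUES (?,?,?,?,?)", rows)

# A typical statistical-sample selection: primaries only (resolving multiple
# detections), plus a UV magnitude limit for a well-understood selection
# function.
sel = con.execute("""
    SELECT objid FROM xmatch
    WHERE is_primary = 1 AND nuv_mag < 22.0
    ORDER BY objid""").fetchall()
selected = [r[0] for r in sel]
print(selected)
```

Restricting to primaries mirrors the paper's approach to multiple detections, and the magnitude cut mimics a clean-catalog criterion for correlation or luminosity-function work.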

  20. Phylogenomic analyses data of the avian phylogenomics project

    DEFF Research Database (Denmark)

    Jarvis, Erich D; Mirarab, Siavash; Aberer, Andre J

    2015-01-01

    BACKGROUND: Determining the evolutionary relationships among the major lineages of extant birds has been one of the biggest challenges in systematic biology. To address this challenge, we assembled or collected the genomes of 48 avian species spanning most orders of birds, including all Neognathae...... and two of the five Palaeognathae orders. We used these genomes to construct a genome-scale avian phylogenetic tree and perform comparative genomic analyses. FINDINGS: Here we present the datasets associated with the phylogenomic analyses, which include sequence alignment files consisting of nucleotides......ML algorithm or when using statistical binning with the coalescence-based MP-EST algorithm (which we refer to as MP-EST*). Other data sets, such as the coding sequence of some exons, revealed other properties of genome evolution, namely convergence. CONCLUSIONS: The Avian Phylogenomics Project is the largest...

  1. Statistical analysis of lightning electric field measured under Malaysian condition

    Science.gov (United States)

    Salimi, Behnam; Mehranzamir, Kamyar; Abdul-Malek, Zulkurnain

    2014-02-01

    Lightning is an electrical discharge during thunderstorms that can occur either within clouds (inter-cloud) or between clouds and ground (cloud-to-ground). Lightning characteristics and their statistical information are the foundation for the design of lightning protection systems as well as for the calculation of lightning radiated fields. Nowadays, there are various techniques to detect lightning signals and to determine the various parameters produced by a lightning flash, each with its own claimed performance. In this paper, the characteristics of captured broadband electric fields generated by cloud-to-ground lightning discharges in the south of Malaysia are analyzed. A total of 130 cloud-to-ground lightning flashes from 3 separate thunderstorm events (each event lasted about 4-5 hours) were examined. Statistical analyses of the following signal parameters are presented: preliminary breakdown pulse train time duration, time interval between preliminary breakdowns and return stroke, stroke multiplicity, and the percentage of single-stroke flashes. The BIL model is also introduced to characterize the lightning signature patterns. The statistical analyses show that about 79% of lightning signals fit well with the BIL model. The maximum and minimum preliminary breakdown time durations of the observed lightning signals are 84 ms and 560 µs, respectively. The statistical results show that 7.6% of the flashes were single-stroke flashes, and the maximum number of strokes recorded was 14 strokes per flash. A preliminary breakdown signature in more than 95% of the flashes can be identified.
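Summary statistics such as stroke multiplicity and the single-stroke percentage reduce to a few array operations. The numbers below are invented stand-ins, not the Malaysian measurements described above.

```python
import numpy as np

# Hypothetical stroke counts for a set of cloud-to-ground flashes
# (synthetic values for illustration only).
strokes_per_flash = np.array([1, 3, 2, 1, 5, 4, 1, 2, 2, 7, 3, 1, 2, 6, 2])

n_flash = len(strokes_per_flash)
pct_single = 100.0 * np.mean(strokes_per_flash == 1)   # % single-stroke flashes
max_mult = strokes_per_flash.max()                     # maximum multiplicity
mean_mult = strokes_per_flash.mean()

# Interstroke intervals (ms) pooled over flashes; duration-type data like
# this are often summarized by a geometric mean.
intervals_ms = np.array([45, 60, 33, 80, 52, 41, 66, 90, 38, 55])
gm = np.exp(np.log(intervals_ms).mean())

print(f"{n_flash} flashes: {pct_single:.1f}% single-stroke, "
      f"max multiplicity {max_mult}, mean {mean_mult:.2f}, "
      f"geometric-mean interval {gm:.1f} ms")
```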

  2. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been largely improved during the last years and have showed their ability to deal with uncertainties during the design stage or to optimize the functioning and the maintenance of industrial installations. They are based on a mechanical modeling of the structural behavior according to the considered failure modes and on a probabilistic representation of input parameters of this modeling. In practice, only limited statistical information is available to build the probabilistic representation and different sophistication levels of the mechanical modeling may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is triple: 1. at first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data in order to take them into account in the reliability analyses. The obtained reliability index measures the confidence in the structure considering the statistical information available. 2. Then, to show a methodology leading to reliability results evaluated from a particular mechanical modeling but by using a less sophisticated one. The objective is then to decrease the computational efforts required by the reference modeling. 3. Finally, to propose partial safety factors that are evolving as a function of the number of statistical data available and as a function of the sophistication level of the mechanical modeling that is used. The concepts are illustrated in the case of a welded pipe and in the case of a natural draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr

  3. Data base of accident and agricultural statistics for transportation risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Saricks, C.L.; Williams, R.G.; Hopf, M.R.

    1989-11-01

    A state-level data base of accident and agricultural statistics has been developed to support risk assessment for transportation of spent nuclear fuels and high-level radioactive wastes. This data base will enhance the modeling capabilities for more route-specific analyses of potential risks associated with transportation of these wastes to a disposal site. The data base and methodology used to develop state-specific accident and agricultural data bases are described, and summaries of accident and agricultural statistics are provided. 27 refs., 9 tabs.

  4. Data base of accident and agricultural statistics for transportation risk assessment

    International Nuclear Information System (INIS)

    Saricks, C.L.; Williams, R.G.; Hopf, M.R.

    1989-11-01

    A state-level data base of accident and agricultural statistics has been developed to support risk assessment for transportation of spent nuclear fuels and high-level radioactive wastes. This data base will enhance the modeling capabilities for more route-specific analyses of potential risks associated with transportation of these wastes to a disposal site. The data base and methodology used to develop state-specific accident and agricultural data bases are described, and summaries of accident and agricultural statistics are provided. 27 refs., 9 tabs

  5. Industrial commodity statistics yearbook 2001. Production statistics (1992-2001)

    International Nuclear Information System (INIS)

    2003-01-01

    This is the thirty-fifth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1992-2001 for about 200 countries and areas.

  6. Industrial commodity statistics yearbook 2002. Production statistics (1993-2002)

    International Nuclear Information System (INIS)

    2004-01-01

    This is the thirty-sixth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1993-2002 for about 200 countries and areas.

  7. Industrial commodity statistics yearbook 2000. Production statistics (1991-2000)

    International Nuclear Information System (INIS)

    2002-01-01

    This is the thirty-third in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. Most of the statistics refer to the ten-year period 1991-2000 for about 200 countries and areas.

  8. Excel 2013 for physical sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J; Horton, Howard F

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching physical sciences statistics effectively. Similar to the previously published Excel 2010 for Physical Sciences Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical science problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in science courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2013 for Physical Sciences Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their ...

  9. Excel 2013 for social sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2015-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach social science statistics effectively.  It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical social science problems.  If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you.  Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in social science courses.  Its powerful computational ability and graphical functions make learning statistics much easier than in years past.  However, Excel 2013 for Social Science Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formul...

  10. Excel 2016 for social science statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching social science statistics effectively. Similar to the previously published Excel 2013 for Social Sciences Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical social science problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in social science courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Social Science Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in ...

  11. Design and Statistics in Quantitative Translation (Process) Research

    DEFF Research Database (Denmark)

    Balling, Laura Winther; Hvelplund, Kristian Tangsgaard

    2015-01-01

    Traditionally, translation research has been qualitative, but quantitative research is becoming increasingly important, especially in translation process research but also in other areas of translation studies. This poses problems to many translation scholars since this way of thinking...... is unfamiliar. In this article, we attempt to mitigate these problems by outlining our approach to good quantitative research, all the way from research questions and study design to data preparation and statistics. We concentrate especially on the nature of the variables involved, both in terms of their scale...... and their role in the design; this has implications for both design and choice of statistics. Although we focus on quantitative research, we also argue that such research should be supplemented with qualitative analyses and considerations of the translation product....

  12. Statistical analysis of field data for aircraft warranties

    Science.gov (United States)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, were also determining factors in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
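The workflow described — fit a failure distribution by maximum likelihood, then check goodness of fit — is a short exercise with SciPy. The failure times below are simulated from a Weibull distribution (a common but here merely assumed choice), not actual Air Force or Navy field data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic time-to-failure data (hours) for one hypothetical equipment family.
true_shape, true_scale = 1.8, 500.0
ttf = true_scale * rng.weibull(true_shape, size=300)

# Maximum likelihood fit of a two-parameter Weibull (location fixed at 0).
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)

# Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
# (Fitting and testing on the same data makes the p-value optimistic; a
# real analysis would adjust for that.)
ks = stats.kstest(ttf, "weibull_min", args=(shape, loc, scale))

print(f"shape={shape:.2f}, scale={scale:.0f}, KS p={ks.pvalue:.2f}")
```

A shape parameter above 1 indicates wear-out behavior, below 1 infant mortality — the kind of failure-pattern characterization the abstract refers to.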

  13. Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points

    Science.gov (United States)

    Ekol, George

    2015-01-01

    This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…

  14. The ‘39 steps’: an algorithm for performing statistical analysis of data on energy intake and expenditure

    Directory of Open Access Journals (Sweden)

    John R. Speakman

    2013-03-01

    Full Text Available The epidemics of obesity and diabetes have aroused great interest in the analysis of energy balance, with the use of organisms ranging from nematode worms to humans. Although generating energy-intake or -expenditure data is relatively straightforward, the most appropriate way to analyse the data has been an issue of contention for many decades. In the last few years, a consensus has been reached regarding the best methods for analysing such data. To facilitate using these best-practice methods, we present here an algorithm that provides a step-by-step guide for analysing energy-intake or -expenditure data. The algorithm can be used to analyse data from either humans or experimental animals, such as small mammals or invertebrates. It can be used in combination with any commercial statistics package; however, to assist with analysis, we have included detailed instructions for performing each step for three popular statistics packages (SPSS, MINITAB and R). We also provide interpretations of the results obtained at each step. We hope that this algorithm will assist in the statistically appropriate analysis of such data, a field in which there has been much confusion and some controversy.
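A central step in the consensus approach is to analyse expenditure by ANCOVA with body mass as a covariate rather than dividing by mass. The sketch below (synthetic mice, invented numbers, and a hand-rolled OLS rather than any of the packages named above) contrasts the two approaches.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic example: two groups of mice whose energy expenditure (EE) scales
# with body mass; group B carries a genuine +3 kJ/day shift at equal mass.
n = 40
mass = np.concatenate([rng.normal(25, 3, n), rng.normal(32, 3, n)])  # g
group = np.repeat([0, 1], n)
ee = 1.5 * mass + 3.0 * group + rng.normal(0, 1.5, 2 * n)            # kJ/day

# Ratio normalization (EE per gram) -- shown only for contrast; this is the
# approach the consensus advises against.
per_gram_p = stats.ttest_ind(ee[group == 0] / mass[group == 0],
                             ee[group == 1] / mass[group == 1]).pvalue

# ANCOVA via OLS: EE ~ intercept + mass + group.
X = np.column_stack([np.ones(2 * n), mass, group])
coef, *_ = np.linalg.lstsq(X, ee, rcond=None)
resid = ee - X @ coef
sigma2 = resid @ resid / (2 * n - 3)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_group = coef[2] / np.sqrt(cov[2, 2])
ancova_p = 2 * stats.t.sf(abs(t_group), df=2 * n - 3)

print(f"per-gram t-test p={per_gram_p:.3g}; "
      f"mass-adjusted group effect={coef[2]:.2f} kJ/day, p={ancova_p:.3g}")
```

The ANCOVA recovers the group effect at equal body mass, which is the quantity of biological interest when masses differ between groups.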

  15. A Statistical Analysis of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Stephen Chan

    2017-05-01

    Full Text Available We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal; however, no single distribution fits all of the cryptocurrencies analysed well. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian distribution, generalized t distribution, and Laplace distribution give good fits. The results are important for investment and risk management purposes.
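The fit-and-compare workflow generalizes easily. The sketch below uses simulated heavy-tailed "returns" rather than real exchange-rate data, and compares a normal against a Student-t fit by AIC (the paper's candidate set — generalized hyperbolic, NIG, etc. — is larger than what plain SciPy provides).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic heavy-tailed daily "returns" (Student-t, df=3) standing in for a
# cryptocurrency series; real data would be log-returns of exchange rates.
returns = stats.t.rvs(df=3, scale=0.02, size=1500, random_state=rng)

def aic(dist, data):
    """AIC of a scipy.stats distribution fitted to data by MLE."""
    params = dist.fit(data)
    ll = np.sum(dist.logpdf(data, *params))
    return 2 * len(params) - 2 * ll, params

aic_norm, _ = aic(stats.norm, returns)
aic_t, t_params = aic(stats.t, returns)

# The heavier-tailed model should win decisively on AIC for such data.
print(f"AIC normal={aic_norm:.1f}, AIC t={aic_t:.1f}, fitted df={t_params[0]:.1f}")
```

A decisively lower AIC for the t fit is the in-sample analogue of the paper's finding that returns are clearly non-normal.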

  16. Do we need statistics when we have linguistics?

    Directory of Open Access Journals (Sweden)

    Cantos Gómez Pascual

    2002-01-01

    Full Text Available Statistics is known to be a quantitative approach to research. However, most of the research done in the fields of language and linguistics is of a different kind, namely qualitative. Succinctly, qualitative analysis differs from quantitative analysis in that in the former no attempt is made to assign frequencies, percentages and the like to the linguistic features found or identified in the data. In quantitative research, linguistic features are classified and counted, and even more complex statistical models are constructed in order to explain these observed facts. In qualitative research, however, we use the data only for identifying and describing features of language usage and for providing real occurrences/examples of particular phenomena. In this paper, we shall try to show how quantitative methods and statistical techniques can supplement qualitative analyses of language. We shall attempt to present some mathematical and statistical properties of natural languages, and introduce some of the quantitative methods which are of the most value in working empirically with texts and corpora, illustrating the various issues with numerous examples and moving from the most basic descriptive techniques (frequency counts and percentages) to decision-taking techniques (chi-square and z-score) and to more sophisticated statistical language models (Type-Token/Lemma-Token/Lemma-Type formulae, cluster analysis and discriminant function analysis).
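The decision-taking techniques mentioned (chi-square and z-score) amount to a 2x2 contingency test on word counts. The counts below are invented for illustration, not drawn from any real corpus.

```python
import math
from scipy.stats import chi2_contingency

# Hypothetical corpus comparison: occurrences of a target word vs all other
# tokens in two corpora of equal size (counts invented for illustration).
word_a, total_a = 120, 100_000     # corpus A
word_b, total_b = 60, 100_000      # corpus B

table = [[word_a, total_a - word_a],
         [word_b, total_b - word_b]]
# Note: chi2_contingency applies Yates' continuity correction to 2x2 tables
# by default, so z**2 below will slightly exceed this chi2 value.
chi2, p, dof, _ = chi2_contingency(table)

# z-score for the same comparison (two-proportion z-test, uncorrected).
p_pool = (word_a + word_b) / (total_a + total_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / total_a + 1 / total_b))
z = (word_a / total_a - word_b / total_b) / se

print(f"chi2={chi2:.2f}, p={p:.3g}, z={z:.2f}")
```

Both statistics agree that the target word is significantly over-represented in corpus A, which is how keyword analyses typically use these tests.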

  17. Spatial and statistical analysis of the Iron Age in France. The example of “BaseFer”

    Directory of Open Access Journals (Sweden)

    Olivier Buchsenschutz

    2009-05-01

    Full Text Available The development of Geographical Information Systems (GIS) allows information in archaeological databases to be georeferenced. It is thus possible to obtain distribution maps which can then be interpreted using statistical and spatial analyses. Maps and statistics highlight the state of research, the condition of sites, and, beyond that, historical and cultural phenomena. Through a research programme on the Iron Age in France (BaseFer), a global database was established for the entire country. This article puts forward some analyses of the general descriptive criteria represented in a corpus of 11,000 sites (departments along the Mediterranean Sea coast are excluded from this test). The control and development of finer descriptors will be undertaken by an enlarged team, before the data are networked.

  18. Statistical cluster analysis and diagnosis of nuclear system level performance

    International Nuclear Information System (INIS)

    Teichmann, T.; Levine, M.M.; Samanta, P.K.; Kato, W.Y.

    1985-01-01

    The complexity of individual nuclear power plants and the importance of maintaining reliable and safe operations make it desirable to complement the deterministic analyses of these plants with corresponding statistical surveys and diagnoses. Based on such investigations, one can then explore, statistically, the anticipation, prevention, and, when necessary, the control of such failures and malfunctions. This paper, and the accompanying one by Samanta et al., describe some of the initial steps in exploring the feasibility of setting up such a program on an integrated and global (industry-wide) basis. The conceptual statistical and data framework was originally outlined in BNL/NUREG-51609, NUREG/CR-3026, and the present work aims at showing how some important elements might be implemented in a practical way (albeit using hypothetical or simulated data).

  19. Statistical Change Detection for Diagnosis of Buoyancy Element Defects on Moored Floating Vessels

    DEFF Research Database (Denmark)

    Blanke, Mogens; Fang, Shaoji; Galeazzi, Roberto

    2012-01-01

    After residual generation, a statistical change detection scheme is derived from mathematical models supported by experimental data. To experimentally verify loss of an underwater buoyancy element, an underwater line breaker is designed to create a realistic replication of abrupt faults. The paper analyses the properties of the residuals and suggests a dedicated GLRT change detector based on a vector residual. Special attention is paid to threshold selection for non-ideal (non-IID) test statistics.
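As an illustration of the detection principle, here is a minimal sketch (our construction, not the paper's vessel model) of a sliding-window GLRT for an abrupt mean shift in a scalar residual with known noise level; the paper's detector works on a vector residual and treats threshold selection for non-IID statistics far more carefully.

```python
# Sliding-window GLRT for a mean shift in a Gaussian residual (sketch).
import random

def glrt_statistic(window, sigma):
    """Log generalized likelihood ratio for H1: nonzero mean vs H0: zero mean,
    with known noise standard deviation sigma."""
    n = len(window)
    m = sum(window) / n
    return n * m * m / (2.0 * sigma ** 2)

def detect(residuals, sigma, win=50, threshold=10.0):
    """Return the first index at which the windowed GLRT exceeds the
    threshold, or None if no change is declared."""
    for k in range(win, len(residuals) + 1):
        if glrt_statistic(residuals[k - win:k], sigma) > threshold:
            return k
    return None

random.seed(1)
sigma = 1.0
# Residuals: zero mean before sample 300, mean shift of 1.0 afterwards
r = [random.gauss(0, sigma) for _ in range(300)]
r += [random.gauss(1.0, sigma) for _ in range(200)]
alarm = detect(r, sigma)
print(alarm)  # an index shortly after the true change at sample 300
```

The threshold trades false alarms against detection delay; the paper's point is that for non-IID test statistics this trade-off cannot be read off standard chi-squared tables.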

  20. The challenges of transportation/traffic statistics in Japan and directions for the future

    Directory of Open Access Journals (Sweden)

    Shigeru Kawasaki

    2015-07-01

    Full Text Available In order to respond to new challenges in transportation and traffic problems, it is essential to enhance the statistics in this field that provide the basis for policy research. Many of the statistics in this field in Japan are “official statistics” created by the government. This paper gives a review of the current status of transportation and traffic statistics (hereinafter called “transportation statistics”) in Japan. Furthermore, the paper discusses challenges facing such statistics in the new environment and the direction that they should take in the future. For Japan’s transportation statistics to play a vital role in more sophisticated analyses, it is necessary to improve the environment that facilitates the use of microdata for analysis. It is also necessary to establish an environment where big data can be more easily used for the compilation of official statistics and for policy research. To achieve this end, close cooperation among the government, academia, and businesses will be essential.

  1. STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.

    Science.gov (United States)

    Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron

    2015-08-07

    Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
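The reusable-holdout idea can be sketched as a small Thresholdout-style mechanism: a query is answered from the training set as long as it agrees with the holdout within a noisy threshold, and from a noised holdout otherwise. The threshold and noise scale below are illustrative choices, not the paper's calibrated values.

```python
# Sketch of a Thresholdout-style reusable holdout (illustrative parameters).
import random

def thresholdout(train_vals, holdout_vals, queries, T=0.04, sigma=0.01, rng=None):
    """For each query (a function mapping a dataset to a number), return the
    training estimate when it agrees with the holdout within a noisy
    threshold; otherwise return a noised holdout estimate."""
    rng = rng or random.Random(0)
    answers = []
    for q in queries:
        a_train, a_hold = q(train_vals), q(holdout_vals)
        if abs(a_train - a_hold) > T + rng.gauss(0, sigma):
            answers.append(a_hold + rng.gauss(0, sigma))  # adaptive overfit: pay with noise
        else:
            answers.append(a_train)  # training answer still trustworthy
    return answers

rng = random.Random(42)
train = [rng.gauss(0, 1) for _ in range(1000)]
hold = [rng.gauss(0, 1) for _ in range(1000)]
mean_query = lambda d: sum(d) / len(d)
answers = thresholdout(train, hold, [mean_query])
print(answers)
```

Because each holdout access is noised and thresholded, the analyst can validate many adaptively chosen queries against the same holdout without silently overfitting to it.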

  2. Truths, lies, and statistics.

    Science.gov (United States)

    Thiese, Matthew S; Walker, Skyler; Lindsey, Jenna

    2017-10-01

    Distribution of valuable research discoveries is needed for the continual advancement of patient care. Publication of, and subsequent reliance on, false study results would be detrimental to patient care. Unfortunately, research misconduct may originate from many sources. While there is evidence of ongoing research misconduct in all its forms, it is challenging to identify the actual occurrence of research misconduct, which is especially true for misconduct in clinical trials. Research misconduct is challenging to measure, and there are few studies reporting the prevalence or underlying causes of research misconduct among biomedical researchers. Reported prevalence estimates of misconduct are probably underestimates, and range from 0.3% to 4.9%. There have been efforts to measure the prevalence of research misconduct; however, the relatively few published studies are not freely comparable because of varying characterizations of research misconduct and the methods used for data collection. There are some signs which may point to an increased possibility of research misconduct; however, there is a need for continued self-policing by biomedical researchers. There are existing resources to assist in ensuring appropriate statistical methods and preventing other types of research fraud. These include the “Statistical Analyses and Methods in the Published Literature” (SAMPL) guidelines, which help scientists determine the appropriate way of reporting various statistical methods; the “Strengthening Analytical Thinking for Observational Studies” (STRATOS) initiative, which emphasizes the execution and interpretation of results; and the Committee on Publication Ethics (COPE), created in 1997 to deliver guidance about publication ethics. COPE has a sequence of views and strategies grounded in the values of honesty and accuracy.

  3. Harmonic statistics

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
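A Poisson process with harmonic intensity c/x is easy to simulate, and the simulation makes the scale invariance concrete: the expected number of points in any interval [x, s·x] is c·ln(s), independent of x. The sketch below is our own illustration, not taken from the paper.

```python
# Simulate a Poisson process on (a, b] with intensity c/x (sketch).
import math
import random

def sample_harmonic_poisson(a, b, c=1.0, rng=None):
    rng = rng or random.Random(0)
    mass = c * math.log(b / a)  # total intensity mass on (a, b]
    # Draw the point count via Knuth's Poisson sampler
    n, p, limit = 0, 1.0, math.exp(-mass)
    while p > limit:
        n += 1
        p *= rng.random()
    n -= 1
    # Inverse transform: CDF of a point location is ln(x/a)/ln(b/a)
    return sorted(a * (b / a) ** rng.random() for _ in range(n))

rng = random.Random(7)
pts = sample_harmonic_poisson(1.0, 1000.0, c=5.0, rng=rng)
# Counts per decade [1,10), [10,100), [100,1000) are comparable,
# each with mean c*ln(10), about 11.5 here:
decades = [sum(1 for x in pts if 10**k <= x < 10**(k + 1)) for k in range(3)]
print(decades)
```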

  4. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.

  5. [Clinical research XXIII. From clinical judgment to meta-analyses].

    Science.gov (United States)

    Rivas-Ruiz, Rodolfo; Castelán-Martínez, Osvaldo D; Pérez-Rodríguez, Marcela; Palacios-Cruz, Lino; Noyola-Castillo, Maura E; Talavera, Juan O

    2014-01-01

    Systematic reviews (SR) are studies designed to answer clinical questions on the basis of original articles. A meta-analysis (MTA) is the mathematical analysis of an SR. These analyses are divided into two groups: those which evaluate quantitative variables (for example, the body mass index, BMI) and those which evaluate qualitative variables (for example, whether a patient is alive or dead, or has healed or not). Quantitative variables are generally analyzed with mean differences, while qualitative variables can be analyzed using several measures: the odds ratio (OR), relative risk (RR), absolute risk reduction (ARR), and hazard ratio (HR). These analyses are represented through forest plots, which allow the evaluation of each individual study, as well as of the heterogeneity between studies and the overall effect of the intervention. These analyses are mainly based on Student's t test and the chi-squared test. To make appropriate decisions based on an MTA, it is important to understand the characteristics of the statistical methods in order to avoid misinterpretations.
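The pooling behind a forest plot can be illustrated with a fixed-effect, inverse-variance combination of log odds ratios; the 2x2 tables below are invented for the example.

```python
# Fixed-effect (inverse-variance) pooling of odds ratios (sketch).
import math

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance for a 2x2 table
    (a=events/treated, b=non-events/treated, c=events/control, d=non-events/control)."""
    return math.log((a * d) / (b * c)), 1/a + 1/b + 1/c + 1/d

# Three made-up studies, each as (a, b, c, d)
studies = [(12, 88, 20, 80), (8, 42, 15, 35), (30, 170, 45, 155)]
logs, weights = [], []
for tbl in studies:
    lo, v = log_or_and_var(*tbl)
    logs.append(lo)
    weights.append(1.0 / v)  # inverse-variance weight

pooled = sum(w * lo for w, lo in zip(weights, logs)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
print(round(math.exp(pooled), 3), [round(x, 3) for x in ci])
```

Each study's square on the forest plot is drawn at exp(log OR) with area proportional to its weight, and the diamond is the pooled estimate with its confidence interval.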

  6. Validation of chemical analyses of atmospheric deposition in forested European sites

    Directory of Open Access Journals (Sweden)

    Erwin ULRICH

    2005-08-01

    Full Text Available Within the activities of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests) and of EU Regulation 2152/2003, a Working Group on Quality Assurance/Quality Control of analyses has been created to assist the participating laboratories in the analysis of atmospheric deposition, soil and soil solution, and leaves/needles. As part of the activity of the WG, this study presents a statistical analysis of chemical concentrations in water samples and of the relationships between ions, and between conductivity and ions, for the different types of samples (bulk or wet-only samples, throughfall, stemflow) considered in forest studies. About 5000 analyses from seven laboratories were used to establish relationships representative of different European geographic and climatic situations, from northern Finland to southern Italy. Statistically significant differences were tested between the relationships obtained from different types of solutions, interacting with different types of vegetation (throughfall and stemflow samples, broad-leaved trees and conifers) and with varying influence of marine salt. The ultimate aim is to establish general relationships between ions, and between conductivity and ions, with relative confidence limits, which can be used as a comparison with those established in single laboratories. The use of such techniques is strongly encouraged in the ICP Forests laboratories to validate single chemical analyses, to be performed while it is still possible to replicate the analysis, and as a general overview of the whole set of analyses, to obtain an indication of laboratory performance on a long-term basis.
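One routine validation check of the kind such a working group relies on is the ion balance between cation and anion charge sums; the sketch below is our own illustration with made-up concentrations, not the WG's published procedure.

```python
# Ion balance check for a water analysis (sketch, invented sample).
def ion_balance_pct(cations_ueq, anions_ueq):
    """Percent difference between cation and anion charge sums;
    a large imbalance flags a suspect analysis."""
    s_cat, s_an = sum(cations_ueq.values()), sum(anions_ueq.values())
    return 100.0 * (s_cat - s_an) / (s_cat + s_an)

# Hypothetical throughfall sample, charge concentrations already in ueq/L
cations = {"H+": 12.0, "Na+": 85.0, "K+": 40.0, "Ca2+": 60.0, "Mg2+": 30.0, "NH4+": 25.0}
anions = {"Cl-": 95.0, "NO3-": 55.0, "SO42-": 80.0}
balance = ion_balance_pct(cations, anions)
print(round(balance, 1))  # a positive gap hints at unmeasured anions or error
```

Analogous checks compare measured conductivity against the value implied by the ion concentrations, which is where the laboratory-independent relationships described above come in.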

  7. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees, to assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  8. Statistical mechanics for a class of quantum statistics

    International Nuclear Information System (INIS)

    Isakov, S.B.

    1994-01-01

    Generalized statistical distributions for identical particles are introduced for the case where filling a single-particle quantum state by particles depends on the filling of states of different momenta. The system of one-dimensional bosons with a two-body potential that can be solved by means of the thermodynamic Bethe ansatz is shown to be equivalent thermodynamically to a system of free particles obeying statistical distributions of the above class. The quantum statistics arising in this way are completely determined by the two-particle scattering phases of the corresponding interacting systems. An equation determining the statistical distributions for these statistics is derived.

  9. Evidence-based orthodontics. Current statistical trends in published articles in one journal.

    Science.gov (United States)

    Law, Scott V; Chudasama, Dipak N; Rinchuse, Donald J

    2010-09-01

    To ascertain the number, type, and overall usage of statistics in American Journal of Orthodontics and Dentofacial Orthopedics (AJODO) articles for 2008. These data were then compared to data from three previous years: 1975, 1985, and 2003. The frequency and distribution of statistics used in the AJODO original articles for 2008 were dichotomized into those using statistics and those not using statistics. Statistical procedures were then broadly divided into descriptive statistics (mean, standard deviation, range, percentage) and inferential statistics (t-test, analysis of variance). Descriptive statistics were used to make comparisons. In 1975, 1985, 2003, and 2008, AJODO published 72, 87, 134, and 141 original articles, respectively. The percentage of original articles using statistics was 43.1% in 1975, 75.9% in 1985, 94.0% in 2003, and 92.9% in 2008; the proportion of original articles using statistics stayed relatively the same from 2003 to 2008, with only a small 1.1% decrease. The percentage of articles using inferential statistical analyses was 23.7% in 1975, 74.2% in 1985, 92.9% in 2003, and 84.4% in 2008. Comparing AJODO publications in 2003 and 2008, there was an 8.5% increase in articles using only descriptive statistics (from 7.1% to 15.6%), and there was an 8.5% decrease in articles using inferential statistics (from 92.9% to 84.4%).
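Using the percentages and article counts reported above, one can ask, as our own side calculation rather than the paper's, whether the 2003-to-2008 drop in inferential-statistics usage is itself statistically significant, via a two-proportion z-test:

```python
# Two-proportion z-test on the abstract's reported figures (sketch).
import math

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))     # two-sided p-value

# 2003: 92.9% of 134 articles used inferential statistics; 2008: 84.4% of 141
x1, n1 = round(0.929 * 134), 134   # about 124 articles
x2, n2 = round(0.844 * 141), 141   # about 119 articles
z, p = two_prop_z(x1, n1, x2, n2)
print(round(z, 2), round(p, 3))
```

The counts are reconstructed from rounded percentages, so the result is approximate; still, the decrease sits at roughly the 5% significance boundary rather than being obviously negligible.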

  10. Statistical modeling of nitrogen-dependent modulation of root system architecture in Arabidopsis thaliana.

    Science.gov (United States)

    Araya, Takao; Kubo, Takuya; von Wirén, Nicolaus; Takahashi, Hideki

    2016-03-01

    Plant root development is strongly affected by nutrient availability. Despite the importance of structure and function of roots in nutrient acquisition, statistical modeling approaches to evaluate dynamic and temporal modulations of root system architecture in response to nutrient availability have remained as widely open and exploratory areas in root biology. In this study, we developed a statistical modeling approach to investigate modulations of root system architecture in response to nitrogen availability. Mathematical models were designed for quantitative assessment of root growth and root branching phenotypes and their dynamic relationships based on hierarchical configuration of primary and lateral roots formulating the fishbone-shaped root system architecture in Arabidopsis thaliana. Time-series datasets reporting dynamic changes in root developmental traits on different nitrate or ammonium concentrations were generated for statistical analyses. Regression analyses unraveled key parameters associated with: (i) inhibition of primary root growth under nitrogen limitation or on ammonium; (ii) rapid progression of lateral root emergence in response to ammonium; and (iii) inhibition of lateral root elongation in the presence of excess nitrate or ammonium. This study provides a statistical framework for interpreting dynamic modulation of root system architecture, supported by meta-analysis of datasets displaying morphological responses of roots to diverse nitrogen supplies. © 2015 Institute of Botany, Chinese Academy of Sciences.
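The kind of parameter estimation described above can be sketched with a toy example: fitting a saturating growth model L(t) = Lmax·(1 − e^(−kt)) to a synthetic root-length time series, with a simple grid search standing in for the paper's regression machinery.

```python
# Least-squares fit of a saturating growth curve by grid search (sketch).
import math

days = [0, 1, 2, 3, 4, 5, 6, 7]
length_mm = [0.0, 3.1, 5.4, 7.1, 8.4, 9.3, 10.0, 10.5]  # synthetic measurements

def sse(lmax, k):
    """Sum of squared errors of the model against the measurements."""
    return sum((lmax * (1 - math.exp(-k * t)) - y) ** 2
               for t, y in zip(days, length_mm))

# Coarse grid over plausible Lmax (mm) and k (1/day) values
best = min(
    ((lmax, k) for lmax in [10 + 0.1 * i for i in range(51)]
               for k in [0.05 * j for j in range(1, 21)]),
    key=lambda pair: sse(*pair),
)
lmax_hat, k_hat = best
print(round(lmax_hat, 1), round(k_hat, 2))
```

In the paper's setting the fitted parameters (asymptotic length, growth rate, and their analogues for lateral roots) are what get compared across nitrate and ammonium treatments.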

  11. Sorption analyses in materials science: selected oxides

    International Nuclear Information System (INIS)

    Fuller, E.L. Jr.; Condon, J.B.; Eager, M.H.; Jones, L.L.

    1981-01-01

    Physical adsorption studies have been shown to be extremely valuable in studying the chemistry and structure of dispersed materials. Many processes rely on access to the large amount of surface made available by the high degree of dispersion. Conversely, there are many applications where consolidation of the dispersed solids is required. Several systems (silica gel, alumina catalysts, mineralogic alumino-silicates, and yttrium oxide plasters) have been studied to show the type and amount of chemical and structural information that can be obtained. Some review of current theories is given, and additional concepts are developed based on statistical and thermodynamic arguments. The results are applied to sorption data to show that detailed sorption analyses are extremely useful and can provide valuable information that is difficult to obtain by any other means. Considerable emphasis has been placed on data analyses and interpretation of a nonclassical nature, to show the potential of such studies, which is often neither recognized nor utilized.

  12. Isotropy analyses of the Planck convergence map

    Science.gov (United States)

    Marques, G. A.; Novaes, C. P.; Bernui, A.; Ferreira, I. S.

    2018-01-01

    The presence of matter in the path of relic photons causes distortions in the angular pattern of the cosmic microwave background (CMB) temperature fluctuations, modifying their properties in a slight but measurable way. Recently, the Planck Collaboration released the estimated convergence map, an integrated measure of the large-scale matter distribution that produced the weak gravitational lensing (WL) phenomenon observed in Planck CMB data. We perform exhaustive analyses of this convergence map, calculating the variance in small and large regions of the sky, but excluding the area masked due to Galactic contamination, and compare them with the features expected in the set of simulated convergence maps also released by the Planck Collaboration. Our goal is to search for sky directions or regions where the WL imprints anomalous signatures on the variance estimator, revealed through a χ² analysis at a statistically significant level. In the local analysis of the Planck convergence map, we identified eight patches of the sky in disagreement, at more than 2σ, with what is observed in the average of the simulations. In contrast, in the large-regions analysis we found no statistically significant discrepancies, but, interestingly, the regions with the highest χ² values surround the ecliptic poles. Thus, our results show good agreement with the features expected by the Λ cold dark matter concordance model, as given by the simulations. Yet, the outlier regions found here could suggest that the data still contain residual contamination, like noise, due to over- or underestimation of systematic effects in the simulation data set.
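The local variance test can be sketched as follows, with synthetic numbers in place of the Planck maps: the variance of a data patch is ranked against the variances of the same patch in simulated maps to obtain an empirical significance.

```python
# Empirical significance of a patch variance against simulations (sketch).
import random
import statistics

def patch_significance(data_patch, sim_patches):
    """Two-sided empirical significance of the data patch variance against
    the distribution of simulated patch variances."""
    v_data = statistics.pvariance(data_patch)
    v_sims = sorted(statistics.pvariance(s) for s in sim_patches)
    rank = sum(1 for v in v_sims if v <= v_data)
    p_low = rank / len(v_sims)           # fraction of sims at or below the data
    return min(p_low, 1 - p_low) * 2     # two-sided empirical p-value

rng = random.Random(3)
sims = [[rng.gauss(0, 1) for _ in range(500)] for _ in range(200)]
quiet = [rng.gauss(0, 1) for _ in range(500)]    # patch consistent with sims
loud = [rng.gauss(0, 1.5) for _ in range(500)]   # anomalously high variance
print(patch_significance(quiet, sims), patch_significance(loud, sims))
```

A full analysis would also fold the per-patch p-values into a global χ² statistic, as the paper does, so that scanning many patches does not by itself manufacture anomalies.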

  13. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  14. Statistical methods for quantitative indicators of impacts, applied to transmission line projects

    International Nuclear Information System (INIS)

    Ospina Norena, Jesus Efren; Lema Tapias, Alvaro de Jesus

    2005-01-01

    Multivariate statistical analyses are proposed for uncovering the relationships between variables and impacts, in order to obtain high explanatory power for interpreting causes and effects, to achieve the greatest possible certainty, and to evaluate and classify impacts by their level of influence.

  15. Statistics with JMP graphs, descriptive statistics and probability

    CERN Document Server

    Goos, Peter

    2015-01-01

    Peter Goos, Department of Statistics, University of Leuven, Faculty of Bio-Science Engineering, and University of Antwerp, Faculty of Applied Economics, Belgium; David Meintrup, Department of Mathematics and Statistics, University of Applied Sciences Ingolstadt, Faculty of Mechanical Engineering, Germany. A thorough presentation of introductory statistics and probability theory, with numerous examples and applications using JMP. Descriptive Statistics and Probability provides an accessible and thorough overview of the most important descriptive statistics for nominal, ordinal and quantitative data, with partic…

  16. Solar radiation data - statistical analysis and simulation models

    Energy Technology Data Exchange (ETDEWEB)

    Mustacchi, C; Cena, V; Rocchi, M; Haghigat, F

    1984-01-01

    The activities consisted in collecting meteorological data on magnetic tape for ten European locations (with latitudes ranging from 42° to 56° N), analysing the multi-year sequences, developing mathematical models to generate synthetic sequences having the same statistical properties as the original data sets, and producing one or more Short Reference Years (SRYs) for each location. The meteorological parameters examined were (for all the locations) global + diffuse radiation on a horizontal surface, dry bulb temperature, and sunshine duration. For some of the locations additional parameters were available, namely global, beam and diffuse radiation on surfaces other than horizontal, wet bulb temperature, wind velocity, cloud type, and cloud cover. The statistical properties investigated were mean, variance, autocorrelation, cross-correlation with selected parameters, and probability density function. For all the meteorological parameters, various mathematical models were built: linear regression, and stochastic models of the AR and DAR type. In each case, the model with the best statistical behaviour was selected for the production of an SRY for the relevant parameter/location.
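The stochastic-model step can be illustrated with a minimal AR(1) generator (our construction, not the report's exact models) whose synthetic series preserves the mean, variance, and lag-1 autocorrelation of an observed sequence:

```python
# Fit and resimulate an AR(1) process (sketch).
import math
import random
import statistics

def fit_ar1(series):
    """Estimate mean, variance and lag-1 autocorrelation of the series."""
    mu = statistics.fmean(series)
    var = statistics.pvariance(series)
    cov1 = statistics.fmean(
        (series[i] - mu) * (series[i + 1] - mu) for i in range(len(series) - 1)
    )
    return mu, var, cov1 / var

def generate_ar1(mu, var, rho, n, rng):
    """Synthetic AR(1) sequence x_t = mu + rho*(x_{t-1} - mu) + e_t, with
    innovation variance var*(1 - rho^2) so the process variance is var."""
    sd_e = math.sqrt(var * (1 - rho ** 2))
    x, out = mu, []
    for _ in range(n):
        x = mu + rho * (x - mu) + rng.gauss(0, sd_e)
        out.append(x)
    return out

rng = random.Random(11)
# Stand-in for an observed series, e.g. daily temperature anomalies
observed = generate_ar1(mu=15.0, var=9.0, rho=0.8, n=20000, rng=rng)
mu, var, rho = fit_ar1(observed)
synthetic = generate_ar1(mu, var, rho, 20000, rng)
print(round(mu, 2), round(rho, 2))
```

Real solar-radiation models also have to respect seasonality and bounds (radiation is non-negative and capped by the clear-sky value), which is part of what the report's DAR-type models address.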

  17. Publication selection and the income elasticity of the value of a statistical life.

    Science.gov (United States)

    Doucouliagos, Hristos; Stanley, T D; Viscusi, W Kip

    2014-01-01

    Estimates of the value of a statistical life (VSL) establish the price government agencies use to value fatality risks. Transferring these valuations to other populations often utilizes the income elasticity of the VSL, which typically draws on estimates from meta-analyses. Using a data set consisting of 101 estimates of the income elasticity of VSL from 14 previously reported meta-analyses, we find that after accounting for potential publication bias the income elasticity of the value of a statistical life is clearly and robustly inelastic, with a value of approximately 0.25-0.63. There is also clear evidence of the importance of controlling for levels of risk, of differential publication selection bias, and of the greater income sensitivity of VSL estimates from stated-preference surveys. Copyright © 2013 Elsevier B.V. All rights reserved.
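A common way to account for publication selection is a funnel-asymmetry regression of reported estimates on their standard errors, with the intercept read as the selection-corrected effect; the sketch below uses simulated numbers, not the paper's 101 estimates.

```python
# Funnel-asymmetry (FAT-PET style) regression on simulated estimates (sketch).
import random

def ols_slope_intercept(xs, ys):
    """Ordinary least squares fit y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

rng = random.Random(5)
true_elasticity = 0.4
ses = [rng.uniform(0.05, 0.5) for _ in range(101)]
# Selection pushes reported estimates up by roughly one standard error:
estimates = [true_elasticity + 1.0 * se + rng.gauss(0, se) for se in ses]
slope, intercept = ols_slope_intercept(ses, estimates)
naive = sum(estimates) / len(estimates)
print(round(naive, 2), round(intercept, 2))  # intercept sits well below the naive mean
```

A significant slope flags selection (imprecise estimates reported only when large), while the intercept approximates what precise, unselected studies would report.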

  18. Mars: Noachian hydrology by its statistics and topology

    Science.gov (United States)

    Cabrol, N. A.; Grin, E. A.

    1993-01-01

    Discrimination between fluvial features generated by surface drainage and by subsurface aquifer discharges will provide clues to the understanding of early Mars' climatic history. Our approach is to define the process of formation of the oldest fluvial valleys by statistical and topological analyses. Formation of fluvial valley systems reached its highest statistical concentration during the Noachian Period. Nevertheless, such valleys are a scarce phenomenon in Martian history, localized on the cratered upland and subject to latitudinal distribution. They occur sparsely on Noachian geological units with a weak distribution density, and appear in reduced isolated surfaces (around 5 × 10³ sq km), filled by short streams (100-300 km in length). Topological analysis of the internal organization of 71 surveyed Noachian fluvial valley networks also provides information on the mechanisms of formation.

  19. Statistical Issues in Searches for New Physics

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Given the cost, both financial and, even more importantly, in terms of human effort, of building High Energy Physics accelerators and detectors and running them, it is important to use good statistical techniques in analysing data. This talk covers some of the statistical issues that arise in searches for New Physics, such as: whether we should insist on the 5 sigma criterion for discovery claims; the relative merits of a Raster Scan or a "2-D" approach; why P(A|B) is not the same as P(B|A); the meaning of p-values; an example of a problematic likelihood; what Wilks' theorem is and when it does not apply; how to deal with the "Look Elsewhere Effect"; dealing with systematics such as background parametrisation; coverage (what it is and whether a given method has correct coverage); and the use of p0 vs. p1 plots.
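As a small numerical aside (ours, not the talk's): the one-sided p-value behind the 5 sigma discovery convention, and a crude look-elsewhere correction using a trial factor of N effectively independent search regions.

```python
# 5 sigma p-value and a trial-factor look-elsewhere correction (sketch).
import math

def one_sided_p(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

p_local = one_sided_p(5.0)
print(f"local p at 5 sigma: {p_local:.2e}")  # about 2.9e-7

# If the search effectively looked in N independent places, the global
# p-value is inflated by roughly that trial factor:
N = 100
p_global = 1.0 - (1.0 - p_local) ** N
print(f"global p with trial factor {N}: {p_global:.2e}")
```

Real look-elsewhere corrections are subtler, since neighbouring mass hypotheses are correlated, but the trial-factor picture explains why a 5 sigma local excess can be far less significant globally.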

  20. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  1. Statistical mechanics of spatial evolutionary games

    International Nuclear Information System (INIS)

    Miekisz, Jacek

    2004-01-01

    We discuss the long-run behaviour of stochastic dynamics of many interacting players in spatial evolutionary games. In particular, we investigate the effect of the number of players and the noise level on the stochastic stability of Nash equilibria. We discuss similarities and differences between systems of interacting players maximizing their individual payoffs and particles minimizing their interaction energy. We use concepts and techniques of statistical mechanics to study game-theoretic models. In order to obtain results in the case of the so-called potential games, we analyse the thermodynamic limit of the appropriate models of interacting particles

  2. Agile Systems Engineering-Kanban Scheduling Subsection

    Science.gov (United States)

    2017-03-10

    Community Programs, derived an initial methodology for evaluating software-related MPTs that might be applicable in systems engineering through surveys...was also conducted using a different simulation tool (SIMIO) as a reference. Appendix A provides a short white paper on the methodology and the...resource and task assignment, and the Incremental Commitment Spiral Model (ICSM) to manage constantly changing risks, clarifying objectives and

  3. [Continuity of hospital identifiers in hospital discharge data - Analysis of the nationwide German DRG Statistics from 2005 to 2013].

    Science.gov (United States)

    Nimptsch, Ulrike; Wengler, Annelene; Mansky, Thomas

    2016-11-01

    In Germany, nationwide hospital discharge data (DRG statistics provided by the research data centers of the Federal Statistical Office and the Statistical Offices of the 'Länder') are increasingly used as a data source for health services research. Within these data, hospitals can be distinguished via their hospital identifier ([Institutionskennzeichen] IK). However, this hospital identifier primarily designates the invoicing unit and is not necessarily equivalent to one hospital location. Aiming to investigate the direction and extent of possible bias in hospital-level analyses, this study examines the continuity of the hospital identifier within a cross-sectional and longitudinal approach and compares the results to official hospital census statistics. Within the DRG statistics from 2005 to 2013, the annual number of hospitals as classified by hospital identifiers was counted for each year of observation. The annual number of hospitals derived from the DRG statistics was compared to the number of hospitals in the official census statistics 'Grunddaten der Krankenhäuser'. Subsequently, the temporal continuity of hospital identifiers in the DRG statistics was analyzed within cohorts of hospitals. Until 2013, the annual number of hospital identifiers in the DRG statistics fell by 175 (from 1,725 to 1,550). This decline affected only providers with small or medium case volume. The number of hospitals identified in the DRG statistics was lower than the number given in the census statistics (e.g., in 2013: 1,550 IKs vs. 1,668 hospitals in the census statistics). The longitudinal analyses revealed that the majority of hospital identifiers persisted over the years of observation, while one fifth of hospital identifiers changed. In cross-sectional studies of German hospital discharge data, separating hospitals via the hospital identifier might lead to underestimating the number of hospitals and to a consequent overestimation of the caseload per hospital. Discontinuities of hospital
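The cohort logic can be sketched with toy records (invented identifiers, not the DRG microdata): which hospital identifiers persist across the years of observation and which appear or disappear.

```python
# Continuity of hospital identifiers (IK) across years (toy sketch).
years = {
    2005: {"IK1", "IK2", "IK3", "IK4"},
    2009: {"IK1", "IK2", "IK4", "IK5"},
    2013: {"IK1", "IK4", "IK5"},
}
persistent = set.intersection(*years.values())  # IKs present in every year
ever = set.union(*years.values())               # IKs seen at least once
changed = ever - persistent                     # appeared, disappeared, or merged
print(sorted(persistent), sorted(changed))
```

In the study's terms, the `changed` set is what makes cross-sectional hospital counts drift away from the census figures.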

  4. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  5. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
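
Since the paper's procedures live in SPSS menus, the following is only a conceptual sketch in Python (not SPSS syntax) of the problem LMM addresses: repeated observations on the same subject are correlated, which violates the independence assumption of plain GLM. All simulation parameters are arbitrary.

```python
import random
import statistics

random.seed(1)

# Simulate six waves of a repeated-measures outcome for 50 subjects:
# y = beta0 + beta1*wave + u_i + e, with a random intercept u_i per subject.
beta0, beta1 = 10.0, 0.5
data = []
for subject in range(50):
    u_i = random.gauss(0, 2.0)          # subject-level random intercept
    for wave in range(6):
        e = random.gauss(0, 1.0)        # occasion-level residual
        data.append((subject, wave, beta0 + beta1 * wave + u_i + e))

# Group observations by subject.
by_subject = {}
for subject, wave, y in data:
    by_subject.setdefault(subject, []).append(y)

# Variance decomposition, the idea underlying a random-intercept LMM:
subject_means = [statistics.mean(ys) for ys in by_subject.values()]
between_var = statistics.variance(subject_means)
within_var = statistics.mean(statistics.variance(ys) for ys in by_subject.values())

# Intraclass correlation: a nonzero ICC means observations on one subject
# are correlated, so treating them as independent (as GLM does) is wrong.
icc = between_var / (between_var + within_var)
```

A random-intercept LMM models exactly this clustering instead of ignoring it.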

  6. Adaptive Statistical Iterative Reconstruction-V Versus Adaptive Statistical Iterative Reconstruction: Impact on Dose Reduction and Image Quality in Body Computed Tomography.

    Science.gov (United States)

    Gatti, Marco; Marchisio, Filippo; Fronda, Marco; Rampado, Osvaldo; Faletti, Riccardo; Bergamasco, Laura; Ropolo, Roberto; Fonio, Paolo

    The aim of this study was to evaluate the impact on dose reduction and image quality of the new iterative reconstruction technique: adaptive statistical iterative reconstruction-V (ASIR-V). Fifty consecutive oncologic patients acted as case controls, undergoing during their follow-up computed tomography scans with both ASIR and ASIR-V. Each study was analyzed in a double-blinded fashion by 2 radiologists. Both quantitative and qualitative analyses of image quality were conducted. Computed tomography scanner radiation output was 38% (29%-45%) lower (P ASIR-V examinations than for the ASIR ones. The quantitative image noise was significantly lower (P ASIR-V. Adaptive statistical iterative reconstruction-V had a higher performance for the subjective image noise (P = 0.01 for 5 mm and P = 0.009 for 1.25 mm), the other parameters (image sharpness, diagnostic acceptability, and overall image quality) being similar (P > 0.05). Adaptive statistical iterative reconstruction-V is a new iterative reconstruction technique that has the potential to provide image quality equal to or greater than ASIR, with a dose reduction of around 40%.

  7. Fundamentals and Catalytic Innovation: The Statistical and Data Management Center of the Antibacterial Resistance Leadership Group.

    Science.gov (United States)

    Huvane, Jacqueline; Komarow, Lauren; Hill, Carol; Tran, Thuy Tien T; Pereira, Carol; Rosenkranz, Susan L; Finnemeyer, Matt; Earley, Michelle; Jiang, Hongyu Jeanne; Wang, Rui; Lok, Judith; Evans, Scott R

    2017-03-15

    The Statistical and Data Management Center (SDMC) provides the Antibacterial Resistance Leadership Group (ARLG) with statistical and data management expertise to advance the ARLG research agenda. The SDMC is active at all stages of a study, including design; data collection and monitoring; data analyses and archival; and publication of study results. The SDMC enhances the scientific integrity of ARLG studies through the development and implementation of innovative and practical statistical methodologies and by educating research colleagues regarding the application of clinical trial fundamentals. This article summarizes the challenges and roles, as well as the innovative contributions in the design, monitoring, and analyses of clinical trials and diagnostic studies, of the ARLG SDMC. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.

  8. Third ordinance amending the ordinance on the approval of drugs treated with ionising radiation or drugs containing radioactive substances

    International Nuclear Information System (INIS)

    1985-01-01

    Amendments: 1) Section 2, sub-section (2), no. 1 - Cr-51, Fe-59, Ga-67, In-111, J-123, J-125, J-131, Co-57, Co-58, P-32, Se-75, Tl-201, Xe-127, Xe-133; 2) Section 3, sub-section (2), no. 2 - Mo-99, Hg-195m, Rb-81, Sn-113, Tc-99m, Au-195m, Kr-81m, In-113m; 3) Section 3a: The prohibitory provisions of section 7, sub-section (1) of the Medical Preparation Act do not apply to radioactive medical preparations which are drugs within the purview of section 2, sub-section (2), no. 4, item (a) of said act. (HP) [de

  9. Excel 2010 for environmental sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J; Horton, Howard F

    2015-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach environmental sciences statistics effectively.  It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical environmental sciences problems.  If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you.  Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in environmental science courses.  Its powerful computational ability and graphical functions make learning statistics much easier than in years past.  However, Excel 2010 for Environmental Sciences Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Eac...

  10. Excel 2016 for physical sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J; Horton, Howard F

    2016-01-01

    This book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical physical science problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel is an effective learning tool for quantitative analyses in environmental science courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Physical Sciences Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel 2016 to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader to use Excel commands to solve specific, easy-to-understand physical science problems. Practice problems are provided at the end of each chapter with their s...

  11. Excel 2016 for environmental sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J; Horton, Howard F

    2016-01-01

    This book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical environmental science problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel is an effective learning tool for quantitative analyses in environmental science courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Environmental Science Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel 2016 to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader to use Excel commands to solve specific, easy-to-understand environmental science problems. Practice problems are provided at the end of each chapte...

  12. Excel 2016 for health services management statistics a guide to solving problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching health services management statistics effectively. Similar to the previously published Excel 2013 for Health Services Management Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical health service management problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in health service courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Health Services Management Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply...

  13. Excel 2013 for environmental sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J; Horton, Howard F

    2015-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach environmental sciences statistics effectively.  It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical environmental science problems.  If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you.  Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in environmental science courses.  Its powerful computational ability and graphical functions make learning statistics much easier than in years past.  However, Excel 2013 for Environmental Sciences Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chap...

  14. Dynamic Graphics in Excel for Teaching Statistics: Understanding the Probability Density Function

    Science.gov (United States)

    Coll-Serrano, Vicente; Blasco-Blasco, Olga; Alvarez-Jareno, Jose A.

    2011-01-01

    In this article, we show a dynamic graphic in Excel that is used to introduce an important concept in our subject, Statistics I: the probability density function. This interactive graphic seeks to facilitate conceptual understanding of the main aspects analysed by the learners.
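
The computation behind such an interactive density graphic is simply tabulating the pdf over a grid and redrawing it as parameters change. A minimal Python sketch for the standard normal density (the choice of distribution is illustrative, not necessarily the one used in the article):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Tabulate the density on a grid from -4 to 4 in steps of 0.1; this table
# is what an interactive plot of the pdf redraws when mu or sigma changes.
grid = [t / 10.0 for t in range(-40, 41)]
curve = [normal_pdf(x) for x in grid]

# Trapezoidal check: the density integrates to approximately 1.
area = sum(0.1 * (a + b) / 2.0 for a, b in zip(curve, curve[1:]))
```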

  15. Statistical prediction of biomethane potentials based on the composition of lignocellulosic biomass

    DEFF Research Database (Denmark)

    Thomsen, Sune Tjalfe; Spliid, Henrik; Østergård, Hanne

    2014-01-01

    Mixture models are introduced as a new and stronger methodology for statistical prediction of biomethane potentials (BPM) from lignocellulosic biomass compared to the linear regression models previously used. A large dataset from literature combined with our own data were analysed using canonical...

  16. Excel 2013 for educational and psychological statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2015-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach educational and psychological statistics effectively. It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical problems in education and psychology. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and practitioners, is also an effective teaching and learning tool for quantitative analyses in statistics courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2013 for Educational and Psychological Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and practitioners how to apply Excel to statistical techniques necessary in their courses and work. E...

  17. Statistics Anxiety and Business Statistics: The International Student

    Science.gov (United States)

    Bell, James A.

    2008-01-01

    Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…

  18. Analysis of Effects of Sensor Multithreading to Generate Local System Event Timelines

    Science.gov (United States)

    2014-03-27

    logging server, which Huang et al. later use for statistical analysis. In addition to sending logs to a MySQL database, the centralized logging server...anomalous increase in the number of log entries over the course of a month instead of appearing as one single malicious entry. 2.3 Windows® Thread...same resources, as previously described in Subsection 4.2.1.1. Of course, the last point is in addition to the medium load scripts crippling the

  19. System for analysing sickness absenteeism in Poland.

    Science.gov (United States)

    Indulski, J A; Szubert, Z

    1997-01-01

    The National System of Sickness Absenteeism Statistics has been functioning in Poland since 1977, as the part of the national health statistics. The system is based on a 15-percent random sample of copies of certificates of temporary incapacity for work issued by all health care units and authorised private medical practitioners. A certificate of temporary incapacity for work is received by every insured employee who is compelled to stop working due to sickness, accident, or due to the necessity to care for a sick member of his/her family. The certificate is required on the first day of sickness. Analyses of disease- and accident-related sickness absenteeism carried out each year in Poland within the statistical system lead to the main conclusions: 1. Diseases of the musculoskeletal and peripheral nervous systems accounting, when combined, for 1/3 of the total sickness absenteeism, are a major health problem of the working population in Poland. During the past five years, incapacity for work caused by these diseases in males increased 2.5 times. 2. Circulatory diseases, and arterial hypertension and ischaemic heart disease in particular (41% and 27% of sickness days, respectively), create an essential health problem among males at productive age, especially, in the 40 and older age group. Absenteeism due to these diseases has increased in males more than two times.

  20. Statistical analysis plan for the Adjunctive Corticosteroid Treatment in Critically Ill Patients with Septic Shock (ADRENAL) trial

    DEFF Research Database (Denmark)

    Billot, Laurent; Venkatesh, Balasubramanian; Myburgh, John

    2017-01-01

    BACKGROUND: The Adjunctive Corticosteroid Treatment in Critically Ill Patients with Septic Shock (ADRENAL) trial, a 3800-patient, multicentre, randomised controlled trial, will be the largest study to date of corticosteroid therapy in patients with septic shock. OBJECTIVE: To describe a statistical...... and statisticians and approved by the ADRENAL management committee. All authors were blind to treatment allocation and to the unblinded data produced during two interim analyses conducted by the Data Safety and Monitoring Committee. The data shells were produced from a previously published protocol. Statistical...... analyses are described in broad detail. Trial outcomes were selected and categorised into primary, secondary and tertiary outcomes, and appropriate statistical comparisons between groups are planned and described in a way that is transparent, available to the public, verifiable and determined before...

  1. Spreadsheets as tools for statistical computing and statistics education

    OpenAIRE

    Neuwirth, Erich

    2000-01-01

    Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education at various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.

  2. The Need for Speed in Rodent Locomotion Analyses

    Science.gov (United States)

    Batka, Richard J.; Brown, Todd J.; Mcmillan, Kathryn P.; Meadows, Rena M.; Jones, Kathryn J.; Haulcomb, Melissa M.

    2016-01-01

    Locomotion analysis is now widely used across many animal species to understand the motor defects in disease, functional recovery following neural injury, and the effectiveness of various treatments. More recently, rodent locomotion analysis has become an increasingly popular method in a diverse range of research. Speed is an inseparable aspect of locomotion that is still not fully understood, and its effects are often not properly incorporated while analyzing data. In this hybrid manuscript, we accomplish three things: (1) review the interaction between speed and locomotion variables in rodent studies, (2) comprehensively analyze the relationship between speed and 162 locomotion variables in a group of 16 wild-type mice using the CatWalk gait analysis system, and (3) develop and test a statistical method in which locomotion variables are analyzed and reported in the context of speed. Notable results include the following: (1) over 90% of variables, reported by CatWalk, were dependent on speed with an average R2 value of 0.624, (2) most variables were related to speed in a nonlinear manner, (3) current methods of controlling for speed are insufficient, and (4) the linear mixed model is an appropriate and effective statistical method for locomotion analyses that is inclusive of speed-dependent relationships. Given the pervasive dependency of locomotion variables on speed, we maintain that valid conclusions from locomotion analyses cannot be made unless they are analyzed and reported within the context of speed. PMID:24890845
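
The speed dependence the authors quantify (R² values for locomotion variables regressed on speed) can be illustrated with a toy ordinary-least-squares fit in Python; the stride and speed numbers below are invented, not CatWalk output:

```python
import statistics

# Invented stride-length measurements (cm) at different walking speeds (cm/s).
speed = [10, 15, 20, 25, 30, 35, 40]
stride = [4.1, 4.8, 5.4, 5.9, 6.3, 6.6, 6.9]

# Ordinary least-squares fit of stride length on speed.
mx, my = statistics.mean(speed), statistics.mean(stride)
sxx = sum((x - mx) ** 2 for x in speed)
sxy = sum((x - mx) * (y - my) for x, y in zip(speed, stride))
slope = sxy / sxx
intercept = my - slope * mx

# R^2: the share of stride-length variance explained by speed alone.
ss_tot = sum((y - my) ** 2 for y in stride)
ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(speed, stride))
r_squared = 1.0 - ss_res / ss_tot
```

A high R² for such a variable is exactly why the authors argue it cannot be interpreted outside the context of speed.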

  3. Automating bibliometric analyses using Taverna scientific workflows : A tutorial on integrating Web Services

    NARCIS (Netherlands)

    Guler, A.T.; Waaijer, C.J.F.; Mohammed, Y.; Palmblad, M.

    2016-01-01

    Quantitative analysis of the scientific literature is a frequent task in bibliometrics. Several large online resources collect and disseminate bibliographic information, paving the way for broad analyses and statistics. The Europe PubMed Central (PMC) and its Web Services is one of these resources,

  4. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    Science.gov (United States)

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence work collectively, but
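
The idea of an exchange format whose descriptor entries carry implementation and version metadata can be sketched with Python's standard XML tooling. The element and attribute names below are illustrative only, not the actual QSAR-ML schema:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical dataset descriptor in the spirit of QSAR-ML.
root = ET.Element("qsarDataset")
ET.SubElement(root, "descriptor",
              id="XLogP", implementation="CDK", version="1.4.19")
compound = ET.SubElement(root, "compound", id="mol001")
ET.SubElement(compound, "value", descriptorRef="XLogP").text = "2.13"

xml_text = ET.tostring(root, encoding="unicode")

# Because the descriptor id, implementation and version travel with the data,
# a reader can reconstruct the exact computational setup.
parsed = ET.fromstring(xml_text)
impl = parsed.find("descriptor").get("implementation")
val = float(parsed.find("compound/value").text)
```

This is the reproducibility point of the paper: the dataset file itself records which software produced each descriptor value.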

  5. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join

  6. Interpreting Statistical Significance Test Results: A Proposed New "What If" Method.

    Science.gov (United States)

    Kieffer, Kevin M.; Thompson, Bruce

    As the 1994 publication manual of the American Psychological Association emphasized, "p" values are affected by sample size. As a result, it can be helpful to interpret the results of statistical significance tests in a sample size context by conducting so-called "what if" analyses. However, these methods can be inaccurate…

  7. The use of mass spectrometry for analysing metabolite biomarkers in epidemiology: methodological and statistical considerations for application to large numbers of biological samples.

    Science.gov (United States)

    Lind, Mads V; Savolainen, Otto I; Ross, Alastair B

    2016-08-01

    Data quality is critical for epidemiology, and as scientific understanding expands, the range of data available for epidemiological studies and the types of tools used for measurement have also expanded. It is essential for the epidemiologist to have a grasp of the issues involved with different measurement tools. One tool that is increasingly being used for measuring biomarkers in epidemiological cohorts is mass spectrometry (MS), because of the high specificity and sensitivity of MS-based methods and the expanding range of biomarkers that can be measured. Further, the ability of MS to quantify many biomarkers simultaneously is advantageous compared to single-biomarker methods. However, as with all methods used to measure biomarkers, there are a number of pitfalls to consider which may have an impact on results when used in epidemiology. In this review we discuss the use of MS for biomarker analyses, focusing on metabolites and their application and potential issues related to large-scale epidemiology studies, the use of MS "omics" approaches for biomarker discovery and how MS-based results can be used for increasing biological knowledge gained from epidemiological studies. Better understanding of the possibilities and possible problems related to MS-based measurements will help the epidemiologist in their discussions with analytical chemists and lead to the use of the most appropriate statistical tools for these data.

  8. Real-world visual statistics and infants' first-learned object names.

    Science.gov (United States)

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).

  9. SOR 90-165, 8 March 1990, Atomic Energy Control Regulations, amendment

    International Nuclear Information System (INIS)

    1990-01-01

    Subsections 7(4) and (5) of the Regulations were revoked by this amendment. Those subsections required the AECB, when deciding whether or not to authorise export of a prescribed substance, to be satisfied about the price and quantity of that substance. The two subsections were replaced by new provisions simply authorising the Board to issue an export licence and to impose conditions on the licence in the interests of health, safety and security. (NEA) [fr

  10. Register-based statistics statistical methods for administrative data

    CERN Document Server

    Wallgren, Anders

    2014-01-01

    This book provides a comprehensive and up to date treatment of  theory and practical implementation in Register-based statistics. It begins by defining the area, before explaining how to structure such systems, as well as detailing alternative approaches. It explains how to create statistical registers, how to implement quality assurance, and the use of IT systems for register-based statistics. Further to this, clear details are given about the practicalities of implementing such statistical methods, such as protection of privacy and the coordination and coherence of such an undertaking. Thi

  11. Cancer Statistics

    Science.gov (United States)

    Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ...

  12. Report of analyses for light hydrocarbons in ground water

    International Nuclear Information System (INIS)

    Dromgoole, E.L.

    1982-04-01

    This report contains on microfiche the results of analyses for methane, ethane, propane, and butane in 11,659 ground water samples collected in 47 western and three eastern 1° x 2° quadrangles of the National Topographic Map Series (Figures 1 and 2), along with a brief description of the analytical technique used and some simple, descriptive statistics. The ground water samples were collected as part of the National Uranium Resource Evaluation (NURE) hydrogeochemical and stream sediment reconnaissance. Further information on the ground water samples can be obtained by consulting the NURE data reports for the individual quadrangles. This information includes (1) measurements characterizing water samples (pH, conductivity, and alkalinity), (2) physical measurements, where applicable (water temperature, well description, and other measurements), and (3) elemental analyses

  13. Guided waves based SHM systems for composites structural elements: statistical analyses finalized at probability of detection definition and assessment

    Science.gov (United States)

    Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.

    2015-03-01

    Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have represented one of the most promising innovations in the field of aerostructures for many years, especially where composite materials (fiber-reinforced resins) are concerned. Layered materials still suffer today from drastic reductions of maximum allowable stress values during the design phase, as well as from costly and recurrent inspections during the life cycle, which do not permit their structural and economic potential to be fully exploited in today's aircraft. Those penalizing measures are necessary mainly to account for the presence of undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonding); in order to relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate possible flaws can be considered (an SHM system), once its effectiveness and reliability have been statistically demonstrated via a rigorous Probability Of Detection (POD) function definition and evaluation. This paper presents an experimental approach with a statistical procedure for the evaluation of the detection threshold of a guided-waves-based SHM system oriented to delamination detection on a typical composite layered wing panel. The experimental tests are mostly oriented to characterizing the statistical distribution of measurements and damage metrics, as well as to characterizing the system detection capability using this approach. Numerically, it is not possible to substitute the part of the experimental tests aimed at POD, where the noise in the system response is crucial. Results of the experiments are presented in the paper and analyzed.

  14. A comparison of statistical methods for identifying out-of-date systematic reviews.

    Directory of Open Access Journals (Sweden)

    Porjai Pattanittum

    Full Text Available BACKGROUND: Systematic reviews (SRs) can provide accurate and reliable evidence, typically about the effectiveness of health interventions. Evidence is dynamic, and if SRs are out-of-date this information may not be useful; it may even be harmful. This study aimed to compare five statistical methods to identify out-of-date SRs. METHODS: A retrospective cohort of SRs registered in the Cochrane Pregnancy and Childbirth Group (CPCG), published between 2008 and 2010, were considered for inclusion. For each eligible CPCG review, data were extracted and "3-years previous" meta-analyses were assessed for the need to update, given the data from the most recent 3 years. Each of the five statistical methods was used, with random effects analyses throughout the study. RESULTS: Eighty reviews were included in this study; most were in the area of induction of labour. The numbers of reviews identified as being out-of-date using the Ottawa, recursive cumulative meta-analysis (CMA), and Barrowman methods were 34, 7, and 7 respectively. No reviews were identified as being out-of-date using the simulation-based power method, or the CMA for sufficiency and stability method. The overall agreement among the three discriminating statistical methods was slight (Kappa = 0.14; 95% CI 0.05 to 0.23). The recursive cumulative meta-analysis, Ottawa, and Barrowman methods were practical according to the study criteria. CONCLUSION: Our study shows that three practical statistical methods could be applied to examine the need to update SRs.
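    The slight agreement reported above (Kappa = 0.14) is Cohen's kappa, which compares observed agreement between two raters (here, two statistical methods) against the agreement expected by chance from the marginal totals. A minimal sketch; the 2x2 table below is hypothetical, not the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table.
    table[i][j] = count of items rated category i by one method and j by the other."""
    k = len(table)
    total = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / total           # observed agreement
    row_m = [sum(row) / total for row in table]                  # marginal proportions
    col_m = [sum(table[r][c] for r in range(k)) / total for c in range(k)]
    p_exp = sum(r * c for r, c in zip(row_m, col_m))             # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical table: two methods classifying 80 reviews as out-of-date yes/no
# rows/cols: [both-yes, yes/no], [no/yes, both-no]
kappa = cohens_kappa([[5, 2], [9, 64]])
```

    Values near 0 indicate chance-level agreement; on the commonly used Landis-Koch labels, the study's 0.14 falls in the "slight" band.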

  15. Statistical analysis of Thematic Mapper Simulator data for the geobotanical discrimination of rock types in southwest Oregon

    Science.gov (United States)

    Morrissey, L. A.; Weinstock, K. J.; Mouat, D. A.; Card, D. H.

    1984-01-01

    An evaluation of Thematic Mapper Simulator (TMS) data for the geobotanical discrimination of rock types based on vegetative cover characteristics is addressed in this research. A methodology for accomplishing this evaluation utilizing univariate and multivariate techniques is presented. TMS data acquired with a Daedalus DEI-1260 multispectral scanner were integrated with vegetation and geologic information for subsequent statistical analyses, which included a chi-square test, an analysis of variance, stepwise discriminant analysis, and Duncan's multiple range test. Results indicate that ultramafic rock types are spectrally separable from nonultramafics based on vegetative cover through the use of statistical analyses.

  16. Software Used to Generate Cancer Statistics - SEER Cancer Statistics

    Science.gov (United States)

    Videos that highlight topics and trends in cancer statistics and definitions of statistical terms. Also software tools for analyzing and reporting cancer statistics, which are used to compile SEER's annual reports.

  17. An estimator for statistical anisotropy from the CMB bispectrum

    International Nuclear Information System (INIS)

    Bartolo, N.; Dimastrogiovanni, E.; Matarrese, S.; Liguori, M.; Riotto, A.

    2012-01-01

    Various data analyses of the Cosmic Microwave Background (CMB) provide observational hints of statistical isotropy breaking. Some of these features can be studied within the framework of primordial vector fields in inflationary theories, which generally display some level of statistical anisotropy both in the power spectrum and in higher-order correlation functions. Motivated by these observations and the recent theoretical developments in the study of primordial vector fields, we develop the formalism necessary to extract statistical anisotropy information from the three-point function of the CMB temperature anisotropy. We employ a simplified vector field model and parametrize the bispectrum of curvature fluctuations in such a way that all the information about statistical anisotropy is encoded in some parameters λ_LM (which measure the ratio of the anisotropic to the isotropic bispectrum amplitudes). For such a template bispectrum, we compute an optimal estimator for λ_LM and the expected signal-to-noise ratio. We estimate that, for f_NL ≅ 30, an experiment like Planck can be sensitive to a ratio of the anisotropic to the isotropic amplitudes of the bispectrum as small as 10%. Our results are complementary to the information coming from a power spectrum analysis and particularly relevant for those models where statistical anisotropy turns out to be suppressed in the power spectrum but not negligible in the bispectrum.

  18. Excel 2016 for educational and psychological statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching educational and psychological statistics effectively. Similar to the previously published Excel 2013 for Educational and Psychological Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical education and psychology problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in education and psychology courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Educational and Psychological Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and man...

  19. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    Science.gov (United States)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which is suitable only for non-constrained data such as absolute values. However, the closed character of waste composition data is often ignored in analyses. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing the mean, standard deviation or correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
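    The transformation called for above is typically a log-ratio transform from compositional data analysis (in the Aitchison tradition). A minimal sketch of the centred log-ratio (CLR) transform, with a hypothetical four-fraction waste composition (not the study's data):

```python
import math

def clr(parts):
    """Centred log-ratio transform: log of each part over the geometric mean.
    Parts must be strictly positive; the constant they sum to is irrelevant."""
    log_parts = [math.log(p) for p in parts]
    g = sum(log_parts) / len(log_parts)   # log of the geometric mean
    return [lp - g for lp in log_parts]

# Hypothetical waste composition in % of wet mass (food, paper, plastic, other)
coords = clr([55.0, 25.0, 15.0, 5.0])
```

    CLR coordinates sum to zero by construction and are scale-invariant, so means, standard deviations and correlations computed on them avoid the spurious negative associations that the closed, sum-to-100 scale induces.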

  20. Vector field statistical analysis of kinematic and force trajectories.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2013-09-27

    When investigating the dynamics of three-dimensional multi-body biomechanical systems it is often difficult to derive spatiotemporally directed predictions regarding experimentally induced effects. A paradigm of 'non-directed' hypothesis testing has emerged in the literature as a result. Non-directed analyses typically consist of ad hoc scalar extraction, an approach which substantially simplifies the original, highly multivariate datasets (many time points, many vector components). This paper describes a commensurately multivariate method as an alternative to scalar extraction. The method, called 'statistical parametric mapping' (SPM), uses random field theory to objectively identify field regions which co-vary significantly with the experimental design. We compared SPM to scalar extraction by re-analyzing three publicly available datasets: 3D knee kinematics, a ten-muscle force system, and 3D ground reaction forces. Scalar extraction was found to bias the analyses of all three datasets by failing to consider sufficient portions of the dataset, and/or by failing to consider covariance amongst vector components. SPM overcame both problems by conducting hypothesis testing at the (massively multivariate) vector trajectory level, with random field corrections simultaneously accounting for temporal correlation and vector covariance. While SPM has been widely demonstrated to be effective for analyzing 3D scalar fields, the current results are the first to demonstrate its effectiveness for 1D vector field analysis. It was concluded that SPM offers a generalized, statistically comprehensive solution to scalar extraction's over-simplification of vector trajectories, thereby making it useful for objectively guiding analyses of complex biomechanical systems. © 2013 Published by Elsevier Ltd. All rights reserved.

  1. Understanding Statistics and Statistics Education: A Chinese Perspective

    Science.gov (United States)

    Shi, Ning-Zhong; He, Xuming; Tao, Jian

    2009-01-01

    In recent years, statistics education in China has made great strides. However, there still exists a fairly large gap with the advanced levels of statistics education in more developed countries. In this paper, we identify some existing problems in statistics education in Chinese schools and make some proposals as to how they may be overcome. We…

  2. Excel 2007 for Business Statistics A Guide to Solving Practical Business Problems

    CERN Document Server

    Quirk, Thomas J

    2012-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach business statistics effectively. It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical business problems. If understanding statistics isn't your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in business courses. Its powerful computat

  3. Statistical issues in the parton distribution analysis of the Tevatron jet data

    International Nuclear Information System (INIS)

    Alekhin, S.; Bluemlein, J.; Moch, S.O.; Hamburg Univ.

    2012-11-01

    We analyse a tension between the D0 and CDF inclusive jet data and the perturbative QCD calculations, which are based on the ABKM09 and ABM11 parton distribution functions (PDFs), within the nuisance parameter framework. Particular attention is paid to the uncertainties in the nuisance parameters due to the data fluctuations and the PDF errors. We show that, once these uncertainties are taken into account, the nuisance parameters do not demonstrate a statistically significant excess. A statistical bias of the estimator based on the nuisance parameters is also discussed.

  4. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  5. On statistical methods for analysing the geographical distribution of cancer cases near nuclear installations

    International Nuclear Information System (INIS)

    Bithell, J.F.; Stone, R.A.

    1989-01-01

    This paper sets out to show that the epidemiological methods most commonly used can be improved. When analysing geographical data it is necessary to consider location. The most obvious quantification of location is ranked distance, though other measures which may be more meaningful in relation to aetiology may be substituted. A test based on distance ranks, the ''Poisson maximum test'', depends on the maximum of the observed relative risk in regions of increasing size, but with the significance level adjusted for selection. Applying this test to data from Sellafield and Sizewell shows that the excess of leukaemia incidence observed at Seascale, near Sellafield, is not an artefact due to data selection by region, and that the excess probably results from a genuine, if as yet unidentified, cause (there being little evidence of any other locational association once the Seascale cases have been removed). So far as Sizewell is concerned, geographical proximity to the nuclear power station does not seem particularly important. (author)

  6. Night shift work and breast cancer risk: what do the meta-analyses tell us?

    Science.gov (United States)

    Pahwa, Manisha; Labrèche, France; Demers, Paul A

    2018-05-22

    Objectives This paper aims to compare results, assess the quality, and discuss the implications of recently published meta-analyses of night shift work and breast cancer risk. Methods A comprehensive search was conducted for meta-analyses published from 2007-2017 that included at least one pooled effect size (ES) for breast cancer associated with any night shift work exposure metric and were accompanied by a systematic literature review. Pooled ES from each meta-analysis were ascertained with a focus on ever/never exposure associations. Assessments of heterogeneity and publication bias were also extracted. The AMSTAR 2 checklist was used to evaluate quality. Results Seven meta-analyses, published from 2013-2016, collectively included 30 cohort and case-control studies spanning 1996-2016. Five meta-analyses reported pooled ES for ever/never night shift work exposure; these ranged from 0.99 [95% confidence interval (CI) 0.95-1.03, N=10 cohort studies] to 1.40 (95% CI 1.13-1.73, N=9 high quality studies). Estimates for duration, frequency, and cumulative night shift work exposure were scant and mostly not statistically significant. Meta-analyses of cohort, Asian, and more fully-adjusted studies generally resulted in lower pooled ES than case-control, European, American, or minimally-adjusted studies. Most reported statistically significant between-study heterogeneity. Publication bias was not evident in any of the meta-analyses. Only one meta-analysis was strong in critical quality domains. Conclusions Fairly consistent elevated pooled ES were found for ever/never night shift work and breast cancer risk, but results for other shift work exposure metrics were inconclusive. Future evaluations of shift work should incorporate high quality meta-analyses that better appraise individual study quality.

  7. Sampling, Probability Models and Statistical Reasoning: Statistical Inference

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning: Statistical Inference. Mohan Delampady, V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...

  8. Analysing the Wu metric on a class of eggs in Cn–II

    Indian Academy of Sciences (India)

    Home; Journals; Proceedings – Mathematical Sciences; Volume 127; Issue 3. Analysing the Wu metric on a class of eggs in Cn – II .... G P BALAKUMAR, PRACHI MAHAJAN. Indian Statistical Institute, Chennai 600 113, India; Department of Mathematics, Indian Institute of Technology – Bombay, Mumbai 400 076, India ...

  9. Quantitative Analysis of Probabilistic Models of Software Product Lines with Statistical Model Checking

    DEFF Research Database (Denmark)

    ter Beek, Maurice H.; Legay, Axel; Lluch Lafuente, Alberto

    2015-01-01

    We investigate the suitability of statistical model checking techniques for analysing quantitative properties of software product line models with probabilistic aspects. For this purpose, we enrich the feature-oriented language FLAN with action rates, which specify the likelihood of exhibiting pa...

  10. The statistical process control methods - SPC

    Directory of Open Access Journals (Sweden)

    Floreková Ľubica

    1998-03-01

    Full Text Available Methods of statistical evaluation of quality – SPC (item 20 of the documentation system of quality control of the ISO 9000 series norms) for various processes, products and services belong amongst the basic qualitative methods that enable us to analyse and compare data pertaining to various quantitative parameters. They also enable us, based on the latter, to propose suitable interventions with the aim of improving these processes, products and services. The theoretical basis and applicability of the principles of: - diagnostics of cause and effect, - Pareto analysis and the Lorenz curve, - number distributions and frequency curves of random variable distributions, - Shewhart control charts, are presented in the contribution.
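    As one concrete example of the tools listed above, a Shewhart individuals (I) chart places 3-sigma limits around the process mean, with sigma estimated from the average moving range. A minimal sketch; the measurements are invented for illustration:

```python
def individuals_chart(xs):
    """Shewhart I-chart: centre line, control limits and out-of-control indices.
    Sigma is estimated as (mean moving range) / d2, with d2 = 1.128 for ranges of 2."""
    mean = sum(xs) / len(xs)
    mrs = [abs(b - a) for a, b in zip(xs, xs[1:])]   # moving ranges of adjacent points
    sigma = (sum(mrs) / len(mrs)) / 1.128
    lcl, ucl = mean - 3 * sigma, mean + 3 * sigma
    out = [i for i, x in enumerate(xs) if not lcl <= x <= ucl]
    return mean, lcl, ucl, out

# Hypothetical measurements; the last value simulates a process shift
centre, lcl, ucl, out = individuals_chart([10.1, 9.8, 10.2, 10.0, 9.9, 10.3, 10.1, 14.0])
```

    Points outside (lcl, ucl) are the chart's signals for intervention, which is the "suitable intervention" step the abstract describes.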

  11. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005 the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics published in the Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004). The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Energy taxes, precautionary stock fees and oil pollution fees

  12. Assessment and statistics of surgically induced astigmatism.

    Science.gov (United States)

    Naeser, Kristian

    2008-05-01

    The aim of the thesis was to develop methods for assessment of surgically induced astigmatism (SIA) in individual eyes, and in groups of eyes. The thesis is based on 12 peer-reviewed publications, published over a period of 16 years. In these publications older and contemporary literature was reviewed(1). A new method (the polar system) for analysis of SIA was developed. Multivariate statistical analysis of refractive data was described(2-4). Clinical validation studies were performed. Descriptions of a cylinder surface with polar values and with differential geometry were compared. The main results were: refractive data in the form of sphere, cylinder and axis may define an individual patient or data set, but are unsuited for mathematical and statistical analyses(1). The polar value system converts net astigmatisms to orthonormal components in dioptric space. A polar value is the difference in meridional power between two orthogonal meridians(5,6). Any pair of polar values, separated by an arch of 45 degrees, characterizes a net astigmatism completely(7). The two polar values represent the net curvital and net torsional power over the chosen meridian(8). The spherical component is described by the spherical equivalent power. Several clinical studies demonstrated the efficiency of multivariate statistical analysis of refractive data(4,9-11). Polar values and formal differential geometry describe astigmatic surfaces with similar concepts and mathematical functions(8). Other contemporary methods, such as Long's power matrix, Holladay's and Alpins' methods, Zernike(12) and Fourier analyses(8), are correlated to the polar value system. In conclusion, analysis of SIA should be performed with polar values or other contemporary component systems. The study was supported by Statens Sundhedsvidenskabeligt Forskningsråd, Cykelhandler P. Th. Rasmussen og Hustrus Mindelegat, Hotelejer Carl Larsen og Hustru Nicoline Larsens Mindelegat, Landsforeningen til Vaern om Synet
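    A sketch of the polar-value decomposition described above, assuming the definition KP(φ) = M·cos 2(α − φ), where M is the net cylinder power and α its axis; the pair KP(φ), KP(φ+45°) then characterizes the net astigmatism completely. The function name and the numbers are illustrative, not taken from the thesis:

```python
import math

def polar_values(cyl, axis_deg, meridian_deg=0.0):
    """Polar values of a net cylinder `cyl` (dioptres) at `axis_deg`, over a
    chosen reference meridian. Returns (KP, KP45): the net curvital and net
    torsional components (assumed definition KP(phi) = cyl * cos 2(axis - phi))."""
    d = math.radians(axis_deg - meridian_deg)
    kp = cyl * math.cos(2 * d)                  # curvital component over the meridian
    kp45 = cyl * math.cos(2 * d - math.pi / 2)  # torsional component, 45 deg away
    return kp, kp45

# Illustrative: a 2.0 D cylinder at axis 0, referred to the 0-degree meridian
kp, kp45 = polar_values(2.0, 0.0)
```

    Because the two components are orthonormal, the net cylinder magnitude is recovered as sqrt(KP² + KP45²); this is what makes polar values suitable for averaging and multivariate statistics where raw (sphere, cylinder, axis) triples are not.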

  13. Analysis of QCD sum rule based on the maximum entropy method

    International Nuclear Information System (INIS)

    Gubler, Philipp

    2012-01-01

    The QCD sum rule was developed about thirty years ago and has been used up to the present to calculate various physical quantities of hadrons. In conventional analyses, however, it has been necessary to assume a 'pole + continuum' form for the spectral function, so application of the method runs into difficulties when this assumption is not satisfied. In order to avoid this difficulty, an analysis making use of the maximum entropy method (MEM) has been developed by the present author. It is reported here how far this new method can be successfully applied. In the first section, the general features of the QCD sum rule are introduced. In section 2, it is discussed why the analysis by the QCD sum rule based on the MEM is so effective. In section 3, the MEM analysis process is described: in subsection 3.1 the likelihood function and prior probability are considered, and in subsection 3.2 numerical analyses are picked up. In section 4, some applications are described, starting with ρ mesons, then charmonium at finite temperature, and finally recent developments. Some figures of the spectral functions are shown. In section 5, a summary of the present analysis method and a future outlook are given. (S. Funahashi)

  14. Frog Statistics

    Science.gov (United States)

    Whole Frog Project and Virtual Frog Dissection Statistics wwwstats output for January 1 through duplicate or extraneous accesses. For example, in these statistics, while a POST requesting an image is as well. Note that this under-represents the bytes requested. Starting date for following statistics

  15. Statistical Control Charts: Performances of Short Term Stock Trading in Croatia

    Directory of Open Access Journals (Sweden)

    Dumičić Ksenija

    2015-03-01

    Full Text Available Background: The stock exchange, as a regulated financial market, reflects the economic development level of modern economies. The stock market indicates the mood of investors in the development of a country and is an important ingredient for growth. Objectives: This paper aims to introduce an additional statistical tool to support the decision-making process in stock trading, and it investigates the use of statistical process control (SPC) methods in the stock trading process. Methods/Approach: The individual (I), exponentially weighted moving average (EWMA) and cumulative sum (CUSUM) control charts were used for gaining trade signals. The open and the average prices of CROBEX10 index stocks on the Zagreb Stock Exchange were used in the analysis. The capabilities of statistical control charts for stock trading in the short run were analysed. Results: The statistical control chart analysis pointed out too many signals to buy or sell stocks. Most of them are considered false alarms. The statistical control charts thus proved not to be very useful in stock trading or in portfolio analysis. Conclusions: The presence of non-normality and autocorrelation has a great impact on the performance of statistical control charts. It is assumed that if these two problems were solved, the use of statistical control charts in portfolio analysis could be greatly improved.
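    Of the three charts used above, the EWMA chart is the least standard to implement by hand, so a minimal sketch may help. The in-control target and sigma are supplied from a reference period, and the price series is hypothetical, not CROBEX10 data:

```python
def ewma_signals(xs, mu, sigma, lam=0.2, L=3.0):
    """Indices where an EWMA statistic leaves its time-varying L-sigma limits.
    mu and sigma describe the in-control process (e.g. from a reference period)."""
    z, out = mu, []
    for i, x in enumerate(xs):
        z = lam * x + (1 - lam) * z                       # exponentially weighted mean
        var_ratio = lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1)))
        if abs(z - mu) > L * sigma * var_ratio ** 0.5:    # exact (not asymptotic) limits
            out.append(i)
    return out

# Hypothetical price series with an upward shift after the fifth observation
signals = ewma_signals([10.0] * 5 + [13.0] * 10, mu=10.0, sigma=1.0)
```

    Small values of lam make the chart sensitive to small sustained shifts, which is also why, on noisy and autocorrelated stock prices, it fires the many false alarms the study reports.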

  16. Statistical issues in searches for new phenomena in High Energy Physics

    Science.gov (United States)

    Lyons, Louis; Wardle, Nicholas

    2018-03-01

    Many analyses of data in High Energy Physics are concerned with searches for New Physics. We review the statistical issues that arise in such searches, and then illustrate these using the specific example of the recent successful search for the Higgs boson, produced in collisions between high energy protons at CERN’s Large Hadron Collider.

  17. Meta-analyses of the 5-HTTLPR polymorphisms and post-traumatic stress disorder.

    Directory of Open Access Journals (Sweden)

    Fernando Navarro-Mateu

    Full Text Available OBJECTIVE: To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. METHODS, DATA SOURCES: Potential studies were identified through the PubMed/MEDLINE, EMBASE, Web of Science (Web of Knowledge, WoK), PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network) databases up until December 2011. STUDY SELECTION: Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. DATA EXTRACTION: Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. STATISTICAL ANALYSIS: A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L') and the recessive model (SS/L+ and S'S'/L'+), was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots, Cochran's Q-statistic and the I2 index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. RESULTS: 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses, and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD, but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score) influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. CONCLUSIONS: Current evidence does not support a direct effect of 5-HTTLPR
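    The random-effects pooled OR the abstract describes is conventionally the DerSimonian-Laird estimate, which widens the fixed-effect inverse-variance weights by an estimated between-study variance tau². A minimal sketch under that assumption; the per-study ORs and standard errors below are invented for illustration, not the meta-analysis data:

```python
import math

def dersimonian_laird(log_ors, ses):
    """Random-effects pooled OR and 95% CI from per-study log-ORs and their SEs."""
    w = [1.0 / s ** 2 for s in ses]                              # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_ors))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_ors) - 1)) / c)                # between-study variance
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]                  # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return math.exp(pooled), (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))

# Invented example: four studies' ORs, with standard errors of log(OR)
log_ors = [math.log(o) for o in (1.2, 0.9, 1.5, 1.1)]
pooled_or, ci = dersimonian_laird(log_ors, [0.15, 0.20, 0.25, 0.10])
```

    Cochran's Q computed here, together with I² = max(0, (Q − df)/Q), quantifies the between-study heterogeneity that the abstract checks with forest plots.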

  18. Measurements and statistical analyses of indoor radon concentrations in Tokyo and surrounding areas

    International Nuclear Information System (INIS)

    Sugiura, Shiroharu; Suzuki, Takashi; Inokoshi, Yukio

    1995-01-01

    Since the UNSCEAR report published in 1982, radiation exposure of the respiratory tract due to radon and its progeny has been regarded as the single largest contributor to the natural radiation exposure of the general public. In Japan, measurements of radon gas concentrations in many types of buildings have been surveyed by national and private institutes. We also carried out measurements of radon gas concentrations in different types of residential buildings in Tokyo and its adjoining prefectures from October 1988 to September 1991, to evaluate the potential radiation risk to the people living there. One or two simplified passive radon monitors were set up in each of the 34 residential buildings located in the above-mentioned area, for an exposure period of 3 months each. Comparing the average concentrations across buildings of different materials and structures, those in the steel-and-concrete buildings were always higher than those in the wooden and the prefabricated mortared buildings. The radon concentrations proved to be higher in autumn and winter, and lower in spring and summer. Radon concentrations in an underground room of a steel-and-concrete building showed the highest value throughout our investigation, and statistically significant seasonal variation was detected by the X-11 method developed by the U.S. Bureau of the Census. The values measured in a room on the first floor of the same building also showed seasonal variation, but the phase of variation was different. Another multivariate analysis suggested that the building material and structure are the most important factors determining the level of radon concentration, among other factors such as the age of the building and the use of ventilators. (author)

  19. First study of correlation between oleic acid content and SAD gene polymorphism in olive oil samples through statistical and bayesian modeling analyses.

    Science.gov (United States)

    Ben Ayed, Rayda; Ennouri, Karim; Ercişli, Sezai; Ben Hlima, Hajer; Hanana, Mohsen; Smaoui, Slim; Rebai, Ahmed; Moreau, Fabienne

    2018-04-10

    Virgin olive oil is appreciated for its particular aroma and taste and is recognized worldwide for its nutritional value and health benefits. Olive oil contains a vast range of healthy compounds, such as monounsaturated free fatty acids, especially oleic acid. The SAD.1 polymorphism, localized in the stearoyl-acyl carrier protein desaturase gene (SAD), was genotyped and shown to be associated with the oleic acid composition of olive oil samples. However, the effect of polymorphisms in fatty acid-related genes on the distribution of monounsaturated and saturated fatty acids in Tunisian olive oil varieties is not understood. Seventeen Tunisian olive-tree varieties were selected for fatty acid content analysis by gas chromatography. The association of SAD.1 genotypes with fatty acid composition was studied by statistical and Bayesian modeling analyses. Fatty acid content analysis showed, interestingly, that some Tunisian virgin olive oil varieties could be classified as functional foods and nutraceuticals due to their particular richness in oleic acid. In fact, the TT-SAD.1 genotype was found to be associated with a higher proportion of monounsaturated fatty acids (MUFA), mainly oleic acid (C18:1) (r = -0.79, p …). A SAD.1 association with the oleic acid composition of olive oil was identified among the studied varieties. This correlation fluctuated between the studied varieties, which might elucidate the variability in lipidic composition among them, thereby reflecting genetic diversity through differences in gene expression and biochemical pathways. The SAD locus would represent an excellent marker for identification among virgin olive oils based on lipidic composition.

  20. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

    Reasoning skill is needed by everyone in the globalization era, because each person must be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information, and to draw conclusions from it. Developing this skill can be done at various levels of education. However, the skill remains low because many people, students included, assume that statistics is merely the ability to count and use formulas. Students also still hold negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in terms of statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample was 32 mathematics education students who had taken the descriptive statistics course. The mean of the misconception test was 49.7 (standard deviation 10.6), whereas the mean of the statistical reasoning skill test was 51.8 (standard deviation 8.5). Taking 65 as the minimum value for achieving the course competence standard, the students' mean values fall below the standard. The misconception results highlight which subtopics need attention; students' misconceptions occurred in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. Statistical reasoning skill was assessed on reasoning from: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
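The abstract's headline comparison, mean scores of 49.7 and 51.8 against a passing standard of 65, can be sketched in a few lines. The normality assumption below is ours, added only to estimate what fraction of students would reach the standard:

```python
from statistics import NormalDist

# Score summary reported in the abstract (mean, standard deviation)
reasoning = NormalDist(mu=51.8, sigma=8.5)
passing_standard = 65  # stated minimum for course competence

# Assuming (hypothetically) normally distributed scores, estimate the
# fraction of students whose reasoning score reaches the standard
frac_passing = 1 - reasoning.cdf(passing_standard)
```

Under that assumption only a small minority clears the bar, consistent with the abstract's conclusion that the mean lies well below the standard.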

  1. Statistical and Fractal Processing of Phase Images of Human Biological Fluids

    Directory of Open Access Journals (Sweden)

    MARCHUK, Y. I.

    2010-11-01

    Full Text Available Complex statistical and fractal analyses are performed of the phase properties of birefringent liquid-crystal networks in optically thin layers prepared from human bile. Within the statistical approach, the authors investigate the values and ranges of variation of the first- to fourth-order statistical moments that characterize the coordinate distributions of phase shifts between the orthogonal amplitude components of laser radiation transformed by human bile with various pathologies. Using the Gram-Charlier method, correlation criteria are established for differentiating the phase maps of pathologically changed liquid-crystal networks. Within the fractal approach, the dimensionalities of self-similar coordinate phase distributions are determined, as well as features of the transformation of the logarithmic dependences of the power spectra of these distributions for various human pathologies.
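The first- to fourth-order statistical moments used to characterize the phase maps can be computed directly; the Gaussian phase image below is a synthetic stand-in, not actual bile-layer data:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic phase-shift map standing in for a measured phase image
phases = rng.normal(0.8, 0.2, size=(256, 256)).ravel()

m1 = phases.mean()                          # 1st moment: mean phase
m2 = phases.std()                           # 2nd moment: spread
m3 = ((phases - m1) ** 3).mean() / m2 ** 3  # 3rd moment: skewness
m4 = ((phases - m1) ** 4).mean() / m2 ** 4  # 4th moment: kurtosis (3 for a Gaussian)
```

Departures of the third and fourth moments from their Gaussian values (0 and 3) are the kind of signal such analyses use to flag pathological changes.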

  2. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…

  3. Evaluation and projection of daily temperature percentiles from statistical and dynamical downscaling methods

    Directory of Open Access Journals (Sweden)

    A. Casanueva

    2013-08-01

    Full Text Available The study of extreme events has become of great interest in recent years due to their direct impact on society. Extremes are usually evaluated by using extreme indicators, based on order statistics on the tail of the probability distribution function (typically percentiles. In this study, we focus on the tail of the distribution of daily maximum and minimum temperatures. For this purpose, we analyse high (95th and low (5th percentiles in daily maximum and minimum temperatures on the Iberian Peninsula, respectively, derived from different downscaling methods (statistical and dynamical. First, we analyse the performance of reanalysis-driven downscaling methods in present climate conditions. The comparison among the different methods is performed in terms of the bias of seasonal percentiles, considering as observations the public gridded data sets E-OBS and Spain02, and obtaining an estimation of both the mean and spatial percentile errors. Secondly, we analyse the increments of future percentile projections under the SRES A1B scenario and compare them with those corresponding to the mean temperature, showing that their relative importance depends on the method, and stressing the need to consider an ensemble of methodologies.
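The percentile-bias score used in the comparison reduces to a one-line computation per season and method; the series below are synthetic, standing in for one observed grid-cell series and one downscaled series with an assumed warm bias:

```python
import numpy as np

rng = np.random.default_rng(0)
# Ten years of synthetic daily Tmax (deg C): "observations" vs. a
# downscaling method with an assumed +1 deg C warm bias
obs = rng.normal(25.0, 6.0, size=3650)
model = obs + rng.normal(1.0, 1.5, size=3650)

# Bias of the 95th percentile, the tail score compared in the study
p95_bias = np.percentile(model, 95) - np.percentile(obs, 95)
```

Because the method also inflates variance slightly, its tail bias exceeds its mean bias, which is exactly why the paper evaluates percentiles rather than means alone.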

  4. Excel 2010 for health services management statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2014-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach health services management statistics effectively. It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical health services management problems. If understanding statistics isn't your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in health services management courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2010 for Health Services Management Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work....

  5. Excel 2013 for human resource management statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows how Microsoft Excel is able to teach human resource management statistics effectively. Similar to the previously published Excel 2010 for Human Resource Management Statistics, it is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical human resource management problems. If understanding statistics isn't your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in human resource management courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2013 for Human Resource Management Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to ...

  6. Excel 2016 for human resource management statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching human resource management statistics effectively. Similar to the previously published Excel 2013 for Human Resource Management Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical human resource management problems. If understanding statistics isn't your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in human resource management courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Human Resource Management Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how ...

  7. Excel 2013 for health services management statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel to teach health services management statistics effectively. Similar to the previously published Excel 2010 for Health Services Management Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical health services management problems. If understanding statistics isn't your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in health services management courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2013 for Health Services Management Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers ho...

  8. Excel 2010 for human resource management statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2014-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach human resource management statistics effectively. It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical human resource management problems. If understanding statistics isn't your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in human resource management courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2010 for Human Resource Management Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and ...

  9. Quantile regression for the statistical analysis of immunological data with many non-detects

    NARCIS (Netherlands)

    P.H.C. Eilers (Paul); E. Röder (Esther); H.F.J. Savelkoul (Huub); R. Gerth van Wijk (Roy)

    2012-01-01

    textabstractBackground: Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced
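The abstract is truncated, but its premise can be illustrated: quantiles, unlike means, are unaffected by how non-detects are represented, as long as the censored fraction stays below the quantile level. A minimal sketch:

```python
import statistics

# Ten measurements; three fall below a detection limit of 5.0 units
detection_limit = 5.0
raw = [None, None, None, 6.1, 7.4, 8.0, 9.3, 11.2, 12.5, 15.0]

# Substitute the detection limit for non-detects: with fewer than 50%
# of values censored, the median is unaffected by whatever value at or
# below the limit is chosen, which is the robustness quantile-based
# methods exploit
values = [detection_limit if v is None else v for v in raw]
median = statistics.median(values)
```

A mean computed on the same substituted data would shift with the chosen substitution value, which is one reason classical analyses such as ANOVA and regression break down on non-detects.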

  10. Global aesthetic surgery statistics: a closer look.

    Science.gov (United States)

    Heidekrueger, Paul I; Juran, S; Ehrl, D; Aung, T; Tanna, N; Broer, P Niclas

    2017-08-01

    Obtaining quality global statistics about surgical procedures remains an important yet challenging task. The International Society of Aesthetic Plastic Surgery (ISAPS) reports the total number of surgical and non-surgical procedures performed worldwide on a yearly basis. While providing valuable insight, ISAPS' statistics leave two important factors unaccounted for: (1) the underlying base population, and (2) the number of surgeons performing the procedures. Statistics of the published ISAPS' 'International Survey on Aesthetic/Cosmetic Surgery' were analysed by country, taking into account the underlying national base population according to the official United Nations population estimates. Further, the number of surgeons per country was used to calculate the number of surgeries performed per surgeon. In 2014, based on ISAPS statistics, national surgical procedures ranked in the following order: 1st USA, 2nd Brazil, 3rd South Korea, 4th Mexico, 5th Japan, 6th Germany, 7th Colombia, and 8th France. When considering the size of the underlying national populations, the demand for surgical procedures per 100,000 people changes the overall ranking substantially. It was also found that the rate of surgical procedures per surgeon shows great variation between the responding countries. While the US and Brazil are often quoted as the countries with the highest demand for plastic surgery, according to the presented analysis, other countries surpass these countries in surgical procedures per capita. While data acquisition and quality should be improved in the future, valuable insight regarding the demand for surgical procedures can be gained by taking specific demographic and geographic factors into consideration.
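The adjustment the authors make is a simple normalization. With hypothetical counts (not ISAPS figures), per-capita and per-surgeon rates can reverse a raw ranking:

```python
# Hypothetical figures (not ISAPS data): total procedures, national
# population, and number of plastic surgeons for two countries
procedures = {"A": 1_500_000, "B": 90_000}
population = {"A": 320_000_000, "B": 5_000_000}
surgeons = {"A": 6_000, "B": 250}

rate_per_100k = {c: procedures[c] / population[c] * 100_000 for c in procedures}
per_surgeon = {c: procedures[c] / surgeons[c] for c in procedures}

# Country B overtakes A once the base population is accounted for
ranking = sorted(rate_per_100k, key=rate_per_100k.get, reverse=True)
```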

  11. A Statistical Primer: Understanding Descriptive and Inferential Statistics

    OpenAIRE

    Gillian Byrne

    2007-01-01

    As libraries and librarians move more towards evidence‐based decision making, the data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence‐based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi‐square, co...

  12. Solution of the statistical bootstrap with Bose statistics

    International Nuclear Information System (INIS)

    Engels, J.; Fabricius, K.; Schilling, K.

    1977-01-01

    A brief and transparent way to introduce Bose statistics into the statistical bootstrap of Hagedorn and Frautschi is presented. The resulting bootstrap equation is solved by a cluster expansion for the grand canonical partition function. The shift of the ultimate temperature due to Bose statistics is determined through an iteration process. We discuss two-particle spectra of the decaying fireball (with given mass) as obtained from its grand microcanonical level density

  13. Low statistical power in biomedical science: a review of three human research domains

    Science.gov (United States)

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409
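The power calculation behind the review's 0-10% and 11-20% bins can be approximated with a normal-approximation sketch (our simplification; the review works from meta-analytic effect sizes):

```python
from statistics import NormalDist

N = NormalDist()

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test for
    standardized effect size d (normal approximation)."""
    z_crit = N.inv_cdf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5  # noncentrality of the statistic
    return (1 - N.cdf(z_crit - ncp)) + N.cdf(-z_crit - ncp)

# A 30-per-group study of a modest effect (d = 0.2) lands in the
# 11-20% power bin the review reports as common
low_power = two_sample_power(0.2, 30)
```

Achieving the conventional 80% power for d = 0.2 would instead require roughly 400 participants per group, which illustrates why underpowered studies are so prevalent.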

  14. An ANOVA approach for statistical comparisons of brain networks.

    Science.gov (United States)

    Fraiman, Daniel; Fraiman, Ricardo

    2018-03-16

    The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool in resting-state fMRI data obtained from the Human Connectome Project. We identify, among other variables, that the amount of sleep the days before the scan is a relevant variable that must be controlled. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kind of networks such as protein interaction networks, gene networks or social networks.
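The authors' ANOVA statistic is network-specific, but the nonparametric comparison logic can be illustrated with a generic permutation test on a scalar network summary (mean edge weight; toy data, not the HCP sample):

```python
import numpy as np

rng = np.random.default_rng(42)
# Each "network" is summarized here by one scalar (mean edge weight);
# the paper's statistic is richer, but the permutation logic is the same
group_a = rng.normal(0.50, 0.05, size=20)
group_b = rng.normal(0.60, 0.05, size=20)

observed = abs(group_a.mean() - group_b.mean())
pooled = np.concatenate([group_a, group_b])

perm = np.empty(2000)
for i in range(2000):
    rng.shuffle(pooled)
    perm[i] = abs(pooled[:20].mean() - pooled[20:].mean())

# Permutation p-value: how often a random relabelling of the networks
# produces a group difference at least as large as the observed one
p_value = (np.sum(perm >= observed) + 1) / (perm.size + 1)
```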

  15. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field. Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  16. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

    This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems. This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  17. Making systems with mutually exclusive events analysable by standard fault tree analysis tools

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    2001-01-01

    Methods are developed for analysing systems that comprise mutually exclusive events by fault tree techniques that accept only statistically independent basic events. Techniques based on equivalent models and numerical transformations are presented for phased missions and for systems with component-caused system-level common cause failures. Numerical examples illustrate the methods
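The core numerical issue can be seen in a two-event example: for mutually exclusive events the exact OR-gate probability differs from the independence formula a standard tool applies, and the paper's transformations adjust inputs so that standard tools reproduce the exact result. A sketch with illustrative probabilities:

```python
# Illustrative probabilities for two mutually exclusive basic events
# (e.g. failures in two non-overlapping mission phases)
pa, pb = 0.02, 0.03

exact = pa + pb                  # mutually exclusive: P(A or B)
independent = pa + pb - pa * pb  # what an independence-only tool computes
error = exact - independent      # = pa*pb, the discrepancy the
                                 # transformation methods remove
```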

  18. Photon statistical properties of photon-added two-mode squeezed coherent states

    International Nuclear Information System (INIS)

    Xu Xue-Fen; Wang Shuai; Tang Bin

    2014-01-01

    We investigate the photon statistical properties of the multiple-photon-added two-mode squeezed coherent states (PA-TMSCS). We find that the photon statistical properties are sensitive to the compound phase involved in the TMSCS. Our numerical analyses show that photon addition can enhance the cross-correlation and anti-bunching effects of the PA-TMSCS. Compared with that of the TMSCS, the photon number distribution of the PA-TMSCS is modulated by a factor that is a monotonically increasing function of the number of photons added to each mode; moreover, photon addition essentially shifts the photon number distribution. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
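Cross-correlation of the kind reported can be quantified with the normalized second-order correlation g2 between the two modes. The pair-generation model below is a classical toy stand-in, not the PA-TMSCS itself:

```python
import numpy as np

rng = np.random.default_rng(5)
# Classical toy model of correlated two-mode light: photons created in
# pairs plus independent background counts in each mode
pairs = rng.poisson(3.0, size=100_000)
n_a = pairs + rng.poisson(0.5, size=100_000)
n_b = pairs + rng.poisson(0.5, size=100_000)

# Normalized cross-correlation; values above 1 signal correlated modes
g2_ab = float(np.mean(n_a * n_b) / (np.mean(n_a) * np.mean(n_b)))
```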

  19. A method for statistical steady state thermal analysis of reactor cores

    International Nuclear Information System (INIS)

    Whetton, P.A.

    1981-01-01

    In a previous publication the author presented a method for undertaking statistical steady-state thermal analyses of reactor cores. The present paper extends the technique to an assessment of confidence limits for the resulting probability functions, which define the probability that a given thermal response value will be exceeded in a reactor core. Establishing such confidence limits is considered an integral part of any statistical thermal analysis and essential if such analyses are to be considered in any regulatory process. In certain applications the use of a best-estimate probability function may be justifiable, but it is recognised that a demonstrably conservative probability function is required for any regulatory considerations. (orig.)
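The kind of confidence limit discussed can be sketched with a Monte Carlo exceedance estimate plus a binomial bound; the thermal model and limit below are invented placeholders, not the paper's method:

```python
import math
import random

random.seed(1)
# Toy Monte Carlo thermal model: channel temperature = nominal 650
# plus Gaussian uncertainty (deg C); values are illustrative only
limit = 700.0
n = 20_000
exceed = sum(650.0 + random.gauss(0.0, 20.0) > limit for _ in range(n))

p_hat = exceed / n
# 95% upper confidence limit on the exceedance probability (normal
# approximation to the binomial): the conservative figure a regulator
# would want alongside the best estimate
upper = p_hat + 1.96 * math.sqrt(p_hat * (1.0 - p_hat) / n)
```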

  20. Crop identification technology assessment for remote sensing. (CITARS) Volume 9: Statistical analysis of results

    Science.gov (United States)

    Davis, B. J.; Feiveson, A. H.

    1975-01-01

    Results are presented of CITARS data processing in raw form. Tables of descriptive statistics are given along with descriptions and results of inferential analyses. The inferential results are organized by questions which CITARS was designed to answer.

  1. All of statistics a concise course in statistical inference

    CERN Document Server

    Wasserman, Larry

    2004-01-01

    This book is for people who want to learn probability and statistics quickly. It brings together many of the main ideas in modern statistics in one place. The book is suitable for students and researchers in statistics, computer science, data mining and machine learning. This book covers a much wider range of topics than a typical introductory text on mathematical statistics. It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses. The reader is assumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. The text can be used at the advanced undergraduate and graduate level. Larry Wasserman is Professor of Statistics at Carnegie Mellon University. He is also a member of the Center for Automated Learning and Discovery in the School of Computer Science. His research areas include nonparametric inference, asymptotic theory, causality, and applications to astrophysics, bi...

  2. Experimental, statistical, and biological models of radon carcinogenesis

    International Nuclear Information System (INIS)

    Cross, F.T.

    1991-09-01

    Risk models developed for underground miners have not been consistently validated in studies of populations exposed to indoor radon. Imprecision in risk estimates results principally from differences between exposures in mines as compared to domestic environments and from uncertainties about the interaction between cigarette-smoking and exposure to radon decay products. Uncertainties in extrapolating miner data to domestic exposures can be reduced by means of a broad-based health effects research program that addresses the interrelated issues of exposure, respiratory tract dose, carcinogenesis (molecular/cellular and animal studies, plus developing biological and statistical models), and the relationship of radon to smoking and other copollutant exposures. This article reviews experimental animal data on radon carcinogenesis observed primarily in rats at Pacific Northwest Laboratory. Recent experimental and mechanistic carcinogenesis models of exposures to radon, uranium ore dust, and cigarette smoke are presented with statistical analyses of animal data. 20 refs., 1 fig

  3. MQSA National Statistics

    Science.gov (United States)


  4. Adjusting the Adjusted X[superscript 2]/df Ratio Statistic for Dichotomous Item Response Theory Analyses: Does the Model Fit?

    Science.gov (United States)

    Tay, Louis; Drasgow, Fritz

    2012-01-01

    Two Monte Carlo simulation studies investigated the effectiveness of the mean adjusted X[superscript 2]/df statistic proposed by Drasgow and colleagues and, because of problems with the method, a new approach for assessing the goodness of fit of an item response theory model was developed. It has been previously recommended that mean adjusted…

  5. Generalized quantum statistics

    International Nuclear Information System (INIS)

    Chou, C.

    1992-01-01

    In the paper, a non-anyonic generalization of quantum statistics is presented, in which Fermi-Dirac statistics (FDS) and Bose-Einstein statistics (BES) appear as two special cases. The new quantum statistics, which is characterized by the dimension of its single particle Fock space, contains three consistent parts, namely the generalized bilinear quantization, the generalized quantum mechanical description and the corresponding statistical mechanics

  6. Statistical physics of human beings in games: Controlled experiments

    International Nuclear Information System (INIS)

    Liang Yuan; Huang Ji-Ping

    2014-01-01

    It is important to know whether the laws or phenomena in statistical physics for natural systems with non-adaptive agents still hold for social human systems with adaptive agents, because this implies whether it is possible to study or understand social human systems by using statistical physics originating from natural systems. For this purpose, we review the role of human adaptability in four kinds of specific human behaviors, namely, normal behavior, herd behavior, contrarian behavior, and hedge behavior. The approach is based on controlled experiments in the framework of market-directed resource-allocation games. The role of the controlled experiments could be at least two-fold: adopting the real human decision-making process so that the system under consideration could reflect the performance of genuine human beings; making it possible to obtain macroscopic physical properties of a human system by tuning a particular factor of the system, thus directly revealing cause and effect. As a result, both computer simulations and theoretical analyses help to show a few counterparts of some laws or phenomena in statistical physics for social human systems: two-phase phenomena or phase transitions, entropy-related phenomena, and a non-equilibrium steady state. This review highlights the role of human adaptability in these counterparts, and makes it possible to study or understand some particular social human systems by means of statistical physics coming from natural systems. (topical review - statistical physics and complex systems)

  7. Statistical analysis of partial reduced width distributions

    International Nuclear Information System (INIS)

    Tran Quoc Thuong.

    1973-01-01

    The aim of this study was to develop rigorous methods for analysing experimental event distributions according to a chi-squared law and to check whether the number of degrees of freedom ν is compatible with the value 1 for the reduced neutron width distribution. Two statistical methods were used (the maximum-likelihood method and the method of moments); it was shown, in a few particular cases, that ν is compatible with 1. The difference between ν and 1, if it exists, should not exceed 3%. These results confirm the validity of the compound nucleus model [fr
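The method-of-moments estimate of ν is short to write down: a unit-mean chi-squared(ν) variable has variance 2/ν, so ν can be read off the sample variance. A sketch with simulated Porter-Thomas (ν = 1) widths, not experimental data:

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated reduced neutron widths: Porter-Thomas corresponds to a
# chi-squared law with nu = 1, normalized here to unit mean
nu_true = 1
widths = rng.chisquare(nu_true, size=5000) / nu_true

# Method of moments: for a unit-mean chi-squared(nu) variable,
# variance = 2 / nu, so nu is estimated from the sample variance
nu_hat = 2.0 / np.var(widths, ddof=1)
```

The spread of nu_hat around 1 for a finite sample is what makes the paper's compatibility bound (a deviation of at most a few percent) a statistical statement rather than an exact one.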

  8. Mine Drainage Generation and Control Options.

    Science.gov (United States)

    Wei, Xinchao; Rodak, Carolyn M; Zhang, Shicheng; Han, Yuexin; Wolfe, F Andrew

    2016-10-01

    This review provides a snapshot of papers published in 2015 relevant to the topic of mine drainage generation and control options. The review is broken into 3 sections: Generation, Prediction and Prevention, and Treatment Options. The first section, mine drainage generation, focuses on the characterization of mine drainage and the environmental impacts. As such, it is broken into three subsections focused on microbiological characterization, physiochemical characterization, and environmental impacts. The second section of the review is divided into two subsections focused on either the prediction or prevention of acid mine drainage. The final section focuses on treatment options for mine drainage and waste sludge. The third section contains subsections on passive treatment, biological treatment, physiochemical treatment, and a new subsection on beneficial uses for mine drainage and treatment wastes.

  9. A robust statistical method for association-based eQTL analysis.

    Directory of Open Access Journals (Sweden)

    Ning Jiang

    Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association study (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. While many methods have been proposed to correct for this influence, either by predicting the structure parameters or by correcting the inflation in the test statistic due to stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS arising from population structure, by incorporating a control marker into the test for significance of genetic association between a polymorphic marker and phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and properly accounts for a varying effect of population stratification on different regions of the genome under study. The utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers improved statistical power for detecting genuine genetic associations in subpopulations and effective control of spurious associations stemming from population structure, when compared with two other popularly implemented methods in the GWAS literature.
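The control-marker idea can be sketched with ordinary least squares: a hidden stratification variable induces a spurious marker-trait association, and adding a control marker that tracks the structure shrinks it. All data below are simulated, and the regression is a simplification of the paper's actual test:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
structure = rng.binomial(1, 0.5, n)            # hidden subpopulation label
control = structure + rng.normal(0, 0.3, n)    # control marker tracking structure
snp = structure + rng.normal(0, 0.5, n)        # tested marker, confounded
trait = 2.0 * structure + rng.normal(0, 1, n)  # trait driven by structure only

# Naive test: trait ~ snp picks up a spurious association
X1 = np.column_stack([np.ones(n), snp])
b_naive = np.linalg.lstsq(X1, trait, rcond=None)[0][1]

# Adding the control marker absorbs much of the structure effect
X2 = np.column_stack([np.ones(n), snp, control])
b_ctrl = np.linalg.lstsq(X2, trait, rcond=None)[0][1]
```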

  10. Statistical analysis and Kalman filtering applied to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.

    1990-08-01

    Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered; firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the error and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)
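The state-estimation approach in the final part can be illustrated by a scalar Kalman filter tracking an inventory between accountancy measurements; the dynamics and noise figures below are illustrative, not plant data:

```python
import random

random.seed(4)
# Scalar Kalman filter tracking a material inventory (kg)
true_inv, est, var = 100.0, 100.0, 1.0
q, r = 0.01, 0.25  # process and measurement noise variances (assumed)

for _ in range(50):
    true_inv += random.gauss(0.0, q ** 0.5)        # unmeasured drift
    meas = true_inv + random.gauss(0.0, r ** 0.5)  # accountancy measurement
    var += q                                       # predict step
    k = var / (var + r)                            # Kalman gain
    est += k * (meas - est)                        # measurement update
    var *= 1.0 - k                                 # posterior variance
```

The posterior variance converges well below the single-measurement variance r, which is the filter's advantage over treating each accountancy period in isolation.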

  11. Preliminary results of statistical dynamic experiments on a heat exchanger

    International Nuclear Information System (INIS)

    Corran, E.R.; Cummins, J.D.

    1962-10-01

    The inherent noise signals present in a heat exchanger have been recorded and analysed in order to determine some of the statistical dynamic characteristics of the heat exchanger. These preliminary results show that the primary side temperature frequency response may be determined by analysing the inherent noise. The secondary side temperature frequency response and cross coupled temperature frequency responses between primary and secondary are poorly determined because of the presence of a non-stationary noise source in the secondary circuit of this heat exchanger. This may be overcome by correlating the dependent variables with an externally applied noise signal. Some preliminary experiments with an externally applied random telegraph type of signal are reported. (author)
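
The idea of correlating responses with an externally applied noise signal can be sketched in discrete time: for a near-white input, the input-output cross-correlation divided by the input variance estimates the system's impulse response. The first-order lag below stands in for the heat exchanger as an assumption only; all gains and sample sizes are illustrative.

```python
import random

random.seed(2)
a = 0.8                                      # lag coefficient, illustrative
N = 200_000
x = [random.gauss(0, 1) for _ in range(N)]   # externally applied white noise
y = [0.0] * N
for n in range(1, N):
    y[n] = a * y[n - 1] + (1 - a) * x[n]     # first-order lag "plant"

def cross_corr(x, y, lag, n):
    """Sample cross-correlation R_xy(lag) = E[x[i] * y[i+lag]]."""
    return sum(x[i] * y[i + lag] for i in range(n - lag)) / (n - lag)

var_x = sum(v * v for v in x) / N
# for white input, R_xy(k) = h[k] * var_x, so this recovers the impulse response
h_est = [cross_corr(x, y, k, N) / var_x for k in range(5)]
h_true = [(1 - a) * a ** k for k in range(5)]
```

The estimated taps track the true impulse response (1 - a)·a^k; a frequency response would follow by Fourier-transforming h_est.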

  12. A Primer on Receiver Operating Characteristic Analysis and Diagnostic Efficiency Statistics for Pediatric Psychology: We Are Ready to ROC

    Science.gov (United States)

    2014-01-01

    Objective To offer a practical demonstration of receiver operating characteristic (ROC) analyses, diagnostic efficiency statistics, and their application to clinical decision making using a popular parent checklist to assess for potential mood disorder. Method Secondary analyses of data from 589 families seeking outpatient mental health services, completing the Child Behavior Checklist and semi-structured diagnostic interviews. Results Internalizing Problems raw scores discriminated mood disorders significantly better than did age- and gender-normed T scores, or an Affective Problems score. Internalizing scores 30 had a diagnostic likelihood ratio of 7.4. Conclusions This study illustrates a series of steps in defining a clinical problem, operationalizing it, selecting a valid study design, and using ROC analyses to generate statistics that support clinical decisions. The ROC framework offers important advantages for clinical interpretation. Appendices include sample scripts using SPSS and R to check assumptions and conduct ROC analyses. PMID:23965298
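
The diagnostic efficiency statistics named in this abstract can be computed from first principles. The checklist scores below are invented for illustration (they are not the study's data), and the cutoff of 30 is assumed only to mirror the abstract's example; the AUC is obtained via the Mann-Whitney rank formulation.

```python
def diagnostic_stats(scores_pos, scores_neg, cutoff):
    """Sensitivity, specificity and positive diagnostic likelihood ratio
    (DLR+ = sens / (1 - spec)) for a 'positive if score >= cutoff' rule."""
    sens = sum(s >= cutoff for s in scores_pos) / len(scores_pos)
    spec = sum(s < cutoff for s in scores_neg) / len(scores_neg)
    return sens, spec, sens / (1 - spec)

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

mood = [35, 31, 28, 40, 33, 26, 38]        # hypothetical raw scores, mood disorder
no_mood = [12, 22, 18, 9, 25, 15, 30, 20]  # hypothetical raw scores, no mood disorder
sens, spec, dlr = diagnostic_stats(mood, no_mood, cutoff=30)
```

A DLR+ of 7.4, as reported in the abstract, would mean a positive result multiplies the pre-test odds of a mood disorder by 7.4.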

  13. Statistics in the medical sciences - the long German road to get there

    Directory of Open Access Journals (Sweden)

    Weiß, Christel

    2005-06-01

    This contribution aims at tracing the development of statistical methods in medical science. Statistical methodology in medical research was first implemented in England in the age of the Enlightenment, during the 18th century. As this approach stood in clear opposition to conventional medical practice, directed in a rather authoritarian manner, this research field had to overcome many difficulties. Nowadays, there is a widespread consensus that medical research is hardly possible without profound knowledge and application of statistical methods. Nevertheless, it took an extremely long time, until the end of the 20th century, before this methodology was taken notice of and became appreciated. In order to better understand this long process, a brief summary of the development of statistics from Ancient Times onwards is presented. It is shown how medical progress evolved in parallel with advancing mathematical understanding, with a focus on the influence of the latter on medical sciences. Moreover, the special case of Germany in this respect is analysed.

  14. A look at the links between drainage density and flood statistics

    Directory of Open Access Journals (Sweden)

    A. Montanari

    2009-07-01

    We investigate the links between the drainage density of a river basin and selected flood statistics, namely the mean, standard deviation, coefficient of variation and coefficient of skewness of annual maximum series of peak flows. The investigation is carried out through a three-stage analysis. First, a numerical simulation is performed using a spatially distributed hydrological model in order to highlight how flood statistics change with varying drainage density. Second, a conceptual hydrological model is used to analytically derive the dependence of flood statistics on drainage density. Third, real-world data from 44 watersheds located in northern Italy are analysed. The three-level analysis suggests that a critical value of the drainage density exists for which a minimum is attained in both the coefficient of variation and the absolute value of the skewness coefficient. Such minima in the flood statistics correspond to a minimum of the flood quantile for a given exceedance probability (i.e., recurrence interval). Therefore, the results of this study may provide useful indications for flood risk assessment in ungauged basins.
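
The four flood statistics examined here can be computed directly from an annual maximum series. The peak-flow values below are invented for illustration; population (biased) moments are used for simplicity.

```python
import math

def flood_statistics(annual_maxima):
    """Mean, standard deviation, coefficient of variation and skewness
    coefficient of an annual maximum series (population moments)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    m2 = sum((q - mean) ** 2 for q in annual_maxima) / n   # second central moment
    m3 = sum((q - mean) ** 3 for q in annual_maxima) / n   # third central moment
    std = math.sqrt(m2)
    return mean, std, std / mean, m3 / std ** 3

# hypothetical annual maximum peak flows, m^3/s
peaks = [120.0, 95.0, 210.0, 160.0, 85.0, 140.0, 300.0, 110.0]
mean, std, cv, skew = flood_statistics(peaks)
```

The positive skewness reflects the single large flood in the series; the paper's argument concerns how CV and |skew| vary with drainage density across basins.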

  15. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    Science.gov (United States)

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

    Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R(2) from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
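
The binomial model for a single family can be sketched as a one-sided exact test: under the null, each of a family's species is medicinal with the flora-wide proportion. All counts below are hypothetical, not the Shuar data.

```python
from math import comb

def binomial_upper_p(k, n, p):
    """Exact one-sided p-value P(X >= k) for X ~ Binomial(n, p):
    tests whether a family contributes more medicinal species than chance."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

flora_total, flora_medicinal = 3000, 450   # hypothetical flora-wide counts
p_null = flora_medicinal / flora_total     # null proportion = 0.15
family_species, family_medicinal = 40, 14  # hypothetical counts for one family
p_value = binomial_upper_p(family_medicinal, family_species, p_null)
```

Here 14 medicinal species where 6 are expected yields a small p-value, i.e. an over-represented family in the sense of the paper.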

  16. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics. Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tracks to a typical first-semester statistics course.

  17. Accounting for uncertainty in ecological analysis: the strengths and limitations of hierarchical statistical modeling.

    Science.gov (United States)

    Cressie, Noel; Calder, Catherine A; Clark, James S; Ver Hoef, Jay M; Wikle, Christopher K

    2009-04-01

    Analyses of ecological data should account for the uncertainty in the process(es) that generated the data. However, accounting for these uncertainties is a difficult task, since ecology is known for its complexity. Measurement and/or process errors are often the only sources of uncertainty modeled when addressing complex ecological problems, yet analyses should also account for uncertainty in sampling design, in model specification, in parameters governing the specified model, and in initial and boundary conditions. Only then can we be confident in the scientific inferences and forecasts made from an analysis. Probability and statistics provide a framework that accounts for multiple sources of uncertainty. Given the complexities of ecological studies, the hierarchical statistical model is an invaluable tool. This approach is not new in ecology, and there are many examples (both Bayesian and non-Bayesian) in the literature illustrating the benefits of this approach. In this article, we provide a baseline for concepts, notation, and methods, from which discussion on hierarchical statistical modeling in ecology can proceed. We have also planted some seeds for discussion and tried to show where the practical difficulties lie. Our thesis is that hierarchical statistical modeling is a powerful way of approaching ecological analysis in the presence of inevitable but quantifiable uncertainties, even if practical issues sometimes require pragmatic compromises.

  18. Descriptive statistics.

    Science.gov (United States)

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.
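
The measures of location and spread discussed in this chapter summary can be illustrated with a minimal, dependency-free sketch; the data values are arbitrary.

```python
import math

def describe(xs):
    """Basic descriptive statistics: sample size, mean, median,
    sample standard deviation, and range endpoints."""
    n = len(xs)
    s = sorted(xs)
    mean = sum(xs) / n
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))  # sample SD
    return {"n": n, "mean": mean, "median": median, "sd": sd,
            "min": s[0], "max": s[-1]}

stats = describe([4, 8, 6, 5, 3, 7, 9, 5])
```

For skewed variables the chapter's advice would be to report the median and a transformed scale; for roughly symmetric data, mean and SD suffice.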

  19. Visualization of time series statistical data by shape analysis (GDP ratio changes among Asian countries)

    Science.gov (United States)

    Shirota, Yukari; Hashimoto, Takako; Fitri Sari, Riri

    2018-03-01

    Visualizing time series big data has become very important. In this paper we discuss a new analysis method called "statistical shape analysis" or "geometry driven statistics" applied to time series statistical data in economics. We analyse changes in agriculture value added and industry value added (as percentages of GDP) from 2000 to 2010 in Asia. We handle the data as a set of landmarks on a two-dimensional image to see the deformation using the principal components. The point of the analysis method is the principal components of the given formation, which are eigenvectors of its bending energy matrix. The local deformation can be expressed as a set of non-affine transformations, which give us information about the local differences between 2000 and 2010. Because a non-affine transformation can be decomposed into a set of partial warps, we present the partial warps visually. Statistical shape analysis is widely used in biology but, in economics, no application can be found. In the paper, we investigate its potential to analyse economic data.

  20. Analysing passenger arrivals rates and waiting time at bus stops

    OpenAIRE

    Kaparias, I.; Rossetti, C.; Trozzi, V.

    2015-01-01

    The present study investigates the rather under-explored topic of passenger waiting times at public transport facilities. Using data collected from part of London’s bus network by means of physical counts, measurements and observations, and complemented by on-site passenger interviews, the waiting behaviour is analysed for a number of bus stops served by different numbers of lines. The analysis employs a wide range of statistical methods and tools, and concentrates on three aspects: passenger...

  1. Fractionary statistics and field theories in (2+1) dimensions

    International Nuclear Information System (INIS)

    Gomes, M.

    1989-01-01

    The existence of fractionary angular momentum and exotic statistics in many dimensions is analysed. Soliton-type excitations of the non-linear sigma model with O(3) symmetry are considered. Parity breaking through a mass term and the generation of a Chern-Simons term in a three-dimensional fermion model are discussed. It is shown that the symmetry breaking can be due to the vacuum, and this possibility is illustrated in the context of a Gross-Neveu-like model. (M.C.K.)

  2. Statistics in Schools

    Science.gov (United States)

    Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real-life data. Explore the site for standards-aligned, classroom-ready activities.

  3. Risk assessment, management, communication: a guide to selected sources. Update. Information guide

    International Nuclear Information System (INIS)

    1987-05-01

    This is the first update to the March 1987 publication entitled Risk Assessment, Management, Communication: A Guide to Selected Sources. The risk update series is divided into three major sections: Assessment, Management, and Communication. This update also includes subsections on hazardous waste, radiation, and a number of specific chemicals. Due to the expanding literature on risk, other subsections may be added to updates in the future. Each Table of Contents contains a complete list of the subsections. Updates are produced on a quarterly basis

  4. The Structural Reforms of the Chinese Statistical System

    Directory of Open Access Journals (Sweden)

    Günter Moser

    2009-04-01

    The quality of statistical data covering the economic and social development of the People's Republic of China has been questioned by international and national data users for years. The reasons for this doubt lie mainly in the structure of the Chinese system of statistics, in which two largely autonomous systems operate in parallel: the national system of statistics and the sectoral system of statistics. In the national statistical system, the National Bureau of Statistics (NBS) has the authority to order and collect statistics; in the sectoral system, this competence lies with the ministries and authorities below the ministerial level. This article describes and analyses these structures, the resulting problems, and the reform measures taken to date. It also aims to provide a better understanding of the statistical data about the People's Republic of China and to enable an assessment of them within a changing structural context. In conclusion, approaches to further reforms are provided, based on the author's long-standing experience in cooperation projects with the official Chinese statistics agencies.

  5. Using Artificial Neural Networks in Educational Research: Some Comparisons with Linear Statistical Models.

    Science.gov (United States)

    Everson, Howard T.; And Others

    This paper explores the feasibility of neural computing methods such as artificial neural networks (ANNs) and abductory induction mechanisms (AIM) for use in educational measurement. ANNs and AIMS methods are contrasted with more traditional statistical techniques, such as multiple regression and discriminant function analyses, for making…

  6. Statistics of the Von Mises Stress Response For Structures Subjected To Random Excitations

    Directory of Open Access Journals (Sweden)

    Mu-Tsang Chen

    1998-01-01

    Finite element-based random vibration analysis is increasingly used in computer-aided engineering software for computing statistics (e.g., root-mean-square values) of structural responses such as displacements, stresses and strains. However, these statistics can often be computed only for Cartesian responses. For the design of metal structures, a failure criterion based on an equivalent stress response, commonly known as the von Mises stress, is more appropriate and often used. This paper presents an approach for computing the statistics of the von Mises stress response for structures subjected to random excitations. Random vibration analysis is first performed to compute covariance matrices of Cartesian stress responses. Monte Carlo simulation is then used to perform scatter and failure analyses using the von Mises stress response.
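
The Monte Carlo step can be sketched for a plane-stress state. A real analysis would sample correlated Cartesian stresses from the random-vibration covariance matrix; independent Gaussians and all numeric values below are simplifying assumptions for illustration.

```python
import math
import random

def von_mises_plane_stress(sx, sy, txy):
    """Equivalent (von Mises) stress for a plane-stress state."""
    return math.sqrt(sx * sx - sx * sy + sy * sy + 3 * txy * txy)

random.seed(3)
yield_stress = 250.0                # MPa, illustrative
failures, n, sq_sum = 0, 20000, 0.0
for _ in range(n):
    # independent zero-mean Gaussian Cartesian stresses (MPa), illustrative
    sx = random.gauss(0, 60)
    sy = random.gauss(0, 40)
    txy = random.gauss(0, 25)
    vm = von_mises_plane_stress(sx, sy, txy)
    sq_sum += vm * vm
    failures += vm > yield_stress
rms_vm = math.sqrt(sq_sum / n)      # analytic E[vm^2] here is 60^2 + 40^2 + 3*25^2
p_fail = failures / n
```

The sampled RMS matches the analytic value sqrt(7075) ≈ 84 MPa, and the failure fraction estimates P(von Mises stress > yield), which is the quantity the paper's scatter/failure analyses target.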

  7. High performance statistical computing with parallel R: applications to biology and climate modelling

    International Nuclear Information System (INIS)

    Samatova, Nagiza F; Branstetter, Marcia; Ganguly, Auroop R; Hettich, Robert; Khan, Shiraj; Kora, Guruprasad; Li, Jiangtian; Ma, Xiaosong; Pan, Chongle; Shoshani, Arie; Yoginath, Srikanth

    2006-01-01

    Ultrascale computing and high-throughput experimental technologies have enabled the production of scientific data about complex natural phenomena. With this opportunity comes a new problem: the massive quantities of data so produced. Answers to fundamental questions about the nature of those phenomena remain largely hidden in the produced data. The goal of this work is to provide a scalable, high-performance statistical data analysis framework to help scientists perform interactive analyses of these raw data to extract knowledge. Towards this goal we have been developing an open-source parallel statistical analysis package, called Parallel R, that lets scientists employ a wide range of statistical analysis routines on high-performance shared and distributed memory architectures without having to deal with the intricacies of parallelizing these routines.

  8. Identifying User Profiles from Statistical Grouping Methods

    Directory of Open Access Journals (Sweden)

    Francisco Kelsen de Oliveira

    2018-02-01

    This research aimed to group users into subgroups according to their levels of knowledge about technology. Statistical hierarchical and non-hierarchical clustering methods were studied, compared and used to create the subgroups from the similarities of the users' skill levels with technology. The research sample consisted of teachers who answered online questionnaires about their skills with the use of software and hardware with an educational bias. The statistical grouping methods were applied and showed the possible groupings of the users. The analyses of these groups made it possible to identify the common characteristics among the individuals of each subgroup. It was therefore possible to define two subgroups of users, one skilled with technology and another with little skill, and the partial results of the research showed that the two main grouping algorithms agreed with 92% similarity in forming the group of users skilled with technology and the group with little skill, confirming the accuracy of the techniques for discriminating individuals.
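
The hierarchical side of such a grouping can be sketched with naive single-linkage agglomerative clustering. The one-dimensional "skill scores" below are hypothetical questionnaire summaries, not the study's data.

```python
def single_linkage(points, k):
    """Naive single-linkage agglomerative clustering down to k clusters:
    repeatedly merge the two clusters with the smallest minimum distance."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return [sorted(c) for c in clusters]

# hypothetical skill scores on a 0-10 scale, one per teacher
scores = [8.5, 2.0, 9.0, 1.5, 8.0, 2.5, 9.5, 1.0]
groups = sorted(single_linkage(scores, 2))
```

With this well-separated data the two resulting subgroups correspond to the "skilled" and "little skill" profiles the abstract describes.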

  9. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  10. Theoretical, analytical, and statistical interpretation of environmental data

    International Nuclear Information System (INIS)

    Lombard, S.M.

    1974-01-01

    The reliability of data from radiochemical analyses of environmental samples cannot be determined from nuclear counting statistics alone. The rigorous application of the principles of propagation of errors, an understanding of the physics and chemistry of the species of interest in the environment, and the application of information from research on the analytical procedure are all necessary for a valid estimation of the errors associated with analytical results. The specific case of the determination of plutonium in soil is considered in terms of analytical problems and data reliability. (U.S.)
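
The propagation-of-errors principle invoked here reduces, for independent additive error sources, to combination in quadrature. The component uncertainties below are hypothetical, merely standing in for counting, calibration and chemical-yield contributions.

```python
import math

def quadrature(*sigmas):
    """Combine independent 1-sigma uncertainties in quadrature."""
    return math.sqrt(sum(s * s for s in sigmas))

# hypothetical plutonium-in-soil uncertainty budget, same units throughout:
# counting statistics, calibration, and chemical-yield components
sigma_total = quadrature(0.30, 0.40, 0.12)
```

The point of the abstract is exactly that sigma_total exceeds the counting-statistics term alone; for multiplicative corrections, relative uncertainties combine in quadrature the same way.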

  11. Hood of the truck statistics for food animal practitioners.

    Science.gov (United States)

    Slenning, Barrett D

    2006-03-01

    This article offers some tips on working with statistics and develops four relatively simple procedures to deal with most kinds of data with which veterinarians work. The criterion for a procedure to be a "Hood of the Truck Statistics" (HOT Stats) technique is that it must be simple enough to be done with pencil, paper, and a calculator. The goal of HOT Stats is to have the tools available to run quick analyses in only a few minutes so that decisions can be made in a timely fashion. The discipline allows us to move away from the all-too-common guess work about effects and differences we perceive following a change in treatment or management. The techniques allow us to move toward making more defensible, credible, and more quantifiably "risk-aware" real-time recommendations to our clients.
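
The article's four procedures are not enumerated in this abstract, so as a stand-in example in the same spirit, here is a Welch two-sample t statistic: a few sums and one square root, simple enough to check by calculator. The milk-yield figures are invented.

```python
import math

def two_sample_t(a, b):
    """Welch's t statistic and approximate (Welch-Satterthwaite) degrees of
    freedom for comparing two independent samples with unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# hypothetical daily milk yields (kg) before and after a ration change
before = [28.1, 27.4, 29.0, 26.8, 28.5, 27.9]
after = [29.4, 30.1, 28.8, 30.6, 29.9, 29.2]
t, df = two_sample_t(after, before)
```

A t near 4 on roughly 10 degrees of freedom turns an "it looks better" impression into a defensible, quantified recommendation, which is the HOT Stats goal.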

  12. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity and the modern theory of critical phenomena. Beyond that attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  13. Statistical methods for data analysis in particle physics

    CERN Document Server

    Lista, Luca

    2017-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). First, the book provides an introduction to probability theory and basic statistics, mainly intended as a refresher from readers’ advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on both discoveries and upper limits, as many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as to eventually be able to distinguish between competing hypotheses, or to rule out some of them altogether. Many worked-out examples will help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical co...

  14. Continuous Covariate Imbalance and Conditional Power for Clinical Trial Interim Analyses

    Science.gov (United States)

    Ciolino, Jody D.; Martin, Renee' H.; Zhao, Wenle; Jauch, Edward C.; Hill, Michael D.; Palesch, Yuko Y.

    2014-01-01

    Oftentimes valid statistical analyses for clinical trials involve adjustment for known influential covariates, regardless of imbalance observed in these covariates at baseline across treatment groups. Thus, it must be the case that valid interim analyses also properly adjust for these covariates. There are situations, however, in which covariate adjustment is not possible, not planned, or simply carries less merit as it makes inferences less generalizable and less intuitive. In this case, covariate imbalance between treatment groups can have a substantial effect on both interim and final primary outcome analyses. This paper illustrates the effect of influential continuous baseline covariate imbalance on unadjusted conditional power (CP), and thus, on trial decisions based on futility stopping bounds. The robustness of the relationship is illustrated for normal, skewed, and bimodal continuous baseline covariates that are related to a normally distributed primary outcome. Results suggest that unadjusted CP calculations in the presence of influential covariate imbalance require careful interpretation and evaluation. PMID:24607294
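
Unadjusted conditional power under the "current trend" assumption can be sketched from the B-value formulation (B(t) = Z(t)·sqrt(t), with drift estimated as Z(t)/sqrt(t)); the interim Z and information fraction below are illustrative. This sketch deliberately omits covariate adjustment, which is precisely the setting whose pitfalls the paper examines.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def conditional_power(z_interim, info_frac, alpha_z=1.96):
    """Conditional power at information fraction t, assuming the observed
    trend continues: CP = Phi((B(t) + theta*(1-t) - z_alpha) / sqrt(1-t))."""
    t = info_frac
    b = z_interim * math.sqrt(t)          # B-value
    drift = z_interim / math.sqrt(t)      # current-trend drift estimate
    return phi((b + drift * (1 - t) - alpha_z) / math.sqrt(1 - t))

cp = conditional_power(z_interim=1.2, info_frac=0.5)
```

A CP in the mid-0.3s at the halfway point might sit near a futility boundary; the paper's warning is that covariate imbalance can push this unadjusted value up or down misleadingly.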

  15. Multidomain analyses of a longitudinal human microbiome intestinal cleanout perturbation experiment.

    Science.gov (United States)

    Fukuyama, Julia; Rumker, Laurie; Sankaran, Kris; Jeganathan, Pratheepa; Dethlefsen, Les; Relman, David A; Holmes, Susan P

    2017-08-01

    Our work focuses on the stability, resilience, and response to perturbation of the bacterial communities in the human gut. Informative flash flood-like disturbances that eliminate most gastrointestinal biomass can be induced using a clinically-relevant iso-osmotic agent. We designed and executed such a disturbance in human volunteers using a dense longitudinal sampling scheme extending before and after induced diarrhea. This experiment has enabled a careful multidomain analysis of a controlled perturbation of the human gut microbiota with a new level of resolution. These new longitudinal multidomain data were analyzed using recently developed statistical methods that demonstrate improvements over current practices. By imposing sparsity constraints we have enhanced the interpretability of the analyses and by employing a new adaptive generalized principal components analysis, incorporated modulated phylogenetic information and enhanced interpretation through scoring of the portions of the tree most influenced by the perturbation. Our analyses leverage the taxa-sample duality in the data to show how the gut microbiota recovers following this perturbation. Through a holistic approach that integrates phylogenetic, metagenomic and abundance information, we elucidate patterns of taxonomic and functional change that characterize the community recovery process across individuals. We provide complete code and illustrations of new sparse statistical methods for high-dimensional, longitudinal multidomain data that provide greater interpretability than existing methods.

  16. DMINDA: an integrated web server for DNA motif identification and analyses.

    Science.gov (United States)

    Ma, Qin; Zhang, Hanyuan; Mao, Xizeng; Zhou, Chuan; Liu, Bingqiang; Chen, Xin; Xu, Ying

    2014-07-01

    DMINDA (DNA motif identification and analyses) is an integrated web server for DNA motif identification and analyses, which is accessible at http://csbl.bmb.uga.edu/DMINDA/. This web site is freely available to all users and there is no login requirement. This server provides a suite of cis-regulatory motif analysis functions on DNA sequences, which are important to elucidation of the mechanisms of transcriptional regulation: (i) de novo motif finding for a given set of promoter sequences along with statistical scores for the predicted motifs derived based on information extracted from a control set, (ii) scanning motif instances of a query motif in provided genomic sequences, (iii) motif comparison and clustering of identified motifs, and (iv) co-occurrence analyses of query motifs in given promoter sequences. The server is powered by a backend computer cluster with over 150 computing nodes, and is particularly useful for motif prediction and analyses in prokaryotic genomes. We believe that DMINDA, as a new and comprehensive web server for cis-regulatory motif finding and analyses, will benefit the genomic research community in general and prokaryotic genome researchers in particular. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
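
The motif-scanning function (item ii) can be illustrated with a log-odds position weight matrix built from base counts. The counts, pseudocount, uniform background and test sequence below are all invented; this is not DMINDA's algorithm, only the generic PWM-scanning idea behind such servers.

```python
import math

def pwm_from_counts(pfm, pseudo=0.5, background=0.25):
    """Log2-odds position weight matrix from per-position base counts,
    with pseudocounts against a uniform background."""
    pwm = []
    for col in pfm:
        total = sum(col.values()) + 4 * pseudo
        pwm.append({b: math.log2((col[b] + pseudo) / total / background)
                    for b in "ACGT"})
    return pwm

def scan(seq, pwm):
    """Score every window of the sequence; return (best score, start index)."""
    w = len(pwm)
    return max((sum(pwm[i][seq[j + i]] for i in range(w)), j)
               for j in range(len(seq) - w + 1))

pfm = [{"A": 8, "C": 1, "G": 1, "T": 0},   # hypothetical counts for an
       {"A": 0, "C": 9, "G": 1, "T": 0},   # ACGT-consensus motif
       {"A": 1, "C": 0, "G": 8, "T": 1},
       {"A": 0, "C": 1, "G": 0, "T": 9}]
pwm = pwm_from_counts(pfm)
score, pos = scan("TTGACGTAAT", pwm)
```

The scanner finds the consensus instance at index 3 with a strongly positive log-odds score; a server-scale implementation adds the statistical scoring of predicted motifs against a control set.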

  17. Decision of December 11, 1981 - 7 B 22/81 (Wyhl nuclear power station)

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The Federal Administrative Court has rejected as inadmissible a complaint which was directed against non-admission of appeal. After the suit against the first partial licence for the Wyhl nuclear power plant had been rejected in the second instance as well - this decision was based on the nuclear installations regulations, section 3, sub-section 1, rule of exclusion of objection - the plaintiff demanded clarification of the usage of the preclusionary ordinance and its constitutionality with regard to its admissibility in a situation of advanced court actions. The Federal Administrative Court rejected the fundamental significance of this question, according to section 132, sub-section 2, number 1 of the administrative rules of the court. Based on its legal interpretation of these regulations, the court was unable to arrive at a judgement for an appeal under section 2, sub-section 2, number 2 of the nuclear installations regulations, which might have opened avenues for revision (see administrative rules of court, section 132, sub-section 2, number 2). The Court confirmed that only an action of intention would represent an objection, according to the nuclear installations regulations, section 2, sub-section 2. (GA)

  18. Methods for estimating low-flow statistics for Massachusetts streams

    Science.gov (United States)

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    streamgaging stations had from 2 to 81 years of record, with a mean record length of 37 years. The low-flow partial-record stations had from 8 to 36 streamflow measurements, with a median of 14 measurements. All basin characteristics were determined from digital map data. The basin characteristics that were statistically significant in most of the final regression equations were drainage area, the area of stratified-drift deposits per unit of stream length plus 0.1, mean basin slope, and an indicator variable that was 0 in the eastern region and 1 in the western region of Massachusetts. The equations were developed by use of weighted-least-squares regression analyses, with weights assigned proportional to the years of record and inversely proportional to the variances of the streamflow statistics for the stations. Standard errors of prediction ranged from 70.7 percent for the equation that predicts the 7-day, 10-year low flow to 17.5 percent for the equation that predicts the 50-percent duration flow. The equations are not applicable for use in the Southeast Coastal region of the State, or where basin characteristics for the selected ungaged site are outside the ranges of those for the stations used in the regression analyses. A World Wide Web application was developed that provides streamflow statistics for data collection stations from a database and for ungaged sites by measuring the necessary basin characteristics for the site and solving the regression equations. Output provided by the Web application for ungaged sites includes a map of the drainage-basin boundary determined for the site, the measured basin characteristics, the estimated streamflow statistics, and 90-percent prediction intervals for the estimates. An equation is provided for combining regression and correlation estimates to obtain improved estimates of the streamflow statistics for low-flow partial-record stations. An equation is also provided for combining regression and drainage-area ratio estimates to obtain improved estimates.
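    The weighting scheme described above (weights proportional to years of record and inversely proportional to the variance of each station's streamflow statistic) can be sketched as a weighted-least-squares fit. This is a minimal illustration, not the USGS implementation; all station values, the single basin characteristic, and the weight formula shown are hypothetical.

    ```python
    import numpy as np

    # Hypothetical station data: years of record and the variance of each
    # station's streamflow statistic (both assumed for illustration).
    years_of_record = np.array([10.0, 25.0, 40.0, 15.0, 60.0])
    variance = np.array([0.8, 0.5, 0.3, 0.9, 0.2])

    # Weights proportional to record length, inversely proportional to variance.
    w = years_of_record / variance

    # Design matrix: intercept plus one basin characteristic
    # (e.g., log of drainage area); response is the log low-flow statistic.
    log_drainage_area = np.array([1.2, 1.8, 2.4, 1.5, 2.9])
    X = np.column_stack([np.ones_like(log_drainage_area), log_drainage_area])
    y = np.array([0.9, 1.6, 2.3, 1.2, 2.8])

    # Solve the weighted normal equations (X' W X) beta = X' W y.
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    print(beta)  # [intercept, slope]
    ```

    Stations with long records and low-variance statistics dominate the fit, which is the intent of this style of weighting.
    
    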

  19. Statistical methods for determination of background levels for naturally occurring radionuclides in soil at a RCRA facility

    International Nuclear Information System (INIS)

    Guha, S.; Taylor, J.H.

    1996-01-01

    It is critical that summary statistics on background data, or background levels, be computed based on standardized and defensible statistical methods, because background levels are frequently used in subsequent analyses and comparisons performed by separate analysts over time. The final background for naturally occurring radionuclide concentrations in soil at a RCRA facility, and the associated statistical methods used to estimate these concentrations, are presented. The primary objective is to describe, via a case study, the statistical methods used to estimate 95% upper tolerance limits (UTLs) on radionuclide background soil data sets. A 95% UTL on background samples can be used as a screening level concentration in the absence of definitive soil cleanup criteria for naturally occurring radionuclides. The statistical methods are based exclusively on EPA guidance. This paper includes an introduction, a discussion of the analytical results for the radionuclides, and a detailed description of the statistical analyses leading to the determination of 95% UTLs. Soil concentrations reported are based on validated data. Data sets are categorized as surficial soil (samples collected at depths from zero to one-half foot) and deep soil (samples collected from 3 to 5 feet). These data sets were tested for statistical outliers, and underlying distributions were determined by using the chi-squared test for goodness-of-fit. UTLs for the data sets were then computed based on the percentage of nondetects and the appropriate best-fit distribution (lognormal, normal, or nonparametric). For data sets containing greater than approximately 50% nondetects, nonparametric UTLs were computed.
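    For data that pass a goodness-of-fit test for normality, a one-sided 95%/95% UTL takes the form mean + k·s, where k is a normal tolerance factor. The sketch below illustrates that calculation using the noncentral t distribution; the simulated background concentrations, sample size, and units are all assumptions for illustration, and the abstract's lognormal and nonparametric cases are not shown.

    ```python
    import numpy as np
    from scipy import stats

    # Simulated background concentrations (hypothetical, e.g. pCi/g),
    # standing in for a validated surficial-soil data set.
    rng = np.random.default_rng(42)
    background = rng.normal(loc=1.0, scale=0.2, size=24)

    n = background.size
    mean, sd = background.mean(), background.std(ddof=1)

    coverage, confidence = 0.95, 0.95
    # One-sided normal tolerance factor from the noncentral t distribution:
    # k = t_{confidence, n-1, delta} / sqrt(n), with delta = z_coverage * sqrt(n)
    delta = stats.norm.ppf(coverage) * np.sqrt(n)
    k = stats.nct.ppf(confidence, df=n - 1, nc=delta) / np.sqrt(n)

    utl = mean + k * sd  # 95%/95% upper tolerance limit
    print(round(utl, 3))
    ```

    Note that k exceeds the one-sided 95% z-value (1.645) to account for uncertainty in the estimated mean and standard deviation; it shrinks toward 1.645 as the sample size grows.
    
    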

  20. Excel 2016 for biological and life sciences statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J; Horton, Howard F

    2016-01-01

    This book is a step-by-step, exercise-driven guide for students and practitioners who need to master Excel to solve practical biological and life science problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically inclined, or you are wary of computers, this is the right book for you. Excel is an effective learning tool for quantitative analyses in biological and life sciences courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Biological and Life Sciences Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel 2016 to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader to use Excel commands to solve specific, easy-to-understand biological and life science problems. Practice problems are provided...