WorldWideScience

Sample records for subsection statistical analyses

  1. Statistical Analyses of Digital Collections

    DEFF Research Database (Denmark)

    Frandsen, Tove Faber; Nicolaisen, Jeppe

    2016-01-01

    Using statistical methods to analyse digital material makes it possible to detect patterns in big data that we would otherwise not be able to detect. This paper seeks to exemplify this fact by statistically analysing a large corpus of references in systematic reviews. The aim of the analysis is to study the phenomenon of non-citation: situations where just one (or some) document(s) are cited from a pool of otherwise equally citable documents. The study is based on more than 120,000 cited studies and a total of more than 1.6 million non-cited studies. The number of cited ... 10 years. After 10 years the cited and non-cited studies tend to be more similar in terms of age. Separating the data set into different sub-disciplines reveals that the sub-disciplines vary in terms of the age of cited vs. non-cited references. Some fields may be expanding and the number of published...

  2. Taxonomic revision of Elaphoglossum subsection Muscosa (Dryopteridaceae)

    NARCIS (Netherlands)

    Vasco, A.

    2011-01-01

    The present paper provides a monograph of Elaphoglossum subsect. Muscosa, a monophyletic group supported by molecular phylogenetic analyses. The monograph includes keys, full synonymy, descriptions, representative specimens examined, an index to collectors’ names and numbers, illustrations, spore

  3. Studies in Coprinus III — Coprinus section Veliformes. Subdivision and revision of subsection Nivei emend

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    1993-01-01

    Coprinus section Veliformes is defined and delimited to comprise four subsections: subsection Micacei, subsection Domestici, subsection Nivei, and subsection Narcotici, subsection nov. A key to the subsections is given. Subsection Nivei is emended, including also most taxa of subsection Flocculosi

  4. Ecological units of the Northern Region: Subsections

    Science.gov (United States)

    John A. Nesser; Gary L. Ford; C. Lee Maynard; Debbie Dumroese

    1997-01-01

    Ecological units are described at the subsection level of the Forest Service National Hierarchical Framework of Ecological Units. A total of 91 subsections are delineated on the 1996 map "Ecological Units of the Northern Region: Subsections," based on physical and biological criteria. This document consists of descriptions of the climate, geomorphology,...

  5. Applied statistics a handbook of BMDP analyses

    CERN Document Server

    Snell, E J

    1987-01-01

    This handbook is a realization of a long-term goal of BMDP Statistical Software. As the software supporting statistical analysis has grown in breadth and depth to the point where it can serve many of the needs of accomplished statisticians, it can also serve as an essential support to those needing to expand their knowledge of statistical applications. Statisticians should not be handicapped by heavy computation or by the lack of needed options. When Applied Statistics: Principles and Examples by Cox and Snell appeared, we at BMDP were impressed with the scope of the applications discussed and felt that many statisticians eager to expand their capabilities in handling such problems could profit from having the solutions carried further, to get them started and guided to a more advanced level in problem solving. Who would be better to undertake that task than the authors of Applied Statistics? A year or two later, discussions with David Cox and Joyce Snell at Imperial College indicated that a wedding of the proble...

  6. Basic assumptions in statistical analyses of data in biomedical ...

    African Journals Online (AJOL)

    If one or more assumptions are violated, an alternative procedure must be used to obtain valid results. This article aims at highlighting some basic assumptions in statistical analyses of data in biomedical sciences. Keywords: samples, independence, non-parametric, parametric, statistical analyses. Int. J. Biol. Chem. Sci. Vol.

  7. A new species in Coprinus subsection Setulosi

    NARCIS (Netherlands)

    Uljé, C.B.; Verbeken, A.

    2002-01-01

    Coprinus canistri spec. nov. is proposed. It belongs to the subsection Setulosi because of the presence of pileo- and caulocystidia. A comparison is given with C. subimpatiens and C. congregatus, on account of similar microscopical characters.

  8. Use of statistical analyses in the ophthalmic literature.

    Science.gov (United States)

    Lisboa, Renato; Meira-Freitas, Daniel; Tatham, Andrew J; Marvasti, Amir H; Sharpsten, Lucie; Medeiros, Felipe A

    2014-07-01

    To identify the most commonly used statistical analyses in the ophthalmic literature and to determine the likely gain in comprehension of the literature that readers could expect if they were to add knowledge of more advanced techniques sequentially to their statistical repertoire. Cross-sectional study. All articles published from January 2012 through December 2012 in Ophthalmology, the American Journal of Ophthalmology, and Archives of Ophthalmology were reviewed. A total of 780 peer-reviewed articles were included. Two reviewers examined each article and assigned categories to each one depending on the type of statistical analyses used. Discrepancies between reviewers were resolved by consensus. Total number and percentage of articles containing each category of statistical analysis were obtained. Additionally, we estimated the accumulated number and percentage of articles that a reader would be expected to be able to interpret depending on their statistical repertoire. Readers with little or no statistical knowledge would be expected to be able to interpret the statistical methods presented in only 20.8% of articles. To understand more than half (51.4%) of the articles published, readers would be expected to be familiar with at least 15 different statistical methods. Knowledge of 21 categories of statistical methods was necessary to comprehend 70.9% of articles, whereas knowledge of more than 29 categories was necessary to comprehend more than 90% of articles. Articles related to retina and glaucoma subspecialties showed a tendency for using more complex analysis when compared with articles from the cornea subspecialty. Readers of clinical journals in ophthalmology need to have substantial knowledge of statistical methodology to understand the results of studies published in the literature. The frequency of the use of complex statistical analyses also indicates that those involved in the editorial peer-review process must have sound statistical knowledge to

  9. SWORDS: A statistical tool for analysing large DNA sequences

    Indian Academy of Sciences (India)

    In this article, we present some simple yet effective statistical techniques for analysing and comparing large DNA sequences. These techniques are based on frequency distributions of DNA words in a large sequence, and have been packaged into a software tool called SWORDS. Using sequences available in the public domain ...
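The core object the abstract describes, a frequency distribution of DNA words, is easy to sketch. This is a generic illustration, not the SWORDS implementation; the function name and the choice of word length k are illustrative:

```python
from collections import Counter

def word_frequencies(seq, k):
    """Count overlapping k-letter DNA words (k-mers) in a sequence."""
    seq = seq.upper()
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Frequencies of 3-letter words in a toy sequence:
freqs = word_frequencies("ATGCGATGAATG", 3)
```

Comparing such word-frequency distributions between two large sequences (e.g. with chi-square or rank-based statistics) is the kind of analysis the abstract refers to.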

  10. Additional studies in Coprinus subsection Glabri

    NARCIS (Netherlands)

    Uljé, C.B.; Bender, H.

    1997-01-01

    First Coprinus lilatinctus, belonging to subsect. Glabri, is described as new. Secondly, nomenclatural reasons are given for Coprinus nudiceps P.D. Orton to be replaced by the older name C. schroeteri P. Karst. Type studies of both taxa are given and their synonymy is discussed. To facilitate

  11. Type studies in Coprinus subsection Lanatuli

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    2001-01-01

    As a prelude to a monograph of the genus Coprinus, types were studied of a number of species said to belong to Coprinus subsection Lanatuli (Coprinus alnivorus. C. alutaceivelatus, C. ammophilae, C. arachnoideus, C. asterophoroides, C. brunneistragulatus, C. bubalinus, C. citrinovelatus, C.

  12. 22 CFR 505.13 - General exemptions (Subsection (j)).

    Science.gov (United States)

    2010-04-01

    General exemptions (Subsection (j)). (a) General exemptions are available for systems of records which are maintained by the Central Intelligence Agency (Subsection (j)(1)), or maintained by an agency which performs as...

  13. 22 CFR 505.14 - Specific exemptions (Subsection (k)).

    Science.gov (United States)

    2010-04-01

    Specific exemptions (Subsection (k)). The specific exemptions focus more on the nature of the records in the system of... Subsection (k)(1): Records which are specifically authorized under criteria established under an Executive...

  14. Statistical reliability analyses of two wood plastic composite extrusion processes

    Energy Technology Data Exchange (ETDEWEB)

    Crookston, Kevin A., E-mail: kevincrookston@gmail.co [Department of Statistics, Operations, and Management Science, University of Tennessee, Knoxville, TN 37996-0532 (United States); Mark Young, Timothy, E-mail: tmyoung1@utk.ed [Forest Products Center, University of Tennessee, Knoxville, TN 37996-4570 (United States); Harper, David, E-mail: dharper4@utk.ed [Forest Products Center, University of Tennessee, Knoxville, TN 37996-4570 (United States); Guess, Frank M., E-mail: fguess@utk.ed [Department of Statistics, Operations, and Management Science, University of Tennessee, Knoxville, TN 37996-0532 (United States)

    2011-01-15

    Estimates of the reliability of wood plastic composites (WPC) are explored for two industrial extrusion lines. The goal of the paper is to use parametric and non-parametric analyses to examine potential differences in the WPC metrics of reliability for the two extrusion lines that may be helpful to practitioners. A parametric analysis of the extrusion lines reveals some similarities and disparities in the best models; however, a non-parametric analysis reveals unique and insightful differences between Kaplan-Meier survival curves for the modulus of elasticity (MOE) and modulus of rupture (MOR) of the WPC industrial data. The distinctive non-parametric comparisons indicate the source of the differences in strength between the 10.2% and 48.0% fractiles [3,183-3,517 MPa] for MOE and for MOR between the 2.0% and 95.1% fractiles [18.9-25.7 MPa]. Distribution fitting as related to selection of the proper statistical methods is discussed with relevance to estimating the reliability of WPC. The ability to detect statistical differences in the product reliability of WPC between extrusion processes may benefit WPC producers in improving product reliability and safety of this widely used house-decking product. The approach can be applied to many other safety and complex system lifetime comparisons.
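The non-parametric comparison above rests on the Kaplan-Meier estimator. A minimal sketch of that estimator in pure Python (generic, not the authors' code; the `times`/`events` naming is illustrative, with `events` marking observed failures vs. censored observations):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.

    times  : observed times (failure or censoring)
    events : 1 if a failure was observed at that time, 0 if censored
    Returns a list of (time, survival probability) at each failure time.
    """
    pairs = sorted(zip(times, events))
    surv = 1.0
    curve = []
    for t in sorted({t for t, e in pairs if e == 1}):
        n = sum(1 for tt, _ in pairs if tt >= t)              # at risk just before t
        d = sum(1 for tt, e in pairs if tt == t and e == 1)   # failures at t
        surv *= 1.0 - d / n
        curve.append((t, surv))
    return curve
```

Comparing two extrusion lines then amounts to computing one curve per line (e.g. for MOE values falling below a threshold) and inspecting where the curves diverge.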

  15. Statistical technique for analysing functional connectivity of multiple spike trains.

    Science.gov (United States)

    Masud, Mohammad Shahed; Borisyuk, Roman

    2011-03-15

    A new statistical technique, the Cox method, used for analysing functional connectivity of simultaneously recorded multiple spike trains is presented. This method is based on the theory of modulated renewal processes and it estimates a vector of influence strengths from multiple spike trains (called reference trains) to the selected (target) spike train. Selecting another target spike train and repeating the calculation of the influence strengths from the reference spike trains enables researchers to find all functional connections among multiple spike trains. In order to study functional connectivity an "influence function" is identified. This function recognises the specificity of neuronal interactions and reflects the dynamics of postsynaptic potential. In comparison to existing techniques, the Cox method has the following advantages: it does not use bins (binless method); it is applicable to cases where the sample size is small; it is sufficiently sensitive such that it estimates weak influences; it supports the simultaneous analysis of multiple influences; it is able to identify a correct connectivity scheme in difficult cases of "common source" or "indirect" connectivity. The Cox method has been thoroughly tested using multiple sets of data generated by the neural network model of the leaky integrate and fire neurons with a prescribed architecture of connections. The results suggest that this method is highly successful for analysing functional connectivity of simultaneously recorded multiple spike trains. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. Chasing the peak: optimal statistics for weak shear analyses

    Science.gov (United States)

    Smit, Merijn; Kuijken, Konrad

    2018-01-01

    Context. Weak gravitational lensing analyses are fundamentally limited by the intrinsic distribution of galaxy shapes. It is well known that this distribution of galaxy ellipticity is non-Gaussian, and the traditional estimation methods, explicitly or implicitly assuming Gaussianity, are not necessarily optimal. Aims: We aim to explore alternative statistics for samples of ellipticity measurements. An optimal estimator needs to be asymptotically unbiased, efficient, and robust in retaining these properties for various possible sample distributions. We take the non-linear mapping of gravitational shear and the effect of noise into account. We then discuss how the distribution of individual galaxy shapes in the observed field of view can be modeled by fitting Fourier modes to the shear pattern directly. This allows scientific analyses using statistical information of the whole field of view, instead of locally sparse and poorly constrained estimates. Methods: We simulated samples of galaxy ellipticities, using both theoretical distributions and data for ellipticities and noise. We determined the possible bias Δe, the efficiency η and the robustness of the least absolute deviations, the biweight, and the convex hull peeling (CHP) estimators, compared to the canonical weighted mean. Using these statistics for regression, we have shown the applicability of direct Fourier mode fitting. Results: We find an improved performance of all estimators, when iteratively reducing the residuals after de-shearing the ellipticity samples by the estimated shear, which removes the asymmetry in the ellipticity distributions. We show that these estimators are then unbiased in the absence of noise, and decrease noise bias by more than 30%. Our results show that the CHP estimator distribution is skewed, but still centered around the underlying shear, and its bias least affected by noise. We find the least absolute deviations estimator to be the most efficient estimator in almost all

  17. Needle Terpenes as Chemotaxonomic Markers in Pinus: Subsections Pinus and Pinaster.

    Science.gov (United States)

    Mitić, Zorica S; Jovanović, Snežana Č; Zlatković, Bojan K; Nikolić, Biljana M; Stojanović, Gordana S; Marin, Petar D

    2017-05-01

    Chemical compositions of needle essential oils of 27 taxa from the section Pinus, including 20 and 7 taxa of the subsections Pinus and Pinaster, respectively, were compared in order to determine chemotaxonomic significance of terpenes at infrageneric level. According to analysis of variance, six out of 31 studied terpene characters were characterized by a high level of significance, indicating statistically significant difference between the examined subsections. Agglomerative hierarchical cluster analysis has shown separation of eight groups, where representatives of subsect. Pinaster were distributed within the first seven groups on the dendrogram together with P. nigra subsp. laricio and P. merkusii from the subsect. Pinus. On the other hand, the eighth group included the majority of the members of subsect. Pinus. Our findings, based on terpene characters, complement those obtained from morphological, biochemical, and molecular parameters studied over the past two decades. In addition, results presented in this article confirmed that terpenes are good markers at infrageneric level. © 2017 Wiley-VHCA AG, Zurich, Switzerland.

  18. Statistical analyses support power law distributions found in neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Andreas Klaus

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling (a "finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
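The maximum-likelihood plus Kolmogorov-Smirnov procedure described in this abstract can be sketched for the continuous case. This is a simplified illustration of Clauset-style power-law fitting, not the authors' pipeline: it takes xmin as given, whereas a full analysis also scans xmin and runs the likelihood-ratio comparisons against alternative distributions:

```python
import math

def fit_power_law(xs, xmin):
    """MLE exponent for a continuous power law p(x) ~ x^(-alpha), x >= xmin,
    plus the Kolmogorov-Smirnov distance between data and fitted model."""
    tail = sorted(x for x in xs if x >= xmin)
    n = len(tail)
    # Closed-form maximum-likelihood estimate of the exponent.
    alpha = 1.0 + n / sum(math.log(x / xmin) for x in tail)
    # Model CDF: F(x) = 1 - (x / xmin)^(1 - alpha); compare to empirical CDF.
    ks = max(abs((i + 1) / n - (1.0 - (x / xmin) ** (1.0 - alpha)))
             for i, x in enumerate(tail))
    return alpha, ks
```

A small KS distance indicates the fitted power law tracks the empirical avalanche-size distribution closely; the log-likelihood ratio step would then compare it against exponential, lognormal, and gamma fits.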

  19. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the aforementioned prediction of the possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie, and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models) describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness in a one-year and two-year time horizon. In order to assess their practical applicability the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties, and the capacity to accurately identify enterprises at risk of bankruptcy and healthy companies as well as proper calibration of the models to the data from training sample sets.

  20. A Weighted U Statistic for Association Analyses Considering Genetic Heterogeneity

    Science.gov (United States)

    Wei, Changshuai; Elston, Robert C.; Lu, Qing

    2016-01-01

    Converging evidence suggests that common complex diseases with the same or similar clinical manifestations could have different underlying genetic etiologies. While current research interests have shifted toward uncovering rare variants and structural variations predisposing to human diseases, the impact of heterogeneity in genetic studies of complex diseases has been largely overlooked. Most of the existing statistical methods assume the disease under investigation has a homogeneous genetic effect and could, therefore, have low power if the disease undergoes heterogeneous pathophysiological and etiological processes. In this paper, we propose a heterogeneity weighted U (HWU) method for association analyses considering genetic heterogeneity. HWU can be applied to various types of phenotypes (e.g., binary and continuous) and is computationally efficient for high-dimensional genetic data. Through simulations, we showed the advantage of HWU when the underlying genetic etiology of a disease was heterogeneous, as well as the robustness of HWU against different model assumptions (e.g., phenotype distributions). Using HWU, we conducted a genome-wide analysis of nicotine dependence from the Study of Addiction: Genetics and Environments (SAGE) dataset. The genome-wide analysis of nearly one million genetic markers took 7 hours, identifying heterogeneous effects of two new genes (i.e., CYP3A5 and IKBKB) on nicotine dependence. PMID:26833871
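HWU builds on the classical U-statistic framework by weighting pairwise phenotype comparisons with genetic similarity. As an unweighted reference point only (this is not the HWU method itself), the Mann-Whitney U is the textbook example of a U statistic built from pairwise comparisons:

```python
def mann_whitney_u(xs, ys):
    """Classical (unweighted) Mann-Whitney U statistic: counts the pairs
    in which an x-sample value exceeds a y-sample value; ties count 1/2."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

HWU replaces the uniform pairwise kernel with weights derived from genetic similarity, so that subgroups with heterogeneous etiologies contribute according to how alike their genetic profiles are.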

  1. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses.

    Directory of Open Access Journals (Sweden)

    Md Shamsuzzoha Bayzid

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause of gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than the more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also...

  2. Confidence Levels in Statistical Analyses. Analysis of Variances. Case Study.

    Directory of Open Access Journals (Sweden)

    Ileana Brudiu

    2010-05-01

    Applying a statistical test to check statistical assumptions offers a positive or negative response regarding the veracity of the issued hypothesis. In the case of variance analysis it is necessary to apply a post hoc test to determine differences within the group. Statistical estimation using confidence levels provides more information than a statistical test: it shows the high degree of uncertainty resulting from small samples and builds conclusions in terms of "marginally significant" or "almost significant" (p being close to 0.05). The case study shows how statistical estimation completes the application of the analysis of variance test and the Tukey test.
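The F statistic underlying the one-way analysis of variance discussed above compares between-group variance to within-group variance. A minimal sketch in pure Python (illustrative code, not from the case study):

```python
def one_way_anova_f(groups):
    """F statistic for one-way ANOVA over a list of sample groups."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (group means vs. grand mean).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (observations vs. their group mean).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A post hoc procedure such as the Tukey test, and confidence intervals on the pairwise mean differences, would then locate which groups actually differ; the F statistic alone only signals that some difference exists.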

  3. Statistical analyses for NANOGrav 5-year timing residuals

    Science.gov (United States)

    Wang, Yan; Cordes, James M.; Jenet, Fredrick A.; Chatterjee, Shami; Demorest, Paul B.; Dolch, Timothy; Ellis, Justin A.; Lam, Michael T.; Madison, Dustin R.; McLaughlin, Maura A.; Perrodin, Delphine; Rankin, Joanna; Siemens, Xavier; Vallisneri, Michele

    2017-02-01

    In pulsar timing, timing residuals are the differences between the observed times of arrival and predictions from the timing model. A comprehensive timing model will produce featureless residuals, which are presumably composed of dominating noise and weak physical effects excluded from the timing model (e.g. gravitational waves). In order to apply optimal statistical methods for detecting weak gravitational wave signals, we need to know the statistical properties of the noise components in the residuals. In this paper we utilize a variety of non-parametric statistical tests to analyze the whiteness and Gaussianity of the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) 5-year timing data, which were obtained from the Arecibo Observatory and the Green Bank Telescope from 2005 to 2010. We find that most of the data are consistent with white noise; many deviate from Gaussianity at different levels, but removing outliers in some pulsars mitigates the deviations.
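One simple non-parametric whiteness check of the kind the abstract mentions is the Wald-Wolfowitz runs test: serially correlated residuals produce too few (or, for alternating noise, too many) runs above/below the median. A generic sketch, not the paper's actual test battery:

```python
from statistics import NormalDist, median

def runs_test(residuals):
    """Wald-Wolfowitz runs test for randomness of a residual sequence.
    Returns (z statistic, two-sided p-value) under a normal approximation."""
    med = median(residuals)
    signs = [x > med for x in residuals if x != med]
    n1 = sum(signs)
    n2 = len(signs) - n1
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 1 + 2 * n1 * n2 / (n1 + n2)
    var = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / ((n1 + n2) ** 2 * (n1 + n2 - 1))
    z = (runs - mu) / var ** 0.5
    return z, 2 * (1 - NormalDist().cdf(abs(z)))
```

A small p-value rejects the hypothesis that the residuals are exchangeable, i.e. it flags non-white structure (the z sign tells whether there are too many or too few runs).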

  4. Colonization and diversification of the Euphorbia species (sect. Aphyllis subsect. Macaronesicae) on the Canary Islands.

    Science.gov (United States)

    Sun, Ye; Li, Yanshu; Vargas-Mendoza, Carlos Fabián; Wang, Faguo; Xing, Fuwu

    2016-09-29

    Diversification between islands and ecological radiation within islands are postulated to have occurred in the Euphorbia species (sect. Aphyllis subsect. Macaronesicae) on the Canary Islands. In this study, the biogeographical pattern of 11 species of subsect. Macaronesicae and the genetic differentiation among five species were investigated to distinguish the potential mode and mechanism of diversification and speciation. The biogeographical patterns and genetic structure were examined using statistical dispersal-vicariance analysis, Bayesian phylogenetic analysis, reduced median-joining haplotype network analysis, and discriminant analysis of principal components. The gene flow between related species was evaluated with an isolation-with-migration model. The ancestral range of the species of subsect. Macaronesicae was inferred to be Tenerife and the Cape Verde Islands, and Tenerife-La Gomera acted as sources of diversity to other islands of the Canary Islands. Inter-island colonization of E. lamarckii among the western islands and a colonization of E. regis-jubae from Gran Canaria to northern Africa were revealed. Both diversification between islands and radiation within islands have been revealed in the Euphorbia species (sect. Aphyllis subsect. Macaronesicae) of the Canary Islands. It was clear that this group began the speciation process in Tenerife-La Gomera, and this process occurred with gene flow between some related species.

  5. Statistical analyses of variability in properties of soils in gully erosion ...

    African Journals Online (AJOL)

    The study involved statistical analyses of the variability of soil properties with depth and its influence on gully development. The soil data were obtained from field study and laboratory analyses. The statistical analyses were performed on soil index properties using the Statistical Package for the Social Sciences (SPSS). Results of the ...

  6. Improved analyses using function datasets and statistical modeling

    Science.gov (United States)

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  7. Statistical analyses of Global U-Pb Database 2017

    Directory of Open Access Journals (Sweden)

    Stephen J. Puetz

    2018-01-01

    The method of obtaining zircon samples affects estimation of the global U-Pb age distribution. Researchers typically collect zircons via convenience sampling and cluster sampling. When using these techniques, weight adjustments proportional to the areas of the sampled regions improve upon unweighted estimates. Here, grid-area and modern sediment methods are used to weight the samples from a new database of 418,967 U-Pb ages. Preliminary tests involve two age models. Model-1 uses the most precise U-Pb ages as the best ages. Model-2 uses the 206Pb/238U age as the best age if it is less than a 1000 Ma cutoff; otherwise it uses the 207Pb/206Pb age as the best age. A correlation analysis between the Model-1 and Model-2 ages indicates nearly identical distributions for both models. However, after applying acceptance criteria to include only the most precise analyses with minimal discordance, a histogram of the rejected samples shows excessive rejection of the Model-2 analyses around the 1000 Ma cutoff point. Because of the excessive rejection rate for Model-2, we select Model-1 as the preferred model. After eliminating all rejected samples, the remaining analyses use only Model-1 ages for five rock-type subsets of the database: igneous, meta-igneous, sedimentary, meta-sedimentary, and modern sediments. Next, time-series plots, cross-correlation analyses, and spectral analyses determine the degree of alignment among the time-series and their periodicity. For all rock types, the U-Pb age distributions are similar for ages older than 500 Ma, but align poorly for ages younger than 500 Ma. The similarities (>500 Ma) and differences (<500 Ma) highlight how reductionism from a detailed database enhances understanding of time-dependent sequences, such as erosion, detrital transport mechanisms, lithification, and metamorphism. Time-series analyses and spectral analyses of the age distributions predominantly indicate a synchronous period-tripling sequence...
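The area-weighting idea in this abstract, letting each analysis contribute its sampled region's weight rather than a unit count, reduces to a weighted age histogram. A toy sketch (names, bin width, and weights are illustrative, not the paper's grid-area values):

```python
def weighted_histogram(ages, weights, bin_width):
    """Weighted age histogram: each age contributes its region's weight
    instead of a unit count; returns normalized bin proportions."""
    hist = {}
    for age, w in zip(ages, weights):
        b = int(age // bin_width) * bin_width   # lower edge of the bin
        hist[b] = hist.get(b, 0.0) + w
    total = sum(hist.values())
    return {b: v / total for b, v in sorted(hist.items())}
```

The resulting binned series per rock type is what the cross-correlation and spectral analyses then compare for alignment and periodicity.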

  8. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    Science.gov (United States)

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
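The sample-size intuition behind such "what if" analyses can be shown in a few lines: hold a standardized effect fixed and watch its p-value shrink as n grows. This is a generic normal-approximation sketch, not the paper's Excel/"R" material:

```python
from statistics import NormalDist

def p_value_for_n(effect_size, n):
    """Two-sided p-value for a fixed standardized effect (mean difference
    divided by SD) observed with sample size n, via a normal approximation."""
    z = effect_size * n ** 0.5
    return 2 * (1 - NormalDist().cdf(abs(z)))

# The same d = 0.3 effect crosses the 0.05 threshold purely by raising n:
for n in (10, 20, 50, 100):
    print(n, round(p_value_for_n(0.3, n), 4))
```

The point of the exercise is that "statistically significant" reflects sample size as much as effect size, which is exactly what prospective what-if calculations make visible.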

  9. Statistical Data Analyses of Trace Chemical, Biochemical, and Physical Analytical Signatures

    Energy Technology Data Exchange (ETDEWEB)

    Udey, Ruth Norma [Michigan State Univ., East Lansing, MI (United States)

    2013-01-01

    Analytical and bioanalytical chemistry measurement results are most meaningful when interpreted using rigorous statistical treatments of the data. The same data set may provide many dimensions of information depending on the questions asked through the applied statistical methods. Three principal projects illustrated the wealth of information gained through the application of statistical data analyses to diverse problems.

  10. The taxonomy of the European species of Hebeloma section Denudata subsections Hiemalia, Echinospora subsect. nov. and Clepsydroida subsect. nov. and five new species

    DEFF Research Database (Denmark)

    Eberhardt, Ursula; Beker, Henry J.; Vesterholt, Jan Hansen

    2016-01-01

    ., H. populinum, and H. rostratum sp. nov. We provide descriptions of all three of these species in order to clarify the taxonomy of this section. We provide a key to H. sect. Denudata and the discussed subsections. For the majority of the taxa there is good overall consistency between morphological...... and phylogenetic delimitation and, where the information exists, thanks to Aanen and Kuyper's work, biological delimitation....

  11. Taxonomy, systematics, and biogeography of Ficus subsection Urostigma (Moraceae)

    NARCIS (Netherlands)

    Chantarasuwan, Bhanumas

    2014-01-01

    Five research methods were used in Taxonomy, Systematics, and Biogeography of Ficus subsection Urostigma(Moraceae); Morphological characters, leaf anatomy, pollen morphology, molecular phylogeny, and historical biogeography. Seven topics are the result: 1) A revision was made based on morphology in

  12. Studies in Coprinus—II. Subsection Setulosi of section Pseudocoprinus

    NARCIS (Netherlands)

    Ujlé, C.B.; Bas, C.

    1991-01-01

    A key is given to the Netherlands’ species of subsect. Setulosi J. Lange of Coprinus sect. Pseudocoprinus (Kühn.) P.D. Orton & Watling. Some additional species are also included. All species dealt with are concisely described and fully discussed. A few probably new species are described ad interim.

  13. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

    The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been shown that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a somewhat new component, and it concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples of SOCR Analyses in the non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.
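The non-parametric tests listed for SOCR Analyses have widely used counterparts in SciPy; the sketch below runs several of them on toy samples to show what each one compares. The data are made up for illustration, and this is not SOCR's Java implementation.

```python
# SciPy counterparts of four tests named in the SOCR Analyses toolkit,
# applied to synthetic samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1, 30)
y = rng.normal(0.5, 1, 30)
z = rng.normal(0.5, 2, 30)

u = stats.ranksums(x, y)      # Wilcoxon rank sum: shift between two samples
k = stats.kruskal(x, y, z)    # Kruskal-Wallis: location across 3+ samples
f = stats.fligner(x, y, z)    # Fligner-Killeen: equality of variances
ks = stats.ks_2samp(x, y)     # Kolmogorov-Smirnov: whole-distribution difference

for name, res in [("rank sum", u), ("Kruskal-Wallis", k),
                  ("Fligner-Killeen", f), ("KS", ks)]:
    print(f"{name}: statistic={res.statistic:.3f}, p={res.pvalue:.3f}")
```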

  14. Missile total and subsection weight and size estimation equations

    OpenAIRE

    Nowell, John B., Jr.

    1992-01-01

    Approved for public release; distribution is unlimited. This study utilizes regression analysis to develop equations which relate missile overall and subsection weights and geometries, including wings and fins, to variables which are considered to be the input for a new design in the conceptual or preliminary design phase. These variables include packaging requirements such as maximum length, diameter and weight, as well as performance characteristics such as mission and range. Data for th...

  15. Studies in Coprinus IV — Coprinus section coprinus. Subdivision and revision of subsection Alachuani

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    1997-01-01

    Coprinus section Coprinus is defined and delimited to comprise four subsections: Atramentarii, Coprinus, Lanatuli and Alachuani. A key to the subsections is given as well as a key to the species of subsection Alachuani known from the Netherlands or to be expected in the Netherlands on account of records from neighbouring countries.

  16. Statistical Parametric Mapping (SPM) for alpha-based statistical analyses of multi-muscle EMG time-series.

    Science.gov (United States)

    Robinson, Mark A; Vanrenterghem, Jos; Pataky, Todd C

    2015-02-01

    Multi-muscle EMG time-series are highly correlated and time dependent yet traditional statistical analysis of scalars from an EMG time-series fails to account for such dependencies. This paper promotes the use of SPM vector-field analysis for the generalised analysis of EMG time-series. We reanalysed a publicly available dataset of Young versus Adult EMG gait data to contrast scalar and SPM vector-field analysis. Independent scalar analyses of EMG data between 35% and 45% stance phase showed no statistical differences between the Young and Adult groups. SPM vector-field analysis did however identify statistical differences within this time period. As scalar analysis failed to consider the multi-muscle and time dependence of the EMG time-series it exhibited Type II error. SPM vector-field analysis on the other hand accounts for both dependencies whilst tightly controlling for Type I and Type II error making it highly applicable to EMG data analysis. Additionally SPM vector-field analysis is generalizable to linear and non-linear parametric and non-parametric statistical models, allowing its use under constraints that are common to electromyography and kinesiology. Copyright © 2014 Elsevier Ltd. All rights reserved.
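The scalar-versus-time-series contrast in the abstract can be illustrated with a toy simulation: a group difference localised late in stance is invisible to a t-test on a scalar extracted from 35-45% stance, but shows up when the whole curve is tested pointwise. This is a simplified stand-in for SPM, which additionally controls field-wide Type I error via random field theory; the data here are synthetic.

```python
# Why testing the whole time-series matters: a localised group
# difference at ~70% stance is missed by a 35-45% scalar.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t_axis = np.linspace(0, 100, 101)                     # % stance phase
signal = 0.4 * np.exp(-((t_axis - 70) ** 2) / 50.0)   # difference near 70%

group_a = rng.normal(0, 1, (20, 101))
group_b = rng.normal(0, 1, (20, 101)) + signal

# Scalar approach: mean amplitude between 35% and 45% stance only.
a_scalar = group_a[:, 35:46].mean(axis=1)
b_scalar = group_b[:, 35:46].mean(axis=1)
_, p_scalar = stats.ttest_ind(a_scalar, b_scalar)

# Time-series approach: a t-test at every time node.
_, p_curve = stats.ttest_ind(group_a, group_b, axis=0)
print(f"scalar p={p_scalar:.3f}, min pointwise p={p_curve.min():.4f}")
```

Note that naive pointwise testing inflates Type I error across the 101 nodes; that multiple-comparison control is precisely what SPM contributes.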

  17. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    OpenAIRE

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2009-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses i...

  18. The use of statistical grain-size method in analysing borehole and ...

    African Journals Online (AJOL)

    The use of statistical grain-size method in analysing borehole and evaluating aquifer parameters. A case study of ... The distribution of major geological units, well log data, static water level data, and surface features were found to have influenced groundwater occurrence and flow pattern in the study area. The lithological ...

  19. Statistical Reform: Evidence-Based Practice, Meta-Analyses, and Single Subject Designs

    Science.gov (United States)

    Jenson, William R.; Clark, Elaine; Kircher, John C.; Kristjansson, Sean D.

    2007-01-01

    Evidence-based practice approaches to interventions have come of age and promise to provide a new standard of excellence for school psychologists. This article describes several definitions of evidence-based practice and the problems associated with traditional statistical analyses that rely on rejection of the null hypothesis for the…

  20. Scripts for TRUMP data analyses. Part II (HLA-related data): statistical analyses specific for hematopoietic stem cell transplantation.

    Science.gov (United States)

    Kanda, Junya

    2016-01-01

    The Transplant Registry Unified Management Program (TRUMP) made it possible for members of the Japan Society for Hematopoietic Cell Transplantation (JSHCT) to analyze large sets of national registry data on autologous and allogeneic hematopoietic stem cell transplantation. However, as the processes used to collect transplantation information are complex and differed over time, the background of these processes should be understood when using TRUMP data. Previously, information on the HLA locus of patients and donors had been collected using a questionnaire-based free-description method, resulting in some input errors. To correct minor but significant errors and provide accurate HLA matching data, the use of a Stata or EZR/R script offered by the JSHCT is strongly recommended when analyzing HLA data in the TRUMP dataset. The HLA mismatch direction, mismatch counting method, and different impacts of HLA mismatches by stem cell source are other important factors in the analysis of HLA data. Additionally, researchers should understand the statistical analyses specific for hematopoietic stem cell transplantation, such as competing risk, landmark analysis, and time-dependent analysis, to correctly analyze transplant data. The data center of the JSHCT can be contacted if statistical assistance is required.

  1. Statistical analyses to support forensic interpretation for a new ten-locus STR profiling system.

    Science.gov (United States)

    Foreman, L A; Evett, I W

    2001-01-01

    A new ten-locus STR (short tandem repeat) profiling system was recently introduced into casework by the Forensic Science Service (FSS), and statistical analyses are described here based on data collected using this new system for the three major racial groups of the UK: Caucasian, Afro-Caribbean and Asian (of Indo-Pakistani descent). Allele distributions are compared and the FSS position with regard to routine significance testing of DNA frequency databases is discussed. An investigation of match probability calculations is carried out and the consequent analyses are shown to provide support for proposed changes in how the FSS reports DNA results when very small match probabilities are involved.

  2. Understanding of statistical terms routinely used in meta-analyses: an international survey among researchers.

    Science.gov (United States)

    Mavros, Michael N; Alexiou, Vangelis G; Vardakas, Konstantinos Z; Falagas, Matthew E

    2013-01-01

    Biomedical literature is increasingly enriched with literature reviews and meta-analyses. We sought to assess the understanding of statistical terms routinely used in such studies, among researchers. An online survey posing 4 clinically-oriented multiple-choice questions was conducted in an international sample of randomly selected corresponding authors of articles indexed by PubMed. A total of 315 unique complete forms were analyzed (participation rate 39.4%), mostly from Europe (48%), North America (31%), and Asia/Pacific (17%). Only 10.5% of the participants answered all 4 "interpretation" questions correctly, while 9.2% answered all questions incorrectly. Regarding each question, 51.1%, 71.4%, and 40.6% of the participants correctly interpreted statistical significance of a given odds ratio, risk ratio, and weighted mean difference with 95% confidence intervals, respectively, while 43.5% correctly replied that no statistical model can adjust for clinical heterogeneity. Clinicians had more correct answers than non-clinicians (mean score ± standard deviation: 2.27±1.06 versus 1.83±1.14). Researchers, randomly selected from a diverse international sample of biomedical scientists, misinterpreted statistical terms commonly reported in meta-analyses. Authors could be prompted to explicitly interpret their findings to prevent misunderstandings and readers are encouraged to keep up with basic biostatistics.
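The survey's "interpretation" questions reduce to one rule: a ratio measure (odds ratio, risk ratio) is statistically significant at the 5% level when its 95% confidence interval excludes 1, while a difference measure (weighted mean difference) is significant when its interval excludes 0. A tiny checker, with hypothetical estimates and intervals:

```python
# Significance-by-CI rule for the three effect measures in the survey.
def significant(estimate, ci_low, ci_high, null_value):
    """True if the 95% CI excludes the null value for this measure."""
    return not (ci_low <= null_value <= ci_high)

print(significant(1.8, 1.1, 2.9, null_value=1.0))    # odds ratio -> True
print(significant(0.9, 0.6, 1.4, null_value=1.0))    # risk ratio -> False
print(significant(-2.1, -3.5, -0.7, null_value=0.0)) # WMD -> True
```

The fourth question has no code analogue: clinical heterogeneity (differences in populations, interventions, outcomes) cannot be "adjusted away" by any statistical model.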

  3. Studies in Coprinus IV — Coprinus section coprinus. Subdivision and revision of subsection Alachuani

    OpenAIRE

    Uljé, C.B.; Noordeloos, M.E.

    1997-01-01

    Coprinus section Coprinus is defined and delimited to comprise four subsections: Atramentarii, Coprinus, Lanatuli and Alachuani. A key to the subsections is given as well as a key to the species of subsection Alachuani known from the Netherlands or to be expected in the Netherlands on account of records from neighbouring countries. Three new species, Coprinus epichloeus, Coprinus fluvialis and Coprinus sclerotiorum are described as well as a new variety of C. urticicola: var. salicicola. In a...

  4. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Kleijnen, J.P.C.; Helton, J.C.

    1999-04-01

    The robustness of procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses is investigated. These procedures are based on attempts to detect increasingly complex patterns in the scatterplots under consideration and involve the identification of (1) linear relationships with correlation coefficients, (2) monotonic relationships with rank correlation coefficients, (3) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (4) trends in variability as defined by variances and interquartile ranges, and (5) deviations from randomness as defined by the chi-square statistic. The following two topics related to the robustness of these procedures are considered for a sequence of example analyses with a large model for two-phase fluid flow: the presence of Type I and Type II errors, and the stability of results obtained with independent Latin hypercube samples. Observations from analysis include: (1) Type I errors are unavoidable, (2) Type II errors can occur when inappropriate analysis procedures are used, (3) physical explanations should always be sought for why statistical procedures identify variables as being important, and (4) the identification of important variables tends to be stable for independent Latin hypercube samples.
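The first three of the five increasingly complex pattern checks map onto standard tests; the sketch below applies them to one synthetic input/output pair standing in for a Latin hypercube sample and a model output. This is an illustration of the idea, not the paper's two-phase-flow analysis.

```python
# Pattern checks (1)-(3) from the scatterplot procedure, on toy data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, 200)                    # sampled input variable
y = np.sqrt(x) + rng.normal(0, 0.1, 200)      # monotonic, non-linear response

r, p_r = stats.pearsonr(x, y)       # (1) linear relationship (correlation)
rho, p_rho = stats.spearmanr(x, y)  # (2) monotonic relationship (rank corr.)

# (3) trend in central tendency: Kruskal-Wallis across quantile bins of x.
bins = np.quantile(x, [0.2, 0.4, 0.6, 0.8])
groups = [y[np.digitize(x, bins) == i] for i in range(5)]
h, p_h = stats.kruskal(*groups)
print(f"pearson r={r:.2f}, spearman rho={rho:.2f}, Kruskal-Wallis p={p_h:.3g}")
```

For a monotonic but non-linear response like this one, the rank correlation typically exceeds the linear correlation, which is exactly why the procedure escalates through progressively more general pattern detectors.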

  5. Statistical Analyses of Higgs- and Z-Portal Dark Matter Models arXiv

    CERN Document Server

    Balazs, Csaba; Fowlie, Andrew; Marzola, Luca; Raidal, Martti

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. In contrast to previous claims, we find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing 100 GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  6. Living systematic reviews: 3. Statistical methods for updating meta-analyses.

    Science.gov (United States)

    Simmonds, Mark; Salanti, Georgia; McKenzie, Joanne; Elliott, Julian

    2017-11-01

    A living systematic review (LSR) should keep the review current as new research evidence emerges. Any meta-analyses included in the review will also need updating as new material is identified. If the aim of the review is solely to present the best current evidence standard meta-analysis may be sufficient, provided reviewers are aware that results may change at later updates. If the review is used in a decision-making context, more caution may be needed. When using standard meta-analysis methods, the chance of incorrectly concluding that any updated meta-analysis is statistically significant when there is no effect (the type I error) increases rapidly as more updates are performed. Inaccurate estimation of any heterogeneity across studies may also lead to inappropriate conclusions. This paper considers four methods to avoid some of these statistical problems when updating meta-analyses: two methods, that is, law of the iterated logarithm and the Shuster method control primarily for inflation of type I error and two other methods, that is, trial sequential analysis and sequential meta-analysis control for type I and II errors (failing to detect a genuine effect) and take account of heterogeneity. This paper compares the methods and considers how they could be applied to LSRs. Copyright © 2017 Elsevier Inc. All rights reserved.
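The "standard meta-analysis" update that the paper warns about can be sketched as a cumulative inverse-variance fixed-effect pool, recomputed each time a new study arrives; repeatedly testing the resulting z at alpha = 0.05 is what inflates Type I error across updates. The study effects and standard errors below are hypothetical.

```python
# Naive cumulative fixed-effect meta-analysis update (the baseline the
# four sequential methods improve on). Values are hypothetical.
import math

studies = [(0.20, 0.15), (0.10, 0.12), (0.25, 0.10)]  # (effect, SE) per update

w_sum = wx_sum = 0.0
for i, (effect, se) in enumerate(studies, 1):
    w = 1.0 / se**2          # inverse-variance weight
    w_sum += w
    wx_sum += w * effect
    pooled = wx_sum / w_sum
    pooled_se = math.sqrt(1.0 / w_sum)
    z = pooled / pooled_se
    print(f"update {i}: pooled={pooled:.3f} (SE {pooled_se:.3f}), z={z:.2f}")
```

The sequential methods reviewed (law of the iterated logarithm, Shuster, trial sequential analysis, sequential meta-analysis) replace the fixed ±1.96 threshold on z with boundaries that grow with the number of looks and the accumulated information.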

  7. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<2 s) seismic waves are strongly affected by randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, generally, the small-scale random heterogeneity is not taken into account for the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan at the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation. It is not necessary to assume a uniform background velocity, body or surface waves and scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range including the intensity around the corner wavenumber as P(m) = 8πε2a3/(1 + a2m2)2, where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which
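The estimated spectrum P(m) = 8πε²a³/(1 + a²m²)² can be evaluated directly with the reported values ε = 0.05 and a = 3.1 km to show the behaviour around the corner wavenumber m_c = 1/a: roughly flat below it, falling off as m⁻⁴ above it. A minimal numerical check:

```python
# Evaluate the reported random-heterogeneity power spectrum around its
# corner wavenumber m_c = 1/a.
import numpy as np

eps, a = 0.05, 3.1  # reported fluctuation strength and correlation length (km)

def P(m):
    """P(m) = 8*pi*eps^2*a^3 / (1 + a^2 m^2)^2, m in 1/km."""
    return 8 * np.pi * eps**2 * a**3 / (1 + (a * m) ** 2) ** 2

m = np.array([0.01, 1 / a, 1.0, 10.0])
print(P(m))  # flat at low m, steep roll-off past m_c = 1/a ~ 0.32 /km
```

At the corner itself the spectrum is exactly one quarter of its low-wavenumber plateau value, since the denominator equals (1 + 1)² = 4.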

  8. Correlating tephras and cryptotephras using glass compositional analyses and numerical and statistical methods: Review and evaluation

    Science.gov (United States)

    Lowe, David J.; Pearce, Nicholas J. G.; Jorgensen, Murray A.; Kuehn, Stephen C.; Tryon, Christian A.; Hayward, Chris L.

    2017-11-01

    We define tephras and cryptotephras and their components (mainly ash-sized particles of glass ± crystals in distal deposits) and summarize the basis of tephrochronology as a chronostratigraphic correlational and dating tool for palaeoenvironmental, geological, and archaeological research. We then document and appraise recent advances in analytical methods used to determine the major, minor, and trace elements of individual glass shards from tephra or cryptotephra deposits to aid their correlation and application. Protocols developed recently for the electron probe microanalysis of major elements in individual glass shards help to improve data quality and standardize reporting procedures. A narrow electron beam (diameter ∼3-5 μm) can now be used to analyze smaller glass shards than previously attainable. Reliable analyses of 'microshards' (defined here as glass shards relationship of such fractionation with glass composition suggests that analyses for some elements at these resolutions may be quantifiable. In undertaking analyses, either by microprobe or LA-ICP-MS, reference material data acquired using the same procedure, and preferably from the same analytical session, should be presented alongside new analytical data. In part 2 of the review, we describe, critically assess, and recommend ways in which tephras or cryptotephras can be correlated (in conjunction with other information) using numerical or statistical analyses of compositional data. Statistical methods provide a less subjective means of dealing with analytical data pertaining to tephra components (usually glass or crystals/phenocrysts) than heuristic alternatives. They enable a better understanding of relationships among the data from multiple viewpoints to be developed and help quantify the degree of uncertainty in establishing correlations. In common with other scientific hypothesis testing, it is easier to infer using such analysis that two or more tephras are different rather than the same

  9. Statistical analyses of digital collections: Using a large corpus of systematic reviews to study non-citations

    DEFF Research Database (Denmark)

    Frandsen, Tove Faber; Nicolaisen, Jeppe

    2017-01-01

    Using statistical methods to analyse digital material for patterns makes it possible to detect patterns in big data that we would otherwise not be able to detect. This paper seeks to exemplify this fact by statistically analysing a large corpus of references in systematic reviews. The aim...

  10. Studies in Coprinus V — Coprinus section Coprinus. Revision of subsection Lanatuli Sing

    NARCIS (Netherlands)

    Uljé, C.B.; Noordeloos, M.E.

    1999-01-01

    A key is given to the species of subsection Lanatuli known from the Netherlands or to be expected in the Netherlands on account of records from neighbouring countries. For a key to the subsections in Coprinus section Coprinus see Uljé & Noordel., Persoonia 16 (1997) 267. Coprinus bicornis and C.

  11. Systematic Mapping and Statistical Analyses of Valley Landform and Vegetation Asymmetries Across Hydroclimatic Gradients

    Science.gov (United States)

    Poulos, M. J.; Pierce, J. L.; McNamara, J. P.; Flores, A. N.; Benner, S. G.

    2015-12-01

    Terrain aspect alters the spatial distribution of insolation across topography, driving eco-pedo-hydro-geomorphic feedbacks that can alter landform evolution and result in valley asymmetries for a suite of land surface characteristics (e.g. slope length and steepness, vegetation, soil properties, and drainage development). Asymmetric valleys serve as natural laboratories for studying how landscapes respond to climate perturbation. In the semi-arid montane granodioritic terrain of the Idaho batholith, Northern Rocky Mountains, USA, prior works indicate that reduced insolation on northern (pole-facing) aspects prolongs snow pack persistence, and is associated with thicker, finer-grained soils, that retain more water, prolong the growing season, support coniferous forest rather than sagebrush steppe ecosystems, stabilize slopes at steeper angles, and produce sparser drainage networks. We hypothesize that the primary drivers of valley asymmetry development are changes in the pedon-scale water-balance that coalesce to alter catchment-scale runoff and drainage development, and ultimately cause the divide between north and south-facing land surfaces to migrate northward. We explore this conceptual framework by coupling land surface analyses with statistical modeling to assess relationships and the relative importance of land surface characteristics. Throughout the Idaho batholith, we systematically mapped and tabulated various statistical measures of landforms, land cover, and hydroclimate within discrete valley segments (n=~10,000). We developed a random forest based statistical model to predict valley slope asymmetry based upon numerous measures (n>300) of landscape asymmetries. Preliminary results suggest that drainages are tightly coupled with hillslopes throughout the region, with drainage-network slope being one of the strongest predictors of land-surface-averaged slope asymmetry. When slope-related statistics are excluded, due to possible autocorrelation, valley

  12. Statistical analyses to support guidelines for marine avian sampling. Final report

    Science.gov (United States)

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.

  13. Harmonisation of variables names prior to conducting statistical analyses with multiple datasets: an automated approach

    Science.gov (United States)

    2011-01-01

    Background Data requirements by governments, donors and the international community to measure health and development achievements have increased in the last decade. Datasets produced in surveys conducted in several countries and years are often combined to analyse time trends and geographical patterns of demographic and health-related indicators. However, since not all datasets have the same structure, variable definitions and codes, they have to be harmonised prior to submitting them to the statistical analyses. Manually searching, renaming and recoding variables are extremely tedious and error-prone tasks, especially when the number of datasets and variables is large. This article presents an automated approach to harmonise variable names across several datasets, which optimises the search of variables, minimises manual inputs and reduces the risk of error. Results Three consecutive algorithms are applied iteratively to search for each variable of interest for the analyses in all datasets. The first search (A) captures particular cases that could not be solved in an automated way in the search iterations; the second search (B) is run if search A produced no hits and identifies variables whose labels contain certain key terms defined by the user. If this search produces no hits, a third one (C) is run to retrieve variables which have been identified in other surveys, as an illustration. For each variable of interest, the outputs of these engines can be: (O1) a single best-matching variable is found, (O2) more than one matching variable is found, or (O3) no matching variables are found. Output O2 is solved by user judgement. Examples using four variables are presented showing that the searches have 100% sensitivity and specificity after a second iteration. Conclusion Efficient and tested automated algorithms should be used to support the harmonisation process needed to analyse multiple datasets. This is especially relevant when the numbers of datasets
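The three-stage search (A: known special cases, B: key-term match against variable labels, C: fallback to names matched in other surveys) can be sketched as a single lookup function. All dataset, variable, and label names below are hypothetical, and the real implementation runs over many datasets iteratively.

```python
# Sketch of the A/B/C cascading variable-name search described above.
def find_variable(target, labels, special_cases, known_names, key_terms):
    """Return (match_candidates, stage) for one variable of interest.

    labels: {variable_name: variable_label} for one dataset.
    """
    if target in special_cases:                      # search A: hard-coded cases
        return [special_cases[target]], "A"
    hits = [v for v, lab in labels.items()
            if all(t in lab.lower() for t in key_terms)]
    if hits:                                         # search B: key-term match
        return hits, "B"
    hits = [v for v in labels if v in known_names]   # search C: prior surveys
    return hits, "C"

labels = {"v012": "age of respondent", "v013": "age in 5-year groups"}
cands, stage = find_variable(
    "age", labels, special_cases={}, known_names={"v012"}, key_terms=["age"])
print(stage, cands)  # two hits -> outcome O2, resolved by user judgement
```

A single hit corresponds to outcome O1, multiple hits to O2 (user judgement), and an empty list from stage C to O3.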

  14. Statistical Analyses of White-Light Flares: Two Main Results about Flare Behaviour

    Science.gov (United States)

    Dal, Hasan Ali

    2012-08-01

    We present two main results, based on models and the statistical analyses of 1672 U-band flares. We also discuss the behaviour of white-light flares. In addition, the parameters of the flares detected from two years of observations on CR Dra are presented. By comparing with flare parameters obtained from other UV Ceti-type stars, we examine the behaviour of the optical flare processes along with the spectral types. Moreover, using large white-light flare data, we aimed to analyse the flare time-scales with respect to some results obtained from X-ray observations. Using SPSS V17.0 and GraphPad Prism V5.02 software, the flares detected from CR Dra were modelled with the OPEA function, and analysed with the t-test method to compare similar flare events in other stars. In addition, using some regression calculations in order to derive the best histograms, the time-scales of white-light flares were analysed. Firstly, CR Dra flares have revealed that white-light flares behave in a similar way as their counterparts observed in X-rays. As can be seen in X-ray observations, the electron density seems to be a dominant parameter in the white-light flare process, too. Secondly, the distributions of the flare time-scales demonstrate that the number of observed flares reaches a maximum value in some particular ratios, which are 0.5, or its multiples, and especially positive integers. The thermal processes might be dominant for these white-light flares, while non-thermal processes might be dominant in the others. To obtain better results for the behaviour of the white-light flare process along with the spectral types, many more stars in a wide spectral range, from spectral type dK5e to dM6e, must be observed in white-light flare patrols.

  15. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    Science.gov (United States)

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.

  16. Multifractal and statistical analyses of heat release fluctuations in a spark ignition engine.

    Science.gov (United States)

    Sen, Asok K; Litak, Grzegorz; Kaminski, Tomasz; Wendeker, Mirosław

    2008-09-01

    Using multifractal and statistical analyses, we have investigated the complex dynamics of cycle-to-cycle heat release variations in a spark ignition engine. Three different values of the spark advance angle (Δβ) are examined. The multifractal complexity is characterized by the singularity spectrum of the heat release time series in terms of the Hölder exponent. The broadness of the singularity spectrum gives a measure of the degree of multifractality or complexity of the time series. The broader the spectrum, the richer and more complex is the structure, with a higher degree of multifractality. Using this broadness measure, the complexity in heat release variations is compared for the three spark advance angles (SAAs). Our results reveal that the heat release data are most complex for Δβ=30 degrees, followed in order by Δβ=15 degrees and 5 degrees. In other words, the complexity increases with increasing SAA. In addition, we found that for all the SAAs considered, the heat release fluctuations behave like an antipersistent or negatively correlated process, becoming more antipersistent with decreasing SAA. We have also performed a statistical analysis of the heat release variations by calculating the kurtosis of their probability density functions (pdfs). It is found that for the smallest SAA considered, Δβ=5 degrees, the pdf is nearly Gaussian with a kurtosis of 3.42. As the value of the SAA increases, the pdf deviates from a Gaussian distribution and tends to be more peaked, with larger values of kurtosis. In particular, the kurtosis has values of 3.94 and 6.69 for Δβ=15 degrees and 30 degrees, respectively. A non-Gaussian density function with kurtosis in excess of 3 is indicative of intermittency. A larger value of kurtosis implies a higher degree of intermittency. (c) 2008 American Institute of Physics.
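
    The kurtosis comparison described above is easy to reproduce. The sketch below uses surrogate series (not engine heat-release data) and SciPy's Pearson kurtosis, the convention in the abstract, for which a Gaussian distribution equals 3:

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)

# Surrogate series (not engine data): a Gaussian signal and a
# heavy-tailed one standing in for an intermittent signal.
gaussian_like = rng.normal(size=20000)
intermittent = rng.standard_t(df=5, size=20000)

# fisher=False gives Pearson kurtosis (Gaussian = 3), rather than
# excess kurtosis (Gaussian = 0).
k_gauss = kurtosis(gaussian_like, fisher=False)
k_heavy = kurtosis(intermittent, fisher=False)
print(f"Gaussian-like: {k_gauss:.2f}, heavy-tailed: {k_heavy:.2f}")
```

    As in the engine data, the heavy-tailed series yields a kurtosis well above 3, the signature of intermittency.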

  17. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Edjabou, Maklawe Essonanawe, E-mail: vine@env.dtu.dk [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Jensen, Morten Bang; Götze, Ramona; Pivnenko, Kostyantyn [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark); Petersen, Claus [Econet AS, Omøgade 8, 2.sal, 2100 Copenhagen (Denmark); Scheutz, Charlotte; Astrup, Thomas Fruergaard [Department of Environmental Engineering, Technical University of Denmark, 2800 Kgs. Lyngby (Denmark)

    2015-02-15

    Highlights: • Tiered approach to waste sorting ensures flexibility and facilitates comparison of solid waste composition data. • Food and miscellaneous wastes are the main fractions contributing to the residual household waste. • Separation of food packaging from food leftovers during sorting is not critical for determination of the solid waste composition. - Abstract: Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub-areas in three Danish municipalities (both single and multi-family house areas). In total, 17 tonnes of waste were sorted into 10–50 waste fractions, organised according to a three-level tiered approach, facilitating comparison of the waste data between individual sub-areas with different fractionation (waste from one municipality was sorted at “Level III”, i.e. detailed, while the two others were sorted only at “Level I”). The results showed that residual household waste mainly contained food waste (42 ± 5%, mass per wet basis) and miscellaneous combustibles (18 ± 3%, mass per wet basis). The residual household waste generation rate in the study areas was 3–4 kg per person per week. Statistical analyses revealed that the waste composition was independent of variations in the waste generation rate. Both waste composition and waste generation rates were statistically similar for each of the three municipalities. While the waste generation rates were similar for each of the two housing types (single

  18. AxPcoords & parallel AxParafit: statistical co-phylogenetic analyses on thousands of taxa

    Directory of Open Access Journals (Sweden)

    Meier-Kolthoff Jan

    2007-10-01

    Full Text Available Abstract Background Current tools for co-phylogenetic analyses are not able to cope with the continuous accumulation of phylogenetic data. The sophisticated statistical test for host-parasite co-phylogenetic analyses implemented in Parafit cannot handle large datasets in reasonable time. The Parafit and DistPCoA programs are by far the most compute-intensive components of the Parafit analysis pipeline. We present AxParafit and AxPcoords (Ax stands for Accelerated), which are highly optimized versions of Parafit and DistPCoA, respectively. Results Both programs have been entirely re-written in C. Via optimization of the algorithm and the C code, as well as integration of highly tuned BLAS and LAPACK methods, AxParafit runs 5–61 times faster than Parafit with a lower memory footprint (up to 35% reduction), and the performance benefit increases with growing dataset size. The MPI-based parallel implementation of AxParafit shows good scalability on up to 128 processors, even on medium-sized datasets. The parallel analysis with AxParafit on 128 CPUs for a medium-sized dataset with a 512 by 512 association matrix is more than 1,200/128 times faster per processor than the sequential Parafit run. AxPcoords is 8–26 times faster than DistPCoA and numerically stable on large datasets. We outline the substantial benefits of using parallel AxParafit by example of a large-scale empirical study on smut fungi and their host plants. To the best of our knowledge, this study represents the largest co-phylogenetic analysis to date. Conclusion The highly efficient AxPcoords and AxParafit programs allow for large-scale co-phylogenetic analyses on several thousands of taxa for the first time. In addition, AxParafit and AxPcoords have been integrated into the easy-to-use CopyCat tool.

  19. Dispensing processes impact apparent biological activity as determined by computational and statistical analyses.

    Directory of Open Access Journals (Sweden)

    Sean Ekins

    Full Text Available Dispensing and dilution processes may profoundly influence estimates of biological activity of compounds. Published data show Ephrin type-B receptor 4 IC50 values obtained via tip-based serial dilution and dispensing versus acoustic dispensing with direct dilution differ by orders of magnitude with no correlation or ranking of datasets. We generated computational 3D pharmacophores based on data derived by both acoustic and tip-based transfer. The computed pharmacophores differ significantly depending upon dispensing and dilution methods. The acoustic dispensing-derived pharmacophore correctly identified active compounds in a subsequent test set where the tip-based method failed. Data from acoustic dispensing generates a pharmacophore containing two hydrophobic features, one hydrogen bond donor and one hydrogen bond acceptor. This is consistent with X-ray crystallography studies of ligand-protein interactions and automatically generated pharmacophores derived from this structural data. In contrast, the tip-based data suggest a pharmacophore with two hydrogen bond acceptors, one hydrogen bond donor and no hydrophobic features. This pharmacophore is inconsistent with the X-ray crystallographic studies and automatically generated pharmacophores. In short, traditional dispensing processes are another important source of error in high-throughput screening that impacts computational and statistical analyses. These findings have far-reaching implications in biological research.

  20. Authigenic oxide Neodymium Isotopic composition as a proxy of seawater: applying multivariate statistical analyses.

    Science.gov (United States)

    McKinley, C. C.; Scudder, R.; Thomas, D. J.

    2016-12-01

    The Neodymium Isotopic composition (Nd IC) of oxide coatings has been applied as a tracer of water mass composition and used to address fundamental questions about past ocean conditions. The leached authigenic oxide coating from marine sediment is widely assumed to reflect the dissolved trace metal composition of the bottom water interacting with sediment at the seafloor. However, recent studies have shown that readily reducible sediment components, in addition to trace metal fluxes from the pore water, are incorporated into the bottom water, influencing the trace metal composition of leached oxide coatings. This challenges the prevailing application of the authigenic oxide Nd IC as a proxy of seawater composition. Therefore, it is important to identify the component end-members that create sediments of different lithology and determine if, or how, they might contribute to the Nd IC of oxide coatings. To investigate lithologic influence on the results of sequential leaching, we selected two sites with complete bulk sediment statistical characterization. Site U1370, in the South Pacific Gyre, is predominantly composed of rhyolite (~60%) and has a distinguishable (~10%) Fe-Mn oxyhydroxide component (Dunlea et al., 2015). Site 1149, near the Izu-Bonin Arc, is predominantly composed of dispersed ash (~20-50%) and eolian dust from Asia (~50-80%) (Scudder et al., 2014). We perform a two-step leaching procedure: 14 mL of 0.02 M hydroxylamine hydrochloride (HH) in 20% acetic acid, buffered to pH 4, for one hour, targeting metals bound to the Fe- and Mn-oxide fractions, and a second HH leach for 12 hours, designed to remove any remaining oxides from the residual component. We analyze all three resulting fractions for a large suite of major, trace and rare earth elements; a subset of the samples is also analyzed for Nd IC. 
We use multivariate statistical analyses of the resulting geochemical data to identify how each component of the sediment partitions across the sequential

  1. Analyse

    DEFF Research Database (Denmark)

    Greve, Bent

    2007-01-01

    Analysis in Politiken of fringe benefits, based on the book Occupational Welfare - Winners and Losers, published by Edward Elgar.

  2. Studies in Coprinus V — Coprinus section Coprinus. Revision of subsection Lanatuli Sing

    OpenAIRE

    Uljé, C.B.; Noordeloos, M.E.

    1999-01-01

    A key is given to the species of subsection Lanatuli known from the Netherlands or to be expected in the Netherlands on account of records from neighbouring countries. For a key to the subsections in Coprinus section Coprinus see Uljé & Noordel., Persoonia 16 (1997) 267. Coprinus bicornis and C. spelaiophilus are described as new species. In addition the following species are fully described: C. ammophilae, C. calosporus, C. cinereus, C. erythrocephalus, C. geesterani, C. jonesii, C. krieglst...

  3. Radiation induced chromatin conformation changes analysed by fluorescent localization microscopy, statistical physics, and graph theory.

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    Full Text Available It has been well established that the architecture of chromatin in cell nuclei is not random but functionally correlated. Chromatin damage caused by ionizing radiation triggers complex repair machinery. This is accompanied by local chromatin rearrangements and structural changes, which may for instance improve the accessibility of damaged sites for repair protein complexes. Using stably transfected HeLa cells expressing either green fluorescent protein (GFP) labelled histone H2B or yellow fluorescent protein (YFP) labelled histone H2A, we investigated the positioning of individual histone proteins in cell nuclei by means of high resolution localization microscopy (Spectral Position Determination Microscopy, SPDM). The cells were exposed to ionizing radiation of different doses, and aliquots were fixed after different repair times for SPDM imaging. In addition to the repair dependent histone protein pattern, the positioning of antibodies specific for heterochromatin and euchromatin was separately recorded by SPDM. The present paper aims to provide a quantitative description of structural changes of chromatin after irradiation and during repair. It introduces a novel approach to analyse SPDM images by means of statistical physics and graph theory. The method is based on the calculation of the radial distribution functions as well as edge length distributions for graphs defined by a triangulation of the marker positions. The obtained results show that, across the cell nucleus, the different chromatin re-arrangements detected by the fluorescent nucleosomal pattern average out. In contrast, heterochromatic regions alone indicate a relaxation after radiation exposure and re-condensation during repair, whereas euchromatin seemed to be unaffected or to behave in the opposite way. 
SPDM in combination with the analysis techniques applied allows the systematic elucidation of chromatin re-arrangements after irradiation and during repair, if selected sub-regions of

  4. Analysis of Norwegian bio energy statistics. Quality improvement proposals; Analyse av norsk bioenergistatistikk. Forslag til kvalitetsheving

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2011-07-01

    This report assesses the current model and presentation form of the bio energy statistics. It proposes revisions and enhancements of both data collection and data presentation. In the context of market developments, both for energy in general and for bio energy in particular, and of government targets, good bio energy statistics form the basis for following up the objectives and measures. (eb)

  5. Essentials of Excel, Excel VBA, SAS and Minitab for statistical and financial analyses

    CERN Document Server

    Lee, Cheng-Few; Chang, Jow-Ran; Tai, Tzu

    2016-01-01

    This introductory textbook for business statistics teaches statistical analysis and research methods via business case studies and financial data using Excel, MINITAB, and SAS. Every chapter in this textbook engages the reader with data of individual stock, stock indices, options, and futures. One studies and uses statistics to learn how to study, analyze, and understand a data set of particular interest. Some of the more popular statistical programs that have been developed to use statistical and computational methods to analyze data sets are SAS, SPSS, and MINITAB. Of those, we look at MINITAB and SAS in this textbook. One of the main reasons to use MINITAB is that it is the easiest to use among the popular statistical programs. We look at SAS because it is the leading statistical package used in industry. We also utilize the much less costly and ubiquitous Microsoft Excel to do statistical analysis, as the benefits of Excel have become widely recognized in the academic world and its analytical capabilities...

  6. A retrospective survey of research design and statistical analyses in selected Chinese medical journals in 1998 and 2008.

    Science.gov (United States)

    Jin, Zhichao; Yu, Danghui; Zhang, Luoman; Meng, Hong; Lu, Jian; Gao, Qingbin; Cao, Yang; Ma, Xiuqiang; Wu, Cheng; He, Qian; Wang, Rui; He, Jia

    2010-05-25

    High quality clinical research not only requires advanced professional knowledge, but also needs sound study design and correct statistical analyses. The number of clinical research articles published in Chinese medical journals has increased immensely in the past decade, but study design quality and statistical analyses have remained suboptimal. The aim of this investigation was to gather evidence on the quality of study design and statistical analyses in clinical research conducted in China during the first decade of the new millennium. Ten (10) leading Chinese medical journals were selected and all original articles published in 1998 (N = 1,335) and 2008 (N = 1,578) were thoroughly categorized and reviewed. A well-defined and validated checklist on study design, statistical analyses, results presentation, and interpretation was used for review and evaluation. Main outcomes were the frequencies of different types of study design, error/defect proportions in design and statistical analyses, and implementation of CONSORT in randomized clinical trials. From 1998 to 2008, the error/defect proportion in statistical analyses decreased significantly (χ² = 12.03), as did the error/defect proportion in study design (χ² = 21.22). The proportion of studies designed as randomized clinical trials remained in the single digits (3.8%, 60/1,578), with two-thirds showing poor results reporting (defects in 44 papers, 73.3%). Nearly half of the published studies were retrospective in nature: 49.3% (658/1,335) in 1998 compared to 48.2% (761/1,578) in 2008. Decreases in defect proportions were also observed in results presentation (χ² = 93.26). Retrospective clinical studies are the most often used design, whereas randomized clinical trials are rare and often show methodological weaknesses. Urgent implementation of the CONSORT statement is imperative.
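
    Comparisons of defect proportions between years, as in this survey, are typically run as chi-square tests on contingency tables. A minimal SciPy sketch with hypothetical counts (the table below is illustrative, not the survey's actual figures):

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (illustrative counts only):
# rows = publication year, columns = (defective, non-defective).
table = [[400, 935],    # 1998: 400 of 1,335 with statistical defects
         [330, 1248]]   # 2008: 330 of 1,578 with statistical defects

# Chi-square test of independence between year and defect status.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
```

    A significant result indicates that the defect proportion genuinely changed between the two years rather than varying by chance.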

  7. A simple and robust statistical framework for planning, analysing and interpreting faecal egg count reduction test (FECRT) studies

    DEFF Research Database (Denmark)

    Denwood, M.J.; McKendrick, I.J.; Matthews, L.

    2017-01-01

    Introduction. There is an urgent need for a method of analysing FECRT data that is computationally simple and statistically robust. A method for evaluating the statistical power of a proposed FECRT study would also greatly enhance the current guidelines. Methods. A novel statistical framework has...... that the notional type 1 error rate of the new statistical test is accurate. Power calculations demonstrate a power of only 65% with a sample size of 20 treatment and control animals, which increases to 69% with 40 control animals or 79% with 40 treatment animals. Discussion. The method proposed is simple...... been developed that evaluates observed FECRT data against two null hypotheses: (1) the observed efficacy is consistent with the expected efficacy, and (2) the observed efficacy is inferior to the expected efficacy. The method requires only four simple summary statistics of the observed data. Power...

  8. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  9. False Positives and Other Statistical Errors in Standard Analyses of Eye Movements in Reading.

    Science.gov (United States)

    von der Malsburg, Titus; Angele, Bernhard

    2017-06-01

    In research on eye movements in reading, it is common to analyze a number of canonical dependent measures to study how the effects of a manipulation unfold over time. Although this gives rise to the well-known multiple comparisons problem, i.e., an inflated probability that the null hypothesis is incorrectly rejected (Type I error), it is accepted standard practice not to apply any correction procedures. Instead, there appears to be a widespread belief that corrections are not necessary because the increase in false positives is too small to matter. To our knowledge, no formal argument has ever been presented to justify this assumption. Here, we report a computational investigation of this issue using Monte Carlo simulations. Our results show that, contrary to conventional wisdom, false positives are increased to unacceptable levels when no corrections are applied. Our simulations also show that counter-measures like the Bonferroni correction keep false positives in check while reducing statistical power only moderately. Hence, there is little reason why such corrections should not be made a standard requirement. Further, we discuss three statistical illusions that can arise when statistical power is low, and we show how power can be improved to prevent these illusions. In sum, our work renders a detailed picture of the various types of statistical errors that can occur in studies of reading behavior and we provide concrete guidance about how these errors can be avoided.
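
    The inflation of false positives from uncorrected multiple comparisons, and the effect of a Bonferroni correction, can be demonstrated with a small Monte Carlo simulation in the spirit of the study. This is a simplified sketch: the simulated measures are independent, unlike real (correlated) eye-movement measures, so the exact numbers differ from the paper's, but the qualitative picture is the same:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_sims, n_measures, n, alpha = 2000, 4, 30, 0.05

# Simulate null experiments with 4 dependent measures each and count
# how often at least one t-test is "significant" (family-wise error).
uncorrected = corrected = 0
for _ in range(n_sims):
    any_unc = any_cor = False
    for _ in range(n_measures):
        a, b = rng.normal(size=n), rng.normal(size=n)
        p = ttest_ind(a, b).pvalue
        any_unc |= p < alpha
        any_cor |= p < alpha / n_measures   # Bonferroni correction
    uncorrected += any_unc
    corrected += any_cor

print(f"family-wise error, uncorrected: {uncorrected / n_sims:.3f}")
print(f"family-wise error, Bonferroni:  {corrected / n_sims:.3f}")
```

    With four independent tests the uncorrected family-wise error approaches 1 - 0.95^4 ≈ 0.19, nearly four times the nominal 5% level, while the Bonferroni-corrected rate stays near 5%.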

  10. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...

  11. Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle

    Science.gov (United States)

    Coroneos, Rula; Pai, Shantaram, S.; Murthy, Pappu

    2013-01-01

    This report describes statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties that are characterized are the ultimate strength, modulus, and Poisson's ratio by using a commercially available statistical package. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.

  12. Assessing the hydrodynamic boundary conditions for risk analyses in coastal areas: a multivariate statistical approach based on Copula functions

    Directory of Open Access Journals (Sweden)

    T. Wahl

    2012-02-01

    Full Text Available This paper presents an advanced approach to statistically analyse storm surge events. In former studies, the highest water level during a storm surge event was usually the only parameter used for the statistical assessment. This is not always sufficient, especially when statistically analysing storm surge scenarios for event-based risk analyses. Here, Archimedean Copula functions are applied and allow for the consideration of further important parameters in addition to the highest storm surge water levels. First, a bivariate model is presented and used to estimate exceedance probabilities of storm surges (for two tide gauges in the German Bight) by jointly analysing the important storm surge parameters "highest turning point" and "intensity". Second, another dimension is added and a trivariate fully nested Archimedean Copula model is applied to additionally incorporate the significant wave height as an important wave parameter. With the presented methodology, reliable and realistic exceedance probabilities are derived and can be considered, among others, for integrated flood risk analyses, contributing to improved overall results. It is highlighted that the concept of Copulas represents a promising alternative for facing multivariate problems in coastal engineering.
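
    An Archimedean copula couples marginal distributions into a joint distribution, which is what allows joint exceedance probabilities of two storm surge parameters to be computed. A toy sketch follows; the choice of the Gumbel family, the dependence parameter, and all probabilities are assumptions for illustration, not the paper's fitted model:

```python
import math

def gumbel_copula_cdf(u, v, theta):
    """Gumbel (Archimedean) copula C(u, v; theta), theta >= 1.
    theta = 1 reduces to independence, C(u, v) = u * v."""
    t = (-math.log(u)) ** theta + (-math.log(v)) ** theta
    return math.exp(-t ** (1.0 / theta))

# Marginal non-exceedance probabilities for two storm surge
# parameters, e.g. highest turning point and intensity
# (hypothetical values, not the paper's fitted quantiles).
u, v = 0.99, 0.95
theta = 2.0   # dependence strength (assumed)

# Joint exceedance probability P(X > x, Y > y) from the copula:
joint_exceed = 1.0 - u - v + gumbel_copula_cdf(u, v, theta)
indep_exceed = (1.0 - u) * (1.0 - v)
print(f"joint exceedance with dependence: {joint_exceed:.5f}")
print(f"joint exceedance if independent:  {indep_exceed:.5f}")
```

    Because the two surge parameters are positively dependent, the copula-based joint exceedance probability is much larger than the naive product of the marginal exceedance probabilities, which is exactly why ignoring dependence understates flood risk.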

  13. Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations, 2. Robustness of Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Helton, J.C.; Kleijnen, J.P.C.

    1999-03-24

    Procedures for identifying patterns in scatterplots generated in Monte Carlo sensitivity analyses are described and illustrated. These procedures attempt to detect increasingly complex patterns in scatterplots and involve the identification of (i) linear relationships with correlation coefficients, (ii) monotonic relationships with rank correlation coefficients, (iii) trends in central tendency as defined by means, medians and the Kruskal-Wallis statistic, (iv) trends in variability as defined by variances and interquartile ranges, and (v) deviations from randomness as defined by the chi-square statistic. A sequence of example analyses with a large model for two-phase fluid flow illustrates how the individual procedures can differ in the variables that they identify as having effects on particular model outcomes. The example analyses indicate that the use of a sequence of procedures is a good analysis strategy and provides some assurance that an important effect is not overlooked.
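
    The first three procedures in the sequence above can be sketched with SciPy on a synthetic input-output sample (the nonlinear test function and noise level are assumptions for illustration, not the two-phase fluid flow model):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kruskal

rng = np.random.default_rng(7)

# Synthetic sampled input x and model output y with a monotonic,
# nonlinear relationship plus noise (stand-in for a Monte Carlo
# scatterplot from a sensitivity analysis).
x = rng.uniform(0.0, 1.0, 300)
y = np.exp(3.0 * x) + rng.normal(scale=2.0, size=300)

# (i) linear relationship: Pearson correlation coefficient
r_lin, _ = pearsonr(x, y)
# (ii) monotonic relationship: Spearman rank correlation
r_rank, _ = spearmanr(x, y)
# (iii) trend in central tendency: Kruskal-Wallis across x-bins
bins = np.digitize(x, [0.25, 0.5, 0.75])
h_stat, p_kw = kruskal(*[y[bins == k] for k in range(4)])

print(f"Pearson r = {r_lin:.2f}, Spearman rho = {r_rank:.2f}, "
      f"Kruskal-Wallis p = {p_kw:.2g}")
```

    For a strongly nonlinear but monotonic relationship like this one, the rank-based measures typically flag the input at least as strongly as the linear correlation, which is the motivation for applying the procedures in sequence.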

  14. Statistical analyses of protein folding rates from the view of quantum transition.

    Science.gov (United States)

    Lv, Jun; Luo, LiaoFu

    2014-12-01

    Understanding protein folding rate is the primary key to unlock the fundamental physics underlying protein structure and its folding mechanism. Especially, the temperature dependence of the folding rate remains unsolved in the literature. Starting from the assumption that protein folding is an event of quantum transition between molecular conformations, we calculated the folding rate for all two-state proteins in a database and studied their temperature dependencies. The non-Arrhenius temperature relation for 16 proteins, whose experimental data had previously been available, was successfully interpreted by comparing the Arrhenius plot with the first-principle calculation. A statistical formula for the prediction of two-state protein folding rate was proposed based on quantum folding theory. The statistical comparisons of the folding rates for 65 two-state proteins were carried out, and the theoretical vs. experimental correlation coefficient was 0.73. Moreover, the maximum and the minimum folding rates given by the theory were consistent with the experimental results.

  15. Thinking About Data, Research Methods, and Statistical Analyses: Commentary on Sijtsma's (2014) "Playing with Data".

    Science.gov (United States)

    Waldman, Irwin D; Lilienfeld, Scott O

    2016-03-01

    We comment on Sijtsma's (2014) thought-provoking essay on how to minimize questionable research practices (QRPs) in psychology. We agree with Sijtsma that proactive measures to decrease the risk of QRPs will ultimately be more productive than efforts to target individual researchers and their work. In particular, we concur that encouraging researchers to make their data and research materials public is the best institutional antidote against QRPs, although we are concerned that Sijtsma's proposal to delegate more responsibility to statistical and methodological consultants could inadvertently reinforce the dichotomy between the substantive and statistical aspects of research. We also discuss sources of false-positive findings and replication failures in psychological research, and outline potential remedies for these problems. We conclude that replicability is the best metric of the minimization of QRPs and their adverse effects on psychological research.

  16. Priors, Posterior Odds and Lagrange Multiplier Statistics in Bayesian Analyses of Cointegration

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); R. Paap (Richard)

    1996-01-01

    textabstractUsing the standard linear model as a base, a unified theory of Bayesian Analyses of Cointegration Models is constructed. This is achieved by defining (natural conjugate) priors in the linear model and using the implied priors for the cointegration model. Using these priors, posterior

  17. Statistic analyses of the color experience according to the age of the observer.

    Science.gov (United States)

    Hunjet, Anica; Parac-Osterman, Durdica; Vucaj, Edita

    2013-04-01

    Psychological experience of color is a real state of the communication between the environment and color, and it depends on the source of the light, the angle of view, and particularly on the observer and his health condition. Hering's theory, or the theory of opponent processes, supposes that the cones situated in the retina of the eye are not sensitive to the three chromatic domains (red, green and purple-blue) individually, but produce a signal based on the principle of opposed pairs of colors. Support for this theory comes from the fact that certain disorders of color eyesight, which include blindness to certain colors, cause blindness to pairs of opponent colors. This paper presents a demonstration of the experience of blue and yellow tones according to the age of the observer. To test for statistically significant differences in the color experience according to the color of the background, we used the following statistical tests: the Mann-Whitney U test, Kruskal-Wallis ANOVA and the median test. The differences were shown to be statistically significant for older observers (older than 35 years).
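
    A Mann-Whitney U test of the kind used in the study can be run with SciPy; the two age-group samples below are synthetic stand-ins, not the study's measurements:

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(9)

# Hypothetical colour-experience scores for two age groups
# (synthetic data for illustration only).
younger = rng.normal(loc=2.0, scale=1.0, size=40)
older = rng.normal(loc=3.0, scale=1.2, size=40)

# Two-sided Mann-Whitney U test: do the two age groups differ in
# their distribution of colour-experience scores?
u_stat, p_value = mannwhitneyu(younger, older, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_value:.4g}")
```

    The rank-based test makes no normality assumption, which is why it suits perceptual scores of this kind.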

  18. Statistical analyses of the performance of Macedonian investment and pension funds

    Directory of Open Access Journals (Sweden)

    Petar Taleski

    2015-10-01

    Full Text Available The foundation of the post-modern portfolio theory is creating a portfolio based on a desired target return. This specifically applies to the performance of investment and pension funds that provide a rate of return meeting payment requirements from investment funds. A desired target return is the goal of an investment or pension fund. It is the primary benchmark used to measure performance, and to dynamically monitor and evaluate the risk-return ratio of investment funds. The analysis in this paper is based on monthly returns of Macedonian investment and pension funds (June 2011 - June 2014). Such analysis utilizes basic but highly informative statistical characteristics and moments such as skewness, kurtosis, the Jarque-Bera test, and Chebyshev's inequality. The objective of this study is to perform a thorough analysis, utilizing the above mentioned and other statistical techniques (Sharpe, Sortino, omega, upside potential, Calmar, Sterling) to draw relevant conclusions regarding the risks and characteristic moments in Macedonian investment and pension funds. Pension funds are the second largest segment of the financial system, and have great potential for further growth due to constant inflows from pension insurance. The importance of investment funds for the financial system in the Republic of Macedonia is still small, although open-end investment funds have been the fastest growing segment of the financial system. Statistical analysis has shown that pension funds delivered a significantly positive volatility-adjusted risk premium in the analyzed period, more so than investment funds.
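
    The target-return-based measures mentioned above (Sharpe and Sortino) differ in how they treat volatility. A minimal sketch on hypothetical monthly returns (illustrative numbers, not the Macedonian fund data):

```python
import numpy as np

# Hypothetical monthly fund returns (synthetic, for illustration).
returns = np.array([0.012, -0.004, 0.008, 0.015, -0.010, 0.006,
                    0.009, -0.002, 0.011, 0.004, -0.006, 0.013])
target = 0.0   # desired target (minimum acceptable) return per month

excess = returns - target
sharpe = excess.mean() / returns.std(ddof=1)

# Sortino penalises only downside deviation from the target return,
# which is why it suits a post-modern, target-based evaluation.
downside = np.minimum(excess, 0.0)
downside_dev = np.sqrt((downside ** 2).mean())
sortino = excess.mean() / downside_dev

print(f"monthly Sharpe = {sharpe:.3f}, Sortino = {sortino:.3f}")
```

    Because only the negative months contribute to the downside deviation, the Sortino ratio exceeds the Sharpe ratio for this roughly symmetric return series with a positive mean.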

  19. Characterizing Earthflow Surface Morphology With Statistical and Spectral Analyses of Airborne Laser Altimetry

    Science.gov (United States)

    McKean, J.; Roering, J.

    High-resolution laser altimetry can depict the topography of large landslides with unprecedented accuracy and allow better management of the hazards posed by such slides. The surface of most landslides is rougher, on a local scale of a few meters, than adjacent unfailed slopes. This characteristic can be exploited to automatically detect and map landslides in landscapes represented by high resolution DTMs. We have used laser altimetry measurements of local topographic roughness to identify and map the perimeter and internal features of a large earthflow in the South Island, New Zealand. Surface roughness was first quantified by statistically characterizing the local variability of ground surface orientations using both circular and spherical statistics. These measures included the circular resultant, standard deviation and dispersion, and the three-dimensional spherical resultant and ratios of the normalized eigenvalues of the direction cosines. The circular measures evaluate the amount of change in topographic aspect from pixel-to-pixel in the gridded data matrix. The spherical statistics assess both the aspect and steepness of each pixel. The standard deviation of the third direction cosine was also used alone to define the variability in just the steepness of each pixel. All of the statistical measures detect and clearly map the earthflow. Circular statistics also emphasize small folds transverse to the movement in the most active zone of the slide. The spherical measures are more sensitive to the larger scale roughness in a portion of the slide that includes large intact limestone blocks. Power spectra of surface roughness were also calculated from two-dimensional Fourier transformations in local test areas. A small earthflow had a broad spectral peak at wavelengths between 10 and 30 meters. Shallower soil failures and surface erosion produced surfaces with a very sharp spectral peak at 12 meters wavelength. Unfailed slopes had an order of magnitude

  20. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  1. Serum proteins in endogamous Brahmin sub-sects of Andhra Pradesh.

    Science.gov (United States)

    Char, K S; Rao, P R

    1983-01-01

    Haptoglobin phenotypes and rare variants of transferrin and albumin are reported in six endogamous Brahmin sub-sects, viz., Niyogi, Madwa, Dravida, Vadahalai, Tengalai and Vaidiki, from Andhra Pradesh. Samples were collected from the Vaidiki sub-sect at three different locations to study the genetic variation, if any, resulting from geographic isolation. The Hp1 gene frequency ranged from 0.1124 to 0.2064. A fast heterozygote transferrin variant and three slow albumin variants showing different electrophoretic mobilities are reported for the first time in the Brahmin populations of South India.

  2. The Use of Statistical Process Control Tools for Analysing Financial Statements

    Directory of Open Access Journals (Sweden)

    Niezgoda Janusz

    2017-06-01

    This article presents a proposed application of one type of modified Shewhart control chart for monitoring changes in the aggregated level of financial ratios. The x̅ control chart has been used as the basis of the analysis. The examined sample variable in this chart is the arithmetic mean. The author proposes to substitute it with a synthetic measure determined on the basis of selected ratios. As the ratios mentioned above are expressed in different units and characters, the author applies standardisation. The results of selected comparative analyses are presented for both bankrupts and non-bankrupts. They indicate the possibility of using control charts as an auxiliary tool in financial analyses.
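
    A minimal x̅-chart sketch. The synthetic measure described in the article is assumed to have already been computed; standard-normal values stand in for it here, and the within-subgroup standard deviation gives a simplified sigma estimate (a textbook chart would apply the s̄/c4 or R̄/d2 correction factors):

```python
# x-bar control chart: center line and 3-sigma control limits from subgroups.
import numpy as np

rng = np.random.default_rng(2)
subgroups = rng.normal(0.0, 1.0, (20, 5))   # 20 periods, 5 observations each

xbar = subgroups.mean(axis=1)                # subgroup means (plotted points)
center = xbar.mean()                         # center line
# Simplified estimate of the standard error of the subgroup mean.
sigma_xbar = subgroups.std(axis=1, ddof=1).mean() / np.sqrt(subgroups.shape[1])
ucl = center + 3 * sigma_xbar                # upper control limit
lcl = center - 3 * sigma_xbar                # lower control limit

out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}  signals={out_of_control}")
```

    Points outside the limits flag periods whose aggregated ratio level departs from its in-control behaviour, which is the monitoring role the article assigns to the chart.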

  3. Numerical and statistical analyses of aerodynamic characteristics of low Reynolds number airfoils using Xfoil and JMP

    Science.gov (United States)

    Chua, John Christian; Lopez, Neil Stephen; Augusto, Gerardo

    2017-11-01

    Low Reynolds number aerodynamics has become a promising topic of interest for various commercial applications such as wind turbines. Airfoils employed in this type of application usually experience performance degradation due to separation bubble formation. This study investigates the behavior and effect of this phenomenon and analyzes the interrelationships among the contributing factors affecting its existence using JMP, a statistical analysis tool, with numerical data generated by Xfoil, a program for the analysis of low-speed airfoils. Numerical results were validated against published experimental data and exhibited favorable agreement, most notably within the upper limits of the given Reynolds number range. Surface pressure and skin friction drag coefficient plots show that the bubble length tends to decrease as angle of attack, Reynolds number and turbulence intensity are increased. The abridgement of the bubble extent due to enhancement of flow instabilities is associated with an increased lift-to-drag ratio, which is more pronounced in the attached flow regions. The statistical technique yielded predictive models for multiple outcome variables, and it was found that the main effects had a more significant influence on the aerodynamic properties of the airfoils and the chordwise extent of the separation bubble.

  4. Identifying Frequent Users of an Urban Emergency Medical Service Using Descriptive Statistics and Regression Analyses.

    Science.gov (United States)

    Norman, Chenelle; Mello, Michael; Choi, Bryan

    2016-01-01

    This retrospective cohort study provides a descriptive analysis of a population that frequently uses an urban emergency medical service (EMS) and identifies factors that contribute to use among all frequent users. For purposes of this study we divided frequent users into the following groups: low-frequent users (4 EMS transports in 2012), medium-frequent users (5 to 6 EMS transports in 2012), high-frequent users (7 to 10 EMS transports in 2012) and super-frequent users (11 or more EMS transports in 2012). Overall, we identified 539 individuals as frequent users. For all groups of EMS frequent users (i.e. low, medium, high and super), one or more hospital admissions, receiving a referral for follow-up care upon discharge, and having no insurance were found to be statistically significantly associated with frequent EMS use.

  5. Municipal solid waste composition: Sampling methodology, statistical analyses, and case study evaluation

    DEFF Research Database (Denmark)

    Edjabou, Vincent Maklawe Essonanawe; Jensen, Morten Bang; Götze, Ramona

    2015-01-01

    Sound waste management and optimisation of resource recovery require reliable data on solid waste generation and composition. In the absence of standardised and commonly accepted waste characterisation methodologies, various approaches have been reported in literature. This limits both comparability and applicability of the results. In this study, a waste sampling and sorting methodology for efficient and statistically robust characterisation of solid waste was introduced. The methodology was applied to residual waste collected from 1442 households distributed among 10 individual sub... stratification parameter. Separating food leftovers from food packaging during manual sorting of the sampled waste did not have significant influence on the proportions of food waste and packaging materials, indicating that this step may not be required. (C) 2014 Elsevier Ltd. All rights reserved.

  6. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (an eight-construct latent: manufacturing equipment and technology, processes technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms was collected using a five-point Likert-scale questionnaire. Statistical analysis was carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA, and relationships between agile components were tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.

  7. Chemometric and Statistical Analyses of ToF-SIMS Spectra of Increasingly Complex Biological Samples

    Energy Technology Data Exchange (ETDEWEB)

    Berman, E S; Wu, L; Fortson, S L; Nelson, D O; Kulp, K S; Wu, K J

    2007-10-24

    Characterizing and classifying molecular variation within biological samples is critical for determining fundamental mechanisms of biological processes that will lead to new insights including improved disease understanding. Towards these ends, time-of-flight secondary ion mass spectrometry (ToF-SIMS) was used to examine increasingly complex samples of biological relevance, including monosaccharide isomers, pure proteins, complex protein mixtures, and mouse embryo tissues. The complex mass spectral data sets produced were analyzed using five common statistical and chemometric multivariate analysis techniques: principal component analysis (PCA), linear discriminant analysis (LDA), partial least squares discriminant analysis (PLSDA), soft independent modeling of class analogy (SIMCA), and decision tree analysis by recursive partitioning. PCA was found to be a valuable first step in multivariate analysis, providing insight both into the relative groupings of samples and into the molecular basis for those groupings. For the monosaccharides, pure proteins and protein mixture samples, all of LDA, PLSDA, and SIMCA were found to produce excellent classification given a sufficient number of compound variables calculated. For the mouse embryo tissues, however, SIMCA did not produce as accurate a classification. The decision tree analysis was found to be the least successful for all the data sets, providing neither as accurate a classification nor chemical insight for any of the tested samples. Based on these results, we conclude that as the complexity of the sample increases, so must the sophistication of the multivariate technique used to classify the samples. PCA is a preferred first step for understanding ToF-SIMS data that can be followed by either LDA or PLSDA for effective classification analysis. This study demonstrates the strength of ToF-SIMS combined with multivariate statistical and chemometric techniques to classify increasingly complex biological samples.
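
    The "PCA first, then supervised classification" workflow can be illustrated on a small stand-in dataset; scikit-learn's iris data replaces the ToF-SIMS spectra, so the accuracy shown says nothing about the paper's samples:

```python
# PCA for dimensionality reduction followed by LDA for classification,
# evaluated with cross-validation.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)

# PCA compresses many correlated variables into a few components;
# LDA then finds the directions that best separate the classes.
pipeline = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
scores = cross_val_score(pipeline, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```

    Putting both steps in one pipeline ensures the PCA is refit inside each cross-validation fold, avoiding information leakage from the held-out samples.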

  8. Radioembolization for Hepatocellular Carcinoma: Statistical Confirmation of Improved Survival in Responders by Landmark Analyses.

    Science.gov (United States)

    Riaz, Ahsun; Gabr, Ahmed; Abouchaleh, Nadine; Ali, Rehan; Alasadi, Ali; Mora, Ronald; Kulik, Laura; Desai, Kush; Thornburg, Bartley; Mouli, Samdeep; Hickey, Ryan; Miller, Frank H; Yaghmai, Vahid; Ganger, Daniel; Lewandowski, Robert J; Salem, Riad

    2017-08-18

    Does imaging response predict survival in hepatocellular carcinoma (HCC)? We studied the ability of post-therapeutic imaging response to predict overall survival. Over 14 years, 948 HCC patients were treated with radioembolization. Patients with baseline metastases, vascular invasion, multifocal disease, or Child-Pugh>B7, and those transplanted/resected, were excluded. This created our homogenous study cohort of 134 Child-Pugh≤B7 patients with solitary HCC. Response (using European Association for Study of the Liver [EASL] and Response Evaluation Criteria in Solid Tumors 1.1 [RECIST 1.1] criteria) was associated with survival using Landmark and risk-of-death methodologies after reviewing 960 scans. In a sub-analysis, survival times of responders were compared to those of patients with stable disease (SD) and progressive disease (PD). Uni/multivariate survival analyses were performed at each Landmark. At the 3-month Landmark, responders survived longer than nonresponders by EASL (HR:0.46; CI:0.26-0.82; P=0.002) but not RECIST 1.1 criteria (HR:0.70; CI:0.37-1.32; P=0.32). At the 6-month Landmark, responders survived longer than nonresponders by EASL (HR:0.32; CI:0.15-0.77; P<0.001) and RECIST 1.1 criteria (HR:0.50; CI:0.29-0.87; P=0.021). At the 12-month Landmark, responders survived longer than nonresponders by EASL (HR:0.34; CI:0.15-0.77; P<0.001) and RECIST 1.1 criteria (HR:0.52; CI:0.27-0.98; P=0.049). At 6 months, risk of death was lower for responders by EASL (P<0.001) and RECIST 1.1 (P=0.0445). In sub-analyses, responders lived longer than patients with SD or PD. EASL response was a significant predictor of survival at the 3, 6, and 12 month Landmarks on uni/multivariate analyses. Response to radioembolization in patients with solitary HCC can prognosticate improved survival. EASL necrosis criteria outperformed RECIST 1.1 size criteria in predicting survival. The therapeutic objective of radioembolization should be radiologic response and not solely to prevent progression.

  9. Automated extraction of reported statistical analyses: towards a logical representation of clinical trial literature.

    Science.gov (United States)

    Hsu, William; Speier, William; Taira, Ricky K

    2012-01-01

    Randomized controlled trials are an important source of evidence for guiding clinical decisions when treating a patient. However, given the large number of studies and their variability in quality, determining how to summarize reported results and formalize them as part of practice guidelines continues to be a challenge. We have developed a set of information extraction and annotation tools to automate the identification of key information from papers related to the hypothesis, sample size, statistical test, confidence interval, significance level, and conclusions. We adapted the Automated Sequence Annotation Pipeline to map extracted phrases to relevant knowledge sources. We trained and tested our system on a corpus of 42 full-text articles related to chemotherapy of non-small cell lung cancer. On our test set of 7 papers, we obtained an overall precision of 86%, recall of 78%, and an F-score of 0.82 for classifying sentences. This work represents our efforts towards utilizing this information for quality assessment, meta-analysis, and modeling.
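
    The reported metrics relate as follows; the true/false positive counts below are hypothetical, chosen only so the resulting values land near the paper's reported precision, recall, and F-score:

```python
# Precision, recall, and F1 from confusion-matrix counts.
def prf(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    precision = tp / (tp + fp)          # fraction of flagged sentences that are correct
    recall = tp / (tp + fn)             # fraction of relevant sentences that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Illustrative counts on a hypothetical 100-positive test set.
p, r, f = prf(tp=78, fp=13, fn=22)
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")
```

    Because F1 is a harmonic mean, it sits closer to the weaker of precision and recall, which is why the paper's 0.82 falls between its 86% precision and 78% recall.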

  10. Lightning NOx Statistics Derived by NASA Lightning Nitrogen Oxides Model (LNOM) Data Analyses

    Science.gov (United States)

    Koshak, William; Peterson, Harold

    2013-01-01

    What is the LNOM? The NASA Marshall Space Flight Center (MSFC) Lightning Nitrogen Oxides Model (LNOM) [Koshak et al., 2009, 2010, 2011; Koshak and Peterson 2011, 2013] analyzes VHF Lightning Mapping Array (LMA) and National Lightning Detection Network™ (NLDN) data to estimate the lightning nitrogen oxides (LNOx) produced by individual flashes. Figure 1 provides an overview of LNOM functionality. Benefits of LNOM: (1) Does away with unrealistic "vertical stick" lightning channel models for estimating LNOx; (2) Uses ground-based VHF data that maps out the true channel in space and time to < 100 m accuracy; (3) Therefore, true channel segment height (ambient air density) is used to compute LNOx; (4) True channel length is used! (typically tens of kilometers, since the channel has many branches and "wiggles"); (5) A distinction between ground and cloud flashes is made; (6) For ground flashes, the actual peak current from the NLDN is used to compute NOx from the lightning return stroke; (7) NOx is computed for several other lightning discharge processes (based on the theory of Cooray et al., 2009): (a) hot core of stepped leaders and dart leaders, (b) corona sheath of the stepped leader, (c) K-change, (d) continuing currents, and (e) M-components; and (8) LNOM statistics (see later) can be used to parameterize LNOx production for regional air quality models (like CMAQ) and for global chemical transport models (like GEOS-Chem).

  11. Team-Based Learning in a Subsection of a Veterinary Course as Compared to Standard Lectures

    Science.gov (United States)

    Malone, Erin; Spieth, Amie

    2012-01-01

    Team-Based Learning (TBL) maximizes class time for student practice in complex problems using peer learning in an instructor-guided format. Generally entire courses are structured using the comprehensive guidelines of TBL. We used TBL in a subsection of a veterinary course to determine if it remained effective in this format. One section of the…

  12. Studies in Coprinus—I. Subsections Auricomi and Glabri of Coprinus section Pseudocoprinus

    NARCIS (Netherlands)

    Uljé, C.B.; Bas, C.

    1988-01-01

    A key is given to the Netherlands’ species of subsect. Auricomi Sing. and Glabri J. Lange of Coprinus sect. Pseudocoprinus (Kühn.) Orton & Watling. All species concerned are concisely described and amply discussed. Coprinus plicatilis var. microsporus Kühn. is raised to species level as C. kuehneri

  13. STATISTIC, PROBABILISTIC, CORRELATION AND SPECTRAL ANALYSES OF REGENERATIVE BRAKING CURRENT OF DC ELECTRIC ROLLING STOCK

    Directory of Open Access Journals (Sweden)

    A. V. Nikitenko

    2014-04-01

    Purpose. This paper defines and analyzes the probabilistic and spectral characteristics of the random current in the regenerative braking mode of DC electric rolling stock. Methodology. Elements and methods of probability theory (particularly the theory of stationary and non-stationary processes) and methods of sampling theory are used for processing the regenerated-current data arrays by PC. Findings. The regenerated current records were obtained from locomotives and trains on Ukrainian railways and from trams in Poland. It was established that the current has both continuous and jumping variations in time (especially in trams). For the random current in the regenerative braking mode, the functions of mathematical expectation, dispersion and standard deviation are calculated. Histograms, probabilistic characteristics and correlation functions are calculated and plotted for this current as well. It was established that the current in the regenerative braking mode can be considered a stationary and non-ergodic process. Spectral analysis of these records and of the "tail part" of the correlation function found weak periodic (or low-frequency) components, known as interharmonics. Originality. First, the theory of non-stationary random processes was adapted for the analysis of the recuperated current, which has continuous and jumping variations in time. Second, the presence of interharmonics in the stochastic process of the regenerated current was identified for the first time. Finally, the patterns of temporal changes of the current correlation function are defined. This makes it possible to apply the correlation-function method in the identification of electric traction system devices. Practical value. The results of the probabilistic and statistical analysis of the recuperated current allow estimation of the quality of recovered energy and the energy quality indices of electric rolling stock in the
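
    The correlation-function and spectral steps can be sketched on a synthetic signal: white noise plus a weak low-frequency sinusoid standing in for an interharmonic (the sampling rate and component frequency are assumptions, not values from the paper):

```python
# Autocorrelation and power spectrum of a synthetic "regenerated current".
import numpy as np

fs = 1000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)
current = rng.normal(0, 1, t.size) + 0.5 * np.sin(2 * np.pi * 7.0 * t)

x = current - current.mean()
# Biased, normalised sample autocorrelation for non-negative lags.
acf = np.correlate(x, x, mode="full")[x.size - 1:] / (x.var() * x.size)

# Power spectrum via the FFT; the hidden 7 Hz line should dominate.
spectrum = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(x.size, 1 / fs)
peak = freqs[spectrum.argmax()]
print(f"acf(0)={acf[0]:.2f}, dominant frequency ~ {peak:.1f} Hz")
```

    A slowly decaying oscillation in the "tail part" of the autocorrelation is the time-domain signature of the same weak periodic component that the spectrum exposes as a narrow peak.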

  14. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Endogenous retroviruses (ERVs) are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species' genome and determine which individuals have acquired each ERV by descent. Because well-annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that is insensitive to differences between query and reference and that is amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  15. Using Innovative Statistical Analyses to Assess Soil Degradation due to Land Use Change

    Science.gov (United States)

    Khaledian, Yones; Kiani, Farshad; Ebrahimi, Soheila; Brevik, Eric C.; Aitkenhead-Peterson, Jacqueline

    2016-04-01

    Soil erosion and overall loss of soil fertility is a serious issue for loess soils of the Golestan province, northern Iran. The assessment of soil degradation at large watershed scales is urgently required. This research investigated the role of land use change and its effect on soil degradation in cultivated, pasture and urban lands, when compared to native forest, in terms of declines in soil fertility. Several statistical methods, including partial least squares (PLS), principal component regression (PCR), and ordinary least squares regression (OLS), were used to predict soil cation-exchange capacity (CEC) from soil characteristics. PCA identified five primary components of soil quality. The PLS model was used to predict soil CEC from the soil characteristics including bulk density (BD), electrical conductivity (EC), pH, calcium carbonate equivalent (CCE), soil particle density (DS), mean weight diameter (MWD), soil porosity (F), organic carbon (OC), labile carbon (LC), mineral carbon, saturation percentage (SP), soil particle size (clay, silt and sand), exchangeable cations (Ca2+, Mg2+, K+, Na+), and soil microbial respiration (SMR) collected in the Ziarat watershed. In order to evaluate the best fit, two other methods, PCR and OLS, were also examined. An exponential semivariogram using PLS predictions revealed stronger spatial dependence among CEC [r2 = 0.80, and RMSE = 1.99] than the other methods, PCR [r2 = 0.84, and RMSE = 2.45] and OLS [r2 = 0.84, and RMSE = 2.45]. Therefore, the PLS method provided the best model for the data. In stepwise regression analysis, MWD and LC were selected as influential variables in all soils, whereas the other influential parameters differed among land uses. This study quantified reductions in numerous soil quality parameters resulting from extensive land-use changes and urbanization in the Ziarat watershed in Northern Iran.

  16. Statistical analyses of the background distribution of groundwater solutes, Los Alamos National Laboratory, New Mexico.

    Energy Technology Data Exchange (ETDEWEB)

    Longmire, Patrick A.; Goff, Fraser; Counce, D. A. (Dale A.); Ryti, R. T. (Randall T.); Dale, Michael R.; Britner, Kelly A

    2004-01-01

    Background or baseline water chemistry data and information are required to distinguish between contaminated and non-contaminated waters for environmental investigations conducted at Los Alamos National Laboratory (referred to as the Laboratory). The term 'background' refers to natural waters discharged by springs or penetrated by wells that have not been contaminated by LANL or other municipal or industrial activities, and that are representative of groundwater discharging from their respective aquifer material. These investigations are conducted as part of the Environmental Restoration (ER) Project, Groundwater Protection Program (GWPP), Laboratory Surveillance Program, the Hydrogeologic Workplan, and the Site-Wide Environmental Impact Statement (SWEIS). This poster provides a comprehensive, validated database of inorganic, organic, stable isotope, and radionuclide analyses of up to 136 groundwater samples collected from 15 baseline springs and wells located in and around Los Alamos National Laboratory, New Mexico. The region considered in this investigation extends from the western edge of the Jemez Mountains eastward to the Rio Grande and from Frijoles Canyon northward to Garcia Canyon. Figure 1 shows the fifteen stations sampled for this investigation. The sampling stations and associated aquifer types are summarized in Table 1.

  17. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting families of the victims. In 2005, there were a total of 47,341 injury traffic crashes in Riyadh city (19% of the total KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, high daily trips (reaching about 6 million), high income levels, the low cost of petrol, drivers of different nationalities, young drivers, and tremendous growth in population, which creates a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore the factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, aiming to establish effective safety policies ready to be implemented to reduce the severity and frequency of road crashes. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were developed and applied to the injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Due to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, count models, namely Negative Binomial (NB) models, were employed, and the unit of analysis was the 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious-injury crashes).
The results from both multinomial and binary response models are found to be fairly consistent but

  18. A rapid discrimination of authentic and unauthentic Radix Angelicae Sinensis growth regions by electronic nose coupled with multivariate statistical analyses.

    Science.gov (United States)

    Liu, Jie; Wang, Weixin; Yang, Yaojun; Yan, Yuning; Wang, Wenyi; Wu, Haozhong; Ren, Zihe

    2014-10-27

    Radix Angelicae Sinensis, known as Danggui in China, is an effective and widely applied material in Traditional Chinese Medicine (TCM), used in more than 80 composite formulae. Danggui from Minxian County, Gansu Province is the best in quality. To rapidly and nondestructively discriminate Danggui from the authentic region of origin from that grown in unauthentic regions, an electronic nose coupled with multivariate statistical analyses was developed. Two different feature extraction methods were used to ensure that the authentic and unauthentic regions of Danggui origin could be discriminated. One method captures the average value of the maximum response of the electronic nose sensors (feature extraction method 1). The other combines the maximum response of the sensors with their inter-ratios (feature extraction method 2). Multivariate statistical analyses, including principal component analysis (PCA), soft independent modeling of class analogy (SIMCA), and hierarchical clustering analysis (HCA), were employed. Nineteen samples were analyzed by PCA, SIMCA and HCA. The remaining samples (GZM1, SH) were then projected onto the SIMCA model to validate the models. The results indicated that, using feature extraction method 2, Danggui from Yunnan Province and Danggui from Gansu Province could be successfully discriminated using the electronic nose coupled with PCA, SIMCA and HCA, which suggests that the electronic-nose system could be used as a simple and rapid technique for discriminating Danggui between authentic and unauthentic regions of origin.

  19. An overview of statistical and regulatory issues in the planning, analysis, and interpretation of subgroup analyses in confirmatory clinical trials.

    Science.gov (United States)

    Hemmings, Robert

    2014-01-01

    Whether confirmatory or exploratory in nature, the investigation of subgroups poses statistical and interpretational challenges, yet these investigations can have important consequences for product licensing, labeling, reimbursement, and prescribing decisions. This article provides a high-level, nontechnical summary of key statistical issues in the analysis of subgroups, with a focus on the regulatory context in which drug development and licensing decisions are made. References to specific aspects of regulatory processes are based on the system in Europe, though it is hoped that the principles outlined can be generally applied to other regulatory regions. This article challenges the common assumption that a clinical trial population should be assumed to be homogeneous, with homogeneous response to treatment, and asks whether commonly employed strategies for handling and identifying potential heterogeneity are sufficient. Investigations into subgroups are unavoidable, yet subgroup analyses suffer from fundamental complications and limitations of which those planning and interpreting clinical trials must be aware. Some areas for further methodological work and an improved methodological framework for the conduct of exploratory subgroup analyses are discussed. Above all, the need for an integrated scientific approach is highlighted.

  20. Reservoir zonation based on statistical analyses: A case study of the Nubian sandstone, Gulf of Suez, Egypt

    Science.gov (United States)

    El Sharawy, Mohamed S.; Gaafar, Gamal R.

    2016-12-01

    Both reservoir engineers and petrophysicists have been concerned with dividing a reservoir into zones for engineering and petrophysical purposes. Over the decades, several techniques and approaches have been introduced. Of these, the statistical reservoir zonation, stratigraphic modified Lorenz (SML) plot, and principal component and clustering analysis techniques were chosen and applied to the Nubian sandstone reservoir of Palaeozoic - Lower Cretaceous age, Gulf of Suez, Egypt, using five adjacent wells. The studied reservoir consists mainly of sandstone with some intercalated shale layers whose thickness varies from one well to another. The permeability ranged from less than 1 md to more than 1000 md. The statistical reservoir zonation technique, based on core permeability, indicated that the cored interval of the studied reservoir can be divided into two zones. Using reservoir properties such as porosity, bulk density, acoustic impedance and interval transit time also indicated two zones, with an obvious variation in separation depth and zone continuity. The stratigraphic modified Lorenz (SML) plot indicated the presence of more than 9 flow units in the cored interval as well as a high degree of microscopic heterogeneity. On the other hand, principal component and cluster analyses, based on well logging data (gamma ray, sonic, density and neutron), indicated that the whole reservoir can be divided into at least four electrofacies having a noticeable variation in reservoir quality, as correlated with the measured permeability. Furthermore, continuity or discontinuity of the reservoir zones can be determined using this analysis.
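
    The electrofacies idea, standardise well-log responses, reduce with PCA, then cluster, can be sketched with synthetic logs (two lithologies rather than the study's four electrofacies, and all log values below are hypothetical):

```python
# Cluster standardised well-log responses (GR, DT, RHOB, NPHI) into
# electrofacies via PCA + k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Two synthetic lithologies: clean sand vs shaly intervals (illustrative values).
sand = rng.normal([30, 80, 2.30, 0.12], [5, 5, 0.05, 0.03], (100, 4))
shale = rng.normal([90, 110, 2.55, 0.30], [8, 6, 0.05, 0.04], (100, 4))
logs = np.vstack([sand, shale])              # columns: GR, DT, RHOB, NPHI

X = StandardScaler().fit_transform(logs)     # logs have very different units
scores = PCA(n_components=2).fit_transform(X)  # reduce before clustering
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))                   # samples per electrofacies
```

    Standardising first matters because gamma ray and neutron porosity differ by orders of magnitude in scale; without it the clustering would be dominated by a single log.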

  1. Consumer Loyalty and Loyalty Programs: a topographic examination of the scientific literature using bibliometrics, spatial statistics and network analyses

    Directory of Open Access Journals (Sweden)

    Viviane Moura Rocha

    2015-04-01

    Full Text Available This paper presents a topographic analysis of the fields of consumer loyalty and loyalty programs, extensively studied in recent decades and still relevant in the marketing literature. After the identification of 250 scientific papers published in the last ten years in indexed journals, a subset of 76 was chosen and their 3223 references were extracted. The journals in which these papers were published, their keywords, abstracts, authors, institutions of origin and citation patterns were identified and analyzed using bibliometrics, spatial statistics techniques and network analyses. The results allow the identification of the central components of the field, as well as the main authors, journals, institutions and countries that intermediate the diffusion of knowledge, which contributes to the understanding of the constitution of the field by researchers and students.

  2. Exploring the statistical and clinical impact of two interim analyses on the Phase II design with option for direct assignment.

    Science.gov (United States)

    An, Ming-Wen; Mandrekar, Sumithra J; Edelman, Martin J; Sargent, Daniel J

    2014-07-01

    The primary goal of Phase II clinical trials is to understand better a treatment's safety and efficacy to inform a Phase III go/no-go decision. Many Phase II designs have been proposed, incorporating randomization, interim analyses, adaptation, and patient selection. The Phase II design with an option for direct assignment (i.e. stop randomization and assign all patients to the experimental arm based on a single interim analysis (IA) at 50% accrual) was recently proposed [An et al., 2012]. We discuss this design in the context of existing designs, and extend it from a single-IA to a two-IA design. We compared the statistical properties and clinical relevance of the direct assignment design with two IA (DAD-2) versus a balanced randomized design with two IA (BRD-2) and a direct assignment design with one IA (DAD-1), over a range of response rate ratios (2.0-3.0). The DAD-2 has minimal loss in power (designs, the direct assignment design, especially with two IA, provides a middle ground with desirable statistical properties and likely appeal to both clinicians and patients. Copyright © 2014 Elsevier Inc. All rights reserved.
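A rough sense of how a direct-assignment design behaves can be had from a Monte Carlo sketch. The interim decision rule and "go" threshold below are illustrative assumptions, not the rules from An et al.; only the overall shape (randomize 1:1 until the interim analysis, then possibly assign everyone to the experimental arm) follows the design described above.

```python
import random

random.seed(42)

def simulate_trial(p_control, p_experimental, n_total=80, n_runs=2000):
    """Monte Carlo sketch of a direct-assignment design with one interim
    analysis (IA) at 50% accrual: randomize 1:1 up to the IA; if the
    experimental arm looks at least as good at the IA, assign all
    remaining patients to it. Thresholds are illustrative only."""
    go = 0
    for _ in range(n_runs):
        half = n_total // 2
        exp_resp = sum(random.random() < p_experimental for _ in range(half // 2))
        ctl_resp = sum(random.random() < p_control for _ in range(half // 2))
        direct = exp_resp >= ctl_resp              # interim decision rule (illustrative)
        n_exp_2 = half if direct else half // 2    # remaining experimental-arm patients
        exp_resp += sum(random.random() < p_experimental for _ in range(n_exp_2))
        n_exp = half // 2 + n_exp_2
        # "go" if the observed experimental response rate clears a fixed bar
        go += (exp_resp / n_exp) > p_control + 0.10
    return go / n_runs

power = simulate_trial(p_control=0.20, p_experimental=0.40)   # response rate ratio 2.0
print(round(power, 3))
```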

  3. A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses.

    Science.gov (United States)

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-12-01

    The application of multivariate statistical analyses has become a consistent feature in microbial ecology. However, many microbial ecologists are still in the process of developing a deep understanding of these methods and appreciating their limitations. As a consequence, staying abreast of progress and debate in this arena poses an additional challenge to many microbial ecologists. To address these issues, we present the GUide to STatistical Analysis in Microbial Ecology (GUSTA ME): a dynamic, web-based resource providing accessible descriptions of numerous multivariate techniques relevant to microbial ecologists. A combination of interactive elements allows users to discover and navigate between methods relevant to their needs and examine how they have been used by others in the field. We have designed GUSTA ME to become a community-led and -curated service, which we hope will provide a common reference and forum to discuss and disseminate analytical techniques relevant to the microbial ecology community. © 2014 The Authors. FEMS Microbiology Ecology published by John Wiley & Sons Ltd on behalf of Federation of European Microbiological Societies.

  4. Statistical approaches to analyse patient-reported outcomes as response variables: an application to health-related quality of life.

    Science.gov (United States)

    Arostegui, Inmaculada; Núñez-Antón, Vicente; Quintana, José M

    2012-04-01

    Patient-reported outcomes (PRO) are used as primary endpoints in medical research, and their statistical analysis is an important methodological issue. The theoretical assumptions of the selected methodology and the interpretation of its results are issues to take into account when selecting an appropriate statistical technique to analyse the data. We present eight methods of analysis of a popular PRO tool under different assumptions that lead to different interpretations of the results. All methods were applied to responses obtained from two of the health dimensions of the SF-36 Health Survey. The proposed methods are: multiple linear regression (MLR), with least squares and bootstrap estimation, tobit regression, ordinal logistic and probit regressions, beta-binomial regression (BBR), binomial-logit-normal regression (BLNR) and coarsening. Selection of an appropriate model depends not only on its distributional assumptions but also on the continuous or ordinal features of the response and the fact that it is constrained to a bounded interval. The BBR approach renders satisfactory results in a broad range of situations. MLR is not recommended, especially with skewed outcomes. Ordinal methods are only appropriate for outcomes with a small number of categories. Tobit regression is an acceptable option under normality assumptions and in the presence of a moderate ceiling or floor effect. The BLNR and coarsening proposals are also acceptable, but only under certain distributional assumptions that are difficult to test a priori. Interpretation of the results is more convenient when using the BBR, BLNR and ordinal logistic regression approaches.
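A minimal illustration of the beta-binomial model favoured above, assuming a PRO score can be expressed as k "successes" out of n items (the parameter values are invented):

```python
from math import lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(k, n, alpha, beta):
    """Beta-binomial pmf: a binomial whose success probability is itself
    Beta(alpha, beta) distributed -- a natural model for bounded, skewed
    scores, unlike ordinary linear regression."""
    log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_comb + log_beta(k + alpha, n - k + beta) - log_beta(alpha, beta))

# A dimension scored 0..n maps onto k of n; the pmf sums to 1 over that range.
n = 20
pmf = [betabinom_pmf(k, n, alpha=0.8, beta=2.5) for k in range(n + 1)]
print(round(sum(pmf), 6))   # -> 1.0
```

With alpha < beta the mass piles up near zero, mimicking the floor effect that makes MLR unsuitable for such outcomes.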

  5. Statistical Analyses of Optimum Partial Replacement of Cement by Fly Ash Based on Complete Consumption of Calcium Hydroxide

    Directory of Open Access Journals (Sweden)

    Ouypornprasert Winai

    2016-01-01

    Full Text Available The objectives of this technical paper were to propose the optimum partial replacement of cement by fly ash based on the complete consumption of calcium hydroxide from hydration reactions of cement and the long-term strength activity index based on equivalent calcium silicate hydrate as well as the propagation of uncertainty due to randomness inherent in main chemical compositions in cement and fly ash. Firstly the hydration- and pozzolanic reactions as well as stoichiometry were reviewed. Then the optimum partial replacement of cement by fly ash was formulated. After that the propagation of uncertainty due to main chemical compositions in cement and fly ash was discussed and the reliability analyses for applying the suitable replacement were reviewed. Finally an applicability of the concepts mentioned above based on statistical data of materials available was demonstrated. The results from analyses were consistent with the testing results by other researchers. The results of this study provided guidelines of suitable utilization of fly ash for partial replacement of cement. It was interesting to note that these concepts could be extended to optimize partial replacement of cement by other types of pozzolan which were described in the other papers of the authors.

  6. Statistical analysis of individual participant data meta-analyses: a comparison of methods and recommendations for practice.

    Science.gov (United States)

    Stewart, Gavin B; Altman, Douglas G; Askie, Lisa M; Duley, Lelia; Simmonds, Mark C; Stewart, Lesley A

    2012-01-01

    Individual participant data (IPD) meta-analyses that obtain "raw" data from studies rather than summary data typically adopt a "two-stage" approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of "one-stage" approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare "two-stage" and "one-stage" models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using anti-platelets (Relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. Researchers considering undertaking an IPD meta
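The second stage of the "two-stage" approach described above can be sketched as inverse-variance fixed-effect pooling of per-trial log relative risks; the trial counts below are invented for illustration, not the antiplatelet data.

```python
from math import exp, log, sqrt

# Stage 1 output: per-trial 2x2 counts -> log relative risks with variances.
trials = [  # (events_trt, n_trt, events_ctl, n_ctl) -- illustrative numbers
    (30, 200, 36, 200),
    (14, 120, 18, 118),
    (45, 310, 52, 305),
]

# Stage 2: inverse-variance fixed-effect pooling of the log relative risks.
weights_sum = wlogrr_sum = 0.0
for a, n1, c, n2 in trials:
    log_rr = log((a / n1) / (c / n2))
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2      # delta-method variance of log RR
    w = 1 / var
    weights_sum += w
    wlogrr_sum += w * log_rr

pooled_log_rr = wlogrr_sum / weights_sum
se = sqrt(1 / weights_sum)
pooled_rr = exp(pooled_log_rr)
ci = (exp(pooled_log_rr - 1.96 * se), exp(pooled_log_rr + 1.96 * se))
print(round(pooled_rr, 3), tuple(round(x, 3) for x in ci))
```

A one-stage analysis would instead fit a single (e.g. binomial mixed-effects) model to all participant-level records at once, which is where the extra statistical support is needed.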

  7. A simple device for subsectioning aqueous sediments from the box or spade corer

    Digital Repository Service at National Institute of Oceanography (India)

    Valsangkar, A.B.

    … by using an additional key. The ring(s) thus help to obtain fine-scale sediment sub-section(s) [18] above the sub-core [16] (Figure 2). Rings of plastic or other flexible material can also be used externally for the same purpose. Accessories: 1) Acrylic… on the top of the sub-core. The sediment thus exposed on top is sectioned, packed in a plastic bag and numbered. Similarly, the other three 0.5 cm sections are removed and packed. In this way the top 2 cm of sediment is sub-divided into four parts of 0…

  8. Geographical origin discrimination of lentils (Lens culinaris Medik.) using 1H NMR fingerprinting and multivariate statistical analyses.

    Science.gov (United States)

    Longobardi, Francesco; Innamorato, Valentina; Di Gioia, Annalisa; Ventrella, Andrea; Lippolis, Vincenzo; Logrieco, Antonio F; Catucci, Lucia; Agostiano, Angela

    2017-12-15

    Lentil samples coming from two different countries, Italy and Canada, were analysed using untargeted 1H NMR fingerprinting in combination with chemometrics in order to build models able to classify them according to their geographical origin. To this end, Soft Independent Modelling of Class Analogy (SIMCA), k-Nearest Neighbor (k-NN), Principal Component Analysis followed by Linear Discriminant Analysis (PCA-LDA) and Partial Least Squares-Discriminant Analysis (PLS-DA) were applied to the NMR data and the results were compared. The best combination of average recognition (100%) and cross-validation prediction ability (96.7%) was obtained with PCA-LDA. All the statistical models were validated both by using a test set and by carrying out a Monte Carlo cross-validation: the performances were satisfactory for all the models, with prediction abilities higher than 95%, demonstrating the suitability of the developed methods. Finally, the metabolites that contributed most to the lentil discrimination are indicated. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

    Crowded transportation hubs such as metro stations are thought of as ideal places for the development and spread of epidemics. However, because of their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such settings, and disease-spreading dynamics there were little explored in previous studies. Owing to the heterogeneity and dynamic nature of human interactions, a growing number of studies have demonstrated the importance of contact distance and contact duration for transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained through statistical analyses of microscopic crowd-simulation data. Specifically, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site-survey data, and the values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, a Weibull distribution fitted the histogram of individual-based exposure very well in each case. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can serve as a reference for epidemic studies in complex and confined transportation hubs and help refine existing disease-spreading models.
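Fitting a Weibull distribution to individual exposure values, as reported above, can be sketched with a stdlib-only maximum-likelihood fit; the exposure data here are synthetic stand-ins drawn from a known Weibull, not simulation output from the study.

```python
import math
import random

random.seed(1)
# Synthetic individual exposure values (Weibull-distributed stand-ins).
true_shape, true_scale = 1.8, 5.0
data = [true_scale * (-math.log(1 - random.random())) ** (1 / true_shape)
        for _ in range(5000)]

def weibull_shape_mle(xs, lo=0.1, hi=20.0, iters=60):
    """Bisection on the Weibull MLE score equation for the shape k.
    The score is monotone increasing in k, so bisection converges."""
    log_xs = [math.log(x) for x in xs]
    mean_log = sum(log_xs) / len(xs)
    def score(k):
        s1 = sum(x ** k * lx for x, lx in zip(xs, log_xs))
        s0 = sum(x ** k for x in xs)
        return s1 / s0 - 1 / k - mean_log
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if score(mid) > 0 else (mid, hi)
    return (lo + hi) / 2

shape = weibull_shape_mle(data)
scale = (sum(x ** shape for x in data) / len(data)) ** (1 / shape)
print(round(shape, 2), round(scale, 2))
```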

  10. Cellular fatty acid composition of cyanobacteria assigned to subsection II, order Pleurocapsales.

    Science.gov (United States)

    Caudales, R; Wells, J M; Butterfield, J E

    2000-05-01

    The cellular fatty acid composition of five of the six genera of unicellular cyanobacteria in subsection II, Pleurocapsales (Dermocarpa, Xenococcus, Dermocarpella, Myxosarcina and the Pleurocapsa assemblage) contained high proportions of saturated straight-chain fatty acids (26-41% of the total) and unsaturated straight chains (40-67%). Isomers of 16:1 were the main monounsaturated acid component (11-59%). Polyunsaturated acids were present at trace levels (0-1% or less) in Xenococcus and Myxosarcina, at concentrations of less than 7% in Dermocarpa, Dermocarpella, Pleurocapsa and CCMP 1489, and at high concentrations (35% or more) in Chroococcidiopsis. Chroococcidiopsis was also different in terms of the percentage of 16:1 isomers (10-12%) compared to other genera (30-59%), and in terms of total 16-carbon and 18-carbon fatty acids. In general, the composition and heterogeneity of fatty acids in the order Pleurocapsales was similar to that reported for the unicellular cyanobacteria of subsection I, order Chroococcales.

  11. The effects of clinical and statistical heterogeneity on the predictive values of results from meta-analyses

    NARCIS (Netherlands)

    Melsen, W.G.; Bootsma, M.C.; Rovers, M.M.; Bonten, M.J.

    2014-01-01

    Variance between studies in a meta-analysis will exist. This heterogeneity may be of clinical, methodological or statistical origin. The last of these is quantified by the I(2) -statistic. We investigated, using simulated studies, the accuracy of I(2) in the assessment of heterogeneity and the
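The I² statistic mentioned above derives from Cochran's Q; a minimal sketch with invented per-study effect estimates and standard errors:

```python
# Cochran's Q and the I^2 heterogeneity statistic from per-study effect
# estimates and standard errors (illustrative numbers only).
effects = [0.32, 0.18, 0.45, 0.27, 0.60]   # e.g. log odds ratios
ses     = [0.10, 0.12, 0.15, 0.09, 0.20]

weights = [1 / se ** 2 for se in ses]                       # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100   # percent of variation beyond chance

print(round(q, 2), round(i2, 1))
```

Because I² is a ratio of an observed to an expected chi-square quantity, it is itself estimated with considerable uncertainty in small meta-analyses, which is the accuracy issue the simulations above investigate.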

  13. A Review & Assessment of Current Operating Conditions Allowable Stresses in ASME Section III Subsection NH

    Energy Technology Data Exchange (ETDEWEB)

    R. W. Swindeman

    2009-12-14

    The current operating condition allowable stresses provided in ASME Section III, Subsection NH were reviewed for consistency with the criteria used to establish the stress allowables and with the allowable stresses provided in ASME Section II, Part D. It was found that the S_o values in ASME III-NH were consistent with the S values in ASME IID for the five materials of interest. However, 0.80 S_r was found to be less than S_o at some temperatures for four of the materials. Only the values for alloy 800H appeared to be consistent with the criteria on which S_o values are established. With the intent of undertaking a more detailed evaluation of issues related to the allowable stresses in ASME III-NH, the availability of databases for the five materials was reviewed and augmented databases were assembled.

  14. A New Classification of Ficus Subsection Urostigma (Moraceae) Based on Four Nuclear DNA Markers (ITS, ETS, G3pdh, and ncpGS), Morphology and Leaf Anatomy.

    Directory of Open Access Journals (Sweden)

    Bhanumas Chantarasuwan

    Full Text Available Ficus subsection Urostigma as currently circumscribed contains 27 species, distributed in Africa, Asia, Australia and the Pacific, and is of key importance for understanding the origin and evolution of Ficus and the fig-wasp mutualism. The species of subsection Urostigma are very variable in morphological characters and exhibit a wide range of often partly overlapping distributions, which makes identification difficult. The systematic classification within and between this subsection and others is problematic; e.g., it is still unclear where to classify F. amplissima and F. rumphii. To clarify the circumscription of subsection Urostigma, a phylogenetic reconstruction based on four nuclear DNA markers (ITS, ETS, G3pdh, and ncpGS) combined with morphology and leaf anatomy was conducted. The phylogenetic tree based on the combined datasets shows that F. madagascariensis, a Madagascan species, is sister to the remainder of subsect. Urostigma. Ficus amplissima and F. rumphii, formerly constituting sect. Leucogyne, appear to be embedded in subsect. Conosycea. The result of the phylogenetic analysis necessitates nomenclatural adjustments. A new classification of Ficus subsection Urostigma is presented, along with the morphological and leaf anatomical apomorphies typical of the clades. Two new species are described, one in subsect. Urostigma, the other in subsect. Conosycea, and one variety is raised to species level.

  15. Taxonomic revision of the tropical African group of Carex subsect. Elatae (sect. Spirostachyae, Cyperaceae)

    Directory of Open Access Journals (Sweden)

    Escudero, Marcial

    2011-12-01

    Full Text Available The tropical African monophyletic group of Carex subsect. Elatae (sect. Spirostachyae) is distributed in continental tropical Africa, Madagascar, the Mascarene archipelago, and Bioko Island (32 km off the coast of West Africa, in the Gulf of Guinea). The first monographic treatment of this Carex group, as well as of the tribe Cariceae, was published by Kükenthal (as sect. Elatae Kük.). Recently, the first molecular (nrDNA, cpDNA) phylogeny of Carex sect. Elatae was published, which also included the species of sect. Spirostachyae. In the resulting consensus trees, most species of sect. Elatae were embedded within core Spirostachyae, and so this section was merged with sect. Spirostachyae as subsect. Elatae. Within subsect. Elatae, several groups were described, one of which was termed the “tropical African group”. Here we present a taxonomic revision of this group, based on more than 280 vouchers from 29 herbaria as well as on field trips in tropical Africa. In the revision, we recognise 12 species (16 taxa) within the tropical African group, and so have somewhat modified our previous view, in which 10 species and 12 taxa were listed. One new species from Tanzania, C. uluguruensis Luceño & M. Escudero, is included in this treatment. Several combinations are made, C. cyrtosaccus is treated as a synonym of C. vallis-rosetto and, finally, the binomial C. greenwayi is recognised.

  16. Guided waves based SHM systems for composites structural elements: statistical analyses finalized at probability of detection definition and assessment

    Science.gov (United States)

    Monaco, E.; Memmolo, V.; Ricci, F.; Boffa, N. D.; Maio, L.

    2015-03-01

    Maintenance approaches based on sensorised structures and Structural Health Monitoring (SHM) systems have represented one of the most promising innovations in the field of aerostructures for many years, especially where composite materials (fibre-reinforced resins) are concerned. Layered materials still suffer from drastic reductions of the maximum allowable stress values during the design phase, as well as from costly, recurrent inspections during the life cycle, which prevent their structural and economic potential from being fully exploited in today's aircraft. These penalizing measures are necessary mainly to account for the possible presence of undetected hidden flaws within the layered sequence (delaminations) or in bonded areas (partial disbonds). To relax design and maintenance constraints, a system based on sensors permanently installed on the structure to detect and locate eventual flaws can be considered (an SHM system), once its effectiveness and reliability have been statistically demonstrated via a rigorous Probability of Detection (POD) function definition and evaluation. This paper presents an experimental approach, with a statistical procedure, for the evaluation of the detection threshold of a guided-wave-based SHM system oriented to delamination detection on a typical composite layered wing panel. The experimental tests are mostly oriented to characterizing the statistical distribution of measurements and damage metrics, as well as the system's detection capability. It is not possible to numerically substitute the part of the experimental tests aimed at POD, where the noise in the system response is crucial. Results of the experiments are presented and analyzed in the paper.
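A hit/miss POD estimate of the kind underlying such assessments can be sketched by binning detection outcomes by flaw size; the trial outcomes below are invented, and a real POD study would fit a parametric (e.g. logistic) curve with confidence bounds rather than raw bin proportions.

```python
from collections import defaultdict

# Hit/miss trials: (delamination size in mm, detected 0/1) -- invented numbers.
trials = [(2, 0), (2, 0), (3, 0), (3, 1), (4, 1), (4, 0), (5, 1), (5, 1),
          (6, 1), (6, 1), (7, 1), (7, 1), (8, 1), (8, 1), (9, 1), (9, 1)]

hits, counts = defaultdict(int), defaultdict(int)
for size, detected in trials:
    hits[size] += detected
    counts[size] += 1

# Empirical probability of detection per flaw-size bin.
pod = {size: hits[size] / counts[size] for size in sorted(counts)}
print(pod)   # POD rises with delamination size
```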

  17. A statistical human resources costing and accounting model for analysing the economic effects of an intervention at a workplace.

    Science.gov (United States)

    Landstad, Bodil J; Gelin, Gunnar; Malmquist, Claes; Vinberg, Stig

    2002-09-15

    The study had two primary aims. The first aim was to combine a human resources costing and accounting approach (HRCA) with a quantitative statistical approach in order to get an integrated model. The second aim was to apply this integrated model in a quasi-experimental study in order to investigate whether preventive intervention affected sickness absence costs at the company level. The intervention studied contained occupational organizational measures, competence development, physical and psychosocial working environmental measures, and individual and rehabilitation measures on both an individual and a group basis. The study is a quasi-experimental design with a non-randomized control group. Both groups involved cleaning jobs at predominantly female workplaces. The study plan involved carrying out before and after studies on both groups. The study included only those who were at the same workplace during the whole of the study period. In the HRCA model used here, the cost of sickness absence is the net difference between the costs, in the form of the value of the loss of production and the administrative cost, and the benefits in the form of lower labour costs. According to the HRCA model, the intervention used counteracted a rise in sickness absence costs at the company level, giving an average net effect of 266.5 Euros per person (full-time working) during an 8-month period. Using an analogous statistical analysis on the whole of the material, the contribution of the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 283.2 Euros. Using a statistical method it was possible to study the regression coefficients in sub-groups and calculate the p-values for these coefficients; in the younger group the intervention gave a calculated net contribution of 605.6 Euros with a p-value of 0.073, while the intervention net contribution in the older group had a very high p-value. Using the statistical model it was

  18. Analyses of statistical transformations of raw data describing free proline concentration in sugar beet exposed to drought

    Directory of Open Access Journals (Sweden)

    Putnik-Delić Marina I.

    2010-01-01

    Full Text Available Eleven sugar beet genotypes were tested for their capacity to tolerate drought. Plants were grown under semi-controlled conditions in the greenhouse and watered daily. After 90 days, a water deficit was imposed by the cessation of watering, while the control plants continued to be watered up to 80% of FWC. Five days later, the concentration of free proline in leaves was determined. The analysis was done in three replications. Statistical analysis was performed using STATISTICA 9.0, Minitab 15, and R 2.11.1. Differences between genotypes were statistically processed by Duncan's test. Because of the non-normality of the data distribution and the heterogeneity of variances in different groups, two types of transformations of the raw data were applied. For this type of data, the Johnson transformation was more appropriate for eliminating non-normality than the Box-Cox transformation. Based on both transformations, it may be concluded that in all genotypes except genotype 10, the concentration of free proline differs significantly between the treatment (drought) and the control.
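A Box-Cox transformation of the kind compared above is selected by maximising a profile log-likelihood over the power parameter lambda; the right-skewed "proline-like" data below are synthetic, and the Johnson family (which the study found preferable) is a richer system not sketched here.

```python
import numpy as np

rng = np.random.default_rng(3)
# Right-skewed stand-in for free-proline concentrations (not the study's data).
proline = rng.lognormal(mean=1.0, sigma=0.6, size=200)

def boxcox(x, lam):
    """Box-Cox power transform; the lam == 0 case is the log limit."""
    return np.log(x) if lam == 0 else (x ** lam - 1) / lam

def boxcox_loglik(x, lam):
    """Profile log-likelihood of the Box-Cox normal model at lambda."""
    y = boxcox(x, lam)
    n = len(x)
    return -n / 2 * np.log(y.var()) + (lam - 1) * np.log(x).sum()

lams = np.linspace(-2, 2, 401)
best = max(lams, key=lambda l: boxcox_loglik(proline, l))
transformed = boxcox(proline, best)
print(round(float(best), 2))   # near 0 for lognormal-like data
```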

  19. USING STATISTICAL PROCESS CONTROL AND SIX SIGMA TO CRITICALLY ANALYSE SAFETY OF HELICAL SPRINGS: A RAILWAY CASE STUDY

    Directory of Open Access Journals (Sweden)

    Fulufhelo Ṋemavhola

    2017-09-01

    Full Text Available This paper examines the life-quality evaluation of helical coil springs in the railway industry, which affects the safety of transporting goods and people. The types of spring considered are the external spring, the internal spring and the stabiliser spring. Statistical process control was utilised as the fundamental instrument in the investigation. Measurements were performed using a measuring tape, dynamic actuators and a vernier caliper. The purpose of this research was to examine the usability of old helical springs found in a railway environment. The goal of the experiment was to obtain factual statistical information to determine the life quality of the helical springs used in the railroad transportation environment. Six sigma principles were additionally applied in this paper. According to the six sigma measurement analysis, only the stabiliser and inner springs met the six sigma requirements for coil bar diameter. It is concluded that the coil springs should be replaced, as they do not meet the six sigma requirements.
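A six sigma capability check of the kind described above reduces to the Cp and Cpk indices computed from measurements against specification limits; the coil-bar diameters and limits below are stand-ins, not the measured railway-spring data.

```python
import math

# Illustrative coil-bar diameter measurements (mm) and specification limits.
diameters = [35.8, 36.1, 36.0, 35.9, 36.2, 36.0, 35.7, 36.1, 36.0, 35.9]
lsl, usl = 35.5, 36.5          # lower / upper specification limits

n = len(diameters)
mean = sum(diameters) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diameters) / (n - 1))

cp = (usl - lsl) / (6 * sd)                    # capability of a centred process
cpk = min(usl - mean, mean - lsl) / (3 * sd)   # capability allowing off-centring
print(round(cp, 2), round(cpk, 2))             # six sigma quality needs Cp >= 2
```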

  20. Notes on Elaphoglossum (Lomariopsidaceae section Polytrichia subsection Hybrida in Mexico and Central America

    Directory of Open Access Journals (Sweden)

    Alexander Fco. Rojas-Alvarado

    2003-03-01

    Full Text Available In Elaphoglossum sect. Polytrichia subsect. Hybrida six new species are described: E. angustioblongum A. Rojas, E. baquianorum A. Rojas, E. cotoi A. Rojas, E. jinoteganum A. Rojas, E. neeanum A. Rojas and E. silencioanum A. Rojas. A new combination is made for Elaphoglossum mexicanum (E. Fourn.) A. Rojas. Two species are reported: E. barbatum (H. Karst.) Hieron. and E. scolopendrifolium (Raddi) J. Sm. Two species are redefined: E. erinaceum (Fée) T. Moore and E. tambillense (Hook.) T. Moore. E. pallidum (Baker ex Jenman) C. Chr. is excluded from Mexico and Central America. Of the new species, only E. neeanum occurs outside the region. A key is given to these species in Mexico and Central America.

  1. Point processes statistics of stable isotopes: analysing water uptake patterns in a mixed stand of Aleppo pine and Holm oak

    Directory of Open Access Journals (Sweden)

    Carles Comas

    2015-04-01

    Full Text Available Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m2). We tested the hypothesis that both species take up water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency, depending on whether neighbouring trees belong to one species or the other. Material and methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair-correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a mark point pattern. Main results: Values for Q. ilex (δ18O = –5.3 ± 0.2‰, δ2H = –54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = –1.2 ± 0.2‰, δ2H = –25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.

  2. Influence of Immersion Conditions on The Tensile Strength of Recycled Kevlar®/Polyester/Low-Melting-Point Polyester Nonwoven Geotextiles through Applying Statistical Analyses

    Directory of Open Access Journals (Sweden)

    Jing-Chzi Hsieh

    2016-05-01

    Full Text Available Recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET) nonwoven geotextiles were immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength was then tested for the various immersion periods and temperatures, in order to determine their durability to chemicals. For the purpose of analyzing the possible factors that influence the mechanical properties of geotextiles under diverse environmental conditions, the experimental results and statistical analyses are combined in this study. Influences of the content of recycled Kevlar® fibers, the implementation of thermal treatment, and the immersion period on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles are examined, after which their levels of influence are statistically determined by performing multiple regression analyses. According to the results, the tensile strength of the nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and by thermal treatment.
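A multiple regression of the kind mentioned above can be sketched as an ordinary least-squares fit of tensile strength on the three factors; the design, coefficients and noise below are invented for illustration, not the geotextile measurements.

```python
import numpy as np

# Hypothetical design: tensile strength (kN/m) as a function of recycled
# Kevlar(R) content (%), thermal treatment (0/1) and immersion period (months).
rng = np.random.default_rng(7)
n = 60
kevlar = rng.uniform(0, 30, n)
thermal = rng.integers(0, 2, n).astype(float)
months = rng.uniform(0, 4, n)
strength = 12.0 + 0.20 * kevlar + 1.5 * thermal - 0.8 * months + rng.normal(0, 0.5, n)

# Ordinary least squares via the design matrix [1, kevlar, thermal, months].
X = np.column_stack([np.ones(n), kevlar, thermal, months])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
print(np.round(coef, 2))   # intercept, Kevlar, thermal, immersion effects
```

The signs of the fitted coefficients mirror the study's qualitative finding: Kevlar® content and thermal treatment raise strength, while immersion lowers it.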

  3. Statistical and molecular analyses of evolutionary significance of red-green color vision and color blindness in vertebrates.

    Science.gov (United States)

    Yokoyama, Shozo; Takenaka, Naomi

    2005-04-01

    Red-green color vision is strongly suspected to enhance the survival of its possessors. Despite being red-green color blind, however, many species have successfully competed in nature, which brings into question the evolutionary advantage of achieving red-green color vision. Here, we propose a new method of identifying positive selection at individual amino acid sites with the premise that if positive Darwinian selection has driven the evolution of the protein under consideration, then it should be found mostly at the branches in the phylogenetic tree where its function had changed. The statistical and molecular methods have been applied to 29 visual pigments with the wavelengths of maximal absorption at approximately 510-540 nm (green- or middle wavelength-sensitive [MWS] pigments) and at approximately 560 nm (red- or long wavelength-sensitive [LWS] pigments), which are sampled from a diverse range of vertebrate species. The results show that the MWS pigments are positively selected through amino acid replacements S180A, Y277F, and T285A and that the LWS pigments have been subjected to strong evolutionary conservation. The fact that these positively selected M/LWS pigments are found not only in animals with red-green color vision but also in those with red-green color blindness strongly suggests that both red-green color vision and color blindness have undergone adaptive evolution independently in different species.

  4. Statistical evaluation of the performance of gridded monthly precipitation products from reanalysis data, satellite estimates, and merged analyses over China

    Science.gov (United States)

    Deng, Xueliang; Nie, Suping; Deng, Weitao; Cao, Weihua

    2017-04-01

    In this study, we compared the following four different gridded monthly precipitation products: the National Centers for Environmental Prediction version 2 (NCEP-2) reanalysis data, the satellite-based Climate Prediction Center Morphing technique (CMORPH) data, the merged satellite-gauge Global Precipitation Climatology Project (GPCP) data, and the merged satellite-gauge-model data from the Beijing Climate Center Merged Estimation of Precipitation (BMEP). We evaluated the performances of these products using monthly precipitation observations spanning the period of January 2003 to December 2013 from a dense, national, rain gauge network in China. Our assessment involved several statistical techniques, including spatial pattern, temporal variation, bias, root-mean-square error (RMSE), and correlation coefficient (CC) analysis. The results show that NCEP-2, GPCP, and BMEP generally overestimate monthly precipitation at the national scale and CMORPH underestimates it. However, all of the datasets successfully characterized the northwest to southeast increase in the monthly precipitation over China. Because they include precipitation gauge information from the Global Telecommunication System (GTS) network, GPCP and BMEP have much smaller biases, lower RMSEs, and higher CCs than NCEP-2 and CMORPH. When the seasonal and regional variations are considered, NCEP-2 has a larger error over southern China during the summer. CMORPH poorly reproduces the magnitude of the precipitation over southeastern China and the temporal correlation over western and northwestern China during all seasons. BMEP has a lower RMSE and higher CC than GPCP over eastern and southern China, where the station network is dense. In contrast, BMEP has a lower CC than GPCP over western and northwestern China, where the gauge network is relatively sparse.
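The bias, RMSE, and correlation-coefficient metrics used in this evaluation are straightforward to compute; the gauge and product values below are hypothetical, not the paper's network data:

```python
import math

def verify(product, gauge):
    """Bias, RMSE and Pearson correlation (CC) of a gridded precipitation
    product against rain-gauge observations."""
    n = len(gauge)
    bias = sum(p - g for p, g in zip(product, gauge)) / n
    rmse = math.sqrt(sum((p - g) ** 2 for p, g in zip(product, gauge)) / n)
    mp, mg = sum(product) / n, sum(gauge) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(product, gauge))
    sp = math.sqrt(sum((p - mp) ** 2 for p in product))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gauge))
    return bias, rmse, cov / (sp * sg)

# Hypothetical monthly precipitation totals in mm
gauge = [30.0, 55.0, 120.0, 80.0]
product = [35.0, 60.0, 110.0, 95.0]
bias, rmse, cc = verify(product, gauge)
```

A positive bias with a high CC, as in this toy case, matches the paper's pattern for NCEP-2, GPCP, and BMEP: systematic overestimation despite a correct spatial/temporal structure.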

  5. Combining the Power of Statistical Analyses and Community Interviews to Identify Adoption Barriers for Stormwater Best-Management Practices

    Science.gov (United States)

    Hoover, F. A.; Bowling, L. C.; Prokopy, L. S.

    2015-12-01

    Urban stormwater is an on-going management concern in municipalities of all sizes. In both combined and separated sewer systems, pollutants from stormwater runoff enter the natural waterway system during heavy rain events. Urban flooding during frequent and more intense storms is also a growing concern. Therefore, stormwater best-management practices (BMPs) are being implemented in efforts to reduce and manage stormwater pollution and overflow. The majority of BMP water quality studies focus on the small-scale, individual effects of the BMP and the change in water quality directly from the runoff of these infrastructures. At the watershed scale, it is difficult to establish statistically whether or not these BMPs are making a difference in water quality, given that watershed-scale monitoring is often costly and time-consuming, relying on significant sources of funds which a city may not have. Hence, there is a need to quantify the level of sampling needed to detect the water quality impact of BMPs at the watershed scale. In this study, a power analysis was performed on data from an urban watershed in Lafayette, Indiana, to determine the frequency of sampling required to detect a significant change in water quality measurements. Using the R platform, we found that detecting a significant change in watershed-level water quality would require hundreds of weekly measurements, even when improvement is present. The second part of this study investigates whether the difficulty in demonstrating water quality change represents a barrier to adoption of stormwater BMPs. Semi-structured interviews of community residents and organizations in Chicago, IL are being used to investigate residents' understanding of water quality and best-management practices and to identify their attitudes and perceptions towards stormwater BMPs. Second-round interviews will examine how information on uncertainty in water quality improvements influences their BMP attitudes and perceptions.
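A back-of-the-envelope version of such a power calculation (normal approximation, two-sample comparison at roughly 5% significance and 80% power) shows why the required number of weekly samples can run into the hundreds. The delta and sigma below are illustrative, not the study's values:

```python
import math

def samples_needed(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Sample size per group to detect a mean shift of `delta` given
    standard deviation `sigma`, at ~5% significance and ~80% power
    (normal approximation): n = 2 * ((z_a + z_b) * sigma / delta)**2."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical: weekly concentrations are noisy (sigma) relative to the
# improvement a BMP delivers (delta), so the required n runs into the hundreds
n = samples_needed(delta=0.5, sigma=2.0)
```

When the signal-to-noise ratio delta/sigma is 0.25, as assumed here, each group needs 251 measurements, which at weekly sampling means years of monitoring.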

  6. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed-gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed-gas risks, the USS Perry deep rebreather (RB) exploration dive, a world-record open-circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed-gas diving, in both recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed-gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance-reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep-stop data. Supercomputing resources are requisite to connect model and data in application.
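The "Monte Carlo-like sampling scheme ... as a coupled variance reduction technique" can be illustrated with antithetic variates, a standard variance-reduction device. The risk curve below is a made-up stand-in for a dive-profile risk function, not the RGBM's:

```python
import random
import math

def mc_estimate(f, n_pairs, seed=1):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1), using antithetic
    pairs (u, 1 - u) -- a classic variance-reduction device."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (f(u) + f(1.0 - u))
    return total / n_pairs

# Made-up smooth "risk" curve standing in for a dive-profile risk function
risk = lambda u: 1.0 - math.exp(-3.0 * u)
est = mc_estimate(risk, 2000)   # true mean is 1 - (1 - e**-3)/3, about 0.6833
```

Because the function is monotone, each antithetic pair is negatively correlated, so the estimator converges with far fewer samples than plain sampling, which is the point of using it as a fast cross-check.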

  7. Statistical analyses and risk assessment of potentially toxic metals (PTMs) in children’s toys

    Directory of Open Access Journals (Sweden)

    Aderonke O. Oyeyiola

    2017-11-01

    Full Text Available Chemical exposure of children, especially from toys, is an emerging concern. The concentration and availability of potentially toxic metals (PTMs) in children’s toys were determined to assess the risk that these metals pose to children. Samples of 25 toys imported from China to Nigeria were purchased. Ternary acid digestion, followed by atomic absorption spectrophotometry, was used to determine the concentrations of PTMs in the samples. Simulations of saline and stomach acid extraction conditions were performed to determine the concentrations of PTMs that could leach out from the toys during children’s mouthing behaviours (available PTMs), including chewing, sucking and swallowing. The total concentrations of PTMs in the toys ranged from 3.55–40.7, 3.21–38.2, 9.78–159, 3.55–11.2, and 36.1–106 mg/kg for Cd, Cr, Cu, Ni, and Pb, respectively. Availability studies showed concentrations ranging from 2.60–5.60 mg/kg for Pb, 0.53–2.03 mg/kg for Cd and 0.15–2.88 mg/kg for Ni after saline extraction, and the concentrations after stomach acid extractions ranged from 3.33–7.10 mg/kg, 1.15–3.15 mg/kg and 1.33–1.81 mg/kg for Pb, Cd and Ni, respectively. Statistical analysis showed a positive correlation between the total concentration of PTMs and toys made from PVC materials. The risk assessment study showed that Cd posed the highest risk, with a hazard index (HI) as high as 4.50 for saline extraction. The study revealed that more precaution is needed during the manufacture of children’s toys. Keywords: Availability, Polyvinyl chloride (PVC), Potentially toxic metals (PTMs), Risk assessment, Toys
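A hazard index of the kind reported here is the sum over metals of hazard quotients, i.e. an estimated chronic daily intake divided by a reference dose. All numbers below, including the reference doses, are illustrative placeholders, not the study's values:

```python
def hazard_index(conc_mg_per_kg, intake_kg_per_day, body_weight_kg, rfd):
    """Hazard quotient per metal (chronic daily intake / reference dose),
    summed into a hazard index; HI > 1 flags potential non-carcinogenic risk."""
    hi = 0.0
    for metal, c in conc_mg_per_kg.items():
        cdi = c * intake_kg_per_day / body_weight_kg  # mg per kg body weight per day
        hi += cdi / rfd[metal]
    return hi

# Hypothetical extractable concentrations and reference doses (illustrative only)
conc = {"Cd": 2.0, "Pb": 5.0, "Ni": 1.0}       # mg/kg of toy material
rfd = {"Cd": 0.001, "Pb": 0.0035, "Ni": 0.02}  # mg/(kg*day), placeholder values
hi = hazard_index(conc, intake_kg_per_day=1e-4, body_weight_kg=15.0, rfd=rfd)
```

With the tiny assumed ingestion rate the toy HI stays below 1; the study's HI of 4.50 for Cd corresponds to exposure assumptions several-fold higher than this sketch.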

  8. Identification of novel risk factors for community-acquired Clostridium difficile infection using spatial statistics and geographic information system analyses.

    Directory of Open Access Journals (Sweden)

    Deverick J Anderson

    Full Text Available The rate of community-acquired Clostridium difficile infection (CA-CDI) is increasing. While receipt of antibiotics remains an important risk factor for CDI, studies related to acquisition of C. difficile outside of hospitals are lacking. As a result, risk factors for exposure to C. difficile in community settings have been inadequately studied. To identify novel environmental risk factors for CA-CDI, we performed a population-based retrospective cohort study of patients with CA-CDI from 1/1/2007 through 12/31/2014 in a 10-county area in central North Carolina. 360 Census Tracts in these 10 counties were used as the demographic Geographic Information System (GIS) base map. Longitude and latitude (X, Y) coordinates were generated from patient home addresses and overlaid onto Census Tract polygons using ArcGIS; ArcView was used to assess "hot spots" or clusters of CA-CDI. We then constructed a mixed hierarchical model to identify environmental variables independently associated with increased rates of CA-CDI. A total of 1,895 unique patients met our criteria for CA-CDI. The mean patient age was 54.5 years; 62% were female and 70% were Caucasian. 402 (21%) patient addresses were located in "hot spots" or clusters of CA-CDI (p < 0.001). "Hot spot" census tracts were scattered throughout the 10 counties. After adjusting for clustering and population density, age ≥ 60 years (p = 0.03), race (p < 0.001), proximity to a livestock farm (p = 0.01), proximity to farming raw materials services (p = 0.02), and proximity to a nursing home (p = 0.04) were independently associated with increased rates of CA-CDI. Our study is the first to use spatial statistics and mixed models to identify important environmental risk factors for acquisition of C. difficile and adds to the growing evidence that farm practices may put patients at risk for important drug-resistant infections.
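Hot-spot detection of the kind done in ArcView can be approximated by binning geocoded cases into grid cells and flagging unusually dense cells; the coordinates, cell size, and threshold below are hypothetical, and formal methods (e.g. local Getis-Ord statistics) add significance testing on top of this idea:

```python
def hot_spots(cases, cell_size, threshold):
    """Bin case coordinates into square grid cells and flag cells whose
    count reaches a threshold -- a crude stand-in for formal cluster
    statistics such as local Getis-Ord or spatial scan statistics."""
    counts = {}
    for x, y in cases:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, c in counts.items() if c >= threshold}

# Hypothetical geocoded patient addresses (projected coordinates, km)
cases = [(0.2, 0.3), (0.4, 0.1), (0.6, 0.7), (5.1, 5.2), (9.0, 1.0)]
hot = hot_spots(cases, cell_size=1.0, threshold=3)
```

Here three of five cases fall in one 1 km cell, so only that cell is flagged, mirroring how 21% of the study's addresses concentrated in a minority of tracts.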

  9. Cost and quality effectiveness of objective-based and statistically-based quality control for volatile organic compounds analyses of gases

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J.T.; Crowder, C.A.; Connolly, M.J.

    1994-12-31

    Gas samples from drums of radioactive waste at the Department of Energy (DOE) Idaho National Engineering Laboratory are being characterized for 29 volatile organic compounds to determine the feasibility of storing the waste in DOE's Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. Quality requirements for the gas chromatography (GC) and GC/mass spectrometry chemical methods used to analyze the waste are specified in the Quality Assurance Program Plan for the WIPP Experimental Waste Characterization Program. Quality requirements consist of both objective criteria (data quality objectives, DQOs) and statistical criteria (process control). The DQOs apply to routine sample analyses, while the statistical criteria serve to determine and monitor the precision and accuracy (P&A) of the analysis methods and are also used to assign upper confidence limits to measurement results close to action levels. After over two years and more than 1000 sample analyses, there are two general conclusions concerning the two approaches to quality control: (1) objective criteria (e.g., ±25% precision, ±30% accuracy) based on customer needs and the usually prescribed criteria for similar EPA-approved methods are consistently attained during routine analyses; (2) statistical criteria based on short-term method performance are almost an order of magnitude more stringent than objective criteria and are difficult to satisfy following the same routine laboratory procedures which satisfy the objective criteria. A more cost-effective and representative approach to establishing statistical method performance criteria would be either to utilize a moving average of P&A from control samples over a several-month time period, or to determine within-sample variation by one-way analysis of variance of several months' replicate sample analysis results, or both. Confidence intervals for results near action levels could also be determined by replicate analysis of the sample in question.
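The proposed moving average of P&A from control samples might look like the following sketch, which reports a rolling mean with plus/minus two standard deviation limits; the monthly recovery values are hypothetical:

```python
import statistics

def moving_limits(recoveries, window):
    """Rolling mean and +/-2s control limits of monthly control-sample
    recoveries -- one way to set statistical criteria from longer-term
    performance instead of short-term precision."""
    out = []
    for i in range(window, len(recoveries) + 1):
        chunk = recoveries[i - window:i]
        m = statistics.mean(chunk)
        s = statistics.stdev(chunk)
        out.append((m, m - 2 * s, m + 2 * s))
    return out

# Hypothetical monthly percent recoveries of a control standard
recoveries = [98.0, 102.0, 95.0, 101.0, 99.0, 104.0]
limits = moving_limits(recoveries, window=4)
```

Basing the limits on a multi-month window, as the abstract suggests, makes them wider and more representative than limits derived from a single short-term study.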

  10. Update and Improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods

    Energy Technology Data Exchange (ETDEWEB)

    Tai Asayama

    2009-10-26

    This report describes the results of the investigation of Task 10 of the DOE/ASME Materials NGNP/Generation IV Project, based on a contract between ASME Standards Technology, LLC (ASME ST-LLC) and the Japan Atomic Energy Agency (JAEA). Task 10 is to Update and Improve Subsection NH -- Alternative Simplified Creep-Fatigue Design Methods. Five newly proposed, promising creep-fatigue evaluation methods were investigated: (1) the modified ductility exhaustion method, (2) the strain range separation method, (3) an approach for pressure vessel application, (4) a hybrid method of time fraction and ductility exhaustion, and (5) a simplified model test approach. The outlines of those methods are presented first, and the predictability of experimental results by these methods is demonstrated using the creep-fatigue data collected in previous Tasks 3 and 5. All the methods (except the simplified model test approach, which is not ready for application) predicted experimental results fairly accurately. On the other hand, predicted creep-fatigue life in long-term regions showed considerable differences among the methodologies. These differences come from the concepts each method is based on. All the new methods investigated in this report have advantages over the currently employed time fraction rule and offer technical insights that deserve serious consideration in the improvement of creep-fatigue evaluation procedures. The main points of the modified ductility exhaustion method, the strain range separation method, the approach for pressure vessel application and the hybrid method can be reflected in the improvement of the current time fraction rule. The simplified model test approach would offer entirely new advantages, including robustness and simplicity, which are definitely attractive, but this approach is yet to be validated for implementation at this point. Therefore, this report recommends the following two steps as a course of improvement of NH based on newly proposed creep-fatigue evaluation
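The time fraction rule the report discusses sums linear damage fractions: creep (hold time over rupture time) plus fatigue (cycles over allowable cycles). A minimal sketch follows; the allowable values are invented placeholders, not ASME Subsection NH design data:

```python
def creep_fatigue_damage(cycles, hold_times_h, allowable_cycles, rupture_time_h):
    """Linear damage summation in the spirit of the time-fraction rule:
    fatigue fraction n/Nf plus creep fraction sum(t/tr)."""
    d_fatigue = cycles / allowable_cycles
    d_creep = sum(t / rupture_time_h for t in hold_times_h)
    return d_fatigue + d_creep

# The allowables below are invented placeholders, NOT ASME Subsection NH values
D = creep_fatigue_damage(cycles=2000, hold_times_h=[10.0] * 2000,
                         allowable_cycles=1.0e4, rupture_time_h=1.0e5)
```

The alternative methods in the report (ductility exhaustion and its variants) replace the creep term with a different damage measure, which is why their long-term life predictions diverge from the time fraction rule's.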

  11. Taxonomical and nomenclatural notes on Centaurea: A proposal of classification, a description of new sections and subsections, and a species list of the redefined section Centaurea

    Directory of Open Access Journals (Sweden)

    Hilpold, A.

    2014-12-01

    Full Text Available In this paper, we summarize the results of our long-term research on the genus Centaurea. The first part of the paper deals with the overall classification of the genus, which we propose to divide into three subgenera: subgenus Centaurea, subgenus Cyanus and subgenus Lopholoma. The second part of this publication gives a compilation of the species of the redefined section Centaurea, a group that includes the former sections Acrolophus (sect. Centaurea s. str.), Phalolepis and Willkommia, together with taxonomical, geographical, ecological and karyological considerations. Finally, new descriptions or nomenclatural combinations are proposed to correlate nomenclature with the new classification: a new combination (sect. Acrocentron subsect. Chamaecyanus) is proposed in subgenus Lopholoma; three new sections (sects. Akamantis, Cnicus, and Hyerapolitanae) are described in subgenus Centaurea; two subsections (subsects. Phalolepis and Willkommia) in sect. Centaurea; and three subsections (subsects. Exarata, Jacea, and Subtilis) in sect. Phrygia.

  12. Unlocking Data for Statistical Analyses and Data Mining: Generic Case Extraction of Clinical Items from i2b2 and tranSMART.

    Science.gov (United States)

    Firnkorn, Daniel; Merker, Sebastian; Ganzinger, Matthias; Muley, Thomas; Knaup, Petra

    2016-01-01

    In medical science, modern IT concepts are increasingly important for gathering new findings from complex diseases. Data warehouses (DWHs) as central data repository systems play a key role by providing standardized, high-quality and secure medical data for effective analyses. However, DWHs in medicine must fulfil various requirements concerning data privacy and the ability to describe the complexity of (rare) disease phenomena. i2b2 and tranSMART are free DWH solutions developed especially for medical informatics purposes, but several functionalities are not yet provided in a sufficient way. In fact, data import and export is still a major problem because of the diversity of schemas, parameter definitions and data quality, which are described differently in each single clinic. Further, statistical analyses inside i2b2 and tranSMART are possible, but restricted to the implemented functions. Thus, data export is needed to provide a data basis that can be directly included in statistics software like SPSS and SAS or data mining tools like Weka and RapidMiner. The standard export tools of i2b2 and tranSMART more or less create a database dump of key-value pairs which cannot be used immediately by the mentioned tools; they need an instance-based or case-based representation of each patient. To overcome this shortcoming, we developed a concept called Generic Case Extractor (GCE) which pivots the key-value pairs of each clinical fact into a row-oriented format for each patient, sufficient to enable analyses in a broader context. Complex pivotisation routines were therefore necessary to ensure temporal consistency, especially in terms of different data sets and the occurrence of identical but repeated parameters like follow-up data. GCE is embedded inside a comprehensive software platform for systems medicine.
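The pivot step GCE performs, from key-value facts to one case-based row per patient, with suffixes disambiguating repeated (follow-up) parameters, can be sketched as follows. The fact tuples are hypothetical and do not reflect i2b2's actual export schema:

```python
def pivot_cases(facts):
    """Pivot (patient_id, parameter, value) key-value facts into one
    row (dict) per patient -- the case-based layout that SPSS/Weka-style
    tools expect. Repeated parameters get a numeric suffix (follow-ups)."""
    cases = {}
    for pid, param, value in facts:
        row = cases.setdefault(pid, {})
        key, k = param, 2
        while key in row:                 # same parameter observed again
            key, k = f"{param}_{k}", k + 1
        row[key] = value
    return cases

# Hypothetical observation facts in a key-value layout
facts = [
    ("p1", "age", 63), ("p1", "fev1", 2.1), ("p1", "fev1", 1.9),  # follow-up
    ("p2", "age", 57), ("p2", "fev1", 3.0),
]
cases = pivot_cases(facts)
```

Preserving the order of repeated measurements, as the suffixing does here, is the simple core of the "temporal consistency" problem the abstract describes.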

  13. POLLEN AND SEED SURFACE MORPHOLOGY IN SOME REPRESENTATIVES OF THE GENUS RHODODENDRON SUBSECT. RHODORASTRUM (ERICACEAE) IN THE RUSSIAN FAR EAST

    Directory of Open Access Journals (Sweden)

    I. M. Koksheeva

    2015-05-01

    Full Text Available A comparative study of the pollen and seed morphology of three species of Rhododendron L. subsect. Rhodorastrum (Maxim.) Cullen (Rh. dauricum L., Rh. mucronulatum Turcz., Rh. sichotense Pojark.) is performed. Discriminant analysis of the full set of morphometric characters of pollen and seeds confirmed that all three species are distinct from each other. Differences in pollen are observed in the type of sculpture (granulate, rugulate, microrugulate) and in the diameter of tetrads. The coefficient of elongation of the exotesta cells is established as a valuable morphometric character.
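Discriminant analysis separates groups in the space of morphometric characters. A simplified two-class Fisher discriminant with a diagonal pooled covariance is sketched below; the measurements are invented for illustration, not the paper's data:

```python
import statistics

def fisher_direction(class_a, class_b):
    """Fisher's linear discriminant simplified to a diagonal pooled
    covariance: w_j = (mean_bj - mean_aj) / pooled variance of feature j."""
    p = len(class_a[0])
    w = []
    for j in range(p):
        a = [row[j] for row in class_a]
        b = [row[j] for row in class_b]
        s2 = (statistics.variance(a) + statistics.variance(b)) / 2
        w.append((statistics.mean(b) - statistics.mean(a)) / s2)
    return w

def score(w, x):
    """Project a specimen onto the discriminant axis."""
    return sum(wj * xj for wj, xj in zip(w, x))

# Invented morphometric characters: (tetrad diameter um, exotesta cell elongation)
dauricum = [(38.0, 2.1), (39.0, 2.0), (37.5, 2.2)]
sichotense = [(42.0, 2.8), (43.0, 2.9), (42.5, 2.7)]
w = fisher_direction(dauricum, sichotense)
```

When the projected scores of the two species do not overlap, as with these toy values, the discriminant confirms the species' distinctness in morphometric space.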

  14. Performing statistical analyses on quantitative data in Taverna workflows: An example using R and maxdBrowse to identify differentially-expressed genes from microarray data

    Directory of Open Access Journals (Sweden)

    Pocock Matthew R

    2008-08-01

    Full Text Available Abstract Background There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Results Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna

  15. Performing statistical analyses on quantitative data in Taverna workflows: an example using R and maxdBrowse to identify differentially-expressed genes from microarray data.

    Science.gov (United States)

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-08-07

    There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Taverna can be used by data analysis

  16. Performing statistical analyses on quantitative data in Taverna workflows: An example using R and maxdBrowse to identify differentially-expressed genes from microarray data

    Science.gov (United States)

    Li, Peter; Castrillo, Juan I; Velarde, Giles; Wassink, Ingo; Soiland-Reyes, Stian; Owen, Stuart; Withers, David; Oinn, Tom; Pocock, Matthew R; Goble, Carole A; Oliver, Stephen G; Kell, Douglas B

    2008-01-01

    Background There has been a dramatic increase in the amount of quantitative data derived from the measurement of changes at different levels of biological complexity during the post-genomic era. However, there are a number of issues associated with the use of computational tools employed for the analysis of such data. For example, computational tools such as R and MATLAB require prior knowledge of their programming languages in order to implement statistical analyses on data. Combining two or more tools in an analysis may also be problematic since data may have to be manually copied and pasted between separate user interfaces for each tool. Furthermore, this transfer of data may require a reconciliation step in order for there to be interoperability between computational tools. Results Developments in the Taverna workflow system have enabled pipelines to be constructed and enacted for generic and ad hoc analyses of quantitative data. Here, we present an example of such a workflow involving the statistical identification of differentially-expressed genes from microarray data followed by the annotation of their relationships to cellular processes. This workflow makes use of customised maxdBrowse web services, a system that allows Taverna to query and retrieve gene expression data from the maxdLoad2 microarray database. These data are then analysed by R to identify differentially-expressed genes using the Taverna RShell processor which has been developed for invoking this tool when it has been deployed as a service using the RServe library. In addition, the workflow uses Beanshell scripts to reconcile mismatches of data between services as well as to implement a form of user interaction for selecting subsets of microarray data for analysis as part of the workflow execution. A new plugin system in the Taverna software architecture is demonstrated by the use of renderers for displaying PDF files and CSV formatted data within the Taverna workbench. Conclusion Taverna can

  17. Development of a high pressure automated lag time apparatus for experimental studies and statistical analyses of nucleation and growth of gas hydrates.

    Science.gov (United States)

    Maeda, Nobuo; Wells, Darrell; Becker, Norman C; Hartley, Patrick G; Wilson, Peter W; Haymet, Anthony D J; Kozielski, Karen A

    2011-06-01

    Nucleation in a supercooled or a supersaturated medium is a stochastic event, and hence statistical analyses are required for the understanding and prediction of such events. The development of reliable statistical methods for quantifying nucleation probability is highly desirable for applications where control of nucleation is required. The nucleation of gas hydrates in supercooled conditions is one such application. We describe the design and development of a high pressure automated lag time apparatus (HP-ALTA) for the statistical study of gas hydrate nucleation and growth at elevated gas pressures. The apparatus allows a small volume (≈150 μl) of water to be cooled at a controlled rate in a pressurized gas atmosphere, and the temperature of gas hydrate nucleation, T(f), to be detected. The instrument then raises the sample temperature under controlled conditions to facilitate dissociation of the gas hydrate before repeating the cooling-nucleation cycle again. This process of forming and dissociating gas hydrates can be automatically repeated for a statistically significant (>100) number of nucleation events. The HP-ALTA can be operated in two modes, one for the detection of hydrate in the bulk of the sample, under a stirring action, and the other for the detection of the formation of hydrate films across the water-gas interface of a quiescent sample. The technique can be applied to the study of several parameters, such as gas pressure, cooling rate and gas composition, on the gas hydrate nucleation probability distribution for supercooled water samples. © 2011 American Institute of Physics
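Repeated nucleation runs of the kind the HP-ALTA automates are typically summarised as a survival curve: the fraction of runs still unfrozen as the sample is cooled. A minimal sketch, with hypothetical nucleation temperatures rather than instrument data:

```python
def survival_curve(freeze_temps):
    """Empirical fraction of runs still unfrozen as the sample is cooled --
    the usual summary of ALTA-type repeated nucleation experiments."""
    temps = sorted(freeze_temps, reverse=True)   # warmest nucleation first
    n = len(temps)
    return [(t, 1 - (i + 1) / n) for i, t in enumerate(temps)]

# Hypothetical hydrate nucleation temperatures (deg C) from repeated cycles
tf = [4.1, 3.5, 3.9, 2.8, 3.2, 3.7]
curve = survival_curve(tf)
```

With >100 automated cycles per condition, as the instrument provides, this curve becomes a usable estimate of the nucleation probability distribution versus supercooling.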

  18. Statistical methods for cost-effectiveness analyses that use data from cluster randomized trials: a systematic review and checklist for critical appraisal.

    Science.gov (United States)

    Gomes, Manuel; Grieve, Richard; Nixon, Richard; Edmunds, W J

    2012-01-01

    The best data for cost-effectiveness analyses (CEAs) of group-level interventions often come from cluster randomized trials (CRTs), where randomization is by cluster (e.g., the hospital attended), not by individual. Statistical methods for these CEAs need to recognize both the correlation between costs and outcomes and that these data may be dependent on the cluster. General checklists and methodological guidance for critically appraising CEAs ignore these issues. This article develops a new checklist and applies it in a systematic review of CEAs that use CRTs. The authors developed a checklist for CEAs that use CRTs, informed by a conceptual review of statistical methods. This checklist included criteria such as whether the analysis allowed for both clustering and the correlation between individuals' costs and outcomes. The authors undertook a systematic literature review of full economic evaluations that used CRTs. The quality of studies was assessed with the new checklist and by the "Drummond checklist." The authors identified 62 papers that met the inclusion criteria. On average, studies satisfied 9 of the 10 criteria of the Drummond checklist but only 20% of the criteria of the new checklist. More than 40% of studies adopted statistical methods that completely ignored clustering, and 75% disregarded any correlation between costs and outcomes. Only 4 studies employed appropriate statistical methods that allowed for both clustering and correlation. Most economic evaluations that use data from CRTs ignored clustering or correlation. Statistical methods that address these issues are available, and their use should be encouraged. The new checklist can supplement generic CEA guidelines and highlight where research practice can be improved.
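The variance inflation that clustering causes is commonly captured by the design effect DEFF = 1 + (m - 1) * ICC, where m is the cluster size and ICC the intracluster correlation; analyses that ignore clustering implicitly treat DEFF as 1 and so overstate the effective sample size. A sketch with hypothetical trial numbers:

```python
def effective_sample_size(n_individuals, cluster_size, icc):
    """Design effect DEFF = 1 + (m - 1) * ICC and the resulting effective
    sample size n / DEFF for a cluster randomized trial."""
    deff = 1 + (cluster_size - 1) * icc
    return n_individuals / deff, deff

# Hypothetical CRT: 20 clusters of 50 patients, modest intracluster correlation
n_eff, deff = effective_sample_size(1000, cluster_size=50, icc=0.05)
```

Even a modest ICC of 0.05 shrinks 1000 nominal patients to under 300 effective ones here, which is why ignoring clustering, as 40% of the reviewed CEAs did, produces overconfident results.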

  19. Inferring the origin of rare fruit distillates from compositional data using multivariate statistical analyses and the identification of new flavour constituents.

    Science.gov (United States)

    Mihajilov-Krstev, Tatjana M; Denić, Marija S; Zlatković, Bojan K; Stankov-Jovanović, Vesna P; Mitić, Violeta D; Stojanović, Gordana S; Radulović, Niko S

    2015-04-01

    In Serbia, delicatessen fruit alcoholic drinks are produced from autochthonous fruit-bearing species such as cornelian cherry, blackberry, elderberry, wild strawberry, European wild apple, European blueberry and blackthorn fruits. There are no chemical data on many of these and herein we analysed volatile minor constituents of these rare fruit distillates. Our second goal was to determine possible chemical markers of these distillates through a statistical/multivariate treatment of the herein obtained and previously reported data. Detailed chemical analyses revealed a complex volatile profile of all studied fruit distillates with 371 identified compounds. A number of constituents were recognised as marker compounds for a particular distillate. Moreover, 33 of them represent newly detected flavour constituents in alcoholic beverages or, in general, in foodstuffs. With the aid of multivariate analyses, these volatile profiles were successfully exploited to infer the origin of raw materials used in the production of these spirits. It was also shown that all fruit distillates possessed weak antimicrobial properties. It seems that the aroma of these highly esteemed wild-fruit spirits depends on the subtle balance of various minor volatile compounds, whereby some of them are specific to a certain type of fruit distillate and enable their mutual distinction. © 2014 Society of Chemical Industry.

  20. Energy neutral: the human foot and ankle subsections combine to produce near zero net mechanical work during walking.

    Science.gov (United States)

    Takahashi, Kota Z; Worster, Kate; Bruening, Dustin A

    2017-11-13

    The human foot and ankle system is equipped with structures that can produce mechanical work through elastic (e.g., Achilles tendon, plantar fascia) or viscoelastic (e.g., heel pad) mechanisms, or by active muscle contractions. Yet, quantifying the work distribution among various subsections of the foot and ankle can be difficult, in large part due to a lack of objective methods for partitioning the forces acting underneath the stance foot. In this study, we deconstructed the mechanical work production during barefoot walking in a segment-by-segment manner (hallux, forefoot, hindfoot, and shank). This was accomplished by isolating the forces acting within each foot segment through controlling the placement of the participants' foot as it contacted a ground-mounted force platform. Combined with an analysis that incorporated non-rigid mechanics, we quantified the total work production distal to each of the four isolated segments. We found that various subsections within the foot and ankle showed disparate work distribution, particularly within structures distal to the hindfoot. When accounting for all sources of positive and negative work distal to the shank (i.e., ankle joint and all foot structures), these structures resembled an energy-neutral system that produced net mechanical work close to zero (-0.012 ± 0.054 J/kg).

  1. How to Make Nothing Out of Something: Analyses of the Impact of Study Sampling and Statistical Interpretation in Misleading Meta-Analytic Conclusions

    Directory of Open Access Journals (Sweden)

    Michael Robert Cunningham

    2016-10-01

    The limited resource model states that self-control is governed by a relatively finite set of inner resources on which people draw when exerting willpower. Once self-control resources have been used up or depleted, they are less available for other self-control tasks, leading to a decrement in subsequent self-control success. The depletion effect has been studied for over 20 years, tested or extended in more than 600 studies, and supported in an independent meta-analysis (Hagger, Wood, Stiff, and Chatzisarantis, 2010). Meta-analyses are supposed to reduce bias in literature reviews. Carter, Kofler, Forster, and McCullough’s (2015) meta-analysis, by contrast, included a series of questionable decisions involving sampling, methods, and data analysis. We provide quantitative analyses of key sampling issues: exclusion of many of the best depletion studies based on idiosyncratic criteria and the emphasis on mini meta-analyses with low statistical power as opposed to the overall depletion effect. We discuss two key methodological issues: failure to code for research quality, and the quantitative impact of weak studies by novice researchers. We discuss two key data analysis issues: questionable interpretation of the results of trim and fill and funnel plot asymmetry test procedures, and the use and misinterpretation of the untested Precision Effect Test (PET) and Precision Effect Estimate with Standard Error (PEESE) procedures. Despite these serious problems, the Carter et al. meta-analysis results actually indicate that there is a real depletion effect – contrary to their title.
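    The PET and PEESE procedures criticized above are, at their core, weighted regressions of effect size on the standard error (PET) or its square (PEESE), with the intercept taken as the bias-adjusted effect. A minimal sketch, assuming independent effect estimates with known standard errors (not the procedures' full specification):

```python
import numpy as np

def pet_peese(effects, ses):
    """PET: weighted least squares of effect size on standard error;
    the intercept estimates the effect as SE -> 0. PEESE regresses on
    SE^2 instead. Weights are inverse sampling variances (1/SE^2)."""
    effects = np.asarray(effects, float)
    ses = np.asarray(ses, float)
    w = np.sqrt(1.0 / ses**2)          # sqrt-weights for weighted lstsq

    def wls_intercept(x):
        X = np.column_stack([np.ones_like(x), x])
        beta, *_ = np.linalg.lstsq(X * w[:, None], effects * w, rcond=None)
        return beta[0]                  # intercept = adjusted estimate

    return {"PET": wls_intercept(ses), "PEESE": wls_intercept(ses**2)}
```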

  2. Region-of-interest analyses of one-dimensional biomechanical trajectories: bridging 0D and 1D theory, augmenting statistical power

    Directory of Open Access Journals (Sweden)

    Todd C. Pataky

    2016-11-01

    One-dimensional (1D) kinematic, force, and EMG trajectories are often analyzed using zero-dimensional (0D) metrics like local extrema. Recently whole-trajectory 1D methods have emerged in the literature as alternatives. Since 0D and 1D methods can yield qualitatively different results, the two approaches may appear to be theoretically distinct. The purposes of this paper were (a) to clarify that 0D and 1D approaches are actually just special cases of a more general region-of-interest (ROI) analysis framework, and (b) to demonstrate how ROIs can augment statistical power. We first simulated millions of smooth, random 1D datasets to validate theoretical predictions of the 0D, 1D and ROI approaches and to emphasize how ROIs provide a continuous bridge between 0D and 1D results. We then analyzed a variety of public datasets to demonstrate potential effects of ROIs on biomechanical conclusions. Results showed, first, that a priori ROI particulars can qualitatively affect the biomechanical conclusions that emerge from analyses and, second, that ROIs derived from exploratory/pilot analyses can detect smaller biomechanical effects than are detectable using full 1D methods. We recommend regarding ROIs, like data filtering particulars and Type I error rate, as parameters which can affect hypothesis testing results, and thus as sensitivity analysis tools to ensure arbitrary decisions do not influence scientific interpretations. Last, we describe open-source Python and MATLAB implementations of 1D ROI analysis for arbitrary experimental designs ranging from one-sample t tests to MANOVA.

  3. Region-of-interest analyses of one-dimensional biomechanical trajectories: bridging 0D and 1D theory, augmenting statistical power.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2016-01-01

    One-dimensional (1D) kinematic, force, and EMG trajectories are often analyzed using zero-dimensional (0D) metrics like local extrema. Recently whole-trajectory 1D methods have emerged in the literature as alternatives. Since 0D and 1D methods can yield qualitatively different results, the two approaches may appear to be theoretically distinct. The purposes of this paper were (a) to clarify that 0D and 1D approaches are actually just special cases of a more general region-of-interest (ROI) analysis framework, and (b) to demonstrate how ROIs can augment statistical power. We first simulated millions of smooth, random 1D datasets to validate theoretical predictions of the 0D, 1D and ROI approaches and to emphasize how ROIs provide a continuous bridge between 0D and 1D results. We then analyzed a variety of public datasets to demonstrate potential effects of ROIs on biomechanical conclusions. Results showed, first, that a priori ROI particulars can qualitatively affect the biomechanical conclusions that emerge from analyses and, second, that ROIs derived from exploratory/pilot analyses can detect smaller biomechanical effects than are detectable using full 1D methods. We recommend regarding ROIs, like data filtering particulars and Type I error rate, as parameters which can affect hypothesis testing results, and thus as sensitivity analysis tools to ensure arbitrary decisions do not influence scientific interpretations. Last, we describe open-source Python and MATLAB implementations of 1D ROI analysis for arbitrary experimental designs ranging from one-sample t tests to MANOVA.
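    The ROI idea above, with 0D and 1D analyses as special cases of a mask over the trajectory, can be illustrated with a small numpy sketch (not the authors' open-source implementation; valid inference over a multi-node ROI would additionally require a correction for the ROI's smoothness and extent):

```python
import numpy as np

def roi_tstat(trajA, trajB, roi):
    """Paired t statistic at each time node inside an ROI.
    trajA, trajB: (subjects, time) arrays; roi: boolean mask over time.
    A 0D analysis is an ROI with a single True node; a full-1D analysis
    is an ROI that is True everywhere."""
    d = (trajA - trajB)[:, roi]                       # paired differences
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
```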

  4. Multivariate statistical and lead isotopic analyses approach to identify heavy metal sources in topsoil from the industrial zone of Beijing Capital Iron and Steel Factory.

    Science.gov (United States)

    Zhu, Guangxu; Guo, Qingjun; Xiao, Huayun; Chen, Tongbin; Yang, Jun

    2017-06-01

    Heavy metals are considered toxic to humans and ecosystems. In the present study, heavy metal concentrations in soil were investigated using the single pollution index (PIi), the integrated Nemerow pollution index (PIN), and the geoaccumulation index (Igeo) to determine metal accumulation and its pollution status at the abandoned site of the Capital Iron and Steel Factory in Beijing and its surrounding area. Multivariate statistical analyses (principal component analysis and correlation analysis) and geostatistical analysis (ArcGIS tool), combined with stable Pb isotopic ratios, were applied to explore the characteristics of heavy metal pollution and the possible sources of pollutants. The results indicated that the heavy metal elements show different degrees of accumulation in the study area; the observed trend of both the enrichment factors and the geoaccumulation index was Hg > Cd > Zn > Cr > Pb > Cu ≈ As > Ni. Hg, Cd, Zn, and Cr were the dominant elements that influenced soil quality in the study area. The Nemerow index method indicated that all of the heavy metals caused serious pollution except Ni. Multivariate statistical analysis indicated that Cd, Zn, Cu, and Pb show obvious correlation and have higher loads on the same principal component, suggesting that they had the same sources, which are related to industrial activities and vehicle emissions. The spatial distribution maps based on ordinary kriging showed that high concentrations of heavy metals were located in the local factory area and in the southeast-northwest part of the study region, corresponding with the predominant wind directions. Analyses of lead isotopes confirmed that Pb in the study soils is predominantly derived from three Pb sources: dust generated during steel production, coal combustion, and the natural background. Moreover, the ternary mixture model based on lead isotope analysis indicates that lead in the study soils originates mainly from anthropogenic sources, which contribute much more
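    Two of the indices named above have standard textbook forms: Igeo = log2(Cn / (1.5 · Bn)), where Cn is the measured concentration and Bn the geochemical background, and the Nemerow index PIN = sqrt((PImax² + PImean²) / 2) over the single indices PIi = Ci/Bi. A small sketch (the background values used here are placeholders, not the study's):

```python
import numpy as np

def igeo(c, background):
    """Geoaccumulation index: log2(Cn / (1.5 * Bn)); the factor 1.5
    absorbs natural background fluctuation."""
    return np.log2(c / (1.5 * background))

def nemerow(pi):
    """Integrated Nemerow pollution index from single indices PI_i,
    combining the worst pollutant with the average of all of them."""
    pi = np.asarray(pi, float)
    return np.sqrt((pi.max() ** 2 + pi.mean() ** 2) / 2.0)
```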

  5. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

    Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  6. Report on the FY17 Development of Computer Program for ASME Section III, Division 5, Subsection HB, Subpart B Rules

    Energy Technology Data Exchange (ETDEWEB)

    Swindeman, M. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Jetter, R. I. [Argonne National Lab. (ANL), Argonne, IL (United States); Sham, T. -L. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-01

    One of the objectives of the high temperature design methodology activities is to develop and validate both improvements and the basic features of ASME Boiler and Pressure Vessel Code, Section III, Rules for Construction of Nuclear Facility Components, Division 5, High Temperature Reactors, Subsection HB, Subpart B (HBB). The overall scope of this task is to develop a computer program to aid assessment procedures of components under specified loading conditions in accordance with the elevated temperature design requirements for Division 5 Class A components. There are many features and alternative paths of varying complexity in HBB. The initial focus of this computer program is a basic path through the various options for a single reference material, 316H stainless steel. However, the computer program is being structured for eventual incorporation of all of the features and permitted materials of HBB. This report will first provide a description of the overall computer program, particular challenges in developing numerical procedures for the assessment, and an overall approach to computer program development. This is followed by a more comprehensive appendix, which is the draft computer program manual for the program development. The strain limits rules have been implemented in the computer program. The evaluation of creep-fatigue damage will be implemented in future work scope.

  7. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany

    2011-01-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses based on the DL and alternative variance estimators.
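    For reference, the conventional DL moment estimator that the study takes as its baseline can be written in a few lines (a textbook sketch, not the study's code):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DL moment estimator of the between-trial variance tau^2.
    y: trial effect estimates; v: within-trial sampling variances.
    Uses Cochran's Q with fixed-effect weights w = 1/v and truncates
    negative estimates at zero."""
    y = np.asarray(y, float)
    w = 1.0 / np.asarray(v, float)
    ybar = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
    Q = np.sum(w * (y - ybar) ** 2)           # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(y) - 1)) / c)
```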

  8. Genetic, morphological, geographical and ecological approaches reveal phylogenetic relationships in complex groups, an example of recently diverged pinyon pine species (Subsection Cembroides).

    Science.gov (United States)

    Flores-Rentería, Lluvia; Wegier, Ana; Ortega Del Vecchyo, Diego; Ortíz-Medrano, Alejandra; Piñero, Daniel; Whipple, Amy V; Molina-Freaner, Francisco; Domínguez, César A

    2013-12-01

    Elucidating phylogenetic relationships and species boundaries within complex taxonomic groups is challenging for intrinsic and extrinsic (i.e., technical) reasons. Mexican pinyon pines are a complex group whose phylogenetic relationships and species boundaries have been widely studied but poorly resolved, partly due to intrinsic ecological and evolutionary features such as low morphological and genetic differentiation caused by recent divergence, hybridization and introgression. Extrinsic factors such as limited sampling and difficulty in selecting informative molecular markers have also impeded progress. Some of the Mexican pinyon pines are of conservation concern but others may remain unprotected because the species boundaries have not been established. In this study we combined approaches to resolve the phylogenetic relationships in this complex group and to establish species boundaries in four recently diverged taxa: P. discolor, P. johannis, P. culminicola and P. cembroides. We performed phylogenetic analyses using the chloroplast markers matK and psbA-trnH as well as complete and partial chloroplast genomes of species of Subsection Cembroides. Additionally, we performed a phylogeographic analysis combining genetic data (18 chloroplast markers), morphological data and geographical data to define species boundaries in four recently diverged taxa. Ecological divergence was supported by differences in climate among localities for distinct genetic lineages. Whereas the phylogenetic analysis inferred with matK and psbA-trnH was unable to resolve the relationships in this complex group, we obtained a resolved phylogeny with the use of the chloroplast genomes. The resolved phylogeny was concordant with a haplotype network obtained using chloroplast markers. In species with potential for recent divergence, hybridization or introgression, nonhierarchical network-based approaches are probably more appropriate to protect against misclassification due to incomplete

  9. Verification of Allowable Stresses In ASME Section III Subsection NH For Grade 91 Steel & Alloy 800H

    Energy Technology Data Exchange (ETDEWEB)

    R. W. Swindeman; M. J. Swindeman; B. W. Roberts; B. E. Thurgood; D. L. Marriott

    2007-11-30

    The database for the creep-rupture of 9Cr-1Mo-V (Grade 91) steel was collected and reviewed to determine if it met the needs for recommending time-dependent strength values, S_t, for coverage in ASME Section III Subsection NH (ASME III-NH) to 650 C (1200 F) and 600,000 hours. The accumulated database included over 300 tests for 1% total strain, nearly 400 tests for tertiary creep, and nearly 1700 tests to rupture. Procedures for analyzing creep and rupture data for ASME III-NH were reviewed and compared to the procedures used to develop the current allowable stress values for Gr 91 for ASME II-D. The criteria in ASME III-NH for estimating S_t included the average strength for 1% total strain for times to 600,000 hours, 80% of the minimum strength for tertiary creep for times to 600,000 hours, and 67% of the minimum rupture strength values for times to 600,000 hours. Time-temperature-stress parametric formulations were selected to correlate the data and make predictions of the long-time strength. It was found that the stress corresponding to 1% total strain and the initiation of tertiary creep were not the controlling criteria over the temperature-time range of concern. It was found that small adjustments to the current values in III-NH could be introduced but that the existing values were conservative and could be retained. The existing database was found to be adequate to extend the coverage to 600,000 hours for temperatures below 650 C (1200 F).
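    A common example of the "time-temperature-stress parametric formulations" mentioned above is the Larson-Miller parameter, LMP = T(C + log10 t), which collapses rupture data at different temperatures onto one master curve versus stress. The sketch below is illustrative only; C = 20 is a conventional default, not necessarily the constant used in this report:

```python
import numpy as np

def larson_miller(T_kelvin, t_hours, C=20.0):
    """Larson-Miller parameter LMP = T * (C + log10 t). Equal LMP
    values imply equivalent creep exposure, so long-time strength can
    be extrapolated from shorter, hotter tests."""
    return T_kelvin * (C + np.log10(t_hours))
```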

  10. Ecological Subsections of Minnesota

    Data.gov (United States)

    Minnesota Department of Natural Resources — This coverage provides information for the third level of the Ecological Classification System. The boundaries of the polygons of this coverage were derived from...

  11. A FORTRAN 77 Program and User's Guide for the Statistical Analyses of Scatterplots to Identify Important Factors in Large-Scale Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Helton, Jon C.; Shortencarier, Maichael J.

    1999-08-01

    A description and user's guide are given for a computer program, PATTRN, developed at Sandia National Laboratories for use in sensitivity analyses of complex models. This program is intended for use in the analysis of input-output relationships in Monte Carlo analyses when the input has been selected using random or Latin hypercube sampling. Procedures incorporated into the program are based upon attempts to detect increasingly complex patterns in scatterplots and involve the detection of linear relationships, monotonic relationships, trends in measures of central tendency, trends in measures of variability, and deviations from randomness. The program was designed to be easy to use and portable.
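    The "monotonic relationships" pattern test mentioned above can be approximated with a Spearman rank-correlation screen over the sampled inputs (an illustrative Python analogue, not the PATTRN FORTRAN code; ties are ignored for simplicity):

```python
import numpy as np

def rank(a):
    """Simple 1..n ranks for Monte Carlo samples (no tie correction)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

def monotonic_screen(X, y):
    """Spearman rank correlation of each sampled input column of X with
    the output y: a first pass at flagging monotonic input-output
    patterns in a random or Latin hypercube sample."""
    ry = rank(y)
    return np.array([np.corrcoef(rank(X[:, j]), ry)[0, 1]
                     for j in range(X.shape[1])])
```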

  12. Statistics Clinic

    Science.gov (United States)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  13. Statistical properties of coastal long waves analysed through sea-level time-gradient functions: exemplary analysis of the Siracusa, Italy, tide-gauge data

    Directory of Open Access Journals (Sweden)

    L. Bressan

    2016-01-01

    reconstructed sea level (RSL), the background slope (BS) and the control function (CF). These functions are examined through a traditional spectral fast Fourier transform (FFT) analysis and also through a statistical analysis, showing that they can be characterised by probability distribution functions (PDFs) such as the Student's t distribution (IS and RSL) and the beta distribution (CF). As an example, the method has been applied to data from the tide-gauge station of Siracusa, Italy.
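    The spectral FFT step described above amounts to a one-sided power spectrum of the demeaned sea-level record; a minimal numpy sketch (not the authors' processing chain, which also builds the derived time-gradient functions):

```python
import numpy as np

def power_spectrum(x, dt):
    """One-sided FFT power spectrum of an evenly sampled record.
    Returns (frequencies, power); the mean is removed so the DC bin
    does not dominate long-wave peaks."""
    x = np.asarray(x, float) - np.mean(x)
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), dt)
    return f, np.abs(X) ** 2 / len(x)
```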

  14. Greater robustness of second order statistics than higher order statistics algorithms to distortions of the mixing matrix in blind source separation of human EEG: implications for single-subject and group analyses.

    Science.gov (United States)

    Lio, Guillaume; Boulinguez, Philippe

    2013-02-15

    A mandatory assumption in blind source separation (BSS) of the human electroencephalogram (EEG) is that the mixing matrix remains invariant, i.e., that the sources, electrodes and geometry of the head do not change during the experiment. Actually, this is not often the case. For instance, it is common that some electrodes slightly move during EEG recording. This issue is even more critical for group independent component analysis (gICA), a method of growing interest, in which only one mixing matrix is estimated for several subjects. Indeed, because of interindividual anatomo-functional variability, this method violates the mandatory principle of invariance. Here, using simulated (experiments 1 and 2) and real (experiment 3) EEG data, we test how eleven current BSS algorithms cope with distortions of the mixing matrix. We show that this usual kind of perturbation creates non-Gaussian features that are virtually added to all sources, impairing the estimation of real higher order statistics (HOS) features of the actual sources by HOS algorithms (e.g., Ext-INFOMAX, FASTICA). HOS-based methods are likely to identify more components (with similar properties) than actual neurological sources, a problem frequently encountered by BSS users. In practice, the quality of the recovered signal and the efficiency of subsequent source localization are substantially impaired. Performing dimensionality reduction before applying HOS-based BSS does not seem to be a safe strategy to circumvent the problem. Second order statistics (SOS)-based BSS methods belonging to the less popular SOBI family class are much less sensitive to this bias. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Discovery and characterisation of dietary patterns in two Nordic countries. Using non-supervised and supervised multivariate statistical techniques to analyse dietary survey data

    DEFF Research Database (Denmark)

    Edberg, Anna; Freyhult, Eva; Sand, Salomon

    /Predictive Modelling, on the other. The first among the unsupervised analyses involved inspection largely by, but not restricted to, an in-house implemented multi-branching hierarchical clustering algorithm (OMB-DHC), thereby revealing various aggregations of reasonably coherent consumers in unabridged and age-defined sub-populations. Notably, a hierarchical OMB-DHC design of operation tied to a palatable output display, unlike earlier reports in the dietary survey area, helped identify the degree of heterogeneity of clusters appearing at several segregation levels, thereby also supporting the judicious selection of aggregations for further compilation and scrutiny. Numbers and salient features of such dietary sub-populations were found to be largely, but not exactly, commensurate with those of various scientific reports in the area. Thus, 4–5 dietary clusters – in this report also referred to as dietary

  16. OIL POLLUTION IN INDONESIAN WATERS: COMBINING STATISTICAL ANALYSES OF ENVISAT ASAR AND SENTINEL-1A C-SAR DATA WITH NUMERICAL TRACER MODELLING

    Directory of Open Access Journals (Sweden)

    M. Gade

    2017-11-01

    This Pilot Study aimed at improving the information on the state of the Indonesian marine environment that is gained from satellite data. More than 2000 historical and recent synthetic aperture radar (SAR) data from ENVISAT ASAR and Sentinel-1A/B C-SAR, respectively, were used to produce oil pollution density maps of two regions of interest (ROI) in Indonesian waters. The normalized spill number and the normalized mean polluted area were calculated, and our findings indicate that, in general, the marine oil pollution in the two ROI is of different origin: while ship traffic appears to be the main source in the Java Sea, the oil production industry causes the highest pollution rates in the Strait of Makassar. In most cases hot spots of marine oil pollution were found in the open sea, and the largest number of oil spills in the Java Sea was found from March to May and from September to December, i.e., during the transition from the north-west monsoon to the south-east monsoon, and vice versa. This is when the overall wind and current patterns change, thereby making oil pollution detection with SAR sensors easier. In support of our SAR image analyses, high-resolution numerical forward and backward tracer experiments were performed. Using the previously gained information we identify strongly affected coastal areas (with most oil pollution being driven onshore), but also sensitive parts of major ship traffic lanes (where any oil pollution is likely to be driven into marine protected areas). Our results demonstrate the feasibility of our approach, combining numerical tracer modelling with (visual) SAR image analyses for an assessment of the marine environment in Indonesian waters, and they help in better understanding the observed seasonality.

  17. Computational and statistical analyses of amino acid usage and physico-chemical properties of the twelve late embryogenesis abundant protein classes.

    Directory of Open Access Journals (Sweden)

    Emmanuel Jaspard

    Late Embryogenesis Abundant Proteins (LEAPs) are ubiquitous proteins expected to play major roles in desiccation tolerance. Little is known about their structure-function relationships because of the scarcity of 3-D structures for LEAPs. The previous building of LEAPdb, a database dedicated to LEAPs from plants and other organisms, led to the classification of 710 LEAPs into 12 non-overlapping classes with distinct properties. Using this resource, numerous physico-chemical properties of LEAPs and amino acid usage by LEAPs have been computed and statistically analyzed, revealing distinctive features for each class. This unprecedented analysis allowed a rigorous characterization of the 12 LEAP classes, which also differed in multiple structural and physico-chemical features. Although most LEAPs can be predicted as intrinsically disordered proteins, the analysis indicates that LEAP class 7 (PF03168) and probably LEAP class 11 (PF04927) are natively folded proteins. This study thus provides a detailed description of the structural properties of this protein family, opening the path toward further LEAP structure-function analysis. Finally, since each LEAP class can be clearly characterized by a unique set of physico-chemical properties, this will allow development of software to predict proteins as LEAPs.
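    Descriptors of the kind analyzed above, such as amino acid frequencies and the charged-residue fraction often consulted when flagging intrinsic disorder, can be computed directly from sequence (an illustrative sketch; the paper's full descriptor set is much larger):

```python
from collections import Counter

def aa_profile(seq):
    """Amino acid usage for one protein sequence: per-residue
    frequencies plus the fraction of charged residues (D, E, K, R),
    a simple proxy relevant to disorder prediction."""
    seq = seq.upper()
    n = len(seq)
    freq = {aa: count / n for aa, count in Counter(seq).items()}
    charged = sum(freq.get(aa, 0.0) for aa in "DEKR")
    return freq, charged
```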

  18. A graphical user interface (GUI) toolkit for the calculation of three-dimensional (3D) multi-phase biological effective dose (BED) distributions including statistical analyses.

    Science.gov (United States)

    Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis

    2016-07-01

    A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied on brain, head & neck and prostate cancer patients, who received primary and boost phases in order to demonstrate its capability in calculating BED distributions, as well as measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
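    The per-phase BED computation underlying such a toolkit is commonly based on the linear-quadratic model, BED = n·d·(1 + d/(α/β)), with the multi-phase distribution obtained as a voxel-wise sum over phases. A hedged sketch of that idea (not the toolkit's MATLAB code, and not reproducing its distinct "true" and "approximate" formulas):

```python
import numpy as np

def bed(dose_per_fx, n_fx, alpha_beta):
    """Linear-quadratic biologically effective dose for one phase:
    n * d * (1 + d / (alpha/beta)). Works element-wise on voxel
    arrays of per-fraction dose."""
    return n_fx * dose_per_fx * (1.0 + dose_per_fx / alpha_beta)

def multiphase_bed(phases, alpha_beta):
    """Voxel-wise BED summed over phases; each phase is a tuple of
    (per-fraction dose array, number of fractions)."""
    return sum(bed(d, n, alpha_beta) for d, n in phases)
```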

  19. Rapid detection and statistical differentiation of KPC gene variants in Gram-negative pathogens by use of high-resolution melting and ScreenClust analyses.

    Science.gov (United States)

    Roth, Amanda L; Hanson, Nancy D

    2013-01-01

    In the United States, the production of the Klebsiella pneumoniae carbapenemase (KPC) is an important mechanism of carbapenem resistance in Gram-negative pathogens. Infections with KPC-producing organisms are associated with increased morbidity and mortality; therefore, the rapid detection of KPC-producing pathogens is critical in patient care and infection control. We developed a real-time PCR assay complemented with traditional high-resolution melting (HRM) analysis, as well as statistically based genotyping, using the Rotor-Gene ScreenClust HRM software to both detect the presence of bla(KPC) and differentiate between KPC-2-like and KPC-3-like alleles. A total of 166 clinical isolates of Enterobacteriaceae, Pseudomonas aeruginosa, and Acinetobacter baumannii with various β-lactamase susceptibility patterns were tested in the validation of this assay; 66 of these organisms were known to produce the KPC β-lactamase. The real-time PCR assay was able to detect the presence of bla(KPC) in all 66 of these clinical isolates (100% sensitivity and specificity). HRM analysis demonstrated that 26 had KPC-2-like melting peak temperatures, while 40 had KPC-3-like melting peak temperatures. Sequencing of 21 amplified products confirmed the melting peak results, with 9 isolates carrying bla(KPC-2) and 12 isolates carrying bla(KPC-3). This PCR/HRM assay can identify KPC-producing Gram-negative pathogens in as little as 3 h after isolation of pure colonies and does not require post-PCR sample manipulation for HRM analysis, and ScreenClust analysis easily distinguishes bla(KPC-2-like) and bla(KPC-3-like) alleles. Therefore, this assay is a rapid method to identify the presence of bla(KPC) enzymes in Gram-negative pathogens that can be easily integrated into busy clinical microbiology laboratories.

  20. More insights into early brain development through statistical analyses of eigen-structural elements of diffusion tensor imaging using multivariate adaptive regression splines.

    Science.gov (United States)

    Chen, Yasheng; Zhu, Hongtu; An, Hongyu; Armao, Diane; Shen, Dinggang; Gilmore, John H; Lin, Weili

    2014-03-01

    The aim of this study was to characterize the maturational changes of the three eigenvalues (λ1 ≥ λ2 ≥ λ3) of diffusion tensor imaging (DTI) during early postnatal life for more insights into early brain development. In order to overcome the limitations of using presumed growth trajectories for regression analysis, we employed Multivariate Adaptive Regression Splines (MARS) to derive data-driven growth trajectories for the three eigenvalues. We further employed Generalized Estimating Equations (GEE) to carry out statistical inferences on the growth trajectories obtained with MARS. With a total of 71 longitudinal datasets acquired from 29 healthy, full-term pediatric subjects, we found that the growth velocities of the three eigenvalues were highly correlated, but significantly different from each other. This paradox suggested the existence of mechanisms coordinating the maturations of the three eigenvalues even though different physiological origins may be responsible for their temporal evolutions. Furthermore, our results revealed the limitations of using the average of λ2 and λ3 as the radial diffusivity in interpreting DTI findings during early brain development because these two eigenvalues had significantly different growth velocities even in central white matter. In addition, based upon the three eigenvalues, we have documented the growth trajectory differences between central and peripheral white matter, between anterior and posterior limbs of internal capsule, and between inferior and superior longitudinal fasciculus. Taken together, we have demonstrated that more insights into early brain maturation can be gained through analyzing eigen-structural elements of DTI.
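MARS builds its data-driven growth trajectories from hinge basis functions max(0, x − t) placed at data-driven knots. The sketch below illustrates only that building block, not the authors' MARS/GEE pipeline: an ordinary least-squares fit on a hinge basis with a fixed (rather than searched) knot, on synthetic data.

```python
import numpy as np

# MARS-style piecewise-linear fit: ordinary least squares on a hinge basis.
# The knot t is fixed here for illustration; real MARS searches over knots.

def hinge(x, t):
    """The MARS basis function h(x) = max(0, x - t)."""
    return np.maximum(0.0, x - t)

x = np.linspace(0.0, 6.0, 61)
y = 1.0 + 2.0 * hinge(x, 3.0)  # synthetic trajectory: flat, then slope 2 after x = 3

# Design matrix: intercept, linear term, and a hinge at the (assumed known) knot t = 3.
X = np.column_stack([np.ones_like(x), x, hinge(x, 3.0)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print(coef)  # ~[1, 0, 2]: intercept 1, no linear trend, slope change +2 at the knot
```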

  1. Regulatory Safety Issues in the Structural Design Criteria of ASME Section III Subsection NH and for Very High Temperatures for VHTR & GEN IV

    Energy Technology Data Exchange (ETDEWEB)

    William J. O’Donnell; Donald S. Griffin

    2007-05-07

    The objective of this task is to identify issues relevant to ASME Section III, Subsection NH [1], and related Code Cases that must be resolved for licensing purposes for VHTGRs (Very High Temperature Gas Reactor concepts such as those of PBMR, Areva, and GA); and to identify the material models, design criteria, and analysis methods that need to be added to the ASME Code to cover the unresolved safety issues. Subsection NH was originally developed to provide structural design criteria and limits for elevated-temperature design of Liquid Metal Fast Breeder Reactor (LMFBR) systems and some gas-cooled systems. The U.S. Nuclear Regulatory Commission (NRC) and its Advisory Committee on Reactor Safeguards (ACRS) reviewed the design limits and procedures in the process of reviewing the Clinch River Breeder Reactor (CRBR) for a construction permit in the late 1970s and early 1980s, and identified issues that needed resolution. In the years since then, the NRC and various contractors have evaluated the applicability of the ASME Code and Code Cases to high-temperature reactor designs such as the VHTGRs, and identified issues that need to be resolved to provide a regulatory basis for licensing. This report describes: (1) NRC and ACRS safety concerns raised during the licensing process of CRBR; (2) how some of these issues are addressed by the current Subsection NH of the ASME Code; and (3) the material models, design criteria, and analysis methods that need to be added to the ASME Code and Code Cases to cover unresolved regulatory issues for very high temperature service.

  2. Cancer Statistics

    Science.gov (United States)

    ... Cancer has a major impact on society in ... success of efforts to control and manage cancer. Statistics at a Glance: The Burden of Cancer in ...

  3. Caregiving Statistics

    Science.gov (United States)

    ... Statistics on Family Caregivers and Family Caregiving ... The value of the services family ...

  4. Statistics in a nutshell

    CERN Document Server

    Boslaugh, Sarah

    2013-01-01

    Need to learn statistics for your job? Want help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference for anyone new to the subject. Thoroughly revised and expanded, this edition helps you gain a solid understanding of statistics without the numbing complexity of many college texts. Each chapter presents easy-to-follow descriptions, along with graphics, formulas, solved examples, and hands-on exercises. If you want to perform common statistical analyses and learn a wide range of techniques without getting in over your head, this is your book.

  5. Descriptive statistics.

    Science.gov (United States)

    Shi, Runhua; McLarty, Jerry W

    2009-10-01

    In this article, we introduced basic concepts of statistics, types of distributions, and descriptive statistics. A few examples were also provided. The basic concepts presented herein are only a fraction of the concepts related to descriptive statistics. Also, there are many commonly used distributions not presented herein, such as the Poisson distribution for rare events, exponential distributions, F distributions, and logistic distributions. More information can be found in many statistics books and publications.
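The basic descriptive measures mentioned in this record are covered by Python's standard statistics module; a minimal illustration on a small made-up sample:

```python
import statistics

# Descriptive statistics for a small made-up sample.
data = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(data)      # arithmetic mean: 5
median = statistics.median(data)  # middle value (average of 4 and 5): 4.5
mode = statistics.mode(data)      # most frequent value: 4
pstdev = statistics.pstdev(data)  # population standard deviation: 2.0

print(mean, median, mode, pstdev)
```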

  6. Statistical physics

    CERN Document Server

    Sadovskii, Michael V

    2012-01-01

    This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.

  7. Euphorbia L. subsect. Esula (Boiss. in DC.) Pax in the Iberian Peninsula. Leaf surface, chromosome numbers and taxonomic treatment

    Directory of Open Access Journals (Sweden)

    Molero, Julià

    1992-12-01

    Full Text Available We present a taxonomic study of the representatives of Euphorbia subsect. Esula in the Iberian Peninsula. Prior to this, a first section is included on the study of the leaf surface and a second section on chromosome numbers.
    The section on leaf surface is based on a study of the leaves of 45 populations of Iberian and European taxa of the subsection using a light microscope and SEM. The characters analyzed are cell shape, morphology of the cells and stomata (primary and secondary sculpture) and epicuticular waxes (tertiary sculpture). Some microcharacters of the leaf surface proved particularly useful for taxonomic purposes. Thus the basic type of stoma and the distribution model of the stomata on the two sides of the leaf are characters which make it possible to separate taxa as closely related as E. esula L. subsp. esula and E. esula L. subsp. orientalis (Boiss. in DC.) Molero & Rovira. The morphological type of the epicuticular waxes also enables us to differentiate between E. graminifolia Vill. and E. esula aggr., and to distinguish subsp. bolosii Molero & Rovira from the remaining subspecies of E. nevadensis Boiss. & Reuter.
    Cytogenetic investigation reveals the presence of only the diploid cytotype (2n=10) in E. cyparissias L. and E. esula L. subsp. esula in the Iberian Peninsula. We describe for the first time in E. nevadensis s.l. a polyploid complex with a base of x=10, in which the diploid level (2n=20) is present in all subspecies; the tetraploid level (2n=40) is present in E. nevadensis subsp. nevadensis, and the hexaploid level (2n=60) is found in E. nevadensis subsp. bolosii. Chromosome number is not a parameter that can be used for taxonomic purposes. In E. nevadensis, cytogenetic differentiation has followed its own course, with no apparent relationship to the process of morphological

  8. Harmonic statistics

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il

    2017-05-15

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed the harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
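One of the connections listed in the abstract can be checked numerically: under Benford's law, which arises from the same harmonic (1/x) intensity, a leading digit d occurs with probability log10(1 + 1/d). A small sketch testing this against the leading digits of the first 1000 powers of 2 — a standard Benford-distributed sequence, used here purely as an illustration:

```python
import math
from collections import Counter

# Empirical leading-digit frequencies of 2^n versus Benford's law.
N = 1000
leading = [int(str(2 ** n)[0]) for n in range(1, N + 1)]
counts = Counter(leading)

benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}     # predicted frequencies
empirical = {d: counts[d] / N for d in range(1, 10)}           # observed frequencies

worst = max(abs(empirical[d] - benford[d]) for d in range(1, 10))
print(f"largest deviation from Benford: {worst:.4f}")  # small for this sequence
```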

  9. Statistical distributions

    CERN Document Server

    Forbes, Catherine; Hastings, Nicholas; Peacock, Brian J.

    2010-01-01

    A new edition of the trusted guide on commonly used statistical distributions Fully updated to reflect the latest developments on the topic, Statistical Distributions, Fourth Edition continues to serve as an authoritative guide on the application of statistical methods to research across various disciplines. The book provides a concise presentation of popular statistical distributions along with the necessary knowledge for their successful use in data modeling and analysis. Following a basic introduction, forty popular distributions are outlined in individual chapters that are complete with re

  10. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  11. Statistical optics

    CERN Document Server

    Goodman, Joseph W

    2015-01-01

    This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications.  The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i

  12. Modeling of soil penetration resistance using statistical analyses and artificial neural networks=Modelagem da resistência à penetração do solo usando análises estatísticas e redes neurais artificiais

    Directory of Open Access Journals (Sweden)

    Domingos Sárvio Magalhães Valente

    2012-04-01

    Full Text Available An important factor for the evaluation of an agricultural system’s sustainability is the monitoring of soil quality via its physical attributes. The physical attributes of soil, such as soil penetration resistance, can be used to monitor and evaluate the soil’s quality. Artificial Neural Networks (ANN) have been employed to solve many problems in agriculture, and the use of this technique can be considered an alternative approach for predicting the penetration resistance produced by the soil’s basic properties, such as bulk density and water content. The aim of this work is to analyze the behavior of soil penetration resistance, measured from the cone index under different levels of bulk density and water content, using statistical analyses, specifically regression analysis and ANN modeling. Both techniques show that soil penetration resistance is associated with soil bulk density and water content. The regression analysis presented a determination coefficient of 0.92 and an RMSE of 0.951, and the ANN modeling presented a determination coefficient of 0.98 and an RMSE of 0.084. The results show that the ANN modeling presented better results than the mathematical model obtained from regression analysis.
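The two figures of merit used to compare the models above, the determination coefficient (R²) and the RMSE, can be computed as follows. The values below are illustrative only, not the soil-penetration data from the study:

```python
import math

# R^2 (coefficient of determination) and RMSE for a set of predictions.
def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))  # residual sum of squares
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)             # total sum of squares
    return 1.0 - ss_res / ss_tot

def rmse(y_true, y_pred):
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Illustrative values (not the soil-penetration data from the study).
y_true = [1.2, 2.3, 3.1, 4.0, 5.2]
y_pred = [1.0, 2.5, 3.0, 4.1, 5.0]

print(round(rmse(y_true, y_pred), 3))
print(round(r_squared(y_true, y_pred), 3))
```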

  13. Scan Statistics

    CERN Document Server

    Glaz, Joseph

    2009-01-01

    Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.

  14. Semiconductor statistics

    CERN Document Server

    Blakemore, J S

    1962-01-01

    Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co

  15. Lehrer in der Bundesrepublik Deutschland. Eine Kritische Analyse Statistischer Daten über das Lehrpersonal an Allgemeinbildenden Schulen. (Teachers in the Federal Republic of Germany. A Critical Analysis of Statistical Data on Teaching Staff at Schools of General Education.)

    Science.gov (United States)

    Kohler, Helmut

    The purpose of this study was to analyze the available statistics concerning teachers in schools of general education in the Federal Republic of Germany. An analysis of the demographic structure of the pool of full-time teachers showed that in 1971 30 percent of the teachers were under age 30, and 50 percent were under age 35. It was expected that…

  16. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  17. WPRDC Statistics

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.

  18. Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  19. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.

  20. Multiparametric statistics

    CERN Document Server

    Serdobolskii, Vadim Ivanovich

    2007-01-01

    This monograph presents a mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which sample size increases along with the number of unknown parameters. This theory opens a way for the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems, and, depending on data, can be inefficient, unstable and even not applicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...

  1. Trichomoniasis Statistics

    Science.gov (United States)


  2. Reversible Statistics

    DEFF Research Database (Denmark)

    Tryggestad, Kjell

    2004-01-01

    The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...

  3. Vital statistics

    CERN Document Server

    MacKenzie, Dana

    2004-01-01

    The drawbacks of using 19th-century mathematics in physics and astronomy are illustrated. To continue with the expansion of the knowledge about the cosmos, the scientists will have to come to terms with modern statistics. Some researchers have deliberately started importing techniques that are used in medical research. However, the physicists need to identify the brand of statistics that will be suitable for them, and make a choice between the Bayesian and the frequentist approach. (Edited abstract).

  4. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  5. Informal Statistics Help Desk

    Science.gov (United States)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.

  6. Statistical mechanics

    CERN Document Server

    Jana, Madhusudan

    2015-01-01

    This book on statistical mechanics is self-sufficient and written in a lucid manner, keeping in mind the examination system of universities. The need to study this subject and its relation to thermodynamics is discussed in detail. Starting from the Liouville theorem, statistical mechanics is gradually developed in a thorough way. All three types of statistical distribution functions are derived separately, with their range of applications and limitations. Non-interacting ideal Bose and Fermi gases are discussed thoroughly. Properties of liquid He-II and the corresponding models are depicted. White dwarfs and condensed matter physics, transport phenomena - thermal and electrical conductivity, Hall effect, magnetoresistance, viscosity, diffusion, etc. - are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical and grand canonical) and their applications. Various numerical and conceptual problems ar...

  7. Statistical physics

    CERN Document Server

    Guénault, Tony

    2007-01-01

    In this revised and enlarged second edition of an established text Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...

  8. Statistical mechanics

    CERN Document Server

    Schwabl, Franz

    2006-01-01

    The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...

  9. Statistics for Learning Genetics

    Science.gov (United States)

    Charles, Abigail Sheena

    2012-01-01

    This study investigated the knowledge and skills that biology students may need to help them understand statistics/mathematics as it applies to genetics. The data are based on analyses of current representative genetics texts, practicing genetics professors' perspectives, and more directly, students' perceptions of, and performance in, doing…

  10. Statistical mechanics

    CERN Document Server

    Davidson, Norman

    2003-01-01

    Clear and readable, this fine text assists students in achieving a grasp of the techniques and limitations of statistical mechanics. The treatment follows a logical progression from elementary to advanced theories, with careful attention to detail and mathematical development, and is sufficiently rigorous for introductory or intermediate graduate courses.Beginning with a study of the statistical mechanics of ideal gases and other systems of non-interacting particles, the text develops the theory in detail and applies it to the study of chemical equilibrium and the calculation of the thermody

  11. AP statistics

    CERN Document Server

    Levine-Wissing, Robin

    2012-01-01

    All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

  12. Statistical inference

    CERN Document Server

    Rohatgi, Vijay K

    2003-01-01

    Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth

  13. Statistical Mechanics

    CERN Document Server

    Gallavotti, Giovanni

    2011-01-01

    C. Cercignani: A sketch of the theory of the Boltzmann equation.- O.E. Lanford: Qualitative and statistical theory of dissipative systems.- E.H. Lieb: many particle Coulomb systems.- B. Tirozzi: Report on renormalization group.- A. Wehrl: Basic properties of entropy in quantum mechanics.

  14. Statistical Computing

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 4; Issue 10. Statistical Computing - Understanding Randomness and Random Numbers. Sudhakar Kunte. Series Article Volume 4 Issue 10 October 1999 pp 16-21. Fulltext. Click here to view fulltext PDF. Permanent link:

  15. Implementation of quality by design principles in the development of microsponges as drug delivery carriers: Identification and optimization of critical factors using multivariate statistical analyses and design of experiments studies.

    Science.gov (United States)

    Simonoska Crcarevska, Maja; Dimitrovska, Aneta; Sibinovska, Nadica; Mladenovska, Kristina; Slavevska Raicki, Renata; Glavas Dodov, Marija

    2015-07-15

    A microsponges drug delivery system (MDDC) was prepared by a double emulsion-solvent-diffusion technique using rotor-stator homogenization. The quality by design (QbD) concept was implemented for the development of MDDC with the potential to be incorporated into a semisolid dosage form (gel). The quality target product profile (QTPP) and critical quality attributes (CQA) were defined and identified, accordingly. Critical material attributes (CMA) and critical process parameters (CPP) were identified using a quality risk management (QRM) tool, failure mode, effects and criticality analysis (FMECA). CMA and CPP were identified based on results obtained from principal component analysis (PCA-X&Y) and partial least squares (PLS) statistical analysis, along with literature data, product and process knowledge and understanding. FMECA identified the amounts of ethylcellulose, chitosan, acetone, dichloromethane, span 80, tween 80 and the water ratio in the primary/multiple emulsions as CMA, and the rotation speed and stirrer type used for organic solvent removal as CPP. The relationship between the identified CPP and particle size as CQA was described in the design space using a design of experiments - one-factor response surface method. The results obtained from statistically designed experiments enabled the establishment of mathematical models and equations that were used for detailed characterization of the influence of the identified CPP upon MDDC particle size and particle size distribution, and for their subsequent optimization. Copyright © 2015 Elsevier B.V. All rights reserved.
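The PCA step used above to screen critical attributes reduces correlated variables to orthogonal components ranked by explained variance. A bare-bones sketch of the mechanics via eigendecomposition of the covariance matrix, on synthetic data rather than the formulation dataset:

```python
import numpy as np

# Principal component analysis via eigendecomposition of the covariance matrix.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
# Two strongly correlated synthetic "attributes": the second is ~2x the first plus noise.
data = np.column_stack([x, 2.0 * x + 0.01 * rng.normal(size=200)])

centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]        # re-sort descending: PC1 first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()      # explained-variance ratio per component
print(explained)  # first component captures nearly all the variance
```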

  16. Statistical analysis of the deformations of alloy 600: quantification and localization at the micro and macroscopic scale; Analyse statistique des déformations de l'alliage 600 : quantification et localisation à l'échelle micro et macroscopique

    Energy Technology Data Exchange (ETDEWEB)

    Clair, Aurelie; Markey, Laurent; Finot, Eric [Laboratoire Physique de l Universite de Bourgogne, UMR CNRS 5027, BP 47870, 21078 Dijon (France); Clair, Aurelie; Foucault, Marc; Brugier, Benedicte [Areva NP, Centre Technique Departement Corrosion-Chimie, BP 181, 71205 Le Creusot (France); Vignal, Vincent [Laboratoire de Recherche sur la Reactivite des Solides, UMR CNRS 5613, BP 47870, 21078 Dijon (France)

    2006-07-01

    The study of the stress corrosion cracking of alloy 600 is fundamental for understanding the aging of PWR power plants. The quantification of the local deformation of the material and the surface analysis, key parameters of the corrosion, are indispensable to the study of the substrate damage due to this mechanism. A semi-automatic statistical assessment method for the local deformation tensor has been developed. On account of the material's polycrystallinity, knowledge of the deformation distribution and the localization phenomena at the grain scale becomes fundamental. Matrices of nano-dots used as deformation markers have been lithographed at the surface of the tensile test specimens. Different analysis parameters, such as the measurement error and the length of the gauges, have been studied. (O.M.)
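The marker-based measurement described above can be sketched as a least-squares recovery of the deformation gradient F from marker positions before and after loading, from which the Green-Lagrange strain follows. The numbers below are synthetic (a hypothetical 10% uniaxial stretch), not data from the study:

```python
import numpy as np

# Recover a homogeneous deformation gradient F from marker positions
# (reference X -> deformed x = X @ F.T), then compute the Green-Lagrange strain.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
F_true = np.array([[1.10, 0.00],   # hypothetical 10% stretch along the first axis
                   [0.00, 1.00]])
x = X @ F_true.T

# Least-squares fit of F (exact here, since the imposed deformation is homogeneous).
Ft, *_ = np.linalg.lstsq(X, x, rcond=None)
F = Ft.T

E = 0.5 * (F.T @ F - np.eye(2))    # Green-Lagrange strain tensor
print(E)  # E[0, 0] = 0.5 * (1.1**2 - 1) = 0.105
```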

  17. Statistical analysis of high order moments in a turbulent boundary layer with strong density differences; Analyse statistique des moments d'ordre eleve dans une couche limite turbulente en presence de differences de densite importantes

    Energy Technology Data Exchange (ETDEWEB)

    Soudani, A. [Batna Univ., Dept. de Physique, Faculte des Sciences (Algeria); Bessaih, R. [Mentouri-Constantine Univ., Dept. de Genie Mecanique, Faculte des Sciences de l' Ingenieur (Algeria)

    2004-12-01

    The study of a turbulent boundary layer with strong density differences is important for understanding practical situations occurring, for example, in the cooling of turbine blades through the tangential injection of a different gas, or in combustion. In order to study the fine structure of wall turbulence in the presence of significant variations of density, a statistical analysis of the experimental data, obtained in a wind tunnel, is carried out. The results show that the relaxation of the skewness factor of u' (S{sub u'}) occurs more quickly in the external layer than close to the wall, for both the air injection and the helium injection. S{sub u'} grows appreciably close to the injection slot, and this increase is more pronounced for the air injection than for the helium injection. This growth of the skewness factor close to the injection slot can be explained by the increase in the longitudinal convective flux of turbulent energy in this zone. The distribution of the flatness factor F{sub u'} shows that there is no significant effect of the density gradient on the intermittent structure of the instantaneous longitudinal velocity in the developed zone, x/{delta} {>=} 5. The statistical analysis carried out in this study shows that the helium injection in the boundary layer generates more violent ejections than the air injection. This result is confirmed by the significant contribution of the ejections to the turbulent mass flux.
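The skewness factor S and flatness factor F analysed above are the third and fourth normalized moments of the velocity fluctuations about the mean. A minimal sketch on synthetic samples (not the wind-tunnel data):

```python
def skewness_flatness(samples):
    """Skewness S = <u'^3>/<u'^2>**1.5 and flatness F = <u'^4>/<u'^2>**2
    of the fluctuations u' about the sample mean."""
    n = len(samples)
    mean = sum(samples) / n
    u = [x - mean for x in samples]       # fluctuating part u'
    m2 = sum(v * v for v in u) / n        # variance <u'^2>
    m3 = sum(v ** 3 for v in u) / n       # third central moment
    m4 = sum(v ** 4 for v in u) / n       # fourth central moment
    return m3 / m2 ** 1.5, m4 / m2 ** 2
```

A Gaussian signal gives S ≈ 0 and F ≈ 3; strong ejections show up as departures from these values.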

  18. [Descriptive statistics].

    Science.gov (United States)

    Rendón-Macías, Mario Enrique; Villasís-Keever, Miguel Ángel; Miranda-Novales, María Guadalupe

    2016-01-01

    Descriptive statistics is the branch of statistics that gives recommendations on how to summarize clearly and simply research data in tables, figures, charts, or graphs. Before performing a descriptive analysis it is paramount to summarize its goal or goals, and to identify the measurement scales of the different variables recorded in the study. Tables or charts aim to provide timely information on the results of an investigation. The graphs show trends and can be histograms, pie charts, "box and whiskers" plots, line graphs, or scatter plots. Images serve as examples to reinforce concepts or facts. The choice of a chart, graph, or image must be based on the study objectives. Usually it is not recommended to use more than seven in an article, also depending on its length.
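The summary measures behind such tables and box-and-whiskers plots can be obtained directly from Python's standard `statistics` module; a minimal sketch on made-up data:

```python
import statistics

data = [4, 8, 15, 16, 23, 42]   # hypothetical measurements

summary = {
    "n": len(data),
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "stdev": statistics.stdev(data),               # sample standard deviation
    "quartiles": statistics.quantiles(data, n=4),  # box-plot hinges
}
print(summary)
```

The choice between mean/stdev and median/quartiles follows the measurement scale and distribution shape discussed above.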

  19. Experimental statistics

    CERN Document Server

    Natrella, Mary Gibbons

    1963-01-01

    Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations

  20. Elementary Statistics Tables

    CERN Document Server

    Neave, Henry R

    2012-01-01

    This book, designed for students taking a basic introductory course in statistical analysis, is far more than just a book of tables. Each table is accompanied by a careful but concise explanation and useful worked examples. Requiring little mathematical background, Elementary Statistics Tables is thus not just a reference book but a positive and user-friendly teaching and learning aid. The new edition contains a new and comprehensive "teach-yourself" section on a simple but powerful approach, now well-known in parts of industry but less so in academia, to analysing and interpreting process data.

  1. Statistical mechanics

    CERN Document Server

    Sheffield, Scott

    2009-01-01

    In recent years, statistical mechanics has been increasingly recognized as a central domain of mathematics. Major developments include the Schramm-Loewner evolution, which describes two-dimensional phase transitions, random matrix theory, renormalization group theory and the fluctuations of random surfaces described by dimers. The lectures contained in this volume present an introduction to recent mathematical progress in these fields. They are designed for graduate students in mathematics with a strong background in analysis and probability. This book will be of particular interest to graduate students and researchers interested in modern aspects of probability, conformal field theory, percolation, random matrices and stochastic differential equations.

  2. Statistical analysis of management data

    CERN Document Server

    Gatignon, Hubert

    2013-01-01

    This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.

  3. Statistical analyses for the purpose of an early detection of global and regional climate change due to the anthropogenic greenhouse effect; Statistische Analysen zur Frueherkennung globaler und regionaler Klimaaenderungen aufgrund des anthropogenen Treibhauseffektes

    Energy Technology Data Exchange (ETDEWEB)

    Grieser, J.; Staeger, T.; Schoenwiese, C.D.

    2000-03-01

    The report answers the question of where, why and how different climate variables have changed within the last 100 years. The analyzed variables are observed time series of temperature (mean, maximum, minimum), precipitation, air pressure, and water vapour pressure in a monthly resolution. The time series are given as station data and grid box data as well. Two kinds of time-series analysis are performed. The first is applied to find significant changes concerning the mean and variance of the time series; this also reveals changes in the annual cycle and in the frequency of extreme events. The second approach is used to detect significant spatio-temporal patterns in the variations of climate variables, which are most likely driven by known natural and anthropogenic climate forcings. Furthermore, an estimation of climate noise makes it possible to indicate regions where certain climate variables have changed significantly due to the enhanced anthropogenic greenhouse effect. (orig.) [German] Der Bericht gibt Antwort auf die Frage, wo sich welche Klimavariable wie und warum veraendert hat. Ausgangspunkt der Analyse sind hundertjaehrige Zeitreihen der Temperatur (Mittel, Maximum, Minimum), des Niederschlags, Luftdrucks und Wasserdampfpartialdrucks in monatlicher Aufloesung. Es wurden sowohl Stationsdaten als auch Gitterpunktdaten verwendet. Mit Hilfe der strukturorientierten Zeitreihenzerlegung wurden signifikante Aenderungen im Mittel und in der Varianz der Zeitreihen gefunden. Diese betreffen auch Aenderungen im Jahresgang und in der Haeufigkeit extremer Ereignisse. Die ursachenorientierte Zeitreihenzerlegung selektiert signifikante raumzeitliche Variationen der Klimavariablen, die natuerlichen bzw. anthropogenen Klimaantrieben zugeordnet werden koennen. Eine Abschaetzung des Klimarauschens erlaubt darueber hinaus anzugeben, wo und wie signifikant der anthropogene Treibhauseffekt welche Klimavariablen veraendert hat. (orig.)
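Detecting significant changes in a climate time series often starts from a simple non-parametric trend statistic. A sketch of the Mann-Kendall S statistic, chosen here purely as an illustration and not necessarily the method used in the report:

```python
def mann_kendall_s(series):
    """Mann-Kendall S statistic: number of concordant minus discordant
    pairs over all i < j; S > 0 suggests an increasing trend."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)   # sign of the pairwise difference
    return s
```

In practice S is compared against its null distribution (or normal approximation) to judge significance against climate noise.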

  4. Métodos estatísticos e estrutura espacial de populações: uma análise comparativa = Statistical methods and population spatial structure: a comparative analysis

    Directory of Open Access Journals (Sweden)

    Matheus de Souza Lima-Ribeiro

    2006-07-01

    Full Text Available O presente estudo teve por objetivo comparar os resultados de distribuição espacial obtidos entre os métodos clássicos e os métodos que estimam a variância entre parcelas. Foram analisadas duas espécies, Vernonia aurea e Duguetia furfuracea. Foram utilizados a Distribuição de Poisson (padrão aleatório), a Distribuição Binomial Negativa (padrão agregado) e os métodos BQV, TTLQV e PQV (variância entre parcelas), bem como a razão variância:média (I), o coeficiente de Green (Ig) e o índice de dispersão de Morisita (Im). Ambas metodologias detectaram padrão de distribuição espacial agregado para as populações analisadas, com resultados similares quanto ao nível de agregação, além de complementação das informações, em diferentes escalas, entre os métodos clássicos e de variância entre parcelas. Desse modo, recomenda-se a utilização desses métodos estatísticos em estudos de estrutura espacial, uma vez que os testes são robustos e complementares e os dados são de fácil coleta em campo. This study aims to compare the results of spatial structure obtained with the classic methods and the quadrat variance methods. Two species were analysed, Vernonia aurea and Duguetia furfuracea. The Poisson distribution (random pattern), the Negative Binomial distribution (aggregate pattern), the BQV, TTLQV and PQV methods, the ratio variance:mean (I), the Green coefficient (Ig) and Morisita's index of dispersion (Im) were used to detect the populations' spatial pattern. An aggregated spatial pattern was detected through both methodologies, with similar results as to the aggregation level, and the information obtained at different scales by the classic and quadrat variance methods was complementary. Thus, the use of these statistical methods in studies of spatial structure is recommended, given that the tests are robust and complementary and field data samples are easy to collect.
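The variance:mean ratio I and Morisita's index Im compared in the study can be computed directly from quadrat counts; a minimal sketch with invented counts:

```python
def variance_mean_ratio(counts):
    """I = s^2 / mean; I near 1 suggests a random (Poisson) pattern,
    I > 1 an aggregated spatial pattern."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((x - mean) ** 2 for x in counts) / (n - 1)  # sample variance
    return var / mean

def morisita_index(counts):
    """Morisita's index of dispersion Im: ~1 random, >1 aggregated."""
    n = len(counts)
    total = sum(counts)
    return n * sum(x * (x - 1) for x in counts) / (total * (total - 1))

counts = [0, 0, 7, 1, 0, 9, 2, 0, 5, 0]   # individuals per quadrat (made up)
print(variance_mean_ratio(counts), morisita_index(counts))
```

Both indices exceed 1 for these clumped counts, consistent with an aggregated pattern.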

  5. MEVSİMSEL DÜZELTMEDE KULLANILAN İSTATİSTİKİ YÖNTEMLER ÜZERİNE BİR İNCELEME-AN ANALYSIS OF STATISTICAL METHODS USED FOR SEASONAL ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    Handan YOLSAL

    2012-06-01

    Full Text Available Bu makalenin amacı zaman serileri için resmi istatistik ajansları tarafından geliştirilen ve çok yaygın olarak uygulanan mevsim düzeltme programlarını tanıtmaktır. Bu programlar iki ana grupta sınıflanmaktadır. Bunlardan biri, ilk defa olarak NBER tarafından geliştirilen ve hareketli ortalamalar filtreleri kullanan CENSUS II X-11 ailesidir. Bu aile X-11 ARIMA ve X-12 ARIMA tekniklerini içerir. Diğeri ise İspanya Merkez Bankası tarafından geliştirilen ve model bazlı bir yaklaşım olan TRAMO/SEATS programıdır. Bu makalede sözü edilen tekniklerin mevsimsel ayrıştırma süreçleri, bu tekniklerin içerdiği ticari gün, takvim etkisi gibi bazı özel etkiler, avantaj ve dezavantajları ve ayrıca öngörü performansları tartışılacaktır.-This paper's aim is to introduce the most commonly applied seasonal adjustment programs, developed by official statistical agencies for time series. These programs are classified in two main groups. One of them is the CENSUS II X-11 family, which uses moving-average filters and was first developed by the NBER. This family involves the X-11 ARIMA and X-12 ARIMA techniques. The other is the TRAMO/SEATS program, a model-based approach developed by the Central Bank of Spain. The seasonal decomposition procedures of these techniques, the special effects they handle such as trading-day and calendar effects, their advantages and disadvantages, and also their forecasting performance will be discussed in this paper.
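The moving-average filters at the heart of the X-11 family can be illustrated with a toy additive decomposition of a quarterly series. This is only a sketch of the idea, not the actual X-11 or TRAMO/SEATS algorithm:

```python
def centered_ma(series, period=4):
    """2x4 centered moving average used to estimate the trend of a
    quarterly series; ends are left as None."""
    half = period // 2
    trend = [None] * len(series)
    for i in range(half, len(series) - half):
        window = series[i - half:i + half + 1]
        # the two end points of the (period+1)-term window get half weight
        trend[i] = (window[0] / 2 + sum(window[1:-1]) + window[-1] / 2) / period
    return trend

def seasonal_factors(series, period=4):
    """Average detrended value per quarter (additive model); the factors
    are normalised so they sum to zero."""
    trend = centered_ma(series, period)
    detrended = [[] for _ in range(period)]
    for i, (y, t) in enumerate(zip(series, trend)):
        if t is not None:
            detrended[i % period].append(y - t)
    raw = [sum(v) / len(v) for v in detrended]
    mean = sum(raw) / period
    return [f - mean for f in raw]
```

For a series with a constant level and a repeating quarterly pattern, the recovered factors reproduce that pattern exactly.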

  6. Statistical reporting in the "Clujul Medical" journal.

    Science.gov (United States)

    LeucuȚa, Daniel-Corneliu; Drugan, Tudor; AchimaȘ, Andrei

    2015-01-01

    Medical research needs statistical analyses to understand the reality of variable phenomena. There are numerous studies showing poor statistical reporting in many journals with different rankings, in different countries. Our aim was to assess the reporting of statistical analyses in original papers published in the Clujul Medical journal in the year 2014. All original articles published in Clujul Medical in 2014 were assessed, mainly using the Statistical Analyses and Methods in the Published Literature guidelines. The most important issues found were under-reporting of assumption checking, of differences between groups or measures of association, and of confidence intervals for the primary outcomes, as well as errors in the choice of statistical test or of descriptive statistic for several analyses. These results are similar to those of studies assessing other journals worldwide. Statistical reporting in Clujul Medical, as in other journals, has to be improved.

  7. Statistical Neurodynamics.

    Science.gov (United States)

    Paine, Gregory Harold

    1982-03-01

    The primary objective of the thesis is to explore the dynamical properties of small nerve networks by means of the methods of statistical mechanics. To this end, a general formalism is developed and applied to elementary groupings of model neurons which are driven by either constant (steady state) or nonconstant (nonsteady state) forces. Neuronal models described by a system of coupled, nonlinear, first-order, ordinary differential equations are considered. A linearized form of the neuronal equations is studied in detail. A Lagrange function corresponding to the linear neural network is constructed which, through a Legendre transformation, provides a constant of motion. By invoking the Maximum-Entropy Principle with the single integral of motion as a constraint, a probability distribution function for the network in a steady state can be obtained. The formalism is implemented for some simple networks driven by a constant force; accordingly, the analysis focuses on a study of fluctuations about the steady state. In particular, a network composed of N noninteracting neurons, termed Free Thinkers, is considered in detail, with a view to interpretation and numerical estimation of the Lagrange multiplier corresponding to the constant of motion. As an archetypical example of a net of interacting neurons, the classical neural oscillator, consisting of two mutually inhibitory neurons, is investigated. It is further shown that in the case of a network driven by a nonconstant force, the Maximum-Entropy Principle can be applied to determine a probability distribution functional describing the network in a nonsteady state. The above examples are reconsidered with nonconstant driving forces which produce small deviations from the steady state. Numerical studies are performed on simplified models of two physical systems: the starfish central nervous system and the mammalian olfactory bulb. Discussions are given as to how statistical neurodynamics can be used to gain a better

  8. Search Databases and Statistics

    DEFF Research Database (Denmark)

    Refsgaard, Jan C; Munk, Stephanie; Jensen, Lars J

    2016-01-01

    Advances in mass spectrometric instrumentation in the past 15 years have resulted in an explosion in the raw data yield from typical phosphoproteomics workflows. This poses the challenge of confidently identifying peptide sequences, localizing phosphosites to proteins and quantifying these from...... searches. Additionally, careful filtering and use of appropriate statistical tests on the output datasets affects the quality of all downstream analyses and interpretation of the data. Our considerations and general practices on these aspects of phosphoproteomics data processing are presented here....
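Careful filtering of large search outputs, as discussed above, usually involves controlling the false discovery rate across many thousands of tests. A minimal sketch of the standard Benjamini-Hochberg adjustment (my illustration, not code from the chapter):

```python
def benjamini_hochberg(pvalues):
    """Return BH-adjusted p-values (q-values), preserving input order."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices by p-value
    adjusted = [0.0] * m
    prev = 1.0
    # step-up: walk from the largest p-value down, enforcing monotonicity
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * m / rank)
        adjusted[i] = q
        prev = q
    return adjusted
```

Identifications whose q-value falls below the chosen FDR threshold (e.g. 0.01) are retained for downstream analyses.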

  9. Intuitive introductory statistics

    CERN Document Server

    Wolfe, Douglas A

    2017-01-01

    This textbook is designed to give an engaging introduction to statistics and the art of data analysis. The unique scope includes, but also goes beyond, classical methodology associated with the normal distribution. What if the normal model is not valid for a particular data set? This cutting-edge approach provides the alternatives. It is an introduction to the world and possibilities of statistics that uses exercises, computer analyses, and simulations throughout the core lessons. These elementary statistical methods are intuitive. Counting and ranking features prominently in the text. Nonparametric methods, for instance, are often based on counts and ranks and are very easy to integrate into an introductory course. The ease of computation with advanced calculators and statistical software, both of which factor into this text, allows important techniques to be introduced earlier in the study of statistics. This book's novel scope also includes measuring symmetry with Walsh averages, finding a nonp...
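The Walsh averages mentioned in the description are all pairwise means of a sample; their median is the Hodges-Lehmann location estimate, a classic counting-and-ranking statistic. A minimal sketch:

```python
import statistics

def walsh_averages(sample):
    """All pairwise averages (x_i + x_j) / 2 for i <= j."""
    return [(sample[i] + sample[j]) / 2
            for i in range(len(sample))
            for j in range(i, len(sample))]

def hodges_lehmann(sample):
    """One-sample Hodges-Lehmann location estimate:
    the median of the Walsh averages."""
    return statistics.median(walsh_averages(sample))
```

For a sample of size n there are n(n+1)/2 Walsh averages; their symmetry about the sample median is one way to probe symmetry of the underlying distribution.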

  10. Statistical and regression analyses of detected extrasolar systems

    Czech Academy of Sciences Publication Activity Database

    Pintr, Pavel; Peřinová, V.; Lukš, A.; Pathak, A.

    2013-01-01

    Roč. 75, č. 1 (2013), s. 37-45 ISSN 0032-0633 Institutional support: RVO:61389021 Keywords : Exoplanets * Kepler candidates * Regression analysis Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics Impact factor: 1.630, year: 2013 http://www.sciencedirect.com/science/article/pii/S0032063312003066

  11. SWORDS: A statistical tool for analysing large DNA sequences

    Indian Academy of Sciences (India)

    Unknown

    Unusual frequencies of certain DNA words in Escherichia coli and virus genomes and possible statistical and biological implications of such over- and under-representation of those words have been studied in the literature based on Markov chain models for DNA sequences (Phillips et al 1987a,b; Prum et al 1995; Leung.
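Over- and under-representation of DNA words is usually scored as the ratio of observed to expected counts under a background model. A toy sketch using a zeroth-order (base-composition) model, deliberately simpler than the Markov chain models cited above:

```python
from collections import Counter

def word_scores(seq, k=2):
    """Observed/expected ratio for each k-mer, with the expectation taken
    from base composition alone (zeroth-order model)."""
    n = len(seq)
    base_freq = {b: c / n for b, c in Counter(seq).items()}
    kmers = Counter(seq[i:i + k] for i in range(n - k + 1))
    total = n - k + 1                      # number of k-mer windows
    scores = {}
    for w, c in kmers.items():
        expected = total
        for b in w:
            expected *= base_freq[b]       # product of base frequencies
        scores[w] = c / expected
    return scores

print(word_scores("ATATATATAT"))   # 'AT' over-represented, 'AA'/'TT' absent
```

A first-order Markov model would instead condition each base on its predecessor, which corrects for dinucleotide bias when scoring longer words.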

  12. Statistical analyses of plume composition and deposited radionuclide mixture ratios

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, Terrence D.; Sallaberry, Cedric Jean-Marie; Eckert-Gallup, Aubrey Celia; Brito, Roxanne; Hunt, Brian D.; Osborn, Douglas.

    2014-01-01

    A proposed method is considered to classify the regions in the close neighborhood of selected measurements according to the ratio of two radionuclides measured from either a radioactive plume or a deposited radionuclide mixture. The subsequent associated locations are then considered in the area of interest with a representative ratio class. This method allows for a more comprehensive and meaningful understanding of the data sampled following a radiological incident.
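Classifying locations by the ratio of two measured radionuclides, as proposed above, amounts to binning the activity ratio. A minimal sketch with hypothetical activities and class boundaries:

```python
def ratio_class(activity_a, activity_b, edges=(0.5, 1.0, 2.0)):
    """Assign a measurement location to a ratio class by binning the
    activity ratio A/B against (hypothetical) class boundaries."""
    r = activity_a / activity_b
    for k, edge in enumerate(edges):
        if r < edge:
            return k
    return len(edges)

# Group sampled locations of a plume/deposition survey by ratio class.
samples = {"loc1": (1.2, 3.0), "loc2": (4.0, 2.0), "loc3": (0.9, 1.1)}
classes = {loc: ratio_class(a, b) for loc, (a, b) in samples.items()}
```

Neighbouring locations sharing a class can then be treated as one region with a representative ratio, as the abstract describes.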

  13. Practical Statistics for Particle Physics Analyses: Likelihoods (1/4)

    CERN Multimedia

    CERN. Geneva; Lyons, Louis

    2016-01-01

    This will be a 4-day series of 2-hour sessions as part of CERN's Academic Training Course. Each session will consist of a 1-hour lecture followed by one hour of practical computing, which will have exercises based on that day's lecture. While it is possible to follow just the lectures or just the computing exercises, we highly recommend that, because of the way this course is designed, participants come to both parts. In order to follow the hands-on exercise sessions, students need to bring their own laptops. The exercises will be run on a dedicated CERN Web notebook service, SWAN (swan.cern.ch), which is open to everybody holding a CERN computing account. The requirement to use the SWAN service is to have a CERN account and also to have access to CERNBox, the shared storage service at CERN. New users are invited to activate CERNBox beforehand by simply connecting to https://cernbox.cern.ch. A basic prior knowledge of ROOT and C++ is also recommended for participation in the practical session....

  14. Advanced Statistical Analyses to Reduce Inconsistency of Bond Strength Data.

    Science.gov (United States)

    Minamino, T; Mine, A; Shintani, A; Higashi, M; Kawaguchi-Uemura, A; Kabetani, T; Hagino, R; Imai, D; Tajiri, Y; Matsumoto, M; Yatani, H

    2017-11-01

    This study was designed to clarify the interrelationship of factors that affect the value of microtensile bond strength (µTBS), focusing on nondestructive testing by which information on the specimens can be stored and quantified. µTBS test specimens were prepared from 10 noncarious human molars. Six factors of µTBS test specimens were evaluated: presence of voids at the interface, X-ray absorption coefficient of resin, X-ray absorption coefficient of dentin, length of dentin part, size of adhesion area, and individual differences of teeth. All specimens were observed nondestructively by optical coherence tomography and micro-computed tomography before µTBS testing. After µTBS testing, the effect of these factors on µTBS data was analyzed by the general linear model, the linear mixed effects regression model, and the nonlinear regression model with 95% confidence intervals. By the general linear model, a significant difference in individual differences of teeth was observed (P < 0.001). A significantly positive correlation was shown between µTBS and length of dentin part (P < 0.001); however, there was no significant nonlinearity (P = 0.157). Moreover, a significantly negative correlation was observed between µTBS and size of adhesion area (P = 0.001), with significant nonlinearity (P = 0.014). No correlation was observed between µTBS and X-ray absorption coefficient of resin (P = 0.147), and there was no significant nonlinearity (P = 0.089). Additionally, a significantly positive correlation was observed between µTBS and X-ray absorption coefficient of dentin (P = 0.022), with significant nonlinearity (P = 0.036). A significant difference was also observed between the presence and absence of voids by linear mixed effects regression analysis. Our results showed correlations between various parameters of tooth specimens and µTBS data. To evaluate the performance of the adhesive more precisely, the effect of tooth variability and a method to reduce variation in bond strength values should also be considered.

  15. Hydrometeorological and Statistical Analyses of Heavy Rainfall in Midwestern USA

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Smith, J. A.; Krajewski, W. F.

    2012-01-01

    During the last two decades the mid-western states of the United States of America have been largely afflicted by heavy, flood-producing rainfall. Several of these storms seem to have similar hydrometeorological properties in terms of pattern, track, evolution, life cycle, clustering, etc., which raise...

  16. Using Perl for Statistics: Data Processing and Statistical Computing

    Directory of Open Access Journals (Sweden)

    Giovanni Baiocchi

    2004-05-01

    Full Text Available In this paper we show how Perl, an expressive and extensible high-level programming language, with network and object-oriented programming support, can be used in processing data for statistics and statistical computing. The paper is organized in two parts. In Part I, we introduce the Perl programming language, with particular emphasis on the features that distinguish it from conventional languages. Then, using practical examples, we demonstrate how Perl's distinguishing features make it particularly well suited to perform labor-intensive and sophisticated tasks ranging from the preparation of data to the writing of statistical reports. In Part II we show how Perl can be extended to perform statistical computations using modules and by "embedding" specialized statistical applications. We provide examples of how Perl can be used to do simple statistical analyses, perform complex statistical computations involving matrix algebra and numerical optimization, and make statistical computations more easily reproducible. We also investigate the numerical and statistical reliability of various Perl statistical modules. Important computing issues such as ease of use, speed of calculation, and efficient memory usage are also considered.

  17. As espécies de Croton L. sect. Cyclostigma Griseb. e Croton L. sect. Luntia (Raf.) G. L. Webster subsect. Matourenses G. L. Webster (Euphorbiaceae s.s.) ocorrentes na Amazônia brasileira = Species of Croton L. sect. Cyclostigma Griseb. and Croton L. sect. Luntia (Raf.) G. L. Webster subsect. Matourenses G. L. Webster (Euphorbiaceae s.s.) occurring within the Brazilian Amazon

    Directory of Open Access Journals (Sweden)

    Luiz Alberto Cavalcante Guimarães

    2010-09-01

    Full Text Available Como parte de uma revisão taxonômica das espécies de Croton L. na Amazônia brasileira, estudou-se as seguintes espécies de Croton sect. Cyclostigma Griseb. e Croton sect. Luntia (Raf.) G. L. Webster subsect. Matourenses G. L. Webster: Croton urucurana Baill., C. draconoides Müll. Arg., C. trombetensis R. Secco, P. E. Berry & N.A. Rosa, C. sampatik Müll. Arg., C. palanostigma Kl., C. pullei Lanj. e C. matourensis Aubl. O estudo foi baseado em trabalho de campo realizado nos Estados do Pará e Maranhão, e em material depositado nos herbários IAN, INPA, MG e RB, incluindo tipos. Algumas dessas espécies, como C. urucurana, C. draconoides, C. palanostigma e C. sampatik, são frequentemente encontradas nos herbários com identificações equivocadas. São discutidas a posição taxonômica das espécies nas seções e suas afinidades, e uma chave dicotômica e ilustrações foram elaboradas para um melhor entendimento dos táxons. As part of a taxonomic revision of the Brazilian Amazonian species of Croton L., the following species of Croton sect. Cyclostigma Griseb. and Croton sect. Luntia (Raf.) G. L. Webster subsect. Matourenses G. L. Webster were studied: Croton urucurana Baill., C. draconoides Müll. Arg., C. trombetensis R. Secco, P. E. Berry & N.A. Rosa, C. sampatik Müll. Arg., C. palanostigma Kl., C. pullei Lanj. and C. matourensis Aubl. This study was based on field work in the states of Pará and Maranhão and on material deposited in the herbaria IAN, INPA, MG and RB, including the types. Some of the species, such as C. urucurana, C. draconoides, C. palanostigma, and C. sampatik, are frequently misidentified in herbaria. The taxonomic position of the species within the sections and their affinities are discussed, and a dichotomous key and illustrations are provided for a better understanding of the taxa.

  18. Childhood Cancer Statistics

    Science.gov (United States)

    Childhood Cancer Statistics – Graphs and Infographics. Number of Diagnoses ...

  19. Nonparametric statistical methods using R

    CERN Document Server

    Kloke, John

    2014-01-01

    A Practical Guide to Implementing Nonparametric and Rank-Based ProceduresNonparametric Statistical Methods Using R covers traditional nonparametric methods and rank-based analyses, including estimation and inference for models ranging from simple location models to general linear and nonlinear models for uncorrelated and correlated responses. The authors emphasize applications and statistical computation. They illustrate the methods with many real and simulated data examples using R, including the packages Rfit and npsm.The book first gives an overview of the R language and basic statistical c
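Rank-based procedures like those the book covers start from the midranks of the combined sample. A minimal pure-Python sketch of the Wilcoxon rank-sum statistic (the book itself works in R with the Rfit and npsm packages):

```python
def rank_sum(x, y):
    """Wilcoxon rank-sum statistic W: the sum of the ranks of sample x
    within the combined sample, using midranks for ties."""
    combined = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        # average of the 1-based positions i+1 .. j occupied by this value
        ranks[combined[i]] = (i + 1 + j) / 2
        i = j
    return sum(ranks[v] for v in x)
```

W is then compared against its null distribution (all rank assignments equally likely) to test for a location shift between the two samples.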

  20. Students' attitudes towards learning statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Hamid, Mohd Rashid Ab; Zakaria, Roslinazairimah

    2015-05-01

    A positive attitude towards learning is vital in order to master the core content of the subject matter under study. This is no exception when learning statistics, especially at the university level. Therefore, this study investigates students' attitudes towards learning statistics. Six variables or constructs have been identified: affect, cognitive competence, value, difficulty, interest, and effort. The instrument used for the study is a questionnaire that was adopted and adapted from the reliable instrument of the Survey of Attitudes towards Statistics (SATS©). This study was conducted among engineering undergraduate students at one of the universities on the East Coast of Malaysia. The respondents consist of students from different faculties who were taking the applied statistics course. The results are analysed by descriptive analysis and contribute to a descriptive understanding of students' attitudes towards the teaching and learning process of statistics.

  1. MQSA National Statistics

    Science.gov (United States)

    ... but should level off with time. Archived Scorecard Statistics: 2018, 2017, 2016 ...

  2. Sampling, Probability Models and Statistical Reasoning Statistical ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...

  3. Applying statistics in behavioural research

    NARCIS (Netherlands)

    Ellis, J.L.

    2016-01-01

    Applying Statistics in Behavioural Research is written for undergraduate students in the behavioural sciences, such as Psychology, Pedagogy, Sociology and Ethology. The topics range from basic techniques, like correlation and t-tests, to moderately advanced analyses, like multiple regression and

  4. Classical Statistics and Statistical Learning in Imaging Neuroscience

    Science.gov (United States)

    Bzdok, Danilo

    2017-01-01

    Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
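Cross-validated out-of-sample prediction, mentioned above, rests on repeatedly splitting the data into training and test folds. A minimal sketch of k-fold index splitting (contiguous folds, no shuffling):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous, near-equal (train, test)
    pairs; every index lands in exactly one test fold."""
    folds = []
    # the first n % k folds get one extra element
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        folds.append((train, test))
        start += size
    return folds
```

A model is fitted on each training set and scored on the held-out test set; the average test score estimates out-of-sample performance, in contrast to the in-sample fit statistics of classical analyses.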

  5. Agile Systems Engineering-Kanban Scheduling Subsection

    Science.gov (United States)

    2017-03-10

    ... sufficiently to incorporate mechanisms to support the student's experiment. 7. Updated software is available through www.sercuarc.org. 8. One journal ... Indicators described by wi_dictionary: children ["1", "2", "3"] (list of children work items); Event fields: src_oc_id "1" (source OC id), dst_oc_id "2" ...

  6. Statistical ecology comes of age.

    Science.gov (United States)

    Gimenez, Olivier; Buckland, Stephen T; Morgan, Byron J T; Bez, Nicolas; Bertrand, Sophie; Choquet, Rémi; Dray, Stéphane; Etienne, Marie-Pierre; Fewster, Rachel; Gosselin, Frédéric; Mérigot, Bastien; Monestiez, Pascal; Morales, Juan M; Mortier, Frédéric; Munoz, François; Ovaskainen, Otso; Pavoine, Sandrine; Pradel, Roger; Schurr, Frank M; Thomas, Len; Thuiller, Wilfried; Trenkel, Verena; de Valpine, Perry; Rexstad, Eric

    2014-12-01

    The desire to predict the consequences of global environmental change has been the driver towards more realistic models embracing the variability and uncertainties inherent in ecology. Statistical ecology has gelled over the past decade as a discipline that moves away from describing patterns towards modelling the ecological processes that generate these patterns. Following the fourth International Statistical Ecology Conference (1-4 July 2014) in Montpellier, France, we analyse current trends in statistical ecology. Important advances in the analysis of individual movement, and in the modelling of population dynamics and species distributions, are made possible by the increasing use of hierarchical and hidden process models. Exciting research perspectives include the development of methods to interpret citizen science data and of efficient, flexible computational algorithms for model fitting. Statistical ecology has come of age: it now provides a general and mathematically rigorous framework linking ecological theory and empirical data.

  7. Algebraic statistics computational commutative algebra in statistics

    CERN Document Server

    Pistone, Giovanni; Wynn, Henry P

    2000-01-01

    Written by pioneers in this exciting new field, Algebraic Statistics introduces the application of polynomial algebra to experimental design, discrete probability, and statistics. It begins with an introduction to Gröbner bases and a thorough description of their applications to experimental design. A special chapter covers the binary case with new application to coherent systems in reliability and two level factorial designs. The work paves the way, in the last two chapters, for the application of computer algebra to discrete probability and statistical modelling through the important concept of an algebraic statistical model.As the first book on the subject, Algebraic Statistics presents many opportunities for spin-off research and applications and should become a landmark work welcomed by both the statistical community and its relatives in mathematics and computer science.

  8. What are the statistics in statistical learning?

    Science.gov (United States)

    Holt, Lori L.; Lotto, Andrew J.

    2003-10-01

    The idea that speech perception is shaped by the statistical structure of the input is gaining wide enthusiasm and growing empirical support. Nonetheless, statistics and statistical learning are broad terms with many possible interpretations and, perhaps, many potential underlying mechanisms. In order to define the role of statistics in speech perception mechanistically, we will need to more precisely define the statistics of statistical learning and examine similarities and differences across subgroups. In this talk, we examine learning of four types of information: (1) acoustic variance that is defining for contrastive categories, (2) the correlation between acoustic attributes or linguistic features, (3) the probability or frequency of events or a series of events, (4) the shape of input distributions. We present representative data from online speech perception and speech development and discuss inter-relationships among the subgroups. [Work supported by NSF, NIH and the James S. McDonnell Foundation.]

  9. Blood Facts and Statistics

    Science.gov (United States)

    ... Blood > Blood Facts and Statistics Blood Facts and Statistics Facts about blood needs Facts about the blood ... to Top Learn About Blood Blood Facts and Statistics Blood Components Whole Blood and Red Blood Cells ...

  10. Adrenal Gland Tumors: Statistics

    Science.gov (United States)

    ... Gland Tumor: Statistics Request Permissions Adrenal Gland Tumor: Statistics Approved by the Cancer.Net Editorial Board , 03/ ... primary adrenal gland tumor is very uncommon. Exact statistics are not available for this type of tumor ...

  11. State transportation statistics 2009

    Science.gov (United States)

    2009-01-01

    The Bureau of Transportation Statistics (BTS), a part of DOT's Research and Innovative Technology Administration (RITA), presents State Transportation Statistics 2009, a statistical profile of transportation in the 50 states and the District ...

  12. Neuroendocrine Tumor: Statistics

    Science.gov (United States)

    ... Tumor > Neuroendocrine Tumor: Statistics Request Permissions Neuroendocrine Tumor: Statistics Approved by the Cancer.Net Editorial Board , 11/ ... the body. It is important to remember that statistics on the survival rates for people with a ...

  13. The foundations of statistics

    CERN Document Server

    Savage, Leonard J

    1972-01-01

    Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.

  14. Generalized Fractional Statistics

    OpenAIRE

    Kaniadakis, G.; A. Lavagno(Politecnico di Torino and INFN Sezione di Torino, Torino Italy); Quarati, P.

    1996-01-01

    We link, by means of a semiclassical approach, the fractional statistics of particles obeying the Haldane exclusion principle to the Tsallis statistics and derive a generalized quantum entropy and its associated statistics.

  15. Usage statistics and demonstrator services

    CERN Multimedia

    CERN. Geneva

    2007-01-01

    An understanding of the use of repositories and their contents is clearly desirable for authors and repository managers alike, as well as those who are analysing the state of scholarly communications. A number of individual initiatives have produced statistics of various kinds for individual repositories, but the real challenge is to produce statistics that can be collected and compared transparently on a global scale. This presentation details the steps to be taken to address the issues to attain this capability.

  16. Statistical methods in spatial genetics

    DEFF Research Database (Denmark)

    Guillot, Gilles; Leblois, Raphael; Coulon, Aurelie

    2009-01-01

    The joint analysis of spatial and genetic data is rapidly becoming the norm in population genetics. More and more studies explicitly describe and quantify the spatial organization of genetic variation and try to relate it to underlying ecological processes. As it has become increasingly difficult...... to keep abreast with the latest methodological developments, we review the statistical toolbox available to analyse population genetic data in a spatially explicit framework. We mostly focus on statistical concepts but also discuss practical aspects of the analytical methods, highlighting not only...

  17. Stan: Statistical inference

    Science.gov (United States)

    Stan Development Team

    2018-01-01

    Stan facilitates statistical inference at the frontiers of applied statistics and provides both a modeling language for specifying complex statistical models and a library of statistical algorithms for computing inferences with those models. These components are exposed through interfaces in environments such as R, Python, and the command line.

  18. Practical Statistics for Environmental and Biological Scientists

    CERN Document Server

    Townend, John

    2012-01-01

    All students and researchers in environmental and biological sciences require statistical methods at some stage of their work. Many have a preconception that statistics are difficult and unpleasant and find that the textbooks available are difficult to understand. Practical Statistics for Environmental and Biological Scientists provides a concise, user-friendly, non-technical introduction to statistics. The book covers planning and designing an experiment, how to analyse and present data, and the limitations and assumptions of each statistical method. The text does not refer to a specific comp

  19. Statistics using R

    CERN Document Server

    Purohit, Sudha G; Deshmukh, Shailaja R

    2015-01-01

    STATISTICS USING R will be useful at different levels, from an undergraduate course in statistics, through graduate courses in biological sciences, engineering, management and so on. The book introduces statistical terminology and defines it for the benefit of a novice. For a practicing statistician, it will serve as a guide to R language for statistical analysis. For a researcher, it is a dual guide, simultaneously explaining appropriate statistical methods for the problems at hand and indicating how these methods can be implemented using the R language. For a software developer, it is a guide in a variety of statistical methods for development of a suite of statistical procedures.

  20. A Statistical Analysis of Cryptocurrencies

    OpenAIRE

    Stephen Chan; Jeffrey Chu; Saralees Nadarajah; Joerg Osterrieder

    2017-01-01

    We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal, however, no single distribution fits well jointly to all the cryptocurrencies analysed. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic...
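
    A first check of the non-normality the study reports is to compute sample skewness and excess kurtosis, which are both near zero for normal data. The sketch below is an assumption-laden illustration, not the study's code: the two return series are simulated stand-ins (a normal series and a heavy-tailed scale mixture), not actual cryptocurrency data, and no generalized hyperbolic fitting is attempted.

```python
import math
import random

def skew_kurtosis(returns):
    """Sample skewness and excess kurtosis; both are ~0 for a normal
    distribution, so large values flag non-normal returns."""
    n = len(returns)
    m = sum(returns) / n
    var = sum((r - m) ** 2 for r in returns) / n
    sd = math.sqrt(var)
    skew = sum((r - m) ** 3 for r in returns) / (n * sd ** 3)
    kurt = sum((r - m) ** 4 for r in returns) / (n * var ** 2) - 3.0
    return skew, kurt

rng = random.Random(7)
# stand-ins for daily returns: a normal series, and a heavy-tailed
# scale mixture (5% of days drawn with 5x the usual volatility)
normal_like = [rng.gauss(0.0, 0.015) for _ in range(20000)]
heavy_tailed = [rng.gauss(0.0, 0.05 if rng.random() < 0.05 else 0.01)
                for _ in range(20000)]
print("normal-like excess kurtosis:", skew_kurtosis(normal_like)[1])
print("heavy-tailed excess kurtosis:", skew_kurtosis(heavy_tailed)[1])
```

    The mixture series shows a large positive excess kurtosis, the hallmark of the fat tails that make a single normal distribution a poor fit for such returns.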

  1. Modern applied statistics with S-plus

    CERN Document Server

    Venables, W N

    1994-01-01

    S-Plus is a powerful environment for statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-Plus to perform statistical analyses and provides both an introduction to the use of S-Plus and a course in modern statistical methods. The aim of the book is to show how to use S-Plus as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-Plus, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets.

  2. Statistical Literacy in the Data Science Workplace

    Science.gov (United States)

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  3. The CAMAC logic state analyser

    CERN Document Server

    Centro, Sandro

    1981-01-01

    Summary form only given, as follows. Large electronic experiments using distributed processors for parallel readout and data reduction need to analyse the data acquisition components status and monitor dead time constants of each active readout module and processor. For the UA1 experiment, a microprocessor-based CAMAC logic status analyser (CLSA) has been developed in order to implement these functions autonomously. CLSA is a single unit CAMAC module, able to record, up to 256 times, the logic status of 32 TTL inputs gated by a common clock, internal or external, with a maximum frequency of 2 MHz. The data stored in the internal CLSA memory can be read directly via CAMAC function or preprocessed by CLSA 6800 microprocessor. The 6800 resident firmware (4Kbyte) expands the module features to include an interactive monitor, data recording control, data reduction and histogram accumulation with statistics parameter evaluation. The microprocessor memory and the resident firmware can be externally extended using st...

  4. Statistics For Dummies

    CERN Document Server

    Rumsey, Deborah

    2011-01-01

    The fun and easy way to get down to business with statistics Stymied by statistics? No fear: this friendly guide offers clear, practical explanations of statistical ideas, techniques, formulas, and calculations, with lots of examples that show you how these concepts apply to your everyday life. Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more.Tracks to a typical first semester statistics cou

  5. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry.Explores

  6. First-Generation Transgenic Plants and Statistics

    NARCIS (Netherlands)

    Nap, Jan-Peter; Keizer, Paul; Jansen, Ritsert

    1993-01-01

    The statistical analyses of populations of first-generation transgenic plants are commonly based on mean and variance and generally require a test of normality. Since in many cases the assumptions of normality are not met, analyses can result in erroneous conclusions. Transformation of data to

  7. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...

  8. CMS Program Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Office of Enterprise Data and Analytics has developed CMS Program Statistics, which includes detailed summary statistics on national health care, Medicare...

  9. Recreational Boating Statistics 2012

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  10. Recreational Boating Statistics 2011

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  11. Recreational Boating Statistics 2013

    Data.gov (United States)

    Department of Homeland Security — Every year, the USCG compiles statistics on reported recreational boating accidents. These statistics are derived from accident reports that are filed by the owners...

  12. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  13. Mathematical and statistical analysis

    Science.gov (United States)

    Houston, A. Glen

    1988-01-01

    The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.

  14. National transportation statistics 2011

    Science.gov (United States)

    2011-04-01

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record, ...

  15. Principles of applied statistics

    National Research Council Canada - National Science Library

    Cox, D. R; Donnelly, Christl A

    2011-01-01

    .... David Cox and Christl Donnelly distil decades of scientific experience into usable principles for the successful application of statistics, showing how good statistical strategy shapes every stage of an investigation...

  16. Dealing with statistics what you need to know

    CERN Document Server

    Brown, Reva Berman

    2007-01-01

    A guide to the essential statistical skills needed for success in assignments, projects or dissertations. It explains why it is impossible to avoid using statistics in analysing data. It also describes the language of statistics to make it easier to understand the various terms used for statistical techniques.

  17. Ethics in Statistics

    Science.gov (United States)

    Lenard, Christopher; McCarthy, Sally; Mills, Terence

    2014-01-01

    There are many different aspects of statistics. Statistics involves mathematics, computing, and applications to almost every field of endeavour. Each aspect provides an opportunity to spark someone's interest in the subject. In this paper we discuss some ethical aspects of statistics, and describe how an introduction to ethics has been…

  18. Fundamental concepts in statistics: elucidation and illustration.

    Science.gov (United States)

    Curran-Everett, D; Taylor, S; Kafadar, K

    1998-09-01

    Fundamental concepts in statistics form the cornerstone of scientific inquiry. If we fail to understand fully these fundamental concepts, then the scientific conclusions we reach are more likely to be wrong. This is more than supposition: for 60 years, statisticians have warned that the scientific literature harbors misunderstandings about basic statistical concepts. Original articles published in 1996 by the American Physiological Society's journals fared no better in their handling of basic statistical concepts. In this review, we summarize the two main scientific uses of statistics: hypothesis testing and estimation. Most scientists use statistics solely for hypothesis testing; often, however, estimation is more useful. We also illustrate the concepts of variability and uncertainty, and we demonstrate the essential distinction between statistical significance and scientific importance. An understanding of concepts such as variability, uncertainty, and significance is necessary, but it is not sufficient; we show also that the numerical results of statistical analyses have limitations.
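
    The review's distinction between statistical significance and scientific importance can be made concrete with a small pure-Python sketch (not taken from the article): with a very large sample, a tiny true effect is highly "significant", while estimation (the mean difference and its interval) shows how small it actually is. The simulated groups and the large-sample 1.96 quantile are assumptions made for the example.

```python
import math
import random

def welch_summary(a, b):
    """Welch t statistic plus a large-sample 95% CI for the mean
    difference: the test asks 'is there an effect?', the interval
    asks 'how big is it?'."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    diff = mb - ma
    return diff / se, (diff - 1.96 * se, diff + 1.96 * se)

rng = random.Random(3)
control = [rng.gauss(100.0, 10.0) for _ in range(20000)]
treated = [rng.gauss(100.5, 10.0) for _ in range(20000)]  # tiny true shift
t, ci = welch_summary(control, treated)
print("t =", round(t, 1), " 95% CI for difference:", [round(x, 2) for x in ci])
```

    Here |t| is well past any conventional threshold, yet the interval pins the effect near half a unit against a standard deviation of ten: statistically significant, but arguably unimportant, which is exactly why estimation is often more useful than testing alone.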

  19. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data (100%) and the lower tail (30%) of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. 8 different databases are analysed. The results show that the 2-parameter Weibull (and Normal) distributions give the best fits to the data available, especially if tail fits are used, whereas the LogNormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used.
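
    A maximum-likelihood fit of a 2-parameter Weibull, of the kind used in the record above, can be sketched in pure Python with a damped fixed-point iteration for the shape parameter. This is an illustration under stated assumptions, not the study's code or data: the synthetic "strength" sample and its true parameters (shape 4, scale 30) are invented for the example.

```python
import math
import random

def weibull_mle(xs, k=1.0, iters=100):
    """2-parameter Weibull fit by maximum likelihood: damped
    fixed-point iteration for the shape k, then the closed-form
    scale lam evaluated at the fitted k."""
    n = len(xs)
    logs = [math.log(x) for x in xs]
    mean_log = sum(logs) / n
    for _ in range(iters):
        s = sum(x ** k for x in xs)
        s_log = sum(x ** k * lx for x, lx in zip(xs, logs))
        # likelihood equation for k, damped to aid convergence
        k = 0.5 * (k + 1.0 / (s_log / s - mean_log))
    lam = (sum(x ** k for x in xs) / n) ** (1.0 / k)
    return k, lam

# synthetic "strength" sample from a known Weibull(shape=4, scale=30),
# generated by inverse-transform sampling
rng = random.Random(11)
data = [30.0 * (-math.log(1.0 - rng.random())) ** 0.25 for _ in range(5000)]
shape, scale = weibull_mle(data)
print("fitted shape:", round(shape, 2), " fitted scale:", round(scale, 2))
```

    With 5000 simulated specimens the fitted shape and scale land close to the true values; a tail fit, as used in the study, would instead restrict the sums to the lower 30% of the ordered data.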

  20. Sproglig Metode og Analyse

    DEFF Research Database (Denmark)

    le Fevre Jakobsen, Bjarne

    The publication contains exercise materials, texts, PowerPoint presentations and handouts for the course Sproglig Metode og Analyse (Linguistic Method and Analysis) in the BA programme and as an elective in Danish/Nordic Studies, 2010-2011.

  1. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2010-01-01

    Overall, this remains a very fine book suitable for a graduate-level course in nonparametric statistics. I recommend it for all people interested in learning the basic ideas of nonparametric statistical inference.-Eugenia Stoimenova, Journal of Applied Statistics, June 2012… one of the best books available for a graduate (or advanced undergraduate) text for a theory course on nonparametric statistics. … a very well-written and organized book on nonparametric statistics, especially useful and recommended for teachers and graduate students.-Biometrics, 67, September 2011This excellently presente

  2. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  3. Head First Statistics

    CERN Document Server

    Griffiths, Dawn

    2009-01-01

    Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics

  4. Statistics & probability for dummies

    CERN Document Server

    Rumsey, Deborah J

    2013-01-01

    Two complete eBooks for one low price! Created and compiled by the publisher, this Statistics I and Statistics II bundle brings together two math titles in one, e-only bundle. With this special bundle, you'll get the complete text of the following two titles: Statistics For Dummies, 2nd Edition  Statistics For Dummies shows you how to interpret and critique graphs and charts, determine the odds with probability, guesstimate with confidence using confidence intervals, set up and carry out a hypothesis test, compute statistical formulas, and more. Tra

  5. Statistics for Research

    CERN Document Server

    Dowdy, Shirley; Chilko, Daniel

    2011-01-01

    Praise for the Second Edition "Statistics for Research has other fine qualities besides superior organization. The examples and the statistical methods are laid out with unusual clarity by the simple device of using special formats for each. The book was written with great care and is extremely user-friendly."-The UMAP Journal Although the goals and procedures of statistical research have changed little since the Second Edition of Statistics for Research was published, the almost universal availability of personal computers and statistical computing application packages have made it possible f

  6. Estimation and inferential statistics

    CERN Document Server

    Sahu, Pradip Kumar; Das, Ajit Kumar

    2015-01-01

    This book focuses on the meaning of statistical inference and estimation. Statistical inference is concerned with the problems of estimation of population parameters and testing hypotheses. Primarily aimed at undergraduate and postgraduate students of statistics, the book is also useful to professionals and researchers in statistical, medical, social and other disciplines. It discusses current methodological techniques used in statistics and related interdisciplinary areas. Every concept is supported with relevant research examples to help readers to find the most suitable application. Statistical tools have been presented by using real-life examples, removing the “fear factor” usually associated with this complex subject. The book will help readers to discover diverse perspectives of statistical theory followed by relevant worked-out examples. Keeping in mind the needs of readers, as well as constantly changing scenarios, the material is presented in an easy-to-understand form.

  7. Lectures on algebraic statistics

    CERN Document Server

    Drton, Mathias; Sullivant, Seth

    2009-01-01

    How does an algebraic geometer studying secant varieties further the understanding of hypothesis tests in statistics? Why would a statistician working on factor analysis raise open problems about determinantal varieties? Connections of this type are at the heart of the new field of "algebraic statistics". In this field, mathematicians and statisticians come together to solve statistical inference problems using concepts from algebraic geometry as well as related computational and combinatorial techniques. The goal of these lectures is to introduce newcomers from the different camps to algebraic statistics. The introduction will be centered around the following three observations: many important statistical models correspond to algebraic or semi-algebraic sets of parameters; the geometry of these parameter spaces determines the behaviour of widely used statistical inference procedures; computational algebraic geometry can be used to study parameter spaces and other features of statistical models.

  8. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  9. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever increasing ocean of information, everybody will agree to that. We build sophisticated strategies to govern this information: design data models, develop infrastructures for data sharing, building tool for data analysis. Statistical datasets curated by National

  10. National Statistical Commission and Indian Official Statistics

    Indian Academy of Sciences (India)

    T J Rao, C. R. Rao Advanced Institute of Mathematics, Statistics and Computer Science (AIMSCS), University of Hyderabad Campus, Central University Post Office, Prof. C. R. Rao Road, Hyderabad 500 046, AP, India. Resonance – Journal of Science Education, Vol. 22, Issue 12.

  11. National Statistical Commission and Indian Official Statistics*

    Indian Academy of Sciences (India)

    IAS Admin

    The Commission also stresses the importance of setting up a Methodological Study Unit to regularly undertake studies for bringing in improvements in the survey methodologies. The importance of a sound official statistical system in any country is well understood. Efficient governance depends largely on timely, accurate and ...

  12. Multivariate statistical methods a first course

    CERN Document Server

    Marcoulides, George A

    2014-01-01

    Multivariate statistics refer to an assortment of statistical methods that have been developed to handle situations in which multiple variables or measures are involved. Any analysis of more than two variables or measures can loosely be considered a multivariate statistical analysis. An introductory text for students learning multivariate statistical methods for the first time, this book keeps mathematical details to a minimum while conveying the basic principles. One of the principal strategies used throughout the book--in addition to the presentation of actual data analyses--is poin

  13. Illustrating the practice of statistics

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Christina A [Los Alamos National Laboratory; Hamada, Michael S [Los Alamos National Laboratory

    2009-01-01

    The practice of statistics involves analyzing data and planning data collection schemes to answer scientific questions. Issues often arise with the data that must be dealt with and can lead to new procedures. In analyzing data, these issues can sometimes be addressed through the statistical models that are developed. Simulation can also be helpful in evaluating a new procedure. Moreover, simulation coupled with optimization can be used to plan a data collection scheme. The practice of statistics as just described is much more than just using a statistical package. In analyzing the data, it involves understanding the scientific problem and incorporating the scientist's knowledge. In modeling the data, it involves understanding how the data were collected and accounting for limitations of the data where possible. Moreover, the modeling is likely to be iterative by considering a series of models and evaluating the fit of these models. Designing a data collection scheme involves understanding the scientist's goal and staying within his/her budget in terms of time and the available resources. Consequently, a practicing statistician is faced with such tasks and requires skills and tools to do them quickly. We have written this article for students to provide a glimpse of the practice of statistics. To illustrate the practice of statistics, we consider a problem motivated by some precipitation data that our relative, Masaru Hamada, collected some years ago. We describe his rain gauge observational study in Section 2. We describe modeling and an initial analysis of the precipitation data in Section 3. In Section 4, we consider alternative analyses that address potential issues with the precipitation data. In Section 5, we consider the impact of incorporating additional information. We design a data collection scheme to illustrate the use of simulation and optimization in Section 6. We conclude this article in Section 7 with a discussion.
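
    The idea of using simulation plus optimization to plan a data collection scheme can be sketched as a Monte Carlo search for a sample size. This is a minimal pure-Python illustration, not the article's rain-gauge design: the two-sample z-test with known standard deviation, the 5% level, the half-sd effect, and the coarse grid of candidate sizes are all assumptions made for the example.

```python
import math
import random

def mc_power(n, effect, sd, sims=2000, seed=5):
    """Monte Carlo estimate of the power of a two-sample z-test
    (5% level, known sd) with n observations per group."""
    rng = random.Random(seed)
    se = sd * math.sqrt(2.0 / n)
    hits = 0
    for _ in range(sims):
        a = [rng.gauss(0.0, sd) for _ in range(n)]
        b = [rng.gauss(effect, sd) for _ in range(n)]
        d = sum(b) / n - sum(a) / n
        hits += abs(d / se) > 1.96  # reject at the 5% level
    return hits / sims

# smallest n on a coarse grid reaching 80% power for a half-sd effect
n_star = next(n for n in range(10, 201, 10) if mc_power(n, 0.5, 1.0) >= 0.8)
print("plan for about", n_star, "observations per group")
```

    The same pattern, simulate the planned analysis, then optimize the design over a grid subject to a budget, scales to far more realistic models than this z-test toy.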

  14. The statistical stability phenomenon

    CERN Document Server

    Gorban, Igor I

    2017-01-01

    This monograph investigates violations of statistical stability of physical events, variables, and processes and develops a new physical-mathematical theory taking into consideration such violations – the theory of hyper-random phenomena. There are five parts. The first describes the phenomenon of statistical stability and its features, and develops methods for detecting violations of statistical stability, in particular when data is limited. The second part presents several examples of real processes of different physical nature and demonstrates the violation of statistical stability over broad observation intervals. The third part outlines the mathematical foundations of the theory of hyper-random phenomena, while the fourth develops the foundations of the mathematical analysis of divergent and many-valued functions. The fifth part contains theoretical and experimental studies of statistical laws where there is violation of statistical stability. The monograph should be of particular interest to engineers...

  15. Statistical Physics An Introduction

    CERN Document Server

    Yoshioka, Daijiro

    2007-01-01

    This book provides a comprehensive presentation of the basics of statistical physics. The first part explains the essence of statistical physics and how it provides a bridge between microscopic and macroscopic phenomena, allowing one to derive quantities such as entropy. Here the author avoids going into details such as Liouville’s theorem or the ergodic theorem, which are difficult for beginners and unnecessary for the actual application of the statistical mechanics. In the second part, statistical mechanics is applied to various systems which, although they look different, share the same mathematical structure. In this way readers can deepen their understanding of statistical physics. The book also features applications to quantum dynamics, thermodynamics, the Ising model and the statistical dynamics of free spins.

  16. Guidelines for Statistical Testing

    OpenAIRE

    Strigini, L.; Littlewood, B.; European Space Agency

    1997-01-01

    This document provides an introduction to statistical testing. Statistical testing of software is here defined as testing in which the test cases are produced by a random process meant to produce different test cases with the same probabilities with which they would arise in actual use of the software. Statistical testing of software has these main advantages: for the purpose of reliability assessment and product acceptance, it supports directly estimates of reliability, and thus decisions on...

  17. Applied statistics for economists

    CERN Document Server

    Lewis, Margaret

    2012-01-01

    This book is an undergraduate text that introduces students to commonly-used statistical methods in economics. Using examples based on contemporary economic issues and readily-available data, it not only explains the mechanics of the various methods, it also guides students to connect statistical results to detailed economic interpretations. Because the goal is for students to be able to apply the statistical methods presented, online sources for economic data and directions for performing each task in Excel are also included.

  18. Equilibrium statistical mechanics

    CERN Document Server

    Mayer, J E

    1968-01-01

    The International Encyclopedia of Physical Chemistry and Chemical Physics, Volume 1: Equilibrium Statistical Mechanics covers the fundamental principles and the development of theoretical aspects of equilibrium statistical mechanics. Statistical mechanics is the study of the connection between the macroscopic behavior of bulk matter and the microscopic properties of its constituent atoms and molecules. This book contains eight chapters, and begins with a presentation of the master equation used for the calculation of the fundamental thermodynamic functions. The succeeding chapters highlight t

  19. Mathematical statistics with applications

    CERN Document Server

    Wackerly, Dennis D; Scheaffer, Richard L

    2008-01-01

    In their bestselling MATHEMATICAL STATISTICS WITH APPLICATIONS, premier authors Dennis Wackerly, William Mendenhall, and Richard L. Scheaffer present a solid foundation in statistical theory while conveying the relevance and importance of the theory in solving practical problems in the real world. The authors' use of practical applications and excellent exercises helps you discover the nature of statistics and understand its essential role in scientific research.

  20. Contributions to statistics

    CERN Document Server

    Mahalanobis, P C

    1965-01-01

    Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines latt

  1. Optimization techniques in statistics

    CERN Document Server

    Rustagi, Jagdish S

    1994-01-01

    Statistics help guide us to optimal decisions under uncertainty. A large variety of statistical problems are essentially solutions to optimization problems. The mathematical techniques of optimization are fundamental to statistical theory and practice. In this book, Jagdish Rustagi provides full-spectrum coverage of these methods, ranging from classical optimization and Lagrange multipliers, to numerical techniques using gradients or direct search, to linear, nonlinear, and dynamic programming using the Kuhn-Tucker conditions or the Pontryagin maximum principle. Variational methods and optimiza

  2. Equilibrium statistical mechanics

    CERN Document Server

    Jackson, E Atlee

    2000-01-01

    Ideal as an elementary introduction to equilibrium statistical mechanics, this volume covers both classical and quantum methodology for open and closed systems. Introductory chapters familiarize readers with probability and microscopic models of systems, while additional chapters describe the general derivation of the fundamental statistical mechanics relationships. The final chapter contains 16 sections, each dealing with a different application, ordered according to complexity, from classical through degenerate quantum statistical mechanics. Key features include an elementary introduction t

  3. Lectures on statistical mechanics

    CERN Document Server

    Bowler, M G

    1982-01-01

    Anyone dissatisfied with the almost ritual dullness of many 'standard' texts in statistical mechanics will be grateful for the lucid explanation and generally reassuring tone. Aimed at securing firm foundations for equilibrium statistical mechanics, topics of great subtlety are presented transparently and enthusiastically. Very little mathematical preparation is required beyond elementary calculus and prerequisites in physics are limited to some elementary classical thermodynamics. Suitable as a basis for a first course in statistical mechanics, the book is an ideal supplement to more convent

  4. Fundamental duties to be observed in the procedure of decommissioning industrial installations subject to licensing according to Para. 5 sub-section 3 of the German Emission Control Act (BImSchG). Die Grundpflichten bei der Einstellung des Betriebes genehmigungsbeduerftiger Anlagen gemaess Para. 5 Abs. 3 BImSchG

    Energy Technology Data Exchange (ETDEWEB)

    Dierkes, C.

    1994-01-01

    The main reason for adding Para. 5, sub-section 3 to the BImSchG, containing the fundamental duties to be observed by owners of installations in the decommissioning procedure and after, was the knowledge that a shutdown plant and site may still continue to be a source of hazards. The study at hand investigates to what extent this Para. 5 sub-section 3 BImSchG is a suitable instrument for managing the hazards and other problems occurring in connection with decommissioned installations. The applicability of the provisions is examined, also with a view to shutdowns restricted to parts of plant or in time, and the duties to be fulfilled by (former) plant owners are reviewed in detail. A major aspect is the question whether the legal provisions are an adequate instrument to provide for due disposal of long-standing pollution or long-lasting waste. The study also addresses the duty to guarantee proper treatment of recyclable wastes, and the responsibilities of former and current owners. (orig./HP)

  5. Digest of education statistics

    National Research Council Canada - National Science Library

    Contains information on a variety of subjects within the field of education statistics, including the number of schools and colleges, enrollments, teachers, graduates, educational attainment, finances...

  6. Annual Statistical Supplement, 2002

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2002 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  7. Annual Statistical Supplement, 2003

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2003 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  8. Annual Statistical Supplement, 2001

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2001 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  9. Annual Statistical Supplement, 2010

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2010 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  10. Annual Statistical Supplement, 2007

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2007 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  11. Annual Statistical Supplement, 2016

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2016 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  12. Annual Statistical Supplement, 2000

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2000 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  13. Statistics in a Nutshell

    CERN Document Server

    Boslaugh, Sarah

    2008-01-01

    Need to learn statistics as part of your job, or want some help passing a statistics course? Statistics in a Nutshell is a clear and concise introduction and reference that's perfect for anyone with no previous background in the subject. This book gives you a solid understanding of statistics without being too simple, yet without the numbing complexity of most college texts. You get a firm grasp of the fundamentals and a hands-on understanding of how to apply them before moving on to the more advanced material that follows. Each chapter presents you with easy-to-follow descriptions illustrat

  14. Annual Statistical Supplement, 2008

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2008 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  15. Annual Statistical Supplement, 2006

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2006 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  16. Annual Statistical Supplement, 2005

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2005 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  17. Annual Statistical Supplement, 2015

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2015 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  18. Statistics is Easy

    CERN Document Server

    Shasha, Dennis

    2010-01-01

    Statistics is the activity of inferring results about a population given a sample. Historically, statistics books assume an underlying distribution to the data (typically, the normal distribution) and derive results under that assumption. Unfortunately, in real life, one cannot normally be sure of the underlying distribution. For that reason, this book presents a distribution-independent approach to statistics based on a simple computational counting idea called resampling. This book explains the basic concepts of resampling, then systematically presents the standard statistical measures along
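
    The resampling idea described above can be sketched in a few lines of Python (a hypothetical illustration under assumed names, not code from the book): a percentile bootstrap confidence interval for the mean is obtained purely by counting over resampled datasets, with no distributional assumption.

    ```python
    import random
    import statistics

    def bootstrap_ci(data, stat=statistics.mean, n_resamples=2000, alpha=0.05, seed=0):
        """Percentile bootstrap CI for any statistic; no underlying distribution assumed."""
        rng = random.Random(seed)
        # Draw resamples of the same size, with replacement, and record the statistic.
        stats = sorted(stat(rng.choices(data, k=len(data))) for _ in range(n_resamples))
        lo = stats[int((alpha / 2) * n_resamples)]
        hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
        return lo, hi

    data = [4.1, 5.6, 3.8, 6.2, 5.0, 4.7, 5.9, 4.4]
    lo, hi = bootstrap_ci(data)
    ```

    Swapping `statistics.mean` for the median or any other statistic requires no new theory, which is precisely the appeal of the distribution-independent approach.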

  19. Annual Statistical Supplement, 2004

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2004 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  20. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.
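
    As an illustration of the worked-example format described above (a sketch with made-up data, not an entry from the book), here is the classic one-sample z-test for a mean when the population standard deviation is known:

    ```python
    import math
    import statistics

    def z_test_one_sample(sample, mu0, sigma):
        """Two-sided one-sample z-test: is the population mean equal to mu0,
        given a known population standard deviation sigma?"""
        n = len(sample)
        z = (statistics.mean(sample) - mu0) / (sigma / math.sqrt(n))
        # Standard normal CDF via the error function.
        phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
        p = 2.0 * (1.0 - phi(abs(z)))
        return z, p

    z, p = z_test_one_sample([102, 98, 101, 99, 103, 97], mu0=100, sigma=2.0)
    ```

    With these illustrative data the sample mean equals mu0 exactly, so z = 0 and p = 1; shifting the sample would move z away from zero and shrink p.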

  1. Understanding Computational Bayesian Statistics

    CERN Document Server

    Bolstad, William M

    2011-01-01

    A hands-on introduction to computational statistics from a Bayesian point of view Providing a solid grounding in statistics while uniquely covering the topics from a Bayesian perspective, Understanding Computational Bayesian Statistics successfully guides readers through this new, cutting-edge approach. With its hands-on treatment of the topic, the book shows how samples can be drawn from the posterior distribution when the formula giving its shape is all that is known, and how Bayesian inferences can be based on these samples from the posterior. These ideas are illustrated on common statistic
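
    The core idea, drawing samples from a posterior known only up to its normalizing constant, can be sketched with a random-walk Metropolis sampler (a generic illustration under assumed names, not the book's code):

    ```python
    import math
    import random

    def metropolis(log_unnorm_post, x0=0.0, n=5000, step=1.0, seed=42):
        """Random-walk Metropolis: sample from a density known only up to a constant."""
        rng = random.Random(seed)
        x, lp = x0, log_unnorm_post(x0)
        chain = []
        for _ in range(n):
            prop = x + rng.gauss(0.0, step)          # symmetric proposal
            lp_prop = log_unnorm_post(prop)
            # Accept with probability min(1, posterior ratio).
            if rng.random() < math.exp(min(0.0, lp_prop - lp)):
                x, lp = prop, lp_prop
            chain.append(x)
        return chain

    # Unnormalized log-density of a standard normal: the constant is never needed.
    chain = metropolis(lambda x: -0.5 * x * x)
    ```

    Inferences (means, intervals) are then computed directly from `chain`, which is the sense in which Bayesian conclusions can be "based on these samples from the posterior".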

  2. Record Statistics and Dynamics

    DEFF Research Database (Denmark)

    Sibani, Paolo; Jensen, Henrik J.

    2009-01-01

    The term record statistics covers the statistical properties of records within an ordered series of numerical data obtained from observations or measurements. A record within such a series is simply a value larger (or smaller) than all preceding values. The mathematical properties of records strongly...... fluctuations of e.g. the energy are able to push the system past some sort of ‘edge of stability’, inducing irreversible configurational changes, whose statistics then closely follows the statistics of record fluctuations....
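
    The definition quoted above, a record being a value larger than all preceding values, translates directly into code (a minimal sketch, not from the paper):

    ```python
    def records(series):
        """Return the (upper) records of a series: values exceeding every earlier value."""
        recs, best = [], float("-inf")
        for x in series:
            if x > best:          # strictly larger than all preceding values
                recs.append(x)
                best = x
        return recs

    # The first observation is always a record; ties are not new records.
    print(records([2, 1, 3, 3, 5]))  # → [2, 3, 5]
    ```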

  3. Principles of statistics

    CERN Document Server

    Bulmer, M G

    1979-01-01

    There are many textbooks which describe current methods of statistical analysis, while neglecting related theory. There are equally many advanced textbooks which delve into the far reaches of statistical theory, while bypassing practical applications. But between these two approaches is an unfilled gap, in which theory and practice merge at an intermediate level. Professor M. G. Bulmer's Principles of Statistics, originally published in 1965, was created to fill that need. The new, corrected Dover edition of Principles of Statistics makes this invaluable mid-level text available once again fo

  4. Annual Statistical Supplement, 2014

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2014 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  5. Annual Statistical Supplement, 2009

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2009 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  6. Annual Statistical Supplement, 2011

    Data.gov (United States)

    Social Security Administration — The Annual Statistical Supplement, 2011 includes the most comprehensive data available on the Social Security and Supplemental Security Income programs. More than...

  7. A Statistical Analysis of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Stephen Chan

    2017-05-01

    Full Text Available We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal; however, no single distribution fits well jointly to all the cryptocurrencies analysed. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian distribution, generalized t distribution, and Laplace distribution give good fits. The results are important for investment and risk management purposes.
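
    One of the paper's findings, that returns are clearly non-normal, can be checked on any price series with a few lines of stdlib Python (an illustrative sketch; the paper fits full parametric distributions rather than computing moments). Heavy-tailed returns show large positive excess kurtosis, whereas a normal sample stays near zero.

    ```python
    import math

    def log_returns(prices):
        """Daily log-returns from a price series."""
        return [math.log(b / a) for a, b in zip(prices, prices[1:])]

    def excess_kurtosis(xs):
        """Population excess kurtosis: 0 for a normal distribution, >0 for heavy tails."""
        n = len(xs)
        mu = sum(xs) / n
        m2 = sum((x - mu) ** 2 for x in xs) / n
        m4 = sum((x - mu) ** 4 for x in xs) / n
        return m4 / (m2 ** 2) - 3.0
    ```

    Applied as `excess_kurtosis(log_returns(prices))` to a real exchange-rate series, values well above zero are the moment-based signature of the non-normality the paper reports.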

  8. Practical statistics for educators

    CERN Document Server

    Ravid, Ruth

    2014-01-01

    Practical Statistics for Educators, Fifth Edition, is a clear and easy-to-follow text written specifically for education students in introductory statistics courses and in action research courses. It is also a valuable resource and guidebook for educational practitioners who wish to study their own settings.

  9. Fermi–Dirac Statistics

    Indian Academy of Sciences (India)

    IAS Admin

    Fermi–Dirac Statistics: Derivation and Consequences. S Chaturvedi and Shyamal Biswas. General Article, Resonance, January 2014, p. 57. Keywords: Fermi–Dirac statistics, identical and indistinguishable particles, Fermi gas. Historically, one of the first applications of the Fermi–Dirac distribution came from Fowler in the context of.

  10. Practical statistics simply explained

    CERN Document Server

    Langley, Dr Russell A

    1971-01-01

    For those who need to know statistics but shy away from math, this book teaches how to extract truth and draw valid conclusions from numerical data using logic and the philosophy of statistics rather than complex formulae. Lucid discussion of averages and scatter, investigation design, more. Problems with solutions.

  11. Thiele. Pioneer in statistics

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt

    This book studies the brilliant Danish 19th Century astronomer, T.N. Thiele who made important contributions to statistics, actuarial science, astronomy and mathematics. The most important of these contributions in statistics are translated into English for the first time, and the text includes c...

  12. Statistical methods in metabolomics.

    Science.gov (United States)

    Korman, Alexander; Oh, Amy; Raskind, Alexander; Banks, David

    2012-01-01

    Metabolomics is the relatively new field in bioinformatics that uses measurements on metabolite abundance as a tool for disease diagnosis and other medical purposes. Although closely related to proteomics, the statistical analysis is potentially simpler since biochemists have significantly more domain knowledge about metabolites. This chapter reviews the challenges that metabolomics poses in the areas of quality control, statistical metrology, and data mining.

  13. On Statistical Testing.

    Science.gov (United States)

    Huberty, Carl J.

    An approach to statistical testing, which combines Neyman-Pearson hypothesis testing and Fisher significance testing, is recommended. The use of P-values in this approach is discussed in some detail. The author also discusses some problems which are often found in introductory statistics textbooks. The problems involve the definitions of…

  14. Applied Statistics with SPSS

    Science.gov (United States)

    Huizingh, Eelko K. R. E.

    2007-01-01

    Accessibly written and easy to use, "Applied Statistics Using SPSS" is an all-in-one self-study guide to SPSS and do-it-yourself guide to statistics. What is unique about Eelko Huizingh's approach is that this book is based around the needs of undergraduate students embarking on their own research project, and its self-help style is designed to…

  15. Handbook of Spatial Statistics

    CERN Document Server

    Gelfand, Alan E

    2010-01-01

    Offers an introduction detailing the evolution of the field of spatial statistics. This title focuses on the three main branches of spatial statistics: continuous spatial variation (point referenced data); discrete spatial variation, including lattice and areal unit data; and, spatial point patterns.

  17. Enhancing statistical literacy

    NARCIS (Netherlands)

    Droogers, M.J.S.|info:eu-repo/dai/nl/413392252; Drijvers, P.H.M.|info:eu-repo/dai/nl/074302922

    2017-01-01

    Current secondary school statistics curricula focus on procedural knowledge and pay too little attention to statistical reasoning. As a result, students are not able to apply their knowledge to practice. In addition, education often targets the average student, which may lead to gifted students

  18. Fisher's Contributions to Statistics

    Indian Academy of Sciences (India)

    Fisher's Contributions to Statistics. T Krishnan. General Article, Resonance – Journal of Science Education, Volume 2, Issue 9, September 1997, pp. 32–37. Author affiliation: Computer Science Unit, Indian Statistical Institute, 203 B T Road, Calcutta 700 035, India.

  19. Reform in Statistical Education

    Science.gov (United States)

    Huck, Schuyler W.

    2007-01-01

    Two questions are considered in this article: (a) What should professionals in school psychology do in an effort to stay current with developments in applied statistics? (b) What should they do with their existing knowledge to move from surface understanding of statistics to deep understanding? Written for school psychologists who have completed…

  20. The Economic Cost of Homosexuality: Multilevel Analyses

    Science.gov (United States)

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  1. Understanding advanced statistical methods

    CERN Document Server

    Westfall, Peter

    2013-01-01

    Contents: Introduction: Probability, Statistics, and Science; Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models; Random Variables and Their Probability Distributions; Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus: Derivatives and Least Squares; More Calculus: Integrals and Cumulative Distribution Functions; Probability Calculation and Simulation; Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers; Identifying Distributions; Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimate...

  2. Introduction to Bayesian statistics

    CERN Document Server

    Bolstad, William M

    2017-01-01

    There is a strong upsurge in the use of Bayesian methods in applied statistical analysis, yet most introductory statistics texts only present frequentist methods. Bayesian statistics has many important advantages that students should learn about if they are going into fields where statistics will be used. In this Third Edition, four newly-added chapters address topics that reflect the rapid advances in the field of Bayesian statistics. The author continues to provide a Bayesian treatment of introductory statistical topics, such as scientific data gathering, discrete random variables, robust Bayesian methods, and Bayesian approaches to inference for discrete random variables, binomial proportion, Poisson, normal mean, and simple linear regression. In addition, newly-developing topics in the field are presented in four new chapters: Bayesian inference with unknown mean and variance; Bayesian inference for a Multivariate Normal mean vector; Bayesian inference for a Multiple Linear Regression Model; and Computati...

  3. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing. Here the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, control software including data capture...

  4. Subgroup analyses in cost-effectiveness analyses to support health technology assessments.

    Science.gov (United States)

    Fletcher, Christine; Chuang-Stein, Christy; Paget, Marie-Ange; Reid, Carol; Hawkins, Neil

    2014-01-01

    'Success' in drug development is bringing to patients a new medicine that has an acceptable benefit-risk profile and that is also cost-effective. Cost-effectiveness means that the incremental clinical benefit is deemed worth paying for by a healthcare system, and it has an important role in enabling manufacturers to get new medicines to patients as soon as possible following regulatory approval. Subgroup analyses are increasingly being utilised by decision-makers in the determination of the cost-effectiveness of new medicines when making recommendations. This paper highlights the statistical considerations when using subgroup analyses to support cost-effectiveness for a health technology assessment. The key principles recommended for subgroup analyses supporting clinical effectiveness published by Paget et al. are evaluated with respect to subgroup analyses supporting cost-effectiveness. A health technology assessment case study is included to highlight the importance of subgroup analyses when incorporated into cost-effectiveness analyses. In summary, we recommend planning subgroup analyses for cost-effectiveness analyses early in the drug development process and adhering to good statistical principles when using subgroup analyses in this context. In particular, we consider it important to provide transparency in how subgroups are defined, be able to demonstrate the robustness of the subgroup results and be able to quantify the uncertainty in the subgroup analyses of cost-effectiveness. Copyright © 2014 John Wiley & Sons, Ltd.

  5. Meta-analyses

    NARCIS (Netherlands)

    Hendriks, Maria A.; Luyten, Johannes W.; Scheerens, Jaap; Sleegers, P.J.C.; Scheerens, J

    2014-01-01

    In this chapter results of a research synthesis and quantitative meta-analyses of three facets of time effects in education are presented, namely time at school during regular lesson hours, homework, and extended learning time. The number of studies for these three facets of time that could be used

  6. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach...

  7. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set...

  8. Chromosome analyses in dogs.

    Science.gov (United States)

    Reimann-Berg, N; Bullerdiek, J; Murua Escobar, H; Nolte, I

    2012-01-01

    Cytogenetics is the study of normal and abnormal chromosomes. Every species is characterized by a given number of chromosomes that can be recognized by their specific shape. The chromosomes are arranged according to standard classification schemes for the respective species. While pre- and postnatal chromosome analyses investigate the constitutional karyotype, tumor cytogenetics is focused on the detection of clonally acquired, tumor-associated chromosome aberrations. Cytogenetic investigations in dogs are of great value especially for breeders dealing with fertility problems within their pedigrees, for veterinarians and last but not least for the dog owners. Dogs and humans share a variety of genetic diseases, including cancer. Thus, the dog has become an increasingly important model for genetic diseases. However, cytogenetic analyses of canine cells are complicated by the complex karyotype of the dog. Only 15 years ago, a standard classification scheme for the complete canine karyotype was established. For chromosome analyses of canine cells the same steps of chromosome preparation are used as in human cytogenetics. There are few reports about cytogenetic changes in non-neoplastic cells, involving predominantly the sex chromosomes. Cytogenetic analyses of different entities of canine tumors revealed that, comparable to human tumors, tumors of the dog are often characterized by clonal chromosome aberrations, which might be used as diagnostic and prognostic markers. The integration of modern techniques (molecular genetic approaches, adaptive computer programs) will facilitate and complete conventional cytogenetic studies. However, conventional cytogenetics is still irreplaceable.

  9. Report sensory analyses veal

    NARCIS (Netherlands)

    Veldman, M.; Schelvis-Smit, A.A.M.

    2005-01-01

    On behalf of a client of Animal Sciences Group, different varieties of veal were analyzed by both instrumental and sensory analyses. The sensory evaluation was performed with a sensory analytical panel in the period of 13th of May and 31st of May, 2005. The three varieties of veal were: young bull,

  10. Filmstil - teori og analyse

    DEFF Research Database (Denmark)

    Hansen, Lennard Højbjerg

    Film style decisively shapes our experience of a film. Yet when we talk about film, style, the way the moving images organize the narrative, receives considerably less attention than the film's plot. Filmstil - teori og analyse is a richly exemplified presentation, critique and further development of...

  11. Statistical physics; Physique statistique

    Energy Technology Data Exchange (ETDEWEB)

    Couture, L.; Zitoun, R. [Universite Pierre et Marie Curie, 75 - Paris (France)

    1992-12-31

    The foundations of statistical physics are set out. The Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac statistical models and their particular fields of application are presented. The statistical theory is then applied to different areas of physics: gas characteristics, paramagnetism, thermal properties of crystals and electronic properties of solids. A whole chapter is dedicated to helium and its characteristics, such as superfluidity; another deals with superconductivity. Superconductivity is presented both experimentally and theoretically. The Meissner effect and the Josephson effect are described, and the framework of BCS theory is outlined. (A.C.)
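
    For reference, the mean occupation numbers that distinguish the three statistics mentioned above are (standard results, not taken from the book):

    ```latex
    \bar{n}(\varepsilon) =
    \begin{cases}
      e^{-(\varepsilon-\mu)/k_B T} & \text{Maxwell--Boltzmann,}\\[4pt]
      \dfrac{1}{e^{(\varepsilon-\mu)/k_B T} - 1} & \text{Bose--Einstein,}\\[8pt]
      \dfrac{1}{e^{(\varepsilon-\mu)/k_B T} + 1} & \text{Fermi--Dirac.}
    \end{cases}
    ```

    All three coincide in the dilute, high-temperature limit where the exponential dominates the ±1 in the denominator.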

  12. Statistics a complete introduction

    CERN Document Server

    Graham, Alan

    2013-01-01

    Statistics: A Complete Introduction is the most comprehensive yet easy-to-use introduction to using Statistics. Written by a leading expert, this book will help you if you are studying for an important exam or essay, or if you simply want to improve your knowledge. The book covers all the key areas of Statistics including graphs, data interpretation, spreadsheets, regression, correlation and probability. Everything you will need is here in this one book. Each chapter includes not only an explanation of the knowledge and skills you need, but also worked examples and test questions.

  13. Statistical deception at work

    CERN Document Server

    Mauro, John

    2013-01-01

    Written to reveal statistical deceptions often thrust upon unsuspecting journalists, this book views the use of numbers from a public perspective. Illustrating how the statistical naivete of journalists often nourishes quantitative misinformation, the author's intent is to make journalists more critical appraisers of numerical data so that in reporting them they do not deceive the public. The book frequently uses actual reported examples of misused statistical data reported by mass media and describes how journalists can avoid being taken in by them. Because reports of survey findings seldom g

  14. Evolutionary Statistical Procedures

    CERN Document Server

    Baragona, Roberto; Poli, Irene

    2011-01-01

    This proposed text appears to be a good introduction to evolutionary computation for use in applied statistics research. The authors draw from a vast base of knowledge about the current literature in both the design of evolutionary algorithms and statistical techniques. Modern statistical research is on the threshold of solving increasingly complex problems in high dimensions, and the generalization of its methodology to parameters whose estimators do not follow mathematically simple distributions is underway. Many of these challenges involve optimizing functions for which analytic solutions a

  15. The nature of statistics

    CERN Document Server

    Wallis, W Allen

    2014-01-01

    Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. ""W. Allen Wallis and Harry V. Roberts have made statistics fascinating."" - The New York Times ""The authors have set out with considerable success, to write a text which would be of interest and value to the student who,

  16. Statistical Group Comparison

    CERN Document Server

    Liao, Tim Futing

    2011-01-01

    An incomparably useful examination of statistical methods for comparison. The nature of doing science, be it natural or social, inevitably calls for comparison. Statistical methods are at the heart of such comparison, for they not only help us gain understanding of the world around us but often define how our research is to be carried out. The need to compare between groups is best exemplified by experiments, which have clearly defined statistical methods. However, true experiments are not always possible. What complicates the matter more is a great deal of diversity in factors that are not inde

  17. AP statistics crash course

    CERN Document Server

    D'Alessio, Michael

    2012-01-01

    AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da

  18. The World of Statistics

    Directory of Open Access Journals (Sweden)

    NIS

    2014-06-01

    Full Text Available The International Workshop New Challenges for Statistical Software – The Use of R in Official Statistics was the second in a series of events dedicated to the use of the R Project in Romania and initiated by the R-omanian R team. We are pleased to announce that the 4th of April is the anniversary of the R-omanian R team as an R User Group. One year ago, on the 4th of April, the 1st workshop dedicated to the use of R took place – Workshop State-of-the-art statistical software commonly used in applied economics.

  19. Statistical innovations in diagnostic device evaluation.

    Science.gov (United States)

    Yu, Tinghui; Li, Qin; Gray, Gerry; Yue, Lilly Q

    2016-01-01

    Due to rapid technological development, innovations in diagnostic devices are proceeding at an extremely fast pace. Accordingly, the needs for adopting innovative statistical methods have emerged in the evaluation of diagnostic devices. Statisticians in the Center for Devices and Radiological Health at the Food and Drug Administration have provided leadership in implementing statistical innovations. The innovations discussed in this article include: the adoption of bootstrap and Jackknife methods, the implementation of appropriate multiple reader multiple case study design, the application of robustness analyses for missing data, and the development of study designs and data analyses for companion diagnostics.
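Among the methods this abstract mentions, the bootstrap is the most self-contained to illustrate. A minimal sketch of a percentile bootstrap confidence interval for a diagnostic accuracy estimate; the data, sample size and seed are all invented for illustration:

```python
import random

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical diagnostic results: 1 = correct call, 0 = incorrect call.
results = [1] * 85 + [0] * 15
mean = lambda xs: sum(xs) / len(xs)
low, high = bootstrap_ci(results, mean)
print(f"observed accuracy {mean(results):.2f}, 95% CI ({low:.2f}, {high:.2f})")
```

The jackknife follows the same resampling idea but leaves out one observation at a time instead of drawing with replacement.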

  20. Cancer Data and Statistics Tools

    Science.gov (United States)

    ... United States Cancer Statistics The United States Cancer Statistics (USCS): Incidence and ...

  1. Plague Maps and Statistics

    Science.gov (United States)

    ... Plague in the United States Plague was first introduced ... per year in the United States: 1900-2012. Plague Worldwide Plague epidemics have occurred in Africa, Asia, ...

  2. CDC WONDER: Cancer Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The United States Cancer Statistics (USCS) online databases in WONDER provide cancer incidence and mortality data for the United States for the years since 1999, by...

  3. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1968-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  4. Blood Facts and Statistics

    Science.gov (United States)

    ... Donor Community Learn About Blood Blood Facts and Statistics Blood Types Blood Components What Happens to Donated Blood Blood and Diversity History of Blood Transfusion Iron and Blood Donation Hosting ...

  5. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical......, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues...

  6. Statistical mechanics of superconductivity

    CERN Document Server

    Kita, Takafumi

    2015-01-01

    This book provides a theoretical, step-by-step comprehensive explanation of superconductivity for undergraduate and graduate students who have completed elementary courses on thermodynamics and quantum mechanics. To this end, it adopts the unique approach of starting with the statistical mechanics of quantum ideal gases and successively adding and clarifying elements and techniques indispensable for understanding it. They include the spin-statistics theorem, second quantization, density matrices, the Bloch–De Dominicis theorem, the variational principle in statistical mechanics, attractive interaction, and bound states. Ample examples of their usage are also provided in terms of topics from advanced statistical mechanics such as two-particle correlations of quantum ideal gases, derivation of the Hartree–Fock equations, and Landau’s Fermi-liquid theory, among others. With these preliminaries, the fundamental mean-field equations of superconductivity are derived with maximum mathematical clarity based on ...

  7. Illinois travel statistics, 2010

    Science.gov (United States)

    2011-01-01

    The 2010 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...

  8. Illinois travel statistics, 2008

    Science.gov (United States)

    2009-01-01

    The 2008 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...

  9. Illinois travel statistics, 2009

    Science.gov (United States)

    2010-01-01

    The 2009 Illinois Travel Statistics publication is assembled to provide detailed traffic : information to the different users of traffic data. While most users of traffic data at this level : of detail are within the Illinois Department of Transporta...

  10. Statistical Measures of Marksmanship

    National Research Council Canada - National Science Library

    Johnson, Richard

    2001-01-01

    .... This report describes objective statistical procedures to measure both rifle marksmanship accuracy, the proximity of an array of shots to the center of mass of a target, and marksmanship precision...

  11. Basics of statistical physics

    CERN Document Server

    Müller-Kirsten, Harald J W

    2013-01-01

    Statistics links microscopic and macroscopic phenomena, and requires for this reason a large number of microscopic elements like atoms. The results are values of maximum probability or of averaging. This introduction to statistical physics concentrates on the basic principles, and attempts to explain these in simple terms supplemented by numerous examples. These basic principles include the difference between classical and quantum statistics, a priori probabilities as related to degeneracies, the vital aspect of indistinguishability as compared with distinguishability in classical physics, the differences between conserved and non-conserved elements, the different ways of counting arrangements in the three statistics (Maxwell-Boltzmann, Fermi-Dirac, Bose-Einstein), the difference between maximization of the number of arrangements of elements, and averaging in the Darwin-Fowler method. Significant applications to solids, radiation and electrons in metals are treated in separate chapters, as well as Bose-Eins...

  12. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1997-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  13. Mental Illness Statistics

    Science.gov (United States)

    ... Post-Traumatic Stress Disorder (PTSD) Schizophrenia Suicide All Statistics Topics: A-Z Agoraphobia Anorexia Nervosa Any Anxiety Disorder Any Mood Disorder Attention-Deficit/Hyperactivity Disorder (ADHD) Autism Spectrum ...

  14. Data and Statistics

    Science.gov (United States)

    ... Data & Statistics Sickle ... Findings Feature Articles Key Findings: CDC’s Sickle Cell Data Collection Program Data Useful in Describing Patterns of ...

  15. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    , identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues......Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical...

  16. Transport statistics 1996

    CSIR Research Space (South Africa)

    Shepperson, L

    1997-12-01

    Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...

  17. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  18. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  19. Statistical theory of heat

    CERN Document Server

    Scheck, Florian

    2016-01-01

    Scheck’s textbook starts with a concise introduction to classical thermodynamics, including geometrical aspects. Then a short introduction to probabilities and statistics lays the basis for the statistical interpretation of thermodynamics. Phase transitions, discrete models and the stability of matter are explained in great detail. Thermodynamics has a special role in theoretical physics: due to its general approach, the field has a bridging function between several areas like the theory of condensed matter, elementary particle physics, astrophysics and cosmology. Classical thermodynamics describes predominantly averaged properties of matter, reaching from few-particle systems and states of matter to stellar objects. Statistical thermodynamics covers the same fields, but explores them in greater depth and unifies classical statistical mechanics with the quantum theory of multiple particle systems. The content is presented as two tracks: the fast track for master students, providing the essen...

  20. Titanic: A Statistical Exploration.

    Science.gov (United States)

    Takis, Sandra L.

    1999-01-01

    Uses the available data about the Titanic's passengers to interest students in exploring categorical data and the chi-square distribution. Describes activities incorporated into a statistics class and gives additional resources for collecting information about the Titanic. (ASK)
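The activity described above rests on the Pearson chi-square statistic for a contingency table. A minimal sketch; the survived/died counts below are invented for illustration, not the actual Titanic passenger data:

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total   # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# Illustrative (not actual) survived/died counts by ticket class:
table = [[200, 120],   # first class:  survived, died
         [120, 160],   # second class
         [180, 530]]   # third class
print(round(chi_square(table), 2))
```

Comparing the statistic to a chi-square distribution with (rows − 1)(columns − 1) degrees of freedom then gives the p-value the classroom activity is after.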

  1. Heroin: Statistics and Trends

    Science.gov (United States)

    ... Science Adolescent Brain Comorbidity College-Age & Young Adults Criminal Justice Drugged Driving Drug Testing Drugs and the ... opioid overdose Statistics and Trends Monitoring the Future Study: Trends in ...

  2. Boating Accident Statistics

    Data.gov (United States)

    Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...

  3. Elements of statistical thermodynamics

    CERN Document Server

    Nash, Leonard K

    2006-01-01

    Encompassing essentially all aspects of statistical mechanics that appear in undergraduate texts, this concise, elementary treatment shows how an atomic-molecular perspective yields new insights into macroscopic thermodynamics. 1974 edition.

  4. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical......Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics......, identify interest rate models, value bonds, estimate parameters, and much more. This textbook will help students understand and manage empirical research in financial engineering. It includes examples of how the statistical tools can be used to improve value-at-risk calculations and other issues...

  5. Business statistics I essentials

    CERN Document Server

    Clark, Louise

    2014-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Business Statistics I includes descriptive statistics, introduction to probability, probability distributions, sampling and sampling distributions, interval estimation, and hypothesis t

  6. Understanding descriptive statistics.

    Science.gov (United States)

    Fisher, Murray J; Marshall, Andrea P

    2009-05-01

    There is an increasing expectation that critical care nurses use clinical research when making decisions about patient care. This article is the second in a series which addresses statistics for clinical nursing practice. In this article we provide an introduction to the use of descriptive statistics. Concepts such as levels of measurement, measures of central tendency and dispersion are described and their use in clinical practice is illustrated.
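The measures of central tendency and dispersion named above can be illustrated in a few lines. A minimal sketch using Python's standard library; the heart-rate values are hypothetical:

```python
import statistics as st

# Hypothetical heart-rate observations (beats per minute), with one outlier.
hr = [72, 75, 78, 80, 81, 83, 85, 90, 95, 140]

print("mean  :", st.mean(hr))               # pulled upward by the outlier (140)
print("median:", st.median(hr))             # robust to the outlier
print("stdev :", round(st.stdev(hr), 1))    # sample standard deviation (dispersion)
print("range :", max(hr) - min(hr))
```

The gap between the mean and the median is exactly the kind of pattern descriptive statistics are meant to surface before any inferential work begins.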

  7. Statistic>

    DEFF Research Database (Denmark)

    Hansen, Flemming Tvede

    2008-01-01

    Co-organizer for and participant at the exhibition: Statistic>Statistic>2-16/3 2008 Museum fur Kunst und Gewerbe, Hamburg 3/4-27/4 2008...

  8. Statistical Engine Knock Control

    DEFF Research Database (Denmark)

    Stotsky, Alexander A.

    2008-01-01

    A new statistical concept of the knock control of a spark ignition automotive engine is proposed. The control aim is associated with the statistical hypothesis test which compares the threshold value to the average value of the maximal amplitude of the knock sensor signal at a given frequency...... which includes generation of the amplitude signals, a threshold value determination and a knock sound model is developed for evaluation of the control concept....
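The comparison described above is essentially a one-sided test of a sample mean against a threshold. A minimal sketch of that statistical idea, not the paper's actual algorithm; all amplitude values, the threshold and the critical value are hypothetical:

```python
import math

def knock_detected(amplitudes, threshold, z_crit=1.645):
    """One-sided test: is the mean knock-sensor amplitude above the threshold?

    Declares knock when the sample mean exceeds the threshold by more than
    z_crit standard errors (z ~ 1.645 for a 5% one-sided level).
    Hypothetical sketch of the hypothesis-test idea only.
    """
    n = len(amplitudes)
    mean = sum(amplitudes) / n
    var = sum((a - mean) ** 2 for a in amplitudes) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                                    # standard error of the mean
    return (mean - threshold) / se > z_crit

quiet = [0.9, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98, 1.02]      # invented amplitudes
knocking = [2.1, 2.4, 2.2, 2.3, 2.5, 2.2, 2.35, 2.3]
print(knock_detected(quiet, threshold=1.5),
      knock_detected(knocking, threshold=1.5))
```

Framing detection as a hypothesis test, rather than a raw threshold crossing, is what makes the decision robust to sensor noise in a single window.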

  9. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes the system for making reproducible statistical analyses. differs from other systems for reproducible analysis in several ways. The two main differences are: (1) Several statistics programs can be used in the same document. (2) Documents can be prepared using OpenOffice or ......Office or \LaTeX. The main part of this paper is an example showing how to use and together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics....

  10. Breakthroughs in statistics

    CERN Document Server

    Johnson, Norman

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...

  11. Dropped object protection analyses

    OpenAIRE

    Nilsen, Ingve

    2014-01-01

    Master's thesis in Offshore structural engineering Impact from a dropped object is a typical accidental action (NORSOK N-004, 2013). Hence, the DOP structure is to be analyzed in an accidental limit state (ALS) design practice, which means that a non-linear finite element analysis can be applied. The analysis will be based on a typical DOP structure. Several FEM analyses are performed for the DOP structure. Different shapes, sizes and weights and various impact positions are used for si...

  12. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project `Feasibility of electricity production from biomass by pressurized gasification systems` within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent as straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  13. Statistical Perspectives on Stratospheric Transport

    Science.gov (United States)

    Sparling, L. C.

    1999-01-01

    Long-lived tropospheric source gases, such as nitrous oxide, enter the stratosphere through the tropical tropopause, are transported throughout the stratosphere by the Brewer-Dobson circulation, and are photochemically destroyed in the upper stratosphere. These chemical constituents, or "tracers" can be used to track mixing and transport by the stratospheric winds. Much of our understanding about the stratospheric circulation is based on large scale gradients and other spatial features in tracer fields constructed from satellite measurements. The point of view presented in this paper is different, but complementary, in that transport is described in terms of tracer probability distribution functions (PDFs). The PDF is computed from the measurements, and is proportional to the area occupied by tracer values in a given range. The flavor of this paper is tutorial, and the ideas are illustrated with several examples of transport-related phenomena, annotated with remarks that summarize the main point or suggest new directions. One example shows how the multimodal shape of the PDF gives information about the different branches of the circulation. Another example shows how the statistics of fluctuations from the most probable tracer value give insight into mixing between different regions of the atmosphere. Also included is an analysis of the time-dependence of the PDF during the onset and decline of the winter circulation, and a study of how "bursts" in the circulation are reflected in transient periods of rapid evolution of the PDF. The dependence of the statistics on location and time are also shown to be important for practical problems related to statistical robustness and satellite sampling. The examples illustrate how physically-based statistical analysis can shed some light on aspects of stratospheric transport that may not be obvious or quantifiable with other types of analyses. 
An important motivation for the work presented here is the need for synthesis of the
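The tracer PDF described above, proportional to the area occupied by tracer values in a given range, is in practice a normalized histogram. A minimal sketch with a synthetic bimodal sample standing in for tracer measurements (all numbers invented):

```python
import random

def tracer_pdf(values, bins=10):
    """Histogram estimate of a tracer PDF: bin counts normalized so the
    estimate integrates to one over the observed tracer range."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)  # clamp the maximum into the last bin
        counts[i] += 1
    n = len(values)
    return [(lo + (i + 0.5) * width, c / (n * width)) for i, c in enumerate(counts)]

# Synthetic bimodal "tracer" sample: two reservoirs with different mean mixing
# ratios produce two modes, mimicking the multimodal PDFs described above.
rng = random.Random(1)
sample = ([rng.gauss(100, 5) for _ in range(500)] +
          [rng.gauss(250, 10) for _ in range(500)])
for center, density in tracer_pdf(sample, bins=12):
    print(f"{center:7.1f}  {'#' * int(density * 1000)}")
```

The two separated clumps of `#` characters are the histogram's rendering of the multimodal shape that, in the paper's framing, signals distinct branches of the circulation.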

  14. UN Data: Environment Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  15. UN Data- Environmental Statistics: Waste

    Data.gov (United States)

    World Wide Human Geography Data Working Group — The Environment Statistics Database contains selected water and waste statistics by country. Statistics on water and waste are based on official statistics supplied...

  16. Multivariate Statistical Process Control

    DEFF Research Database (Denmark)

    Kulahci, Murat

    2013-01-01

    As sensor and computer technology continues to improve, it becomes a normal occurrence that we are confronted with high dimensional data sets. As in many areas of industrial statistics, this brings forth various challenges in statistical process control (SPC) and monitoring, for which the aim is to identify the “out-of-control” state of a process using control charts in order to reduce the excessive variation caused by so-called assignable causes. In practice, the most common method of monitoring multivariate data is through a statistic akin to Hotelling’s T2. For high dimensional data with an excessive amount of cross correlation, practitioners are often recommended to use latent structure methods such as Principal Component Analysis to summarize the data in only a few linear combinations of the original variables that capture most of the variation in the data. Applications of these control charts...
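The Hotelling-type statistic mentioned above measures how far a multivariate observation sits from the in-control mean, scaled by the in-control covariance. A minimal two-dimensional sketch; the mean, covariance and observations are all illustrative:

```python
def t2_statistic(x, mean, cov):
    """Hotelling's T^2 for a 2-D observation: (x - m)' S^-1 (x - m)."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # 2x2 matrix inverse
    dx = [x[0] - mean[0], x[1] - mean[1]]
    y = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return dx[0] * y[0] + dx[1] * y[1]

mean = [10.0, 5.0]                 # in-control process mean (illustrative)
cov = [[4.0, 1.5], [1.5, 2.0]]     # in-control covariance (illustrative)
print(t2_statistic([10.5, 5.2], mean, cov))   # small value: in control
print(t2_statistic([16.0, 9.0], mean, cov))   # large value: out of control
```

A T2 control chart simply plots this statistic over time and flags points exceeding a control limit derived from its in-control distribution.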

  17. Statistical analysis of the influence of lancing on the secondary corrosion affecting SG tube ends in the CP0 series of PWRs; Analyse statistique de l`influence de l`effet du lancage sur la corrosion secondaire affectant le pied des tubes GV du palier CP0

    Energy Technology Data Exchange (ETDEWEB)

    Souchois, T.

    1995-05-01

    The main method of tube sheet cleaning during unit outages is high pressure lancing. A new tool called CECIL has been in use for this purpose since 1991 on the CP0 series of PWRs. This paper presents a statistical analysis of inspection data providing a basis for determining whether the type of lancing tool used has a `statistically` significant effect, on the one hand, on the progression of the secondary corrosion affecting the CP0 type PWR SG tube ends and, on the other hand, on the appearance of corrosion on sound tubes after one operating cycle. The study results showed the CECIL cleaning method to be more efficient in that larger amounts of accumulated deposits were removed and the risks of early corrosion were 14 times lower, with slower degradation in the course of subsequent cycles. (author). 2 refs., 8 figs., 19 tabs.

  18. Energy statistics manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    Detailed, complete, timely and reliable statistics are essential to monitor the energy situation at a country level as well as at an international level. Energy statistics on supply, trade, stocks, transformation and demand are indeed the basis for any sound energy policy decision. For instance, the market of oil -- which is the largest traded commodity worldwide -- needs to be closely monitored in order for all market players to know at any time what is produced, traded, stocked and consumed and by whom. In view of the role and importance of energy in world development, one would expect basic energy information to be readily available and reliable. This is not always the case, and one can even observe a decline in the quality, coverage and timeliness of energy statistics over the last few years.

  19. Philosophy of statistics

    CERN Document Server

    Forster, Malcolm R

    2011-01-01

    Statisticians and philosophers of science have many common interests but restricted communication with each other. This volume aims to remedy these shortcomings. It provides state-of-the-art research in the area of philosophy of statistics by encouraging numerous experts to communicate with one another without feeling “restricted” by their disciplines or thinking “piecemeal” in their treatment of issues. A second goal of this book is to present work in the field without bias toward any particular statistical paradigm. Broadly speaking, the essays in this Handbook are concerned with problems of induction, statistics and probability. For centuries, foundational problems like induction have been among philosophers' favorite topics; recently, however, non-philosophers have increasingly taken a keen interest in these issues. This volume accordingly contains papers by both philosophers and non-philosophers, including scholars from nine academic disciplines.

  20. Probability and Bayesian statistics

    CERN Document Server

    1987-01-01

This book contains selected and refereed contributions to the "International Symposium on Probability and Bayesian Statistics", which was organized to celebrate the 80th birthday of Professor Bruno de Finetti at his birthplace Innsbruck in Austria. Since Professor de Finetti died in 1985, the symposium was dedicated to his memory and took place at Igls near Innsbruck from 23 to 26 September 1986. Some of the papers are included especially because of their relationship to Bruno de Finetti's scientific work. The evolution of stochastics shows the growing importance of probability as a coherent assessment of numerical values as degrees of belief in certain events. This is the basis for Bayesian inference in the sense of modern statistics. The contributions in this volume cover a broad spectrum, ranging from the foundations of probability, across psychological aspects of formulating subjective probability statements and abstract measure-theoretical considerations, to contributions to theoretical statistics an...

  1. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics...... that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical...... and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives...

  2. Per Object statistical analysis

    DEFF Research Database (Denmark)

    2008-01-01

This RS code is to do Object-by-Object analysis of each Object's sub-objects, e.g. statistical analysis of an object's individual image data pixels. Statistics, such as percentiles (so-called "quartiles"), are derived by the process, but the return of that can only be a Scene Variable, not an Object Variable. This procedure was developed in order to be able to export objects as ESRI shape data with the 90-percentile of the Hue of each object's pixels as an item in the shape attribute table. This procedure uses a sub-level single-pixel chessboard segmentation, loops for each of the objects ... an analysis of the values of the object's pixels in MS-Excel. The shell of the procedure could also be used for purposes other than just the derivation of Object - Sub-object statistics, e.g. rule-based assignment processes.
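The per-object statistic described above (the 90th percentile of an object's pixel Hue values) can be sketched outside the RS software with plain Python. The nearest-rank convention and the sample Hue values below are assumptions for illustration; the actual software may interpolate instead.

```python
def percentile(values, q):
    """Nearest-rank q-th percentile of a non-empty list (an assumed
    convention; image-analysis software may interpolate instead)."""
    s = sorted(values)
    k = max(0, -((-q * len(s)) // 100) - 1)  # ceil(q * n / 100) - 1
    return s[k]

# Hypothetical Hue values (degrees) for the pixels of one image object:
hue = [12, 15, 14, 90, 13, 17, 16, 11, 88, 14]
print(percentile(hue, 90))  # -> 88
```

The returned value would then be written into the shape attribute table, one row per exported object.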

  3. Perception in statistical graphics

    Science.gov (United States)

    VanderPlas, Susan Ruth

There has been quite a bit of research on statistical graphics and visualization, generally focused on new types of graphics, new software to create graphics, interactivity, and usability studies. Our ability to interpret and use statistical graphics hinges on the interface between the graph itself and the brain that perceives and interprets it, and there is substantially less research on the interplay between graph, eye, brain, and mind than is needed to understand the nature of these relationships. The goal of the work presented here is to further explore the interplay between a static graph, the translation of that graph from paper to mental representation (the journey from eye to brain), and the mental processes that operate on that graph once it is transferred into memory (mind). Understanding the perception of statistical graphics should allow researchers to create more effective graphs which produce fewer distortions and viewer errors while reducing the cognitive load necessary to understand the information presented in the graph. Taken together, these experiments should lay a foundation for exploring the perception of statistical graphics. There has been considerable research into the accuracy of numerical judgments viewers make from graphs, and these studies are useful, but it is more effective to understand how errors in these judgments occur so that the root cause of the error can be addressed directly. Understanding how visual reasoning relates to the ability to make judgments from graphs allows us to tailor graphics to particular target audiences. In addition, understanding the hierarchy of salient features in statistical graphics allows us to clearly communicate the important message from data or statistical models by constructing graphics which are designed specifically for the perceptual system.

  4. Stochastics introduction to probability and statistics

    CERN Document Server

    Georgii, Hans-Otto; Baake, Ellen; Georgii, Hans-Otto

    2008-01-01

    This book is a translation of the third edition of the well accepted German textbook 'Stochastik', which presents the fundamental ideas and results of both probability theory and statistics, and comprises the material of a one-year course. The stochastic concepts, models and methods are motivated by examples and problems and then developed and analysed systematically.

  5. Probability and Statistics for Particle Physicists

    CERN Document Server

    Ocariz, J.

    2014-01-01

    Lectures presented at the 1st CERN Asia-Europe-Pacific School of High-Energy Physics, Fukuoka, Japan, 14-27 October 2012. A pedagogical selection of topics in probability and statistics is presented. Choice and emphasis are driven by the author's personal experience, predominantly in the context of physics analyses using experimental data from high-energy physics detectors.

  6. Quantum statistics in multiple particle production

    OpenAIRE

    Zalewski, K.

    2004-01-01

    Effects of quantum statistics are clearly seen in the final states of high-energy multiparticle production processes. These effects are being widely used to obtain information about the regions where the final state hadrons are produced. Here we briefly present and discuss the assumptions underlying most of these analyses.

  7. Computational statistical mechanics

    CERN Document Server

    Hoover, WG

    1991-01-01

Computational Statistical Mechanics describes the use of fast computers to simulate the equilibrium and nonequilibrium properties of gases, liquids, and solids at, and away from, equilibrium. The underlying theory is developed from basic principles and illustrated by applying it to the simplest possible examples. Thermodynamics, based on the ideal gas thermometer, is related to Gibbs' statistical mechanics through the use of Nosé-Hoover heat reservoirs. These reservoirs use integral feedback to control temperature. The same approach is carried through to the simulation and anal...

  8. Statistics II essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As their name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, chi-square test, and time se

  9. Statistics As Principled Argument

    CERN Document Server

    Abelson, Robert P

    2012-01-01

    In this illuminating volume, Robert P. Abelson delves into the too-often dismissed problems of interpreting quantitative data and then presenting them in the context of a coherent story about one's research. Unlike too many books on statistics, this is a remarkably engaging read, filled with fascinating real-life (and real-research) examples rather than with recipes for analysis. It will be of true interest and lasting value to beginning graduate students and seasoned researchers alike. The focus of the book is that the purpose of statistics is to organize a useful argument from quantitative

  10. Bayesian statistics an introduction

    CERN Document Server

    Lee, Peter M

    2012-01-01

    Bayesian Statistics is the school of thought that combines prior beliefs with the likelihood of a hypothesis to arrive at posterior beliefs. The first edition of Peter Lee’s book appeared in 1989, but the subject has moved ever onwards, with increasing emphasis on Monte Carlo based techniques. This new fourth edition looks at recent techniques such as variational methods, Bayesian importance sampling, approximate Bayesian computation and Reversible Jump Markov Chain Monte Carlo (RJMCMC), providing a concise account of the way in which the Bayesian approach to statistics develops as wel

  11. Modern applied statistics with s-plus

    CERN Document Server

    Venables, W N

    1997-01-01

S-PLUS is a powerful environment for the statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-PLUS to perform statistical analyses and provides both an introduction to the use of S-PLUS and a course in modern statistical methods. S-PLUS is available for both Windows and UNIX workstations, and both versions are covered in depth. The aim of the book is to show how to use S-PLUS as a powerful statistical and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-PLUS: both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets. Many of the methods discussed are state-of-the-art approaches to topics such as linear and non-linear regression models, robust a...

  12. The CALORIES trial: statistical analysis plan.

    Science.gov (United States)

    Harvey, Sheila E; Parrott, Francesca; Harrison, David A; Mythen, Michael; Rowan, Kathryn M

    2014-12-01

    The CALORIES trial is a pragmatic, open, multicentre, randomised controlled trial (RCT) of the clinical effectiveness and cost-effectiveness of early nutritional support via the parenteral route compared with early nutritional support via the enteral route in unplanned admissions to adult general critical care units (CCUs) in the United Kingdom. The trial derives from the need for a large, pragmatic RCT to determine the optimal route of delivery for early nutritional support in the critically ill. To describe the proposed statistical analyses for the evaluation of the clinical effectiveness in the CALORIES trial. With the primary and secondary outcomes defined precisely and the approach to safety monitoring and data collection summarised, the planned statistical analyses, including prespecified subgroups and secondary analyses, were developed and are described. The primary outcome is all-cause mortality at 30 days. The primary analysis will be reported as a relative risk and absolute risk reduction and tested with the Fisher exact test. Prespecified subgroup analyses will be based on age, degree of malnutrition, acute severity of illness, mechanical ventilation at admission to the CCU, presence of cancer and time from CCU admission to commencement of early nutritional support. Secondary analyses include adjustment for baseline covariates. In keeping with best trial practice, we have developed, described and published a statistical analysis plan for the CALORIES trial and are placing it in the public domain before inspecting data from the trial.
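The primary analysis described above (relative risk, absolute risk reduction, and a Fisher exact test on 30-day mortality) can be sketched with only the Python standard library. The 2x2 counts below are hypothetical illustrations, not CALORIES trial results.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # probability of the table with x in cell (1, 1)
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Hypothetical 30-day mortality counts (NOT trial results):
# parenteral route: 100 deaths / 1000 patients; enteral: 80 / 1000.
deaths_p, n_p = 100, 1000
deaths_e, n_e = 80, 1000

rr = (deaths_p / n_p) / (deaths_e / n_e)   # relative risk
arr = deaths_e / n_e - deaths_p / n_p      # absolute risk reduction (negative here)
p = fisher_exact_two_sided(deaths_p, n_p - deaths_p, deaths_e, n_e - deaths_e)
print(rr, arr, round(p, 3))
```

In practice a trial team would use a vetted statistics package rather than a hand-rolled test; the point here is only to make the quantities in the plan concrete.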

  13. Whither Statistics Education Research?

    Science.gov (United States)

    Watson, Jane

    2016-01-01

    This year marks the 25th anniversary of the publication of a "National Statement on Mathematics for Australian Schools", which was the first curriculum statement this country had including "Chance and Data" as a significant component. It is hence an opportune time to survey the history of the related statistics education…

  14. Elementary statistical physics

    CERN Document Server

    Kittel, C

    1965-01-01

    This book is intended to help physics students attain a modest working knowledge of several areas of statistical mechanics, including stochastic processes and transport theory. The areas discussed are among those forming a useful part of the intellectual background of a physicist.

  15. Statistically Valid Planting Trials

    Science.gov (United States)

    C. B. Briscoe

    1961-01-01

More than 100 million tree seedlings are planted each year in Latin America, and at least ten times that many should be planted. Rational control and development of a program of such magnitude require establishing and interpreting carefully planned trial plantings which will yield statistically valid answers to real and important questions. Unfortunately, many...

  16. Indiana forest statistics.

    Science.gov (United States)

    W. Brad Smith; Mark F. Golitz

    1988-01-01

    The third inventory of Indiana's timber resource shows that timberland area in Indiana climbed from 3.9 to 4.3 million acres between 1967 and 1986, an increase of more than 10%. During the same period growing-stock volume increased 43%. Highlights and statistics are presented on area, volume, growth, mortality, and removals.

  17. Illinois forest statistics, 1985.

    Science.gov (United States)

    Jerold T. Hahn

    1987-01-01

    The third inventory of the timber resource of Illinois shows a 1% increase in commercial forest area and a 40% gain in growing-stock volume between 1962 and 1985. Presented are highlights and statistics on area, volume, growth, mortality, removals, utilization, and biomass.

  18. Statistical air quality mapping

    NARCIS (Netherlands)

    Kassteele, van de J.

    2006-01-01

    This thesis handles statistical mapping of air quality data. Policy makers require more and more detailed air quality information to take measures to improve air quality. Besides, researchers need detailed air quality information to assess health effects. Accurate and spatially highly resolved maps

  19. Michigan forest statistics, 1980.

    Science.gov (United States)

    Gerhard K. Raile; W. Brad Smith

    1983-01-01

    The fourth inventory of the timber resource of Michigan shows a 7% decline in commercial forest area and a 27% gain in growing-stock volume between 1966 and 1980. Highlights and statistics are presented on area, volume, growth, mortality, removals, utilization, and biomass.

  20. Simple Statistics: - Summarized!

    Science.gov (United States)

    Blai, Boris, Jr.

Statistics is an essential tool for making sound decisions. It is concerned with probability distribution models, testing of hypotheses, significance tests and other means of determining the correctness of deductions and the most likely outcome of decisions. Measures of central tendency include the mean, median and mode. A second…

  1. Revising Educational Statistics.

    Science.gov (United States)

    Banner, James M., Jr.

    When gathering and presenting educational statistics, five principles should be considered. (1) The data must be accurate, valid, and complete. Limitations, weaknesses, margins of error, and levels of confidence should be clearly stated. (2) The data must include comparable information, sought in comparable ways, in comparable forms, from…

  2. The Pleasures of Statistics

    CERN Document Server

    Mosteller, Frederick; Hoaglin, David C; Tanur, Judith M

    2010-01-01

    Includes chapter-length insider accounts of work on the pre-election polls of 1948, statistical aspects of the Kinsey report on sexual behavior in the human male, mathematical learning theory, authorship of the disputed Federalist papers, safety of anesthetics, and an examination of the Coleman report on equality of educational opportunity

  3. Selected Outdoor Recreation Statistics.

    Science.gov (United States)

    Bureau of Outdoor Recreation (Dept. of Interior), Washington, DC.

In this recreational information report, 96 tables are compiled from Bureau of Outdoor Recreation programs and surveys, other governmental agencies, and private sources. The document comprises eight sections: (1) The Bureau of Outdoor Recreation, (2) Federal Assistance to Recreation, (3) Recreation Surveys for Planning, (4) Selected Statistics of…

  4. Fisher's Contributions to Statistics

    Indian Academy of Sciences (India)

    its applications, especially to agriculture and the design of experiments therein. His contributions to statistics are so many that it is not even possible to mention them all in this short article. We, therefore, confine our attention to discussing what we regard as the more important among them. Fisher provided a unified and ...

  5. Statistical Hadronization and Holography

    DEFF Research Database (Denmark)

    Bechi, Jacopo

    2009-01-01

    In this paper we consider some issues about the statistical model of the hadronization in a holographic approach. We introduce a Rindler like horizon in the bulk and we understand the string breaking as a tunneling event under this horizon. We calculate the hadron spectrum and we get a thermal...

  6. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both of individuals and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
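As a rough illustration of second-order blind identification, the sketch below separates two synthetically mixed signals using the single-lag special case (often called AMUSE) rather than SOBI's joint diagonalization over many lags. The sources, mixing matrix, and lag are invented for the example; real EEG separation involves many channels and careful preprocessing.

```python
import math

# Two synthetic "sources" with different temporal structure, standing in
# for separable brain components; everything here is invented for illustration.
T = 4000
s1 = [math.sin(0.05 * t) for t in range(T)]          # slow rhythm
s2 = [math.sin(0.31 * t + 1.0) for t in range(T)]    # fast rhythm

# Unknown instantaneous mixing (rows = "electrodes").
x1 = [0.9 * a + 0.4 * b for a, b in zip(s1, s2)]
x2 = [0.3 * a + 1.1 * b for a, b in zip(s1, s2)]

def lagcov(u, v, lag=0):
    """Sample covariance E[(u(t+lag) - mean)(v(t) - mean)]."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    n = len(u) - lag
    return sum((u[t + lag] - mu) * (v[t] - mv) for t in range(n)) / n

def sym_eig2(a, b, c):
    """Eigenvalues and rotation angle for the symmetric 2x2 [[a, c], [c, b]]."""
    theta = 0.5 * math.atan2(2 * c, a - b)
    ct, st = math.cos(theta), math.sin(theta)
    l1 = a * ct * ct + 2 * c * ct * st + b * st * st
    l2 = a * st * st - 2 * c * ct * st + b * ct * ct
    return (l1, l2), theta

# 1) Whiten the mixtures using the zero-lag covariance.
(l1, l2), th = sym_eig2(lagcov(x1, x1), lagcov(x2, x2), lagcov(x1, x2))
ct, st = math.cos(th), math.sin(th)
z1 = [(ct * u + st * v) / math.sqrt(l1) for u, v in zip(x1, x2)]
z2 = [(-st * u + ct * v) / math.sqrt(l2) for u, v in zip(x1, x2)]

# 2) Rotate so a time-lagged covariance of the whitened data is diagonal:
# it is this lagged second-order structure that identifies the sources.
lag = 5
m11, m22 = lagcov(z1, z1, lag), lagcov(z2, z2, lag)
m12 = 0.5 * (lagcov(z1, z2, lag) + lagcov(z2, z1, lag))  # symmetrize
_, phi = sym_eig2(m11, m22, m12)
cp, sp = math.cos(phi), math.sin(phi)
y1 = [cp * u + sp * v for u, v in zip(z1, z2)]
y2 = [-sp * u + cp * v for u, v in zip(z1, z2)]

def corr(u, v):
    return lagcov(u, v) / math.sqrt(lagcov(u, u) * lagcov(v, v))

# Each recovered component matches one source up to sign and permutation.
q = max(abs(corr(y1, s1)) * abs(corr(y2, s2)),
        abs(corr(y1, s2)) * abs(corr(y2, s1)))
print(round(q, 2))  # near 1.0 when the unmixing matches the sources
```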

  7. Analyse af elbilers forbrug

    DEFF Research Database (Denmark)

    Andersen, Ove; Krogh, Benjamin Bjerre; Torp, Kristian

    2014-01-01

This report examines the GPS and CAN bus data collected while driving electric cars and analyses the energy consumption of electric cars. The analyses are based on roughly 133 million GPS and CAN bus measurements collected from 164 electric cars (Citroen C-Zero, Mitsubishi iMiev and Peugeot Ion) during the calendar year 2012.... Regarding the data, it can be concluded that substantial but simple tightening is needed to make it easier to use GPS/CAN bus data from electric cars in other analyses going forward. The use of electric cars is compared with conventional fuel cars, and the conclusion is that electric cars generally drive 10-15 km/h slower on...

  8. Hitchhikers’ guide to analysing bird ringing data

    Directory of Open Access Journals (Sweden)

    Harnos Andrea

    2016-06-01

This paper is the second part of our bird ringing data analyses series (Harnos et al. 2015a), in which we continue to focus on exploring data using the R software. We give a short description of data distributions and the measures of data spread and explain how to obtain basic descriptive statistics. We show how to detect and select one- and two-dimensional outliers and explain how to treat these in the case of avian ringing data.

  9. The Statistical Drake Equation

    Science.gov (United States)

    Maccone, Claudio

    2010-12-01

    We provide the statistical generalization of the Drake equation. From a simple product of seven positive numbers, the Drake equation is now turned into the product of seven positive random variables. We call this "the Statistical Drake Equation". The mathematical consequences of this transformation are then derived. The proof of our results is based on the Central Limit Theorem (CLT) of Statistics. In loose terms, the CLT states that the sum of any number of independent random variables, each of which may be ARBITRARILY distributed, approaches a Gaussian (i.e. normal) random variable. This is called the Lyapunov Form of the CLT, or the Lindeberg Form of the CLT, depending on the mathematical constraints assumed on the third moments of the various probability distributions. In conclusion, we show that: The new random variable N, yielding the number of communicating civilizations in the Galaxy, follows the LOGNORMAL distribution. Then, as a consequence, the mean value of this lognormal distribution is the ordinary N in the Drake equation. The standard deviation, mode, and all the moments of this lognormal N are also found. The seven factors in the ordinary Drake equation now become seven positive random variables. The probability distribution of each random variable may be ARBITRARY. The CLT in the so-called Lyapunov or Lindeberg forms (that both do not assume the factors to be identically distributed) allows for that. In other words, the CLT "translates" into our statistical Drake equation by allowing an arbitrary probability distribution for each factor. This is both physically realistic and practically very useful, of course. An application of our statistical Drake equation then follows. The (average) DISTANCE between any two neighboring and communicating civilizations in the Galaxy may be shown to be inversely proportional to the cubic root of N. Then, in our approach, this distance becomes a new random variable. We derive the relevant probability density
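The CLT argument above can be illustrated numerically: take the product of seven independent positive random variables with deliberately different (arbitrary) distributions and check that the logarithm of the product looks roughly Gaussian, so the product itself looks roughly lognormal. The particular factor distributions below are assumptions for the demonstration, not the Drake factors themselves.

```python
import math
import random

random.seed(42)

def drake_sample():
    """One draw of the product of seven independent positive random
    variables with deliberately different distributions (stand-ins,
    not the real Drake factors)."""
    factors = [
        random.uniform(1, 10),
        random.uniform(0.1, 1),
        random.expovariate(1.0) + 0.01,
        random.lognormvariate(0.0, 0.5),
        random.uniform(0.01, 0.5),
        random.betavariate(2, 5) + 0.01,
        random.uniform(0.5, 2),
    ]
    n = 1.0
    for f in factors:
        n *= f
    return n

samples = [drake_sample() for _ in range(50_000)]
logs = [math.log(s) for s in samples]
mu = sum(logs) / len(logs)
var = sum((x - mu) ** 2 for x in logs) / len(logs)
skew = sum((x - mu) ** 3 for x in logs) / (len(logs) * var ** 1.5)

# If N were exactly lognormal, log N would be exactly Gaussian (skewness 0);
# with only seven factors the CLT already pulls the skewness close to zero,
# and the mean of N exceeds its median exp(mu), as for a lognormal.
print(round(mu, 2), round(var, 2), round(skew, 2))
```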

  10. Relationship between mathematics statistics engagement and attitudes towards statistics among undergraduate students in Malaysia

    Science.gov (United States)

    Salim, Nur Raidah; Ayub, Ahmad Fauzi Mohd

    2017-01-01

This paper explored the relationship between attitudes toward statistics and mathematics statistics engagement among undergraduate students taking a statistics course. A total of 293 undergraduate students from several programs at Universiti Putra Malaysia (UPM) were the sample of the study. A structured, self-administered questionnaire was used to elicit responses from these students. Descriptive analyses showed the overall mean for students' mathematics statistics engagement was 3.38 (SD = .36). The analysis of the mathematics statistics engagement domains revealed that behavioural engagement had the highest mean (M = 3.63, SD = .52), followed by affective engagement (M = 3.35, SD = .41) and cognitive engagement (M = 3.26, SD = .35). Inferential analysis indicated attitudes towards statistics were positively related to mathematics statistics engagement (r = .721, p = .001). Further analysis by engagement domain indicated attitudes towards statistics were positively related to the affective domain (r = .902, p < .001) and the cognitive domain (r = .818, p < .001), suggesting that students with more positive attitudes were more engaged in mathematics statistics. Implications of the findings are discussed.
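The correlational analysis reported above is a Pearson product-moment correlation between scale means. A minimal stdlib sketch, with invented Likert-scale means rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Invented 5-point Likert scale means for six students (attitude, engagement):
attitude = [3.1, 3.4, 2.8, 4.0, 3.6, 2.5]
engagement = [3.2, 3.5, 3.0, 3.9, 3.7, 2.7]
print(round(pearson_r(attitude, engagement), 3))
```

A value near 1 indicates a strong positive linear relationship, as the study reports for attitudes and engagement.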

  11. Statistical theory and inference

    CERN Document Server

    Olive, David J

    2014-01-01

This text is for a one-semester graduate course in statistical theory and covers minimal and complete sufficient statistics, maximum likelihood estimators, method of moments, bias and mean square error, uniform minimum variance estimators and the Cramer-Rao lower bound, an introduction to large sample theory, likelihood ratio tests and uniformly most powerful tests and the Neyman Pearson Lemma. A major goal of this text is to make these topics much more accessible to students by using the theory of exponential families. Exponential families, indicator functions and the support of the distribution are used throughout the text to simplify the theory. More than 50 "brand name" distributions are used to illustrate the theory with many examples of exponential families, maximum likelihood estimators and uniformly minimum variance unbiased estimators. There are many homework problems with over 30 pages of solutions.

  12. Visuanimation in statistics

    KAUST Repository

    Genton, Marc G.

    2015-04-14

    This paper explores the use of visualization through animations, coined visuanimation, in the field of statistics. In particular, it illustrates the embedding of animations in the paper itself and the storage of larger movies in the online supplemental material. We present results from statistics research projects using a variety of visuanimations, ranging from exploratory data analysis of image data sets to spatio-temporal extreme event modelling; these include a multiscale analysis of classification methods, the study of the effects of a simulated explosive volcanic eruption and an emulation of climate model output. This paper serves as an illustration of visuanimation for future publications in Stat. Copyright © 2015 John Wiley & Sons, Ltd.

  13. Diffeomorphic Statistical Deformation Models

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus

    2007-01-01

In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al....... The modifications ensure that no boundary restriction has to be enforced on the parameter space to prevent folds or tears in the deformation field. For straightforward statistical analysis, principal component analysis and sparse methods, we assume that the parameters for a class of deformations lie on a linear...... manifold and that the distance between two deformations is given by the metric introduced by the L2-norm in the parameter space. The chosen L2-norm is shown to have a clear and intuitive interpretation on the usual nonlinear manifold. Our model is validated on a set of MR images of corpus callosum...

  14. Classical and statistical thermodynamics

    CERN Document Server

    Rizk, Hanna A

    2016-01-01

This is a textbook of thermodynamics for the student who seeks thorough training in science or engineering. A systematic and thorough treatment of the fundamental principles, rather than a presentation of a large mass of facts, has been stressed. The book includes some of the historical and humanistic background of thermodynamics, but without affecting the continuity of the analytical treatment. For a clearer and more profound understanding of thermodynamics this book is highly recommended. In this respect, the author believes that a sound grounding in classical thermodynamics is an essential prerequisite for the understanding of statistical thermodynamics. Such a book comprising the two wide branches of thermodynamics is in fact unprecedented. Being a written work dealing systematically with the two main branches of thermodynamics, namely classical thermodynamics and statistical thermodynamics, together with some important indexes under only one cover, this treatise is eminently useful.

  15. General and Statistical Thermodynamics

    CERN Document Server

    Tahir-Kheli, Raza

    2012-01-01

This textbook gives a complete account of general and statistical thermodynamics. It begins with an introductory statistical mechanics course, deriving all the important formulae meticulously and explicitly, without mathematical shortcuts. The main part of the book deals with the careful discussion of the concepts and laws of thermodynamics, the van der Waals, Kelvin and Clausius theories, ideal and real gases, thermodynamic potentials, phonons and all the related aspects. To elucidate the concepts introduced and to provide practical problem-solving support, numerous carefully worked examples are of great value for students. The text is clearly written and punctuated with many interesting anecdotes. This book is written as a main textbook for upper undergraduate students attending a course on thermodynamics.

  16. Cambodia; Statistical Appendix

    OpenAIRE

    International Monetary Fund

    2004-01-01

    In this study, the following statistical data are presented in detail: agriculture, livestock, and fishery production, structure of revenue, monetary survey, reserve money, interest rates, central government operations, profile of the commercial bank system, consumer price index, foreign debt, status of state-owned enterprises, proposed privatization standards, gross domestic product by expenditure at current prices, interest rates, budgetary expenditure by ministry, deflators for GDP by sect...

  17. Statistical Physics of Adaptation

    Science.gov (United States)

    2016-08-23

Statistical Physics of Adaptation. Nikolay Perunov, Robert A. Marsland, and Jeremy L. England, Department of Physics, Physics of Living Systems Group.

  18. Statistical classification of images

    OpenAIRE

    Giuliodori, María Andrea

    2011-01-01

Image classification is a burgeoning field of study. Despite the advances achieved in this field, there is no general agreement about which methods are most effective for the classification of digital images. This dissertation contributes to this line of research by developing different statistical methods aimed at classifying digital images. In Chapter 1 we introduce basic concepts of image classification and review some results and methodologies proposed previously in the literature. In Chap...

  19. Twin Prime Statistics

    Science.gov (United States)

    Dubner, Harvey

    2005-08-01

    Hardy and Littlewood conjectured that the number of twin primes less than x is asymptotic to 2 C_2 int_2^x dt/(log t)^2 where C_2 is the twin prime constant. This has been shown to give excellent results for x up to 10^16. This article presents statistics supporting the accuracy of the conjecture up to 10^600.
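The conjectured count can be checked directly for modest x: sieve the primes, count twin pairs, and compare with 2 C_2 int_2^x dt/(log t)^2, evaluated here by the trapezoidal rule. For x = 10^6 the actual count is 8169 against a Hardy-Littlewood estimate of roughly 8250, in line with the excellent agreement the article reports at much larger x.

```python
import math

def twin_prime_count(n):
    """Count twin-prime pairs (p, p + 2) with p < n, via a sieve of Eratosthenes."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            is_prime[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(1 for p in range(3, n - 1) if is_prime[p] and is_prime[p + 2])

C2 = 0.6601618158  # twin prime constant

def hardy_littlewood(n, steps=200_000):
    """Trapezoidal evaluation of 2 * C2 * integral_2^n dt / (log t)^2."""
    h = (n - 2) / steps
    f = lambda t: 1.0 / math.log(t) ** 2
    s = 0.5 * (f(2) + f(n)) + sum(f(2 + k * h) for k in range(1, steps))
    return 2 * C2 * h * s

n = 1_000_000
actual = twin_prime_count(n)
estimate = hardy_littlewood(n)
print(actual, round(estimate))  # actual: 8169; estimate: ~8250
```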

  20. Asymptotics in Quantum Statistics

    OpenAIRE

    Gill, Richard D.

    2004-01-01

    Observations or measurements taken of a quantum system (a small number of fundamental particles) are inherently random. If the state of the system depends on unknown parameters, then the distribution of the outcome depends on these parameters too, and statistical inference problems result. Often one has a choice of what measurement to take, corresponding to different experimental set-ups or settings of measurement apparatus. This leads to a design problem--which measurement is best for a give...

  1. Statistics I essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As their name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams or doing homework, and they remain a lasting reference source for students, teachers, and professionals. Statistics I covers frequency distributions, numerical methods of describing data, measures of variability, parameters of distributions, probability theory, and distributions.

  2. 1979 DOE statistical symposium

    Energy Technology Data Exchange (ETDEWEB)

    Gardiner, D.A.; Truett, T. (comps. and eds.)

    1980-09-01

    The 1979 DOE Statistical Symposium was the fifth in the series of annual symposia designed to bring together statisticians and other interested parties who are actively engaged in helping to solve the nation's energy problems. The program included presentations of technical papers centered around exploration and disposal of nuclear fuel, general energy-related topics, and health-related issues, and workshops on model evaluation, risk analysis, analysis of large data sets, and resource estimation.

  3. Statistical Analysis Plan

    DEFF Research Database (Denmark)

    Ris Hansen, Inge; Søgaard, Karen; Gram, Bibi

    2015-01-01

    This is the analysis plan for the multicentre randomised controlled study examining the effect of training and exercises in patients with chronic neck pain, conducted in Jutland and Funen, Denmark. The plan will be used as a work description for the analyses of the data collected....

  4. Overdispersion in nuclear statistics

    Energy Technology Data Exchange (ETDEWEB)

    Semkow, Thomas M. [State University of New York, Albany, NY (United States)]

    1999-02-11

    The modern statistical distribution theory is applied to the development of the overdispersion theory in ionizing-radiation statistics for the first time. The physical nuclear system is treated as a sequence of binomial processes, each depending on a characteristic probability, such as probability of decay, detection, etc. The probabilities fluctuate in the course of a measurement, and the physical reasons for that are discussed. If the average values of the probabilities change from measurement to measurement, which originates from the random Lexis binomial sampling scheme, then the resulting distribution is overdispersed. The generating functions and probability distribution functions are derived, followed by a moment analysis. The Poisson and Gaussian limits are also given. The distribution functions belong to a family of generalized hypergeometric factorial moment distributions by Kemp and Kemp, and can serve as likelihood functions for the statistical estimations. An application to radioactive decay with detection is described and working formulae are given, including a procedure for testing the counting data for overdispersion. More complex experiments in nuclear physics (such as solar neutrino) can be handled by this model, as well as distinguishing between the source and background.
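
    The core mechanism described here, a binomial count whose success probability fluctuates between measurements (the Lexis sampling scheme), can be illustrated with a short simulation. All parameter values below (a Beta-distributed probability, 500 trials, mean p = 0.05) are hypothetical choices for illustration, not values from the paper:

```python
import random
import statistics

random.seed(7)

N_TRIALS = 500    # "nuclei" observed per measurement (hypothetical)
A, B = 2.0, 38.0  # Beta parameters for the fluctuating probability; mean p = 0.05

def counts(n_meas, fluctuating):
    """Binomial counts; under the Lexis scheme the success probability is
    redrawn (here from a Beta distribution) for every measurement."""
    out = []
    for _ in range(n_meas):
        p = random.betavariate(A, B) if fluctuating else A / (A + B)
        out.append(sum(random.random() < p for _ in range(N_TRIALS)))
    return out

p_bar = A / (A + B)
binom_var = N_TRIALS * p_bar * (1 - p_bar)            # variance if p were fixed
var_fixed = statistics.variance(counts(1000, False))  # close to binom_var
var_lexis = statistics.variance(counts(1000, True))   # much larger: overdispersion
```

    The fluctuating-probability counts show a variance an order of magnitude above the fixed-probability binomial variance, which is exactly the overdispersion signature the counting-data test in the paper is designed to detect.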

  5. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

    The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness of fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) he is also the author of Theory of Point Estimat...

  6. Statistics for lawyers

    CERN Document Server

    Finkelstein, Michael O

    2015-01-01

    This classic text, first published in 1990, is designed to introduce law students, law teachers, practitioners, and judges to the basic ideas of mathematical probability and statistics as they have been applied in the law. The third edition includes over twenty new sections, including the addition of timely topics, like New York City police stops, exonerations in death-sentence cases, projecting airline costs, and new material on various statistical techniques such as the randomized response survey technique, rare-events meta-analysis, competing risks, and negative binomial regression. The book consists of sections of exposition followed by real-world cases and case studies in which statistical data have played a role. The reader is asked to apply the theory to the facts, to calculate results (a hand calculator is sufficient), and to explore legal issues raised by quantitative findings. The authors' calculations and comments are given in the back of the book. As with previous editions, the cases and case stu...

  7. R for statistics

    CERN Document Server

    Cornillon, Pierre-Andre; Husson, Francois; Jegou, Nicolas; Josse, Julie; Kloareg, Maela; Matzner-Lober, Eric; Rouviere, Laurent

    2012-01-01

    An Overview of R: Main Concepts; Installing R; Work Session; Help; R Objects; Functions; Packages; Exercises. Preparing Data: Reading Data from File; Exporting Results; Manipulating Variables; Manipulating Individuals; Concatenating Data Tables; Cross-Tabulation; Exercises. R Graphics: Conventional Graphical Functions; Graphical Functions with lattice; Exercises. Making Programs with R: Control Flows; Predefined Functions; Creating a Function; Exercises. Statistical Methods: Introduction to the Statistical Methods; A Quick Start with R; Installing R; Opening and Closing R; The Command Prompt; Attribution, Objects, and Function; Selection; Other Rcmdr Package; Importing (or Inputting) Data; Graphs; Statistical Analysis; Hypothesis Test; Confidence Intervals for a Mean; Chi-Square Test of Independence; Comparison of Two Means; Testing Conformity of a Proportion; Comparing Several Proportions; The Power of a Test. Regression: Simple Linear Regression; Multiple Linear Regression; Partial Least Squares (PLS) Regression. Analysis of Variance and Covariance: One-Way Analysis of Variance; Multi-Way Analysis of Varian...

  8. Brain Aneurysm Statistics and Facts

    Science.gov (United States)

    An estimated 6 million people in ...

  9. Molecular ecological network analyses

    Directory of Open Access Journals (Sweden)

    Deng Ye

    2012-05-01

    Background: Understanding the interactions among different species within a community and their responses to environmental changes is a central goal in ecology. However, defining the network structure in a microbial community is very challenging due to the extremely high diversity and as-yet uncultivated status of its members. Although recent advances in metagenomic technologies, such as high-throughput sequencing and functional gene arrays, provide revolutionary tools for analyzing microbial community structure, it is still difficult to examine network interactions in a microbial community based on high-throughput metagenomic data. Results: Here, we describe a novel mathematical and bioinformatics framework to construct ecological association networks named molecular ecological networks (MENs) through Random Matrix Theory (RMT)-based methods. Compared to other network construction methods, this approach is remarkable in that the network is automatically defined and robust to noise, thus providing excellent solutions to several common issues associated with high-throughput metagenomic data. We applied it to determine the network structure of microbial communities subjected to long-term experimental warming, based on pyrosequencing data of 16S rRNA genes. We showed that the constructed MENs under both warming and unwarming conditions exhibited topological features of scale-free, small-world and modular networks, consistent with previously described molecular ecological networks. Eigengene analysis indicated that the eigengenes represented the module profiles relatively well. Consistent with many other studies, several major environmental traits, including temperature and soil pH, were found to be important in determining network interactions in the microbial communities examined. To facilitate its application by the scientific community, all these methods and statistical tools have been integrated into a comprehensive Molecular Ecological

  10. Uncertainty quantification approaches for advanced reactor analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and to liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
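
    The 95%/95% criterion mentioned above is commonly met with nonparametric (Wilks-type) order statistics: run the code n times and use the largest (or m-th largest) result as the bound. A minimal sketch, assuming the standard first- and second-order Wilks formulas rather than anything specific to this report:

```python
from math import comb

def wilks_sample_size(coverage=0.95, confidence=0.95, order=1):
    """Smallest n such that the order-th largest of n runs bounds the
    `coverage` quantile with probability at least `confidence`."""
    n = order
    while True:
        # P(Binomial(n, coverage) <= n - order): the chance that at least
        # `order` of the n runs fall above the coverage quantile.
        conf = sum(comb(n, k) * coverage ** k * (1 - coverage) ** (n - k)
                   for k in range(n - order + 1))
        if conf >= confidence:
            return n
        n += 1
```

    wilks_sample_size() returns 59, the familiar first-order 95%/95% run count; using the second-largest result (order=2) raises it to 93.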

  11. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  12. Narrative Review of Statistical Reporting Checklists, Mandatory Statistical Editing, and Rectifying Common Problems in the Reporting of Scientific Articles.

    Science.gov (United States)

    Dexter, Franklin; Shafer, Steven L

    2017-03-01

    Considerable attention has been drawn to poor reproducibility in the biomedical literature. One explanation is inadequate reporting of statistical methods by authors and inadequate assessment of statistical reporting and methods during peer review. In this narrative review, we examine scientific studies of several well-publicized efforts to improve statistical reporting. We also review several retrospective assessments of the impact of these efforts. These studies show that, first, instructions to authors and statistical checklists are not sufficient; no findings suggested that either improves the quality of statistical methods and reporting. Second, even basic statistics, such as power analyses, are frequently missing or incorrectly performed. Third, statistical review is needed for all papers that involve data analysis. A consistent finding in the studies was that nonstatistical reviewers (eg, "scientific reviewers") and journal editors generally poorly assess statistical quality. We finish by discussing our experience with statistical review at Anesthesia & Analgesia from 2006 to 2016.

  13. Statistical Inference at Work: Statistical Process Control as an Example

    Science.gov (United States)

    Bakker, Arthur; Kent, Phillip; Derry, Jan; Noss, Richard; Hoyles, Celia

    2008-01-01

    To characterise statistical inference in the workplace this paper compares a prototypical type of statistical inference at work, statistical process control (SPC), with a type of statistical inference that is better known in educational settings, hypothesis testing. Although there are some similarities between the reasoning structure involved in…
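
    A minimal sketch of the SPC reasoning contrasted here with hypothesis testing: control limits are fixed from an in-control reference period, and each new observation is simply compared against them. The limits below use the plain sample standard deviation (a production individuals chart usually estimates sigma from moving ranges), and the data are made up for illustration:

```python
def control_limits(in_control):
    """Shewhart-style limits from an in-control reference period:
    mean +/- 3 sample standard deviations."""
    m = sum(in_control) / len(in_control)
    sd = (sum((x - m) ** 2 for x in in_control) / (len(in_control) - 1)) ** 0.5
    return m - 3 * sd, m + 3 * sd

def out_of_control(observations, lo, hi):
    """Indices of observations outside the control limits."""
    return [i for i, x in enumerate(observations) if not lo <= x <= hi]

baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]  # made-up in-control data
lo, hi = control_limits(baseline)
flagged = out_of_control([10.0, 10.5, 9.95], lo, hi)  # only the second point is flagged
```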

  14. On the likelihood of future eruptions in the Chilean Southern Volcanic Zone: interpreting the past century's eruption record based on statistical analyses Probabilidades de futuras erupciones en la Zona Volcánica del Sur de Chile: interpretación estadística de la serie temporal de erupciones del siglo pasado

    Directory of Open Access Journals (Sweden)

    Yvonne Dzierma

    2012-09-01

    A sequence of 150 explosive eruptions recorded during the past century at the Chilean Southern Volcanic Zone (SVZ) is subjected to statistical time series analysis. The exponential, Weibull, and log-logistic distribution functions are fit to the eruption record, separately for literature-assigned volcanic explosivity indices (VEI) of ≥ 2 and ≥ 3. Since statistical tests confirm the adequacy of all the fits to describe the data, all models are used to estimate the likelihood of future eruptions. Only small differences are observed between the different distribution functions with regard to the eruption forecast, whereby the log-logistic distribution predicts the lowest probabilities. There is a 50% probability for VEI ≥ 2 eruptions to occur in the SVZ within less than a year, and a 90% probability that they occur within the next 2-3 years. For the larger VEI ≥ 3 eruptions, the 50% probability is reached in 3-4 years, while the 90% level is reached in 9-11 years.
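
    Under the simplest of the fitted models, the exponential distribution, such forecasts reduce to a one-line formula: P(eruption within t years) = 1 - exp(-rate * t), so the time at which probability p is reached is t = -ln(1 - p) / rate. The rate below is a hypothetical value chosen only to reproduce the order of magnitude of the quoted forecasts; it is not a parameter from the study:

```python
from math import log

def waiting_time(rate_per_year, prob):
    """Years until at least one eruption has occurred with probability `prob`,
    for exponentially distributed repose times: t = -ln(1 - p) / rate."""
    return -log(1.0 - prob) / rate_per_year

RATE_VEI2 = 0.9  # hypothetical eruptions per year; NOT a parameter from the study
t50 = waiting_time(RATE_VEI2, 0.5)  # about 0.77 years
t90 = waiting_time(RATE_VEI2, 0.9)  # about 2.56 years
```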

  15. SPSS for applied sciences basic statistical testing

    CERN Document Server

    Davis, Cole

    2013-01-01

    This book offers a quick and basic guide to using SPSS and provides a general approach to solving problems using statistical tests. It is both comprehensive in terms of the tests covered and the applied settings it refers to, and yet is short and easy to understand. Whether you are a beginner or an intermediate level test user, this book will help you to analyse different types of data in applied settings. It will also give you the confidence to use other statistical software and to extend your expertise to more specific scientific settings as required.The author does not use mathematical form

  16. Wind Statistics from a Forested Landscape

    DEFF Research Database (Denmark)

    Arnqvist, Johan; Segalini, Antonio; Dellwik, Ebba

    2015-01-01

    An analysis and interpretation of measurements from a 138-m tall tower located in a forested landscape is presented. Measurement errors and statistical uncertainties are carefully evaluated to ensure high data quality. A 40° wide wind-direction sector is selected as the most representative for large-scale forest conditions, and from that sector first-, second- and third-order statistics, as well as analyses regarding the characteristic length scale, the flux-profile relationship and surface roughness, are presented for a wide range of stability conditions. The results are discussed...

  17. On quantum statistical inference

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Gill, Richard D.; Jupp, Peter E.

    Recent developments in the mathematical foundations of quantum mechanics have brought the theory closer to that of classical probability and statistics. On the other hand, the unique character of quantum physics sets many of the questions addressed apart from those met classically in stochastics. Furthermore, concurrent advances in experimental techniques and in the theory of quantum computation have led to a strong interest in questions of quantum information, in particular in the sense of the amount of information about unknown parameters in given observational data or accessible through various...

  18. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives...

  19. Nonparametric statistical inference

    CERN Document Server

    Gibbons, Jean Dickinson

    2014-01-01

    Thoroughly revised and reorganized, the fourth edition presents in-depth coverage of the theory and methods of the most widely used nonparametric procedures in statistical analysis and offers example applications appropriate for all areas of the social, behavioral, and life sciences. The book presents new material on the quantiles, the calculation of exact and simulated power, multiple comparisons, additional goodness-of-fit tests, methods of analysis of count data, and modern computer applications using MINITAB, SAS, and STATXACT. It includes tabular guides for simplified applications of tests and finding P values and confidence interval estimates.

  20. Key World Energy Statistics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    The IEA produced its first handy, pocket-sized summary of key energy data in 1997. This new edition responds to the enormously positive reaction to the book since then. Key World Energy Statistics produced by the IEA contains timely, clearly-presented data on supply, transformation and consumption of all major energy sources. The interested businessman, journalist or student will have at his or her fingertips the annual Canadian production of coal, the electricity consumption in Thailand, the price of diesel oil in Spain and thousands of other useful energy facts. It exists in different formats to suit our readers' requirements.

  1. Statistical mechanics of learning

    CERN Document Server

    Engel, Andreas

    2001-01-01

    The effort to build machines that are able to learn and undertake tasks such as datamining, image processing and pattern recognition has led to the development of artificial neural networks in which learning from examples may be described and understood. The contribution to this subject made over the past decade by researchers applying the techniques of statistical mechanics is the subject of this book. The authors provide a coherent account of various important concepts and techniques that are currently only found scattered in papers, supplement this with background material in mathematics and physics, and include many examples and exercises.

  2. Circular statistics in R

    CERN Document Server

    Pewsey, Arthur; Ruxton, Graeme D

    2013-01-01

    Circular Statistics in R provides the most comprehensive guide to the analysis of circular data in over a decade. Circular data arise in many scientific contexts, whether it be angular directions such as: observed compass directions of departure of radio-collared migratory birds from a release point; bond angles measured in different molecules; wind directions at different times of year at a wind farm; direction of stress-fractures in concrete bridge supports; longitudes of earthquake epicentres; or seasonal and daily activity patterns, for example: data on the times of day at which animals are c

  3. Statistical fluid mechanics

    CERN Document Server

    Monin, A S

    2007-01-01

    ""If ever a field needed a definitive book, it is the study of turbulence; if ever a book on turbulence could be called definitive, it is this book."" - ScienceWritten by two of Russia's most eminent and productive scientists in turbulence, oceanography, and atmospheric physics, this two-volume survey is renowned for its clarity as well as its comprehensive treatment. The first volume begins with an outline of laminar and turbulent flow. The remainder of the book treats a variety of aspects of turbulence: its statistical and Lagrangian descriptions, shear flows near surfaces and free turbulenc

  4. Statistics in biomedical research

    Directory of Open Access Journals (Sweden)

    González-Manteiga, Wenceslao

    2007-06-01

    The discipline of biostatistics is nowadays a fundamental scientific component of biomedical, public health and health services research. Traditional and emerging areas of application include clinical trials research, observational studies, physiology, imaging, and genomics. The present article reviews the current situation of biostatistics, considering the statistical methods traditionally used in biomedical research, as well as the ongoing development of new methods in response to the new problems arising in medicine. Clearly, the successful application of statistics in biomedical research requires appropriate training of biostatisticians. This training should aim to give due consideration to emerging new areas of statistics, while at the same time retaining full coverage of the fundamentals of statistical theory and methodology. In addition, it is important that students of biostatistics receive formal training in relevant biomedical disciplines, such as epidemiology, clinical trials, molecular biology, genetics, and neuroscience.

  5. Who Needs Statistics? | Poster

    Science.gov (United States)

    You may know the feeling. You have collected a lot of new data on an important experiment. Now you are faced with multiple groups of data, a sea of numbers, and a deadline for submitting your paper to a peer-reviewed journal. And you are not sure which data are relevant, or even the best way to present them. The statisticians at Data Management Services (DMS) know how to help. This small group of experts provides a wide array of statistical and mathematical consulting services to the scientific community at NCI at Frederick and NCI-Bethesda.

  6. Functional and Operatorial Statistics

    CERN Document Server

    Dabo-niang, Sophie

    2008-01-01

    An increasing number of statistical problems and methods involve infinite-dimensional aspects. This is due to the progress of technologies which allow us to store more and more information while modern instruments are able to collect data much more effectively due to their increasingly sophisticated design. This evolution directly concerns statisticians, who have to propose new methodologies while taking into account such high-dimensional data (e.g. continuous processes, functional data, etc.). The numerous applications (micro-arrays, paleo-ecological data, radar waveforms, spectrometric curv

  7. Nanotechnology and statistical inference

    Science.gov (United States)

    Vesely, Sara; Vesely, Leonardo; Vesely, Alessandro

    2017-08-01

    We discuss some problems that arise when applying statistical inference to data with the aim of disclosing new functionalities. A predictive model analyzes the data taken from experiments on a specific material to assess the likelihood that another product, with similar structure and properties, will exhibit the same functionality. It doesn't have much predictive power if variability occurs as a consequence of a specific, non-linear behavior. We exemplify our discussion on some experiments with biased dice.

  8. Associative Analysis in Statistics

    Directory of Open Access Journals (Sweden)

    Mihaela Muntean

    2015-03-01

    In recent years, interest in technologies such as in-memory analytics and associative search has increased. This paper explores how in-memory analytics and an associative model can be used in statistics. The word "associative" puts the emphasis on understanding how datasets relate to one another. The paper presents the main characteristics of the associative data model, shows how to design an associative model for the analysis of labor market indicators, with the EU Labor Force Survey as the data source, and demonstrates how to perform associative analysis.

  9. International petroleum statistics report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-10-01

    The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report presents data on international oil production, demand, imports, exports and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). Section 2 presents an oil supply/demand balance for the world, in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries.

  10. Schatzki ring, statistically reexamined.

    Science.gov (United States)

    Pezzullo, John C; Lewicki, Ann M

    2003-09-01

    In the article by Schatzki published in 1963, data about the lower esophageal ring relate ring diameter to presence of dysphagia. Statistical analysis of these measurements was performed to quantify conclusions of Schatzki and to extract additional information from the data. Ring diameters in 332 patients with and without dysphagia are described in a histogram in the original article of Schatzki. Data were evaluated with analysis of variance, logistic regression, and receiver operating characteristic (ROC) analysis to quantify the relationship between ring diameter and dysphagia. Follow-up information was available in 36 symptomatic and 30 asymptomatic patients of Schatzki. Logistic regression indicated that there was a highly significant difference in ring diameter between the asymptomatic group and patients with recurrent dysphagia (P Schatzki had a 96% (104 of 108) sensitivity and a 58% (130 of 224) specificity, with area under the ROC curve of 0.888. Retrospective statistical analysis of original data of Schatzki validated his major conclusions about the data. Some important questions remain unanswered because of missing data in the study of Schatzki. Copyright RSNA, 2003
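
    The sensitivity and specificity quoted above follow directly from the reported counts (104 of 108 and 130 of 224); a minimal sketch:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reported in the reanalysis: 104 of 108 symptomatic patients correctly
# identified, 130 of 224 asymptomatic patients correctly excluded.
sens, spec = sensitivity_specificity(104, 108 - 104, 130, 224 - 130)  # 0.96, 0.58
```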

  11. Statistical clumped isotope signatures

    Science.gov (United States)

    Röckmann, T.; Popa, M. E.; Krol, M. C.; Hofmann, M. E. G.

    2016-01-01

    High precision measurements of molecules containing more than one heavy isotope may provide novel constraints on element cycles in nature. These so-called clumped isotope signatures are reported relative to the random (stochastic) distribution of heavy isotopes over all available isotopocules of a molecule, which is the conventional reference. When multiple indistinguishable atoms of the same element are present in a molecule, this reference is calculated from the bulk (≈average) isotopic composition of the involved atoms. We show here that this referencing convention leads to apparent negative clumped isotope anomalies (anti-clumping) when the indistinguishable atoms originate from isotopically different populations. Such statistical clumped isotope anomalies must occur in any system where two or more indistinguishable atoms of the same element, but with different isotopic composition, combine in a molecule. The size of the anti-clumping signal is closely related to the difference of the initial isotope ratios of the indistinguishable atoms that have combined. Therefore, a measured statistical clumped isotope anomaly, relative to an expected (e.g. thermodynamical) clumped isotope composition, may allow assessment of the heterogeneity of the isotopic pools of atoms that are the substrate for formation of molecules. PMID:27535168
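
    The referencing effect described here can be reproduced with a few lines of arithmetic. For a symmetric molecule whose two sites draw from pools with rare-isotope fractions f_a and f_b, the doubly substituted abundance is f_a * f_b, while the stochastic reference built from the bulk composition is ((f_a + f_b) / 2)**2; the ratio is always at most 1, giving the apparent anti-clumping. The fractions used below are hypothetical:

```python
def statistical_clumping_anomaly(f_a, f_b):
    """Relative deviation of the doubly substituted abundance (f_a * f_b) from
    the stochastic reference built on the bulk composition, ((f_a + f_b) / 2)**2.
    Algebraically equal to -((f_a - f_b) / (f_a + f_b))**2, hence never positive."""
    f_bulk = (f_a + f_b) / 2.0
    return f_a * f_b / f_bulk ** 2 - 1.0
```

    For hypothetical pools with rare-isotope fractions 0.011 and 0.009, the apparent anomaly is -0.01 (i.e. -10 per mil) even though no thermodynamic ordering is involved; identical pools give zero, recovering the conventional stochastic reference.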

  12. Statistical methods for forecasting

    CERN Document Server

    Abraham, Bovas

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This book, it must be said, lives up to the words on its advertising cover: 'Bridging the gap between introductory, descriptive approaches and highly advanced theoretical treatises, it provides a practical, intermediate level discussion of a variety of forecasting tools, and explains how they relate to one another, both in theory and practice.' It does just that!" - Journal of the Royal Statistical Society. "A well-written work that deals with statistical methods and models that can be used to produce short-term forecasts, this book has wide-ranging applications. It could be used in the context of a study of regression, forecasting, and time series ...

  13. International petroleum statistics report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-05-01

    The International Petroleum Statistics Report is a monthly publication that provides current international oil data. This report is published for the use of Members of Congress, Federal agencies, State agencies, industry, and the general public. Publication of this report is in keeping with responsibilities given the Energy Information Administration in Public Law 95-91. The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.

  14. Making sense of statistics a non-mathematical approach

    CERN Document Server

    Wood, Michael

    2003-01-01

    This text provides a thorough but accessible introduction to statistics and probability, without the distractions of mathematics. Guidance is provided on how to design investigations, analyse data and interpret results.

  15. Climate time series analysis classical statistical and bootstrap methods

    CERN Document Server

    Mudelsee, Manfred

    2010-01-01

    This book presents bootstrap resampling as a computationally intensive method able to meet the challenges posed by the complexities of analysing climate data. It shows how the bootstrap performs reliably in the most important statistical estimation techniques.
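
    The percentile bootstrap the book builds on can be sketched in a few lines; the series, sample size, and seed below are invented for illustration, and for serially correlated climate data a block bootstrap (resampling contiguous segments) would be preferred:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "climate" series: 100 annual temperature anomalies.
data = rng.normal(loc=0.3, scale=0.5, size=100)

def bootstrap_ci(x, stat=np.mean, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap confidence interval for a statistic."""
    stats = np.array([stat(rng.choice(x, size=len(x), replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

lo, hi = bootstrap_ci(data)
print(f"mean = {data.mean():.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```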

  16. Experimental Mathematics and Computational Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2009-04-30

    The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.

  17. Which statistics should tropical biologists learn?

    Science.gov (United States)

    Loaiza Velásquez, Natalia; González Lutz, María Isabel; Monge-Nájera, Julián

    2011-09-01

    Tropical biologists study the richest and most endangered biodiversity in the planet, and in these times of climate change and mega-extinctions, the need for efficient, good quality research is more pressing than in the past. However, the statistical component in research published by tropical authors sometimes suffers from poor quality in data collection; mediocre or bad experimental design and a rigid and outdated view of data analysis. To suggest improvements in their statistical education, we listed all the statistical tests and other quantitative analyses used in two leading tropical journals, the Revista de Biología Tropical and Biotropica, during a year. The 12 most frequent tests in the articles were: Analysis of Variance (ANOVA), Chi-Square Test, Student's T Test, Linear Regression, Pearson's Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test, Shannon's Diversity Index, Tukey's Test, Cluster Analysis, Spearman's Rank Correlation Test and Principal Component Analysis. We conclude that statistical education for tropical biologists must abandon the old syllabus based on the mathematical side of statistics and concentrate on the correct selection of these and other procedures and tests, on their biological interpretation and on the use of reliable and friendly freeware. We think that their time will be better spent understanding and protecting tropical ecosystems than trying to learn the mathematical foundations of statistics: in most cases, a well designed one-semester course should be enough for their basic requirements.
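
    Two of the tests listed, one-way ANOVA and its rank-based counterpart the Kruskal-Wallis test, can be run side by side with scipy; the site data below are simulated, not taken from either journal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical invertebrate counts at three forest sites.
site_a = rng.normal(20, 4, 30)
site_b = rng.normal(23, 4, 30)
site_c = rng.normal(27, 4, 30)

# One-way ANOVA (parametric) and Kruskal-Wallis (rank-based) on the same data.
f_stat, p_anova = stats.f_oneway(site_a, site_b, site_c)
h_stat, p_kw = stats.kruskal(site_a, site_b, site_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={p_kw:.4f}")
```

    Correct selection here, rather than the underlying mathematics, is exactly the skill the authors argue the syllabus should emphasize.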

  18. Use Of R in Statistics Lithuania

    Directory of Open Access Journals (Sweden)

    Tomas Rudys

    2016-06-01

    Full Text Available Recently, R has become more and more popular among official statistics offices. It can be used not only for research purposes, but also for the production of official statistics. Statistics Lithuania recently started an analysis of where R can be used and whether it could replace other statistical programming languages or systems. For this reason a working group was set up. In this paper we present an overview of the current situation regarding the implementation of R at Statistics Lithuania, some problems we are facing, and some future plans. At present, R is used mainly for research purposes. Looking forward, a short course on basic R has been prepared, and at the moment we are starting to use R for data analysis, data manipulation from Oracle databases, preparation of some reports, data editing, and survey estimation. On the other hand, we have encountered some problems when working with big data sets, and also with survey sampling, as there are surveys with complex sampling designs. We are also analysing the running of R on our servers in order to be able to use more random access memory (RAM). Despite these problems, we are trying to use R in more fields in the production of official statistics.

  19. Statistics and Probability

    Directory of Open Access Journals (Sweden)

    Laktineh Imad

    2010-04-01

    Full Text Available This course constitutes a brief introduction to probability applications in high energy physics. First the mathematical tools related to the different probability concepts are introduced. The probability distributions which are commonly used in high energy physics and their characteristics are then shown and commented. The central limit theorem and its consequences are analysed. Finally some numerical methods used to produce different kinds of probability distribution are presented. The full article (17 p.) corresponding to this lecture is written in French and is provided in the proceedings of the book SOS 2008.
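
    The central limit theorem mentioned in the abstract is easy to check numerically; the sample sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Means of n uniform(0,1) draws approach a normal with mean 0.5
# and standard deviation 1/sqrt(12 n) as n grows (central limit theorem).
n, trials = 50, 20000
means = rng.uniform(0, 1, size=(trials, n)).mean(axis=1)

print(f"empirical mean = {means.mean():.4f}  (theory: 0.5000)")
print(f"empirical sd   = {means.std():.4f}  (theory: {1/np.sqrt(12*n):.4f})")
# Roughly 68% of the sample means should fall within one sd of 0.5.
within = np.mean(np.abs(means - 0.5) < 1/np.sqrt(12*n))
print(f"fraction within 1 sd = {within:.3f}")
```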

  20. Statistics of Extremes

    KAUST Repository

    Davison, Anthony C.

    2015-04-10

    Statistics of extremes concerns inference for rare events. Often the events have never yet been observed, and their probabilities must therefore be estimated by extrapolation of tail models fitted to available data. Because data concerning the event of interest may be very limited, efficient methods of inference play an important role. This article reviews this domain, emphasizing current research topics. We first sketch the classical theory of extremes for maxima and threshold exceedances of stationary series. We then review multivariate theory, distinguishing asymptotic independence and dependence models, followed by a description of models for spatial and spatiotemporal extreme events. Finally, we discuss inference and describe two applications. Animations illustrate some of the main ideas. © 2015 by Annual Reviews. All rights reserved.
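
    Tail-model extrapolation of the kind described can be sketched with a generalized extreme value (GEV) fit to block maxima; the daily series below is simulated, and scipy's genextreme parameterization is assumed:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)

# Block maxima: yearly maximum of 365 daily values (hypothetical data).
daily = rng.gumbel(loc=20, scale=5, size=(100, 365))
annual_max = daily.max(axis=1)

# Fit a generalized extreme value (GEV) distribution to the block maxima.
shape, loc, scale = genextreme.fit(annual_max)

# Extrapolate to a rare event: the 100-year return level.
return_level = genextreme.ppf(1 - 1/100, shape, loc, scale)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.2f}, scale={scale:.2f}")
print(f"estimated 100-year return level: {return_level:.1f}")
```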

  1. Statistical Interior Tomography

    Science.gov (United States)

    Xu, Qiong; Wang, Ge; Sieren, Jered; Hoffman, Eric A.

    2011-01-01

    This paper presents a statistical interior tomography (SIT) approach making use of compressed sensing (CS) theory. With the projection data modeled by the Poisson distribution, an objective function with a total variation (TV) regularization term is formulated in the maximum a posteriori (MAP) framework to solve the interior problem. An alternating minimization method is used to optimize the objective function with an initial image from the direct inversion of the truncated Hilbert transform. The proposed SIT approach is extensively evaluated with both numerical and real datasets. The results demonstrate that SIT is robust with respect to data noise and down-sampling, and has better resolution and less bias than its deterministic counterpart in the case of low count data. PMID:21233044

  2. Statistical physics ""Beyond equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Ecke, Robert E [Los Alamos National Laboratory

    2009-01-01

    The scientific challenges of the 21st century will increasingly involve competing interactions, geometric frustration, spatial and temporal intrinsic inhomogeneity, nanoscale structures, and interactions spanning many scales. We will focus on a broad class of emerging problems that will require new tools in non-equilibrium statistical physics and that will find application in new material functionality, in predicting complex spatial dynamics, and in understanding novel states of matter. Our work will encompass materials under extreme conditions involving elastic/plastic deformation, competing interactions, intrinsic inhomogeneity, frustration in condensed matter systems, scaling phenomena in disordered materials from glasses to granular matter, quantum chemistry applied to nano-scale materials, soft-matter materials, and spatio-temporal properties of both ordinary and complex fluids.

  3. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...
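
    The exploratory side of multivariate analysis can be illustrated with a principal component analysis via the SVD; the two-factor data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# 200 observations of 5 correlated variables (hypothetical asset returns).
latent = rng.normal(size=(200, 2))            # two driving factors
loadings = rng.normal(size=(2, 5))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 5))

# Principal component analysis via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by each component:", np.round(explained, 3))
```

    With only two latent factors driving the five variables, the first two components should account for nearly all the variance.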

  4. Statistical physics of vaccination

    Science.gov (United States)

    Wang, Zhen; Bauch, Chris T.; Bhattacharyya, Samit; d'Onofrio, Alberto; Manfredi, Piero; Perc, Matjaž; Perra, Nicola; Salathé, Marcel; Zhao, Dawei

    2016-12-01

    Historically, infectious diseases caused considerable damage to human societies, and they continue to do so today. To help reduce their impact, mathematical models of disease transmission have been studied to help understand disease dynamics and inform prevention strategies. Vaccination-one of the most important preventive measures of modern times-is of great interest both theoretically and empirically. And in contrast to traditional approaches, recent research increasingly explores the pivotal implications of individual behavior and heterogeneous contact patterns in populations. Our report reviews the developmental arc of theoretical epidemiology with emphasis on vaccination, as it led from classical models assuming homogeneously mixing (mean-field) populations and ignoring human behavior, to recent models that account for behavioral feedback and/or population spatial/social structure. Many of the methods used originated in statistical physics, such as lattice and network models, and their associated analytical frameworks. Similarly, the feedback loop between vaccinating behavior and disease propagation forms a coupled nonlinear system with analogs in physics. We also review the new paradigm of digital epidemiology, wherein sources of digital data such as online social media are mined for high-resolution information on epidemiologically relevant individual behavior. Armed with the tools and concepts of statistical physics, and further assisted by new sources of digital data, models that capture nonlinear interactions between behavior and disease dynamics offer a novel way of modeling real-world phenomena, and can help improve health outcomes. We conclude the review by discussing open problems in the field and promising directions for future research.
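
    The classical mean-field starting point the review describes, an SIR model with a vaccinated fraction, can be sketched with a simple Euler integration; all rates below are illustrative choices, not values from the report:

```python
# Simple SIR model with a vaccinated fraction p removed at t=0.
# Herd immunity requires p > 1 - 1/R0 (a classical mean-field result).
def epidemic_final_size(r0, p, days=500):
    beta, gamma = r0 * 0.1, 0.1          # recovery rate 0.1/day (illustrative)
    s, i, r = 1 - p - 1e-4, 1e-4, p      # vaccinated start in "removed"
    dt = 0.1
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    return r - p                          # fraction infected over the epidemic

r0 = 3.0
for p in (0.0, 0.5, 0.7):                # herd immunity threshold: 1 - 1/3 ≈ 0.67
    print(f"vaccinated {p:.0%}: final epidemic size {epidemic_final_size(r0, p):.3f}")
```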

  5. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
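
    The effect of excluding grey literature can be illustrated with a fixed-effect (inverse-variance) pooling; every number below is hypothetical:

```python
import numpy as np

# Fixed-effect (inverse-variance) pooling of effect sizes, with and
# without grey-literature studies (all numbers are hypothetical).
published = [(0.45, 0.10), (0.52, 0.15), (0.38, 0.12)]   # (effect, SE)
grey      = [(0.20, 0.18), (0.15, 0.20)]                  # unpublished studies

def pool(studies):
    effects = np.array([e for e, _ in studies])
    weights = 1 / np.array([se for _, se in studies])**2   # inverse variance
    pooled = np.sum(weights * effects) / np.sum(weights)
    se = np.sqrt(1 / np.sum(weights))
    return pooled, se

pub_only, _ = pool(published)
with_grey, _ = pool(published + grey)
print(f"published only: {pub_only:.3f}; including grey literature: {with_grey:.3f}")
```

    As the abstract notes, omitting the (here smaller) grey-literature effects inflates the pooled estimate.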

  6. Random matrix theory and multivariate statistics

    OpenAIRE

    Diaz-Garcia, Jose A.; Jáimez, Ramon Gutiérrez

    2009-01-01

    Some tools and ideas are interchanged between random matrix theory and multivariate statistics. In the context of the random matrix theory, classes of spherical and generalised Wishart random matrix ensemble, containing as particular cases the classical random matrix ensembles, are proposed. Some properties of these classes of ensemble are analysed. In addition, the random matrix ensemble approach is extended and a unified theory proposed for the study of distributions for real normed divisio...
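
    A standard entry point to the classical random matrix ensembles mentioned is the Wigner semicircle law, which is easy to verify numerically; the matrix size below is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

# Eigenvalues of a large Wigner (GOE-like) random symmetric matrix follow
# the semicircle law on [-2, 2] after scaling by sqrt(n).
n = 1000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)           # symmetrize and scale
eigs = np.linalg.eigvalsh(H)

print(f"eigenvalue range: [{eigs.min():.2f}, {eigs.max():.2f}]  (theory: [-2, 2])")
print(f"mean: {eigs.mean():.3f}  (theory: 0)")
```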

  7. Evidence-Based Curricular Guidelines for Statistical Education in Social Work

    Science.gov (United States)

    Gebotys, Robert J.; Hardie, Susan Lynn

    2007-01-01

    The types of statistical analyses used in more than 800 journal articles commonly cited by social workers were examined and comparisons of statistical analyses used in this published research were made between journal articles published in the late 1980s and early 2000. The data clearly indicate little has changed in the statistical methods used…

  8. Statistics for Sleep and Biological Rhythms Research.

    Science.gov (United States)

    Klerman, Elizabeth B; Wang, Wei; Phillips, Andrew J K; Bianchi, Matt T

    2017-02-01

    This article is part of a Journal of Biological Rhythms series exploring analysis and statistical topics relevant to researchers in biological rhythms and sleep research. The goal is to provide an overview of the most common issues that arise in the analysis and interpretation of data in these fields. In this article, we address issues related to the collection of multiple data points from the same organism or system at different times, since such longitudinal data collection is fundamental to the assessment of biological rhythms. Rhythmic longitudinal data require additional specific statistical considerations, ranging from curve fitting to threshold definitions to accounting for correlation structure. We discuss statistical analyses of longitudinal data including issues of correlational structure and stationarity, markers of biological rhythms, demasking of biological rhythms, and determining phase, waveform, and amplitude of biological rhythms.
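
    Curve fitting for rhythmic longitudinal data is often done by cosinor analysis, which reduces to linear regression on cosine and sine terms; the 24-hour rhythm below is simulated:

```python
import numpy as np

rng = np.random.default_rng(11)

# Cosinor analysis: fit a 24-h rhythm by linear regression on cos/sin terms.
t = np.arange(0, 72, 0.5)                          # hours, 3 days of data
true_mesor, true_amp, true_phase = 10.0, 3.0, 5.0  # hypothetical rhythm
y = (true_mesor + true_amp * np.cos(2*np.pi*(t - true_phase)/24)
     + rng.normal(0, 0.5, t.size))

X = np.column_stack([np.ones_like(t),
                     np.cos(2*np.pi*t/24), np.sin(2*np.pi*t/24)])
mesor, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
amp = np.hypot(a, b)
phase = (np.arctan2(b, a) * 24 / (2*np.pi)) % 24   # acrophase in hours
print(f"mesor={mesor:.2f}, amplitude={amp:.2f}, acrophase={phase:.2f} h")
```

    This sketch assumes independent errors; as the article stresses, real longitudinal series usually need the correlation structure modeled as well.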

  9. Statistics in clinical research: Important considerations

    Directory of Open Access Journals (Sweden)

    Howard Barkan

    2015-01-01

    Full Text Available Statistical analysis is one of the foundations of evidence-based clinical practice, a key in conducting new clinical research and in evaluating and applying prior research. In this paper, we review the choice of statistical procedures, analyses of the associations among variables and techniques used when the clinical processes being examined are still in process. We discuss methods for building predictive models in clinical situations, and ways to assess the stability of these models and other quantitative conclusions. Techniques for comparing independent events are distinguished from those used with events in a causal chain or otherwise linked. Attention then turns to study design, to the determination of the sample size needed to make a given comparison, and to statistically negative studies.
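
    The sample-size determination mentioned at the end can be illustrated with the standard normal-approximation formula for comparing two means; the clinical numbers below are invented:

```python
from math import ceil
from statistics import NormalDist

# Sample size per group for comparing two means (two-sided test),
# using the standard normal-approximation formula.
def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)          # 1.96 for alpha = 0.05
    z_beta = z(power)                   # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Example: detect a 5 mmHg difference with sd 10 mmHg.
print(n_per_group(delta=5, sd=10))
```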

  10. Statistical Survey of Non-Formal Education

    Directory of Open Access Journals (Sweden)

    Ondřej Nývlt

    2012-12-01

    Full Text Available focused on a programme within a regular education system. Labour market flexibility and new requirements on employees create a new domain of education called non-formal education. Is there a reliable statistical source with a good methodological definition for the Czech Republic? The Labour Force Survey (LFS) has been the basic statistical source for time comparison of non-formal education for the last ten years. Furthermore, a special Adult Education Survey (AES) in 2011 focused on the individual components of non-formal education in a detailed way. In general, the goal of the EU is to use data from both internationally comparable surveys for analyses of particular fields of lifelong learning, in such a way that annual LFS data could be enlarged by detailed information from the AES in five-year periods. This article describes the reliability of statistical data about non-formal education. This analysis is usually connected with sampling and non-sampling errors.

  11. Statistics in clinical research: Important considerations

    Science.gov (United States)

    Barkan, Howard

    2015-01-01

    Statistical analysis is one of the foundations of evidence-based clinical practice, a key in conducting new clinical research and in evaluating and applying prior research. In this paper, we review the choice of statistical procedures, analyses of the associations among variables and techniques used when the clinical processes being examined are still in process. We discuss methods for building predictive models in clinical situations, and ways to assess the stability of these models and other quantitative conclusions. Techniques for comparing independent events are distinguished from those used with events in a causal chain or otherwise linked. Attention then turns to study design, to the determination of the sample size needed to make a given comparison, and to statistically negative studies. PMID:25566715

  12. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    Science.gov (United States)

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
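
    One simple sensitivity analysis of the kind advocated is leave-one-out re-pooling; the effect sizes below are hypothetical, with one deliberately discrepant study:

```python
import numpy as np

# Leave-one-out sensitivity analysis for a fixed-effect meta-analysis:
# re-pool after dropping each study to see how much any single
# (possibly nonindependent) study drives the overall estimate.
effects = np.array([0.30, 0.35, 0.28, 0.90, 0.32])   # hypothetical effect sizes
ses = np.array([0.10, 0.12, 0.11, 0.10, 0.09])       # standard errors

def pooled(e, s):
    w = 1 / s**2                                     # inverse-variance weights
    return np.sum(w * e) / np.sum(w)

overall = pooled(effects, ses)
for k in range(len(effects)):
    keep = np.arange(len(effects)) != k
    print(f"without study {k}: {pooled(effects[keep], ses[keep]):.3f} "
          f"(overall {overall:.3f})")
```

    A large shift when one study is dropped, as for the 0.90 study here, flags exactly the kind of influential (or nonindependent) design the authors recommend reporting.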

  13. Statistical Background Needed to Read Professional Pharmacy Journals.

    Science.gov (United States)

    Moore, Randy; And Others

    1978-01-01

    An examination of professional pharmacy literature was undertaken to determine types of statistical terminology and analyses presented and compare these with the results of a survey to determine the statistical backgrounds of graduates of schools that grant the Doctor of Pharmacy and/or Master of Science in Hospital Pharmacy. (JMD)

  14. Spatial Statistical Analysis of Large Astronomical Datasets

    Science.gov (United States)

    Szapudi, Istvan

    2002-12-01

    The future of astronomy will be dominated by large and complex databases. Megapixel CMB maps, joint analyses of surveys across several wavelengths, as envisioned in the planned National Virtual Observatory (NVO), and the TByte/day data rate of future surveys (Pan-STARRS) put stringent constraints on future data analysis methods: they have to achieve at least N log N scaling to be viable in the long term. This warrants special attention to computational requirements, which were ignored during the initial development of current analysis tools in favor of statistical optimality. Even an optimal measurement, however, has residual errors due to statistical sample variance. Hence a suboptimal technique with significantly smaller measurement errors than the unavoidable sample variance produces results which are nearly identical to those of a statistically optimal technique. For instance, for analyzing CMB maps, I present a suboptimal alternative, indistinguishable from the standard optimal method with N³ scaling, that can be rendered N log N with a hierarchical representation of the data; a speed up of a trillion times compared to other methods. In this spirit I will present a set of novel algorithms and methods for spatial statistical analyses of future large astronomical databases, such as galaxy catalogs, megapixel CMB maps, or any point source catalog.
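
    The N log N theme can be illustrated with autocorrelation estimation via the FFT, which matches the naive O(N²) sum exactly; the one-dimensional "map" below is just white noise:

```python
import numpy as np

rng = np.random.default_rng(9)

# N log N estimation of the autocorrelation of a 1-D "map" via the FFT,
# versus the naive O(N^2) sum -- same answer, very different scaling.
n = 2048
x = rng.normal(size=n)
x = x - x.mean()

# FFT route: correlation is the inverse transform of the power spectrum.
f = np.fft.rfft(x, n=2*n)                 # zero-pad to avoid wrap-around
acf_fft = np.fft.irfft(f * np.conj(f))[:n] / n

# Naive route, checked for the first few lags only.
acf_naive = np.array([np.sum(x[:n-k] * x[k:]) / n for k in range(5)])
print(np.allclose(acf_fft[:5], acf_naive))
```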

  15. Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses

    Science.gov (United States)

    Jarvi, Susan I.; Bianchi, Kiara R.

    2006-01-01

    Population level studies of genetic diversity can provide information about population structure, individual genetic distinctiveness and former population size. They are especially important for rare and threatened species like the Alala, where they can be used to assess extinction risks and evolutionary potential. In an ideal situation multiple methods should be used to detect variation, and these methods should be comparable across studies. In this report, we discuss AFLP (Amplified Fragment Length Polymorphism) as a genetic approach for detecting variation in the Alala, describe our findings, and discuss these in relation to mtDNA and microsatellite data reported elsewhere in this same population. AFLP is a technique for DNA fingerprinting that has wide applications. Because little or no prior knowledge of the particular species is required to carry out this method of analysis, AFLP can be used universally across varied taxonomic groups. Within individuals, estimates of diversity or heterozygosity across genomes may be complex because levels of diversity differ between and among genes. One of the more traditional methods of estimating diversity employs the use of codominant markers such as microsatellites. Codominant markers detect each allele at a locus independently. Hence, one can readily distinguish heterozygotes from homozygotes, directly assess allele frequencies and calculate other population level statistics. Dominant markers (for example, AFLP) are scored as either present or absent (null) so heterozygotes cannot be directly distinguished from homozygotes. However, the presence or absence data can be converted to expected heterozygosity estimates which are comparable to those determined by codominant markers. High allelic diversity and heterozygosity inherent in microsatellites make them excellent tools for studies of wild populations and they have been used extensively.
One limitation to the use of microsatellites is that heterozygosity estimates are
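
    The presence/absence-to-heterozygosity conversion described above can be sketched as follows, assuming Hardy-Weinberg proportions; the ten-individual score matrix is invented, not Alala data:

```python
import numpy as np

# Converting dominant (presence/absence) AFLP scores to expected
# heterozygosity, assuming Hardy-Weinberg proportions: the band-absent
# phenotype frequency equals q^2 for the null allele, so q = sqrt(absent).
# Toy data: 10 individuals scored at 4 AFLP loci (1 = band present).
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [1, 1, 1, 1],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0],
                   [1, 0, 1, 1],
                   [1, 1, 0, 1],
                   [1, 1, 1, 1],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0]])

absent = 1 - scores.mean(axis=0)       # frequency of the null phenotype
q = np.sqrt(absent)                    # null-allele frequency under HWE
he = 2 * (1 - q) * q                   # expected heterozygosity per locus
print("per-locus He:", np.round(he, 3))
print("mean He:", round(he.mean(), 3))
```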

  16. A statistical package for computing time and frequency domain analysis

    Science.gov (United States)

    Brownlow, J.

    1978-01-01

    The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
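
    A minimal pipeline in the spirit of SPA (trend removal, then time- and frequency-domain characterization) might look like the sketch below; the sample rate and signal are made up, and this is not the SPA program's actual code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Preanalysis (linear trend removal) followed by time- and frequency-
# domain characterization, in the spirit of the SPA program's pipeline.
fs = 100.0                                  # sample rate, Hz (hypothetical)
t = np.arange(0, 10, 1/fs)
x = 0.5*t + np.sin(2*np.pi*5*t) + rng.normal(0, 0.3, t.size)

# 1. Remove the linear trend by least squares.
coef = np.polyfit(t, x, 1)
detrended = x - np.polyval(coef, t)

# 2. Time-domain statistics.
print(f"mean={detrended.mean():.3f}, sd={detrended.std():.3f}")

# 3. Frequency-domain: the periodogram peak should sit at 5 Hz.
spec = np.abs(np.fft.rfft(detrended))**2
freqs = np.fft.rfftfreq(t.size, 1/fs)
peak = freqs[np.argmax(spec[1:]) + 1]       # skip the DC bin
print(f"dominant frequency: {peak:.2f} Hz")
```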

  17. Waste statistics 2004

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2006-04-07

    The 2004 reporting to the ISAG comprises 394 plants owned by 256 enterprises. In 2003, reports covered 403 plants owned by 273 enterprises. Waste generation in 2004 is compared to targets for 2008 in the government's Waste Strategy 2005-2008. The following summarises waste generation in 2004: 1) In 2004, total reported waste arisings amounted to 13,359,000 tonnes, which is 745,000 tonnes, or 6 per cent, more than in 2003. 2) If amounts of residues from coal-fired power plants are excluded from statistics, waste arisings in 2004 were 12,179,000 tonnes, which is a 9 per cent increase from 2003. 3) If amounts of residues from coal-fired power plants and waste from the building and construction sector are excluded from statistics, total waste generation in 2004 amounted to 7,684,000 tonnes, which is 328,000 tonnes, or 4 per cent, more than in 2002. In other words, there has been an increase in total waste arisings, if residues and waste from building and construction are excluded. Waste from the building and construction sector is more sensitive to economic change than most other waste. 4) The total rate of recycling was 65 per cent. The 2008 target for recycling is 65 per cent. The rate of recycling in 2003 was also 65 per cent. 5) The total amount of waste led to incineration amounted to 26 per cent, plus an additional 1 per cent left in temporary storage to be incinerated at a later time. The 2008 target for incineration is 26 per cent. These are the same percentage figures as applied to incineration and storage in 2003. 6) The total amount of waste led to landfills amounted to 8 per cent, which is one percentage point better than the overall landfill target of a maximum of 9 per cent landfilling in 2008. Also in 2003, 8 per cent of the waste was landfilled. 7) The targets for treatment of waste from individual sectors are still not being met: too little waste from households and the service sector is being recycled, and too much waste from industry is being

  18. Waste statistics 2003

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    The 2003 reporting to the ISAG comprises 403 plants owned by 273 enterprises. In 2002, reports covered 407 plants owned by 296 enterprises. Waste generation in 2003 is compared to targets from 2008 in the government's Waste Strategy 2005-2008. The following can be said to summarise waste generation in 2003: 1) In 2003, total reported waste arisings amounted to 12,835,000 tonnes, which is 270,000 tonnes, or 2 per cent, less than in 2002. 2) If amounts of residues from coal-fired power plants are excluded from statistics, waste arisings in 2003 were 11,597,000 tonnes, which is a 2 per cent increase from 2002. 3) If amounts of residues from coal-fired power plants and waste from the building and construction sector are excluded from statistics, total waste generation in 2003 amounted to 7,814,000 tonnes, which is 19,000 tonnes, or 1 per cent, less than in 2002. In other words, there has been a fall in total waste arisings, if residues and waste from building and construction are excluded. 4) The overall rate of recycling amounted to 66 per cent, which is one percentage point above the overall recycling target of 65 per cent for 2008. In 2002 the total rate of recycling was 64 per cent. 5) The total amount of waste led to incineration amounted to 26 per cent, plus an additional 1 per cent left in temporary storage to be incinerated at a later time. The 2008 target for incineration is 26 per cent. These are the same percentage figures as applied to incineration and storage in 2002. 6) The total amount of waste led to landfills amounted to 8 per cent, which is one percentage point below the overall landfill target of a maximum of 9 per cent landfilling in 2008. In 2002, 9 per cent was led to landfill. 7) The targets for treatment of waste from individual sectors are still not being met: too little waste from households and the service sector is being recycled, and too much waste from industry is being led to landfill. (au)

  19. German cancer statistics 2004

    Directory of Open Access Journals (Sweden)

    Ziese Thomas

    2010-02-01

    Full Text Available Abstract Background For years the Robert Koch Institute (RKI has been annually pooling and reviewing the data from the German population-based cancer registries and evaluating them together with the cause-of-death statistics provided by the statistical offices. Traditionally, the RKI periodically estimates the number of new cancer cases in Germany on the basis of the available data from the regional cancer registries in which registration is complete; this figure, in turn, forms the basis for further important indicators. Methods This article gives a brief overview of current indicators - such as incidence, prevalence, mortality, survival rates - on the most common types of cancer, as well as important ratios on the risks of developing and dying of cancer in Germany. Results According to the latest estimate, there were a total of 436,500 new cancer cases in Germany in 2004. The most common cancer in men is prostate cancer with over 58,000 new cases per annum, followed by colorectal and lung cancer. In women, breast cancer remains the most common cancer with an estimated 57,000 new cases every year, also followed by colorectal cancer. These and further findings on selected cancer sites can be found in the current brochure on "Cancer in Germany", which is regularly published by the RKI together with the Association of Population-based Cancer Registries in Germany (GEKID. In addition, the RKI made cancer-prevalence estimates and calculated current morbidity and mortality risks at the federal level for the first time. According to these figures, the 5-year partial prevalence - i.e. the total number of cancer patients diagnosed over the past five years who are currently still living - exceeds 600,000 in men; the figure is about the same among women. Here, too, the most common cancers are prostate cancer in men and breast cancer in women. 
The lifetime risk of developing cancer, which is more related to the individual, is estimated to be higher among

  20. Statistics with JMP graphs, descriptive statistics and probability

    CERN Document Server

    Goos, Peter

    2015-01-01

    Peter Goos, Department of Statistics, University of Leuven, Faculty of Bio-Science Engineering, and University of Antwerp, Faculty of Applied Economics, Belgium; David Meintrup, Department of Mathematics and Statistics, University of Applied Sciences Ingolstadt, Faculty of Mechanical Engineering, Germany. A thorough presentation of introductory statistics and probability theory, with numerous examples and applications using JMP. Descriptive Statistics and Probability provides an accessible and thorough overview of the most important descriptive statistics for nominal, ordinal and quantitative data with partic

  1. [Statistical analysis using freely-available "EZR (Easy R)" software].

    Science.gov (United States)

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing-risk analyses and analyses with time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.
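
    The survival analyses that EZR exposes through its GUI rest on the Kaplan-Meier estimator. As a minimal sketch of the underlying computation (with hypothetical follow-up data, not EZR's own code), the estimator multiplies, at each event time, the fraction of at-risk subjects who survive that time:

    ```python
    # Minimal Kaplan-Meier estimator for right-censored data (illustrative sketch).
    def kaplan_meier(times, events):
        """Return (time, survival probability) pairs.

        times:  observed follow-up times
        events: 1 if the event occurred at that time, 0 if censored
        """
        pairs = sorted(zip(times, events))
        n_at_risk = len(pairs)
        surv = 1.0
        curve = []
        i = 0
        while i < len(pairs):
            t = pairs[i][0]
            deaths = n_t = 0
            # Group all observations sharing the same time t.
            while i < len(pairs) and pairs[i][0] == t:
                deaths += pairs[i][1]
                n_t += 1
                i += 1
            if deaths:
                # Multiply by the conditional survival fraction at time t.
                surv *= 1 - deaths / n_at_risk
                curve.append((t, surv))
            n_at_risk -= n_t  # censored and failed subjects both leave the risk set
        return curve

    # Hypothetical data: six patients, two censored (event = 0).
    print(kaplan_meier([5, 8, 8, 12, 15, 20], [1, 1, 0, 1, 0, 1]))
    ```

    In practice EZR delegates this to R's survival routines; the sketch only shows why censored observations shrink the risk set without dropping the survival curve.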

  2. Contributions to sampling statistics

    CERN Document Server

    Conti, Pier; Ranalli, Maria

    2014-01-01

    This book contains a selection of the papers presented at the ITACOSM 2013 Conference, held in Milan in June 2013. ITACOSM is the bi-annual meeting of the Survey Sampling Group S2G of the Italian Statistical Society, intended as an international forum for scientific discussion of developments in the theory and application of survey sampling methodologies in human and natural sciences. The book gathers research papers carefully selected from both invited and contributed sessions of the conference. The whole book is a relevant contribution to various key aspects of sampling methodology and techniques; it deals with some hot topics in sampling theory, such as calibration, quantile-regression and multiple frame surveys, and with innovative methodologies in important topics of both sampling theory and applications. Contributions cut across current sampling methodologies such as interval estimation for complex samples, randomized responses, bootstrap, weighting, modeling, imputati...

  3. Energy statistics manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    The Manual is written in a question-and-answer format. The points developed are introduced with a basic question, such as: What do people mean by 'fuels' and 'energy'? What units are used to express oil? How are energy data presented? Answers are given in simple terms and illustrated by graphs, charts and tables. More technical explanations are found in the annexes. The Manual contains seven chapters. The first one presents the fundamentals of energy statistics, five chapters deal with the five different fuels (electricity and heat; natural gas; oil; solid fuels and manufactured gases; renewables and waste) and the last chapter explains the energy balance. Three technical annexes and a glossary are also included. For the five chapters dedicated to the fuels, there are three levels of reading: the first one contains general information on the subject, the second one reviews issues which are specific to the joint IEA/OECD-Eurostat-UNECE questionnaires and the third one focuses on the essential elements of the subject. 43 figs., 22 tabs., 3 annexes.

  4. Waste statistics 2001

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    Reports to the ISAG (Information System for Waste and Recycling) for 2001 cover 402 Danish waste treatment plants owned by 295 enterprises. The total waste generation in 2001 amounted to 12,768,000 tonnes, which is 2% less than in 2000. Reductions are primarily due to the fact that sludge for mineralization is included with a dry matter content of 20% compared to 1.5% in previous statistics. This means that sludge amounts have been reduced by 808,886 tonnes. The overall rate of recycling amounted to 63%, which is 1% less than the overall recycling target of 64% for 2004. Since sludge has a high recycling rate, the reduction in sludge amounts of 808,886 tonnes has also caused the total recycling rate to fall. Waste amounts incinerated accounted for 25%, which is 1% more than the overall target of 24% for incineration in 2004. Waste going to landfill amounted to 10%, which is better than the overall landfill target for 2004 of a maximum of 12% for landfilling. Targets for treatment of waste from the different sectors, however, are still not complied with, since too little waste from households and the service sector is recycled, and too much waste from industry is led to landfill. (BA)

  5. International petroleum statistics report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-10-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1995; OECD stocks from 1973 through 1995; and OECD trade from 1985 through 1995.

  6. International petroleum statistics report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-27

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.

  7. Statistics of indistinguishable particles.

    Science.gov (United States)

    Wittig, Curt

    2009-07-02

    The wave function of a system containing identical particles takes into account the relationship between a particle's intrinsic spin and its statistical property. Specifically, the exchange of two identical particles having odd-half-integer spin results in the wave function changing sign, whereas the exchange of two identical particles having integer spin is accompanied by no such sign change. This is embodied in a factor (-1)^(2s), which has the value +1 for integer s (bosons), and -1 for odd-half-integer s (fermions), where s is the particle spin. All of this is well-known. In the nonrelativistic limit, a detailed consideration of the exchange of two identical particles shows that exchange is accompanied by a 2π reorientation that yields the (-1)^(2s) factor. The same bookkeeping is applicable to the relativistic case described by the proper orthochronous Lorentz group, because any proper orthochronous Lorentz transformation can be expressed as the product of spatial rotations and a boost along the direction of motion.
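
    The exchange factor described above is simple enough to tabulate directly. A small sketch (hypothetical helper, not from the paper) evaluates (-1)^(2s) and classifies each spin:

    ```python
    # Exchange phase (-1)^(2s) for identical particles of spin s:
    # +1 for integer spin (bosons), -1 for odd-half-integer spin (fermions).
    from fractions import Fraction

    def exchange_phase(s):
        """Sign picked up by the wave function when two spin-s particles are swapped."""
        two_s = Fraction(s) * 2
        if two_s.denominator != 1:
            raise ValueError("spin must be an integer or half-integer")
        return (-1) ** int(two_s)

    for s in (0, Fraction(1, 2), 1, Fraction(3, 2)):
        kind = "boson" if exchange_phase(s) == 1 else "fermion"
        print(f"s = {s}: phase {exchange_phase(s):+d} ({kind})")
    ```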

  8. International petroleum statistics report

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-11-01

    The International Petroleum Statistics Report presents data on international oil production, demand, imports, exports, and stocks. The report has four sections. Section 1 contains time series data on world oil production, and on oil demand and stocks in the Organization for Economic Cooperation and Development (OECD). This section contains annual data beginning in 1985, and monthly data for the most recent two years. Section 2 presents an oil supply/demand balance for the world. This balance is presented in quarterly intervals for the most recent two years. Section 3 presents data on oil imports by OECD countries. This section contains annual data for the most recent year, quarterly data for the most recent two quarters, and monthly data for the most recent twelve months. Section 4 presents annual time series data on world oil production and oil stocks, demand, and trade in OECD countries. World oil production and OECD demand data are for the years 1970 through 1994; OECD stocks from 1973 through 1994; and OECD trade from 1984 through 1994.

  9. Problematizing Statistical Literacy: An Intersection of Critical and Statistical Literacies

    Science.gov (United States)

    Weiland, Travis

    2017-01-01

    In this paper, I problematize traditional notions of statistical literacy by juxtaposing it with critical literacy. At the school level statistical literacy is vitally important for students who are preparing to become citizens in modern societies that are increasingly shaped and driven by data based arguments. The teaching of statistics, which is…

  10. Statistical Literacy: Developing a Youth and Adult Education Statistical Project

    Science.gov (United States)

    Conti, Keli Cristina; Lucchesi de Carvalho, Dione

    2014-01-01

    This article focuses on the notion of literacy--general and statistical--in the analysis of data from a fieldwork research project carried out as part of a master's degree that investigated the teaching and learning of statistics in adult education mathematics classes. We describe the statistical context of the project that involved the…

  11. Statistics Anxiety and Business Statistics: The International Student

    Science.gov (United States)

    Bell, James A.

    2008-01-01

    Does the international student suffer from statistics anxiety? To investigate this, the Statistics Anxiety Rating Scale (STARS) was administered to sixty-six beginning statistics students, including twelve international students and fifty-four domestic students. Due to the small number of international students, nonparametric methods were used to…

  12. Statistical definition of observable and the structure of statistical models

    Science.gov (United States)

    Holevo, A. S.

    1985-12-01

    A purely statistical characterization of measurements of observables (described by spectral measures in conventional formalism of quantum mechanics) is given in the framework of the general statistical (convex) approach. The relation to physical premises underlying the conventional notion of observable is discussed. Structural aspects of general statistical models such as central decomposition and characterization of classical models are considered. It is shown by explicit construction that an arbitrary statistical model admits a formal introduction of "hidden variables" preserving the structural properties of a single statistical model. The relation of this result to other theorems on hidden variables is discussed.

  13. Colorectal cancer statistics, 2017.

    Science.gov (United States)

    Siegel, Rebecca L; Miller, Kimberly D; Fedewa, Stacey A; Ahnen, Dennis J; Meester, Reinier G S; Barzi, Afsaneh; Jemal, Ahmedin

    2017-05-06

    Colorectal cancer (CRC) is one of the most common malignancies in the United States. Every 3 years, the American Cancer Society provides an update of CRC incidence, survival, and mortality rates and trends. Incidence data through 2013 were provided by the Surveillance, Epidemiology, and End Results program, the National Program of Cancer Registries, and the North American Association of Central Cancer Registries. Mortality data through 2014 were provided by the National Center for Health Statistics. CRC incidence rates are highest in Alaska Natives and blacks and lowest in Asian/Pacific Islanders, and they are 30% to 40% higher in men than in women. Recent temporal patterns are generally similar by race and sex, but differ by age. Between 2000 and 2013, incidence rates in adults aged ≥50 years declined by 32%, with the drop largest for distal tumors in people aged ≥65 years (incidence rate ratio [IRR], 0.50; 95% confidence interval [95% CI], 0.48-0.52) and smallest for rectal tumors in ages 50 to 64 years (male IRR, 0.91; 95% CI, 0.85-0.96; female IRR, 1.00; 95% CI, 0.93-1.08). Overall CRC incidence in individuals ages ≥50 years declined from 2009 to 2013 in every state except Arkansas, with the decrease exceeding 5% annually in 7 states; however, rectal tumor incidence in those ages 50 to 64 years was stable in most states. Among adults aged history of CRC/advanced adenomas) and eliminating disparities in high-quality treatment. In addition, research is needed to elucidate causes for increasing CRC in young adults. CA Cancer J Clin 2017;67:177-193. © 2017 American Cancer Society.
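
    The incidence rate ratios quoted above (e.g., IRR 0.50; 95% CI, 0.48-0.52) are typically accompanied by a confidence interval computed on the log scale. As an illustrative sketch with made-up counts (not the registry data behind the article), the standard log-normal approximation looks like this:

    ```python
    # Incidence rate ratio (IRR) with a Wald 95% CI on the log scale (illustrative).
    import math

    def irr_ci(events_a, persontime_a, events_b, persontime_b, z=1.96):
        """IRR of group A vs group B, with approximate 95% CI.

        events_*:     case counts in each group
        persontime_*: person-time at risk in each group
        """
        irr = (events_a / persontime_a) / (events_b / persontime_b)
        # Standard error of log(IRR) under the Poisson model.
        se_log = math.sqrt(1 / events_a + 1 / events_b)
        lower = irr * math.exp(-z * se_log)
        upper = irr * math.exp(z * se_log)
        return irr, lower, upper

    # Hypothetical counts: 450 vs 900 cases per million person-years.
    print(irr_ci(events_a=450, persontime_a=1_000_000,
                 events_b=900, persontime_b=1_000_000))
    ```

    With these made-up counts the point estimate is 0.50 and the interval straddles it; the published intervals are narrower because registry case counts are far larger.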

  14. Breast cancer statistics, 2013.

    Science.gov (United States)

    DeSantis, Carol; Ma, Jiemin; Bryan, Leah; Jemal, Ahmedin

    2014-01-01

    In this article, the American Cancer Society provides an overview of female breast cancer statistics in the United States, including data on incidence, mortality, survival, and screening. Approximately 232,340 new cases of invasive breast cancer and 39,620 breast cancer deaths are expected to occur among US women in 2013. One in 8 women in the United States will develop breast cancer in her lifetime. Breast cancer incidence rates increased slightly among African American women; decreased among Hispanic women; and were stable among whites, Asian Americans/Pacific Islanders, and American Indians/Alaska Natives from 2006 to 2010. Historically, white women have had the highest breast cancer incidence rates among women aged 40 years and older; however, incidence rates are converging among white and African American women, particularly among women aged 50 years to 59 years. Incidence rates increased for estrogen receptor-positive breast cancers in the youngest white women, Hispanic women aged 60 years to 69 years, and all but the oldest African American women. In contrast, estrogen receptor-negative breast cancers declined among most age and racial/ethnic groups. These divergent trends may reflect etiologic heterogeneity and the differing effects of some factors, such as obesity and parity, on risk by tumor subtype. Since 1990, breast cancer death rates have dropped by 34% and this decrease was evident in all racial/ethnic groups except American Indians/Alaska Natives. Nevertheless, survival disparities persist by race/ethnicity, with African American women having the poorest breast cancer survival of any racial/ethnic group. Continued progress in the control of breast cancer will require sustained and increased efforts to provide high-quality screening, diagnosis, and treatment to all segments of the population. © 2013 American Cancer Society, Inc.

  15. SDI: Statistical dynamic interactions

    Energy Technology Data Exchange (ETDEWEB)

    Blann, M.; Mustafa, M.G. (Lawrence Livermore National Lab., CA (USA)); Peilert, G.; Stoecker, H.; Greiner, W. (Frankfurt Univ. (Germany, F.R.). Inst. fuer Theoretische Physik)

    1991-04-01

    We focus on the combined statistical and dynamical aspects of heavy ion induced reactions. The overall picture is illustrated by considering the reaction ³⁶Ar + ²³⁸U at a projectile energy of 35 MeV/nucleon. We illustrate the time dependent bound excitation energy due to the fusion/relaxation dynamics as calculated with the Boltzmann master equation. An estimate of the mass, charge and excitation of an equilibrated nucleus surviving the fast (dynamic) fusion-relaxation process is used as input into an evaporation calculation which includes 20 heavy fragment exit channels. The distribution of excitations between residue and clusters is explicitly calculated, as is the further deexcitation of clusters to bound nuclei. These results are compared with the exclusive cluster multiplicity measurements of Kim et al., and are found to give excellent agreement. We consider also an equilibrated residue system at 25% lower initial excitation, which gives an unsatisfactory exclusive multiplicity distribution. This illustrates that exclusive fragment multiplicity may provide a thermometer for system excitation. This analysis of data involves successive binary decay with no compressional effects nor phase transitions. Several examples of primary versus final (stable) cluster decay probabilities for an A = 100 nucleus at excitations of 100 to 800 MeV are presented. From these results a large change in multifragmentation patterns may be understood as a simple phase space consequence, invoking neither phase transitions, nor equation of state information. These results are used to illustrate physical quantities which are ambiguous to deduce from experimental fragment measurements. 14 refs., 4 figs.

  16. Mathematical foundations of quantum statistics

    CERN Document Server

    Khinchin, A Y

    2011-01-01

    A coherent, well-organized look at the basis of quantum statistics' computational methods, the determination of the mean values of occupation numbers, the foundations of the statistics of photons and material particles, thermodynamics.

  17. Statistics: Number of Cancer Survivors

    Science.gov (United States)

    ... 5 million), gynecologic (8%, 1.3 million) and melanoma (8%, 1.2 million). ...

  18. Birth Defects Data and Statistics

    Science.gov (United States)

    ... Read below for the latest national statistics on the occurrence of birth defects in the ...

  19. Heart Disease and Stroke Statistics

    Science.gov (United States)

    ... Each year, the American Heart Association, in conjunction ... health and disease in the population. What is Prevalence? Prevalence is an estimate ...

  20. Usage Statistics: MedlinePlus

    Science.gov (United States)

    ... MedlinePlus usage statistics (https://medlineplus.gov/usestatistics.html): quarterly page views and unique visitors, by quarter since Oct-Dec-98 ...