Bhattacharya, Debanjali; Sinha, Neelam; Saini, Jitender
2017-03-01
Multiple system atrophy (MSA) is a rare, incurable, progressive neurodegenerative disorder that affects the nervous system and movement, and it poses a considerable diagnostic challenge to medical researchers. Because the corpus callosum (CC) is the largest white matter structure in the brain and enables inter-hemispheric communication, quantification of callosal atrophy may provide vital information at the earliest possible stages. The main objective is to identify disease-related differences in CC structure through quantitative analysis of the pattern of callosal atrophy. We report quantification of structural changes in regional anatomical thickness, area, and length of the CC in MSA patient groups with respect to healthy controls. The method isolates the mid-sagittal CC, parcellates it into 100 segments along its length, and measures the width of each segment. It also measures areas within the five geometrically defined callosal compartments of the well-known Witelson and Hofer-Frahm schemes. For quantification, statistical tests are performed on these different callosal measurements. The statistical analysis shows that, compared to healthy controls, width is reduced drastically throughout the CC in the MSA group, and the changes in area and length are also significant. The study is further extended, using the same methodology, to check whether thickness differs significantly between the two variants of MSA, Parkinsonian MSA and cerebellar MSA; for the area and length of these two MSA subgroups, however, no substantial difference is obtained. The study is performed on twenty subjects in each of the control and MSA groups, all of whom had T1-weighted MRI.
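The parcellation-and-width measurement this abstract describes can be sketched roughly as follows. This is a minimal illustration on a synthetic binary mask, assuming a mid-sagittal CC mask oriented with the anterior-posterior axis horizontal; the function name, bin scheme, and toy mask are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def callosal_widths(mask, n_segments=100, pixel_mm=1.0):
    """Parcellate a binary mid-sagittal CC mask into n_segments bins
    along its anterior-posterior extent and measure each bin's
    vertical thickness (a simple proxy for callosal width)."""
    cols = np.where(mask.any(axis=0))[0]               # columns containing CC
    edges = np.linspace(cols[0], cols[-1] + 1, n_segments + 1)
    widths = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        seg = mask[:, int(lo):max(int(hi), int(lo) + 1)]
        rows = np.where(seg.any(axis=1))[0]
        widths.append((rows[-1] - rows[0] + 1) * pixel_mm if rows.size else 0.0)
    return np.asarray(widths)

# Toy example: a 20x100 rectangular "CC" of uniform 5-pixel thickness
toy = np.zeros((20, 100), dtype=bool)
toy[8:13, :] = True
w = callosal_widths(toy, n_segments=100)
```

On real data, each of the 100 widths (or the Witelson/Hofer-Frahm compartment areas) would then feed the group-level statistical tests the abstract mentions.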
Acoustic Analysis and Design of the E-STA MSA Simulator
Bittinger, Samantha A.
2016-01-01
The Orion European Service Module Structural Test Article (E-STA) Acoustic Test was completed in May 2016 to verify that the European Service Module (ESM) can withstand qualification acoustic environments. The test article required an aft closeout to simulate the Multi-Purpose Crew Vehicle (MPCV) Stage Adapter (MSA) cavity; however, the flight MSA design was cost-prohibitive to build. NASA Glenn Research Center (GRC) had 6 months to design an MSA Simulator that could recreate the qualification-prediction MSA cavity sound pressure level to within a reasonable tolerance. This paper summarizes the design and analysis process used to arrive at a design for the MSA Simulator, and then compares its performance to the final prediction models created prior to test.
CARRYING OUT OF MEASUREMENT SYSTEM ANALYSIS (MSA) ON OJSC «BSW – MANAGEMENT COMPANY OF HOLDING «BMC»
Directory of Open Access Journals (Sweden)
E. V. Borisenko
2017-01-01
At OJSC «BSW – Management Company of Holding «BMC», an integrated management system has been introduced and operates as a unified whole; it includes, in particular, the requirements of the automotive-industry quality management standard IATF 16949:2016. One of the special requirements of IATF 16949:2016 in the field of metrology is Measurement System Analysis (MSA). MSA is a constituent of the statistical management of manufacturing processes, intended for the study of measuring processes with the aim of assessing their acceptability. The order and procedure for carrying out Measurement System Analysis (MSA) at OJSC «BSW – Management Company of Holding «BMC» are described in the article.
Prepositions in MSA and English
Directory of Open Access Journals (Sweden)
Saad Nasser Aldwayan
2013-01-01
Spatial scenes are identical across the world's languages; however, cultures may diverge in how they profile those scenes (Levinson 2003). This paper selects for study the prepositions in and on in English and their Modern Standard Arabic (MSA) counterparts fi and 3ala, arguing that MSA and English diverge in the spatial configurations and meanings of these prepositions. The sub-schemas of CONTAINMENT (in-ness) in MSA are found to partially overlap with those of English, with the other sub-schemas being taken care of by SUPPORT (on-ness) and PUNCTUALITY (point-ness). Such differences classify MSA as more of a CONTAINMENT-based language than English, which seems to prefer SUPPORT and PUNCTUALITY. However, English and MSA seem to converge in their metaphoric conceptualizations of states, owing to conceptual embodiment (Lakoff 1987). The article discusses the implications of these findings for spatial and cultural cognition and for EFL/ESL writing and translation.
Lifshits, A M
1979-01-01
A general characterization of multivariate statistical analysis (MSA) is given. Methodological premises and criteria for selecting an MSA method adequate for pathoanatomical investigations of the epidemiology of multicausal diseases are presented. The experience of using MSA with computers and standard computing programs in studies of coronary artery atherosclerosis, based on material from 2060 autopsies, is described. The combined use of four MSA methods (sequential, correlation, regression, and discriminant analysis) made it possible to quantify the contribution of each of the 8 examined risk factors to the development of atherosclerosis. The most important factors were found to be age, arterial hypertension, and heredity. Occupational hypodynamia and excess body weight were more important in men, diabetes mellitus in women. Accounting for this combination of risk factors with MSA methods provides a more reliable prognosis of the likelihood of fatal coronary heart disease than does prognosis of the degree of coronary atherosclerosis alone.
Tattiyapong, Muncharee; Sivakumar, Thillaiampalam; Takemae, Hitoshi; Simking, Pacharathon; Jittapalapong, Sathaporn; Igarashi, Ikuo; Yokoyama, Naoaki
2016-07-01
Babesia bovis, an intraerythrocytic protozoan parasite, causes severe clinical disease in cattle worldwide. The genetic diversity of parasite antigens often results in different immune profiles in infected animals, hindering efforts to develop immune control methodologies against the B. bovis infection. In this study, we analyzed the genetic diversity of the merozoite surface antigen-1 (msa-1) gene using 162 B. bovis-positive blood DNA samples sourced from cattle populations reared in different geographical regions of Thailand. The identity scores shared among 93 msa-1 gene sequences isolated by PCR amplification were 43.5-100%, and the similarity values among the translated amino acid sequences were 42.8-100%. Of 23 total clades detected in our phylogenetic analysis, Thai msa-1 gene sequences occurred in 18 clades; seven among them were composed of sequences exclusively from Thailand. To investigate differential antigenicity of isolated MSA-1 proteins, we expressed and purified eight recombinant MSA-1 (rMSA-1) proteins, including an rMSA-1 from B. bovis Texas (T2Bo) strain and seven rMSA-1 proteins based on the Thai msa-1 sequences. When these antigens were analyzed in a western blot assay, anti-T2Bo cattle serum strongly reacted with the rMSA-1 from T2Bo, as well as with three other rMSA-1 proteins that shared 54.9-68.4% sequence similarity with T2Bo MSA-1. In contrast, no or weak reactivity was observed for the remaining rMSA-1 proteins, which shared low sequence similarity (35.0-39.7%) with T2Bo MSA-1. While demonstrating the high genetic diversity of the B. bovis msa-1 gene in Thailand, the present findings suggest that the genetic diversity results in antigenicity variations among the MSA-1 antigens of B. bovis in Thailand. Copyright © 2016 Elsevier B.V. All rights reserved.
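The identity scores reported above (43.5-100% among msa-1 sequences) are pairwise percent identities computed over aligned sequences. A minimal sketch of that calculation on pre-aligned toy sequences follows; the sequences are invented, and real msa-1 comparisons would first be aligned with a standard tool (e.g. ClustalW or MAFFT).

```python
def percent_identity(a, b):
    """Percent identity over aligned, gap-free positions.
    Assumes the two sequences are already aligned (equal length,
    '-' marking gaps); gapped positions are excluded from the count."""
    assert len(a) == len(b), "inputs must be pre-aligned"
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    aligned = sum(x != "-" and y != "-" for x, y in zip(a, b))
    return 100.0 * matches / aligned

# Invented 10-residue toy alignment with one mismatch and one gap
seq1 = "MKVLAT-GQE"
seq2 = "MKVLSTAGQE"
pid = percent_identity(seq1, seq2)
```

Applying this across all sequence pairs yields the identity matrix from which ranges such as 43.5-100% are summarized.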
Directory of Open Access Journals (Sweden)
Alba Marina Gimenez
2016-11-01
Background Babesia bovis is a tick-transmitted protozoan hemoparasite and the causative agent of bovine babesiosis, a potential risk to more than 500 million cattle worldwide. The vaccines currently available are based on attenuated parasites, which are difficult to produce, and are only recommended for use in bovines under one year of age. When used in older animals, these vaccines may cause life-threatening clinical symptoms and eventually death. The development of a multi-subunit recombinant vaccine against B. bovis would be attractive from an economic standpoint and, most importantly, could be recommended for animals of any age. In the present study, recombinant ectodomains of MSA-2a1, MSA-2b and MSA-2c antigens were expressed in Pichia pastoris yeast as secreted soluble peptides. Results The antigens were purified to homogeneity, and biochemically and immunologically characterized. A vaccine formulation was obtained by emulsifying a mixture of the three peptides with the adjuvant Montanide ISA 720, which elicited high IgG antibody titers against each of the above antigens. IgG antibodies generated against each MSA-antigen recognized merozoites and significantly inhibited the invasion of bovine erythrocytes. Cellular immune responses were also detected, which were characterized by splenic and lymph node CD4+ T cells producing IFN-γ and TNF-α upon stimulation with the antigens MSA-2a1 or MSA-2c. Conclusions These data strongly suggest the high protective potential of the presented formulation, and we propose that it could be tested in vaccination trials of bovines challenged with B. bovis.
Methanesulfonic acid (MSA) migration in polar ice: data synthesis and theory
Osman, Matthew; Das, Sarah B.; Marchal, Olivier; Evans, Matthew J.
2017-11-01
Methanesulfonic acid (MSA; CH₃SO₃H) in polar ice is a unique proxy of marine primary productivity, synoptic atmospheric transport, and regional sea-ice behavior. However, MSA can be mobile within the firn and ice matrix, a post-depositional process that is well known but poorly understood and documented, leading to uncertainties in the integrity of the MSA paleoclimatic signal. Here, we use a compilation of 22 ice core MSA records from Greenland and Antarctica and a model of soluble impurity transport in order to comprehensively investigate the vertical migration of MSA from summer layers, where MSA is originally deposited, to adjacent winter layers in polar ice. We find that the shallowest depth of MSA migration in our compilation varies over a wide range (~2 to 400 m) and is positively correlated with snow accumulation rate and negatively correlated with the ice concentration of Na⁺ (typically the most abundant marine cation). Although the considered soluble impurity transport model provides a useful mechanistic framework for studying MSA migration, it remains limited by inadequate constraints on key physico-chemical parameters, most notably the diffusion coefficient of MSA in cold ice (DMS). We derive a simplified version of the model, which includes DMS as the sole parameter, in order to illuminate aspects of the migration process. Using this model, we show that the progressive phase alignment of MSA and Na⁺ concentration peaks observed along a high-resolution West Antarctic core is most consistent with the ~10⁻¹² m² s⁻¹ values previously estimated from laboratory studies. More generally, our data synthesis and model results suggest that (i) MSA migration may be fairly ubiquitous, particularly in coastal and (or) high-accumulation regions across Greenland and Antarctica; and (ii) migration can significantly change annual and multiyear MSA concentration averages. Thus, in most cases, caution should be exercised when interpreting polar ice core MSA records, although records…
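The single-parameter migration picture above can be illustrated with a toy 1-D diffusion calculation: a narrow summer MSA peak spreading toward adjacent winter layers under a diffusivity D. This is a generic explicit finite-difference sketch, not the authors' model; the grid spacing, layer thickness, and the D value (order 10⁻¹² m² s⁻¹, as discussed for cold ice) are illustrative assumptions.

```python
import numpy as np

def diffuse_profile(c, dz, D, t_total, dt):
    """Explicit 1-D diffusion of an impurity concentration profile.
    A crude stand-in for a single-parameter (D) migration model;
    the two boundary cells are held no-flux."""
    c = c.astype(float)
    r = D * dt / dz**2
    assert r <= 0.5, "explicit scheme stability limit"
    for _ in range(int(t_total / dt)):
        c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        c[0], c[-1] = c[1], c[-2]        # no-flux boundaries
    return c

# Annual layer ~20 cm, summer MSA spike in the middle, one year of diffusion
z = np.linspace(0.0, 0.2, 201)           # 1 mm grid (meters)
c0 = np.exp(-((z - 0.1) / 0.005) ** 2)   # narrow summer peak
c1 = diffuse_profile(c0, dz=1e-3, D=1e-12, t_total=3.15e7, dt=1e5)
```

The peak flattens while total MSA is conserved, which is the qualitative behavior behind summer-to-winter phase migration of concentration maxima.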
Using msa-2b as a molecular marker for genotyping Mexican isolates of Babesia bovis.
Genis, Alma D; Perez, Jocelin; Mosqueda, Juan J; Alvarez, Antonio; Camacho, Minerva; Muñoz, Maria de Lourdes; Rojas, Carmen; Figueroa, Julio V
2009-12-01
Variable merozoite surface antigens of Babesia bovis are exposed glycoproteins that play a role in erythrocyte invasion. Members of this gene family include msa-1 and msa-2 (msa-2c, msa-2a(1), msa-2a(2) and msa-2b). To determine the sequence variation among Mexican B. bovis isolates using msa-2b as a genetic marker, PCR amplicons corresponding to msa-2b were cloned, and plasmids carrying the corresponding inserts were purified and sequenced. Comparative analysis of nucleotide and deduced amino acid sequences revealed distinct degrees of variability and identity among the coding sequences obtained from 16 geographically distinct Mexican B. bovis isolates and a reference strain. Clustal-W multiple alignments of the MSA-2b deduced amino acid sequences from the 17 B. bovis Mexican isolates revealed three genotypes, each with a distinct set of amino acid residues in the variable region: genotype I, represented by the MO7 strain (in vitro culture-derived from the Mexico isolate) as well as the RAD, Chiapas-1, Tabasco and Veracruz-3 isolates; genotype II, represented by the Jalisco, Mexico and Veracruz-2 isolates; and genotype III, comprising the sequences from most of the isolates studied: Tamaulipas-1, Chiapas-2, Guerrero-1, Nayarit, Quintana Roo, Nuevo Leon, Tamaulipas-2, Yucatan and Guerrero-2. Moreover, these three genotypes could be discriminated from each other using a PCR-RFLP approach. The results suggest that indels occurring within the variable region of msa-2b sequences can serve as useful markers for identifying the genotypes present in field populations of B. bovis isolated from infected cattle in Mexico.
A success story LHC cable production at ALSTOM-MSA
Mocaer, P; Köhler, C; Verwaerde, C
2005-01-01
ITER, when constructed, will be the machine using the largest amount of superconducting strand ever built (Nb₃Sn and NbTi). ALSTOM MSA Magnets and Superconductors SA ("ALSTOM-MSA") received in 1998 the largest orders to date for the delivery of superconducting strands and cables (3100 km of cables for dipole and quadrupole magnets, plus various strands) for the Large Hadron Collider (LHC) being built at CERN, Geneva. These orders to ALSTOM-MSA correspond to more than 600 metric tons of superconducting strand, an amount to be compared with the roughly 600 metric tons of Nb₃Sn strand and 250 metric tons of NbTi strand necessary for ITER. Starting from small, short R&D programs in the early nineties, ALSTOM-MSA has reached its industrial targets and, as of September 2004, has delivered around 74% of the total orders, with products meeting high quality standards. Production continues at the contractual delivery rate and with satisfactory financial results, with deliveries due to finish around the end of 2005, taking into...
Li, Xiaodi; Wang, Yuzhou; Wang, Zhanhang; Xu, Yan; Zheng, Wenhua
2018-01-01
The objective of this study is to evaluate postural dysfunction in the parkinsonian (MSA-P) and cerebellar (MSA-C) types of multiple system atrophy by static posturography. A total of 29 MSA-P patients, 40 MSA-C patients, and 23 healthy controls (HC) were recruited and performed a sensory organization test (SOT). The amplitude of postural sway was measured and transformed into energy values by Fourier analysis. SOT scores, the frequency of falls and of typical 3-Hz postural tremor during the four stance tasks, and energy values in three frequency bands were recorded and compared. Compared with HC, SOT scores were significantly lower in both MSA groups, especially in MSA-C patients. Typical 3-Hz postural tremor was observed in MSA-C patients and in 24.1% of MSA-P patients, but in none of the HC. The energy value of the MSA-C group was significantly higher than that of the MSA-P group, especially in the higher frequency band (2-20 Hz) and in the more difficult stance tasks (SOT conditions 3-4, foam surface with eyes open or closed). Both MSA-P and MSA-C were characterized by severe static postural dysfunction; however, typical 3-Hz postural tremor was predominant in MSA-C and is very useful in the differential diagnosis between MSA-P and MSA-C.
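The band-wise energy computation used in such posturography studies can be sketched as follows. Only the 2-20 Hz band is named in the abstract; the lower band edge, the 100 Hz sampling rate, and the synthetic sway signal (slow drift plus a 3 Hz "tremor" component) are illustrative assumptions.

```python
import numpy as np

def band_energy(signal, fs, bands):
    """Energy of a sway signal in the given frequency bands,
    computed from the squared magnitude of the real FFT."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return {f"{lo}-{hi} Hz": spec[(freqs >= lo) & (freqs < hi)].sum()
            for lo, hi in bands}

fs = 100.0                                # assumed sampling rate (Hz)
t = np.arange(0, 10, 1.0 / fs)
# Synthetic sway: 0.3 Hz drift plus a 3 Hz "postural tremor" component
sway = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.2 * np.sin(2 * np.pi * 3.0 * t)
e = band_energy(sway, fs, [(0.0, 2.0), (2.0, 20.0)])
```

A 3-Hz tremor component shows up as excess energy in the 2-20 Hz band, which is the kind of group difference the study compares between MSA-C and MSA-P.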
BOREAS HYP-8 DEM Data Over The NSA-MSA and SSA-MSA in The AEAC Projection
Knapp, David E.; Hall, Forrest G. (Editor); Wang, Xue-Wen; Band, L. E.; Smith, David E. (Technical Monitor)
2000-01-01
These data were derived from the original Digital Elevation Models (DEMs) produced by the Boreal Ecosystem-Atmosphere Study (BOREAS) Hydrology (HYD)-8 team. The original DEMs were in the Universal Transverse Mercator (UTM) projection, while this product is projected in the Albers Equal-Area Conic (AEAC) projection. The pixel size of the data is 100 meters, which is appropriate for the 1:50,000-scale contours from which the DEMs were made. The original data were compiled from information available in the 1970s and 1980s. This data set covers the two Modeling Sub-Areas (MSAs) that are contained within the Southern Study Area (SSA) and the Northern Study Area (NSA). The data are stored in binary, image format files. The DEM data over the NSA-MSA and SSA-MSA in the AEAC projection are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).
Denali Ice Core MSA: A Record of North Pacific Primary Productivity
Polashenski, D.; Osterberg, E. C.; Winski, D.; Kreutz, K. J.; Wake, C. P.; Ferris, D. G.; Introne, D.; Campbell, S. W.
2017-12-01
The high nutrient, low chlorophyll region of the North Pacific is one of the most biologically productive marine ecosystems in the world and forms the basis of commercial, sport, and subsistence fisheries worth more than a billion dollars annually. Marine phytoplankton are important both as the primary producers in these ecosystems and as a major source of biogenic sulfur emissions, which have long been hypothesized to serve as a biological control on Earth's climate system. Despite their importance, the record of marine phytoplankton abundance and of the flux of biogenic sulfur from these regions is not well constrained. In situ measurements of marine phytoplankton from oceanographic cruises over the past several decades are limited in both spatial and temporal resolution. Meanwhile, marine sediment records may provide insight on million-year timescales but lack decadal resolution owing to slow sediment deposition rates and bioturbation. In this study, we investigate changes in marine phytoplankton productivity of the northeastern subarctic Pacific Ocean (NSPO) over the twentieth century using the methanesulfonic acid (MSA) record from the Mt. Hunter ice cores drilled in Denali National Park, Alaska. These parallel, 208-meter-long ice cores were drilled during the 2013 field season on the Mt. Hunter plateau (63° N, 151° W, 4,000 m above sea level). Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) modeling is used to identify likely source areas in the NSPO for MSA being transported to the core site. SeaWiFS satellite imagery allows a direct comparison of chlorophyll a concentrations in these source areas with MSA concentrations in the core record through time. Our findings suggest that the Denali ice core MSA record reflects changes in the biological productivity of marine phytoplankton and shows a significant decline in MSA beginning in 1961. We investigate several hypotheses for potential mechanisms driving this MSA decline.
Directory of Open Access Journals (Sweden)
Sadreyev Ruslan I
2004-08-01
Background Profile-based analysis of multiple sequence alignments (MSA) allows for accurate comparison of protein families. Here, we address the problems of detecting statistically confident dissimilarities (1) between an MSA position and a set of predicted residue frequencies, and (2) between two MSA positions. These problems are important for (i) evaluation and optimization of methods predicting residue occurrence at protein positions; (ii) detection of potentially misaligned regions in automatically produced alignments and their further refinement; and (iii) detection of sites that determine functional or structural specificity in two related families. Results For problems (1) and (2), we propose analytical estimates of the P-value and apply them to the detection of significant positional dissimilarities in various experimental situations. (a) We compare structure-based predictions of residue propensities at a protein position to the actual residue frequencies in the MSA of homologs. (b) We evaluate our method by its ability to detect erroneous position matches produced by an automatic sequence aligner. (c) We compare MSA positions that correspond to residues aligned by automatic structure aligners. (d) We compare MSA positions that are aligned by high-quality manual superposition of structures. Detected dissimilarities reveal shortcomings of the automatic methods for residue frequency prediction and alignment construction. For the high-quality structural alignments, the dissimilarities suggest sites of potential functional or structural importance. Conclusion The proposed computational method is of significant potential value for the analysis of protein families.
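A minimal illustration of problem (1), comparing observed residues at one alignment column against predicted frequencies: this sketch uses a generic goodness-of-fit G statistic rather than the paper's analytical P-value estimates, and the toy frequencies and columns are invented for the example (the predicted distribution is assumed to cover every observed residue).

```python
import math
from collections import Counter

def g_statistic(column, predicted):
    """Goodness-of-fit G statistic between the observed residues of one
    alignment column and a predicted residue-frequency distribution.
    Large G means the column is dissimilar to the prediction."""
    n = len(column)
    counts = Counter(column)
    g = 0.0
    for aa, p in predicted.items():
        obs = counts.get(aa, 0)
        if obs:                       # 0 * log(0) terms contribute nothing
            g += 2.0 * obs * math.log(obs / (n * p))
    return g

pred = {"A": 0.25, "L": 0.25, "V": 0.25, "G": 0.25}  # toy frequencies
well_matched = "ALVG" * 5        # 20 residues matching the prediction
mismatched = "A" * 20            # column dominated by a single residue
g_ok = g_statistic(well_matched, pred)
g_bad = g_statistic(mismatched, pred)
```

Under standard asymptotics G is approximately chi-square distributed, so a P-value could be attached to each column; the paper's contribution is more accurate analytical estimates for exactly this kind of positional comparison.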
Sahukhal, Gyan S; Elasri, Mohamed O
2014-06-11
Community-acquired, methicillin-resistant Staphylococcus aureus strains often cause localized infections in immunocompromised hosts, but some strains show enhanced virulence leading to severe infections even among healthy individuals with no predisposing risk factors. The genetic basis for this enhanced virulence has yet to be determined. S. aureus possesses a wide variety of virulence factors, the expression of which is carefully coordinated by a variety of regulators. Several virulence regulators have been well characterized, but others have yet to be thoroughly investigated. Previously, we identified the msa gene as a regulator of several virulence genes, biofilm development, and antibiotic resistance. We also found evidence of the involvement of upstream genes in msa function. To investigate the mechanism of regulation of the msa gene (renamed msaC), we examined the upstream genes whose expression was affected by its deletion. We showed that msaC is part of a newly defined four-gene operon (msaABCR), in which msaC is a non-protein-coding RNA that is essential for the function of the operon. Furthermore, we found that an antisense RNA (msaR) is complementary to the 5' end of the msaB gene and is expressed in a growth phase-dependent manner suggesting that it is involved in regulation of the operon. These findings allow us to define a new operon that regulates fundamental phenotypes in S. aureus such as biofilm development and virulence. Characterization of the msaABCR operon will allow us to investigate the mechanism of function of this operon and the role of the individual genes in regulation and interaction with its targets. This study identifies a new element in the complex regulatory circuits in S. aureus, and our findings may be therapeutically relevant.
International Nuclear Information System (INIS)
Choi, Jae Yeon; Jeong, Jae Min; Yoo, Byong Chul; Kim, Kyunggon; Kim, Youngsoo; Yang, Bo Yeun; Lee, Yun-Sang; Lee, Dong Soo; Chung, June-Key; Lee, Myung Chul
2011-01-01
Introduction: Although many sentinel lymph node (SLN) imaging agents labeled with 99mTc have been developed, no positron-emitting agent has been specifically designed for SLN imaging. Furthermore, the development of the beta probe and the requirement for better image resolution have increased the need for a positron-emitting SLN imaging agent. Here, we describe the development of a novel positron-emitting SLN imaging agent labeled with 68Ga. Methods: A mannosylated human serum albumin (MSA) was synthesized by conjugating α-D-mannopyranosylphenyl isothiocyanate to human serum albumin in sodium carbonate buffer (pH 9.5), and then 2-(p-isothiocyanatobenzyl)-1,4,7-triazacyclononane-1,4,7-triacetic acid was conjugated to synthesize NOTA-MSA. The numbers of mannose and NOTA units conjugated in NOTA-MSA were determined by matrix-assisted laser desorption ionization time-of-flight mass spectrometry. NOTA-MSA was labeled with 68Ga at room temperature. The stability of 68Ga-NOTA-MSA was checked in labeling medium at room temperature and in human serum at 37 °C. Biodistribution in normal ICR mice was investigated after tail vein injection, and micro-positron emission tomography (PET) images were obtained after injecting 68Ga-NOTA-MSA into a tail vein or a footpad. Results: The numbers of conjugated α-D-mannopyranosylphenyl isothiocyanate and 2-(p-isothiocyanatobenzyl)-1,4,7-triazacyclononane-1,4,7-triacetic acid units in NOTA-MSA were 10.6 and 6.6, respectively. The labeling efficiency of 68Ga-NOTA-MSA was greater than 99% at room temperature, and its stability was greater than 99% at 4 h. Biodistribution and micro-PET studies of 68Ga-NOTA-MSA showed high liver and spleen uptake after intravenous injection. 68Ga-NOTA-MSA injected into a footpad rapidly migrated to the lymph node. Conclusions: 68Ga-NOTA-MSA was successfully developed as a novel SLN imaging agent for PET. NOTA-MSA is easily labeled at high efficiency, and subcutaneously administered 68Ga-NOTA-MSA was…
Sahukhal, Gyan S; Batte, Justin L; Elasri, Mohamed O
2015-02-01
Staphylococcus aureus is an important human pathogen that causes nosocomial and community-acquired infections. One of the most important aspects of staphylococcal infections is biofilm development within the host, which renders the bacterium resistant to the host's immune response and antimicrobial agents. Biofilm development is very complex and involves several regulators that ensure cell survival on surfaces within the extracellular polymeric matrix. Previously, we identified the msaABCR operon as an additional positive regulator of biofilm formation. In this study, we define the regulatory pathway by which msaABCR controls biofilm formation. We demonstrate that the msaABCR operon is a negative regulator of proteases. The control of protease production mediates the processing of the major autolysin, Atl, and thus regulates the rate of autolysis. In the absence of the msaABCR operon, Atl is processed by proteases at a high rate, leading to increased cell death and a defect in biofilm maturation. We conclude that the msaABCR operon plays a key role in maintaining the balance between autolysis and growth within the staphylococcal biofilm. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Planning JWST NIRSpec MSA spectroscopy using NIRCam pre-images
Beck, Tracy L.; Ubeda, Leonardo; Kassin, Susan A.; Gilbert, Karoline; Karakla, Diane M.; Reid, I. N.; Blair, William P.; Keyes, Charles D.; Soderblom, D. R.; Peña-Guerrero, Maria A.
2016-07-01
The Near-Infrared Spectrograph (NIRSpec) is the workhorse spectrograph at 1-5 microns for the James Webb Space Telescope (JWST). A showcase observing mode of NIRSpec is multi-object spectroscopy with the Micro-Shutter Arrays (MSAs), which consist of a quarter million tiny configurable shutters that are 0.20″ × 0.46″ in size. The NIRSpec MSA shutters can be opened in adjacent rows to create flexible and positionable spectroscopy slits on prime science targets of interest. Because of the very small shutter width, the NIRSpec MSA spectral data quality will benefit significantly from accurate astrometric knowledge of the positions of planned science sources. Images acquired with the Hubble Space Telescope (HST) have the optimal relative astrometric accuracy for planning NIRSpec observations, 5-10 milli-arcseconds (mas). However, some science fields of interest might have no HST images, galactic fields can have moderate proper motions at the 5 mas level or greater, and extragalactic images with HST may have inadequate source information at NIRSpec wavelengths beyond 2 microns. Thus, optimal NIRSpec spectroscopy planning may require pre-imaging observations with the Near-Infrared Camera (NIRCam) on JWST to accurately establish source positions for alignment with the NIRSpec MSAs. We describe operational philosophies and programmatic considerations for acquiring JWST NIRCam pre-image observations for NIRSpec MSA spectroscopic planning within the same JWST observing cycle.
Directory of Open Access Journals (Sweden)
Okky Rizkia Yustian
2016-06-01
consumer’s demands, competition, and the requirement to improve profit. This research aims to identify and analyze the tools applied in PT MSA’s product development and its alternatives. Quality Function Deployment (QFD) is used to determine the product attributes for consumer demand, the company’s performance, the level of interest, technical parameters, process requirements, and the quality procedures needed to improve the products. This research is designed as a quantitative study. The QFD application in PT MSA yields 15 product attributes demanded by consumers; these attributes are interpreted through 9 technical parameters in the House of Quality (HOQ) for packaged fresh milk, and 15 product attributes for yoghurt. Based on the priority analysis, PT MSA made the business decision to develop its yoghurt product, which is expected to increase total sales.
Accurate reconstruction of insertion-deletion histories by statistical phylogenetics.
Directory of Open Access Journals (Sweden)
Oscar Westesson
The Multiple Sequence Alignment (MSA) is a computational abstraction that represents a partial summary either of indel history, or of structural similarity. Taking the former view (indel history), it is possible to use formal automata theory to generalize the phylogenetic likelihood framework for finite substitution models (Dayhoff's probability matrices and Felsenstein's pruning algorithm) to arbitrary-length sequences. In this paper, we report results of a simulation-based benchmark of several methods for reconstruction of indel history. The methods tested include a relatively new algorithm for statistical marginalization of MSAs that sums over a stochastically-sampled ensemble of the most probable evolutionary histories. For mammalian evolutionary parameters on several different trees, the single most likely history sampled by our algorithm appears less biased than histories reconstructed by other MSA methods. The algorithm can also be used for alignment-free inference, where the MSA is explicitly summed out of the analysis. As an illustration of our method, we discuss reconstruction of the evolutionary histories of human protein-coding genes.
Energy Technology Data Exchange (ETDEWEB)
Choi, Jae Yeon [Department of Nuclear Medicine, Institute of Radiation Medicine, Seoul National University College of Medicine, Seoul (Korea, Republic of); Cancer Research Institute, Seoul National University College of Medicine, Seoul (Korea, Republic of); Department of Radiation Applied Life Sciences, Seoul National University College of Medicine, Seoul (Korea, Republic of); Jeong, Jae Min, E-mail: jmjng@snu.ac.k [Department of Nuclear Medicine, Institute of Radiation Medicine, Seoul National University College of Medicine, Seoul (Korea, Republic of); Cancer Research Institute, Seoul National University College of Medicine, Seoul (Korea, Republic of); Department of Radiation Applied Life Sciences, Seoul National University College of Medicine, Seoul (Korea, Republic of); Yoo, Byong Chul [Research Institute, National Cancer Center, Goyang, Gyeonggi (Korea, Republic of); Kim, Kyunggon; Kim, Youngsoo [Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul (Korea, Republic of); Yang, Bo Yeun; Lee, Yun-Sang; Lee, Dong Soo [Department of Nuclear Medicine, Institute of Radiation Medicine, Seoul National University College of Medicine, Seoul (Korea, Republic of); Department of Radiation Applied Life Sciences, Seoul National University College of Medicine, Seoul (Korea, Republic of); Chung, June-Key [Department of Nuclear Medicine, Institute of Radiation Medicine, Seoul National University College of Medicine, Seoul (Korea, Republic of); Cancer Research Institute, Seoul National University College of Medicine, Seoul (Korea, Republic of); Department of Radiation Applied Life Sciences, Seoul National University College of Medicine, Seoul (Korea, Republic of); Lee, Myung Chul [Department of Nuclear Medicine, Institute of Radiation Medicine, Seoul National University College of Medicine, Seoul (Korea, Republic of)
2011-04-15
Introduction: Although many sentinel lymph node (SLN) imaging agents labeled with {sup 99m}Tc have been developed, no positron-emitting agent has been specifically designed for SLN imaging. Furthermore, the development of the beta probe and the requirement for better image resolution have increased the need for a positron-emitting SLN imaging agent. Here, we describe the development of a novel positron-emitting SLN imaging agent labeled with {sup 68}Ga. Methods: A mannosylated human serum albumin (MSA) was synthesized by conjugating {alpha}-D-mannopyranosylphenyl isothiocyanate to human serum albumin in sodium carbonate buffer (pH 9.5), and then 2-(p-isothiocyanatobenzyl)-1,4,7-triazacyclononane-1,4,7-triacetic acid was conjugated to synthesize NOTA-MSA. Numbers of mannose and NOTA units conjugated in NOTA-MSA were determined by matrix-assisted laser desorption ionization time-of-flight mass spectrometry. NOTA-MSA was labeled with {sup 68}Ga at room temperature. The stability of {sup 68}Ga-NOTA-MSA was checked in labeling medium at room temperature and in human serum at 37{sup o}C. Biodistribution in normal ICR mice was investigated after tail vein injection, and micro-positron emission tomography (PET) images were obtained after injecting {sup 68}Ga-NOTA-MSA into a tail vein or a footpad. Results: The numbers of conjugated {alpha}-D-mannopyranosylphenyl isothiocyanate and 2-(p-isothiocyanatobenzyl)-1,4,7-triazacyclononane-1,4,7-triacetic acid units in NOTA-MSA were 10.6 and 6.6, respectively. The labeling efficiency of {sup 68}Ga-NOTA-MSA was greater than 99% at room temperature, and its stability was greater than 99% at 4 h. Biodistribution and micro-PET studies of {sup 68}Ga-NOTA-MSA showed high liver and spleen uptakes after intravenous injection. {sup 68}Ga-NOTA-MSA injected into a footpad rapidly migrated to the lymph node. Conclusions: {sup 68}Ga-NOTA-MSA was successfully developed as a novel SLN imaging agent for PET. NOTA-MSA is easily labeled at high
Gender-specific association of MSA2756G with hypertension in ...
African Journals Online (AJOL)
MSA2756G) in the hypertensive patients in a northwest Chinese population. Methods: A total of 378 unrelated hypertensive patients attending Ningxia People's Hospital, Ningxia Province, China, were recruited for this study. We analyzed genotype by ...
DEFF Research Database (Denmark)
Petersen, Lene Maj; Holm, Dorte Koefoed; Gotfredsen, Charlotte Held
2015-01-01
further incorporates into three compounds that we name aculins A and B, and epi-aculin A, described here for the first time. Based on NMR data and bioinformatic studies we propose the structures of the compounds as well as a biosynthetic pathway leading to formation of aculins from 6-MSA....
Energy Technology Data Exchange (ETDEWEB)
Diantoro, Markus, E-mail: m-diantoror@yahoo.com; Fitrianingsih, Rina, E-mail: m-diantoror@yahoo.com; Mufti, Nandang, E-mail: m-diantoror@yahoo.com; Fuad, Abdulloh, E-mail: m-diantoror@yahoo.com [Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Negeri Malang (UM), Jl. Semarang No. 5 Malang 65145 (Indonesia)
2014-03-24
Nanosilver is currently one of the most common engineered nanomaterials and is used in many applications that lead to the release of silver nanoparticles and silver ions into aqueous systems. Nanosilver also possesses enhanced antimicrobial activity and bioavailability that may pose less environmental risk compared with other manufactured nanomaterials. Described in this research is the synthesis of silver nanoparticles produced by chemical reduction from silver nitrate (AgNO{sub 3}) solution. Sodium borohydride (NaBH{sub 4}) was used as a reducing agent, and mercaptosuccinic acid (MSA) as a stabilizer to prevent the nanoparticles from agglomerating. Two kinds of solvent were used: water and methanol. In a typical experiment, MSA was dissolved in methanol at a series of molarities (0.03 M, 0.06 M, 0.12 M, and 0.15 M), and the mixture was kept under vigorous stirring in an ice bath. A solution of 340 mg silver nitrate in 6.792 ml water was added. A freshly prepared aqueous solution of sodium borohydride (756.6 mL in 100 mL of water) was added dropwise. The solution was stirred for half an hour and allowed to settle in methanol. The samples obtained were then characterized by X-ray diffractometry, scanning electron microscopy, and transmission electron microscopy to determine the structure, morphology, and size of the silver nanoparticles. The silver nanoparticle diameters were about 24.3 nm (Ag@MSA 0.03 M), 20.4 nm (Ag@MSA 0.06 M), 16.8 nm (Ag@MSA 0.12 M), and 16.9 nm (Ag@MSA 0.15 M), as calculated by the Scherrer formula using the FWHM obtained from Gaussian fits. The phases and lattice parameter showed no significant change in cell volume with increasing stabilizer molarity. In contrast, the particle size decreases.
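The crystallite sizes above come from X-ray line broadening via the Scherrer formula, D = Kλ/(β cos θ). A minimal sketch of that calculation; the peak position and FWHM in the example are hypothetical values, not the paper's measurements.

```python
import math

def scherrer_diameter(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite diameter D = K*lambda/(beta*cos(theta)) in nm.

    two_theta_deg: peak position (2-theta, degrees); fwhm_deg: peak FWHM
    (degrees), e.g. from a Gaussian fit; wavelength defaults to Cu K-alpha.
    """
    beta = math.radians(fwhm_deg)            # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle is half of 2-theta
    return K * wavelength_nm / (beta * math.cos(theta))

# illustrative Ag(111)-like reflection near 2-theta = 38 degrees
d = scherrer_diameter(38.1, 0.35)
```

Broader peaks (larger FWHM) give smaller diameters, consistent with the size decrease reported at higher stabilizer molarity.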
2010-01-01
... Evansville-Henderson MSA IN-KY Fort Wayne MSA IN Fresno MSA CA Grand Rapids-Muskegon-Holland MSA MI... MSA LA New York-N. New Jersey-Long Island CMSA NY-NJ-CT-PA Norfolk-Virginia Beach-Newport News MSA VA...
Directory of Open Access Journals (Sweden)
Ana C. Henriques
2016-10-01
Methanesulfonic acid (MSA) is a relevant intermediate of the biogeochemical cycle of sulfur, and environmental microorganisms assume an important role in the mineralization of this compound. Several methylotrophic bacterial strains able to grow on MSA have been isolated from soil or marine water, and two conserved operons, msmABCD coding for MSA monooxygenase and msmEFGH coding for a transport system, have been repeatedly encountered in most of these strains. Homologous sequences have also been amplified directly from the environment or observed in marine metagenomic data, but these showed a base composition (G + C content) very different from their counterparts from cultivated bacteria. The aim of this study was to understand which microorganisms within the coastal surface oceanic microflora responded to MSA as a nutrient and how the community evolved in the early phases of an enrichment, by means of metagenome and gene-targeted amplicon sequencing. From the phylogenetic point of view, the community shifted significantly, with the disappearance of all signals related to the Archaea, the Pelagibacteraceae and phylum SAR406, and an increase in methylotroph-harboring taxa, accompanied by other groups so far not known to comprise methylotrophs, such as the Hyphomonadaceae. At the functional level, the abundance of several genes related to sulfur metabolism and methylotrophy increased during the enrichment, and the allelic distribution of gene msmA, diagnostic for MSA monooxygenase, altered considerably. Even more dramatic was the disappearance of the MSA import-related gene msmE, which suggests that alternative transporters must be present in the enriched community and illustrates the inadequacy of msmE as an ecofunctional marker for MSA degradation at sea.
Optimisation of timetable-based, stochastic transit assignment models based on MSA
DEFF Research Database (Denmark)
Nielsen, Otto Anker; Frederiksen, Rasmus Dyhr
2006-01-01
(CRM), such a large-scale transit assignment model was developed and estimated. The Stochastic User Equilibrium problem was solved by the Method of Successive Averages (MSA). However, the model suffered from very large calculation times. The paper focuses on how to optimise transit assignment models...
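The Method of Successive Averages referenced above can be sketched generically: each new auxiliary solution is averaged into the current one with a diminishing step size 1/k. The toy response function below stands in for a full network loading and is purely illustrative.

```python
def msa_fixed_point(response, x0, iterations=200):
    """Method of Successive Averages toward a fixed point of `response`.

    `response` maps a solution x to a new auxiliary solution y(x); the
    update x <- x + (y - x)/k makes x the running average of the y's,
    which under suitable conditions converges to a fixed point.
    """
    x = x0
    for k in range(1, iterations + 1):
        y = response(x)
        x = x + (y - x) / k   # diminishing step size 1/k
    return x

# toy "loading" map: the fixed point of y(x) = 10/(1+x) is x* = (sqrt(41)-1)/2
x_star = msa_fixed_point(lambda x: 10.0 / (1.0 + x), x0=1.0)
```

The slow 1/k step schedule is exactly what makes MSA robust but computationally expensive on large-scale assignment models, motivating the optimizations the paper discusses.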
Jha, Chaitanya Kumar; Patel, Baldev; Saraf, Meenu
2012-03-01
A novel Enterobacter cancerogenus MSA2 is a plant growth promoting gamma-proteobacterium that was isolated from the rhizosphere of Jatropha curcas, a potentially important biofuel feedstock plant. Based on phenotypic, physiological, biochemical and phylogenetic studies, strain MSA2 could be classified as a member of E. cancerogenus. However, comparisons of characteristics with other known species of the genus Enterobacter suggested that strain MSA2 could be a novel PGPB strain. In vitro studies were carried out on the plant growth promoting attributes of this culture. It tested positive for ACC (1-aminocyclopropane-1-carboxylic acid) deaminase production, phytase, phosphate solubilization, IAA (indole acetic acid) production, siderophore production, and ammonia production. The isolate was then used as an inoculant in a vegetative study of Jatropha curcas. Enterobacter cancerogenus MSA2 supplemented with 1% carboxymethylcellulose showed an overall plant growth promotion effect, with enhanced root length (124.14%), fresh root mass (81%), fresh shoot mass (120.02%), dry root mass (124%), dry shoot mass (105.54%), number of leaves (30.72%), chlorophyll content (50.41%), and biomass (87.20%) over control during the experimental observation period. The study ran for 120 days in triplicate, with data collected every 30 days.
Nagano, Daisuke; Sivakumar, Thillaiampalam; De De Macedo, Alane Caine Costa; Inpankaew, Tawin; Alhassan, Andy; Igarashi, Ikuo; Yokoyama, Naoaki
2013-11-01
In the present study, we screened blood DNA samples obtained from cattle bred in Brazil (n=164) and Ghana (n=80) for Babesia bovis using a diagnostic PCR assay and found prevalences of 14.6% and 46.3%, respectively. Subsequently, the genetic diversity of B. bovis in Thailand, Brazil and Ghana was analyzed, based on the DNA sequence of merozoite surface antigen-1 (MSA-1). In Thailand, MSA-1 sequences were relatively conserved and found in a single clade of the phylogram, while Brazilian MSA-1 sequences showed high genetic diversity and were dispersed across three different clades. In contrast, the sequences from Ghanaian samples were detected in two different clades, one of which contained only a single Ghanaian sequence. The identities among the MSA-1 sequences from Thailand, Brazil and Ghana were 99.0-100%, 57.5-99.4% and 60.3-100%, respectively, while the similarities among the deduced MSA-1 amino acid sequences within the respective countries were 98.4-100%, 59.4-99.7% and 58.7-100%, respectively. These observations suggested that the genetic diversity of B. bovis based on MSA-1 sequences was higher in Brazil and Ghana than in Thailand. The current data highlight the importance of conducting extensive studies on the genetic diversity of B. bovis before designing immune control strategies in each surveyed country.
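Sequence identities like those quoted above come from pairwise comparison of aligned sequences. A minimal sketch of a percent-identity calculation; the two aligned fragments are made up for illustration, not real MSA-1 sequences.

```python
def percent_identity(a, b):
    """Percent identity between two aligned, equal-length sequences,
    ignoring positions where either sequence has a gap ('-')."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    if not pairs:
        return 0.0
    matches = sum(x == y for x, y in pairs)
    return 100.0 * matches / len(pairs)

# made-up aligned fragments: one mismatch, one gapped column ignored
identity = percent_identity("ATGGCA-TACGT", "ATGCCATTACGT")
```

Repeating this over all within-country sequence pairs yields identity ranges such as the 57.5-99.4% reported for the Brazilian samples.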
Kiran, G Seghal; Thomas, T Anto; Selvin, Joseph
2010-06-15
Considering the need for potential biosurfactant producers and economical production processes using industrial waste, the present study aimed to develop solid-state culture (SSC) of a marine actinobacterium for biosurfactant production. A potential biosurfactant producer, Nocardiopsis lucentensis MSA04, was isolated from the marine sponge Dendrilla nigra. Among the substrates screened, wheat bran increased the production significantly (E(24) 25%), followed by oil seed cake and industrial wastes such as tannery pretreated sludge, treated molasses (distillery waste) and pretreated molasses. Enhanced biosurfactant production was achieved under SSC conditions using kerosene as carbon source, beef extract as nitrogen source and wheat bran as substrate. The maximum production of biosurfactant by MSA04 occurred at a C/N ratio of 0.5, indicating that the strain requires a higher amount of nitrogen source relative to carbon source. Kerosene and beef extract interactively increased production, and stable production was attained under the influence of both factors independently. A significant interactive influence of the secondary control factors copper sulfate and inoculum size was validated in response surface methodology-based experiments. The surface-active compound produced by MSA04 was characterized as a glycolipid with a hydrophobic non-polar hydrocarbon chain (nonanoic acid methyl ester) and a hydrophilic sugar, 3-acetyl-2,5-dimethylfuran. In conclusion, the strain N. lucentensis MSA04 is a potential source of glycolipid biosurfactant and could be used for the development of bioremediation processes in the marine environment.
Miller, Nick; Nath, Uma; Noble, Emma; Burn, David
2017-06-01
To determine whether perceptual speech measures distinguish people with Parkinson's disease (PD), multiple system atrophy with predominant parkinsonism (MSA-P) and progressive supranuclear palsy (PSP). Speech-language therapists blind to patient characteristics employed clinical rating scales to evaluate speech/voice in 24 people with clinically diagnosed PD, 17 with PSP and 9 with MSA-P, matched for disease duration (mean 4.9 years, standard deviation 2.2). No consistent intergroup differences appeared on specific speech/voice variables. People with PD were significantly less impaired on overall speech/voice severity. Analyses by severity suggested that further investigation of laryngeal, resonance and fluency changes may help characterize individual groups. MSA-P and PSP were distinguished from PD by the severity of speech/voice deterioration, but individual speech/voice parameters failed to consistently differentiate the groups.
Kim, Daniel; Griffin, Beth Ann; Kabeto, Mohammed; Escarce, José; Langa, Kenneth M; Shih, Regina A
2016-01-01
Much variation in individual-level cognitive function in late life remains unexplained, with little exploration of area-level/contextual factors to date. Income inequality is a contextual factor that may plausibly influence cognitive function. In a nationally-representative cohort of older Americans from the Health and Retirement Study, we examined state- and metropolitan statistical area (MSA)-level income inequality as predictors of individual-level cognitive function measured by the 27-point Telephone Interview for Cognitive Status (TICS-m) scale. We modeled latency periods of 8-20 years, and controlled for state-/metropolitan statistical area (MSA)-level and individual-level factors. Higher MSA-level income inequality predicted lower cognitive function 16-18 years later. Using a 16-year lag, living in a MSA in the highest income inequality quartile predicted a 0.9-point lower TICS-m score (β = -0.86; 95% CI = -1.41, -0.31), roughly equivalent to the magnitude associated with five years of aging. We observed no associations for state-level income inequality. The findings were robust to sensitivity analyses using propensity score methods. Among older Americans, MSA-level income inequality appears to influence cognitive function nearly two decades later. Policies reducing income inequality levels within cities may help address the growing burden of declining cognitive function among older populations within the United States.
Improvements in the sensibility of MSA-GA tool using COFFEE objective function
International Nuclear Information System (INIS)
Amorim, A R; Zafalon, G F D; Neves, L A; Valêncio, C R; Machado, J M; Pinto, A R
2015-01-01
Sequence alignment is one of the most important tasks in Bioinformatics, playing a central role in sequence analysis. There are many strategies to perform sequence alignment, ranging from those that use deterministic algorithms, such as dynamic programming, to those that use heuristic algorithms, such as Progressive alignment, Ant Colony Optimization (ACO), Genetic Algorithms (GA) and Simulated Annealing (SA), among others. In this work, we have implemented the objective function COFFEE in the MSA-GA tool, in substitution of the Weighted Sum-of-Pairs (WSP), to improve the final results. In the tests, we verified that the approach using the COFFEE function achieved better results in 81% of the lower-similarity alignments when compared with the WSP approach. Moreover, even in the tests with more similar sets, the approach using COFFEE was better 43% of the time.
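For context, the plain sum-of-pairs score that WSP and COFFEE refine can be sketched as follows; the match/mismatch/gap values are arbitrary illustrative choices, and the real WSP and COFFEE objectives additionally weight each sequence pair (by phylogenetic weight and by consistency with pairwise alignments, respectively).

```python
from itertools import combinations

def sum_of_pairs(msa, match=1, mismatch=-1, gap=-2):
    """Unweighted sum-of-pairs score of a multiple alignment: every column
    contributes the summed pairwise score over all pairs of rows."""
    def pair(x, y):
        if x == "-" and y == "-":
            return 0          # gap-gap pairs are conventionally ignored
        if x == "-" or y == "-":
            return gap
        return match if x == y else mismatch

    length = len(msa[0])
    assert all(len(row) == length for row in msa), "rows must have equal length"
    return sum(pair(a[i], b[i])
               for a, b in combinations(msa, 2)
               for i in range(length))

# three aligned toy rows: three matching columns, two gapped columns
score = sum_of_pairs(["ACGT-", "ACG-T", "ACGTT"])
```

In a GA-based aligner such as MSA-GA, a function like this (or COFFEE in its place) is what each candidate alignment is evaluated against during selection.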
Moretti, Marino; Grunau, Alexander; Minerdi, Daniela; Gehrig, Peter; Roschitzki, Bernd; Eberl, Leo; Garibaldi, Angelo; Gullino, Maria Lodovica; Riedel, Kathrin
2010-09-01
Fusarium oxysporum is an important plant pathogen that causes severe damage to many economically important crop species. Various microorganisms have been shown to inhibit this soil-borne plant pathogen, including non-pathogenic F. oxysporum strains. In this study, F. oxysporum wild-type (WT) MSA 35, a biocontrol multispecies consortium that consists of a fungus and numerous rhizobacteria mainly belonging to the gamma-proteobacteria, was analyzed by two complementary metaproteomic approaches (2-DE combined with MALDI-Tof/Tof MS, and 1-D PAGE combined with LC-ESI-MS/MS) to identify fungal or bacterial factors potentially involved in antagonistic or synergistic interactions between the consortium members. Moreover, the proteome profiles of F. oxysporum WT MSA 35 and its cured counterpart CU MSA 35 (WT treated with antibiotics) were compared to unravel the bacterial impact on consortium functioning. Our study presents the first proteome mapping of an antagonistic F. oxysporum strain and proposes candidate proteins that might play an important role in the biocontrol activity and the close interrelationship between the fungus and its bacterial partners.
2011-07-21
... during the course of the FY. Management uncertainty for the LAGC IFQ fleet is considered very low because... incorporate ACT into the LAGC IFQ fleet allocation, but chose not to apply any management uncertainty buffer... Atmospheric Administration 50 CFR Part 648 Magnuson-Stevens Fishery Conservation and Management Act (MSA...
Okuno, Tomofumi; Miura, Kiyoshi; Sakazaki, Fumitoshi; Nakamuro, Katsuhiko; Ueno, Hitoshi
2012-01-01
The purpose of this study was to clarify the cell growth inhibitory mechanism of human breast cancer cells caused by selenium (Se) compounds. In the presence of 17β-estradiol (E(2)) at physiological concentrations, growth of estrogen receptor α (ERα)-positive T47D cells was markedly inhibited by 1 × 10(-6) mol/L methylseleninic acid (MSA) with no Se-related toxicity. Under conditions where cell growth was inhibited, MSA decreased ERα mRNA levels and subsequent protein levels, further decreasing expression of estrogen-responsive finger protein (Efp), which is a target gene product of ERα and promotes G2/M progression of the cell cycle. Therefore, the decline in Efp expression is presumed to be involved in G2 arrest. Coincidentally, the antioxidative thioredoxin/thioredoxin reductase (Trx/TrxR) system in cells was enhanced by the synergistic action of E(2) and MSA. It has been reported that ROS-induced oxidative stress enhances ERα expression. E(2) increased production of intracellular ROS in T47D cells. Meanwhile, MSA significantly decreased E(2)-induced ROS accumulation. From these results, activation of the Trx/TrxR system induced by the coexistence of MSA and E(2) suppresses oxidative stress and decreases expression of ERα, finally inducing the growth arrest of T47D cells through disruption of ERα signaling.
Directory of Open Access Journals (Sweden)
H. Bardouki
2003-01-01
Full Text Available A detailed study of the levels, the temporal and diurnal variability of the main compounds involved in the biogenic sulfur cycle was carried out in Crete (Eastern Mediterranean during the Mediterranean Intensive Oxidant Study (MINOS field experiment in July-August 2001. Intensive measurements of gaseous dimethylsulfide (DMS, dimethylsulfoxide (DMSO, sulfur dioxide (SO2, sulfuric (H2SO4 and methanesulfonic acids (MSA and particulate sulfate (SO42- and methanesulfonate (MS- have been performed during the campaign. Dimethylsulfide (DMS levels ranged from 2.9 to 136 pmol·mol-1 (mean value of 21.7 pmol·mol-1 and showed a clear diurnal variation with daytime maximum. During nighttime DMS levels fall close or below the detection limit of 2 pmol·mol-1. Concurrent measurements of OH and NO3 radicals during the campaign indicate that NO3 levels can explain most of the observed diurnal variation of DMS. Dimethylsulfoxide (DMSO ranged between 0.02 and 10.1 pmol·mol-1 (mean value of 1.7 pmol·mol-1 and presents a diurnal variation similar to that of DMS. SO2 levels ranged from 220 to 2970 pmol·mol-1 (mean value of 1030 pmol·mol-1, while nss-SO42- and MS- ranged from 330 to 7100 pmol·mol-1, (mean value of 1440 pmol·mol-1 and 1.1 to 37.5 pmol·mol-1 (mean value of 11.5 pmol·mol-1 respectively. Of particular interest are the measurements of gaseous MSA and H2SO4. MSA ranged from below the detection limit (3x104 to 3.7x107 molecules cm-3, whereas H2SO4 ranged between 1x105 and 9.0x107 molecules cm-3. The measured H2SO4 maxima are among the highest reported in literature and can be attributed to high insolation, absence of precipitation and increased SO2 levels in the area. From the concurrent SO2, OH, and H2SO4 measurements a sticking coefficient of 0.52±0.28 was calculated for H2SO4. From the concurrent MSA, OH, and DMS measurements the yield of gaseous MSA from the OH-initiated oxidation of DMS was calculated to range between 0.1-0.4%. This low MSA
Tsukada, Kimiko
2012-01-01
This study aimed to compare the perception of short vs. long vowel contrasts in Japanese and Modern Standard Arabic (MSA) by four groups of listeners differing in their linguistic backgrounds: native Arabic (NA), native Japanese (NJ), non-native Japanese (NNJ) and Australian English (OZ) speakers. The NNJ and OZ groups shared the first language…
Statistical data analysis using SAS intermediate statistical methods
Marasinghe, Mervyn G
2018-01-01
The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...
Directory of Open Access Journals (Sweden)
Caitlin L. Banks
2017-08-01
Muscle synergy analysis (MSA) is a mathematical technique that reduces the dimensionality of electromyographic (EMG) data. Used increasingly in biomechanics research, MSA requires methodological choices at each stage of the analysis. Differences in methodological steps affect the overall outcome, making it difficult to compare results across studies. We applied MSA to EMG data collected from individuals post-stroke identified as either responders (RES) or non-responders (nRES) on the basis of a critical post-treatment increase in walking speed. Importantly, no clinical or functional indicators identified differences between the cohorts of RES and nRES at baseline. For this exploratory study, we selected the five highest RES and five lowest nRES available from a larger sample. Our goal was to assess how the methodological choices made before, during, and after MSA affect the ability to differentiate two groups with intrinsic physiologic differences based on MSA results. We investigated 30 variations in MSA methodology to determine which choices allowed differentiation of RES from nRES at baseline. Trial-to-trial variability in time-independent synergy vectors (SVs) and time-varying neural commands (NCs) was measured as a function of: (1) the number of synergies computed; (2) the EMG normalization method before MSA; (3) whether SVs were held constant across trials or allowed to vary during MSA; and (4) the synergy analysis output normalization method after MSA. MSA methodology had a strong effect on our ability to differentiate RES from nRES at baseline. Across all 10 individuals and MSA variations, two synergies were needed to reach an average of 90% variance accounted for (VAF). Based on effect sizes, differences in SV and NC variability between groups were greatest using two synergies with SVs that varied from trial to trial. Differences in SV variability were clearest using unit magnitude per trial EMG normalization, while NC variability was less sensitive to EMG
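Synergy vectors and neural commands are commonly extracted with non-negative matrix factorization, although the abstract does not state which factorization this study used. A minimal sketch on synthetic data, assuming the standard multiplicative-update NMF, with the VAF criterion mentioned above:

```python
import numpy as np

def extract_synergies(emg, n_synergies, iters=2000, seed=0):
    """Factor a non-negative EMG matrix (muscles x time) as W @ H using
    multiplicative-update NMF: columns of W are synergy vectors (SVs),
    rows of H are time-varying neural commands (NCs)."""
    rng = np.random.default_rng(seed)
    n_muscles, n_samples = emg.shape
    W = rng.random((n_muscles, n_synergies)) + 1e-9
    H = rng.random((n_synergies, n_samples)) + 1e-9
    for _ in range(iters):
        H *= (W.T @ emg) / (W.T @ W @ H + 1e-12)   # update commands
        W *= (emg @ H.T) / (W @ H @ H.T + 1e-12)   # update synergy vectors
    return W, H

def vaf(emg, W, H):
    """Variance accounted for by the reconstruction W @ H."""
    return 1.0 - np.sum((emg - W @ H) ** 2) / np.sum(emg ** 2)

# synthetic 8-muscle recording driven by two underlying synergies
rng = np.random.default_rng(1)
emg = rng.random((8, 2)) @ rng.random((2, 300))
W, H = extract_synergies(emg, n_synergies=2)
```

Sweeping `n_synergies` and keeping the smallest number that exceeds a VAF threshold (e.g. 90%) is one common version of the "number of synergies computed" choice discussed above.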
BOREAS TGB-12 Soil Carbon and Flux Data of NSA-MSA in Raster Format
Hall, Forrest G. (Editor); Knapp, David E. (Editor); Rapalee, Gloria; Davidson, Eric; Harden, Jennifer W.; Trumbore, Susan E.; Veldhuis, Hugo
2000-01-01
The BOREAS TGB-12 team made measurements of soil carbon inventories, carbon concentration in soil gases, and rates of soil respiration at several sites. This data set provides: (1) estimates of soil carbon stocks by horizon based on soil survey data and analyses of data from individual soil profiles; (2) estimates of soil carbon fluxes based on stocks, fire history, drainage, and soil carbon inputs and decomposition constants based on field work using radiocarbon analyses; (3) fire history data estimating age ranges of time since last fire; and (4) a raster image and an associated soils table file from which area-weighted maps of soil carbon and fluxes and fire history may be generated. This data set was created from raster files, soil polygon data files, and detailed lab analysis of soils data that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. Also used were soils data from Susan Trumbore and Jennifer Harden (BOREAS TGB-12). The binary raster file covers a 733-km2 area within the NSA-MSA.
An Instructional Module on Mokken Scale Analysis
Wind, Stefanie A.
2017-01-01
Mokken scale analysis (MSA) is a probabilistic-nonparametric approach to item response theory (IRT) that can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore…
Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid
Energy Technology Data Exchange (ETDEWEB)
Liu, Lu [University of Tennessee, Knoxville (UTK); Albright, Austin P [ORNL; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK); Guo, Jiandong [University of Tennessee, Knoxville (UTK); Qi, Hairong [University of Tennessee, Knoxville (UTK); Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)
2017-01-01
Wide-area measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede effective and efficient data analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. In contrast, the higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing data storage requirements.
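The mean-shift grouping step can be sketched in one dimension, with each decomposed component represented by a single scalar feature (say, its mean frequency); the frequency values and bandwidth below are invented for illustration, not taken from the paper.

```python
import numpy as np

def mean_shift_1d(points, bandwidth, iters=100):
    """Minimal 1-D mean shift: repeatedly move every point to the
    Gaussian-weighted mean of the original sample until the points
    collect at the modes of the underlying density."""
    pts = np.asarray(points, dtype=float)
    x = pts.copy()
    for _ in range(iters):
        # Gaussian kernel weights between current positions and the sample
        w = np.exp(-((x[:, None] - pts[None, :]) ** 2) / (2 * bandwidth ** 2))
        x = (w @ pts) / w.sum(axis=1)
    return x

# two made-up bands of component mean frequencies (Hz):
# a slow trend/inter-area band and a faster noise band
freqs = [0.2, 0.25, 0.3, 0.33, 5.1, 5.3, 5.6]
modes = mean_shift_1d(freqs, bandwidth=0.5)
```

Components whose points converge to the same mode form one group, which can then be compressed together, as in the low-frequency grouping described above.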
Comparison of Cerebral Glucose Metabolism between Possible and Probable Multiple System Atrophy
Directory of Open Access Journals (Sweden)
Kyum-Yil Kwon
2009-05-01
Background: To investigate the relationship between presenting clinical manifestations and imaging features of multisystem neuronal dysfunction in MSA patients, using 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET). Methods: We studied 50 consecutive MSA patients with characteristic brain MRI findings of MSA, including 34 patients with early MSA-parkinsonian (MSA-P) and 16 with early MSA-cerebellar (MSA-C) subtypes. The cerebral glucose metabolism of all MSA patients was evaluated in comparison with 25 age-matched controls. 18F-FDG PET results were assessed by Statistical Parametric Mapping (SPM) analysis and the regions of interest (ROI) method. Results: The mean time from disease onset to 18F-FDG PET was 25.9±13.0 months in the 34 MSA-P patients and 20.1±11.1 months in the 16 MSA-C patients. Glucose metabolism of the putamen showed a greater decrease in possible MSA-P than in probable MSA-P (p=0.031). Although the Unified Multiple System Atrophy Rating Scale (UMSARS) score did not differ between possible MSA-P and probable MSA-P, the subscores of rigidity (p=0.04) and bradykinesia (p=0.008) were significantly higher in possible MSA-P than in probable MSA-P. Possible MSA-C showed a greater decrease in glucose metabolism of the cerebellum than probable MSA-C (p=0.016). Conclusions: Our results may suggest that the early neuropathological pattern of possible MSA, with a predilection for the striatonigral or olivopontocerebellar system, differs from that of probable MSA, which has prominent involvement of the autonomic nervous system in addition to the striatonigral or olivopontocerebellar system.
Beginning statistics with data analysis
Mosteller, Frederick; Rourke, Robert EK
2013-01-01
This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.
van Himbergen-Gondwe, P.M.; Krol, M.; Gieskes, W.W C; Klaassen, W.; de Baar, H.J.W.
2003-01-01
The contribution of ocean-derived DMS to the atmospheric burdens of a variety of sulphur compounds (DMS, MSA, SO2, and nss SO4=) is quantified from season to season. Such quantification, especially for nss SO4= (the climate-relevant product of DMS oxidation), is essential for the quantification
Different metabolic patterns analysis of Parkinsonism on the 18F-FDG PET
International Nuclear Information System (INIS)
Juh, Rahyeong; Kim, Jaesung; Moon, Daehyuk; Choe, Boyoung; Suh, Tasuk
2004-01-01
Idiopathic Parkinson's disease (IPD), progressive supranuclear palsy (PSP) and multiple system atrophy (MSA) are the most common movement disorders associated with neurodegenerative disease. A clinical differential diagnosis of IPD and atypical Parkinsonian disorders, such as MSA and PSP, is often complicated by the presence of symptoms common to both groups. Since Parkinsonism has a different pathophysiology in the cortical and subcortical brain structures, assessing the regional cerebral glucose metabolism may assist in making a differential diagnosis of Parkinsonism. The 18F-FDG PET images of IPD, MSA and PSP patients were assessed using statistical parametric mapping (SPM) in order to determine useful metabolic patterns. Twenty-four patients with Parkinsonism were enrolled in this study: eight patients (mean age 67.9±10.7 years; M/F: 3/5) with IPD, nine patients (57.9±9.2 years; M/F: 4/5) with MSA and seven patients (67.6±4.8 years; M/F: 3/4) with PSP. All patients with Parkinsonism and 22 age-matched normal controls underwent 18F-FDG PET (after 370 MBq of 18F-FDG). The three groups and the individual IPD, MSA and PSP patients were compared with a normal control group using a two-sided t-test in SPM (uncorrected P, cluster extent 100 voxels). The IPD, MSA and PSP groups showed significant hypometabolism in the cerebral neocortex compared to the normal control group. The MSA group showed significant hypometabolism in the putamen, pons and cerebellum compared to the normal control and IPD groups. In addition, the PSP group showed significant hypometabolism in the caudate nucleus, thalamus, midbrain and cingulate gyrus compared to the normal control, IPD and MSA groups. In conclusion, an assessment of 18F-FDG PET images using SPM may be a useful adjunct to a clinical examination when making a differential diagnosis of Parkinsonism.
Research design and statistical analysis
Myers, Jerome L; Lorch Jr, Robert F
2013-01-01
Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data. The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations. Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions. Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations…
BOREAS TE-20 Soils Data Over the NSA-MSA and Tower Sites in Raster Format
Hall, Forrest G. (Editor); Veldhuis, Hugo; Knapp, David
2000-01-01
The BOREAS TE-20 team collected several data sets for use in developing and testing models of forest ecosystem dynamics. This data set was gridded from vector layers of soil maps that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. The vector layers were gridded into raster files that cover the NSA-MSA and tower sites. The data are stored in binary, image format files. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
BOREAS TE-20 Soils Data Over the NSA-MSA and Tower Sites in Vector Format
Hall, Forrest G. (Editor); Veldhuis, Hugo; Knapp, David
2000-01-01
The BOREAS TE-20 team collected several data sets for use in developing and testing models of forest ecosystem dynamics. This data set contains vector layers of soil maps that were received from Dr. Hugo Veldhuis, who did the original mapping in the field during 1994. The vector layers were converted to ARC/INFO EXPORT files. These data cover 1-kilometer diameters around each of the NSA tower sites, and another layer covers the NSA-MSA. The data files are available on a CD-ROM (see document number 20010000884), or from the Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC).
Statistical data analysis handbook
National Research Council Canada - National Science Library
Wall, Francis J
1986-01-01
It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...
The Retina in Multiple System Atrophy: Systematic Review and Meta-Analysis
Directory of Open Access Journals (Sweden)
Carlos E. Mendoza-Santiesteban
2017-05-01
Background: Multiple system atrophy (MSA) is a rare, adult-onset, rapidly progressive fatal synucleinopathy that primarily affects oligodendroglial cells in the brain. Patients with MSA only rarely have visual complaints, but recent studies of the retina using optical coherence tomography (OCT) showed atrophy of the peripapillary retinal nerve fiber layer (RNFL) and, to a lesser extent, the macular ganglion cell layer (GCL) complex. Methods: We performed a literature review and meta-analysis according to the preferred reporting items for systematic reviews and meta-analyses (PRISMA) guidelines for studies published before January 2017, identified through PubMed and Google Scholar databases, which reported OCT-related outcomes in patients with MSA and controls. A random-effects model was constructed. Results: The meta-analysis search strategy yielded 15 articles, of which 7 met the inclusion criteria. The pooled difference in the average thickness of the RNFL was −5.48 μm (95% CI, −6.23 to −4.73; p < 0.0001), indicating significant thinning in patients with MSA. The pooled results showed significant thinning in all the specific RNFL quadrants except the temporal quadrant, where the thickness in MSA and controls was similar [pooled difference of 1.11 μm (95% CI, −4.03 to 6.26); p = 0.67]. This pattern of retinal damage suggests that MSA patients have preferential loss of retinal ganglion cells projecting to the magnocellular pathway (M-cells), which are mainly located in the peripheral retina and are not essential for visual acuity. Visual acuity, on the other hand, relies mostly on macular ganglion cells projecting to the parvocellular pathway (P-cells) through the temporal portion of the RNFL, which are relatively spared in MSA patients. Conclusion: The retinal damage in patients with MSA differs from that observed in patients with Parkinson disease (PD). Patients with MSA have more relative preservation of the temporal sector of the RNFL and less…
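The random-effects pooling reported above can be sketched with the DerSimonian-Laird estimator; the seven effect sizes and variances below are hypothetical stand-ins for the included studies, not the published values:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate."""
    effects = np.asarray(effects, float)
    variances = np.asarray(variances, float)
    w = 1.0 / variances                            # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)      # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1.0 / (variances + tau2)                # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical RNFL mean differences (micrometres) and variances, 7 studies.
effects = [-6.1, -4.8, -5.9, -5.2, -6.5, -4.3, -5.6]
variances = [0.9, 1.2, 0.8, 1.5, 1.0, 1.3, 0.7]
pooled, ci = dersimonian_laird(effects, variances)
print(f"pooled difference {pooled:.2f} um, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```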
Directory of Open Access Journals (Sweden)
Amanda Worker
Although often clinically indistinguishable in the early stages, Parkinson's disease (PD), multiple system atrophy (MSA) and progressive supranuclear palsy (PSP) have distinct neuropathological changes. The aim of the current study was to identify white matter tract neurodegeneration characteristic of each of the three syndromes. Tract-based spatial statistics (TBSS) was used to perform a whole-brain automated analysis of diffusion tensor imaging (DTI) data to compare differences in fractional anisotropy (FA) and mean diffusivity (MD) between the three clinical groups and healthy control subjects. Further analyses were conducted to assess the relationship between these putative indices of white matter microstructure and clinical measures of disease severity and symptoms. In PSP, relative to controls, changes in DTI indices consistent with white matter tract degeneration were identified in the corpus callosum, corona radiata, corticospinal tract, superior longitudinal fasciculus, anterior thalamic radiation, superior cerebellar peduncle, medial lemniscus, retrolenticular and anterior limb of the internal capsule, cerebral peduncle and external capsule bilaterally, as well as the left posterior limb of the internal capsule and the right posterior thalamic radiation. MSA patients also displayed differences in the body of the corpus callosum, corticospinal tract, cerebellar peduncle, medial lemniscus, anterior and superior corona radiata, posterior limb of the internal capsule, external capsule and cerebral peduncle bilaterally, as well as the left anterior limb of the internal capsule and the left anterior thalamic radiation. No significant white matter abnormalities were observed in the PD group. Across groups, MD correlated positively with disease severity in all major white matter tracts. These results show widespread changes in white matter tracts in both PSP and MSA patients, even at a mid-point in the disease process, which are not found in patients…
DEFF Research Database (Denmark)
Ambika, C.; Karuppasamy, K.; Vikraman, Dhanasekaran
2018-01-01
…-MSA complexes were plausibly investigated for the first time. The complexation behaviors of both plasticized and unplasticized polymer electrolyte systems were confirmed with the aid of Fourier transform infrared spectroscopy. The conductivity values were found to be enhanced due to the addition of DMC…
Statistical Power in Meta-Analysis
Liu, Jin
2015-01-01
Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
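For the two-sample mean-difference test discussed above, analytical power under a normal approximation can be sketched as follows (the standardized effect size and group sizes are illustrative choices, not values from the study):

```python
from math import sqrt
from statistics import NormalDist

def power_two_sample(d, n1, n2, alpha=0.05):
    """Approximate power of a two-sided z-test for a standardized
    mean difference d with group sizes n1 and n2."""
    n = NormalDist()
    z_crit = n.inv_cdf(1 - alpha / 2)
    ncp = d / sqrt(1 / n1 + 1 / n2)   # noncentrality under the alternative
    # Probability of rejecting in either tail when the alternative holds.
    return n.cdf(ncp - z_crit) + n.cdf(-ncp - z_crit)

# A medium effect (d = 0.5) with 64 subjects per group, a textbook case.
print(round(power_two_sample(0.5, 64, 64), 3))
```

The exact t-test power is slightly lower for small samples, where the noncentral t distribution replaces the normal approximation used here.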
An Analysis of Bid Evaluation Procedures of Contemporary Models for Procurement in Pakistan
2016-12-01
…protection program are excluded. Petroleum Pipeline Corporation (also referred to as "BOTAŞ" in Turkish) is allowed to buy liquefied natural gas… The basic aim of the Materiel Solution Analysis (MSA) phase is to provide the potential solutions… Cost Plus Award Fee (CPAF) and Cost Plus Incentive Fee (CPIF). Cost-type contracts are most appropriate for the Materiel Solution Analysis (MSA) and Technology…
Rweb:Web-based Statistical Analysis
Directory of Open Access Journals (Sweden)
Jeff Banfield
1999-03-01
Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well-known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW-accessible data sets, so you may run Rweb on your own data. Rweb can provide a four-window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support Javascript. There is also a set of point-and-click modules under development for use in introductory statistics courses.
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer's disease and shape changes of the corpus… …and mind. Statistics represents a quintessential part of such investigations as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge… …efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications…
Statistical methods for astronomical data analysis
Chattopadhyay, Asis Kumar
2014-01-01
This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...
Fuzzy Gauge Capability (Cg and Cgk) through Buckley Approach
Seyed Habib A. Rahmati; Mohsen Sadegh Amalnick
2015-01-01
Different terms of Statistical Process Control (SPC) have been sketched in the fuzzy environment. However, Measurement System Analysis (MSA), a main branch of SPC, is rarely investigated in the fuzzy domain. This procedure assesses the suitability of the data to be used in later stages or decisions of the SPC. Therefore, this research focuses on some important measures of MSA and, through a new method, introduces those measures in a fuzzy environment. In this method, which works b...
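As background to the fuzzified measures, the classical (crisp) gauge capability indices Cg and Cgk can be sketched as below; the k = 0.2 ("20% of tolerance") convention and all measurement values are assumptions for illustration, not the paper's fuzzy formulation:

```python
import statistics

def cg_cgk(measurements, reference, tolerance, k=0.2):
    """Classical (crisp) gauge capability indices.

    Cg  = (k * tolerance) / (6 * s_g)              gauge spread only
    Cgk = (k/2 * tolerance - |bias|) / (3 * s_g)   gauge spread plus bias
    """
    s_g = statistics.stdev(measurements)                  # gauge repeatability
    bias = abs(statistics.fmean(measurements) - reference)
    cg = (k * tolerance) / (6 * s_g)
    cgk = (k / 2 * tolerance - bias) / (3 * s_g)
    return cg, cgk

# 25 repeated measurements of a reference part (illustrative numbers).
data = [10.002, 9.998, 10.001, 10.000, 9.999, 10.003, 9.997, 10.001,
        10.000, 9.999, 10.002, 10.001, 9.998, 10.000, 10.001, 9.999,
        10.002, 10.000, 9.998, 10.001, 10.000, 9.999, 10.002, 10.001,
        10.000]
cg, cgk = cg_cgk(data, reference=10.000, tolerance=0.1)
print(f"Cg = {cg:.2f}, Cgk = {cgk:.2f}")
```

Values of Cg and Cgk above about 1.33 are conventionally taken to mean the gauge is capable; the fuzzy approach in the paper replaces the crisp inputs with fuzzy numbers.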
New Developments in Mokken Scale Analysis in R
Directory of Open Access Journals (Sweden)
L. Andries van der Ark
2012-05-01
Mokken (1971) developed a scaling procedure for both dichotomous and polytomous items that was later coined Mokken scale analysis (MSA). MSA has been developed ever since, and the developments until 2000 have been implemented in the software package MSP (Molenaar and Sijtsma 2000) and the R package mokken (Van der Ark 2007). This paper describes the new developments in MSA since 2000 that have been implemented in mokken since its first release in 2007. These new developments pertain to invariant item ordering, a new automated item selection procedure based on a genetic algorithm, inclusion of reliability coefficients, and the computation of standard errors for the scalability coefficients. We demonstrate all new applications using data obtained with a transitive reasoning test and a personality test.
A Statistical Toolkit for Data Analysis
International Nuclear Information System (INIS)
Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.
2006-01-01
The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of goodness-of-fit tests, from chi-squared to Kolmogorov-Smirnov, to less known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper and Tiku. Thanks to the component-based design and the usage of the standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. The Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms and the computational features of the Toolkit, and we describe the code validation.
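For illustration, the same classes of goodness-of-fit tests are available in SciPy, which can stand in for the Toolkit's testing component in a quick sketch (the sample here is synthetic, and SciPy is used as an assumed substitute, not the Toolkit itself):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.normal(loc=0.0, scale=1.0, size=500)

# Kolmogorov-Smirnov test against the standard normal CDF.
ks = stats.kstest(sample, "norm")

# Anderson-Darling weights the distribution tails more heavily and is
# generally more powerful against tail departures than KS.
ad = stats.anderson(sample, dist="norm")

print(f"KS statistic = {ks.statistic:.3f}, p = {ks.pvalue:.3f}")
print(f"AD statistic = {ad.statistic:.3f}, "
      f"5% critical value = {ad.critical_values[2]}")
```

The Anderson-Darling routine reports critical values rather than a p-value: the hypothesis of normality is rejected at the 5% level when the statistic exceeds `critical_values[2]`.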
Statistical considerations on safety analysis
International Nuclear Information System (INIS)
Pal, L.; Makai, M.
2004-01-01
The authors have investigated the statistical methods applied to safety analysis of nuclear reactors and arrived at alarming conclusions: a series of calculations with the generally appreciated safety code ATHLET were carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, we came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Another formerly unnoticed conclusion of the previous ATHLET calculations is that certain innocent-looking parameters (like the wall roughness factor, the number of bubbles per unit volume, or the number of droplets per unit volume) can considerably influence such output parameters as water levels. The authors are concerned with the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, the authors suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined because measured data have an error, and calculated data are often obtained from a more or less accurate model. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all the possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which the…
Statistical shape analysis with applications in R
Dryden, Ian L
2016-01-01
A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly-regarded `Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...
Spatial analysis statistics, visualization, and computational methods
Oyana, Tonny J
2015-01-01
An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...
Application of descriptive statistics in analysis of experimental data
Mirilović Milorad; Pejin Ivana
2008-01-01
Statistics today represent a group of scientific methods for the quantitative and qualitative investigation of variations in mass appearances. In fact, statistics present a group of methods that are used for the accumulation, analysis, presentation and interpretation of data necessary for reaching certain conclusions. Statistical analysis is divided into descriptive statistical analysis and inferential statistics. The values which represent the results of an experiment, and which are the subj...
Higo, R; Tayama, N; Watanabe, T; Nitou, T; Takeuchi, S
2003-07-01
Vocal fold motion impairment (VFMI), especially vocal fold abductor paralysis, is frequently seen in multiple system atrophy (MSA). Since the regulation of laryngeal function is closely related to swallowing, swallowing function is considered to be more involved in MSA patients with VFMI than in patients without VFMI. However, the relationship between dysphagia and VFMI in MSA patients has not been systematically explored. To elucidate the relationship between VFMI and dysphagia in MSA, we evaluated the swallowing function of 36 MSA patients with and without VFMI by videofluoroscopy, and investigated the relationship between VFMI and pharyngeal swallowing function. VFMI was found in 17 patients (47.2%). Patients with VFMI had more advanced disease. Although there was a tendency for bolus stasis at the pyriform sinus and the upper oesophageal sphincter opening to be more involved in patients with VFMI, statistical analysis did not show significant differences in swallowing function between MSA patients with and without VFMI. In contrast, patients who underwent a tracheotomy ultimately required tube feeding or a laryngectomy. Appearance of VFMI is a sign of disease progression but does not necessarily mean patients should change their means of nutrition. However, MSA patients who need a tracheotomy may have advanced to a high-risk group for dysphagia. Appropriate evaluation and treatment of VFMI and dysphagia are required to maintain patients' quality of life in MSA.
Statistical Analysis of Research Data | Center for Cancer Research
Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data. The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.
Statistical analysis with Excel for dummies
Schmuller, Joseph
2013-01-01
Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro…
Symbolic behavior: death in the Mousterian and the MSA
Directory of Open Access Journals (Sweden)
Ángel RIVERA ARRIZABALAGA
2010-07-01
Symbolism is the principal characteristic of human behavior, yet in many respects it remains poorly understood. This paper carries out a structural analysis of human symbolism by means of a methodological synthesis drawing on several sciences concerned with human beings (evolutionary biology, neurology, psychology and sociology). This synthesis has given rise to a psychobiological model of human behavior, which allows us to develop a suitable method for the study of symbolism, from its origin to its full manifestation with its present-day characteristics. The method is then applied to the funerary behaviors known from the Middle Palaeolithic of Europe and the Near East and from the MSA of South Africa, in order to assess the intentionality of the burials, together with the symbolism possibly associated with them. Anthropophagy is also studied as a means of disposing of corpses in this period, in an attempt to understand whether it was practiced as survival behavior or was associated with symbolic elements similar to those related to the burials.
Statistical analysis of dynamic parameters of the core
International Nuclear Information System (INIS)
Ionov, V.S.
2007-01-01
Transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored with statistical analysis tools. The transients have sufficient duration, a few channels for chamber currents and reactivity, and some channels for technological parameters. From these values the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and from them the values of the measured dynamic parameters were reconstructed as a result of the analysis. The following mathematical means of statistical analysis were used: approximation (A), filtration (F), rejection (R), estimation of descriptive statistics parameters (DSP), correlation characteristics (KK), regression analysis (KP), prognosis (P), and statistical criteria (SC). The calculation procedures were implemented in MATLAB. The sources of methodical and statistical errors are presented: inadequacy of the model, precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the technique used for processing the registered data, etc. Examples of results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered. (Authors)
CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY
Directory of Open Access Journals (Sweden)
ILEANA BRUDIU
2009-05-01
Parameter estimation with confidence intervals and the testing of statistical hypotheses are used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size taken in the study and how it is reflected in the results obtained when using confidence intervals and hypothesis tests. While statistical hypothesis testing only gives a "yes" or "no" answer to some questions of statistical estimation, using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and from findings that are "marginally significant" or "almost significant" (p very close to 0.05).
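The case study's point, that a confidence interval conveys the uncertainty a bare yes/no test hides, can be illustrated with a normal-approximation interval for a mean (all the numbers are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist().inv_cdf(0.975)        # about 1.96 for a 95% interval

def mean_ci(xbar, s, n):
    """95% confidence interval for a mean (normal approximation)."""
    half = Z * s / sqrt(n)
    return xbar - half, xbar + half

# Same sample mean and spread, increasing sample size: the interval
# tightens as 1/sqrt(n), which is exactly what small samples hide.
for n in (10, 40, 160):
    lo, hi = mean_ci(100.0, 15.0, n)
    print(f"n={n:4d}: ({lo:.1f}, {hi:.1f})")
```

Quadrupling the sample size halves the interval width, so a result that is "marginally significant" in a small sample may correspond to a very wide, uninformative interval.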
Collecting operational event data for statistical analysis
International Nuclear Information System (INIS)
Atwood, C.L.
1994-09-01
This report gives guidance for collecting operational data to be used for statistical analysis, especially analysis of event counts. It discusses how to define the purpose of the study, the unit (system, component, etc.) to be studied, events to be counted, and demand or exposure time. Examples are given of classification systems for events in the data sources. A checklist summarizes the essential steps in data collection for statistical analysis
Statistics and analysis of scientific data
Bonamente, Massimiliano
2013-01-01
Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional datasets and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...
Method for statistical data analysis of multivariate observations
Gnanadesikan, R
1997-01-01
A practical guide for multivariate statistical techniques, now updated and revised. In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of interest.
Advances in statistical models for data analysis
Minerva, Tommaso; Vichi, Maurizio
2015-01-01
This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.
Multispectral analysis of multimodal images
Energy Technology Data Exchange (ETDEWEB)
Kvinnsland, Yngve; Brekke, Njaal (Dept. of Surgical Sciences, Univ. of Bergen, Bergen (Norway)); Taxt, Torfinn M.; Gruener, Renate (Dept. of Biomedicine, Univ. of Bergen, Bergen (Norway))
2009-02-15
An increasing number of multimodal images represents a valuable increase in available image information, but at the same time it complicates the extraction of diagnostic information across the images. Multispectral analysis (MSA) has the potential to simplify this problem substantially, as an unlimited number of images can be combined and tissue properties across the images can be extracted automatically. Materials and methods: We have developed a software solution for MSA containing two algorithms for unsupervised classification, an EM algorithm finding multinormal class descriptions and the k-means clustering algorithm, and two for supervised classification, a Bayesian classifier using multinormal class descriptions and a kNN algorithm. The software has an efficient user interface for the creation and manipulation of class descriptions, and it has proper tools for displaying the results. Results: The software has been tested on different sets of images. One application is to segment cross-sectional images of brain tissue (T1- and T2-weighted MR images) into its main normal tissues and brain tumors. Another interesting set of images comprises the perfusion maps and diffusion maps derived from raw MR images. The software returns segmentations that seem to be sensible. Discussion: The MSA software appears to be a valuable tool for image analysis when multimodal images are at hand. It readily gives a segmentation of image volumes that visually seems sensible. However, to really learn how to use MSA it will be necessary to gain more insight into what tissues the different segments contain, and upcoming work will therefore focus on examining the tissues through, for example, histological sections.
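Of the four classifiers mentioned, k-means is the simplest to sketch. Below is a plain-NumPy version applied to synthetic two-modality "voxels" (e.g. T1 and T2 intensities); the deterministic initialization and all values are assumptions of the sketch, not the software's actual implementation:

```python
import numpy as np

def kmeans(x, k, iters=20):
    """Plain k-means for an (n_voxels, n_modalities) feature matrix."""
    # Simple deterministic init for the sketch: evenly spaced seed points.
    centers = x[[i * len(x) // k for i in range(k)]]
    for _ in range(iters):
        # Assign each voxel to its nearest center (Euclidean distance).
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned voxels.
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Synthetic two-modality intensities drawn around three tissue-like centers.
rng = np.random.default_rng(1)
tissues = [(0.2, 0.8), (0.5, 0.5), (0.9, 0.1)]
x = np.vstack([np.array(t) + 0.03 * rng.standard_normal((200, 2))
               for t in tissues])
labels, centers = kmeans(x, k=3)
print("cluster sizes:", np.bincount(labels))
```

Each row of `x` plays the role of one voxel's feature vector across modalities; in the described software the EM algorithm would replace the hard assignments with multinormal class memberships.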
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications, providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, and stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
Classification, (big) data analysis and statistical learning
Conversano, Claudio; Vichi, Maurizio
2018-01-01
This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects and applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...
Statistical hot spot analysis of reactor cores
International Nuclear Information System (INIS)
Schaefer, H.
1974-05-01
This report is an introduction to statistical hot spot analysis. After the definition of the term 'hot spot', a statistical analysis is outlined. The mathematical method is presented; in particular, the formula for the probability of no hot spots in a reactor core is evaluated. The boundary conditions of a statistical hot spot analysis (technological limits, nominal situation, uncertainties) are discussed. The application of hot spot analysis to the linear power of pellets and the temperature rise in cooling channels is demonstrated with respect to the test zone of KNK II. Basic values, such as the probability of no hot spots, the hot spot potential, the expected hot spot diagram and the cumulative distribution function of hot spots, are discussed. It is shown that the risk of hot channels can be dispersed equally over all subassemblies by an adequate choice of the nominal temperature distribution in the core.
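The central quantity, the probability of no hot spots, can be sketched as a product of per-channel non-exceedance probabilities; the channel count, temperatures, and uncertainty below are illustrative values, not KNK II data, and independent normal deviations are assumed:

```python
import math

def prob_no_hot_spot(nominal_temps, limit, sigma):
    """Probability that no channel exceeds the limit, assuming independent,
    normally distributed deviations about each nominal temperature."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    p = 1.0
    for t in nominal_temps:
        p *= phi((limit - t) / sigma)
    return p

# Illustrative (not KNK II) values: 100 channels at 550 degC nominal,
# a 600 degC limit, and a 20 degC standard deviation per channel.
p = prob_no_hot_spot([550.0] * 100, 600.0, 20.0)
print(round(p, 4))
```

Note how a per-channel exceedance probability of well under 1% still compounds to a substantial core-wide hot spot risk, which is the motivation for spreading margin evenly across subassemblies.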
The statistical analysis of anisotropies
International Nuclear Information System (INIS)
Webster, A.
1977-01-01
One of the many uses to which a radio survey may be put is an analysis of the distribution of the radio sources on the celestial sphere, to find out whether they are bunched into clusters or lie in preferred regions of space. There are many methods of testing for clustering in point processes, and since they are not all equally good, this contribution is presented as a brief guide to what seems to be the best of them. The radio sources certainly do not show very strong clustering and may well be entirely unclustered, so if a statistical method is to be useful it must be both powerful and flexible. A statistic is powerful in this context if it can efficiently distinguish a weakly clustered distribution of sources from an unclustered one, and it is flexible if it can be applied in a way which avoids mistaking defects in the survey for true peculiarities in the distribution of sources. The paper divides clustering statistics into two classes: number density statistics and log N/log S statistics. (Auth.)
Basic statistical tools in research and data analysis
Directory of Open Access Journals (Sweden)
Zulfiqar Ali
2016-01-01
Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting the research findings. Statistical analysis gives meaning to otherwise meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables, and the measures of central tendency. An idea of sample size estimation, power analysis and statistical errors is given. Finally, there is a summary of the parametric and non-parametric tests used for data analysis.
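As a small illustration of the parametric/non-parametric distinction summarized above, a two-sample t-test and its rank-based counterpart on synthetic data (using SciPy; this example is not from the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, 50)   # e.g. control measurements
group_b = rng.normal(12.0, 2.0, 50)   # e.g. treated measurements

# Parametric: two-sample t-test (assumes approximately normal data).
t_stat, t_p = stats.ttest_ind(group_a, group_b)
# Non-parametric counterpart: Mann-Whitney U (rank-based, no normality assumption).
u_stat, u_p = stats.mannwhitneyu(group_a, group_b)
print(t_p < 0.05, u_p < 0.05)
```

With clearly shifted groups both tests agree; the choice between them matters most for small samples or skewed data, where the parametric assumptions fail.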
Reproducible statistical analysis with multiple languages
DEFF Research Database (Denmark)
Lenth, Russell; Højsgaard, Søren
2011-01-01
This paper describes a system for making statistical analyses reproducible. It differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of this paper is an example showing how to use them together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.
Directory of Open Access Journals (Sweden)
Priya Ranganathan
2015-01-01
Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.
Statistics and analysis of scientific data
Bonamente, Massimiliano
2017-01-01
The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked, to improve the readabili...
Statistical evaluation of diagnostic performance topics in ROC analysis
Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E
2016-01-01
Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...
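A minimal empirical ROC computation in the spirit of the book's subject, on synthetic diagnostic scores (scikit-learn; this is not an example from the book):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical diagnostic scores: diseased cases (label 1) tend to score higher.
y_true = np.array([0] * 50 + [1] * 50)
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(0.0, 1.0, 50),   # healthy
                         rng.normal(1.5, 1.0, 50)])  # diseased

# Area under the empirical ROC curve, and the curve itself.
auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)
print(round(auc, 3))
```

The AUC summarizes the test's ability to rank diseased above healthy cases across all thresholds; the book's topics (monotone transformations, pooled biomarkers, multireader designs) build on this basic construction.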
Bayesian Inference in Statistical Analysis
Box, George E P
2011-01-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob
Analysis of Variance: What Is Your Statistical Software Actually Doing?
Li, Jian; Lomax, Richard G.
2011-01-01
Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…
Comparing Visual and Statistical Analysis of Multiple Baseline Design Graphs.
Wolfe, Katie; Dickenson, Tammiee S; Miller, Bridget; McGrath, Kathleen V
2018-04-01
A growing number of statistical analyses are being developed for single-case research. One important factor in evaluating these methods is the extent to which each corresponds to visual analysis. Few studies have compared statistical and visual analysis, and information about more recently developed statistics is scarce. Therefore, our purpose was to evaluate the agreement between visual analysis and four statistical analyses: improvement rate difference (IRD); Tau-U; Hedges, Pustejovsky, Shadish (HPS) effect size; and between-case standardized mean difference (BC-SMD). Results indicate that IRD and BC-SMD had the strongest overall agreement with visual analysis. Although Tau-U had strong agreement with visual analysis on raw values, it had poorer agreement when those values were dichotomized to represent the presence or absence of a functional relation. Overall, visual analysis appeared to be more conservative than statistical analysis, but further research is needed to evaluate the nature of these disagreements.
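One of the statistics compared above, Tau-U, reduces in its simplest form to pairwise comparisons between phases; a sketch of that core (without the baseline-trend correction the full Tau-U applies), on hypothetical single-case data:

```python
def tau_ab(baseline, treatment):
    """Simplified Tau: the A-vs-B nonoverlap core of Tau-U, without the
    baseline-trend correction. Compares every baseline point with every
    treatment point and scales the net improvement to [-1, 1]."""
    pos = neg = 0
    for a in baseline:
        for b in treatment:
            if b > a:
                pos += 1
            elif b < a:
                neg += 1
    return (pos - neg) / (len(baseline) * len(treatment))

# Hypothetical single-case data: the behaviour increases after intervention.
baseline = [2, 3, 2, 4]
treatment = [5, 6, 7, 6, 8]
print(tau_ab(baseline, treatment))
```

A value of 1.0 means every treatment observation exceeds every baseline observation (complete nonoverlap), which is also the pattern a visual analyst would call a clear functional relation.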
24 CFR 1710.13 - Metropolitan Statistical Area (MSA) exemption.
2010-04-01
...; (ii) Contains a good faith estimate of the year in which the roads, water and sewer facilities and... of State law, that period becomes the Federal revocation period and the contract must reflect the... subdivision, and adverse claims which are applicable to the lot to be purchased. (ii) Good faith estimates of...
2000 Census Metropolitan Statistical Areas (MSA) for Sandoval County, New Mexico, 2006se TIGER
Earth Data Analysis Center, University of New Mexico — The 2006 Second Edition TIGER/Line files are an extract of selected geographic and cartographic information from the Census TIGER database. The geographic coverage...
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Differential Diagnosis Tool for Parkinsonian Syndrome Using Multiple Structural Brain Measures
Directory of Open Access Journals (Sweden)
Miho Ota
2013-01-01
Full Text Available Clinical differentiation of parkinsonian syndromes such as the Parkinson variant of multiple system atrophy (MSA-P) and the cerebellar subtype (MSA-C) from Parkinson's disease is difficult in the early stage of the disease. To identify the correlative pattern of brain changes for differentiating parkinsonian syndromes, we applied discriminant analysis techniques to magnetic resonance imaging (MRI). T1-weighted volume data and diffusion tensor images were obtained by MRI in eighteen patients with MSA-C, 12 patients with MSA-P, 21 patients with Parkinson's disease, and 21 healthy controls. They were evaluated using voxel-based morphometry and tract-based spatial statistics, respectively. Discriminant functions derived by stepwise methods resulted in correct classification rates of 0.89. When differentiating these diseases with the use of three independent variables together, the correct classification rate was the same as that obtained with stepwise methods. These findings support the view that each parkinsonian syndrome has structural deviations in multiple brain areas and that a combination of structural brain measures can help to distinguish parkinsonian syndromes.
International Nuclear Information System (INIS)
Chen, Shun; Tan, Hong-yu; Wu, Zhuo-hua; Sun, Chong-peng; He, Jian-xun; Li, Xin-chun; Shao, Ming
2014-01-01
We explored whether magnetic resonance imaging sequences might aid in the clinical differential diagnosis of idiopathic Parkinson's disease (IPD) and multiple system atrophy (MSA). We measured the volumes of the olfactory bulb, the olfactory tract, and olfaction-associated cortical gray matter in 20 IPD patients, 14 MSA patients, and 12 normal subjects, using high-resolution magnetic resonance imaging sequences in combination with voxel-based statistical analysis. We found that, compared to normal subjects and MSA patients, the volumes of the olfactory bulb and tract were significantly reduced in IPD patients. The gray matter volume of IPD patients decreased in the following order: the olfactory area to the right of the piriform cortex, the right amygdala, the left entorhinal cortex, and the left occipital lobe. Further, the total olfactory bulb volume of IPD patients was associated with the duration of disease. The entorhinal cortical gray matter volume was negatively associated with the UPDRS III score. Conclusion: Structural volumes measured by high-resolution magnetic resonance imaging may potentially be used for differential diagnosis of IPD from MSA
Energy Technology Data Exchange (ETDEWEB)
Chen, Shun, E-mail: shchen_2013@163.com [Department of Neurology, The First Affiliated Hospital of Guangzhou Medical College (China); Tan, Hong-yu, E-mail: honhyutan@21cn.com [Department of Neurology, The First Affiliated Hospital of Guangzhou Medical College (China); Wu, Zhuo-hua, E-mail: zhh88@126.com [Department of Neurology, The First Affiliated Hospital of Guangzhou Medical College (China); Sun, Chong-peng, E-mail: Suncp2002@gmail.com [Imaging Center, The First Affiliated Hospital of Guangzhou Medical College (China); He, Jian-xun, E-mail: xundog@163.com [Imaging Center, The First Affiliated Hospital of Guangzhou Medical College (China); Li, Xin-chun, E-mail: xinchunli@163.com [Imaging Center, The First Affiliated Hospital of Guangzhou Medical College (China); Shao, Ming, E-mail: yimshao@126.com [Department of Neurology, The First Affiliated Hospital of Guangzhou Medical College (China)
2014-03-15
We explored whether magnetic resonance imaging sequences might aid in the clinical differential diagnosis of idiopathic Parkinson's disease (IPD) and multiple system atrophy (MSA). We measured the volumes of the olfactory bulb, the olfactory tract, and olfaction-associated cortical gray matter in 20 IPD patients, 14 MSA patients, and 12 normal subjects, using high-resolution magnetic resonance imaging sequences in combination with voxel-based statistical analysis. We found that, compared to normal subjects and MSA patients, the volumes of the olfactory bulb and tract were significantly reduced in IPD patients. The gray matter volume of IPD patients decreased in the following order: the olfactory area to the right of the piriform cortex, the right amygdala, the left entorhinal cortex, and the left occipital lobe. Further, the total olfactory bulb volume of IPD patients was associated with the duration of disease. The entorhinal cortical gray matter volume was negatively associated with the UPDRS III score. Conclusion: Structural volumes measured by high-resolution magnetic resonance imaging may potentially be used for differential diagnosis of IPD from MSA.
Online Statistical Modeling (Regression Analysis) for Independent Responses
Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus
2017-06-01
Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. Rich varieties of advanced and recent statistical modelling are mostly available in open source software (one of them is R). However, these advanced statistical modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and shiny), so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made an interface in the form of an e-tutorial for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and easier to compare, in order to find the most appropriate model for the data.
Application of Ontology Technology in Health Statistic Data Analysis.
Guo, Minjiang; Hu, Hongpu; Lei, Xingyun
2017-01-01
Research Purpose: to establish a health management ontology for the analysis of health statistic data. Proposed Methods: this paper established a health management ontology based on an analysis of the concepts in the China Health Statistics Yearbook, and used protégé to define the syntactic and semantic structure of health statistical data. Six classes of top-level ontology concepts and their subclasses were extracted, and the object properties and data properties were defined to establish the construction of these classes. By ontology instantiation, we can integrate multi-source heterogeneous data and enable administrators to have an overall understanding and analysis of the health statistic data. Ontology technology provides a comprehensive and unified information integration structure for the health management domain and lays a foundation for the efficient analysis of multi-source and heterogeneous health system management data and the enhancement of management efficiency.
Explorations in Statistics: The Analysis of Change
Curran-Everett, Douglas; Williams, Calvin L.
2015-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…
A fully customizable MATLAB Framework for MSA based on ISO 5725 Standard
International Nuclear Information System (INIS)
D’Aucelli, Giuseppe Maria; Giaquinto, Nicola; Mannatrizio, Sabino; Savino, Mario
2017-01-01
In this paper, a full-featured MATLAB framework for Measurement System Analysis, fully compliant with the ISO 5725 Repeatability and Reproducibility (R&R) assessment, is presented. While preserving the operations prescribed in the ISO standard, the software presents distinct improvements. First of all, all computations are made using exact closed-form formulae (instead of statistical tables), allowing a consistent analysis without limitations on the number of participating laboratories and measurements, and using custom significance levels for statistical tests. Second, a double-threshold decision system for each test step has been implemented, helping the statistician decide on the elimination of outliers/stragglers. Third, ANOVA analysis has been included. The software therefore, besides quickly and efficiently producing all the graphical and numerical results required in an inter-laboratory experiment, provides guidelines for properly updating the ISO 5725 standard. (paper)
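A sketch of the basic ISO 5725 variance decomposition such a framework automates (balanced design, no outlier screening, hypothetical laboratory results; this is not the authors' MATLAB code):

```python
import numpy as np

def repeatability_reproducibility(data):
    """ISO 5725-style one-way ANOVA estimates for a balanced design of
    p laboratories x n replicates: returns the repeatability variance
    s_r^2 and the reproducibility variance s_R^2."""
    data = np.asarray(data, dtype=float)
    p, n = data.shape
    lab_means = data.mean(axis=1)
    grand_mean = data.mean()
    ms_within = ((data - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
    ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)
    s_r2 = ms_within                                  # repeatability
    s_L2 = max((ms_between - ms_within) / n, 0.0)     # between-lab component
    return s_r2, s_L2 + s_r2                          # (s_r^2, s_R^2)

# Hypothetical inter-laboratory results: 4 labs, 3 replicates each.
results = [[10.1, 10.2, 10.0],
           [10.4, 10.5, 10.6],
           [ 9.8,  9.9,  9.7],
           [10.2, 10.1, 10.3]]
s_r2, s_R2 = repeatability_reproducibility(results)
print(round(s_r2, 4), round(s_R2, 4))
```

Reproducibility always dominates repeatability (s_R^2 >= s_r^2) because it adds the between-laboratory component on top of the within-laboratory scatter; the outlier/straggler tests of the standard (Cochran, Grubbs) would be applied before this decomposition.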
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2015-01-01
In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958
TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION
Directory of Open Access Journals (Sweden)
А. А. Vershinina
2014-01-01
Full Text Available A technique for the statistical analysis of the investment appeal of a region for direct foreign investment is presented in this scientific article. A definition of the technique of statistical analysis is given, the stages of the analysis are described, and the mathematical-statistical tools are considered.
Statistical analysis of network data with R
Kolaczyk, Eric D
2014-01-01
Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).
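The book's central package is R's igraph; an analogous basic summary-and-characterization pass in Python with networkx, on the classic karate-club graph, might look like:

```python
import networkx as nx

# Zachary's karate-club graph, a standard small social network.
G = nx.karate_club_graph()

# Basic summary and characterization, as in the book's early chapters.
n_nodes = G.number_of_nodes()
n_edges = G.number_of_edges()
density = nx.density(G)
top_degree = max(dict(G.degree()).items(), key=lambda kv: kv[1])
clustering = nx.average_clustering(G)
print(n_nodes, n_edges, round(density, 3), top_degree, round(clustering, 3))
```

These descriptive statistics (size, density, degree distribution, clustering) are the starting point; the book then moves on to visualization and statistical modeling of network data.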
Differential diagnosis of Parkinsonism using dual-phase F-18 FP-CIT PET imaging
International Nuclear Information System (INIS)
Jin, So Young; Oh, Min Young; Ok, Seung Jun; Oh, Jung Su; Lee, Sang Ju; Chung, Sun Ju; Lee, Chong Sik; Kim, Jae Seung
2012-01-01
Dopamine transporter (DAT) imaging can demonstrate presynaptic dopaminergic neuronal loss in Parkinson's disease (PD). However, differentiating atypical parkinsonism (APD) from PD is often difficult. We investigated the usefulness of dual-phase F-18 FP-CIT positron emission tomography (PET) imaging in the differential diagnosis of parkinsonism. Ninety-eight subjects [five normal, seven drug-induced parkinsonism (DIP), five essential tremor (ET), 24 PD, 20 multiple system atrophy parkinson type (MSA-P), 13 multiple system atrophy cerebellar type (MSA-C), 13 progressive supranuclear palsy (PSP), and 11 dementia with Lewy bodies (DLB)] underwent F-18 FP-CIT PET. PET images were acquired at 5 min (early phase) and 3 h (late phase) after F-18 FP-CIT administration (185 MBq). The regional uptake pattern of the cerebral and cerebellar hemispheres was assessed on early phase images, using visual, quantitative, and statistical parametric mapping (SPM) analyses. Striatal DAT binding was normal in the normal, ET, DIP, and MSA-C groups, but abnormal in the PD, MSA-P, PSP, and DLB groups. No difference was found in regional uptake on early phase images among the normal DAT binding groups, except in the MSA-C group. Abnormal DAT binding groups showed different regional uptake patterns on early phase images compared with PD in SPM analysis (FDR<0.05). When discriminating APD from PD, visual interpretation of the early phase image showed high diagnostic sensitivity and specificity (75.4% and 100%, respectively). Regarding the ability to distinguish specific APD, sensitivities were 81% for MSA-P, 77% for MSA-C, 23% for PSP, and 54.5% for DLB. Dual-phase F-18 FP-CIT PET imaging is useful in demonstrating striatal DAT loss in neurodegenerative parkinsonism, and also in differentiating APD, particularly MSA, from PD
Differential diagnosis of Parkinsonism using dual-phase F-18 FP-CIT PET imaging
Energy Technology Data Exchange (ETDEWEB)
Jin, So Young; Oh, Min Young; Ok, Seung Jun; Oh, Jung Su; Lee, Sang Ju; Chung, Sun Ju; Lee, Chong Sik; Kim, Jae Seung [Univ. of Ulsan, Seoul (Korea, Republic of)
2012-03-15
Dopamine transporter (DAT) imaging can demonstrate presynaptic dopaminergic neuronal loss in Parkinson's disease (PD). However, differentiating atypical parkinsonism (APD) from PD is often difficult. We investigated the usefulness of dual-phase F-18 FP-CIT positron emission tomography (PET) imaging in the differential diagnosis of parkinsonism. Ninety-eight subjects [five normal, seven drug-induced parkinsonism (DIP), five essential tremor (ET), 24 PD, 20 multiple system atrophy parkinson type (MSA-P), 13 multiple system atrophy cerebellar type (MSA-C), 13 progressive supranuclear palsy (PSP), and 11 dementia with Lewy bodies (DLB)] underwent F-18 FP-CIT PET. PET images were acquired at 5 min (early phase) and 3 h (late phase) after F-18 FP-CIT administration (185 MBq). The regional uptake pattern of the cerebral and cerebellar hemispheres was assessed on early phase images, using visual, quantitative, and statistical parametric mapping (SPM) analyses. Striatal DAT binding was normal in the normal, ET, DIP, and MSA-C groups, but abnormal in the PD, MSA-P, PSP, and DLB groups. No difference was found in regional uptake on early phase images among the normal DAT binding groups, except in the MSA-C group. Abnormal DAT binding groups showed different regional uptake patterns on early phase images compared with PD in SPM analysis (FDR<0.05). When discriminating APD from PD, visual interpretation of the early phase image showed high diagnostic sensitivity and specificity (75.4% and 100%, respectively). Regarding the ability to distinguish specific APD, sensitivities were 81% for MSA-P, 77% for MSA-C, 23% for PSP, and 54.5% for DLB. Dual-phase F-18 FP-CIT PET imaging is useful in demonstrating striatal DAT loss in neurodegenerative parkinsonism, and also in differentiating APD, particularly MSA, from PD.
Semiclassical analysis, Witten Laplacians, and statistical mechanics
Helffer, Bernard
2002-01-01
This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S
A novel statistic for genome-wide interaction analysis.
Directory of Open Access Journals (Sweden)
Xuesen Wu
2010-09-01
Full Text Available Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR<0.001 and 0.001
A statistical approach to plasma profile analysis
International Nuclear Information System (INIS)
Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.
1990-05-01
A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous approach to the empirical testing of plasma profile invariance. (orig.)
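A minimal illustration of profile parameterisation with confidence intervals, using a simple linear-in-parameters radial model and synthetic data (not the tokamak models or estimators of the paper):

```python
import numpy as np

# Synthetic radial profile p(r) = c0 + c1*r^2 with measurement noise;
# true values c0 = 2.0, c1 = -1.5.
rng = np.random.default_rng(5)
r = np.linspace(0, 1, 40)
profile = 2.0 - 1.5 * r**2 + rng.normal(0, 0.05, r.size)

# Classical least-squares fit and 95% confidence intervals on the parameters
# from the estimated parameter covariance matrix.
A = np.column_stack([np.ones_like(r), r**2])
coef, res, *_ = np.linalg.lstsq(A, profile, rcond=None)
dof = r.size - 2
sigma2 = res[0] / dof                       # residual variance estimate
cov = sigma2 * np.linalg.inv(A.T @ A)       # parameter covariance
ci = 1.96 * np.sqrt(np.diag(cov))           # approximate 95% half-widths
print(coef, ci)
```

The paper's approach generalizes this in several directions: dependence of the profile on plasma parameters, robust estimators, and confidence bands for the profile itself and derived global quantities rather than just the coefficients.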
Shaikh, Masood Ali
2017-09-01
Assessment of research articles in terms of the study designs used, statistical tests applied, and use of statistical analysis programmes helps determine the research activity profile and trends in the country. In this descriptive study, all original articles published by the Journal of Pakistan Medical Association (JPMA) and the Journal of the College of Physicians and Surgeons Pakistan (JCPSP) in the year 2015 were reviewed in terms of study designs used, application of statistical tests, and use of statistical analysis programmes. JPMA and JCPSP published 192 and 128 original articles, respectively, in the year 2015. Results of this study indicate that the cross-sectional study design, bivariate inferential statistical analysis entailing comparison between two variables/groups, and the statistical software programme SPSS were the most common study design, inferential statistical analysis, and statistical analysis software programme, respectively. These results echo a previously published assessment of these two journals for the year 2014.
Statistical analysis of brake squeal noise
Oberst, S.; Lai, J. C. S.
2011-06-01
Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.
The Statistical Analysis of Time Series
Anderson, T. W.
2011-01-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George
Analysis of room transfer function and reverberant signal statistics
DEFF Research Database (Denmark)
Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn
2008-01-01
For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF … magnitude and phase). RTF fractional octave smoothing, as with 1/3 octave analysis, may lead to RTF simplifications that can be useful for several audio applications, like room compensation, room modeling, and auralisation purposes. The aim of this work is to identify the relationship of optimal response … and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical …
Energy Technology Data Exchange (ETDEWEB)
Surova, Yulia; Hall, Sara; Widner, Haakan [Lund University, Department of Clinical Sciences, Lund (Sweden); Skaane University Hospital, Department of Neurology, Lund (Sweden); Nilsson, Markus [Lund University, Lund University Bioimaging Center, Lund (Sweden); Laett, Jimmy [Skaane University Hospital, Center for Medical Imaging and Physiology, Lund (Sweden); Lampinen, Bjoern [Lund University, Department of Medical Radiation Physics, Lund (Sweden); Lindberg, Olof [Malmoe, Lund University, Department of Clinical Sciences, Malmoe (Sweden); Nilsson, Christer [Skaane University Hospital, Department of Neurology, Lund (Sweden); Malmoe, Lund University, Department of Clinical Sciences, Malmoe (Sweden); Westen, Danielle van [Lund University, Department of Clinical Sciences, Lund (Sweden); Skaane University Hospital, Center for Medical Imaging and Physiology, Lund (Sweden); Hansson, Oskar [Malmoe, Lund University, Department of Clinical Sciences, Malmoe (Sweden); Skaane University Hospital, Memory Clinic, Lund (Sweden)
2015-11-15
The aim of this study is to identify disease-specific changes of the thalamus, basal ganglia, pons, and midbrain in patients with progressive supranuclear palsy (PSP), Parkinson's disease (PD), and multiple system atrophy with predominant parkinsonism (MSA-P) using diffusion tensor imaging and volumetric analysis. MRI diffusion and volumetric data were acquired in a derivation cohort of 30 controls and 8 patients with PSP and a validation cohort comprised of controls (n = 21) and patients with PSP (n = 27), PD (n = 10), and MSA-P (n = 11). Analysis was performed using regions of interest (ROI), tract-based spatial statistics (TBSS), and tractography, and results were compared between diagnostic groups. In the derivation cohort, we observed increased mean diffusivity (MD) in the thalamus, superior cerebellar peduncle, and the midbrain in PSP compared to controls. Furthermore, volumetric analysis showed reduced thalamic volumes in PSP. In the validation cohort, the observations of increased MD were replicated by ROI-based analysis and in the thalamus by TBSS-based analysis. Such differences were not found for patients with PD in any of the cohorts. Tractography of the dentatorubrothalamic tract (DRTT) showed increased MD in PSP patients from both cohorts compared to controls and, in the validation cohort, in PSP compared to PD and MSA patients. Increased MD in the thalamus and along the DRTT correlated with disease stage and motor function in PSP. Patients with PSP, but not PD or MSA-P, exhibit signs of structural abnormalities in the thalamus and in the DRTT. These changes are associated with disease stage and impaired motor function. (orig.)
International Nuclear Information System (INIS)
Surova, Yulia; Hall, Sara; Widner, Haakan; Nilsson, Markus; Laett, Jimmy; Lampinen, Bjoern; Lindberg, Olof; Nilsson, Christer; Westen, Danielle van; Hansson, Oskar
2015-01-01
The aim of this study is to identify disease-specific changes of the thalamus, basal ganglia, pons, and midbrain in patients with progressive supranuclear palsy (PSP), Parkinson's disease (PD), and multiple system atrophy with predominant parkinsonism (MSA-P) using diffusion tensor imaging and volumetric analysis. MRI diffusion and volumetric data were acquired in a derivation cohort of 30 controls and 8 patients with PSP and a validation cohort comprised of controls (n = 21) and patients with PSP (n = 27), PD (n = 10), and MSA-P (n = 11). Analysis was performed using regions of interest (ROI), tract-based spatial statistics (TBSS), and tractography, and results were compared between diagnostic groups. In the derivation cohort, we observed increased mean diffusivity (MD) in the thalamus, superior cerebellar peduncle, and the midbrain in PSP compared to controls. Furthermore, volumetric analysis showed reduced thalamic volumes in PSP. In the validation cohort, the observations of increased MD were replicated by ROI-based analysis and in the thalamus by TBSS-based analysis. Such differences were not found for patients with PD in any of the cohorts. Tractography of the dentatorubrothalamic tract (DRTT) showed increased MD in PSP patients from both cohorts compared to controls and, in the validation cohort, in PSP compared to PD and MSA patients. Increased MD in the thalamus and along the DRTT correlated with disease stage and motor function in PSP. Patients with PSP, but not PD or MSA-P, exhibit signs of structural abnormalities in the thalamus and in the DRTT. These changes are associated with disease stage and impaired motor function. (orig.)
Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)
2004-12-01
The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Transit Database.
Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)
2005-12-01
The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Transit Database.
Statistical Modelling of Wind Profiles - Data Analysis and Modelling
DEFF Research Database (Denmark)
Jónsson, Tryggvi; Pinson, Pierre
The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.
Statistical analysis of long term spatial and temporal trends of ...
Indian Academy of Sciences (India)
Statistical analysis of long term spatial and temporal trends of temperature ... CGCM3; HadCM3; modified Mann–Kendall test; statistical analysis; Sutlej basin. ... Water Resources Systems Division, National Institute of Hydrology, Roorkee 247 ...
Progression and prognosis in multiple system atrophy: an analysis of 230 Japanese patients.
Watanabe, Hirohisa; Saito, Yufuko; Terao, Shinichi; Ando, Tetsuo; Kachi, Teruhiko; Mukai, Eiichiro; Aiba, Ikuko; Abe, Yuji; Tamakoshi, Akiko; Doyu, Manabu; Hirayama, Masaaki; Sobue, Gen
2002-05-01
We investigated the disease progression and survival in 230 Japanese patients with multiple system atrophy (MSA; 131 men, 99 women; 208 probable MSA, 22 definite; mean age at onset, 55.4 years). Cerebellar dysfunction (multiple system atrophy-cerebellar; MSA-C) predominated in 155 patients, and parkinsonism (multiple system atrophy-parkinsonian; MSA-P) in 75. The median time from initial symptom to combined motor and autonomic dysfunction was 2 years (range 1-10). Median intervals from onset to aid-requiring walking, confinement to a wheelchair, a bedridden state and death were 3, 5, 8 and 9 years, respectively. Patients manifesting combined motor and autonomic involvement within 3 years of onset had a significantly increased risk of not only developing advanced disease stages but also shorter survival. Although MSA-P patients reached an aid-requiring walking state more rapidly than MSA-C patients (P = 0.03), their subsequent progression to a bedridden state and survival were no worse. Gender was not associated with differences in worsening of function or survival. On MRI, a hyperintense rim at the lateral edge of the dorsolateral putamen was seen in 34.5% of cases, and a 'hot cross bun' sign in the pontine basis (PB) in 63.3%. These putaminal and pontine abnormalities became more prominent as MSA-P and MSA-C features advanced. The atrophy of the cerebellar vermis and PB showed a significant correlation particularly with the interval following the appearance of cerebellar symptoms in MSA-C (r = 0.71 and r = 0.76, respectively; P < 0.01), but the relationship between atrophy and functional status was highly variable among individuals, suggesting that other factors influenced the functional deterioration. Atrophy of the corpus callosum was seen in a subpopulation of MSA, suggesting hemispheric involvement in a subgroup of MSA patients. The present study suggested that many factors are involved in the progression of MSA but, most importantly, the interval from initial symptom to combined motor and autonomic dysfunction can predict functional deterioration.
CORSSA: The Community Online Resource for Statistical Seismicity Analysis
Michael, Andrew J.; Wiemer, Stefan
2010-01-01
Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools makes it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that readers can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA, as well as its structure and contents.
Multivariate statistical analysis a high-dimensional approach
Serdobolskii, V
2000-01-01
In the last few decades, the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen ...
Applied multivariate statistical analysis
Härdle, Wolfgang Karl
2015-01-01
Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners. It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added. All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior. All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...
Statistical evaluation of vibration analysis techniques
Milner, G. Martin; Miller, Patrice S.
1987-01-01
An evaluation methodology is presented for a selection of candidate vibration analysis techniques applicable to machinery representative of the environmental control and life support system of advanced spacecraft; illustrative results are given. Attention is given to the statistical analysis of small sample experiments, the quantification of detection performance for diverse techniques through the computation of probability of detection versus probability of false alarm, and the quantification of diagnostic performance.
HistFitter software framework for statistical data analysis
Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.
2015-01-01
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses.
Statistical analysis on extreme wave height
Digital Repository Service at National Institute of Oceanography (India)
Teena, N.V.; SanilKumar, V.; Sudheesh, K.; Sajeev, R.
WAFO (2000) - A MATLAB toolbox for analysis of random waves and loads, Lund University, Sweden, http://www.maths.lth.se/matstat/wafo/. Table 1: Statistical results of data and fitted distribution for cumulative distribution.
Statistical Analysis of Zebrafish Locomotor Response.
Liu, Yiwen; Carmer, Robert; Zhang, Gaonan; Venkatraman, Prahatha; Brown, Skye Ashton; Pang, Chi-Pui; Zhang, Mingzhi; Ma, Ping; Leung, Yuk Fai
2015-01-01
Zebrafish larvae display rich locomotor behaviour upon external stimulation. The movement can be simultaneously tracked from many larvae arranged in multi-well plates. The resulting time-series locomotor data have been used to reveal new insights into neurobiology and pharmacology. However, the data are of large scale, and the corresponding locomotor behavior is affected by multiple factors. These issues pose a statistical challenge for comparing larval activities. To address this gap, this study has analyzed a visually-driven locomotor behaviour named the visual motor response (VMR) by the Hotelling's T-squared test. This test is congruent with comparing locomotor profiles from a time period. Different wild-type (WT) strains were compared using the test, which shows that they responded differently to light change at different developmental stages. The performance of this test was evaluated by a power analysis, which shows that the test was sensitive for detecting differences between experimental groups with sample numbers that were commonly used in various studies. In addition, this study investigated the effects of various factors that might affect the VMR by multivariate analysis of variance (MANOVA). The results indicate that the larval activity was generally affected by stage, light stimulus, their interaction, and location in the plate. Nonetheless, different factors affected larval activity differently over time, as indicated by a dynamical analysis of the activity at each second. Intriguingly, this analysis also shows that biological and technical repeats had negligible effect on larval activity. This finding is consistent with that from the Hotelling's T-squared test, and suggests that experimental repeats can be combined to enhance statistical power. Together, these investigations have established a statistical framework for analyzing VMR data, a framework that should be generally applicable to other locomotor data with similar structure.
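The two-sample Hotelling's T-squared statistic used above for comparing locomotor profiles can be sketched in a few lines. The example below is a minimal bivariate (two-time-point) version with invented activity numbers; it is not the study's data or code, and a real profile comparison would use many more time points:

```python
def mean_vec(rows):
    """Column-wise mean of a list of observation vectors."""
    n, p = len(rows), len(rows[0])
    return [sum(r[k] for r in rows) / n for k in range(p)]

def hotelling_t2(a, b):
    """Two-sample Hotelling's T^2 for bivariate observations (p = 2)."""
    def scatter(rows, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - m[0], r[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s
    ma, mb = mean_vec(a), mean_vec(b)
    sa, sb = scatter(a, ma), scatter(b, mb)
    dof = len(a) + len(b) - 2
    # Pooled covariance and its 2x2 inverse.
    s = [[(sa[i][j] + sb[i][j]) / dof for j in range(2)] for i in range(2)]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    quad = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    n1, n2 = len(a), len(b)
    return (n1 * n2) / (n1 + n2) * quad

# Invented activity values for two strains at two time points.
strain_a = [[1.0, 2.0], [2.0, 3.0], [3.0, 2.5], [2.5, 1.5]]
strain_b = [[2.0, 3.0], [3.0, 4.0], [4.0, 3.5], [3.5, 2.5]]
print(hotelling_t2(strain_a, strain_b))
```

The statistic is then referred to an F distribution to obtain a p-value, which is where the power analysis described in the abstract would come in.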
Time Series Analysis Based on Running Mann Whitney Z Statistics
A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte Carlo techniques.
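A minimal sketch of that idea (my own illustration, not the authors' code) slides two adjacent windows along a series and converts each window comparison to a Z score via the normal approximation of the U statistic:

```python
import math

def mann_whitney_z(sample1, sample2):
    """U statistic of sample1 versus sample2, normalised to a Z score under
    the null hypothesis of identical distributions (no tie correction)."""
    n1, n2 = len(sample1), len(sample2)
    u = sum(1.0 if x > y else 0.5 if x == y else 0.0
            for x in sample1 for y in sample2)
    mean_u = n1 * n2 / 2.0
    sd_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean_u) / sd_u

def running_z(series, window):
    """Compare each window with the preceding one along the series;
    sustained positive values flag an upward shift in the data."""
    return [mann_whitney_z(series[t:t + window], series[t - window:t])
            for t in range(window, len(series) - window + 1)]

trend = [0.1 * t for t in range(20)]   # invented upward-trending series
print(running_z(trend, window=5))
```

The abstract's Monte Carlo normalization would replace the closed-form mean and standard deviation used here with a simulated null distribution.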
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before the results are extended to quantiles.
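As a toy illustration of what quantile sensitivity to a distributional parameter means (my own example, not the paper's estimator), one can differentiate an empirical quantile by finite differences over common random numbers, using an exponential distribution where the answer is known in closed form:

```python
import math
import random

def exp_quantile_estimate(theta, q, uniforms):
    """Empirical q-quantile of Exp(theta) samples obtained by inverse
    transform from a fixed vector of uniforms (common random numbers)."""
    xs = sorted(-math.log(1.0 - v) / theta for v in uniforms)
    return xs[int(q * len(xs))]

rng = random.Random(7)
u = [rng.random() for _ in range(100_000)]

theta, q, h = 2.0, 0.9, 1e-4
# Central finite difference of the empirical quantile in theta; reusing the
# same uniforms at theta - h and theta + h keeps the estimate smooth.
fd = (exp_quantile_estimate(theta + h, q, u)
      - exp_quantile_estimate(theta - h, q, u)) / (2 * h)
analytic = math.log(1.0 - q) / theta ** 2   # derivative of -ln(1-q)/theta
print(fd, analytic)
```

The finite-difference estimate agrees with the analytic derivative up to Monte Carlo error in the empirical quantile itself.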
2015-01-01
Population growth will result in a significant anthropogenic environmental change worldwide through increases in developed land (DL) consumption. DL consumption is an important environmental and socioeconomic process affecting humans and ecosystems. Attention has been given to DL modeling inside highly populated cities. However, modeling DL consumption should expand to non-metropolitan areas where arguably the environmental consequences are more significant. Here, we study all counties within the conterminous U.S. and, based on a satellite-derived product (National Land Cover Dataset 2001), we calculate the associated DL for each county. By using county population data from the 2000 census we present a comparative study on DL consumption and we propose a model linking population with expected DL consumption. Results indicate distinct geographic patterns of comparatively low and high consuming counties moving from east to west. We also demonstrate that the relationship of DL consumption with population is mostly linear, altering the notion that expected population growth will have lower DL consumption if added in counties with larger population. Added DL consumption is independent of a county's starting population and only dependent on whether the county belongs to a Metropolitan Statistical Area (MSA). In the overlapping MSA and non-MSA population range there is also a constant DL efficiency gain of approximately 20 km2 for a given population for MSA counties, which suggests that transitioning from rural to urban counties has significantly higher benefits at lower populations. In addition, we analyze the socioeconomic composition of counties with extremely high or low DL consumption. High DL consumption counties have statistically lower Black/African American population, higher poverty rate and lower income per capita than average in both NMSA and MSA counties. Our analysis offers a baseline to investigate further land consumption strategies in anticipation of growing
Feature-Based Statistical Analysis of Combustion Simulation Data
Energy Technology Data Exchange (ETDEWEB)
Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T
2011-11-18
We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
Statistical learning methods in high-energy and astrophysics analysis
Energy Technology Data Exchange (ETDEWEB)
Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)
2004-11-21
We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.
Statistical learning methods in high-energy and astrophysics analysis
International Nuclear Information System (INIS)
Zimmermann, J.; Kiesling, C.
2004-01-01
We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application
The fuzzy approach to statistical analysis
Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.
2006-01-01
For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes, namely: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;
Statistical analysis applied to safety culture self-assessment
International Nuclear Information System (INIS)
Macedo Soares, P.P.
2002-01-01
Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
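The comparison tests named above are straightforward to sketch. The fragment below implements Welch's two-sample t statistic and the two-sample Kolmogorov-Smirnov statistic on invented Likert-scale survey scores (not the paper's data; a complete analysis would also convert these statistics to p-values):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical distribution functions."""
    pts = sorted(set(a) | set(b))
    na, nb = len(a), len(b)
    return max(abs(sum(x <= t for x in a) / na - sum(y <= t for y in b) / nb)
               for t in pts)

# Invented 1-5 Likert scores from two departments.
dept_a = [4, 4, 5, 3, 4]
dept_b = [2, 3, 3, 2, 4]
print(welch_t(dept_a, dept_b), ks_statistic(dept_a, dept_b))
```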
Foundation of statistical energy analysis in vibroacoustics
Le Bot, A
2015-01-01
This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.
Statistical Analysis of Big Data on Pharmacogenomics
Fan, Jianqing; Liu, Han
2013-01-01
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrix for understanding correlation structure, inverse covariance matrix for network modeling, large-scale simultaneous tests for selecting significantly differently expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecule mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed. PMID:23602905
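One of the large-scale simultaneous testing tools this setting typically relies on, the Benjamini-Hochberg step-up procedure for false discovery rate control, fits in a few lines (the per-gene p-values below are invented for illustration):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Indices of hypotheses rejected by the Benjamini-Hochberg step-up
    procedure, which controls the false discovery rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0  # largest rank whose sorted p-value clears the BH line alpha*rank/m
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

# Hypothetical p-values from simultaneous two-sample tests on ten genes.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals, alpha=0.05))
```

Note that the step-up rule rejects every hypothesis up to the largest rank clearing the line, even if some intermediate p-values fall above it.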
HistFitter software framework for statistical data analysis
Energy Technology Data Exchange (ETDEWEB)
Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)
2015-04-15
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)
HistFitter software framework for statistical data analysis
International Nuclear Information System (INIS)
Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.
2015-01-01
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)
Robust statistics and geochemical data analysis
International Nuclear Information System (INIS)
Di, Z.
1987-01-01
Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures
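The contrast between ordinary least-squares and a robust alternative is easy to see with a toy regression. This is my own illustration using the Theil-Sen (median-of-pairwise-slopes) estimator, not the paper's robust PCA or SINE estimates:

```python
import statistics

def ols_slope(xs, ys):
    """Ordinary least-squares slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def theil_sen_slope(xs, ys):
    """Median of all pairwise slopes -- robust to a minority of outliers."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i in range(len(xs)) for j in range(i + 1, len(xs))
              if xs[j] != xs[i]]
    return statistics.median(slopes)

# Clean linear trend y = 2x with one gross outlier, as might arise from a
# contaminated geochemical sample.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0, 4.0, 6.0, 8.0, 10.0, 40.0]   # last point is the outlier
print(ols_slope(xs, ys), theil_sen_slope(xs, ys))
```

The single outlier triples the OLS slope while leaving the robust estimate at the true value.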
Using Pre-Statistical Analysis to Streamline Monitoring Assessments
International Nuclear Information System (INIS)
Reed, J.K.
1999-01-01
A variety of statistical methods exist to aid evaluation of groundwater quality and subsequent decision making in regulatory programs. These methods are applied because of the large temporal and spatial extrapolations commonly applied to these data. In short, statistical conclusions often serve as a surrogate for knowledge. However, facilities with mature monitoring programs that have generated abundant data have inherently less uncertainty because of the sheer quantity of analytical results. In these cases, statistical tests can be less important, and ''expert'' data analysis should assume an important screening role. The WSRC Environmental Protection Department, working with the General Separations Area BSRI Environmental Restoration project team, has developed a method for an Integrated Hydrogeological Analysis (IHA) of historical water quality data from the F and H Seepage Basins groundwater remediation project. The IHA combines common-sense analytical techniques and a GIS presentation that force direct interactive evaluation of the data. The IHA can perform multiple data analysis tasks required by the RCRA permit. These include: (1) development of a groundwater quality baseline prior to remediation startup, (2) targeting of constituents for removal from the RCRA GWPS, (3) targeting of constituents for removal from the UIC permit, (4) targeting of constituents for reduced monitoring, (5) targeting of monitoring wells not producing representative samples, (6) reduction in statistical evaluation, and (7) identification of contamination from other facilities.
Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.
Rudert, Thomas; Lohmann, Gabriele
2008-12-01
To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM is applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrasts. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined into a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a Stroop experiment and are compared to conjunction analysis approaches for test statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to full propositional logic and adapt it from test statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
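A minimal sketch of the probability algebra behind such logical expressions, assuming voxel-wise independence between maps (the paper's actual computation on PPMs may differ); all numbers below are invented toy values:

```python
import numpy as np

# Two toy posterior probability maps (PPMs), e.g. from two sessions;
# each voxel holds P(effect | data). Values here are illustrative.
ppm_a = np.array([0.95, 0.40, 0.80, 0.10])
ppm_b = np.array([0.90, 0.85, 0.30, 0.20])

# Assuming independence between maps, logical expressions become
# simple probability algebra evaluated voxel-wise:
conj = ppm_a * ppm_b                      # A AND B
disj = ppm_a + ppm_b - ppm_a * ppm_b      # A OR B
impl = 1 - ppm_a + ppm_a * ppm_b          # A IMPLIES B  (P(not A or (A and B)))

print(conj, disj, impl)
```

Thresholding `conj` at, say, 0.95 then gives a conjunction map in the Bayesian sense, rather than a minimum-t-statistic conjunction.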
Directory of Open Access Journals (Sweden)
Fan-Yun Pai
2015-11-01
Full Text Available To consistently produce high-quality products, a quality management system such as ISO 9001:2000 or TS 16949 must be practically implemented. One core instrument of TS 16949, MSA (Measurement System Analysis), is to rank the capability of a measurement system and ensure that the quality characteristics of the product are reliably tracked through the whole manufacturing process. It is important to reduce the risk of Type I errors (acceptable goods misjudged as defective parts) and Type II errors (defective parts misjudged as good parts). An ideal measuring system would have the statistical characteristic of zero error, but such a system can hardly exist. Hence, to maintain better control of the variance that might occur in the manufacturing process, MSA is necessary for better quality control. Ball screws, which are a key component in precision machines, have significant attributes with respect to positioning and transmission. Failures of lead accuracy and axial gap of a ball screw can cause negative and expensive effects on machine positioning accuracy. Consequently, a functional measurement system can yield great savings by detecting Type I and Type II errors; if the measurement system fails with respect to the specification of the product, it will likely commit such errors. Inspectors normally follow the MSA regulations for accuracy measurement, but the choice of measuring system does not merely depend on some simple indices. In this paper, we examine the stability of a measuring system by using a Monte Carlo simulation to establish bias, linearity, the variance of the normal distribution, and the probability density function, and we forecast the possible distribution in the real case. After the simulation, the measurement capability will be improved, which helps the user classify the measurement system and establish measurement regulations for better performance and monitoring of the precision of the ball screw.
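The Type I/Type II risk of a gauge can be sketched with a small Monte Carlo experiment. Everything below (nominal value, tolerance, bias, and noise levels) is a hypothetical illustration, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ball-screw lead-accuracy setup (illustrative numbers):
target, tol = 10.000, 0.010          # nominal value and +/- tolerance
sigma_process = 0.004                # part-to-part variation
bias, sigma_gauge = 0.001, 0.002     # measurement-system bias and repeatability

n = 1_000_000
true = rng.normal(target, sigma_process, n)
measured = true + bias + rng.normal(0.0, sigma_gauge, n)

good = np.abs(true - target) <= tol
accepted = np.abs(measured - target) <= tol

type1 = np.mean(good & ~accepted)   # good parts rejected
type2 = np.mean(~good & accepted)   # bad parts accepted
print(f"Type I: {type1:.4%}  Type II: {type2:.4%}")
```

Sweeping `bias` and `sigma_gauge` over a grid is then a direct way to see how measurement-system capability trades off against misjudgment risk.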
Kleijnen, J.P.C.
1995-01-01
This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business
Czech Academy of Sciences Publication Activity Database
Novák, J.; Sůvová, H.; Vondráček, Jiří
2002-01-01
Roč. 48, č. 1 (2002), s. 9-12 ISSN 0139-570X Institutional research plan: AV0Z1030915 Keywords : financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research
Tuuli, Methodius G; Odibo, Anthony O
2011-08-01
The objective of this article is to discuss the rationale for common statistical tests used for the analysis and interpretation of prenatal diagnostic imaging studies. Examples from the literature are used to illustrate descriptive and inferential statistics. The uses and limitations of linear and logistic regression analyses are discussed in detail.
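As a minimal illustration of descriptive and inferential statistics in a diagnostic-imaging setting, the sketch below computes sensitivity, specificity, and an odds ratio with a normal-approximation confidence interval from an invented 2x2 table (not data from any cited study):

```python
import math

# Hypothetical 2x2 table: imaging test result vs. gold-standard diagnosis
#                 disease+   disease-
tp, fp = 45, 10       # test positive
fn, tn = 5, 140       # test negative

sens = tp / (tp + fn)
spec = tn / (tn + fp)
odds_ratio = (tp * tn) / (fp * fn)

# 95% CI for the odds ratio via the log-OR normal approximation
se_log_or = math.sqrt(1/tp + 1/fp + 1/fn + 1/tn)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"sens={sens:.2f} spec={spec:.2f} OR={odds_ratio:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```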
Statistical analysis of environmental data
International Nuclear Information System (INIS)
Beauchamp, J.J.; Bowman, K.O.; Miller, F.L. Jr.
1975-10-01
This report summarizes the analyses of data obtained by the Radiological Hygiene Branch of the Tennessee Valley Authority from samples taken around the Browns Ferry Nuclear Plant located in Northern Alabama. The data collection was begun in 1968, and a wide variety of sample types has been gathered on a regular basis. The statistical analysis of environmental data involving very low levels of radioactivity is discussed, and applications of computer calculations for data processing are described.
Highly Robust Statistical Methods in Medical Image Analysis
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2012-01-01
Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf
Statistical Power Analysis with Missing Data A Structural Equation Modeling Approach
Davey, Adam
2009-01-01
Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread applications. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions, such as: How does missing data affect the statistical power of a study? How much power is likely with different amounts and types...
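One way to see how missingness erodes power is the normal-approximation power formula for a two-sample comparison; the sketch below assumes listwise deletion and invented numbers, and is far cruder than the SEM-based machinery the book develops:

```python
import math

def power_two_sample(d, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sample comparison of effect size d."""
    z_crit = 1.959963984540054          # two-sided z critical value for alpha = 0.05
    ncp = d * math.sqrt(n_per_group / 2)
    # Power = Phi(ncp - z_crit), via the error function
    return 0.5 * (1 + math.erf((ncp - z_crit) / math.sqrt(2)))

# Listwise deletion under 30% missingness simply shrinks n, and power with it
full = power_two_sample(0.5, 64)
missing = power_two_sample(0.5, int(64 * 0.7))
print(round(full, 3), round(missing, 3))
```

With a medium effect (d = 0.5) and 64 per group, power is about 0.8; dropping 30% of cases cuts it to roughly 0.65, which is the basic cost that principled missing-data methods try to recover.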
Statistical Analysis of Data for Timber Strengths
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
2003-01-01
Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull ... fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors ... for timber are investigated.
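A 2-parameter Weibull fit of strength data can be sketched as follows; the data are synthetic and the routine is a generic maximum-likelihood solver, not the non-parametric and tail-fit analyses of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "bending strength" sample (MPa); Weibull with shape 5, scale 40
data = 40.0 * rng.weibull(5.0, size=2000)

def weibull_mle(x, lo=0.1, hi=50.0, tol=1e-10):
    """Maximum-likelihood fit of a 2-parameter Weibull distribution.
    Solves the standard profile-likelihood equation for the shape k
    by bisection, then recovers the scale lam in closed form."""
    logx = np.log(x)
    def g(k):
        xk = x ** k
        return (xk * logx).sum() / xk.sum() - 1.0 / k - logx.mean()
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    lam = (np.mean(x ** k)) ** (1.0 / k)
    return k, lam

k, lam = weibull_mle(data)
print(f"shape k = {k:.2f}, scale = {lam:.1f}")
```

Tail fits, as discussed in the abstract, would instead fit only the lower quantiles of `data`, since the lower tail drives structural reliability.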
Numeric computation and statistical data analysis on the Java platform
Chekanov, Sergei V
2016-01-01
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...
A Divergence Statistics Extension to VTK for Performance Analysis
Energy Technology Data Exchange (ETDEWEB)
Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]) where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
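The notion of divergence between an empirical and an "ideal" distribution can be illustrated with a Kullback-Leibler computation; the VTK engine's exact divergence measure is not specified in the abstract, KL is one common choice, and all data below are simulated:

```python
import numpy as np

rng = np.random.default_rng(7)
# Observed quantity, e.g. per-task runtimes from an HPC run (illustrative)
obs = rng.normal(1.0, 0.25, size=100_000)

# Bin the observations and compare against the "ideal" model distribution
edges = np.linspace(0.0, 2.0, 41)
counts, _ = np.histogram(obs, bins=edges)
p = counts / counts.sum()                       # empirical distribution

centers = 0.5 * (edges[:-1] + edges[1:])
q = np.exp(-0.5 * ((centers - 1.0) / 0.25) ** 2)
q /= q.sum()                                    # theoretical distribution, binned

# Kullback-Leibler divergence D(p || q): one way to quantify the discrepancy
mask = p > 0
kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))
print(f"D(p||q) = {kl:.4f} nats")
```

Here the observed data actually follow the model, so the divergence is near zero; a mismatched run would push it up.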
Developments in statistical analysis in quantitative genetics
DEFF Research Database (Denmark)
Sorensen, Daniel
2009-01-01
A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and by Markov chain Monte Carlo (McMC). In this overview a number of specific areas are chosen to illustrate the enormous flexibility that McMC has provided for fitting models and exploring features of data that were previously inaccessible. The selected areas are inferences of the trajectories over time of genetic means and variances, models for the analysis of categorical and count data, the statistical genetics of a model postulating that environmental variance is partly under genetic control, and a short discussion of models that incorporate massive genetic marker information. We provide an overview...
On the Statistical Validation of Technical Analysis
Directory of Open Access Journals (Sweden)
Rosane Riera Freire
2007-06-01
Full Text Available Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without taking care of (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data, consisting of various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities' returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely triangle, rectangle, and head and shoulders.
Data management and statistical analysis for environmental assessment
International Nuclear Information System (INIS)
Wendelberger, J.R.; McVittie, T.I.
1995-01-01
Data management and statistical analysis for environmental assessment are important issues on the interface of computer science and statistics. Data collection for environmental decision making can generate large quantities of various types of data. A database/GIS system is described that provides efficient data storage as well as visualization tools which may be integrated into the data analysis process. FIMAD is a living database and GIS system; it has changed and developed over time to meet the needs of the Los Alamos National Laboratory Restoration Program. The system provides a repository for data which may be accessed by different individuals for different purposes. The database structure is driven by the large amount and varied types of data required for environmental assessment. The integration of the database with the GIS system provides the foundation for powerful visualization and analysis capabilities.
Compliance strategy for statistically based neutron overpower protection safety analysis methodology
International Nuclear Information System (INIS)
Holliday, E.; Phan, B.; Nainer, O.
2009-01-01
The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)
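Extreme Value Statistics of the Gumbel (Type-I) kind can be sketched with a method-of-moments fit to simulated block maxima; the variable, parameter values, and the 95th-percentile limit below are illustrative stand-ins, not NOP analysis results:

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative stand-in: per-run maxima of a simulated margin variable
block_maxima = rng.gumbel(loc=1.05, scale=0.02, size=5000)

# Method-of-moments fit of the Gumbel (Type-I extreme value) distribution
euler_gamma = 0.5772156649015329
beta = np.std(block_maxima) * np.sqrt(6.0) / np.pi
mu = np.mean(block_maxima) - euler_gamma * beta

# 95th-percentile compliance-style limit from the fitted model
q95 = mu - beta * np.log(-np.log(0.95))
print(f"mu={mu:.4f} beta={beta:.4f} q95={q95:.4f}")
```

Treating the fitted upper quantile, rather than the raw sample maximum, as the figure of merit is the basic move that lets an EVS-based methodology make probabilistic compliance statements.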
Diagnosis checking of statistical analysis in RCTs indexed in PubMed.
Lee, Paul H; Tse, Andy C Y
2017-11-01
Statistical analysis is essential for reporting the results of randomized controlled trials (RCTs), as well as for evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether the assumptions of that analysis are valid. To provide a complete picture of how RCTs handle the assumptions of statistical analysis, we reviewed all RCTs published in December 2014 in journals indexed in PubMed, using the Cochrane highly sensitive search strategy. The 2014 impact factors of the journals were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were reviewed. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (11·2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted, with only 20%, 8·6% and 7% checked for the generalized linear model, Cox proportional hazard model and multilevel model, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of CONSORT guidelines) were not associated with the reporting of diagnosis checking. Diagnosis checking of statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of a diagnosis of assumptions. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.
A κ-generalized statistical mechanics approach to income analysis
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
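The κ-generalized framework rests on the Kaniadakis κ-exponential; the sketch below uses the commonly quoted survival form S(x) = exp_κ(-βx^α) with invented parameter values and checks the Pareto power-law tail numerically (the paper's estimation procedure and parameterization may differ):

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

# Survival function of the kappa-generalized income distribution,
# S(x) = exp_kappa(-beta * x**alpha); parameter values are illustrative
def survival(x, alpha=2.0, beta=1.0, kappa=0.75):
    return exp_kappa(-beta * x**alpha, kappa)

x = np.logspace(-1, 2, 200)
S = survival(x)
# Pareto power-law tail: for large x, S(x) ~ x**(-alpha/kappa)
tail_slope = (np.log(S[-1]) - np.log(S[-10])) / (np.log(x[-1]) - np.log(x[-10]))
print(f"empirical tail slope ~ {tail_slope:.2f} (theory: {-2.0/0.75:.2f})")
```

The single family thus interpolates between a stretched-exponential bulk (small x) and a Pareto tail with exponent α/κ, which is what makes it suitable for the whole income spectrum.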
A κ-generalized statistical mechanics approach to income analysis
International Nuclear Information System (INIS)
Clementi, F; Gallegati, M; Kaniadakis, G
2009-01-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful
Normality Tests for Statistical Analysis: A Guide for Non-Statisticians
Ghasemi, Asghar; Zahediasl, Saleh
2012-01-01
Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808
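A quick normality screen of the kind the commentary discusses (there via SPSS) can be sketched with the Jarque-Bera statistic, built only on sample skewness and kurtosis; the data below are simulated:

```python
import numpy as np

def jarque_bera(x):
    """Simple normality screen based on sample skewness and excess kurtosis
    (the Jarque-Bera statistic; compare against chi-square(2), 5.99 at 5%)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = x - x.mean()
    m2 = np.mean(z**2)
    skew = np.mean(z**3) / m2**1.5
    kurt = np.mean(z**4) / m2**2 - 3.0
    return n / 6.0 * (skew**2 + kurt**2 / 4.0)

rng = np.random.default_rng(5)
jb_normal = jarque_bera(rng.normal(size=5000))
jb_skewed = jarque_bera(rng.exponential(size=5000))
print(jb_normal, jb_skewed)
```

The normal sample stays near the null, while the skewed sample produces an enormous statistic, the situation in which a parametric test's validity would be in doubt.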
Hendikawati, P.; Arifudin, R.; Zahid, M. Z.
2018-03-01
This study aims to design an Android Statistics Data Analysis application that can be accessed through mobile devices, making it easier for users to access. The Statistics Data Analysis application covers various topics of basic statistics along with a parametric statistics data analysis application. The output of this application is parametric statistics data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed using the Java programming language; the server side uses PHP with the CodeIgniter framework, and the database is MySQL. The system development methodology used is the Waterfall methodology, with the stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lecturing activities and make it easier for students to understand statistical analysis on mobile devices.
Statistical analysis of metallicity in spiral galaxies
Energy Technology Data Exchange (ETDEWEB)
Galeotti, P [Consiglio Nazionale delle Ricerche, Turin (Italy). Lab. di Cosmo-Geofisica; Turin Univ. (Italy). Ist. di Fisica Generale)
1981-04-01
A principal component analysis of metallicity and other integral properties of 33 spiral galaxies is presented; the parameters involved are morphological type, diameter, luminosity and metallicity. From the statistical analysis it is concluded that the sample has only two significant dimensions, and additional tests, involving different parameters, show similar results. Thus it seems that only type and luminosity are independent variables, with the other integral properties of spiral galaxies correlated with them.
Five decades of promotion techniques in cigarette advertising: a longitudinal content analysis.
Paek, Hye-Jin; Reid, Leonard N; Jeong, Hyun Ju; Choi, Hojoon; Krugman, Dean
2012-01-01
This study examines frequencies and types of promotion techniques featured in five decades of cigarette advertising relative to five major smoking eras. Analysis of 1,133 cigarette advertisements collected through multistage sampling of 1954 through 2003 issues of three youth-oriented magazines found that 7.6% of the analyzed ads featured at least one promotion technique. Across smoking eras the proportion of promotion in the ads steadily increased from 1.6% in the "pre-broadcast ban era" to 10.9% in the "pre-Master Settlement Agreement (MSA) era" and 9% in the "post-MSA era." The increased use of sponsorships/events in cigarette ads for youth-oriented brands warrants more attention from tobacco control experts and government regulators.
Statistical Analysis of Protein Ensembles
Máté, Gabriell; Heermann, Dieter
2014-04-01
As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially since the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.
State analysis of BOP using statistical and heuristic methods
International Nuclear Information System (INIS)
Heo, Gyun Young; Chang, Soon Heung
2003-01-01
Under the deregulation environment, the performance enhancement of the balance of plant (BOP) in nuclear power plants is being highlighted. To analyze the performance level of the BOP, performance test procedures provided by an authorized institution such as ASME are used. However, plant investigation showed that the requirements of these procedures concerning the reliability and quantity of sensors were difficult to satisfy. As a solution, a state analysis method, an expanded concept of signal validation, was proposed on the basis of statistical and heuristic approaches. The authors recommended a statistical linear regression model, obtained by analyzing correlations among BOP parameters, as the reference state analysis method. Its advantages are that its derivation is not heuristic, model uncertainty can be calculated, and it is easy to apply to an actual plant. The error of the statistical linear regression model is below 3% under normal as well as abnormal system states. Additionally, a neural network model was recommended, since the statistical model cannot be applied to the validation of all of the sensors and is sensitive to outliers, i.e., signals located outside a statistical distribution. Because many sensors in the BOP need to be validated, wavelet analysis (WA) was applied as a pre-processor to reduce the input dimension and to enhance training accuracy. The outlier-localization capability of WA enhanced the robustness of the neural network. The trained neural network restored the degraded signals to values within ±3% of the true signals.
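The reference-model idea (predict one sensor from correlated sensors and flag large residuals) can be sketched as follows; the plant variables, coefficients, and the 3% threshold are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(11)
# Simulated BOP-style plant records: feedwater flow, steam flow, power
n = 500
flow = rng.normal(100.0, 2.0, n)
steam = 0.98 * flow + rng.normal(0.0, 0.3, n)
power = 3.0 * flow + 0.5 * steam + rng.normal(0.0, 1.0, n)

# Reference model: predict the power sensor from the other correlated sensors
X = np.column_stack([np.ones(n), flow, steam])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)

# Validation: flag samples where the sensor deviates >3% from the model
drifted = power.copy()
drifted[-25:] += 0.08 * power[-25:]          # inject an 8% drift at the end
resid = np.abs(drifted - X @ coef) / (X @ coef)
flagged = np.where(resid > 0.03)[0]
print(f"{len(flagged)} samples flagged; first at index {flagged.min()}")
```

Because the model uncertainty of a linear regression is computable in closed form, the 3% band could be replaced by a properly derived prediction interval, which matches the advantage the abstract claims for the statistical approach.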
Precision Statistical Analysis of Images Based on Brightness Distribution
Directory of Open Access Journals (Sweden)
Muzhir Shaban Al-Ani
2017-07-01
Full Text Available Studying the content of images is considered an important topic through which reasonable and accurate analysis of images can be generated. Image analysis has recently become a vital field because of the huge number of images transferred via transmission media in our daily life, and these image-crowded media have made image analysis a highlighted research area. In this paper, the implemented system passes through many steps to compute the statistical measures of standard deviation and mean values of both color and grey images, and the last step of the proposed method compares the results obtained in the different cases of the test phase. The statistical parameters are implemented to characterize the content of an image and its texture: standard deviation, mean and correlation values are used to study the intensity distribution of the tested images. Reasonable results are obtained for both standard deviation and mean values via the implementation of the system. The major issue addressed in the work is the brightness distribution, studied via statistical measures under different types of lighting.
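The per-image statistics described above reduce to a few array operations; the sketch below uses a random stand-in for a loaded image, and the luminance weights are the common Rec. 601 values, an assumption not stated in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2024)
# Stand-in for a loaded RGB image: height x width x 3, values 0-255
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)

# Grey conversion via the usual Rec. 601 luminance weights
grey = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

# Per-channel and grey statistics describing the brightness distribution
means = img.mean(axis=(0, 1))
stds = img.std(axis=(0, 1))
grey_mean, grey_std = grey.mean(), grey.std()

# Correlation between two channels, e.g. red vs. green intensity
r = np.corrcoef(img[..., 0].ravel(), img[..., 1].ravel())[0, 1]
print(means, stds, grey_mean, grey_std, r)
```

On a real photograph the channel correlations would be strongly positive; the random stand-in gives near-zero correlation, which is itself a useful sanity check for the pipeline.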
Fisher statistics for analysis of diffusion tensor directional information.
Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P
2012-04-30
A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample, and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and the associated p-value giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; in the hilus, however, significant differences were found, demonstrating the utility of the approach for statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
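Fisher's spherical statistics reduce to simple vector sums; the sketch below computes the mean direction and the usual concentration estimate kappa = (N-1)/(N-R) from simulated direction vectors, not DTI data:

```python
import numpy as np

rng = np.random.default_rng(13)

# Simulated principal-diffusion directions for one ROI: unit vectors
# scattered around a true fiber orientation (here the x-axis)
true_dir = np.array([1.0, 0.0, 0.0])
noise = rng.normal(0.0, 0.05, size=(50, 3))
vecs = true_dir + noise
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

# Fisher statistics: mean direction and resultant length R
resultant = vecs.sum(axis=0)
R = np.linalg.norm(resultant)
mean_dir = resultant / R

# Concentration parameter (large kappa = tightly clustered directions)
N = len(vecs)
kappa = (N - 1) / (N - R)

print(mean_dir, R, kappa)
```

Computing these per group and comparing mean directions is the descriptive half of the approach; Watson's F-statistic then supplies the inferential half.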
Statistical analysis of RHIC beam position monitors performance
Calaga, R.; Tomás, R.
2004-04-01
A detailed statistical analysis of beam position monitors (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
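The SVD idea can be sketched on simulated turn-by-turn data: a BPM that does not participate in the dominant coherent modes stands out. The mode structure, noise levels, and the coherence score below are illustrative assumptions, not the RHIC analysis:

```python
import numpy as np

rng = np.random.default_rng(8)
# Turn-by-turn orbit data: rows = turns, columns = BPMs.
# Healthy BPMs see two coherent betatron-like modes plus small noise.
turns, n_bpm = 1024, 60
t = np.arange(turns)
phase = rng.uniform(0, 2 * np.pi, n_bpm)
signal = (np.sin(0.22 * 2 * np.pi * t)[:, None] * np.cos(phase)
          + np.sin(0.24 * 2 * np.pi * t)[:, None] * np.sin(phase))
data = signal + rng.normal(0.0, 0.02, (turns, n_bpm))
data[:, 7] = rng.normal(0.0, 1.0, turns)     # BPM 7 returns pure noise

U, s, Vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)

# A malfunctioning BPM carries little weight in the dominant physical
# modes relative to its total signal power
coherent = np.sum((s[:2, None] * Vt[:2]) ** 2, axis=0)
total = np.sum(data**2, axis=0)
coherence = coherent / total
bad = np.argmin(coherence)
print(f"flagged BPM: {bad}")
```

A Fourier-based check, as the abstract notes, would flag the same BPM by the absence of the betatron lines in its spectrum, which is why agreement between the two methods is a useful cross-validation.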
Statistical analysis of RHIC beam position monitors performance
Directory of Open Access Journals (Sweden)
R. Calaga
2004-04-01
Full Text Available A detailed statistical analysis of beam position monitor (BPM) performance at RHIC is a critical factor in improving regular operations and future runs. Robust identification of malfunctioning BPMs plays an important role in any orbit or turn-by-turn analysis. Singular value decomposition and Fourier transform methods, which have evolved as powerful numerical techniques in signal processing, will aid in such identification from BPM data. This is the first attempt at RHIC to use a large set of data to statistically enhance the capability of these two techniques and determine BPM performance. A comparison from run 2003 data shows striking agreement between the two methods and hence can be used to improve BPM functioning at RHIC and possibly other accelerators.
Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis
Reston, Enriqueta; Krishnan, Saras; Idris, Noraini
2014-01-01
This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…
Statistical analysis of next generation sequencing data
Nettleton, Dan
2014-01-01
Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...
Selected papers on analysis, probability, and statistics
Nomizu, Katsumi
1994-01-01
This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.
Analysis of statistical misconception in terms of statistical reasoning
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done through various levels of education. However, the skill remains low because many people, students included, assume that statistics is merely the ability to count and use formulas. Students also still have negative attitudes toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of the mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If 65 is taken as the minimal value for standard achievement of course competence, the students' mean values are below the standard. The results of the misconception study highlight which subtopics should be given attention. Based on the assessment results, it was found that students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
Comparative analysis of positive and negative attitudes toward statistics
Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah
2015-02-01
Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during a statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master the core content of the subject matter under study. Students who hold negative attitudes toward statistics, by contrast, often feel depressed (especially in group assignments), are at risk of failure, are often highly emotional, and struggle to move forward. This study therefore investigates students' attitudes toward learning statistics. Six latent constructs were used to measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes towards Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP); the respondents were students taking the applied statistics course in different faculties. From the analysis, the questionnaire was found acceptable, and the proposed relationships among the constructs were investigated. The students showed full effort to master the statistics course, found the course enjoyable, were confident in their intellectual capacity, and on the whole held more positive than negative attitudes toward learning statistics. In conclusion, positive attitudes toward statistics were mostly exhibited in the affect, cognitive competence, value, interest, and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.
Energy Technology Data Exchange (ETDEWEB)
Wang, Po-Shan [National Yang-Ming University, Department of Neurology, School of Medicine, Taipei (China); Taipei Veterans General Hospital, Neurological Institute, Taipei (China); Taipei Municipal Gan-Dau Hospital, Neurological Institute, Taipei (China); National Yang-Ming University, Institute of Brain Science, Taipei (China); Wu, Hsiu-Mei [National Yang-Ming University, Department of Neurology, School of Medicine, Taipei (China); Taipei Veterans General Hospital, Department of Radiology, Taipei (China); Lin, Ching-Po [National Yang-Ming University, Institute of Brain Science, Taipei (China); Soong, Bing-Wen [National Yang-Ming University, Department of Neurology, School of Medicine, Taipei (China); Taipei Veterans General Hospital, Neurological Institute, Taipei (China)
2011-07-15
We performed diffusion tensor imaging to determine if multiple system atrophy (MSA)-cerebellar (C) and MSA-Parkinsonism (P) show similar changes, as shown in pathological studies. Nineteen patients with MSA-C, 12 patients with MSA-P, 20 patients with Parkinson disease, and 20 healthy controls were evaluated with the use of voxel-based morphometry analysis of diffusion tensor imaging. There was an increase in apparent diffusion coefficient values in the middle cerebellar peduncles and cerebellum and a decrease in fractional anisotropy in the pyramidal tract, middle cerebellar peduncles, and white matter of the cerebellum in patients with MSA-C and MSA-P compared to the controls (P<0.001). In addition, isotropic diffusion-weighted image values were reduced in the cerebellar cortex and deep cerebellar nuclei in patients with MSA-C and increased in the basal ganglia in patients with MSA-P. These results indicate that despite their disparate clinical manifestations, patients with MSA-C and MSA-P share similar diffusion tensor imaging features in the infratentorial region. Further, the combination of FA, ADC and iDWI images can be used to distinguish between MSA (either form) and Parkinson disease, which has potential therapeutic implications. (orig.)
Vapor Pressure Data Analysis and Statistics
2016-12-01
Typical values of the Antoine-type constants A, B, and C are near 8, 2000, and 200, respectively. The A (or a) value is directly related to vapor pressure and will be greater for high vapor pressure materials. In the fitted equation, n is the number of data points, Yi is the natural logarithm of the ith experimental vapor pressure value, and Xi is the... (Vapor Pressure Data Analysis and Statistics, ECBC-TR-1422, Ann Brozena, Research and Technology Directorate)
Statistical analysis of planktic foraminifera of the surface Continental ...
African Journals Online (AJOL)
Planktic foraminiferal assemblage recorded from selected samples obtained from shallow continental shelf sediments off southwestern Nigeria were subjected to statistical analysis. The Principal Component Analysis (PCA) was used to determine variants of planktic parameters. Values obtained for these parameters were ...
Imaging mass spectrometry statistical analysis.
Jones, Emrys A; Deininger, Sören-Oliver; Hogendoorn, Pancras C W; Deelder, André M; McDonnell, Liam A
2012-08-30
Imaging mass spectrometry is increasingly used to identify new candidate biomarkers. This clinical application of imaging mass spectrometry is highly multidisciplinary: expertise in mass spectrometry is necessary to acquire high quality data, histology is required to accurately label the origin of each pixel's mass spectrum, disease biology is necessary to understand the potential meaning of the imaging mass spectrometry results, and statistics to assess the confidence of any findings. Imaging mass spectrometry data analysis is further complicated because of the unique nature of the data (within the mass spectrometry field); several of the assumptions implicit in the analysis of LC-MS/profiling datasets are not applicable to imaging. The very large size of imaging datasets and the reporting of many data analysis routines, combined with inadequate training and accessible reviews, have exacerbated this problem. In this paper we provide an accessible review of the nature of imaging data and the different strategies by which the data may be analyzed. Particular attention is paid to the assumptions of the data analysis routines to ensure that the reader is apprised of their correct usage in imaging mass spectrometry research. Copyright © 2012 Elsevier B.V. All rights reserved.
Clinical Features in a Danish Population-Based Cohort of Probable Multiple System Atrophy Patients
DEFF Research Database (Denmark)
Starhof, Charlotte; Korbo, Lise; Lassen, Christina Funch
2016-01-01
Background: Multiple system atrophy (MSA) is a rare, sporadic and progressive neurodegenerative disorder. We aimed to describe the clinical features of Danish probable MSA patients, evaluate their initial response to dopaminergic therapy and examine mortality. Methods: From the Danish National...... the criteria for probable MSA. We recorded clinical features, examined differences by MSA subtype and used Kaplan-Meier survival analysis to examine mortality. Results: The mean age at onset of patients with probable MSA was 60.2 years (range 36-75 years) and mean time to wheelchair dependency was 4.7 years...
Applied Behavior Analysis and Statistical Process Control?
Hopkins, B. L.
1995-01-01
Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.
Statistical analysis of JET disruptions
International Nuclear Information System (INIS)
Tanga, A.; Johnson, M.F.
1991-07-01
In the operation of JET, as of any tokamak, many discharges are terminated by a major disruption. The disruptive termination of a discharge is usually an unwanted event which may cause damage to the structure of the vessel. In a reactor, disruptions are potentially a very serious problem, hence the importance of studying them and devising methods to avoid them. Statistical information has been collected about the disruptions which have occurred at JET over a long span of operations. The analysis is focused on the operational aspects of the disruptions rather than on the underlying physics. (Author)
Huh, Young Eun; Park, Jongkyu; Suh, Mee Kyung; Lee, Sang Eun; Kim, Jumin; Jeong, Yuri; Kim, Hee-Tae; Cho, Jin Whan
2015-08-01
In Parkinson variant of multiple system atrophy (MSA-P), patterns of early speech impairment and their distinguishing features from Parkinson's disease (PD) require further exploration. Here, we compared speech data among patients with early-stage MSA-P, PD, and healthy subjects using quantitative acoustic and perceptual analyses. Variables were analyzed for men and women in view of gender-specific features of speech. Acoustic analysis revealed that male patients with MSA-P exhibited more profound speech abnormalities than those with PD, regarding increased voice pitch, prolonged pause time, and reduced speech rate. This might be due to widespread pathology of MSA-P in nigrostriatal or extra-striatal structures related to speech production. Although several perceptual measures were mildly impaired in MSA-P and PD patients, none of these parameters showed a significant difference between patient groups. Detailed speech analysis using acoustic measures may help distinguish between MSA-P and PD early in the disease process. Copyright © 2015 Elsevier Inc. All rights reserved.
Simulation Experiments in Practice : Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independently distributed.
Statistical analysis of the Ft. Calhoun reactor coolant pump system
International Nuclear Information System (INIS)
Patel, Bimal; Heising, C.D.
1997-01-01
In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are under control specification limits and four parameters are not in the state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (Author)
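The Shewhart X-bar and R-chart step of the analysis can be sketched as follows. The subgroup readings are hypothetical stand-ins for RCP parameter measurements, and A2, D3, D4 are the standard control-chart factors for subgroups of size 5:

```python
import statistics

# Standard control-chart constants for subgroup size n = 5.
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical readings of one RCP parameter, four subgroups of five.
subgroups = [
    [10.1, 10.3, 9.9, 10.0, 10.2],
    [10.0, 10.4, 10.1, 9.8, 10.1],
    [9.9, 10.2, 10.0, 10.1, 10.0],
    [10.2, 10.0, 10.3, 9.9, 10.1],
]
xbars = [statistics.mean(g) for g in subgroups]      # subgroup means
ranges = [max(g) - min(g) for g in subgroups]        # subgroup ranges
xbarbar, rbar = statistics.mean(xbars), statistics.mean(ranges)

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar  # X-bar chart limits
ucl_r, lcl_r = D4 * rbar, D3 * rbar                      # R chart limits
in_control = all(lcl_x <= x <= ucl_x for x in xbars)
```

A parameter whose subgroup means stray outside (lcl_x, ucl_x) would be flagged as out of statistical control, the early-warning condition the abstract describes.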
Research and Development of Statistical Analysis Software System of Maize Seedling Experiment
Hui Cao
2014-01-01
In this study, software engineering methods were used to develop a software system for the statistics and analysis of maize seedling experiments. During development, a B/S (browser/server) structure software design method was used, and a set of statistical indicators for maize seedling evaluation was established. The experimental results indicated that this software system could carry out quality statistics and analysis for maize seedlings very well. The development of this software system explored a...
Statistical trend analysis methods for temporal phenomena
Energy Technology Data Exchange (ETDEWEB)
Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)
1997-04-01
We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of the hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, and particularly, if the Poisson process assumption is very questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: The Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered the Bayesian trend analysis. First we discuss a Bayesian model, which is based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted, that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
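As a sketch of one of the four non-parametric procedures named above, the Cox-Stuart test pairs each observation in the first half of a series with its counterpart in the second half and tests the sign counts against a Binomial(m, 0.5) null. The event-count series below is illustrative, not from the report:

```python
from math import comb

def cox_stuart(series):
    """Two-sided Cox-Stuart trend test; returns (positive signs, pairs, p)."""
    n = len(series)
    half = n // 2
    pairs = zip(series[:half], series[n - half:])   # middle value dropped if n odd
    diffs = [b - a for a, b in pairs if b != a]     # ties are discarded
    m = len(diffs)
    pos = sum(d > 0 for d in diffs)
    k = min(pos, m - pos)
    p = 2 * sum(comb(m, i) for i in range(k + 1)) / 2 ** m
    return pos, m, min(p, 1.0)

# A monotonically increasing intensity gives all-positive signs and a small p.
pos, m, p = cox_stuart([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
```

With five pairs all increasing, the two-sided p-value is 2/32 = 0.0625, weak evidence of trend from so few events, which is the regime the report is concerned with.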
StOCNET : Software for the statistical analysis of social networks
Huisman, M.; van Duijn, M.A.J.
2003-01-01
StOCNET3 is an open software system in a Windows environment for the advanced statistical analysis of social networks. It provides a platform to make a number of recently developed and therefore not (yet) standard statistical methods available to a wider audience. A flexible user interface utilizing
AutoBayes: A System for Generating Data Analysis Programs from Statistical Models
Fischer, Bernd; Schumann, Johann
2003-01-01
Data analysis is an important scientific task which is required whenever information needs to be extracted from raw data. Statistical approaches to data analysis, which use methods from probability theory and numerical analysis, are well-founded but difficult to implement: the development of a statistical data analysis program for any given application is time-consuming and requires substantial knowledge and experience in several areas. In this paper, we describe AutoBayes, a program synthesis...
Network similarity and statistical analysis of earthquake seismic data
Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban
2016-01-01
We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...
Statistical analysis and interpolation of compositional data in materials science.
Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M
2015-02-09
Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
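The centered log-ratio (clr) transform is one of the specialized tools the abstract alludes to: it maps a composition from the simplex into ordinary Euclidean space, where means and distances behave as usual. A minimal sketch, with a hypothetical three-element composition:

```python
import math

def clr(composition):
    """Centered log-ratio transform of a composition (positive parts)."""
    g = math.exp(sum(math.log(x) for x in composition) / len(composition))
    return [math.log(x / g) for x in composition]  # log of part / geometric mean

atomic_fractions = [0.2, 0.3, 0.5]  # hypothetical atomic concentrations, sum to 1
z = clr(atomic_fractions)
# clr coordinates always sum to zero, reflecting the constant-sum constraint
```

Euclidean statistics (means, correlations, interpolation) applied to z rather than to the raw fractions avoid the spurious dependencies the abstract warns about.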
An Application of Multivariate Statistical Analysis for Query-Driven Visualization
Energy Technology Data Exchange (ETDEWEB)
Gosink, Luke J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Garth, Christoph [Univ. of California, Davis, CA (United States); Anderson, John C. [Univ. of California, Davis, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Joy, Kenneth I. [Univ. of California, Davis, CA (United States)
2011-03-01
Driven by the ability to generate ever-larger, increasingly complex data, there is an urgent need in the scientific community for scalable analysis methods that can rapidly identify salient trends in scientific data. Query-Driven Visualization (QDV) strategies are among the small subset of techniques that can address both large and highly complex datasets. This paper extends the utility of QDV strategies with a statistics-based framework that integrates non-parametric distribution estimation techniques with a new segmentation strategy to visually identify statistically significant trends and features within the solution space of a query. In this framework, query distribution estimates help users to interactively explore their query's solution and visually identify the regions where the combined behavior of constrained variables is most important, statistically, to their inquiry. Our new segmentation strategy extends the distribution estimation analysis by visually conveying the individual importance of each variable to these regions of high statistical significance. We demonstrate the analysis benefits these two strategies provide and show how they may be used to facilitate the refinement of constraints over variables expressed in a user's query. We apply our method to datasets from two different scientific domains to demonstrate its broad applicability.
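A minimal sketch of the kind of non-parametric distribution estimate such a framework builds on, here a one-dimensional Gaussian kernel density estimate over a single constrained variable; the sample values are hypothetical:

```python
import math

def kde(samples, x, bandwidth=0.5):
    """Gaussian kernel density estimate at point x."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)

# Values of one constrained variable for records inside a query's solution.
query_hits = [1.0, 1.2, 1.1, 0.9, 3.0]
d_cluster = kde(query_hits, 1.0)   # inside the dense region of the solution
d_outlier = kde(query_hits, 5.0)   # far from any hit
```

Regions where the estimated density is high are the "statistically important" regions a user would be directed to explore first.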
Explorations in Statistics: The Analysis of Ratios and Normalized Data
Curran-Everett, Douglas
2013-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized--or standardized--data. As researchers, we compute a ratio--a numerator divided by a denominator--to compute a…
Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.
2011-01-01
Comet Enflow is a commercially available, high-frequency vibroacoustic analysis software package founded on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). EFEA was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) and experimental results. The SEA predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.
Simulation Experiments in Practice : Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is
Finger tapping analysis in patients with Parkinson's disease and atypical parkinsonism.
Djurić-Jovičić, Milica; Petrović, Igor; Ječmenica-Lukić, Milica; Radovanović, Saša; Dragašević-Mišković, Nataša; Belić, Minja; Miler-Jerković, Vera; Popović, Mirjana B; Kostić, Vladimir S
2016-08-01
The goal of this study was to investigate repetitive finger tapping patterns in patients with Parkinson's disease (PD), progressive supranuclear palsy-Richardson syndrome (PSP-R), or multiple system atrophy of parkinsonian type (MSA-P). The finger tapping performance was objectively assessed in PD (n=13), PSP-R (n=15), and MSA-P (n=14) patients and matched healthy controls (HC; n=14), using miniature inertial sensors positioned on the thumb and index finger, providing spatio-temporal kinematic parameters. The main finding was the lack of, or only minimal, progressive reduction in amplitude during the finger tapping in PSP-R patients, similar to HC, but significantly different from the sequence effect (progressive decrement) in both PD and MSA-P patients. The mean negative amplitude slope of -0.12°/cycle revealed less progression of amplitude decrement even in comparison to HC (-0.21°/cycle, p=0.032), and particularly in comparison to PD (-0.56°/cycle, p=0.001) and MSA-P patients (-1.48°/cycle, p=0.003). No significant differences were found in the average finger separation amplitudes between PD, PSP-R and MSA-P patients (p(MSA vs. PD)=0.726, p(MSA vs. PSP)=0.363, p(PSP vs. PD)=0.726). The lack of a clinically significant sequence effect during finger tapping differentiated PSP-R from both PD and MSA-P patients, and might be specific for PSP-R. The finger tapping kinematic parameter of amplitude slope may be a neurophysiological marker able to differentiate particular forms of parkinsonism. Copyright © 2016 Elsevier Ltd. All rights reserved.
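The amplitude-slope marker reported above amounts to fitting a least-squares line to the per-cycle tap amplitudes and reading off the slope in degrees per cycle. A sketch with hypothetical amplitudes, not the study's data:

```python
def amplitude_slope(amplitudes):
    """Least-squares slope of amplitude (deg) vs. tap-cycle index."""
    n = len(amplitudes)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(amplitudes) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, amplitudes))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# A PD-like sequence effect: amplitude shrinking by 0.5 deg every cycle.
decrementing = [50 - 0.5 * i for i in range(10)]
slope = amplitude_slope(decrementing)  # -0.5 deg/cycle
```

A slope near zero (as in PSP-R and controls) indicates no sequence effect; strongly negative slopes (as in PD and MSA-P) indicate progressive decrement.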
Statistical trend analysis methodology for rare failures in changing technical systems
International Nuclear Information System (INIS)
Ott, K.O.; Hoffmann, H.J.
1983-07-01
A methodology for a statistical trend analysis (STA) in failure rates is presented. It applies primarily to relatively rare events in changing technologies or components. The formulation is more general and the assumptions are less restrictive than in a previously published version. Relations of the statistical analysis and probabilistic assessment (PRA) are discussed in terms of categorization of decisions for action following particular failure events. The significance of tentatively identified trends is explored. In addition to statistical tests for trend significance, a combination of STA and PRA results quantifying the trend complement is proposed. The STA approach is compared with other concepts for trend characterization. (orig.)
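A standard parametric trend check in the same spirit, not the report's own formulation, is the Laplace test: it compares the timing of rare failure events against a constant-rate Poisson process over an observation window [0, T]. The event times below are hypothetical:

```python
import math

def laplace_statistic(event_times, T):
    """Laplace trend statistic; approximately N(0,1) under a constant rate.

    Values above ~1.96 suggest an increasing failure rate (worsening trend);
    values below ~-1.96 suggest a decreasing rate.
    """
    n = len(event_times)
    return (sum(event_times) / n - T / 2) / (T / math.sqrt(12 * n))

# Failures clustered late in a 10-unit window -> positive (worsening) trend.
u = laplace_statistic([6.0, 7.5, 8.0, 9.0, 9.5], T=10.0)
```

With so few events the statistic barely clears the 5% threshold, illustrating why the report pairs significance testing with a probabilistic assessment of the trend's consequences.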
Analysis of thrips distribution: application of spatial statistics and Kriging
John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard
1991-01-01
Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...
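An ordinary-kriging prediction in the spirit of the study can be sketched in pure Python: interpolate a count at a new location from spatially correlated samples, here with an assumed exponential covariance C(h) = exp(-h/r). The coordinates, counts, and range parameter are all hypothetical:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(points, values, target, r=2.0):
    """Ordinary kriging with an exponential covariance model."""
    cov = lambda a, b: math.exp(-math.dist(a, b) / r)
    n = len(points)
    # Kriging system: sample covariances plus the unbiasedness constraint.
    A = [[cov(points[i], points[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [cov(p, target) for p in points] + [1.0]
    w = solve(A, b)[:n]  # kriging weights; the last unknown is the multiplier
    return sum(wi * zi for wi, zi in zip(w, values))

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
counts = [12.0, 8.0, 5.0]  # hypothetical thrips counts per sample plot
z = kriging_predict(points, counts, (0.5, 0.5))
```

Unlike a naive average, the weights account for the covariance among the samples themselves, which is exactly the dependence the traditional independence assumption ignores.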
Statistical wind analysis for near-space applications
Roney, Jason A.
2007-09-01
Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data, such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated that, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
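The percentile-wind step follows directly from the Weibull cumulative function: with shape k and scale c, the p-th percentile wind is the inverse CDF v_p = c(-ln(1-p))^(1/k). The parameters below are illustrative, not the paper's fitted values:

```python
import math

def weibull_percentile(k, c, p):
    """Inverse Weibull CDF: wind speed not exceeded with probability p."""
    return c * (-math.log(1.0 - p)) ** (1.0 / k)

# Hypothetical Weibull fit (shape k, scale c in m/s) near the 65-72 kft knee.
k, c = 1.8, 12.0
v50 = weibull_percentile(k, c, 0.50)
v95 = weibull_percentile(k, c, 0.95)
v99 = weibull_percentile(k, c, 0.99)
```

The gap between v50 and v99 illustrates the paper's closing point: sizing a station-keeping airship from the mean and standard deviation alone would understate the winds it must actually overcome.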
Analysis of photon statistics with Silicon Photomultiplier
International Nuclear Information System (INIS)
D'Ascenzo, N.; Saveliev, V.; Wang, L.; Xie, Q.
2015-01-01
The Silicon Photomultiplier (SiPM) is a novel silicon-based photodetector, which represents the modern perspective of low photon flux detection. The aim of this paper is to provide an introduction to the statistical analysis methods needed to understand and quantitatively characterize the response of the SiPM to a coherent source of light.
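One standard estimate in such an analysis, assuming the coherent source yields Poissonian photon statistics, recovers the mean number of detected photons per pulse from the fraction of zero-photon events, mu = -ln(P0). The histogram counts below are hypothetical:

```python
import math

# Hypothetical pulse-height histogram: events observed in each
# photoelectron peak of the SiPM spectrum.
counts = {0: 3679, 1: 3679, 2: 1839, 3: 613, 4: 190}
total = sum(counts.values())

p0 = counts[0] / total          # fraction of pulses with zero detected photons
mu = -math.log(p0)              # Poisson estimate of mean photons per pulse
```

This zero-peak method needs only that the pedestal peak be cleanly separated; deviations of the higher peaks from Poisson expectations then quantify effects such as crosstalk and afterpulsing.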
Development of statistical analysis code for meteorological data (W-View)
International Nuclear Information System (INIS)
Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori
2003-03-01
A computer code (W-View: Weather View) was developed to analyze meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactors' (Nuclear Safety Commission, January 28, 1982; revised March 29, 2001). The code produces the statistical meteorological data needed to assess the public dose in normal operation and in severe accidents, in support of licensing nuclear reactor operation. The code was revised from the original version, which ran on a large office computer, to enable a personal computer user to analyze meteorological data simply and conveniently and to produce statistical tables and figures of meteorology. (author)
Statistical analysis of the Ft. Calhoun reactor coolant pump system
International Nuclear Information System (INIS)
Heising, Carolyn D.
1998-01-01
In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach to plant maintenance and control, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar, R-charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are under control specification limits and four parameters are not in the state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (author)
Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers
Keiffer, Greggory L.; Lane, Forrest C.
2016-01-01
Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
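The matching step in propensity score analysis can be sketched as below, assuming a single covariate and a hand-rolled logistic fit; a real analysis would use a statistics package, more covariates, and a check of covariate balance after matching:

```python
# Minimal sketch of propensity score matching on one covariate x.
import math

def fit_propensity(x, treated, steps=5000, lr=0.05):
    """Fit P(treated | x) = sigmoid(b0 + b1*x) by plain gradient ascent."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, ti in zip(x, treated):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += ti - p
            g1 += (ti - p) * xi
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return lambda xi: 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))

def greedy_match(scores, treated):
    """Pair each treated unit with the nearest-score control, without replacement."""
    pool = [i for i, t in enumerate(treated) if not t]
    pairs = []
    for i, t in enumerate(treated):
        if not t or not pool:
            continue
        j = min(pool, key=lambda k: abs(scores[k] - scores[i]))
        pool.remove(j)
        pairs.append((i, j))
    return pairs

# Hypothetical intact groups: treated units tend to have larger x.
x       = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
treated = [0,   0,   0,   0,   1,   1,   1,   1]
ps = fit_propensity(x, treated)
scores = [ps(xi) for xi in x]
pairs = greedy_match(scores, treated)
print(pairs)
```

Outcomes are then compared only within matched pairs, which is what lets the method approximate a randomized comparison from observational groups.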
Simulation Experiments in Practice: Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...
Statistical analysis of thermal conductivity of nanofluid containing ...
Indian Academy of Sciences (India)
Thermal conductivity measurements of nanofluids were analysed via two-factor completely randomized design and comparison of data means is carried out with Duncan's multiple-range test. Statistical analysis of experimental data show that temperature and weight fraction have a reasonable impact on the thermal ...
Longitudinal data analysis a handbook of modern statistical methods
Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert
2008-01-01
Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
Bayesian Sensitivity Analysis of Statistical Models with Missing Data.
Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng
2014-04-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing not at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.
Advanced data analysis in neuroscience integrating statistical and computational models
Durstewitz, Daniel
2017-01-01
This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...
Quantitative analysis and IBM SPSS statistics a guide for business and finance
Aljandali, Abdulkader
2016-01-01
This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting while Part II will present multivariate statistical methods, more advanced forecasting methods, and multivariate methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. Just a small subset of users of the package include the major clearing banks, the BBC, British Gas, British Airway...
What type of statistical model to choose for the analysis of radioimmunoassays
International Nuclear Information System (INIS)
Huet, S.
1984-01-01
The current techniques used for statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to make the response curve linear to avoid complicated computations. The present article shows that this practice has considerable effects (often neglected) on the statistical assumptions which must be formulated. A more strict analysis is proposed by applying the four-parameter logistic model. The advantages of this method are: the statistical assumptions formulated are based on observed data, and the model can be applied to almost all radioimmunoassays [fr
Computerized statistical analysis with bootstrap method in nuclear medicine
International Nuclear Information System (INIS)
Zoccarato, O.; Sardina, M.; Zatta, G.; De Agostini, A.; Barbesti, S.; Mana, O.; Tarolo, G.L.
1988-01-01
Statistical analysis of data samples involves some hypotheses about the features of the data themselves. The accuracy of these hypotheses can influence the results of statistical inference. Among the new methods of computer-aided statistical analysis, the bootstrap method appears to be one of the most powerful, thanks to its ability to reproduce many artificial samples starting from a single original sample, and because it works without hypotheses about the data distribution. The authors applied the bootstrap method to two typical situations in a Nuclear Medicine Department. The determination of the normal range of serum ferritin, as assessed by radioimmunoassay and defined by the mean value ±2 standard deviations, starting from an experimental sample of small size, shows an unacceptable lower limit (ferritin plasma levels below zero). On the contrary, the results obtained by elaborating 5000 bootstrap samples give an interval of values (10.95 ng/ml - 72.87 ng/ml) corresponding to the normal ranges commonly reported. Moreover, the authors applied the bootstrap method in evaluating the possible error associated with the correlation coefficient determined between left ventricular ejection fraction (LVEF) values obtained by first-pass radionuclide angiocardiography with 99mTc and 195mAu. The results obtained indicate a high degree of statistical correlation and give the range of r² values to be considered acceptable for this type of study.
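The percentile-bootstrap idea behind this abstract can be sketched as follows; the ferritin values are invented, not the study's data, and the point of the comparison is that the naive mean ± 2 SD limit can go negative on skewed data while the bootstrap interval cannot leave the data's range:

```python
# Sketch of a percentile bootstrap interval, mirroring the ferritin example.
import random
import statistics

random.seed(0)

def bootstrap_interval(sample, stat, n_boot=5000, alpha=0.05):
    """Percentile bootstrap interval for an arbitrary statistic."""
    stats = []
    for _ in range(n_boot):
        resample = [random.choice(sample) for _ in sample]  # sample with replacement
        stats.append(stat(resample))
    stats.sort()
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical small, right-skewed ferritin sample (ng/ml):
ferritin = [12, 15, 18, 22, 25, 30, 41, 55, 70, 95]
naive_lo = statistics.mean(ferritin) - 2 * statistics.stdev(ferritin)
boot_lo, boot_hi = bootstrap_interval(ferritin, statistics.median)
print(f"naive lower limit: {naive_lo:.1f}")   # goes below zero on this skewed sample
print(f"bootstrap median interval: ({boot_lo}, {boot_hi})")
```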
Software for statistical data analysis used in Higgs searches
International Nuclear Information System (INIS)
Gumpert, Christian; Moneta, Lorenzo; Cranmer, Kyle; Kreiss, Sven; Verkerke, Wouter
2014-01-01
The analysis and interpretation of data collected by the Large Hadron Collider (LHC) requires advanced statistical tools in order to quantify the agreement between observation and theoretical models. RooStats is a project providing a statistical framework for data analysis with the focus on discoveries, confidence intervals and combination of different measurements in both Bayesian and frequentist approaches. It employs the RooFit data modelling language where mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. RooStats and RooFit rely on the persistency technology of the ROOT framework. The usage of a common data format enables the concept of digital publishing of complicated likelihood functions. The statistical tools have been developed in close collaboration with the LHC experiments to ensure their applicability to real-life use cases. Numerous physics results have been produced using the RooStats tools, with the discovery of the Higgs boson by the ATLAS and CMS experiments being certainly the most popular among them. We will discuss tools currently used by LHC experiments to set exclusion limits, to derive confidence intervals and to estimate discovery significances based on frequentist statistics and the asymptotic behaviour of likelihood functions. Furthermore, new developments in RooStats and performance optimisation necessary to cope with complex models depending on more than 1000 variables will be reviewed
PRECISE - pregabalin in addition to usual care: Statistical analysis plan
S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)
2016-01-01
textabstractBackground: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double
Statistical margin to DNB safety analysis approach for LOFT
International Nuclear Information System (INIS)
Atkinson, S.A.
1982-01-01
A method was developed and used for LOFT thermal safety analysis to estimate the statistical margin to DNB for the hot rod, and to base safety analysis on desired DNB probability limits. This method is an advanced approach using response surface analysis methods, a very efficient experimental design, and a 2nd-order response surface equation with a 2nd-order error propagation analysis to define the MDNBR probability density function. Calculations for limiting transients were used in the response surface analysis thereby including transient interactions and trip uncertainties in the MDNBR probability density
Multivariate statistical analysis of atom probe tomography data
International Nuclear Information System (INIS)
Parish, Chad M.; Miller, Michael K.
2010-01-01
The application of spectrum imaging multivariate statistical analysis methods, specifically principal component analysis (PCA), to atom probe tomography (APT) data has been investigated. The mathematical method of analysis is described and the results for two example datasets are analyzed and presented. The first dataset is from the analysis of a PM 2000 Fe-Cr-Al-Ti steel containing two different ultrafine precipitate populations. PCA properly describes the matrix and precipitate phases in a simple and intuitive manner. A second APT example is from the analysis of an irradiated reactor pressure vessel steel. Fine, nm-scale Cu-enriched precipitates having a core-shell structure were identified and qualitatively described by PCA. Advantages, disadvantages, and future prospects for implementing these data analysis methodologies for APT datasets, particularly with regard to quantitative analysis, are also discussed.
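The PCA step applied to unfolded spectrum-image data can be sketched with NumPy; the two-phase toy matrix below stands in for real APT data, where each row would be one voxel's composition vector:

```python
# Sketch of PCA via SVD on a (voxels x channels) matrix, assuming the
# spectrum-image has already been unfolded so each row is one observation.
import numpy as np

def pca(data, n_components):
    """Return (scores, components, explained_variance_ratio)."""
    centered = data - data.mean(axis=0)                 # mean-centre each channel
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(data) - 1)                        # variance along each PC
    scores = u[:, :n_components] * s[:n_components]     # projections onto PCs
    return scores, vt[:n_components], var[:n_components] / var.sum()

# Toy two-phase dataset: rows drawn from two distinct "compositions" plus noise,
# playing the role of matrix and precipitate phases.
rng = np.random.default_rng(1)
matrix = np.vstack([rng.normal([5, 1, 0], 0.1, (50, 3)),
                    rng.normal([1, 4, 2], 0.1, (50, 3))])
scores, comps, ratio = pca(matrix, 2)
print(ratio)  # the first component captures the two-phase separation
```

The factor images of the abstract correspond to reshaping each column of `scores` back onto the spatial grid; the components describe the characteristic composition variations.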
Development of statistical analysis code for meteorological data (W-View)
Energy Technology Data Exchange (ETDEWEB)
Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2003-03-01
A computer code (W-View: Weather View) was developed to analyze the meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactor' (Nuclear Safety Commission on January 28, 1982; revised on March 29, 2001). The code gives statistical meteorological data to assess the public dose in case of normal operation and severe accident to get the license of nuclear reactor operation. This code was revised from the original code used in a large office computer code to enable a personal computer user to analyze the meteorological data simply and conveniently and to make the statistical data tables and figures of meteorology. (author)
CORSSA: Community Online Resource for Statistical Seismicity Analysis
Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.
2011-12-01
Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology (especially to those aspects with great impact on public policy), statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each of which will contain between four and eight articles. CORSSA now includes seven articles, with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include: introductions to both CORSSA and statistical seismology; basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.
Recent advances in statistical energy analysis
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a consequence of the modal formulation than a necessary assumption of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. Conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
Statistical analysis of tourism destination competitiveness
Directory of Open Access Journals (Sweden)
Attilio Gardini
2013-05-01
The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds, and we develop a statistical test of the model on sample data based on Italian tourist destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October - December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to its share of inclusion, because the share of inclusion is determined by the brand image, while the selection of the effective holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points to a risk of market-share reduction for the Italian tourism system in the global market. We also find a snowball effect which helps the most popular destinations, mainly in the northern Italian regions.
Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia
International Nuclear Information System (INIS)
Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge; Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis
2015-01-01
Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. Ten raters analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA was high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
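Fleiss' kappa, the inter-rater agreement statistic reported above, can be computed directly from a subjects-by-categories count matrix; the rating counts below are hypothetical, not the study's data:

```python
# Sketch of Fleiss' kappa for agreement among a fixed number of raters.
def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters assigning subject i to category j."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    # proportion of all assignments falling in each category
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]
    # per-subject observed agreement
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in ratings]
    p_bar = sum(p_i) / n_subjects          # mean observed agreement
    p_e = sum(p * p for p in p_j)          # chance-expected agreement
    return (p_bar - p_e) / (1 - p_e)

# 5 raters classify 4 scans into 3 diagnostic categories (invented counts):
ratings = [[5, 0, 0], [4, 1, 0], [0, 5, 0], [1, 1, 3]]
print(round(fleiss_kappa(ratings), 3))  # → 0.545
```

Values around 0.4-0.6 are conventionally read as "moderate" agreement, which is the band the visual-analysis kappa of 0.568 falls into.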
Australasian Resuscitation In Sepsis Evaluation trial statistical analysis plan.
Delaney, Anthony; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve
2013-10-01
The Australasian Resuscitation In Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the ED with severe sepsis. In keeping with current practice, and taking into consideration aspects of trial design and reporting specific to non-pharmacologic interventions, this document outlines the principles and methods for analysing and reporting the trial results. The document is prepared prior to completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and prior to completion of the two related international studies. The statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. The data collected by the research team, as specified in the study protocol and detailed in the study case report form, were reviewed. Information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation and other related therapies, and other relevant data are described with appropriate comparisons between groups. The primary, secondary and tertiary outcomes for the study are defined, with description of the planned statistical analyses. A statistical analysis plan was developed, along with a trial profile, mock-up tables and figures. A plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies, along with adverse events are described. The primary, secondary and tertiary outcomes are described along with identification of subgroups to be analysed. A statistical analysis plan for the ARISE study has been developed, and is available in the public domain, prior to the completion of recruitment into the ARISE study.
Measuring the Success of an Academic Development Programme: A Statistical Analysis
Smith, L. C.
2009-01-01
This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…
Analysis of Variance in Statistical Image Processing
Kurz, Ludwik; Hafed Benteftifa, M.
1997-04-01
A key problem in practical image processing is the detection of specific features in a noisy image. Analysis of variance (ANOVA) techniques can be very effective in such situations, and this book gives a detailed account of the use of ANOVA in statistical image processing. The book begins by describing the statistical representation of images in the various ANOVA models. The authors present a number of computationally efficient algorithms and techniques to deal with such problems as line, edge, and object detection, as well as image restoration and enhancement. By describing the basic principles of these techniques, and showing their use in specific situations, the book will facilitate the design of new algorithms for particular applications. It will be of great interest to graduate students and engineers in the field of image processing and pattern recognition.
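The core computation behind such ANOVA-based feature detectors is a one-way F statistic comparing pixel groups: a large F across a candidate boundary signals an edge, a small F signals a homogeneous region. The "image rows" below are synthetic:

```python
# Sketch of a one-way ANOVA F statistic applied to groups of pixel intensities.
import statistics

def one_way_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two pixel rows straddling a candidate edge vs. two rows in a flat region:
bright = [200, 198, 205, 202, 199]
dark   = [60, 63, 58, 61, 59]
flat_a = [100, 101, 99, 100, 102]
flat_b = [101, 100, 100, 99, 101]
print(f"edge F = {one_way_f([bright, dark]):.1f}")   # very large: edge detected
print(f"flat F = {one_way_f([flat_a, flat_b]):.2f}") # near zero: no feature
```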
Study of relationship between MUF correlation and detection sensitivity of statistical analysis
International Nuclear Information System (INIS)
Tamura, Toshiaki; Ihara, Hitoshi; Yamamoto, Yoichi; Ikawa, Koji
1989-11-01
Various kinds of statistical analysis have been proposed for NRTA (Near Real Time Materials Accountancy), which was devised to satisfy the timeliness detection goal of the IAEA. Different statistical analysis results are presumed to occur between rigorous error propagation (with MUF correlation) and simplified error propagation (without MUF correlation). Therefore, measurement simulation and decision analysis were performed using a flow simulation of an 800 MTHM/y model reprocessing plant, and the relationship between MUF correlation and the detection sensitivity and false-alarm rate of the statistical analysis was studied. The specific character of materials accountancy for the 800 MTHM/y model reprocessing plant was captured by this simulation. It also became clear that MUF correlation decreases not only the false-alarm rate but also the detection probability for protracted loss when the CUMUF test and Page's test are applied to NRTA. (author)
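Page's test, one of the NRTA decision statistics mentioned above, is a one-sided CUSUM on the standardized MUF sequence. A sketch with illustrative slack and threshold parameters (not IAEA values):

```python
# Sketch of a one-sided Page (CUSUM) test for protracted-loss detection.
def page_test(muf_sequence, k=0.5, h=4.0):
    """Return (alarm_index or None, list of CUSUM values).

    k is the slack (allowance) per period, h the decision threshold;
    both are illustrative choices, not safeguards-grade parameters.
    """
    s = 0.0
    path = []
    for i, x in enumerate(muf_sequence):
        s = max(0.0, s + x - k)   # accumulate only positive drift beyond slack k
        path.append(s)
        if s > h:
            return i, path        # alarm: cumulative evidence of protracted loss
    return None, path

# Standardized MUF values: in control at first, then a small sustained loss
# that no single period would flag on its own.
muf = [0.2, -0.3, 0.1, 0.0, 1.2, 1.1, 1.3, 1.2, 1.4, 1.1]
alarm, path = page_test(muf)
print("alarm at period:", alarm)
```

The point of the test is visible in the trace: each post-shift MUF value is individually unremarkable, but the CUSUM climbs steadily until it crosses the threshold.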
2010-05-05
...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...
Point defect characterization in HAADF-STEM images using multivariate statistical analysis
International Nuclear Information System (INIS)
Sarahan, Michael C.; Chi, Miaofang; Masiel, Daniel J.; Browning, Nigel D.
2011-01-01
Quantitative analysis of point defects is demonstrated through the use of multivariate statistical analysis. This analysis consists of principal component analysis for dimensional estimation and reduction, followed by independent component analysis to obtain physically meaningful, statistically independent factor images. Results from these analyses are presented in the form of factor images and scores. Factor images show characteristic intensity variations corresponding to physical structure changes, while scores relate how much those variations are present in the original data. The application of this technique is demonstrated on a set of experimental images of dislocation cores along a low-angle tilt grain boundary in strontium titanate. A relationship between chemical composition and lattice strain is highlighted in the analysis results, with picometer-scale shifts in several columns measurable from compositional changes in a separate column. -- Research Highlights: → Multivariate analysis of HAADF-STEM images. → Distinct structural variations among SrTiO3 dislocation cores. → Picometer atomic column shifts correlated with atomic column population changes.
Zavaglia, Melissa; Hilgetag, Claus C
2016-06-01
Spatial attention is a prime example for the distributed network functions of the brain. Lesion studies in animal models have been used to investigate intact attentional mechanisms as well as perspectives for rehabilitation in the injured brain. Here, we systematically analyzed behavioral data from cooling deactivation and permanent lesion experiments in the cat, where unilateral deactivation of the posterior parietal cortex (in the vicinity of the posterior middle suprasylvian cortex, pMS) or the superior colliculus (SC) cause a severe neglect in the contralateral hemifield. Counterintuitively, additional deactivation of structures in the opposite hemisphere reverses the deficit. Using such lesion data, we employed a game-theoretical approach, multi-perturbation Shapley value analysis (MSA), for inferring functional contributions and network interactions of bilateral pMS and SC from behavioral performance in visual attention studies. The approach provides an objective theoretical strategy for lesion inferences and allows a unique quantitative characterization of regional functional contributions and interactions on the basis of multi-perturbations. The quantitative analysis demonstrated that right posterior parietal cortex and superior colliculus made the strongest positive contributions to left-field orienting, while left brain regions had negative contributions, implying that their perturbation may reverse the effects of contralateral lesions or improve normal function. An analysis of functional modulations and interactions among the regions revealed redundant interactions (implying functional overlap) between regions within each hemisphere, and synergistic interactions between bilateral regions. To assess the reliability of the MSA method in the face of variable and incomplete input data, we performed a sensitivity analysis, investigating how much the contribution values of the four regions depended on the performance of specific configurations and on the
STATCAT, Statistical Analysis of Parametric and Non-Parametric Data
International Nuclear Information System (INIS)
David, Hugh
1990-01-01
1 - Description of program or function: A suite of 26 programs designed to facilitate the appropriate statistical analysis and data handling of parametric and non-parametric data, using classical and modern univariate and multivariate methods. 2 - Method of solution: Data is read entry by entry, using a choice of input formats, and the resultant data bank is checked for out-of-range, rare, extreme or missing data. The completed STATCAT data bank can be treated by a variety of descriptive and inferential statistical methods, and modified, using other standard programs as required
FADTTS: functional analysis of diffusion tensor tract statistics.
Zhu, Hongtu; Kong, Linglong; Li, Runze; Styner, Martin; Gerig, Guido; Lin, Weili; Gilmore, John H
2011-06-01
The aim of this paper is to present a functional analysis of a diffusion tensor tract statistics (FADTTS) pipeline for delineating the association between multiple diffusion properties along major white matter fiber bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these white matter tract properties in various diffusion tensor imaging studies. The FADTTS integrates five statistical tools: (i) a multivariate varying coefficient model for allowing the varying coefficient functions in terms of arc length to characterize the varying associations between fiber bundle diffusion properties and a set of covariates, (ii) a weighted least squares estimation of the varying coefficient functions, (iii) a functional principal component analysis to delineate the structure of the variability in fiber bundle diffusion properties, (iv) a global test statistic to test hypotheses of interest, and (v) a simultaneous confidence band to quantify the uncertainty in the estimated coefficient functions. Simulated data are used to evaluate the finite sample performance of FADTTS. We apply FADTTS to investigate the development of white matter diffusivities along the splenium of the corpus callosum tract and the right internal capsule tract in a clinical study of neurodevelopment. FADTTS can be used to facilitate the understanding of normal brain development, the neural bases of neuropsychiatric disorders, and the joint effects of environmental and genetic factors on white matter fiber bundles. The advantages of FADTTS compared with other existing approaches are that it is capable of modeling the structured inter-subject variability, testing the joint effects, and constructing simultaneous confidence bands. However, FADTTS is not crucial for estimation and reduces to the functional analysis method for the single measure. Copyright © 2011 Elsevier Inc. All rights reserved.
Statistical process control methods allow the analysis and improvement of anesthesia care.
Fasting, Sigurd; Gisvold, Sven E
2003-10-01
Quality aspects of the anesthetic process are reflected in the rate of intraoperative adverse events. The purpose of this report is to illustrate how the quality of the anesthesia process can be analyzed using statistical process control methods, and exemplify how this analysis can be used for quality improvement. We prospectively recorded anesthesia-related data from all anesthetics for five years. The data included intraoperative adverse events, which were graded into four levels, according to severity. We selected four adverse events, representing important quality and safety aspects, for statistical process control analysis. These were: inadequate regional anesthesia, difficult emergence from general anesthesia, intubation difficulties and drug errors. We analyzed the underlying process using 'p-charts' for statistical process control. In 65,170 anesthetics we recorded adverse events in 18.3%; mostly of lesser severity. Control charts were used to define statistically the predictable normal variation in problem rate, and then used as a basis for analysis of the selected problems with the following results: Inadequate plexus anesthesia: stable process, but unacceptably high failure rate; Difficult emergence: unstable process, because of quality improvement efforts; Intubation difficulties: stable process, rate acceptable; Medication errors: methodology not suited because of low rate of errors. By applying statistical process control methods to the analysis of adverse events, we have exemplified how this allows us to determine if a process is stable, whether an intervention is required, and if quality improvement efforts have the desired effect.
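The p-chart analysis described above can be sketched in a few lines: the centre line is the overall event proportion p̄, and the 3-sigma control limits for a period with nᵢ anesthetics are p̄ ± 3·√(p̄(1−p̄)/nᵢ). A minimal illustration (the function name and the monthly counts below are hypothetical, not the study's data):

```python
import math

def p_chart_limits(events, totals):
    """Centre line and 3-sigma p-chart control limits.

    events[i] / totals[i] is the adverse-event proportion in period i.
    """
    pbar = sum(events) / sum(totals)  # overall proportion (centre line)
    limits = []
    for n in totals:
        sigma = math.sqrt(pbar * (1 - pbar) / n)
        limits.append((max(0.0, pbar - 3 * sigma), min(1.0, pbar + 3 * sigma)))
    return pbar, limits

# Monthly counts of one adverse event (made-up numbers for illustration).
events = [12, 15, 9, 30, 11]
totals = [1000, 1100, 950, 1050, 1000]
pbar, limits = p_chart_limits(events, totals)

# Periods whose rate falls outside the predictable normal variation.
out_of_control = [i for i, (e, n) in enumerate(zip(events, totals))
                  if not limits[i][0] <= e / n <= limits[i][1]]
```

A month whose rate exceeds the upper limit (month 3 here) signals special-cause variation worth investigating, whereas a stable but high rate, as with the inadequate plexus anesthesia above, calls for changing the process itself.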
Effect of the absolute statistic on gene-sampling gene-set analysis methods.
Nam, Dougu
2017-06-01
Gene-set enrichment analysis and its modified versions have commonly been used for identifying altered functions or pathways in disease from microarray data. In particular, simple gene-sampling gene-set analysis methods have been heavily used for datasets with only a few sample replicates. The biggest problem with this approach is the highly inflated false-positive rate. In this paper, the effect of the absolute gene statistic on gene-sampling gene-set analysis methods is systematically investigated. Thus far, the absolute gene statistic has merely been regarded as a supplementary method for capturing the bidirectional changes in each gene set. Here, it is shown that incorporating the absolute gene statistic in gene-sampling gene-set analysis substantially reduces the false-positive rate and improves the overall discriminatory ability. Its effect was investigated by power, false-positive rate, and receiver operating characteristic curves for a number of simulated and real datasets. The performances of gene-set analysis methods in one-tailed (genome-wide association study) and two-tailed (gene expression data) tests were also compared and discussed.
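The gene-sampling approach with an absolute statistic can be sketched as follows (the function names are illustrative, and the pooled-variance t is a stand-in for whatever gene-level statistic is actually used): each gene gets a differential-expression statistic, a gene set is scored by the mean absolute statistic of its members, and significance is assessed against randomly sampled gene sets of the same size.

```python
import random
import statistics

def gene_t(group_a, group_b):
    """Illustrative pooled-variance t statistic for one gene."""
    ma, mb = statistics.mean(group_a), statistics.mean(group_b)
    va, vb = statistics.variance(group_a), statistics.variance(group_b)
    na, nb = len(group_a), len(group_b)
    sp = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / (sp * (1 / na + 1 / nb)) ** 0.5

def gene_sampling_pvalue(stats, gene_set, n_perm=10000, absolute=True, seed=0):
    """Gene-sampling test: the set's mean (absolute) statistic is compared
    against random gene sets of the same size drawn from all genes."""
    rng = random.Random(seed)
    f = abs if absolute else (lambda x: x)
    score = statistics.mean(f(stats[g]) for g in gene_set)
    genes = list(stats)
    k = len(gene_set)
    hits = sum(
        statistics.mean(f(stats[g]) for g in rng.sample(genes, k)) >= score
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)
```

With a set containing one strongly up- and one strongly down-regulated gene, the signed score cancels to zero while the absolute score stays extreme, which is exactly the bidirectional case the absolute statistic is meant to capture.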
An improved method for statistical analysis of raw accelerator mass spectrometry data
International Nuclear Information System (INIS)
Gutjahr, A.; Phillips, F.; Kubik, P.W.; Elmore, D.
1987-01-01
Hierarchical statistical analysis is an appropriate method for statistical treatment of raw accelerator mass spectrometry (AMS) data. Using Monte Carlo simulations we show that this method yields more accurate estimates of isotope ratios and analytical uncertainty than the generally used propagation of errors approach. The hierarchical analysis is also useful in design of experiments because it can be used to identify sources of variability. 8 refs., 2 figs
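The contrast drawn above can be illustrated with a toy sketch (a hypothetical two-level layout, not the authors' code): treating run means as the unit of analysis lets the grand mean's standard error absorb both within-run counting scatter and any extra between-run variability, a source that naive error propagation on the pooled counts ignores.

```python
import statistics

def hierarchical_estimate(runs):
    """Two-level (hierarchical) estimate of a measured ratio.

    `runs` is a list of runs, each a list of repeat measurements.
    Using run means as the unit of analysis, the standard error of
    the grand mean reflects between-run as well as within-run variance.
    """
    run_means = [statistics.mean(r) for r in runs]
    grand = statistics.mean(run_means)
    between = statistics.variance(run_means)        # variance of run means
    within = statistics.mean(statistics.variance(r) for r in runs)
    se = (between / len(runs)) ** 0.5               # SE of the grand mean
    return grand, se, between, within

# Three runs of three repeat ratio measurements (hypothetical values).
runs = [[1.0, 1.1, 0.9], [1.2, 1.3, 1.1], [0.8, 0.9, 0.7]]
grand, se, between, within = hierarchical_estimate(runs)
```

Here the between-run variance (0.04) dominates the within-run variance (0.01), so an uncertainty computed from counting statistics alone would be too small; the variance components also point to the runs themselves as the main source of variability, the experiment-design use noted above.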
International Nuclear Information System (INIS)
Hassan, Amal I.; Ghoneim, Mona A. M.; Mahmoud, Manal G.; Asker, Mohsen M. S.; Mohamed, Saher S.
2016-01-01
Damage to normal tissues is a consequence of both therapeutic and accidental exposures to ionizing radiation. A water-soluble heteropolysaccharide called AXEPS, composed of glucose, galactose, rhamnose and glucuronic acid in a molar ratio of nearly 1.0:1.6:0.4:2.3, respectively, was isolated from the culture medium of strain Alcaligenes xylosoxidans MSA3 by ethanol precipitation followed by freeze-drying. Chemical analysis, Fourier-transform infrared (FTIR) and chromatographic studies revealed that the molecular weight was 1.6 × 10⁴ g mol⁻¹. This study was designed to investigate the radioprotective and biological effects of AXEPS in alleviating the toxicity of ionizing radiation in female albino rats. A total of 32 female albino rats were divided into four groups. In the control group, rats were administered vehicle by tube for four weeks. The second group was administered AXEPS (100 mg/kg) orally by gavage for four weeks. Animals in the third group were exposed to whole-body γ-rays (5 Gy) and remained for 2 weeks without treatment. The fourth group received AXEPS (100 mg/kg) orally by gavage for two weeks before being exposed to whole-body γ-rays (5 Gy); then, 24 h after γ-irradiation, they received AXEPS (100 mg/kg) in a treatment continuing until the end of the experiment (15 days after the whole-body γ-irradiation). Oral administration of AXEPS (100 mg/kg) significantly reversed the oxidative stress effects of radiation, as evidenced by the decrease in DNA damage in the bone marrow. Assessment of apoptosis and cell proliferation markers revealed that caspase-3 significantly increased in the irradiated group. Moreover, a significant decrease in the hematological constituents of peripheral blood, the chemotactic index and CD8+ T cells was observed in animals in the irradiation-only group, whereas an increase in the lymphocyte index was observed in animals in that group. In contrast, AXEPS treatment prevented these alterations. From our results, we conclude that
Statistical Image Analysis of Tomograms with Application to Fibre Geometry Characterisation
DEFF Research Database (Denmark)
Emerson, Monica Jane
The goal of this thesis is to develop statistical image analysis tools to characterise the micro-structure of complex materials used in energy technologies, with a strong focus on fibre composites. These quantification tools are based on extracting geometrical parameters defining structures from 2D...... with high resolution both in space and time to observe fast micro-structural changes. This thesis demonstrates that statistical image analysis combined with X-ray CT opens up numerous possibilities for understanding the behaviour of fibre composites under real life conditions. Besides enabling...
A tutorial on how to do a Mokken scale analysis on your test and questionnaire data
Sijtsma, K.; van der Ark, L.A.
2017-01-01
Over the past decade, Mokken scale analysis (MSA) has rapidly grown in popularity among researchers from many different research areas. This tutorial provides researchers with a set of techniques and a procedure for their application, such that the construction of scales that have superior
The art of data analysis how to answer almost any question using basic statistics
Jarman, Kristin H
2013-01-01
A friendly and accessible approach to applying statistics in the real world. With an emphasis on critical thinking, The Art of Data Analysis: How to Answer Almost Any Question Using Basic Statistics presents fun and unique examples, guides readers through the entire data collection and analysis process, and introduces basic statistical concepts along the way. Leaving proofs and complicated mathematics behind, the author portrays the more engaging side of statistics and emphasizes its role as a problem-solving tool. In addition, light-hearted case studies
Statistics in experimental design, preprocessing, and analysis of proteomics data.
Jung, Klaus
2011-01-01
High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.
Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis
DEFF Research Database (Denmark)
Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow i) to determine natural groups or clusters of control strategies with a similar behaviour, ii) to find and interpret hidden, complex and causal relation features in the data set and iii) to identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation...
The Statistical Analysis Techniques to Support the NGNP Fuel Performance Experiments
International Nuclear Information System (INIS)
Pham, Bihn T.; Einerson, Jeffrey J.
2010-01-01
This paper describes the development and application of statistical analysis techniques to support the AGR experimental program on NGNP fuel performance. The experiments conducted in the Idaho National Laboratory's Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel/graphite temperature) is regulated by the He-Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the SAS-based NGNP Data Management and Analysis System (NDMAS) for automated processing and qualification of the AGR measured data. The NDMAS also stores daily neutronic (power) and thermal (heat transfer) code simulation results along with the measurement data, allowing for their combined use and comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the target quantity (fuel temperature) within a given range.
The statistical analysis techniques to support the NGNP fuel performance experiments
Energy Technology Data Exchange (ETDEWEB)
Pham, Binh T., E-mail: Binh.Pham@inl.gov; Einerson, Jeffrey J.
2013-10-15
This paper describes the development and application of statistical analysis techniques to support the Advanced Gas Reactor (AGR) experimental program on Next Generation Nuclear Plant (NGNP) fuel performance. The experiments conducted in the Idaho National Laboratory’s Advanced Test Reactor employ fuel compacts placed in a graphite cylinder shrouded by a steel capsule. The tests are instrumented with thermocouples embedded in graphite blocks and the target quantity (fuel temperature) is regulated by the He–Ne gas mixture that fills the gap volume. Three techniques for statistical analysis, namely control charting, correlation analysis, and regression analysis, are implemented in the NGNP Data Management and Analysis System for automated processing and qualification of the AGR measured data. The neutronic and thermal code simulation results are used for comparative scrutiny. The ultimate objective of this work includes (a) a multi-faceted system for data monitoring and data accuracy testing, (b) identification of possible modes of diagnostics deterioration and changes in experimental conditions, (c) qualification of data for use in code validation, and (d) identification and use of data trends to support effective control of test conditions with respect to the test target. Analysis results and examples given in the paper show the three statistical analysis techniques providing a complementary capability to warn of thermocouple failures. It also suggests that the regression analysis models relating calculated fuel temperatures and thermocouple readings can enable online regulation of experimental parameters (i.e. gas mixture content), to effectively maintain the fuel temperature within a given range.
Statistical Challenges of Big Data Analysis in Medicine
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2015-01-01
Roč. 3, č. 1 (2015), s. 24-27 ISSN 1805-8698 R&D Projects: GA ČR GA13-23940S Grant - others:CESNET Development Fund(CZ) 494/2013 Institutional support: RVO:67985807 Keywords : big data * variable selection * classification * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research http://www.ijbh.org/ijbh2015-1.pdf
Statistical Analysis of Hypercalcaemia Data related to Transferability
DEFF Research Database (Denmark)
Frølich, Anne; Nielsen, Bo Friis
2005-01-01
In this report we describe statistical analysis related to a study of hypercalcaemia carried out in the Copenhagen area in the ten-year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters at conferences during the late eighties and early nineties. In this report we give a more detailed description of many of the analyses and provide some new results, primarily by simultaneous studies of several databases.
Statistical analysis of questionnaires a unified approach based on R and Stata
Bartolucci, Francesco; Gnaldi, Michela
2015-01-01
Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing.The book covers the foundations of classical test theory (CTT), test reliability, va
Reducing bias in the analysis of counting statistics data
International Nuclear Information System (INIS)
Hammersley, A.P.; Antoniadis, A.
1997-01-01
In the analysis of counting statistics data it is common practice to estimate the variance of the measured data points as the data points themselves. This practice introduces a bias into the results of further analysis which may be significant, and under certain circumstances lead to false conclusions. In the case of normal weighted least squares fitting this bias is quantified and methods to avoid it are proposed. (orig.)
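The bias described can be demonstrated with a small Monte Carlo sketch (function names and parameters are illustrative): weighting Poisson counts by wᵢ = 1/yᵢ, i.e. estimating each point's variance by the point itself, pulls the weighted mean below the true rate, because downward-fluctuating points receive too much weight.

```python
import math
import random

def propagated_weighted_mean(counts):
    """Weighted mean with the common choice w_i = 1/y_i (each point's
    variance estimated by the point itself; y = 0 guarded as 1)."""
    ws = [1.0 / max(y, 1) for y in counts]
    return sum(w * y for w, y in zip(ws, counts)) / sum(ws)

def simulate_bias(mu=10.0, n=20, trials=5000, seed=1):
    """Average the 1/y-weighted mean and the plain mean of n Poisson(mu)
    counts over many simulated data sets."""
    rng = random.Random(seed)
    L = math.exp(-mu)
    biased_sum, plain_sum = 0.0, 0.0
    for _ in range(trials):
        counts = []
        for _ in range(n):
            k, p = 0, 1.0          # Knuth's Poisson sampler
            while p > L:
                k += 1
                p *= rng.random()
            counts.append(k - 1)
        biased_sum += propagated_weighted_mean(counts)
        plain_sum += sum(counts) / n
    return biased_sum / trials, plain_sum / trials

biased, plain = simulate_bias()    # the weighted mean lands below mu = 10
```

The plain mean recovers the true rate while the 1/y-weighted mean sits roughly one count low at this rate, the kind of systematic underestimate the abstract warns about for weighted least squares fits.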
Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.
MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C
2018-03-29
This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Shadish, William R; Hedges, Larry V; Pustejovsky, James E
2014-04-01
This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Bayesian statistics applied to neutron activation data for reactor flux spectrum analysis
International Nuclear Information System (INIS)
Chiesa, Davide; Previtali, Ezio; Sisti, Monica
2014-01-01
Highlights: • Bayesian statistics to analyze the neutron flux spectrum from activation data. • Rigorous statistical approach for accurate evaluation of the neutron flux groups. • Cross section and activation data uncertainties included for the problem solution. • Flexible methodology applied to analyze different nuclear reactor flux spectra. • The results are in good agreement with the MCNP simulations of neutron fluxes. - Abstract: In this paper, we present a statistical method, based on Bayesian statistics, to analyze the neutron flux spectrum from the activation data of different isotopes. The experimental data were acquired during a neutron activation experiment performed at the TRIGA Mark II reactor of Pavia University (Italy) in four irradiation positions characterized by different neutron spectra. In order to evaluate the neutron flux spectrum, subdivided into energy groups, a system of linear equations containing the group effective cross sections and the activation rate data has to be solved. However, since the system's coefficients are experimental data affected by uncertainties, a rigorous statistical approach is fundamental for an accurate evaluation of the neutron flux groups. For this purpose, we applied Bayesian statistical analysis, which allows the uncertainties of the coefficients and the a priori information about the neutron flux to be included. A program for the analysis of Bayesian hierarchical models, based on Markov Chain Monte Carlo (MCMC) simulations, was used to define the statistical model of the problem and solve it. The first analysis involved the determination of the thermal, resonance-intermediate and fast flux components, and the dependence of the results on the choice of prior distribution was investigated to confirm the reliability of the Bayesian analysis. After that, the main resonances of the activation cross sections were analyzed to implement multi-group models with finer energy subdivisions that would allow determination of the
Reactor noise analysis by statistical pattern recognition methods
International Nuclear Information System (INIS)
Howington, L.C.; Gonzalez, R.C.
1976-01-01
A multivariate statistical pattern recognition system for reactor noise analysis is presented. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, updating, and data compacting capabilities. System design emphasizes control of the false-alarm rate. Its abilities to learn normal patterns, to recognize deviations from these patterns, and to reduce the dimensionality of data with minimum error were evaluated by experiments at the Oak Ridge National Laboratory (ORNL) High-Flux Isotope Reactor. Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the pattern recognition system
Data analysis using the Gnu R system for statistical computation
Energy Technology Data Exchange (ETDEWEB)
Simone, James; /Fermilab
2011-07-01
R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.
International Nuclear Information System (INIS)
Robeyns, J.; Parmentier, F.; Peeters, G.
2001-01-01
In the framework of safety analysis for the Belgian nuclear power plants and for the reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by the statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search of a realistic value for the SDL is the objective of the statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with the strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)
Analytical and statistical analysis of elemental composition of lichens
International Nuclear Information System (INIS)
Calvelo, S.; Baccala, N.; Bubach, D.; Arribere, M.A.; Riberio Guevara, S.
1997-01-01
The elemental composition of lichens from remote southern South America regions has been studied with analytical and statistical techniques to determine whether the values obtained reflect species, growth forms or habitat characteristics. The enrichment factors are calculated, discriminated by species and collection site, and compared with data available in the literature. The elemental concentrations are standardized and compared for different species. The information was statistically processed; a cluster analysis was performed using the first three principal axes of the PCA, and the three groups formed are presented. Their relationship with the species, collection sites and the lichen growth forms is interpreted. (author)
Doherty, Julie; Giles, Melanie; Gallagher, Alison M; Simpson, Ellen Elizabeth Anne
2018-03-01
Although physical activity guidelines recommend muscle-strengthening activities (MSA), public health initiatives tend to focus on increasing aerobic activity and fail to mention MSA. This study sought to identify the issues influencing pre-, peri- and post-menopausal women's intentions to perform MSA with a view to informing future interventions for these populations. Mixed methods guided by the Theory of Planned Behaviour (TPB) were used to explore factors that influence women's intentions to perform MSA. In stage one, 34 women participated in either a focus group or interview. Discussions were transcribed verbatim and analysed based on menopausal status using a deductive approach. In stage two, 186 women (M = 47 years, SD = 9) completed a questionnaire to assess participant demographics, levels of MSA, affective and instrumental attitudes, injunctive and descriptive norms, self-efficacy and perceived behavioural control. Quantitative data were analysed using descriptive statistics, bivariate correlations, regression analyses and analyses of variance. Behavioural beliefs were: improved muscular health; psychological benefits; improved body shape. Normative beliefs were: health professionals; family members; work colleagues. Control beliefs were: equipment; motivation; time constraints; knowledge; physical capability; fear of judgement. However, these beliefs were not well established. Self-efficacy was the strongest predictor of intentions (spc² = 0.11), followed by affective attitudes (spc² = 0.09), with no significant differences on TPB variables between groups. If rising rates of musculoskeletal conditions in women are to be prevented, there is an urgent need to increase women's knowledge of recommended levels of muscle strengthening, with a view to promoting positive attitudes and enhancing women's sense of self-efficacy across all menopausal phases. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
China's medical savings accounts: an analysis of the price elasticity of demand for health care.
Yu, Hao
2017-07-01
Although medical savings accounts (MSAs) have drawn intensive attention across the world for their potential in cost control, there is limited evidence of their impact on the demand for health care. This paper is intended to fill that gap. First, we built up a dynamic model of a consumer's problem of utility maximization in the presence of a nonlinear price schedule embedded in an MSA. Second, the model was implemented using data from a 2-year MSA pilot program in China. The estimated price elasticity under MSAs was between -0.42 and -0.58, i.e., higher than that reported in the literature. The relatively high price elasticity suggests that MSAs as an insurance feature may help control costs. However, the long-term effect of MSAs on health costs is subject to further analysis.
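Under a log-log demand specification, a constant price elasticity is the OLS slope of ln(quantity) on ln(price). A minimal sketch (the data below are synthetic, constructed to follow q ∝ p⁻⁰·⁵ exactly, not the pilot-program data):

```python
import math

def price_elasticity(prices, quantities):
    """OLS slope of ln(quantity) on ln(price): under a log-log demand
    curve q = c * p**e, the slope recovers the constant elasticity e."""
    xs = [math.log(p) for p in prices]
    ys = [math.log(q) for q in quantities]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic demand data following q = 100 * p**-0.5 exactly.
prices = [1.0, 2.0, 4.0, 8.0]
quantities = [100.0 * p ** -0.5 for p in prices]
elasticity = price_elasticity(prices, quantities)  # recovers -0.5
```

An elasticity between -0.42 and -0.58, as estimated in the paper, means a 10% price increase reduces demand by roughly 4-6%, which is the sense in which a nonlinear MSA price schedule can restrain utilization.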
Ohyanagi, S.; Dileonardo, C.
2013-12-01
As a natural phenomenon, earthquake occurrence is difficult to predict. Statistical analysis of earthquake data was performed using the candlestick chart and Bollinger Band methods. These statistical methods, commonly used in the financial world to analyze market trends, were tested against earthquake data. Earthquakes above Mw 4.0 located off the shore of Sanriku (37.75°N ~ 41.00°N, 143.00°E ~ 144.50°E) from February 1973 to May 2013 were selected for analysis. Two specific patterns in earthquake occurrence were recognized through the analysis. One is a spreading of the candlestick prior to the occurrence of events greater than Mw 6.0. A second pattern shows convergence in the Bollinger Band, which implies a positive or negative change in the trend of earthquakes. Both patterns match general models for the buildup and release of strain through the earthquake cycle, and agree with the characteristics of both the candlestick chart and Bollinger Band analysis. These results show a high correlation between patterns in earthquake occurrence and the trends identified by these two statistical methods, and support the appropriateness of applying these financial analysis methods to the analysis of earthquake occurrence.
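As a sketch of the second method mentioned (the 20-sample window and ±2σ band are the conventional financial defaults, not necessarily the authors' settings), Bollinger Bands are a rolling mean flanked by bands at ±k rolling standard deviations, and "convergence" corresponds to the band width shrinking:

```python
import statistics

def bollinger_bands(series, window=20, k=2.0):
    """Rolling mean with upper/lower bands at +/- k rolling standard
    deviations; one (mean, lower, upper) triple per full window."""
    bands = []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        m = statistics.mean(w)
        s = statistics.pstdev(w)  # population std dev of the window
        bands.append((m, m - k * s, m + k * s))
    return bands

def band_widths(bands):
    """Upper minus lower band; a shrinking width is 'convergence'."""
    return [u - l for _, l, u in bands]
```

Applied to a magnitude or event-rate series, a narrowing band flags a quiet interval of low short-term variability, the precursor-like pattern the abstract associates with a change in the earthquake trend.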
Parametric analysis of the statistical model of the stick-slip process
Lima, Roberta; Sampaio, Rubens
2017-06-01
In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-frictional force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is composed of a random sequence alternating between stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables, such as the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to make a parametric analysis of the statistical model of the stick-slip process.
Shiavi, Richard
2007-01-01
Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this background, the reader will coast through the practical introduction and move on to signal analysis techniques commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical
Visual and statistical analysis of {sup 18}F-FDG PET in primary progressive aphasia
Energy Technology Data Exchange (ETDEWEB)
Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge [Hospital Clinico San Carlos, Department of Neurology, Madrid (Spain); Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis [San Carlos Health Research Institute (IdISSC) Complutense University of Madrid, Department of Nuclear Medicine, Hospital Clinico San Carlos, Madrid (Spain)
2015-05-01
Diagnosing progressive primary aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. There were 10 raters who analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA was high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. In cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
Statistical analysis of extreme values from insurance, finance, hydrology and other fields
Reiss, Rolf-Dieter
1997-01-01
The statistical analysis of extreme data is important for various disciplines, including hydrology, insurance, finance, engineering and environmental sciences. This book provides a self-contained introduction to the parametric modeling, exploratory analysis and statistical inference for extreme values. The entire text of this third edition has been thoroughly updated and rearranged to meet the new requirements. Additional sections and chapters, elaborated on more than 100 pages, are particularly concerned with topics like dependencies, the conditional analysis and the multivariate modeling of extreme data. Parts I–III about the basic extreme value methodology remain largely unchanged, yet notable are, e.g., the new sections about "An Overview of Reduced-Bias Estimation" (co-authored by M.I. Gomes), "The Spectral Decomposition Methodology", and "About Tail Independence" (co-authored by M. Frick), and the new chapter about "Extreme Value Statistics of Dependent Random Variables" (co-authored ...
Energy Technology Data Exchange (ETDEWEB)
Bosman, Tommy [Division of Nuclear Medicine, P7, Ghent University Hospital, De Pintelaan 185, 9000 Ghent (Belgium); Van Laere, Koen [Department of Nuclear Medicine, Leuven University Hospital, Herestraat 49, 3000 Leuven (Belgium); Santens, Patrick [Department of Neurology, Ghent University Hospital, Ghent (Belgium)
2003-01-01
The clinical differentiation between typical idiopathic Parkinson's disease (IPD) and atypical parkinsonian disorders such as multiple system atrophy (MSA) is complicated by the presence of signs and symptoms common to both forms. The goal of this study was to re-evaluate the contribution of brain perfusion single-photon emission tomography (SPET) with anatomical standardisation and automated analysis in the differentiation of IPD and MSA. This was achieved by discriminant analysis in comparison with a large set of age- and gender-matched healthy volunteers. Technetium-99m ethyl cysteinate dimer SPET was performed on 140 subjects: 81 IPD patients (age 62.6±10.2 years; disease duration 11.0±6.4 years; 50 males/31 females), 15 MSA patients (61.5±9.2 years; disease duration 3.0±2.2 years; 9 males/6 females) and 44 age- and gender-matched healthy volunteers (age 59.2±11.9 years; 27 males/17 females). Patients were matched for severity (Hoehn and Yahr stage). Automated predefined volume of interest (VOI) analysis was carried out after anatomical standardisation. Stepwise discriminant analysis with cross-validation using the leave-one-out method was used to determine the subgroup of variables giving the highest accuracy for this differential diagnosis. Between MSA and IPD, the only regions with highly significant differences in uptake after Bonferroni correction were the putamen VOIs. Comparing MSA versus normals and IPD, with putamen VOI values as discriminating variables, cross-validated performance showed correct classification of MSA patients with a sensitivity of 73.3%, a specificity of 84% and an accuracy of 83.6%. Additional input from the right caudate head and the left prefrontal and left mesial temporal cortex allowed 100% discrimination even after cross-validation. Discrimination between the IPD group alone and healthy volunteers was accurate in 94% of the cases after cross-validation, with a sensitivity of 91.4% and a specificity of 100%.
Power flow as a complement to statistical energy analysis and finite element analysis
Cuschieri, J. M.
1987-01-01
Present methods of analysis of the structural response and the structure-borne transmission of vibrational energy use either finite element (FE) techniques or statistical energy analysis (SEA) methods. The FE methods are a very useful tool at low frequencies, where the number of resonances involved in the analysis is rather small. On the other hand, SEA methods can predict with acceptable accuracy the response and energy transmission between coupled structures at relatively high frequencies, where the structural modal density is high and a statistical approach is the appropriate solution. In the mid-frequency range, a relatively large number of resonances exist, which makes the finite element method too costly, while SEA methods can only predict an average level. In this mid-frequency range a possible alternative is to use power flow techniques, where the input and flow of vibrational energy to excited and coupled structural components can be expressed in terms of input and transfer mobilities. This power flow technique can be extended from low to high frequencies, and it can be integrated with established FE models at low frequencies and SEA models at high frequencies to provide a verification of the method. This method of structural analysis using power flow and mobility methods, and its integration with SEA and FE analysis, is applied to the case of two thin beams joined together at right angles.
Cao, Siqin
2017-12-22
The 3D reference interaction site model (3DRISM) is a powerful tool to study the thermodynamic and structural properties of liquids. However, for hydrophobic solutes, the inhomogeneity of the solvent density around them poses a great challenge to the 3DRISM theory. To address this issue, we have previously introduced the hydrophobic-induced density inhomogeneity theory (HI) for purely hydrophobic solutes. To further consider the complex hydrophobic solutes containing partial charges, here we propose the D2MSA closure to incorporate the short-range and long-range interactions with the D2 closure and the mean spherical approximation, respectively. We demonstrate that our new theory can compute the solvent distributions around real hydrophobic solutes in water and complex organic solvents that agree well with the explicit solvent molecular dynamics simulations.
Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems
He, Yuning; Davies, Misty Dawn
2014-01-01
The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
Validation of statistical models for creep rupture by parametric analysis
Energy Technology Data Exchange (ETDEWEB)
Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)
2012-01-15
Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).
Kruschke, John K; Liddell, Torrin M
2018-02-01
In the practice of data analysis, there is a conceptual distinction between hypothesis testing, on the one hand, and estimation with quantified uncertainty on the other. Among frequentists in psychology, a shift of emphasis from hypothesis testing to estimation has been dubbed "the New Statistics" (Cumming 2014). A second conceptual distinction is between frequentist methods and Bayesian methods. Our main goal in this article is to explain how Bayesian methods achieve the goals of the New Statistics better than frequentist methods. The article reviews frequentist and Bayesian approaches to hypothesis testing and to estimation with confidence or credible intervals. The article also describes Bayesian approaches to meta-analysis, randomized controlled trials, and power analysis.
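A minimal illustration of the estimation-with-uncertainty approach the abstract advocates (a hedged sketch, not taken from the article; the counts are invented) is a Bayesian credible interval for a proportion via Beta-Binomial conjugacy:

```python
from scipy import stats

# Bayesian estimation with a credible interval: 7 successes in 20 trials,
# and a uniform Beta(1, 1) prior gives a Beta(8, 14) posterior by conjugacy.
posterior = stats.beta(1 + 7, 1 + 13)
post_mean = posterior.mean()                # posterior mean = 8/22
cred_int = posterior.ppf([0.025, 0.975])    # central 95% credible interval
```

Unlike a frequentist confidence interval, the credible interval is a direct probability statement about the parameter given the data and prior, which is the interpretation the "New Statistics" goals call for.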
Statistical analysis of solar proton events
Directory of Open Access Journals (Sweden)
V. Kurt
2004-06-01
A new catalogue of 253 solar proton events (SPEs) with energy >10 MeV and peak intensity >10 protons/(cm²·s·sr) (pfu) at the Earth's orbit for three complete 11-year solar cycles (1970-2002) is given. A statistical analysis of this data set of SPEs and their associated flares that occurred during this time period is presented. It is outlined that 231 of these proton events are flare related and only 22 of them are not associated with Hα flares. It is also noteworthy that 42 of these events are registered as Ground Level Enhancements (GLEs) in neutron monitors. The longitudinal distribution of the associated flares shows that a great number of these events are connected with western flares. This analysis enables one to understand the long-term dependence of the SPEs and the related flare characteristics on the solar cycle, which is useful for space weather prediction.
STATISTICAL ANALYSIS OF THE HEAVY NEUTRAL ATOMS MEASURED BY IBEX
International Nuclear Information System (INIS)
Park, Jeewoo; Kucharek, Harald; Möbius, Eberhard; Galli, André; Livadiotis, George; Fuselier, Steve A.; McComas, David J.
2015-01-01
We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years from 2009 to 2011. The interstellar neutral (ISN) O and Ne gas flow was found in the first-year heavy neutral map at 601 keV, and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background from areas where the heavy neutral signal is statistically significant. These methods also allow the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O and Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
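A sketch of the first of the three methods, a signal-to-noise filter, assuming Poisson counting statistics (the map values, background level, and threshold are invented for illustration):

```python
import numpy as np

def snr_filter(counts, background, threshold=3.0):
    """Flag map pixels whose counts exceed the background estimate by more
    than `threshold` standard deviations, assuming Poisson statistics."""
    counts = np.asarray(counts, dtype=float)
    background = np.asarray(background, dtype=float)
    # Poisson error on (counts - background) is roughly sqrt(counts + background)
    snr = (counts - background) / np.sqrt(counts + background)
    return snr >= threshold

# toy 1-D "map": a flat background of 4 counts with one bright pixel
counts = np.array([4, 5, 3, 40, 4, 6])
background = np.full(6, 4.0)
mask = snr_filter(counts, background)
```

Only the bright pixel survives the cut; applied to a full-sky map, the surviving pixels outline statistically significant emission structures such as the extended tail.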
A time warping approach to multiple sequence alignment.
Arribas-Gil, Ana; Matias, Catherine
2017-04-25
We propose an approach for multiple sequence alignment (MSA) derived from the dynamic time warping viewpoint and recent techniques of curve synchronization developed in the context of functional data analysis. Starting from pairwise alignments of all the sequences (viewed as paths in a certain space), we construct a median path that represents the MSA we are looking for. We establish a proof of concept that our method could be an interesting ingredient to include into refined MSA techniques. We present a simple synthetic experiment as well as the study of a benchmark dataset, together with comparisons with two widely used MSA software packages.
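The pairwise building block of this approach, dynamic time warping, can be sketched as follows (a textbook O(nm) implementation for scalar sequences, not the authors' code):

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance and alignment path between two
    scalar sequences, via the standard dynamic-programming recursion."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # backtrack to recover the warping path (pairs of aligned indices)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]
```

The warping paths from all pairwise alignments are what the method synchronizes into a median path representing the MSA.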
Explorations in statistics: the analysis of ratios and normalized data.
Curran-Everett, Douglas
2013-09-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques (these include analysis of covariance) are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
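A small numerical illustration of the peril described above (synthetic data; the line deliberately does not pass through the origin, so the ratio varies with the denominator while regression recovers the true relationship):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(1, 10, 200)                     # denominator variable
y = 5.0 + 2.0 * x + rng.normal(0, 0.5, 200)    # line NOT through the origin

ratio = y / x                       # misleading: equals 2 + 5/x, so it drifts with x
slope, intercept = np.polyfit(x, y, 1)          # regression recovers the truth
```

The ratio is strongly (and spuriously) correlated with the denominator, while the fitted slope and intercept recover the generating relationship, which is the point the installment makes about preferring regression or ANCOVA.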
Parametric statistical change point analysis
Chen, Jie
2000-01-01
This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts and different methodologies are used, namely the likelihood ratio criterion, and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.
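The likelihood ratio criterion mentioned above can be sketched for the simplest case, a single change in the mean of a normal sequence with unknown common variance (an illustrative implementation, not taken from the book):

```python
import numpy as np

def change_point_mle(x):
    """Most likely single change point in the mean of a normal sequence,
    found by maximizing the log-likelihood ratio over all split points.
    Returns (k, stat): the change is estimated to occur after index k-1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    total_ss = np.sum((x - x.mean()) ** 2)      # residual SS under "no change"
    best_k, best_stat = None, -np.inf
    for k in range(1, n):
        left, right = x[:k], x[k:]
        ss = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        stat = n * np.log(total_ss / ss)        # 2*log LR for unknown variance
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat
```

The maximized statistic can then be referred to its null distribution (tabulated or simulated) to decide whether a change point is actually present.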
Perceptual and statistical analysis of cardiac phase and amplitude images
International Nuclear Information System (INIS)
Houston, A.; Craig, A.
1991-01-01
A perceptual experiment was conducted using cardiac phase and amplitude images. Estimates of statistical parameters were derived from the images and the diagnostic potential of human and statistical decisions compared. Five methods were used to generate the images from 75 gated cardiac studies, 39 of which were classified as pathological. The images were presented to 12 observers experienced in nuclear medicine. The observers rated the images using a five-category scale based on their confidence of an abnormality presenting. Circular and linear statistics were used to analyse phase and amplitude image data, respectively. Estimates of mean, standard deviation (SD), skewness, kurtosis and the first term of the spatial correlation function were evaluated in the region of the left ventricle. A receiver operating characteristic analysis was performed on both sets of data and the human and statistical decisions compared. For phase images, circular SD was shown to discriminate better between normal and abnormal than experienced observers, but no single statistic discriminated as well as the human observer for amplitude images. (orig.)
U.S. Naturalizations: Fiscal Year 2003 Metropolitan Statistical Area (MSA) of Residence
Department of Homeland Security — Naturalization is the process by which U.S. citizenship is conferred upon foreign citizens or nationals after fulfilling the requirements established by Congress in...
Statistical analysis of the count and profitability of air conditioners.
Rady, El Houssainy A; Mohamed, Salah M; Abd Elmegaly, Alaa A
2018-08-01
This article presents a statistical analysis of the number and profitability of air conditioners in an Egyptian company. The Kruskal-Wallis test was used to check whether the distribution is the same across the levels of each categorical variable.
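A sketch of such a Kruskal-Wallis comparison on invented data (the groups and values are hypothetical, not the company's figures):

```python
import numpy as np
from scipy import stats

# hypothetical monthly profits for three air-conditioner categories
rng = np.random.default_rng(42)
group_a = rng.normal(100, 10, 30)
group_b = rng.normal(100, 10, 30)
group_c = rng.normal(130, 10, 30)   # one category with a shifted distribution

H, p = stats.kruskal(group_a, group_b, group_c)
```

A small p-value leads to rejecting the hypothesis that all categories share the same distribution, without assuming normality as a one-way ANOVA would.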
2010-01-01
... means a natural person, corporation, or other business entity. (m) Relevant metropolitan statistical... median family income for the metropolitan statistical area (MSA), if a depository organization is located... exclusively to the business of retail merchandising or manufacturing; (ii) A person whose management functions...
Statistical analysis of subjective preferences for video enhancement
Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli
2010-02-01
Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
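The binary logistic regression analysis of pairwise preferences can be sketched as a Bradley-Terry-style fit, which places items on a perceptual scale much like Thurstone scaling while supporting inferential statistics (an illustrative implementation with invented comparison counts, not the study's data):

```python
import numpy as np
from scipy.optimize import minimize

def bradley_terry(wins, n_items):
    """Fit item scores from pairwise preferences by logistic regression:
    P(i preferred to j) = sigmoid(theta_i - theta_j).
    `wins` is a list of (winner, loser) index pairs."""
    wins = np.asarray(wins)

    def neg_log_lik(free):
        theta = np.concatenate([[0.0], free])   # pin item 0 at 0 for identifiability
        d = theta[wins[:, 0]] - theta[wins[:, 1]]
        return np.sum(np.log1p(np.exp(-d)))     # -log sigmoid(d), summed

    res = minimize(neg_log_lik, np.zeros(n_items - 1), method="BFGS")
    return np.concatenate([[0.0], res.x])

# hypothetical preferences among 3 enhancement levels:
# level 2 is preferred most often, level 0 least often
comparisons = ([(1, 0)] * 8 + [(0, 1)] * 2 + [(2, 0)] * 9 + [(0, 2)] * 1
               + [(2, 1)] * 7 + [(1, 2)] * 3)
scores = bradley_terry(comparisons, 3)
```

The fitted scores order the enhancement levels on an interval-like scale, and standard logistic-regression machinery then provides the significance tests that plain Thurstone scaling lacks.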
Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti
2016-07-01
A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single study or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Statistical Analysis of the Exchange Rate of Bitcoin.
Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen
2015-01-01
Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
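The fitting-and-comparison procedure can be sketched with a few of the candidate families, using synthetic heavy-tailed data in place of the real exchange-rate series (the generalized hyperbolic family that won in the paper is omitted here for brevity; the comparison below uses AIC):

```python
import numpy as np
from scipy import stats

# synthetic heavy-tailed "log-returns" (a stand-in for real exchange-rate data)
rng = np.random.default_rng(7)
log_returns = stats.t.rvs(df=3, scale=0.02, size=1000, random_state=rng)

candidates = {"normal": stats.norm, "student-t": stats.t, "laplace": stats.laplace}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(log_returns)                      # maximum likelihood fit
    ll = np.sum(dist.logpdf(log_returns, *params))
    aic[name] = 2 * len(params) - 2 * ll                # penalized goodness of fit

best = min(aic, key=aic.get)
```

On genuinely heavy-tailed returns the normal fit is decisively rejected by this criterion, which mirrors why flexible families like the generalized hyperbolic distribution fit financial log-returns best.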
Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation
Energy Technology Data Exchange (ETDEWEB)
Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)
2005-09-26
The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained the expression of the mean square displacement and effective exponent as functions of time and percolation probability by statistical analysis and made a comparison with simulations. We got a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.
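A minimal sketch of a growing self-avoiding walk on a two-dimensional site-percolation lattice and its mean square displacement (illustrative parameters, not the paper's):

```python
import numpy as np

def growing_saw_msd(p=0.8, steps=50, trials=200, seed=0):
    """Mean square displacement (MSD) of a growing self-avoiding walk on a
    2-D site-percolation lattice: at each step the walker moves to a random
    occupied, not-yet-visited neighbour, and stops when trapped."""
    rng = np.random.default_rng(seed)
    L = 2 * steps + 3              # lattice large enough that the walk stays inside
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    msd = np.zeros(steps + 1)
    counts = np.zeros(steps + 1)
    for _ in range(trials):
        lattice = rng.random((L, L)) < p   # each site occupied with probability p
        x = y = L // 2
        lattice[x, y] = True               # the walk starts on an occupied site
        visited = {(x, y)}
        for t in range(steps + 1):
            msd[t] += (x - L // 2) ** 2 + (y - L // 2) ** 2
            counts[t] += 1
            options = [(x + dx, y + dy) for dx, dy in moves
                       if lattice[x + dx, y + dy] and (x + dx, y + dy) not in visited]
            if not options:                # trapped: this realization ends
                break
            x, y = options[rng.integers(len(options))]
            visited.add((x, y))
    return msd / np.maximum(counts, 1)     # average over surviving walkers
```

Plotting the returned MSD against time on log-log axes gives the effective exponent whose dependence on the percolation probability p the paper analyzes.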
Small Unit Decision Making (SUDM) Assessment Battery
2014-11-21
above. KMO MSA, or the Kaiser-Meyer-Olkin Measure of Sampling Adequacy, tests whether it is appropriate to conduct a factor analysis with the sample population. A KMO ...fell on factor one and the factor. Table 7. Summary of Unrotated Factor Analysis for Each Instrument (columns: Variable, Number of Items, KMO MSA, Number of
Paek, Hye-Jin; Reid, Leonard N; Choi, Hojoon; Jeong, Hyun Ju
2010-10-01
Tobacco studies indicate that health-related information in cigarette advertising leads consumers to underestimate the detrimental health effects of smoking and contributes to their smoking-related perceptions, beliefs, and attitudes. This study examined the frequencies and kinds of implicit health information in cigarette advertising across five distinct smoking eras covering the years 1954-2003. Analysis of 1,135 cigarette advertisements collected through multistage probability sampling of three popular consumer magazines found that the level of implicit health information (i.e., "light" cigarette, cigarette pack color, verbal and visual health cues, cigarette portrayals, and human model-cigarette interaction) in post-Master Settlement Agreement [MSA] era ads is similar to the level in ads from early smoking eras. Specifically, "light" cigarettes were frequently promoted, and presence of light colors in cigarette packs seemed dominant after the probroadcast ban era. Impressionistic verbal health cues (e.g., soft, mild, and refreshing) appeared more frequently in post-MSA era ads than in pre-MSA era ads. Most notably, a majority of the cigarette ads portrayed models smoking, lighting, or offering a cigarette to others. The potential impact of implicit health information is discussed in the contexts of social cognition and Social Cognitive Theory. Policy implications regarding our findings are also detailed.
Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros
1984-01-01
The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed to be a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.
A method for statistical steady state thermal analysis of reactor cores
International Nuclear Information System (INIS)
Whetton, P.A.
1981-01-01
In a previous publication the author presented a method for undertaking statistical steady state thermal analyses of reactor cores. The present paper extends the technique to an assessment of confidence limits for the resulting probability functions which define the probability that a given thermal response value will be exceeded in a reactor core. Establishing such confidence limits is considered an integral part of any statistical thermal analysis and essential if such analyses are to be considered in any regulatory process. In certain applications the use of a best estimate probability function may be justifiable, but it is recognised that a demonstrably conservative probability function is required for any regulatory considerations. (orig.)
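One standard way to attach confidence limits to an exceedance probability estimated from Monte Carlo sampling (a generic sketch, not the author's method) is the Clopper-Pearson interval:

```python
from scipy import stats

def exceedance_ci(n_exceed, n_trials, conf=0.95):
    """Clopper-Pearson confidence interval for the probability that a
    thermal response value is exceeded, given n_exceed exceedances
    observed in n_trials Monte Carlo samples."""
    alpha = 1.0 - conf
    lo = (stats.beta.ppf(alpha / 2, n_exceed, n_trials - n_exceed + 1)
          if n_exceed > 0 else 0.0)
    hi = (stats.beta.ppf(1 - alpha / 2, n_exceed + 1, n_trials - n_exceed)
          if n_exceed < n_trials else 1.0)
    return lo, hi
```

Because the interval is exact (never anti-conservative), its upper bound can serve as the demonstrably conservative probability a regulatory assessment requires.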
A statistical test for outlier identification in data envelopment analysis
Directory of Open Access Journals (Sweden)
Morteza Khodabin
2010-09-01
Full Text Available In the use of peer group data to assess individual, typical or best practice performance, the effective detection of outliers is critical for achieving useful results. In these ‘‘deterministic’’ frontier models, statistical theory is now mostly available. This paper deals with the statistical pared sample method and its capability of detecting outliers in data envelopment analysis. In the presented method, each observation is deleted from the sample once and the resulting linear program is solved, leading to a distribution of efficiency estimates. Based on the achieved distribution, a pared test is designed to identify the potential outlier(s). We illustrate the method through a real data set. The method could be used in a first step, as an exploratory data analysis, before using any frontier estimation.
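The deletion procedure described in the abstract can be illustrated with a toy linear-programming sketch. The code below (illustrative data and function name, not taken from the paper) scores each decision-making unit (DMU) under constant returns to scale against a reference set that excludes it, so an efficient outlier receives a super-efficiency score above 1:

```python
import numpy as np
from scipy.optimize import linprog

def loo_dea_scores(X, Y):
    """Leave-one-out (super-efficiency) input-oriented CCR-DEA scores.

    X : (n, m) inputs, Y : (n, s) outputs, one row per DMU.
    For each DMU k, minimise theta such that a nonnegative combination
    of the OTHER DMUs produces at least Y[k] using at most theta * X[k].
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        others = [j for j in range(n) if j != k]
        # decision variables: [theta, lambda_j for j != k]
        c = np.r_[1.0, np.zeros(n - 1)]
        A_ub, b_ub = [], []
        for i in range(m):   # sum_j lam_j * x_ij - theta * x_ik <= 0
            A_ub.append(np.r_[-X[k, i], X[others, i]])
            b_ub.append(0.0)
        for r in range(s):   # -sum_j lam_j * y_rj <= -y_rk
            A_ub.append(np.r_[0.0, -Y[others, r]])
            b_ub.append(-Y[k, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# single-input / single-output toy data; the last DMU is a clear outlier
X = np.array([[1.0], [2.0], [3.0], [4.0]])
Y = np.array([[1.0], [2.0], [3.0], [8.0]])
scores = loo_dea_scores(X, Y)
```

Observations whose leave-one-out score sits far above 1 dominate the frontier and are candidate outliers; in the toy data the fourth DMU scores 2.0 while the others score 0.5.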
Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis
Konrad, T. G.; Kropfli, R. A.
1975-01-01
Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.
Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance
Glascock, M. D.; Neff, H.; Vaughn, K. J.
2004-06-01
The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.
Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance
International Nuclear Information System (INIS)
Glascock, M. D.; Neff, H.; Vaughn, K. J.
2004-01-01
The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.
Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance
Energy Technology Data Exchange (ETDEWEB)
Glascock, M. D.; Neff, H. [University of Missouri, Research Reactor Center (United States); Vaughn, K. J. [Pacific Lutheran University, Department of Anthropology (United States)
2004-06-15
The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.
Statistical analysis and data management
International Nuclear Information System (INIS)
Anon.
1981-01-01
This report provides an overview of the history of the WIPP Biology Program. The recommendations of the American Institute of Biological Sciences (AIBS) for the WIPP biology program are summarized. The data sets available for statistical analyses and problems associated with these data sets are also summarized. Biological studies base maps are presented. A statistical model is presented to evaluate any correlation between climatological data and small mammal captures. No statistically significant relationship between variance in small mammal captures on Dr. Gennaro's 90m x 90m grid and precipitation records from the Duval Potash Mine was found.
Detecting errors in micro and trace analysis by using statistics
DEFF Research Database (Denmark)
Heydorn, K.
1993-01-01
By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
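The "analysis of precision" idea, comparing the observed scatter of replicate results with the standard deviation predicted from the individual analytical steps, can be sketched as a chi-square test. The numbers below are illustrative, not from the paper:

```python
import numpy as np
from scipy.stats import chi2

def precision_test(results, sigma_pred):
    """Test whether replicates scatter as predicted by the error budget.
    Under statistical control, T = sum((x - xbar)^2) / sigma_pred^2
    follows a chi-square distribution with n - 1 degrees of freedom."""
    x = np.asarray(results, dtype=float)
    n = x.size
    T = np.sum((x - x.mean()) ** 2) / sigma_pred ** 2
    p = chi2.sf(T, df=n - 1)   # small p => more scatter than predicted
    return T, p

# replicate determinations and an a-priori predicted standard deviation
T, p = precision_test([10.0, 10.1, 9.9, 10.2, 9.8], sigma_pred=0.15)
```

A small p-value signals more variability than the error budget predicts, i.e. the method is not in statistical control; a p-value near 1 suggests the predicted uncertainty is overstated.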
Statistical analysis of the BOIL program in RSYST-III
International Nuclear Information System (INIS)
Beck, W.; Hausch, H.J.
1978-11-01
The paper describes a statistical analysis in the RSYST-III program system. Using the example of the BOIL program, it is shown how the effects of inaccurate input data on the output data can be discovered. The existing possibilities of data generation, data handling, and data evaluation are outlined. (orig.) [de
Multivariate statistical analysis of precipitation chemistry in Northwestern Spain
International Nuclear Information System (INIS)
Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T.
1993-01-01
149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs
Multivariate statistical analysis of precipitation chemistry in Northwestern Spain
Energy Technology Data Exchange (ETDEWEB)
Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T. (University of Santiago, Santiago (Spain). Faculty of Mathematics, Dept. of Statistics and Operations Research)
1993-07-01
149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. Firstly, the Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact from pollution from these sources on the immediate environment of the stations is studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs.
SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series
Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory
2018-03-07
This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
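For readers unfamiliar with the 7Q10 statistic, the sketch below computes a rough empirical version on synthetic data: annual minima of the 7-day moving-average flow, followed by a plotting-position estimate of the flow with a 10-year recurrence. SWToolbox itself fits a log-Pearson Type III distribution to the annual minima, so this is only a simplified illustration:

```python
import numpy as np

def seven_q_ten(daily_flows_by_year):
    """Rough empirical 7Q10: annual minima of the 7-day moving-average
    flow, then the value with ~0.1 annual non-exceedance probability.
    (A plotting-position quantile stands in for the distribution fit.)"""
    minima = []
    for flows in daily_flows_by_year:
        q7 = np.convolve(flows, np.ones(7) / 7.0, mode="valid")
        minima.append(q7.min())
    minima = np.sort(minima)
    n = len(minima)
    pp = np.arange(1, n + 1) / (n + 1.0)   # Weibull non-exceedance prob.
    return float(np.interp(0.1, pp, minima))

# 30 synthetic years of lognormally distributed daily flows
rng = np.random.default_rng(0)
years = [np.exp(rng.normal(3.0, 0.5, 365)) for _ in range(30)]
q7q10 = seven_q_ten(years)
```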
Anomalous heat transfer modes of nanofluids: a review based on statistical analysis
2011-01-01
This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practise with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids. PMID:21711932
Anomalous heat transfer modes of nanofluids: a review based on statistical analysis
Sergis, Antonis; Hardalupas, Yannis
2011-05-01
This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practise with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids.
Anomalous heat transfer modes of nanofluids: a review based on statistical analysis
Directory of Open Access Journals (Sweden)
Sergis Antonis
2011-01-01
Full Text Available Abstract This paper contains the results of a concise statistical review analysis of a large amount of publications regarding the anomalous heat transfer modes of nanofluids. The application of nanofluids as coolants is a novel practise with no established physical foundations explaining the observed anomalous heat transfer. As a consequence, traditional methods of performing a literature review may not be adequate in presenting objectively the results representing the bulk of the available literature. The current literature review analysis aims to resolve the problems faced by researchers in the past by employing an unbiased statistical analysis to present and reveal the current trends and general belief of the scientific community regarding the anomalous heat transfer modes of nanofluids. The thermal performance analysis indicated that statistically there exists a variable enhancement for conduction, convection/mixed heat transfer, pool boiling heat transfer and critical heat flux modes. The most popular proposed mechanisms in the literature to explain heat transfer in nanofluids are revealed, as well as possible trends between nanofluid properties and thermal performance. The review also suggests future experimentation to provide more conclusive answers to the control mechanisms and influential parameters of heat transfer in nanofluids.
Modeling of asphalt-rubber rotational viscosity by statistical analysis and neural networks
Directory of Open Access Journals (Sweden)
Luciano Pivoto Specht
2007-03-01
Full Text Available It is of great importance to know binders' viscosity in order to perform the handling, mixing and application processes, and the compaction of asphalt mixes, in highway surfacing. This paper presents the results of viscosity measurements on asphalt-rubber binders prepared in the laboratory. The binders were prepared varying the rubber content, rubber particle size, and the duration and temperature of mixing, all following a statistical design plan. Statistical analysis and artificial neural networks were used to create mathematical models for predicting the binders' viscosity. The comparison between experimental data and results simulated with the generated models showed better performance of the neural network analysis than of the statistical models. The results indicated that rubber content and duration of mixing have the major influence on the observed viscosity over the considered interval of parameter variation.
Common pitfalls in statistical analysis: Odds versus risk
Ranganathan, Priya; Aggarwal, Rakesh; Pramesh, C. S.
2015-01-01
In biomedical research, we are often interested in quantifying the relationship between an exposure and an outcome. “Odds” and “Risk” are the most common terms which are used as measures of association between variables. In this article, which is the fourth in the series of common pitfalls in statistical analysis, we explain the meaning of risk and odds and the difference between the two. PMID:26623395
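The distinction the authors draw can be made concrete with a 2×2 table; the counts below are invented for illustration:

```python
# 2x2 table (hypothetical counts): rows = exposed / unexposed,
# columns = event / no event.
exposed_events, exposed_total = 20, 100
unexposed_events, unexposed_total = 10, 100

risk_exposed = exposed_events / exposed_total              # 0.20
risk_unexposed = unexposed_events / unexposed_total        # 0.10
risk_ratio = risk_exposed / risk_unexposed                 # 2.0

odds_exposed = exposed_events / (exposed_total - exposed_events)            # 20/80
odds_unexposed = unexposed_events / (unexposed_total - unexposed_events)    # 10/90
odds_ratio = odds_exposed / odds_unexposed                 # 2.25
```

For rare outcomes the two measures nearly coincide; at the 10-20% risks used here the odds ratio (2.25) visibly overstates the risk ratio (2.0), which is exactly the pitfall the article warns about.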
Statistical Analysis of the Exchange Rate of Bitcoin.
Directory of Open Access Journals (Sweden)
Jeffrey Chu
Full Text Available Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
Statistical Analysis of the Exchange Rate of Bitcoin
Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen
2015-01-01
Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate. PMID:26222702
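The model-selection step can be sketched with scipy on synthetic heavy-tailed data. The snippet fits a handful of candidate distributions by maximum likelihood and ranks them by AIC; the paper's fifteen candidates and its generalized hyperbolic winner are not reproduced here:

```python
import numpy as np
from scipy import stats

# synthetic heavy-tailed "log-returns" (Student's t, not real Bitcoin data)
rng = np.random.default_rng(42)
log_returns = stats.t.rvs(df=3, scale=0.02, size=2000, random_state=rng)

candidates = {"norm": stats.norm, "t": stats.t,
              "laplace": stats.laplace, "logistic": stats.logistic}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(log_returns)                # maximum likelihood fit
    loglik = dist.logpdf(log_returns, *params).sum()
    aic[name] = 2 * len(params) - 2 * loglik      # lower is better

best = min(aic, key=aic.get)
```

Ranking by AIC (or log-likelihood) is how "best fit" is typically decided among parametric candidates; on this heavy-tailed sample the t distribution beats the normal by a wide margin.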
78 FR 73927 - Single Family Housing Guaranteed Loan Program
2013-12-09
.... The term ``MSA (Metropolitan Statistical Area)'' was added as it is a term the Office of Management... statistics. The term ``new dwelling'' was amended to achieve consistency with other Agency program... Government. One respondent recommended establishing a delinquency goal to improve and monitor a lender's...
Analysis of Variance with Summary Statistics in Microsoft® Excel®
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
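The computation the students are asked to perform needs only the per-group summary statistics. A minimal sketch, checked against scipy's raw-data ANOVA, is:

```python
import numpy as np
from scipy import stats

def anova_from_summary(ns, means, sds):
    """One-way ANOVA F and p from per-group n, mean and sd (ddof=1)."""
    ns, means, sds = map(np.asarray, (ns, means, sds))
    k, N = len(ns), ns.sum()
    grand = np.sum(ns * means) / N
    ss_between = np.sum(ns * (means - grand) ** 2)
    ss_within = np.sum((ns - 1) * sds ** 2)
    F = (ss_between / (k - 1)) / (ss_within / (N - k))
    p = stats.f.sf(F, k - 1, N - k)
    return F, p

# verify against scipy's raw-data ANOVA on arbitrary groups
rng = np.random.default_rng(1)
groups = [rng.normal(loc, 1.0, size=12) for loc in (0.0, 0.4, 1.0)]
F_sum, p_sum = anova_from_summary([len(g) for g in groups],
                                  [g.mean() for g in groups],
                                  [g.std(ddof=1) for g in groups])
F_raw, p_raw = stats.f_oneway(*groups)
```

The summary-statistic formula is algebraically identical to the raw-data computation, which is why the two results agree to machine precision.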
The Australasian Resuscitation in Sepsis Evaluation (ARISE) trial statistical analysis plan.
Delaney, Anthony P; Peake, Sandra L; Bellomo, Rinaldo; Cameron, Peter; Holdgate, Anna; Howe, Belinda; Higgins, Alisa; Presneill, Jeffrey; Webb, Steve
2013-09-01
The Australasian Resuscitation in Sepsis Evaluation (ARISE) study is an international, multicentre, randomised, controlled trial designed to evaluate the effectiveness of early goal-directed therapy compared with standard care for patients presenting to the emergency department with severe sepsis. In keeping with current practice, and considering aspects of trial design and reporting specific to non-pharmacological interventions, our plan outlines the principles and methods for analysing and reporting the trial results. The document is prepared before completion of recruitment into the ARISE study, without knowledge of the results of the interim analysis conducted by the data safety and monitoring committee and before completion of the two related international studies. Our statistical analysis plan was designed by the ARISE chief investigators, and reviewed and approved by the ARISE steering committee. We reviewed the data collected by the research team as specified in the study protocol and detailed in the study case report form. We describe information related to baseline characteristics, characteristics of delivery of the trial interventions, details of resuscitation, other related therapies and other relevant data with appropriate comparisons between groups. We define the primary, secondary and tertiary outcomes for the study, with description of the planned statistical analyses. We have developed a statistical analysis plan with a trial profile, mock-up tables and figures. We describe a plan for presenting baseline characteristics, microbiological and antibiotic therapy, details of the interventions, processes of care and concomitant therapies and adverse events. We describe the primary, secondary and tertiary outcomes with identification of subgroups to be analysed. We have developed a statistical analysis plan for the ARISE study, available in the public domain, before the completion of recruitment into the study. This will minimise analytical bias and
Statistical Analysis Of Tank 19F Floor Sample Results
International Nuclear Information System (INIS)
Harris, S.
2010-01-01
Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
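The report's UCL95% is the classical one-sided upper confidence limit on a mean. Assuming approximately normal analytical results, a sketch with hypothetical concentrations (not the Tank 19F data) is:

```python
import numpy as np
from scipy import stats

def ucl95(samples):
    """One-sided upper 95% confidence limit on the mean concentration,
    assuming approximately normal analytical results:
        UCL95 = xbar + t(0.95, n-1) * s / sqrt(n)"""
    x = np.asarray(samples, dtype=float)
    n = x.size
    return x.mean() + stats.t.ppf(0.95, n - 1) * x.std(ddof=1) / np.sqrt(n)

# six hypothetical scrape-sample concentrations (arbitrary units)
conc = [4.2, 5.1, 3.8, 4.6, 5.0, 4.4]
limit = ucl95(conc)
```

As the abstract notes, the limit depends only on the number of samples, their average, and their standard deviation; more samples or less scatter pull the UCL95% down toward the mean.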
Vector-field statistics for the analysis of time varying clinical gait data.
Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A
2017-01-01
In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analyses of these data are commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures in the event statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration to practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. Tester 1 and tester 2 were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed, respectively. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that in principle, vector-field statistics corroborate what a team of clinical gait experts would classify as practically meaningful pre- versus post time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established. Future research is recommended to assess the usefulness of vector-field analyses
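Waveform-level (1D) statistics of this kind are implemented in packages such as spm1d; the generic sketch below instead uses a permutation distribution of the maximum pointwise t statistic to control the family-wise error over the whole gait cycle. The synthetic waveforms and the 40-60% "effect window" are illustrative only:

```python
import numpy as np

def pointwise_t(a, b):
    """Two-sample t statistic at every time point; a, b: (n_subj, n_time)."""
    na, nb = len(a), len(b)
    va, vb = a.var(axis=0, ddof=1), b.var(axis=0, ddof=1)
    sp = np.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (a.mean(axis=0) - b.mean(axis=0)) / (sp * np.sqrt(1/na + 1/nb))

def max_t_threshold(a, b, n_perm=500, alpha=0.05, seed=0):
    """Critical value of max |t| over random relabelings (family-wise)."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([a, b])
    na = len(a)
    max_ts = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        max_ts.append(np.abs(pointwise_t(pooled[idx[:na]],
                                         pooled[idx[na:]])).max())
    return np.quantile(max_ts, 1 - alpha)

# synthetic "gait" waveforms: group b differs only between samples 40-60
rng = np.random.default_rng(7)
a = rng.normal(0.0, 1.0, (20, 101))
b = rng.normal(0.0, 1.0, (20, 101))
b[:, 40:60] += 3.0                      # large localized effect

t_obs = np.abs(pointwise_t(a, b))
thresh = max_t_threshold(a, b)
significant = t_obs > thresh            # boolean mask over the gait cycle
```

Points whose |t| exceeds the permutation threshold mark the phases of the gait cycle where the groups differ, which is the 1D analogue of rejecting a 0D null hypothesis.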
Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun
2018-01-01
To implement an online statistical analysis function in the information system for air pollution and health impact monitoring, and to obtain data analysis results in real time. Online statistical analysis was implemented on top of the database software using descriptive statistics, time-series analysis and multivariate regression analysis, together with SQL and visual tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts for each data component online, with interactive connections to the database; and generates export sheets that can be loaded directly into R, SAS and SPSS. The information system for air pollution and health impact monitoring thus implements the statistical analysis function online and can provide real-time analysis results to its users.
Introduction to statistics and data analysis with exercises, solutions and applications in R
Heumann, Christian; Shalabh
2016-01-01
This introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. In the experimental sciences and interdisciplinary research, data analysis has become an integral part of any scientific study. Issues such as judging the credibility of data, analyzing the data, evaluating the reliability of the obtained results and finally drawing the correct and appropriate conclusions from the results are vital. The text is primarily intended for undergraduate students in disciplines like business administration, the social sciences, medicine, politics, macroeconomics, etc. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R as well as supplementary material that will enable the reader to quickly adapt all methods to their own applications.
Methodology of comparative statistical analysis of Russian industry based on cluster analysis
Directory of Open Access Journals (Sweden)
Sergey S. Shishulin
2017-01-01
Full Text Available The article is devoted to exploring the possibilities of applying multidimensional statistical analysis to the study of industrial production, on the basis of comparing its growth rates and structure with those of other developed and developing countries of the world. The purpose of this article is to determine the optimal set of statistical methods and to present the results of their application to industrial production data. The data include such indicators as output, gross value added, the number of employed and other indicators from the system of national accounts and operational business statistics. The objects of observation are the industries of the countries of the Customs Union, the United States, Japan and Europe in 2005-2015. The research tools range from simple transformation methods and graphical and tabular visualization of data to methods of multivariate statistical analysis. In particular, using a specialized software package (SPSS), the principal components method, discriminant analysis, and hierarchical methods of cluster analysis (Ward's method and k-means) were applied. Applying the principal components method to the initial data makes it possible to substantially and effectively reduce the initial space of industrial production data. For example, in analyzing the structure of industrial production, the reduction was from fifteen industries to three basic, well-interpreted factors: relatively extractive industries (with a low degree of processing), high-tech industries, and consumer goods (medium-technology sectors). At the same time, comparison of the results of applying cluster analysis to the initial data and to the data obtained with the principal components method established that clustering industrial production data on the basis of the new factors significantly improves the clustering results. As a result of analyzing the parameters of
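The dimension-reduction step described above can be sketched without specialised software: PCA via the singular value decomposition of the standardized indicator matrix. The "countries × 15 indicators" table below is synthetic, generated from three latent factors to mimic the three interpreted factors in the study:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the column-standardized data matrix.
    Returns component scores and the fraction of variance explained."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:n_components].T
    explained = (S ** 2) / (S ** 2).sum()
    return scores, explained[:n_components]

# synthetic "countries x 15 industry indicators" table, 3 latent factors
rng = np.random.default_rng(3)
factors = rng.normal(size=(40, 3))
loadings = rng.normal(size=(3, 15))
X = factors @ loadings + 0.1 * rng.normal(size=(40, 15))

scores, explained = pca(X, n_components=3)
```

The component scores, rather than the fifteen raw indicators, would then be fed to a clustering routine such as Ward's method or k-means, mirroring the two-stage procedure in the abstract.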
Data analysis for radiological characterisation: Geostatistical and statistical complementarity
International Nuclear Information System (INIS)
Desnoyers, Yvon; Dubot, Didier
2012-01-01
Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and final survey. At each stage, collecting data sufficient to support the required conclusions is a considerable challenge. In particular two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorization and optimisation of the materials to be removed and the final survey to demonstrate compliance with clearance levels. On the one hand the latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. On the other hand a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of a spatial continuity for radiological contamination. Thus geostatistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). This way, the radiological characterization of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. Combined with
Multivariate statistical pattern recognition system for reactor noise analysis
International Nuclear Information System (INIS)
Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.
1976-01-01
A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system
Multivariate statistical pattern recognition system for reactor noise analysis
International Nuclear Information System (INIS)
Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.
1975-01-01
A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system. 19 references
RESEARCH OF THE DATA BANK OF STATISTICAL ANALYSIS OF THE ADVERTISING MARKET
Directory of Open Access Journals (Sweden)
Ekaterina F. Devochkina
2014-01-01
Full Text Available The article describes the process of compiling statistical accounts of the Russian advertising market. The author examines the forms of state statistical accounting used in different years, noting their distinctive features and shortcomings. The article also analyses alternative sources of numerical information on the Russian advertising market.
Statistical Analysis for High-Dimensional Data : The Abel Symposium 2014
Bühlmann, Peter; Glad, Ingrid; Langaas, Mette; Richardson, Sylvia; Vannucci, Marina
2016-01-01
This book features research contributions from The Abel Symposium on Statistical Analysis for High Dimensional Data, held in Nyvågar, Lofoten, Norway, in May 2014. The focus of the symposium was on statistical and machine learning methodologies specifically developed for inference in “big data” situations, with particular reference to genomic applications. The contributors, who are among the most prominent researchers on the theory of statistics for high dimensional inference, present new theories and methods, as well as challenging applications and computational solutions. Specific themes include, among others, variable selection and screening, penalised regression, sparsity, thresholding, low dimensional structures, computational challenges, non-convex situations, learning graphical models, sparse covariance and precision matrices, semi- and non-parametric formulations, multiple testing, classification, factor models, clustering, and preselection. Highlighting cutting-edge research and casting light on...
Halo statistics analysis within medium volume cosmological N-body simulation
Directory of Open Access Journals (Sweden)
Martinović N.
2015-01-01
Full Text Available In this paper we present a halo statistics analysis of a ΛCDM N-body cosmological simulation (from first halo formation until z = 0). We study the mean major merger rate as a function of time, considering both per-redshift and per-Gyr dependence. For the latter we find that it scales as the well-known power law (1 + z)^n, for which we obtain n = 2.4. The halo mass function and halo growth function are derived and compared with both analytical and empirical fits. We analyse halo growth throughout the entire simulation, making it possible to continuously monitor the evolution of halo number density within given mass ranges. The halo formation redshift is studied, exploring the possibility of a new simple preliminary analysis during the simulation run. Visualization of the simulation is portrayed as well. At redshifts z = 0−7, halos from the simulation have good statistics for further analysis, especially in the mass range of 10^11 − 10^14 M⊙/h. [Project 176021: 'Visible and invisible matter in nearby galaxies: theory and observations']
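The per-Gyr merger-rate scaling quoted above, rate ∝ (1 + z)^n, can be recovered from data by a log-log least-squares fit. The sketch below uses synthetic rates generated with n = 2.4 (the paper's fitted exponent); the amplitude and scatter level are assumptions for illustration, not the simulation's actual values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic merger rates following rate = A * (1 + z)^n with n = 2.4
z = np.linspace(0.0, 7.0, 30)
rate = 0.05 * (1.0 + z) ** 2.4 * np.exp(rng.normal(0.0, 0.05, z.size))

# Log-log least squares: log(rate) = log(A) + n * log(1 + z),
# so the slope of the fitted line is the exponent n
n_fit, logA_fit = np.polyfit(np.log(1.0 + z), np.log(rate), 1)
print(round(n_fit, 2))
```

The fitted slope recovers the input exponent to within the injected scatter.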
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
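SOCR Analyses itself is a Java toolkit, but the parametric and non-parametric comparisons the abstract lists can be illustrated with SciPy. The two samples below are synthetic, generated only to show the calls side by side.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic samples standing in for two groups being compared
a = rng.normal(10.0, 2.0, 50)
b = rng.normal(12.0, 2.0, 50)

# Parametric category: two-sample t-test
t_stat, t_p = stats.ttest_ind(a, b)

# Non-parametric category, as in the abstract
u_stat, u_p = stats.ranksums(a, b)   # Wilcoxon rank-sum test
h_stat, h_p = stats.kruskal(a, b)    # Kruskal-Wallis test

print(round(t_p, 4), round(u_p, 4), round(h_p, 4))
```

With a clear mean shift, all three tests agree that the groups differ.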
Cao, Bei; Guo, XiaoYan; Chen, Ke; Song, Wei; Huang, Rui; Wei, QianQian; Zhao, Bi; Shang, Hui-Fang
2016-03-01
Oxidative stress is involved in the pathogenesis of multiple system atrophy (MSA). Creatine, which is converted to creatinine, has an anti-oxidative effect. Our aim is to clarify the correlations between creatinine and the occurrence as well as the progression of MSA. A total of 115 patients with probable MSA and 115 age- and gender-matched healthy controls were included in the study. The serum creatinine levels of all patients and controls were evaluated and compared. The mean age of MSA patients was 58.18 ± 8.67 years and the mean disease duration was 2.85 ± 1.71 years. The creatinine level of MSA patients was significantly lower than that of healthy controls (P creatinine quartiles compared with the lowest creatinine quartiles. In a gender-specific analysis, patients in the highest and second quartiles of creatinine level had a decreased occurrence compared with patients in the lowest quartile among females, but not among males. The serum level of creatinine was not found to be correlated with the mean rate of annualised changes, nor with other independent factors such as age, body mass index (BMI), sex, Unified MSA Rating Scale (UMSARS) scores and disease duration at the initial visit in patients with MSA. A high level of serum creatinine may be associated with a low occurrence of MSA in the Chinese population, especially in females. However, serum creatinine does not deteriorate or ameliorate the progression of MSA.
Short-run and Current Analysis Model in Statistics
Directory of Open Access Journals (Sweden)
Constantin Anghelache
2006-01-01
Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. To this end, a system of EUROSTAT short-run indicators has been set up and recommended for use by the member countries. On the basis of these indicators, regular (usually monthly) analyses are carried out covering: the determination of production dynamics; the evaluation of short-run investment volume; the development of turnover; wage evolution; employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on data sources offered by the World Bank, the World Resources Institute and the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on three fields: human resources, environment and economic performance. The paper ends with a case study on the situation of Romania, for which all these indicators were used.
Short-run and Current Analysis Model in Statistics
Directory of Open Access Journals (Sweden)
Constantin Mitrut
2006-03-01
Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. To this end, a system of EUROSTAT short-run indicators has been set up and recommended for use by the member countries. On the basis of these indicators, regular (usually monthly) analyses are carried out covering: the determination of production dynamics; the evaluation of short-run investment volume; the development of turnover; wage evolution; employment; the price indexes and the consumer price index (inflation); the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on data sources offered by the World Bank, the World Resources Institute and the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on three fields: human resources, environment and economic performance. The paper ends with a case study on the situation of Romania, for which all these indicators were used.
Statistical analysis of proteomics, metabolomics, and lipidomics data using mass spectrometry
Mertens, Bart
2017-01-01
This book presents an overview of computational and statistical design and analysis of mass spectrometry-based proteomics, metabolomics, and lipidomics data. This contributed volume provides an introduction to the special aspects of statistical design and analysis with mass spectrometry data for the new omic sciences. The text discusses common aspects of design and analysis between and across all (or most) forms of mass spectrometry, while also providing special examples of application with the most common forms of mass spectrometry. Also covered are applications of computational mass spectrometry not only in clinical study but also in the interpretation of omics data in plant biology studies. Omics research fields are expected to revolutionize biomolecular research by the ability to simultaneously profile many compounds within either patient blood, urine, tissue, or other biological samples. Mass spectrometry is one of the key analytical techniques used in these new omic sciences. Liquid chromatography mass ...
Three-Dimensional Assembly Tolerance Analysis Based on the Jacobian-Torsor Statistical Model
Directory of Open Access Journals (Sweden)
Peng Heping
2017-01-01
Full Text Available The unified Jacobian-Torsor model has been developed for deterministic (worst-case) tolerance analysis. This paper presents a comprehensive model for performing statistical tolerance analysis by integrating the unified Jacobian-Torsor model and Monte Carlo simulation. In this model, an assembly is sub-divided into surfaces, and Small Displacements Torsor (SDT) parameters are used to express the relative position between any two surfaces of the assembly. A 3D dimension chain can then be created using a surface graph of the assembly, and the unified Jacobian-Torsor model is developed based on the effect of each functional element on the overall functional requirements of the product. Finally, Monte Carlo simulation is implemented for the statistical tolerance analysis. A numerical example demonstrates the capability of the proposed method in handling three-dimensional assembly tolerance analysis.
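A minimal illustration of statistical versus worst-case tolerance analysis, using a plain one-dimensional linear stack-up in place of the full Jacobian-Torsor chain; the part dimensions and tolerances are invented for the example, and each ± tolerance is treated as a 3-sigma bound.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical linear stack of three parts (mm) with +/- tolerances
nominals = np.array([10.0, 20.0, 5.0])
tols = np.array([0.05, 0.08, 0.03])   # interpreted as 3-sigma half-widths

# Monte Carlo simulation of the functional requirement (total length)
N = 100_000
samples = rng.normal(nominals, tols / 3.0, size=(N, 3))
total = samples.sum(axis=1)

# Deterministic worst case vs statistical (root-sum-square) spread
worst_case = tols.sum()
rss = np.sqrt((tols ** 2).sum())
print(round(3.0 * total.std(), 3), round(rss, 3), round(worst_case, 3))
```

The simulated 3-sigma spread matches the analytical root-sum-square value and is noticeably tighter than the worst-case sum, which is the usual argument for statistical tolerance analysis.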
SAS and R data management, statistical analysis, and graphics
Kleinman, Ken
2009-01-01
An All-in-One Resource for Using SAS and R to Carry out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applicat
Statistical methods for data analysis in particle physics
AUTHOR|(CDS)2070643
2015-01-01
This concise set of course-based notes provides the reader with the main concepts and tools to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as reminder from advanced undergraduate studies, yet also in view to clearly distinguish the Frequentist versus Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits as many applications in HEP concern hypothesis testing, where often the main goal is to provide better and better limits so as to be able to distinguish eventually between competing hypotheses or to rule out some of them altogether. Many worked examples will help newcomers to the field and graduate students to understand the pitfalls in applying theoretical concepts to actual data
Statistical Analysis of 30 Years Rainfall Data: A Case Study
Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.
2017-07-01
Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures, and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall at the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return periods of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The results of the analysis paved the way to determine the proper onset and withdrawal of the monsoon, which can be used for land preparation and sowing.
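The return-period computation via a plotting-position formula can be sketched as follows. The Weibull formula P = m/(n+1) is one of the classical plotting positions such studies compare; the annual-maximum rainfall values below are hypothetical, not the Trichy gauge record.

```python
import numpy as np

# Hypothetical annual maximum rainfall series (mm), 15 years
annual_max = np.array([612., 540., 830., 475., 910., 701., 655., 580.,
                       760., 498., 885., 640., 530., 720., 602.])

# Weibull plotting position: exceedance probability P = m/(n+1),
# return period T = 1/P, with rank m = 1 for the largest event
x = np.sort(annual_max)[::-1]
m = np.arange(1, x.size + 1)
P = m / (x.size + 1)
T = 1.0 / P

# Variability statistics mentioned in the abstract
mean, sd = annual_max.mean(), annual_max.std(ddof=1)
cv = sd / mean
print(round(T[0], 1), round(cv, 3))
```

With 15 years of record, the largest observed event is assigned a return period of n + 1 = 16 years; the coefficient of variation summarizes year-to-year variability.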
Differential evolution-simulated annealing for multiple sequence alignment
Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.
2017-10-01
Multiple sequence alignments (MSA) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments (MSAs) based on structural information, non-gap percentage and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem solved by the hybrid evolutionary algorithm; thus, we name the algorithm DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates and insertion rates. The proposed hybrid algorithm obtained acceptable solutions, particularly for the MSA problem evaluated on the three objectives.
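A minimal sketch of the hybrid idea: a DE mutation/crossover step combined with an SA-like acceptance rule, applied here to a toy continuous objective rather than the paper's three-objective alignment score. The population size, F, CR and cooling rate are assumptions, and the simplified mutation does not exclude the target index as strict DE/rand/1 would.

```python
import numpy as np

rng = np.random.default_rng(4)

def objective(v):
    # Toy stand-in for an alignment score (sphere function, minimized)
    return float(np.sum(v ** 2))

dim, pop_size, F, CR = 5, 20, 0.7, 0.9   # assumed control parameters
pop = rng.uniform(-5.0, 5.0, (pop_size, dim))
fit = np.array([objective(p) for p in pop])
best = fit.min()
temp = 1.0

for gen in range(200):
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])  # DE step
        f_trial = objective(trial)
        df = f_trial - fit[i]
        # SA-like selection: always accept improvements, sometimes accept worse
        if df < 0 or rng.random() < np.exp(-df / temp):
            pop[i], fit[i] = trial, f_trial
        best = min(best, fit[i])
    temp *= 0.98   # geometric cooling schedule

print(round(best, 4))
```

Early on, the high temperature lets the population escape poor regions; as the temperature decays the selection becomes nearly greedy, as in plain DE.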
Oliveira Mendes, Thiago de; Pinto, Liliane Pereira; Santos, Laurita dos; Tippavajhala, Vamshi Krishna; Téllez Soto, Claudio Alberto; Martin, Airton Abrahão
2016-07-01
The analysis of biological systems by spectroscopic techniques involves the evaluation of hundreds to thousands of variables. Hence, different statistical approaches are used to elucidate regions that discriminate classes of samples and to propose new vibrational markers for explaining various phenomena like disease monitoring, mechanisms of action of drugs, food, and so on. However, the technical statistics are not always widely discussed in applied sciences. In this context, this work presents a detailed discussion including the various steps necessary for proper statistical analysis. It includes univariate parametric and nonparametric tests, as well as multivariate unsupervised and supervised approaches. The main objective of this study is to promote proper understanding of the application of various statistical tools in these spectroscopic methods used for the analysis of biological samples. The discussion of these methods is performed on a set of in vivo confocal Raman spectra of human skin analysis that aims to identify skin aging markers. In the Appendix, a complete routine of data analysis is executed in a free software that can be used by the scientific community involved in these studies.
A method for statistical steady state thermal analysis of reactor cores
International Nuclear Information System (INIS)
Whetton, P.A.
1980-01-01
This paper presents a method for performing a statistical steady state thermal analysis of a reactor core. The technique is only outlined here since detailed thermal equations are dependent on the core geometry. The method has been applied to a pressurised water reactor core and the results are presented for illustration purposes. Random hypothetical cores are generated using the Monte-Carlo method. The technique shows that by splitting the parameters into two types, denoted core-wise and in-core, the Monte Carlo method may be used inexpensively. The idea of using extremal statistics to characterise the low probability events (i.e. the tails of a distribution) is introduced together with a method of forming the final probability distribution. After establishing an acceptable probability of exceeding a thermal design criterion, the final probability distribution may be used to determine the corresponding thermal response value. If statistical and deterministic (i.e. conservative) thermal response values are compared, information on the degree of pessimism in the deterministic method of analysis may be inferred and the restrictive performance limitations imposed by this method relieved. (orig.)
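The core-wise/in-core parameter split and the use of extremal statistics for the distribution tail can be sketched as follows. All distributions, channel counts and the 625 K criterion are invented for illustration; the geometry-dependent thermal equations are omitted, as the paper itself notes they must be.

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-level Monte Carlo: "core-wise" parameters sampled once per hypothetical
# core, "in-core" parameters sampled once per channel (hypothetical numbers)
n_cores, n_channels = 2000, 100
peak_temps = np.empty(n_cores)
for i in range(n_cores):
    core_bias = rng.normal(0.0, 2.0)               # core-wise uncertainty (K)
    channels = rng.normal(600.0, 5.0, n_channels)  # in-core variation (K)
    peak_temps[i] = core_bias + channels.max()     # hot-channel temperature

# Extremal statistics: per-core maxima are approximately Gumbel-distributed;
# estimate the Gumbel parameters by the method of moments
euler_gamma = 0.5772156649
beta = peak_temps.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = peak_temps.mean() - euler_gamma * beta

# Probability of exceeding a thermal design criterion (assumed 625 K)
criterion = 625.0
p_exceed = 1.0 - np.exp(-np.exp(-(criterion - mu) / beta))
print(round(mu, 1), round(p_exceed, 5))
```

The fitted tail distribution gives the low probability of exceeding the criterion directly, which is the quantity compared against the acceptable exceedance probability in the abstract.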
Statistical analysis of first period of operation of FTU Tokamak
International Nuclear Information System (INIS)
Crisanti, F.; Apruzzese, G.; Frigione, D.; Kroegler, H.; Lovisetto, L.; Mazzitelli, G.; Podda, S.
1996-09-01
On the FTU Tokamak, plasma physics operations started on 20/4/90. The first plasma had a plasma current Ip = 0.75 MA for about one second. The experimental phase lasted until 7/7/94, when a long shutdown began for installing the toroidal limiter on the inner side of the vacuum vessel. In these four years of operation, plasma experiments were successfully carried out, e.g. single and multiple pellet injection experiments; full current drive up to Ip = 300 kA was obtained using waves at the Lower Hybrid frequency; and analysis of ohmic plasma parameters with different materials (from low-Z silicon to high-Z tungsten) as the plasma-facing element was performed. In this work a statistical analysis of the full period of operation is presented. Moreover, a comparison with statistical data from other Tokamaks is attempted
Griffiths, Dawn
2009-01-01
Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics
Using R and RStudio for data management, statistical analysis and graphics
Horton, Nicholas J
2015-01-01
This is the second edition of the popular book on using R for statistical analysis and graphics. The authors, who run a popular blog supplementing their books, have focused on adding many new examples to this new edition. These examples are presented primarily in new chapters based on the following themes: simulation, probability, statistics, mathematics/computing, and graphics. The authors have also added many other updates, including a discussion of RStudio, a very popular development environment for R.
Statistical analysis of absorptive laser damage in dielectric thin films
International Nuclear Information System (INIS)
Budgor, A.B.; Luria-Budgor, K.F.
1978-01-01
The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze time-to-damage and intensity-to-damage statistics obtained when irradiating thin-film-coated samples of SiO2, ZrO2, and Al2O3 with tightly focused laser beams. The data used are furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained
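Fitting a Weibull law to time-to-damage data and checking it on the linearized CDF (the classical Weibull plot, whose correlation coefficient is the quantity the abstract reports) can be sketched as follows; the data are simulated with an assumed shape of 2.0, not Milam's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated time-to-damage data (arbitrary units), true shape 2.0, scale 3.0
data = stats.weibull_min.rvs(2.0, scale=3.0, size=500, random_state=rng)

# Maximum-likelihood Weibull fit with the location fixed at zero
shape, loc, scale = stats.weibull_min.fit(data, floc=0)

# Linearized CDF check: ln(-ln(1-F)) = shape*ln(t) - shape*ln(scale),
# so a straight line on the Weibull plot indicates a good fit
x = np.sort(data)
F = (np.arange(1, x.size + 1) - 0.5) / x.size
slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
r = np.corrcoef(np.log(x), np.log(-np.log(1.0 - F)))[0, 1]
print(round(shape, 2), round(slope, 2), round(r, 3))
```

Both the MLE shape and the Weibull-plot slope recover the input shape, and the plot's correlation coefficient lands well above 0.9, mirroring the quality of fit described in the abstract.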
International Nuclear Information System (INIS)
Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi
1991-01-01
This report shows how the stress corrosion cracking life of fuel cladding tubes is evaluated by applying statistical techniques to results obtained from several testing methods. The statistical distribution of the limiting values of constant-load stress corrosion cracking life, the statistical analysis based on a probabilistic interpretation of constant-load stress corrosion cracking life, and the statistical analysis of stress corrosion cracking life obtained by the slow strain rate test (SSRT) method are described. (K.I.)
Directory of Open Access Journals (Sweden)
Mun Kyung eSunwoo
2014-06-01
Full Text Available Multiple system atrophy (MSA) is an adult-onset sporadic neurodegenerative disease. Because the prognosis of MSA is fatal, neuroprotective or regenerative strategies may be invaluable in MSA treatment. Previously, we obtained clinical and imaging evidence that mesenchymal stem cell (MSC) treatment could have a neuroprotective role in MSA patients. In the present study, we evaluated the effects of MSC therapy on longitudinal changes in subcortical deep grey matter volumes and cortical thickness and their association with cognitive performance. Clinical and imaging data were obtained from our previous randomized trial of autologous MSC in MSA patients. During 1-year follow-up, we assessed longitudinal differences in subcortical deep grey matter volumes and cortical thickness between the placebo (n=15) and MSC (n=11) groups. Next, we performed correlation analysis between the changes in cortical thickness and changes in the Korean version of the Montreal Cognitive Assessment (MoCA) scores and detailed cognitive performance. There were no significant differences in age, gender ratio, disease duration, clinical severity, MoCA score, or education level between the groups. The subcortical volumetric analysis revealed that the changes in deep grey matter volumes of the caudate, putamen, and thalamus did not differ significantly between the groups. The areas of cortical thinning over time in the placebo group were more extensive, including the frontal, temporal, and parietal areas, whereas these areas in the MSC group were less extensive. Correlation analysis indicated that declines in MoCA scores and phonemic fluency were significantly correlated with cortical thinning of the frontal and posterior temporal areas and anterior temporal areas in MSA patients, respectively. These results suggest that MSC treatment in patients with MSA may modulate cortical thinning over time and related cognitive performance, inferring a future therapeutic candidate for cognitive
Implementation and statistical analysis of Metropolis algorithm for SU(3)
International Nuclear Information System (INIS)
Katznelson, E.; Nobile, A.
1984-12-01
In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results have normal distribution. We demonstrate that in this case error analysis can be carried on in a simple way and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)
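The abstract's two points, Metropolis sampling and error analysis that relies on (approximately) normal statistics of the output, can be illustrated on a toy one-dimensional action in place of the SU(3) gauge theory; the step size and batch length below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Metropolis sampling of a toy one-dimensional "action" S(x) = x^2/2,
# i.e. a standard normal target distribution
def action(x):
    return 0.5 * x * x

x, step = 0.0, 1.5
n_steps = 50_000
samples = np.empty(n_steps)
for i in range(n_steps):
    prop = x + rng.uniform(-step, step)
    if rng.random() < np.exp(action(x) - action(prop)):  # accept/reject
        x = prop
    samples[i] = x

# Error analysis via batch means: means of blocks of correlated samples
# are approximately independent and normally distributed
batch = 100
means = samples.reshape(-1, batch).mean(axis=1)
estimate = means.mean()
std_error = means.std(ddof=1) / np.sqrt(means.size)
print(round(estimate, 3), round(std_error, 4))
```

Grouping correlated output into batches before quoting an error bar is the kind of measurement-strategy choice the abstract argues affects the reliability of the simulation.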
International Nuclear Information System (INIS)
EI-Shanshoury, G.I.
2011-01-01
Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of some statistical distributions used in reliability, in order to reach the best-fit distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best-fit distribution for modeling the failure rate.
Statistical mechanical analysis of LMFBR fuel cladding tubes
International Nuclear Information System (INIS)
Poncelet, J.-P.; Pay, A.
1977-01-01
The most important design requirement on fuel pin cladding for LMFBRs is its mechanical integrity. Disruptive factors include internal pressure from mixed-oxide fuel fission gas release, thermal stresses and high-temperature creep, neutron-induced differential void swelling as a source of stress in the cladding, irradiation creep of the stainless steel material, and corrosion by fission products. Under irradiation these load-restraining mechanisms are accentuated by stainless steel embrittlement and strength alterations. To account for the numerous uncertainties involved in the analysis by theoretical models and computer codes, statistical tools such as Monte Carlo simulation methods are unavoidably required. Thanks to these techniques, uncertainties in nominal characteristics, material properties and environmental conditions can be linked up in a correct way and used for a more accurate conceptual design. First, a thermal creep damage index is set up through a sufficiently sophisticated clad physical analysis including arbitrary time dependence of power and neutron flux as well as effects of sodium temperature, burnup and steel mechanical behavior. Although this strain-limit approach implies a more general but time-consuming model, in return the net output is improved and, e.g., clad temperature, stress and strain maxima may be easily assessed. A full spectrum of variables is statistically treated to account for their probability distributions. Creep damage probability may be obtained and can contribute to a quantitative fuel probability estimation
A robust statistical method for association-based eQTL analysis.
Directory of Open Access Journals (Sweden)
Ning Jiang
Full Text Available It has been well established that the theoretical kernel of the recently surging genome-wide association study (GWAS) is statistical inference of linkage disequilibrium (LD) between a tested genetic marker and a putative locus affecting a disease trait. However, LD analysis is vulnerable to several confounding factors, of which population stratification is the most prominent. Whilst many methods have been proposed to correct for the influence, either by predicting the structure parameters or by correcting inflation in the test statistic due to stratification, these may not be feasible or may impose further statistical problems in practical implementation. We propose here a novel statistical method to control spurious LD in GWAS arising from population structure by incorporating a control marker into the test for significance of genetic association of a polymorphic marker with phenotypic variation of a complex trait. The method avoids the need for structure prediction, which may be infeasible or inadequate in practice, and accounts properly for a varying effect of population stratification on different regions of the genome under study. The utility and statistical properties of the new method were tested through an intensive computer simulation study and an association-based genome-wide mapping of expression quantitative trait loci in genetically divergent human populations. The analyses show that the new method confers improved statistical power for detecting genuine genetic associations in subpopulations and effective control of spurious associations stemming from population structure, when compared with two other popularly implemented methods in the GWAS literature.
Ratner, Bruce
2011-01-01
The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has
Constitution of an incident database suited to statistical analysis and examples
International Nuclear Information System (INIS)
Verpeaux, J.L.
1990-01-01
The Nuclear Protection and Safety Institute (IPSN) has set up and is developing an incident database, which is used for the management and analysis of incidents encountered in French PWR plants. IPSN has already carried out several statistical analyses of incidents or safety-important events, and is improving its database on the basis of the experience gained from these various studies. A description of the analysis method and of the developed database is presented
A new statistic for the analysis of circular data in gamma-ray astronomy
Protheroe, R. J.
1985-01-01
A new statistic is proposed for the analysis of circular data. The statistic is designed specifically for situations where a test of uniformity is required which is powerful against alternatives in which a small fraction of the observations is grouped in a small range of directions, or phases.
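For context, the classical Rayleigh test is the standard uniformity statistic that such proposals are measured against in gamma-ray astronomy; the sketch below applies it to a mixture of uniform phases and a narrow peak. The mixture fractions and peak width are invented, and this is the baseline test, not the new statistic the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(7)

def rayleigh_test(phases):
    # Classical Rayleigh test of uniformity on the circle; the asymptotic
    # p-value is exp(-R^2/n), where R is the resultant vector length
    n = phases.size
    C, S = np.cos(phases).sum(), np.sin(phases).sum()
    Z = (C * C + S * S) / n
    return Z, np.exp(-Z)

# Uniform background plus a fraction of observations grouped in a
# narrow range of phases
uniform = rng.uniform(0.0, 2.0 * np.pi, 700)
peaked = rng.normal(1.0, 0.05, 300) % (2.0 * np.pi)
mixed = np.concatenate([uniform, peaked])

z_u, p_uniform = rayleigh_test(uniform)
z_m, p_mixed = rayleigh_test(mixed)
print(p_uniform, p_mixed)
```

The Rayleigh test loses power as the grouped fraction shrinks relative to the background, which is exactly the regime the proposed statistic targets.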
Statistical Compilation of the ICT Sector and Policy Analysis | CRDI ...
International Development Research Centre (IDRC) Digital Library (Canada)
Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...
Statistical Compilation of the ICT Sector and Policy Analysis | IDRC ...
International Development Research Centre (IDRC) Digital Library (Canada)
Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...
DEFF Research Database (Denmark)
Jones, Allan; Sommerlund, Bo
2007-01-01
The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...
Directory of Open Access Journals (Sweden)
Hilary I. Okagbue
2018-04-01
Full Text Available This data article contains a statistical analysis of the total, percentage and distribution of the editorial board composition of 111 Hindawi journals indexed in the Emerging Sources Citation Index (ESCI) across the continents. The reliability of the data was shown using correlation, goodness-of-fit tests, analysis of variance and statistical variability tests. Keywords: Hindawi, Bibliometrics, Data analysis, ESCI, Random, Smart campus, Web of science, Ranking analytics, Statistics
Statistical analysis of the determinations of the Sun's Galactocentric distance
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
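The internal-versus-external error comparison underlying such consistency checks is standard metrology and can be sketched as follows; the R0 values below are hypothetical placeholders, not the 53 published measurements analyzed in the paper.

```python
import math

def weighted_mean(values, errors):
    """Weighted average with internal and external uncertainty estimates."""
    w = [1.0 / e ** 2 for e in errors]
    mean = sum(wi * x for wi, x in zip(w, values)) / sum(w)
    # Internal error: follows from the quoted uncertainties alone.
    internal = math.sqrt(1.0 / sum(w))
    # External error: rescaled by the observed scatter (chi^2 per dof);
    # external >> internal signals inconsistent data.
    chi2 = sum(wi * (x - mean) ** 2 for wi, x in zip(w, values))
    external = internal * math.sqrt(chi2 / (len(values) - 1))
    return mean, internal, external

# Hypothetical R0 estimates (kpc) with quoted 1-sigma errors.
vals = [8.0, 8.3, 8.2, 7.9, 8.4]
errs = [0.4, 0.3, 0.2, 0.5, 0.3]
m, i, e = weighted_mean(vals, errs)
```

Comparing the internal and external errors (and their ratio) is one simple way to quantify whether a set of measurements is internally consistent, as the paper does for R0.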
Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco
Bounoua, Z.; Mechaqrane, A.
2018-05-01
An accurate knowledge of the solar energy reaching the ground is necessary for sizing and optimizing the performance of solar installations. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of check procedures to test the quality of the hourly GHI measurements. We then eliminated the erroneous values, which are generally due to measurement or cosine-effect errors. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.
Statistical and machine learning approaches for network analysis
Dehmer, Matthias
2012-01-01
Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation
Analysis of spectral data with rare events statistics
International Nuclear Information System (INIS)
Ilyushchenko, V.I.; Chernov, N.I.
1990-01-01
The case is considered of analyzing experimental data when the results of individual experimental runs cannot be summed due to large systematic errors. A statistical analysis of the hypothesis of persistent peaks in the spectra has been performed by means of the Neyman-Pearson test. The computations demonstrate that the confidence level for the hypothesis of the presence of a persistent peak in the spectrum is proportional to the square root of the number of independent experimental runs, K.
STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS
Energy Technology Data Exchange (ETDEWEB)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
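The UCL95 described here, a function of the number of samples, the average, and the standard deviation, matches the standard one-sided Student-t bound for approximately normal results; a minimal sketch (with made-up concentrations, not Tank 18F analytical data) might be:

```python
import math
from statistics import mean, stdev

def ucl95(samples):
    """One-sided upper 95% confidence limit on the mean concentration,
    assuming approximately normal analytical results."""
    n = len(samples)
    # One-sided 95% Student-t quantiles for small n (df = n - 1).
    t95 = {2: 6.314, 3: 2.920, 4: 2.353, 5: 2.132, 6: 2.015, 7: 1.943}
    return mean(samples) + t95[n] * stdev(samples) / math.sqrt(n)

# Six hypothetical scrape-sample concentrations (arbitrary units).
u = ucl95([1.0, 1.2, 0.9, 1.1, 1.0, 1.3])
```

With six samples the t-quantile is 2.015, so the UCL sits roughly two standard errors above the sample mean.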
Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak
2016-06-01
Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions available for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will soon be supported by MS Excel 2016, would eliminate some of the limitations of using statistical formulas within MS Excel.
40 CFR 52.320 - Identification of plan.
2010-07-01
....6 and introductory text of Section III.C.2.a of Regulation 1 adopted by the Colorado Air Quality... Fort Collins-Loveland, Colorado Springs, and Boulder-Denver Metropolitan Statistical Areas (MSA) as...
2010-01-01
... purposes of determining median family income. Metropolitan area means a metropolitan statistical area (“MSA... default or delinquency, unless the rate is increased or the new amount financed exceeds the unpaid balance...
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third-moment-adjusted statistic asymptotically performs on par with previously proposed methods, and at very small sample sizes offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book are used to illustrate the real-world performance of this statistic.
Statistical power analysis a simple and general model for traditional and modern hypothesis tests
Murphy, Kevin R; Wolach, Allen
2014-01-01
Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and g
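A normal-approximation version of the kind of power calculation the book teaches can be sketched as follows; the function below is a generic two-sample sketch, not a procedure taken from the text.

```python
import math
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test for standardized
    effect size d (Cohen's d), using the normal approximation to the t test."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality parameter
    # Probability of exceeding the critical value in either tail.
    return 1 - z.cdf(z_crit - ncp) + z.cdf(-z_crit - ncp)

# Medium effect (d = 0.5), 64 subjects per group: roughly 80% power.
p80 = power_two_sample(0.5, 64)
```

Inverting this relationship (solving for n at a target power) is the usual way such calculations feed study design.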
Statistical Analysis of CFD Solutions from the Fourth AIAA Drag Prediction Workshop
Morrison, Joseph H.
2010-01-01
A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from the U.S., Europe, Asia, and Russia using a variety of grid systems and turbulence models for the June 2009 4th Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was a new subsonic transport model, the Common Research Model, designed using a modern approach for the wing and included a horizontal tail. The fourth workshop focused on the prediction of both absolute and incremental drag levels for wing-body and wing-body-horizontal tail configurations. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with earlier workshops using the statistical framework.
Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.
Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro
2010-01-01
This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were written to obtain the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy, such as the angular error component, uniformity of the vector distribution, and error isotropy, in addition to the modular error component given by linear statistics.
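The decomposition of an error vector into one module and two angles, plus a simple isotropy indicator, can be sketched as below; this is an illustration of the underlying geometry, not the authors' R packages.

```python
import math

def to_spherical(dx, dy, dz):
    """Decompose a 3D positional error vector into one module and two angles,
    the three metric elements used in spherical statistics."""
    module = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.atan2(dy, dx)                 # horizontal direction
    phi = math.atan2(dz, math.hypot(dx, dy))   # elevation angle
    return module, theta, phi

def mean_resultant_length(vectors):
    """Mean resultant length in [0, 1]: values near 0 suggest isotropic
    (uniform) error directions, values near 1 a preferred direction."""
    units = []
    for dx, dy, dz in vectors:
        m = math.sqrt(dx * dx + dy * dy + dz * dz)
        units.append((dx / m, dy / m, dz / m))
    n = len(units)
    rx = sum(u[0] for u in units) / n
    ry = sum(u[1] for u in units) / n
    rz = sum(u[2] for u in units) / n
    return math.sqrt(rx * rx + ry * ry + rz * rz)

m, theta, phi = to_spherical(3.0, 4.0, 12.0)
# Four check-point errors pointing in opposite pairs: perfectly isotropic.
r_iso = mean_resultant_length([(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)])
```

Analyzing the modules alone reproduces the conventional linear-statistics view; the angles and resultant length carry the extra directional information the paper advocates.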
CFAssay: statistical analysis of the colony formation assay
International Nuclear Information System (INIS)
Braselmann, Herbert; Michna, Agata; Heß, Julia; Unger, Kristian
2015-01-01
Colony formation assay is the gold standard to determine cell reproductive death after treatment with ionizing radiation, applied to different cell lines or in combination with other treatment modalities. The associated linear-quadratic cell survival curves can be calculated with different methods. For easy code exchange and methodological standardisation among collaborating laboratories, a software package, CFAssay for R (R Core Team, R: A Language and Environment for Statistical Computing, 2014), was established to perform thorough statistical analysis of linear-quadratic cell survival curves after treatment with ionizing radiation and of two-way designs of experiments with chemical treatments only. CFAssay offers maximum likelihood and related methods by default; the least squares or weighted least squares method can be optionally chosen. A test for comparison of cell survival curves and an ANOVA test for experimental two-way designs are provided. For the two presented examples the estimated parameters do not differ much between maximum likelihood and least squares. However, the dispersion parameter of the quasi-likelihood method is much more sensitive to statistical variation in the data than the multiple R² coefficient of determination from the least squares method. The dispersion parameter for goodness of fit and the different plot functions in CFAssay help to evaluate experimental data quality. As open-source software, it facilitates interlaboratory code sharing between users.
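The linear-quadratic model and the usual colony-count normalization behind such survival curves can be sketched with generic formulas; these are textbook definitions, not CFAssay's API.

```python
import math

def lq_survival(dose, alpha, beta):
    """Linear-quadratic model: surviving fraction S(D) = exp(-a*D - b*D^2)."""
    return math.exp(-alpha * dose - beta * dose ** 2)

def plating_efficiency(colonies, cells_seeded):
    """Plating efficiency (PE) of an untreated control plate."""
    return colonies / cells_seeded

def surviving_fraction(colonies, cells_seeded, pe):
    """Surviving fraction of a treated plate, normalized by the control PE."""
    return colonies / (cells_seeded * pe)

# Illustrative counts: 80 colonies from 100 control cells, 40 from 200 treated.
pe = plating_efficiency(80, 100)
sf = surviving_fraction(40, 200, pe)
s2gy = lq_survival(2.0, alpha=0.2, beta=0.02)   # hypothetical a, b per Gy
```

Fitting alpha and beta to a set of (dose, surviving fraction) pairs, by maximum likelihood or (weighted) least squares, is exactly the step where the methods compared in the abstract differ.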
Procedure for statistical analysis of one-parameter discrepant experimental data
International Nuclear Information System (INIS)
Badikov, Sergey A.; Chechev, Valery P.
2012-01-01
A new, Mandel–Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty as well as to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data; mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work essentially exceed the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors. - Highlights: ► A new statistical procedure for processing one-parameter discrepant experimental data has been presented. ► The procedure estimates the contribution of unrecognized errors to the total experimental uncertainty. ► The procedure was applied to half-life discrepant experimental data. ► The results of the calculations are compared to the ENSDF and DDEP evaluations.
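A minimal sketch of a Mandel-Paule-type estimate, assuming the usual formulation (weights 1/(s_i² + y), with the between-measurement variance y chosen so that the weighted residual sum equals n − 1):

```python
def mandel_paule(x, s, tol=1e-10):
    """Mandel-Paule estimate: find y >= 0 such that the weighted sum of
    squared residuals equals n - 1, with weights w_i = 1 / (s_i^2 + y).
    y represents the unrecognized (between-measurement) error variance.
    Returns (weighted mean, y)."""
    n = len(x)

    def f(y):
        w = [1.0 / (si ** 2 + y) for si in s]
        m = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
        return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) - (n - 1)

    if f(0.0) <= 0.0:
        y = 0.0                      # data consistent: plain weighted average
    else:
        lo, hi = 0.0, max(s) ** 2    # bisection on the decreasing function f
        while f(hi) > 0.0:
            hi *= 2.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
        y = 0.5 * (lo + hi)
    w = [1.0 / (si ** 2 + y) for si in s]
    m = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return m, y

# Discrepant illustrative data: quoted errors too small for the scatter.
m, y = mandel_paule([10.0, 10.2, 11.0], [0.1, 0.1, 0.1])
```

For consistent data y comes out zero and the result is the ordinary weighted average, matching the limiting behaviour described in the abstract.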
Noise removing in encrypted color images by statistical analysis
Islam, N.; Puech, W.
2012-03-01
Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise: a single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted using the AES algorithm in CBC mode. In order to remove the noise, a method is proposed which is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal from visual data in the encrypted domain.
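The CBC error-propagation pattern that such block-wise correction relies on can be illustrated with a toy identity "cipher" (assumption: this replaces AES purely for illustration; with a real block cipher the first affected block is garbled entirely, not in a single bit):

```python
def xor_block(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext, iv, block=16):
    """CBC chaining with an identity 'block cipher' (toy, for demonstrating
    error propagation only; a real system would encrypt each XORed block)."""
    out, prev = [], iv
    for i in range(0, len(plaintext), block):
        prev = xor_block(plaintext[i:i + block], prev)
        out.append(prev)
    return b"".join(out)

def cbc_decrypt(ciphertext, iv, block=16):
    out, prev = [], iv
    for i in range(0, len(ciphertext), block):
        c = ciphertext[i:i + block]
        out.append(xor_block(c, prev))  # "decrypt", then XOR previous block
        prev = c
    return b"".join(out)

iv = bytes(16)
pt = bytes(range(48))                 # three 16-byte blocks
ct = cbc_encrypt(pt, iv)
bad = bytearray(ct)
bad[16] ^= 0x01                       # flip one bit in the second block
rec = cbc_decrypt(bytes(bad), iv)
```

One flipped ciphertext bit corrupts the containing block and the matching position of the next block and nothing else, which is why per-block statistical tests can localize and repair the damage.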
Statistical Analysis of Radio Propagation Channel in Ruins Environment
Directory of Open Access Journals (Sweden)
Jiao He
2015-01-01
The cellphone-based localization system for search and rescue in complex high-density ruins has attracted great interest in recent years, where the radio channel characteristics are critical for the design and development of such a system. This paper presents a spatial smoothing estimation via rotational invariance technique (SS-ESPRIT) for radio channel characterization of high-density ruins. The radio propagation at three typical mobile communication bands (0.9, 1.8, and 2 GHz) is investigated in two different scenarios. Channel parameters, such as arrival time, delays, and complex amplitudes, are statistically analyzed. Furthermore, a channel simulator is built based on these statistics. Comparison analysis of average excess delay and delay spread shows a good agreement between the measurements and the channel modeling results.
Statistical Analysis of Sport Movement Observations: the Case of Orienteering
Amouzandeh, K.; Karimipour, F.
2017-09-01
Study of movement observations is becoming more popular in several applications. In particular, analyzing sport movement time series has been considered a demanding area. However, most attempts at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As a case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.
SeDA: A software package for the statistical analysis of the instrument drift
International Nuclear Information System (INIS)
Lee, H. J.; Jang, S. C.; Lim, T. J.
2006-01-01
The setpoints for safety-related equipment are affected by many sources of uncertainty. ANSI/ISA-S67.04.01-2000 [1] and ISA-RP67.04.02-2000 [2] suggested statistical approaches for ensuring that safety-related instrument setpoints are established and maintained within the technical specification limits [3]. However, Jang et al. [4] indicated that the preceding methodologies for setpoint drift analysis might be insufficient to manage a setpoint drift on an instrumentation device and proposed new statistical analysis procedures for the management of setpoint drift, based on plant-specific as-found/as-left data. Although IHPA (Instrument History Performance Analysis) is a widely known commercial software package to analyze instrument setpoint drift, several steps in the new procedure cannot be performed with it because it is based on the statistical approaches suggested in ANSI/ISA-S67.04.01-2000 [1] and ISA-RP67.04.02-2000 [2]. In this paper we present a software package (SeDA: Setpoint Drift Analysis) that implements the new methodologies and is easy to use, as it is accompanied by powerful graphical tools. (authors)
Multivariate meta-analysis: a robust approach based on the theory of U-statistic.
Ma, Yan; Mazumdar, Madhu
2011-10-30
Meta-analysis is the methodology for combining findings from similar research studies asking the same question. When the question of interest involves multiple outcomes, multivariate meta-analysis is used to synthesize the outcomes simultaneously, taking into account the correlation between them. Likelihood-based approaches, in particular the restricted maximum likelihood (REML) method, are commonly utilized in this context. REML assumes a multivariate normal distribution for the random-effects model. This assumption is difficult to verify, especially for meta-analyses with a small number of component studies. The use of REML also requires iterative estimation between parameters, needing moderately high computation time, especially when the dimension of the outcomes is large. A multivariate method of moments (MMM) is available and is shown to perform equally well to REML. However, there is a lack of information on the performance of these two methods when the true data distribution is far from normality. In this paper, we propose a new nonparametric and non-iterative method for multivariate meta-analysis on the basis of the theory of the U-statistic and compare the properties of these three procedures under both normal and skewed data through simulation studies. It is shown that the effect on estimates from REML because of non-normal data distribution is marginal and that the estimates from the MMM and U-statistic-based approaches are very similar. Therefore, we conclude that for performing multivariate meta-analysis, the U-statistic estimation procedure is a viable alternative to REML and MMM. Easy implementation of all three methods is illustrated by their application to data from two published meta-analyses from the fields of hip fracture and periodontal disease. We discuss ideas for future research based on the U-statistic for testing the significance of between-study heterogeneity and for extending the work to the meta-regression setting. Copyright © 2011 John Wiley & Sons, Ltd.
Statistical analysis in MSW collection performance assessment.
Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel
2014-09-01
The increase in Municipal Solid Waste (MSW) generated over the last years forces waste managers to pursue more effective collection schemes that are technically viable, environmentally effective and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role in improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core set of MSW collection performance indicators (effective collection distance, effective collection time and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in the mixed collection system of the Oporto Municipality, Portugal, during one year, one week per month. This analysis provides an operational assessment of the collection circuits and supports effective short-term municipal collection strategies at the level of, e.g., collection frequency and timetables, and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.
Implementation of statistical analysis methods for medical physics data
International Nuclear Information System (INIS)
Teixeira, Marilia S.; Pinto, Nivia G.P.; Barroso, Regina C.; Oliveira, Luis F.
2009-01-01
The objective of biomedical research with different radiation natures is to contribute to the understanding of the basic physics and biochemistry of biological systems, disease diagnosis and the development of therapeutic techniques. The main benefits are the cure of tumors through therapy, the early detection of diseases through diagnosis, the use as a prophylactic means for blood transfusion, etc. Therefore, a better understanding of the biological interactions occurring after exposure to radiation is necessary for the optimization of therapeutic procedures and of strategies for the reduction of radioinduced effects. The applied physics group of the Physics Institute of UERJ has been working on the characterization of biological samples (human tissues, teeth, saliva, soil, plants, sediments, air, water, organic matrices, ceramics, fossil material, among others) using X-ray diffraction and X-ray fluorescence. The application of these techniques for the measurement, analysis and interpretation of the characteristics of biological tissues is attracting considerable interest in Medical and Environmental Physics. All quantitative data analysis must start with descriptive statistics (means and standard deviations) in order to obtain a prior notion of what the analysis will reveal. It is well known that the high values of standard deviation found in experimental measurements of biological samples can be attributed to biological factors, due to the specific characteristics of each individual (age, gender, environment, alimentary habits, etc.). The main objective of this work is the development of a program applying specific statistical methods for the optimization of experimental data analysis. The specialized programs for this analysis are proprietary; another objective of this work is therefore the implementation of a code which is free and can be shared by other research groups. As the program developed since the
Voxel-based morphometry in the parkinson variant of multiple system atrophy
International Nuclear Information System (INIS)
Zhao Yanping; Wang Han; Li Zhou; Feng Feng
2010-01-01
Objective: To assess patterns of gray and white matter atrophy in the whole brain of patients with the multiple system atrophy-P (MSA-P) variant compared with normal controls. Methods: Three-dimensional fast spoiled gradient echo (3D-FSPGR) T1WI of the whole brain were obtained from 13 patients with probable MSA-P and 14 age-matched normal controls. The volumes of gray matter (GM) and white matter (WM) of MSA-P patients and normal controls were analyzed with voxel-based morphometry (VBM) using statistical parametric mapping (SPM) 8. Results: Compared with the controls, the MSA-P patients showed decreased gray matter and white matter in broad areas. Gray matter loss was mainly symmetrically distributed in the bilateral supplementary motor area (SMA), dorsal posterior cingulate cortex (DPCC), medial frontal gyrus, superior temporal gyrus, cerebellar cortex, etc. Unilateral involvement of cortices was mainly located in the right primary motor cortex, somatosensory association cortex (SAC), and left ventral anterior cingulate cortex (VACC). There was white matter loss in the bilateral superior frontal gyrus, bilateral precuneus, bilateral sub-gyrus of the frontal lobe, left superior temporal gyrus, left cingulate gyrus, right orbitofrontal area, right sub-gyrus of the temporal lobe, etc. Conclusion: VBM is an automatic and comprehensive volumetry method and can objectively detect differences in whole-brain structure in patients with probable MSA-P compared with normal controls. (authors)
Statistical analysis of AFM topographic images of self-assembled quantum dots
Energy Technology Data Exchange (ETDEWEB)
Sevriuk, V. A.; Brunkov, P. N., E-mail: brunkov@mail.ioffe.ru; Shalnev, I. V.; Gutkin, A. A.; Klimko, G. V.; Gronin, S. V.; Sorokin, S. V.; Konnikov, S. G. [Russian Academy of Sciences, Ioffe Physical-Technical Institute (Russian Federation)
2013-07-15
To obtain statistical data on quantum-dot sizes, AFM topographic images of the substrate on which the dots under study are grown are analyzed. Due to the nonideality of the substrate, containing height differences on the order of the size of the nanoparticles at distances of 1-10 μm, and the insufficient resolution of closely arranged dots due to the finite curvature radius of the AFM probe, automation of the statistical analysis of a large dot array requires special techniques for processing topographic images to avoid losing a fraction of the particles during conventional processing. As such a technique, convolution of the initial matrix of the AFM image with a specially selected matrix is used. This makes it possible to determine the position of each nanoparticle and, using the initial matrix, to measure their geometrical parameters. The results of statistical analysis by this method of self-assembled InAs quantum dots formed on the surface of an AlGaAs epitaxial layer are presented. It is shown that their concentration, average size, and the half-width of the height distribution depend strongly on the In flux and the total amount of deposited InAs, which are varied within narrow limits.
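The convolution-based particle detection can be sketched as follows; the 3x3 box kernel and the threshold are illustrative stand-ins for the authors' specially selected matrix, and the image is a toy height map rather than AFM data.

```python
def detect_dots(image, threshold):
    """Locate nanoparticle candidates in a height matrix: convolve with a
    3x3 box kernel (smoothing out probe/substrate noise), then keep strict
    local maxima above a height threshold."""
    rows, cols = len(image), len(image[0])

    def smooth(i, j):
        total = 0.0
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                # Clamp edge pixels to the border.
                r = min(max(i + di, 0), rows - 1)
                c = min(max(j + dj, 0), cols - 1)
                total += image[r][c]
        return total / 9.0

    sm = [[smooth(i, j) for j in range(cols)] for i in range(rows)]
    peaks = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            v = sm[i][j]
            neighbors = [sm[i + di][j + dj] for di in (-1, 0, 1)
                         for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
            if v >= threshold and all(v > nb for nb in neighbors):
                peaks.append((i, j))
    return peaks

# Toy 9x9 height map with one dot-like bump centered at (4, 4).
img = [[0.0] * 9 for _ in range(9)]
img[4][4] = 1.0
for r, c in ((3, 4), (5, 4), (4, 3), (4, 5)):
    img[r][c] = 0.5
peaks = detect_dots(img, threshold=0.3)
```

Once the peak positions are known, the geometrical parameters (height, lateral size) are read back from the unsmoothed initial matrix, as the abstract describes.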
Statistical Analysis of Environmental Tritium around Wolsong Site
Energy Technology Data Exchange (ETDEWEB)
Kim, Ju Youl [FNC Technology Co., Yongin (Korea, Republic of)
2010-04-15
To find the relationships among airborne tritium, tritium in rainwater, TFWT (Tissue Free Water Tritium) and TBT (Tissue Bound Tritium), statistical analysis was conducted based on tritium data measured at KHNP employees' houses around the Wolsong nuclear power plants during the 10 years from 1999 to 2008. The results show that tritium in these media exhibits a strong seasonal and annual periodicity. Tritium concentration in rainwater is observed to be highly correlated with TFWT and directly transmitted to TFWT without delay. The response of environmental tritium radioactivity around the Wolsong site is analyzed using time-series techniques and non-parametric trend analysis. Tritium in the atmosphere and rainwater is strongly auto-correlated with seasonal and annual periodicity. The TFWT concentration in pine needles is proven to be more sensitive to rainfall than to other weather variables. Non-parametric trend analysis of the TFWT concentration within pine needles shows an increasing slope at the 95% confidence level. This study demonstrates the usefulness of time-series and trend analysis for the interpretation of the relationship of environmental radioactivity with various environmental media.
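Non-parametric trend analysis of this kind is commonly done with a Mann-Kendall test; a minimal sketch (normal approximation, no tie correction; the study's exact procedure is not specified in the abstract):

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test: returns the S statistic and an approximate
    two-sided p-value (normal approximation, no correction for ties)."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected z score; Phi(z) via the error function.
    z = (s - math.copysign(1, s)) / math.sqrt(var) if s != 0 else 0.0
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, p

# A strictly increasing toy series: strong positive trend.
s, p = mann_kendall(list(range(10)))
```

A significantly positive S corresponds to the "increasing slope" reported for the pine-needle TFWT series.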
Statistical Analysis of the Polarimetric Cloud Analysis and Seeding Test (POLCAST) Field Projects
Ekness, Jamie Lynn
The North Dakota farming industry brings in more than $4.1 billion annually in cash receipts. Unfortunately, agriculture sales vary significantly from year to year, due in large part to weather events such as hail storms and droughts. One method to mitigate drought is to use hygroscopic seeding to increase the precipitation efficiency of clouds. The North Dakota Atmospheric Research Board (NDARB) sponsored the Polarimetric Cloud Analysis and Seeding Test (POLCAST) research project to determine the effectiveness of hygroscopic seeding in North Dakota. The POLCAST field projects obtained airborne and radar observations while conducting randomized cloud seeding. The Thunderstorm Identification Tracking and Nowcasting (TITAN) program is used to analyze radar data (33 usable cases) to determine differences in storm duration, rain rate and total rain amount between seeded and non-seeded clouds. The single ratio of seeded to non-seeded cases is 1.56 (0.28 mm/0.18 mm), a 56% increase in the average hourly rainfall during the first 60 minutes after target selection. A seeding effect is indicated, with the lifetime of the storms increasing by 41% between seeded and non-seeded clouds for the first 60 minutes past the seeding decision. A double ratio statistic, comparing the radar-derived rain amount of the last 40 minutes of a case (seed/non-seed) to that of the first 20 minutes (seed/non-seed), is used to account for the natural variability of the cloud system and gives a double ratio of 1.85. The Mann-Whitney test on the double ratio of seeded to non-seeded cases (33 cases) gives a significance (p-value) of 0.063. Bootstrapping analysis of the POLCAST data set indicates that 50 cases would provide statistically significant results based on the Mann-Whitney test of the double ratio. All the statistical analyses conducted on the POLCAST data set show that hygroscopic seeding in North Dakota does increase precipitation. While an additional POLCAST field
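The double-ratio statistic and the Mann-Whitney U can be sketched directly from their definitions; the numbers below are illustrative inputs, not POLCAST results.

```python
def double_ratio(seed_late, noseed_late, seed_early, noseed_early):
    """Double ratio: the seeded/non-seeded rain ratio late in the storm,
    normalized by the same ratio early on, to factor out natural
    cloud-to-cloud variability."""
    return (seed_late / noseed_late) / (seed_early / noseed_early)

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a versus sample b
    (ties count one half); significance then follows from tables
    or a normal approximation."""
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Hypothetical rain amounts (mm): late seed/non-seed over early seed/non-seed.
dr = double_ratio(1.0, 0.5, 1.2, 1.0)
u = mann_whitney_u([3, 4, 5], [1, 2])
```

A double ratio above 1 indicates that seeded cases gained rain relative to non-seeded cases beyond what their early-storm behaviour would predict.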
Statistical Analysis of Designed Experiments Theory and Applications
Tamhane, Ajit C
2012-01-01
An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the
Multivariate statistical analysis of major and trace element data for ...
African Journals Online (AJOL)
Multivariate statistical analysis of major and trace element data for niobium exploration in the peralkaline granites of the anorogenic ring-complex province of Nigeria. PO Ogunleye, EC Ike, I Garba. No abstract available. Journal of Mining and Geology Vol. 40(2) 2004: 107-117.
Statistical shape analysis using 3D Poisson equation--A quantitatively validated approach.
Gao, Yi; Bouix, Sylvain
2016-05-01
Statistical shape analysis has been an important area of research with applications in biology, anatomy, neuroscience, agriculture, paleontology, etc. Unfortunately, the proposed methods are rarely quantitatively evaluated, and as shown in recent studies, when they are evaluated, significant discrepancies exist in their outputs. In this work, we concentrate on the problem of finding the consistent location of deformation between two populations of shapes. We propose a new shape analysis algorithm along with a framework to perform a quantitative evaluation of its performance. Specifically, the algorithm constructs a Signed Poisson Map (SPoM) by solving two Poisson equations on volumetric shapes of arbitrary topology, and statistical analysis is then carried out on the SPoMs. The method is quantitatively evaluated on synthetic shapes and applied to real shape data sets of brain structures. Copyright © 2016 Elsevier B.V. All rights reserved.
Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.
2016-01-01
For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…
Statistical analysis of angular correlation measurements
International Nuclear Information System (INIS)
Oliveira, R.A.A.M. de.
1986-01-01
Obtaining the multipole mixing ratio, δ, of γ transitions in angular correlation measurements is a statistical problem characterized by the small number of angles at which the observation is made and by the limited counting statistics, α. The nonexistence of a sufficient statistic for the estimator of δ is shown. Three different estimators for δ were constructed and their properties of consistency, bias and efficiency were tested. Tests were also performed on experimental results obtained in γ-γ directional correlation measurements. (Author) [pt
Statistical mechanical analysis of the linear vector channel in digital communication
International Nuclear Information System (INIS)
Takeda, Koujin; Hatabu, Atsushi; Kabashima, Yoshiyuki
2007-01-01
A statistical mechanical framework to analyze linear vector channel models in digital wireless communication is proposed for a large system. The framework is a generalization of that proposed for code-division multiple-access systems in Takeda et al (2006 Europhys. Lett. 76 1193) and enables the analysis of the system in which the elements of the channel transfer matrix are statistically correlated with each other. The significance of the proposed scheme is demonstrated by assessing the performance of an existing model of multi-input multi-output communication systems
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
A statistical analysis of the impact of advertising signs on road safety.
Yannis, George; Papadimitriou, Eleonora; Papantoniou, Panagiotis; Voulgari, Chrisoula
2013-01-01
This research aims to investigate the impact of advertising signs on road safety. An exhaustive review of the international literature was carried out on the effect of advertising signs on driver behaviour and safety. Moreover, a before-and-after statistical analysis with control groups was applied to several road sites with different characteristics in the Athens metropolitan area, in Greece, in order to investigate the correlation between the placement or removal of advertising signs and the related occurrence of road accidents. Road accident data for the 'before' and 'after' periods on the test sites and the control sites were extracted from the database of the Hellenic Statistical Authority, and the selected 'before' and 'after' periods vary from 2.5 to 6 years. The statistical analysis shows no correlation between road accidents and advertising signs at any of the nine sites examined, as the confidence intervals of the estimated safety effects are non-significant at the 95% confidence level. This can be explained by the fact that, at the examined road sites, drivers are overloaded with information (traffic signs, direction signs, shop labels, pedestrians and other vehicles, etc.), so the additional information load from advertising signs may not further distract them.
Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)
Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee
2010-12-01
Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review
Sorooshian, Armin; Crosbie, Ewan; Maudlin, Lindsay C.; Youn, Jong-Sang; Wang, Zhen; Shingler, Taylor; Ortega, Amber M.; Hersey, Scott; Woods, Roy K.
2015-08-01
This study reports on ambient measurements of organosulfur (OS) compounds and methanesulfonate (MSA) over the western United States and coastal areas. Particulate OS levels are highest in summertime and generally increase as a function of sulfate (a precursor) and sodium (a marine tracer), with peak levels at coastal sites. The ratio of OS to total sulfur is also highest at coastal sites, with increasing values as a function of normalized difference vegetation index and the ratio of organic carbon to elemental carbon. Correlative analysis points to significant relationships between OS and biogenic emissions from marine and continental sources, factors that coincide with secondary production, and vanadium due to a suspected catalytic role. MSA, a major OS species, was examined with intensive field measurements, and the resulting data support the case for vanadium's catalytic influence. Mass size distributions reveal a dominant MSA peak between aerodynamic diameters of 0.32-0.56 µm at a desert and a coastal site, with nearly all MSA mass (≥84%) in submicrometer sizes; MSA:non-sea-salt sulfate ratios vary widely as a function of particle size and proximity to the ocean. Airborne data indicate that, relative to the marine boundary layer, particulate MSA levels are enhanced in urban and agricultural areas and also in the free troposphere when impacted by biomass burning. Some combination of fires and marine-derived emissions leads to higher MSA levels than either source alone. Finally, MSA differences in cloud water and out-of-cloud aerosol are discussed.
Negrín-Báez, D; Navarro, A; Afonso, J M; Toro, M A; Zamorano, M J
2016-04-01
Lack of operculum, a neurocranial deformity, is the most common external abnormality to be found among industrially produced gilthead seabream (Sparus aurata L.), and this entails significant financial losses. This study conducts, for the first time in this species, a quantitative trait loci (QTL) analysis of the lack of operculum. A total of 142 individuals from a paternal half-sibling family (six full-sibling families) were selected for QTL mapping. They had previously shown a highly significant association with the prevalence of lack of operculum in a segregation analysis. All the fish were genotyped for 106 microsatellite markers using a set of multiplex PCRs (ReMsa1-ReMsa13). A linear regression methodology was used for the QTL analysis. Four QTL were detected for this deformity, two of which (QTLOP1 and QTLOP2) were significant. They were located at LG (linkage group) nine and LG10 respectively. Both QTL showed a large effect (about 27%), and furthermore, the association between lack of operculum and sire allelic segregation observed was statistically significant in the QTLOP1 analysis. These results represent a significant step towards including marker-assisted selection for this deformity in genetic breeding programmes to reduce the incidence of the deformity in the species. © 2016 Stichting International Foundation for Animal Genetics.
Kratochwill, Thomas R; Levin, Joel R
2014-04-01
In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Statistical analysis of the spatial distribution of galaxies and clusters
International Nuclear Information System (INIS)
Cappi, Alberto
1993-01-01
This thesis deals with the analysis of the distribution of galaxies and clusters, describing some observational problems and statistical results. The first chapter gives a theoretical introduction, aiming to describe the framework of the formation of structures, tracing the history of the Universe from the Planck time, t_P = 10^-43 s, with a temperature corresponding to 10^19 GeV, to the present epoch. The most usual statistical tools and models of the galaxy distribution, with their advantages and limitations, are described in chapter two. A study of the main observed properties of galaxy clustering, together with a detailed statistical analysis of the effects of selecting galaxies according to apparent magnitude or diameter, is reported in chapter three. Chapter four delineates some properties of groups of galaxies, explaining the reasons for discrepant results on group distributions. Chapter five is a study of the distribution of galaxy clusters, with different statistical tools, like correlations, percolation, the void probability function and counts in cells; the same scaling-invariant behaviour as for galaxies is found. Chapter six describes our finding that rich galaxy clusters too belong to the fundamental plane of elliptical galaxies, and gives a discussion of its possible implications. Finally, chapter seven reviews the possibilities offered by multi-slit and multi-fibre spectrographs, and I present some observational work on nearby and distant galaxy clusters. In particular, I show the opportunities offered by ongoing surveys of galaxies coupled with multi-object fibre spectrographs, focusing on the ESO Key Programme 'A galaxy redshift survey in the south galactic pole region', to which I collaborate, and on MEFOS, a multi-fibre instrument with automatic positioning. Published papers related to the work described in this thesis are reported in the last appendix. (author) [fr
Operational statistical analysis of the results of computer-based testing of students
Directory of Open Access Journals (Sweden)
Виктор Иванович Нардюжев
2018-12-01
The article is devoted to the statistical analysis of computer-based testing results for the evaluation of students' educational achievements. The issues are relevant because computer-based testing in Russian universities has become an important method for evaluating students' educational achievements and the quality of the modern educational process. The use of modern methods and programs for statistical analysis of computer-based testing results and for assessing the quality of developed tests is a pressing problem for every university teacher. The article shows how the authors solve this problem using their own program, “StatInfo”. For several years the program has been successfully applied in a credit system of education at such technological stages as loading computer-based testing protocols into a database, forming queries, and generating reports, lists, and matrices of answers for statistical analysis of the quality of test items. The methodology, experience and some results of its usage by university teachers are described in the article. Related topics of test development, models, algorithms, technologies, and software for large-scale computer-based testing have been discussed by the authors in their previous publications, which are presented in the reference list.
International Nuclear Information System (INIS)
Molchan, G.M.; Kronrod, T.L.; Dmitrieva, O.E.
1995-03-01
The catalog of earthquakes of Italy (1900-1993) is analyzed in the present work. The following problems have been considered: 1) the choice of the operating magnitude, 2) an analysis of data completeness, and 3) grouping (in time and in space). The catalog has been separated into main shocks and aftershocks. Statistical estimates of the seismicity parameters (a, b) are performed for the seismogenic zones defined by GNDT. The non-standard elements of the analysis performed are: (a) statistical estimation and comparison of seismicity parameters under the condition of arbitrary data grouping in magnitude, time and space; (b) use of a non-conventional statistical method for aftershock identification, based on the idea of optimizing two kinds of errors in the aftershock identification process; (c) use of the aftershock zones to reveal seismically interrelated seismogenic zones. This procedure contributes to the stability of the estimation of the 'b-value'. Refs, 25 figs, tabs
Analysis and classification of ECG-waves and rhythms using circular statistics and vector strength
Directory of Open Access Journals (Sweden)
Janßen Jan-Dirk
2017-09-01
The most common way to analyse heart rhythm is to calculate the RR interval and the heart rate variability. For further evaluation, descriptive statistics are often used. Here we introduce a new and more natural heart rhythm analysis tool that is based on circular statistics and vector strength. Vector strength is a measure of the periodicity, or lack of periodicity, of a signal. We divide the signal into non-overlapping window segments and project the detected R-waves around the unit circle using the complex exponential function and the median RR interval. In addition, we calculate the vector strength and apply circular statistics as well as an angular histogram to the R-wave vectors. This approach enables an intuitive visualization and analysis of rhythmicity. Our results show that ECG waves and rhythms can be easily visualized, analysed and classified by circular statistics and vector strength.
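The projection-and-resultant computation described above can be sketched as follows; the R-wave times are synthetic stand-ins, not ECG data, and the windowing step is omitted for brevity.

```python
import numpy as np

def vector_strength(r_times, period):
    # Project each R-wave onto the unit circle at phase 2*pi*t/period via
    # the complex exponential, then take the magnitude of the mean
    # resultant vector: 1 for perfect periodicity, near 0 for no rhythm.
    phases = 2.0 * np.pi * np.asarray(r_times) / period
    return abs(np.mean(np.exp(1j * phases)))

# Perfectly periodic R-waves at the median RR interval: strength ~ 1.
regular = np.arange(0.0, 10.0, 0.8)  # RR interval = 0.8 s
vs_reg = vector_strength(regular, np.median(np.diff(regular)))

# Irregular rhythm: strength drops toward 0.
rng = np.random.default_rng(0)
irregular = np.cumsum(rng.uniform(0.4, 1.2, 50))
vs_irr = vector_strength(irregular, np.median(np.diff(irregular)))
```

Using the median RR interval as the projection period, as in the abstract, makes the measure robust to occasional ectopic beats.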
Consolidity analysis for fully fuzzy functions, matrices, probability and statistics
Directory of Open Access Journals (Sweden)
Walaa Ibrahim Gabr
2015-03-01
The paper presents a comprehensive review of the know-how for developing systems consolidity theory for modeling, analysis, optimization and design in a fully fuzzy environment. The development of systems consolidity theory included its extension to handle new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fractions of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and solving least-squares fuzzy linear equations. The approach was demonstrated to be applicable in a systematic way to new fuzzy probabilistic and statistical problems. This included extending conventional probabilistic and statistical analysis to handle fuzzy random data. Applications also covered the consolidity of fuzzy optimization problems. The various numerical examples solved demonstrate that the new consolidity concept is highly effective in solving, in a compact form, the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the suggested fuzzy mathematics can be easily embedded within normal mathematics by building a special fuzzy functions library inside the Matlab computational toolbox or other similar software languages.
A Guideline to Univariate Statistical Analysis for LC/MS-Based Untargeted Metabolomics-Derived Data
Directory of Open Access Journals (Sweden)
Maria Vinaixa
2012-10-01
Several metabolomic software programs provide methods for peak picking, retention time alignment and quantification of metabolite features in LC/MS-based metabolomics. Statistical analysis, however, is needed in order to discover those features significantly altered between samples. By comparing the retention time and MS/MS data of a model compound to those from the altered feature of interest in the research sample, metabolites can then be unequivocally identified. This paper reports a comprehensive overview of a workflow for statistical analysis to rank relevant metabolite features that will be selected for further MS/MS experiments. We focus on univariate data analysis applied in parallel to all detected features. Characteristics and challenges of this analysis are discussed and illustrated using four different real LC/MS untargeted metabolomic datasets. We demonstrate the influence of respecting or violating the mathematical assumptions on which univariate statistical tests rely, using high-dimensional LC/MS datasets. Issues in data analysis such as determination of sample size, analytical variation, assumptions of normality and homoscedasticity, and correction for multiple testing are discussed and illustrated in the context of our four untargeted LC/MS working examples.
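The core of such a workflow, one univariate test per feature followed by multiple-testing correction, can be sketched on synthetic data as follows. The Benjamini-Hochberg step is written out by hand here and is not taken from any of the programs mentioned above; the feature counts and effect size are invented.

```python
import numpy as np
from scipy.stats import ttest_ind

def bh_adjust(p):
    # Benjamini-Hochberg step-up adjusted p-values.
    p = np.asarray(p)
    m = len(p)
    order = np.argsort(p)
    scaled = p[order] * m / np.arange(1, m + 1)
    adj = np.empty(m)
    adj[order] = np.minimum.accumulate(scaled[::-1])[::-1]
    return np.clip(adj, 0.0, 1.0)

rng = np.random.default_rng(7)

# Toy intensity matrix: 200 metabolite features x 10 samples per group,
# with the first 20 features truly shifted in the case group.
controls = rng.normal(0.0, 1.0, (200, 10))
cases = rng.normal(0.0, 1.0, (200, 10))
cases[:20] += 2.0

# One Welch t-test per feature, applied in parallel across all features.
pvals = ttest_ind(cases, controls, axis=1, equal_var=False).pvalue
significant = np.flatnonzero(bh_adjust(pvals) < 0.05)
```

The features surviving the 5% FDR cut are the ones that would be queued for confirmatory MS/MS experiments in the workflow described above.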
Directory of Open Access Journals (Sweden)
Rawid Banchuin
2014-01-01
In this research, the statistical variations in subthreshold MOSFET high-frequency characteristics, defined in terms of gate capacitance and transition frequency, are analyzed, and comprehensive analytical models of such variations in terms of their variances are proposed. Major imperfections in the physical-level properties, including random dopant fluctuation and the effects of variations in the MOSFET manufacturing process, have been taken into account in the proposed analysis and modeling. An up-to-date comprehensive analytical model of statistical variation in MOSFET parameters has been used as the basis of the analysis and modeling. The resulting models are both analytic and comprehensive, as they are precise mathematical expressions in terms of the physical-level variables of the MOSFET. Furthermore, they have been verified at the nanometer level using 65 nm BSIM4-based benchmarks and found to be very accurate, with average errors smaller than 5%. Hence, the analysis yields models that are a potential mathematical tool for the statistical and variability-aware analysis and design of subthreshold MOSFET-based VHF circuits, systems and applications.
Spatial Analysis Along Networks Statistical and Computational Methods
Okabe, Atsuyuki
2012-01-01
In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process
STATISTICAL ANALYSIS OF SPORT MOVEMENT OBSERVATIONS: THE CASE OF ORIENTEERING
Directory of Open Access Journals (Sweden)
K. Amouzandeh
2017-09-01
The study of movement observations is becoming more popular in several applications. In particular, analyzing sport movement time series is considered a demanding area. However, most attempts at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As a case study, an example dataset of movement observations acquired during the “orienteering” sport is presented and statistically analyzed.
Statistical Analysis Of Failure Strength Of Material Using Weibull Distribution
International Nuclear Information System (INIS)
Entin Hartini; Mike Susmikanti; Antonius Sitompul
2008-01-01
In the evaluation of the strength of ceramic and glass materials, a statistical approach is necessary. The strength of ceramic and glass depends on the size distribution of flaws in these materials. The distribution of strength for ductile materials is narrow and close to a Gaussian distribution, while the strength of brittle materials such as ceramic and glass follows a Weibull distribution. The Weibull distribution is an indicator of the failure of material strength resulting from a distribution of flaw sizes. In this paper, the cumulative failure probability of material strength, the cumulative probability of failure versus fracture stress, and the cumulative probability of reliability of the material were calculated. Statistical criteria calculations supporting the strength analysis of silicon nitride material were done using MATLAB. (author)
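The Weibull quantities involved can be sketched as follows, using NumPy rather than MATLAB; the Weibull modulus m = 10 and scale 400 MPa are invented for illustration, not taken from the silicon nitride data.

```python
import numpy as np

def failure_probability(stress, sigma0, m):
    # Two-parameter Weibull: P_f(sigma) = 1 - exp(-(sigma/sigma0)^m);
    # reliability is the complement exp(-(sigma/sigma0)^m).
    return 1.0 - np.exp(-(np.asarray(stress) / sigma0) ** m)

def fit_weibull(strengths):
    # Linearized fit: ln(ln(1/(1-F))) = m*ln(sigma) - m*ln(sigma0),
    # with median-rank plotting positions F_i = (i - 0.3)/(n + 0.4).
    s = np.sort(strengths)
    n = len(s)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
    y = np.log(np.log(1.0 / (1.0 - F)))
    slope, intercept = np.polyfit(np.log(s), y, 1)
    return slope, np.exp(-intercept / slope)  # (m, sigma0)

# Synthetic fracture stresses (MPa) drawn from a known Weibull law.
rng = np.random.default_rng(3)
stresses = 400.0 * rng.weibull(10.0, size=500)
m_hat, sigma0_hat = fit_weibull(stresses)
```

At the characteristic strength sigma0 the cumulative failure probability is exactly 1 - 1/e, which is a standard check on the implementation.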
On two methods of statistical image analysis
Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, K.L.
1999-01-01
The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition,
2012-04-20
... criterion due to lack of reliable poverty statistics. 4. QCTs are those census tracts in which 50 percent or... statistical area (MSA) definitions with modifications to account for substantial differences in rental housing... percent of households have an income less than 60 percent of the AMGI or, where the poverty rate is at...
PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data
International Nuclear Information System (INIS)
Beedgen, R.; Bicking, U.
1987-04-01
The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide on the basis of statistical considerations if, in a given sequence of materials balance periods, a loss of material might have occurred or not. The evaluation of the material balance data is based on statistical test procedures. In PROSA three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM-PC with DOS 3.1. (orig.) [de
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. Therefore, we herein report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and the false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons; thus, it can efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study: our approach can identify altered functional connectivities in a major depressive disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
Statistical analysis of magnetically soft particles in magnetorheological elastomers
Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.
2017-04-01
The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli is crucially dependent on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain a better insight into the correlation between the macroscopic effects and microstructure and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations as a base for a statistical analysis of the particle configurations were carried out. Different MREs with quantities of 2-15 wt% (0.27-2.3 vol%) of iron powder and different allocations of the particles inside the matrix were prepared. The X-μCT results were edited by an image processing software regarding the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
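A pair correlation function of the kind used for such particle-position statistics can be sketched as follows: a naive O(N²) version on random points in a cube, without the periodic-boundary or edge corrections a production analysis of tomography data would need.

```python
import numpy as np

def pair_correlation(points, box, r_max, n_bins=50):
    # Radial distribution function g(r): histogram of pair distances,
    # normalized by the ideal-gas (uniform) expectation per shell.
    pts = np.asarray(points)
    n = len(pts)
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.linalg.norm(diff, axis=-1)[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(d, bins=n_bins, range=(0.0, r_max))
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    expected = 0.5 * n * (n - 1) * shell / box ** 3
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, counts / expected

# Uniformly random (unclustered) particles: g(r) should scatter around 1;
# field-induced chain formation would instead produce peaks at the
# preferred particle spacings.
rng = np.random.default_rng(2)
r, g = pair_correlation(rng.uniform(0.0, 10.0, (800, 3)), box=10.0, r_max=2.0)
```

For uniform points g(r) sits slightly below 1 at larger r because this sketch ignores box-edge effects; a structured sample would show clear departures from 1 regardless.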
Pattern recognition in menstrual bleeding diaries by statistical cluster analysis
Directory of Open Access Journals (Sweden)
Wessel Jens
2009-07-01
Background: The aim of this paper is to empirically identify a treatment-independent statistical method to describe clinically relevant bleeding patterns, using bleeding diaries from clinical studies on various sex-hormone-containing drugs. Methods: We used four cluster analysis methods (single, average and complete linkage, as well as Ward's method) for pattern recognition in menstrual bleeding diaries. The optimal number of clusters was determined using the semi-partial R², the cubic clustering criterion, and the pseudo-F and pseudo-t² statistics. Finally, the interpretability of the results from a gynecological point of view was assessed. Results: Ward's method yielded distinct clusters of the bleeding diaries; the other methods successively chained the observations into one cluster. The optimal number of distinctive bleeding patterns was six: two desirable and four undesirable. Cyclic and non-cyclic bleeding patterns were well separated. Conclusion: Using this cluster analysis with Ward's method, medications and devices having an impact on bleeding can be easily compared and categorized.
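Ward's method can be illustrated with SciPy's hierarchical clustering; the diary features below are invented stand-ins for the paper's bleeding summaries, chosen only to give one cyclic and one irregular group.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def summarize(diary):
    # Simple per-diary features: bleeding days, number of bleeding
    # episodes, and mean episode length.
    diary = np.asarray(diary)
    episodes = int(np.sum(np.diff(np.r_[0, diary]) == 1))
    return [diary.sum(), episodes, diary.sum() / max(episodes, 1)]

rng = np.random.default_rng(5)
# 90-day binary diaries: a cyclic pattern (5 bleeding days per 28-day
# cycle) versus irregular bleeding on roughly 30% of days.
cyclic = [(np.arange(90) % 28 < 5).astype(int) for _ in range(20)]
irregular = [(rng.random(90) < 0.3).astype(int) for _ in range(20)]
X = np.array([summarize(d) for d in cyclic + irregular], dtype=float)

# Ward linkage on the feature matrix, cut into two clusters.
labels = fcluster(linkage(X, method="ward"), t=2, criterion="maxclust")
```

Replacing `method="ward"` with `"single"` reproduces the chaining behaviour the paper reports for the other linkages.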
Directory of Open Access Journals (Sweden)
Raghunath Satpathy
2016-01-01
2-Haloalkanoic acid dehalogenase enzymes have a broad range of applications, from bioremediation to the chemical synthesis of useful compounds, and are widely distributed in fungi and bacteria. In the present study, a total of 81 full-length protein sequences of 2-haloalkanoic acid dehalogenase from bacteria and fungi were retrieved from the NCBI database. Sequence analyses such as multiple sequence alignment (MSA), conserved motif identification, computation of amino acid composition, and phylogenetic tree construction were performed on these primary sequences. From the MSA, it was observed that the sequences share conserved lysine (K) and aspartate (D) residues. The phylogenetic tree also indicated a subcluster comprising both fungal and bacterial species. Due to the nonavailability of an experimental 3D structure for fungal 2-haloalkanoic acid dehalogenase in the PDB, a molecular modelling study was performed for the enzymes from both fungal and bacterial sources present in the subcluster. Further structural analysis revealed a common evolutionary topology shared between the fungal and bacterial enzymes. Studies of the buried amino acids showed highly conserved Leu and Ser residues in the core, despite variation in their amino acid percentages. Additionally, a surface-exposed tryptophan was conserved in all of the selected models.
Statistical Analysis and Validation
Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.
2013-01-01
In this chapter, guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relatively low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are
Variability analysis of AGN: a review of results using new statistical criteria
Zibecchi, L.; Andruchow, I.; Cellone, S. A.; Romero, G. E.; Combi, J. A.
We present here a re-analysis of the variability results for a sample of active galactic nuclei (AGN) that were observed over several sessions with the 2.15 m "Jorge Sahade" telescope (CASLEO), San Juan, Argentina, and whose results have been published (Romero et al. 1999, 2000, 2002; Cellone et al. 2000). The motivation for this new analysis is the implementation, during the last years, of improvements in the statistical criteria applied, taking quantitative account of the incidence of the photometric errors (Cellone et al. 2007). This work is framed as a first step in an integral study of the statistical estimators of AGN variability, motivated by the great diversity of statistical tests that have been proposed to analyze the variability of these objects. Since we note that, in some cases, the variability results depend on the test used, we attempt a comparative study of the various tests and analyze, under the given conditions, which of them is the most efficient and reliable.
Watano, Chikako; Shiota, Yuri; Onoda, Keiichi; Sheikh, Abdullah Md; Mishima, Seiji; Nitta, Eri; Yano, Shozo; Yamaguchi, Shuhei; Nagai, Atsushi
2018-02-01
The aim of this study was to evaluate autonomic neural function in Parkinson's disease (PD) and multiple system atrophy (MSA) with the head-up tilt test and spectral analysis of cardiovascular parameters. This study included 15 patients with MSA, 15 patients with PD, and 29 healthy control (HC) subjects. High-frequency power of the RR interval (RR-HF), the ratio of low-frequency power of the RR interval to RR-HF (RR-LF/HF), and LF power of systolic BP were used to evaluate parasympathetic, cardiac sympathetic, and vasomotor sympathetic functions, respectively. Both patients with PD and those with MSA showed orthostatic hypotension and lower parasympathetic function (RR-HF) in the tilt position compared with HC subjects. Cardiac sympathetic function (RR-LF/HF) was significantly higher in patients with PD than in those with MSA in the supine position. RR-LF/HF tended to increase with tilting in MSA and HC but decreased in PD. Consequently, the change in the ratio due to tilting (ΔRR-LF/HF) was significantly lower in patients with PD than in HC subjects. Further analysis showed that, compared with the mild stage of PD, RR-LF/HF in the supine position was significantly higher in the advanced stage. With tilting, it increased in the mild stage and decreased in the advanced stage of PD, causing ΔRR-LF/HF to decrease significantly in the advanced stage. Thus, we demonstrated that spectral analysis of cardiovascular parameters is useful for identifying sympathetic and parasympathetic disorders in MSA and PD. High cardiac sympathetic function in the supine position, and its reduction with tilting, might be a characteristic feature of PD, especially in the advanced stage.
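The band-power ratios above can be sketched as follows, assuming an evenly resampled RR series and the conventional LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) band edges; the signal below is synthetic, not patient data, and this is not the authors' pipeline:

```python
# Illustrative LF/HF computation from a synthetic, evenly resampled RR series
# via a simple FFT periodogram (band edges are the usual HRV conventions).
import numpy as np

fs = 4.0  # Hz, a typical resampling rate for RR series
t = np.arange(0, 300, 1 / fs)
# synthetic RR series: strong 0.25 Hz (respiratory/HF) + weak 0.1 Hz (LF) component
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * t) + 0.01 * np.sin(2 * np.pi * 0.1 * t)

def band_power(x, fs, lo, hi):
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return psd[(f >= lo) & (f < hi)].sum()

lf = band_power(rr, fs, 0.04, 0.15)
hf = band_power(rr, fs, 0.15, 0.40)
print(lf / hf)  # well below 1 for this HF-dominated series
```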
Wakabayashi, Koichi; Mori, Fumiaki; Kakita, Akiyoshi; Takahashi, Hitoshi; Tanaka, Shinya; Utsumi, Jun; Sasaki, Hidenao
2016-12-02
MicroRNAs (miRNAs) are small noncoding RNAs that regulate gene expression. Recently, we have shown that informative miRNA data can be derived from archived formalin-fixed paraffin-embedded (FFPE) samples from postmortem cases of amyotrophic lateral sclerosis and normal controls. miRNA analysis has now been performed on FFPE samples from affected brain regions in patients with multiple system atrophy (MSA) and the same areas in neurologically normal controls. We evaluated 50 samples from patients with MSA (n=13) and controls (n=13). Twenty-six samples were selected for miRNA analysis on the basis of the criteria reported previously: (i) a formalin fixation time of less than 4 weeks, (ii) a total RNA yield per sample of more than 500 ng, and (iii) sufficient quality of the RNA electrophoresis pattern. These included 11 cases of MSA and 5 controls. Thus, the success rate for analysis of RNA from FFPE samples was 52% (26 of 50). For MSA, a total of 395 and 383 miRNAs were identified in the pons and cerebellum, respectively; 5 were up-regulated and 33 down-regulated in the pons, and 5 were up-regulated and 18 down-regulated in the cerebellum. Several miRNAs down-regulated in the pons (miR-129-2-3p and miR-129-5p) and cerebellum (miR-129-2-3p, miR-129-5p and miR-132-3p) had already been identified in frozen cerebellum from MSA patients. These findings suggest that archived FFPE postmortem samples can be a valuable source for miRNA profiling in MSA. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
A Statistical Primer: Understanding Descriptive and Inferential Statistics
Gillian Byrne
2007-01-01
As libraries and librarians move more towards evidence-based decision making, the amount of data being generated in libraries is growing. Understanding the basics of statistical analysis is crucial for evidence-based practice (EBP), in order to correctly design and analyze research as well as to evaluate the research of others. This article covers the fundamentals of descriptive and inferential statistics, from hypothesis construction to sampling to common statistical techniques including chi-square, co...
Statistical Compilation of the ICT Sector and Policy Analysis | Page 5 ...
International Development Research Centre (IDRC) Digital Library (Canada)
The project is designed to expand the scope of conventional investigation beyond the telecommunications industry to include other vertically integrated components of the ICT sector such as manufacturing and services. ... Statistical Compilation of the ICT Sector and Policy Analysis project : country experiences; Malaysia.
Statistical Analysis of CFD Solutions From the Fifth AIAA Drag Prediction Workshop
Morrison, Joseph H.
2013-01-01
A graphical framework is used for statistical analysis of the results from an extensive N-version test of a collection of Reynolds-averaged Navier-Stokes computational fluid dynamics codes. The solutions were obtained by code developers and users from North America, Europe, Asia, and South America using a common grid sequence and multiple turbulence models for the June 2012 fifth Drag Prediction Workshop sponsored by the AIAA Applied Aerodynamics Technical Committee. The aerodynamic configuration for this workshop was the Common Research Model subsonic transport wing-body previously used for the 4th Drag Prediction Workshop. This work continues the statistical analysis begun in the earlier workshops and compares the results from the grid convergence study of the most recent workshop with previous workshops.
Neutron activation and statistical analysis of pottery from Thera, Greece
International Nuclear Information System (INIS)
Kilikoglou, V.; Grimanis, A.P.; Karayannis, M.I.
1990-01-01
Neutron activation analysis, in combination with multivariate analysis of the generated data, was used for the chemical characterization of prehistoric pottery from the Greek islands of Thera, Melos (islands with similar geology) and Crete. The statistical procedure which proved that Theran pottery could be distinguished from Melian is described. This discrimination, attained for the first time, was mainly based on the concentrations of the trace elements Sm, Yb, Lu and Cr. Also, Cretan imports to both Thera and Melos were clearly separable from local products. (author) 22 refs.; 1 fig.; 4 tabs
The statistical analysis of the mobility and the labor force use
Directory of Open Access Journals (Sweden)
Daniela-Emanuela Dãnãcicã
2006-05-01
The paper approaches some of the classical methods used in statistics for the analysis of the labor force and proposes new ways of current analysis required for adopting optimal economic patterns and strategies. The proposed methods, the linear mean deviation and the coefficient of variation used in the analysis of the external mobility of the labor force, and the two-dimensional table used in the calculation of the coefficient of internal mobility, are illustrated by their premises, the calculus methodology, practical applications, and guidance for their use in adopting and applying optimal economic policy.
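The coefficient of variation mentioned above is simply the standard deviation expressed relative to the mean; a minimal sketch on hypothetical labor-mobility counts:

```python
# Coefficient of variation (population form) on hypothetical monthly
# labor-force exit counts; not the paper's actual data.
from statistics import mean, pstdev

def coefficient_of_variation(xs):
    """Population standard deviation expressed as a share of the mean."""
    return pstdev(xs) / mean(xs)

exits = [120, 130, 125, 135, 140]  # hypothetical counts
print(round(coefficient_of_variation(exits), 4))  # ≈ 0.0544
```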
Statistical analysis of the hydrodynamic pressure in the near field of compressible jets
International Nuclear Information System (INIS)
Camussi, R.; Di Marco, A.; Castelain, T.
2017-01-01
Highlights: • Statistical properties of pressure fluctuations retrieved through wavelet analysis • Time delay PDFs approximated by a log-normal distribution • Amplitude PDFs approximated by a Gamma distribution • Random variable PDFs weakly dependent upon position and Mach number. • A general stochastic model achieved for the distance dependency - Abstract: This paper is devoted to the statistical characterization of the pressure fluctuations measured in the near field of a compressible jet at two subsonic Mach numbers, 0.6 and 0.9. The analysis is focused on the hydrodynamic pressure measured at different distances from the jet exit and analyzed at the typical frequency associated to the Kelvin–Helmholtz instability. Statistical properties are retrieved by the application of the wavelet transform to the experimental data and the computation of the wavelet scalogram around that frequency. This procedure highlights traces of events that appear intermittently in time and have variable strength. A wavelet-based event tracking procedure has been applied providing a statistical characterization of the time delay between successive events and of their energy level. On this basis, two stochastic models are proposed and validated against the experimental data in the different flow conditions
Categorical data processing for real estate objects valuation using statistical analysis
Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.
2018-05-01
Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.
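A common way to code categorical property descriptors for regression, one-hot (dummy) coding, can be sketched as below; whether this matches the paper's proposed coding scheme is not stated, and the category names are invented:

```python
# One-hot coding of a categorical real-estate descriptor (invented categories;
# the paper's own coding method may differ).
def one_hot(values):
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values], cats

walls = ["brick", "panel", "brick", "wood"]  # hypothetical wall-material column
encoded, cats = one_hot(walls)
print(cats)        # ['brick', 'panel', 'wood']
print(encoded[0])  # [1, 0, 0]
```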
Statistical Compilation of the ICT Sector and Policy Analysis | Page 2 ...
International Development Research Centre (IDRC) Digital Library (Canada)
... to widen and deepen, so too does its impact on economic development. ... The outcomes of such efforts will subsequently inform policy discourse and ... Studies. Statistical Compilation of the ICT Sector and Policy Analysis project : country experiences; Malaysia ... Asian outlook: New growth dependent on new productivity.
Statistical assessment on a combined analysis of GRYN-ROMN-UCBN upland vegetation vital signs
Irvine, Kathryn M.; Rodhouse, Thomas J.
2014-01-01
As of 2013, the Rocky Mountain and Upper Columbia Basin Inventory and Monitoring Networks have multiple years of vegetation data, the Greater Yellowstone Network has three years of vegetation data, and monitoring is ongoing in all three networks. Our primary objective is to assess whether a combined analysis of these data aimed at exploring correlations with climate and weather data is feasible. We summarize the core survey design elements across protocols and point out the major statistical challenges for a combined analysis at present. The dissimilarity in response designs between the ROMN and UCBN-GRYN network protocols presents a statistical challenge that has not yet been resolved. However, the UCBN and GRYN data are compatible, as they implement a similar response design; therefore, a combined analysis is feasible and will be pursued in future. When data collected by different networks are combined, the survey design describing the merged dataset is likely a complex survey design, that is, the result of combining datasets from different sampling designs, characterized by unequal probability sampling, varying stratification, and clustering (see Lohr 2010, Chapter 7, for a general overview). Statistical analysis of complex survey data requires modifications to standard methods, one of which is to include survey design weights within a statistical model. We focus on this issue for a combined analysis of upland vegetation from these networks, leaving other topics for future research. We conduct a simulation study on the possible effects of equal versus unequal probability selection of points on parameter estimates of temporal trend using available packages within the R statistical computing environment. We find that, as written, using lmer or lm for trend detection in a continuous response, and clm and clmm for visually estimated cover classes, with "raw" GRTS design weights specified for the weight argument leads to substantially
Statistical analysis plan for the EuroHYP-1 trial
DEFF Research Database (Denmark)
Winkel, Per; Bath, Philip M; Gluud, Christian
2017-01-01
Score; (4) brain infarct size at 48 +/- 24 hours; (5) EQ-5D-5L score; and (6) WHODAS 2.0 score. Other outcomes are the primary safety outcome, serious adverse events, and the incremental cost-effectiveness and cost-utility ratios. The analysis sets include (1) the intention-to-treat population and (2) the per-protocol population. The sample size is estimated at 800 patients (5% type 1 and 20% type 2 errors). All analyses are adjusted for the protocol-specified stratification variable (nationality of centre) and the minimisation variables. In the analysis, we use ordinal regression (the primary outcome), logistic regression (binary outcomes), the general linear model (continuous outcomes), and the Poisson or negative binomial model (rate outcomes). DISCUSSION: Major adjustments compared with the original statistical analysis plan encompass: (1) adjustment of analyses by nationality; (2) power...
NEW PARADIGM OF ANALYSIS OF STATISTICAL AND EXPERT DATA IN PROBLEMS OF ECONOMICS AND MANAGEMENT
Orlov A. I.
2014-01-01
The article is devoted to the methods of analysis of statistical and expert data in problems of economics and management that are discussed in the framework of scientific specialization "Mathematical methods of economy", including organizational-economic and economic-mathematical modeling, econometrics and statistics, as well as economic aspects of decision theory, systems analysis, cybernetics, operations research. The main provisions of the new paradigm of this scientific and practical fiel...
Directory of Open Access Journals (Sweden)
Christos Chalkias
2016-03-01
In this paper, an integrated landslide susceptibility model combining expert-based and bivariate statistical analysis (Landslide Susceptibility Index, LSI) approaches is presented. Factors related to the occurrence of landslides, such as elevation, slope angle, slope aspect, lithology, land cover, Mean Annual Precipitation (MAP), and Peak Ground Acceleration (PGA), were analyzed within a GIS environment. This integrated model produced a landslide susceptibility map which categorized the study area according to the probability level of landslide occurrence. The accuracy of the final map was evaluated by Receiver Operating Characteristic (ROC) analysis based on an independent (validation) dataset of landslide events. The prediction ability was found to be 76%, revealing that the integration of statistical analysis with human expertise can provide an acceptable landslide susceptibility assessment at regional scale.
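The ROC validation step can be sketched with the rank-based (Mann-Whitney) formula for the area under the curve; the scores and labels below are toy values, not the study's validation dataset:

```python
# AUC of a susceptibility score against observed landslide events, via the
# Mann-Whitney "probability of correct ranking" formula (toy data).
def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # fraction of (positive, negative) pairs ranked correctly; ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]  # modeled susceptibility
labels = [1, 1, 0, 1, 0, 0]              # 1 = landslide observed
print(roc_auc(scores, labels))  # 8/9 correctly ranked pairs
```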
Wavelet Statistical Analysis of Low-Latitude Geomagnetic Measurements
Papa, A. R.; Akel, A. F.
2009-05-01
Following previous works by our group (Papa et al., JASTP, 2006), where we analyzed a series of records acquired at the Vassouras National Geomagnetic Observatory in Brazil for the month of October 2000, we introduce a wavelet analysis for the same type of data and for other periods. It is well known that wavelets allow a more detailed study in several senses: the time window for analysis can be drastically reduced compared to other traditional methods (Fourier, for example), while at the same time allowing almost continuous tracking of both the amplitude and frequency of signals over time. This advantage opens some possibilities for potentially useful forecasting methods of the type also advanced by our group in previous works (see, for example, Papa and Sosman, JASTP, 2008). However, the simultaneous statistical analysis of both time series (in our case amplitude and frequency) is a challenging matter, and it is in this sense that we have found what we consider our main goal. Some possible directions for future work are advanced.
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab; Meseguer, José
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability
Sunwoo, Mun Kyung; Yun, Hyuk Jin; Song, Sook K.; Ham, Ji Hyun; Hong, Jin Yong; Lee, Ji E.; Lee, Hye S.; Sohn, Young H.; Lee, Jong-Min; Lee, Phil Hyu
2014-01-01
Multiple system atrophy (MSA) is an adult-onset, sporadic neurodegenerative disease. Because the prognosis of MSA is fatal, neuroprotective or regenerative strategies may be invaluable in MSA treatment. Previously, we obtained clinical and imaging evidence that mesenchymal stem cell (MSC) treatment could have a neuroprotective role in MSA patients. In the present study, we evaluated the effects of MSC therapy on longitudinal changes in subcortical deep gray matter volumes and cortical thickness and their association with cognitive performance. Clinical and imaging data were obtained from our previous randomized trial of autologous MSC in MSA patients. During 1-year follow-up, we assessed longitudinal differences in automatic segmentation-based subcortical deep gray matter volumes and vertex-wise cortical thickness between placebo (n = 15) and MSC groups (n = 11). Next, we performed correlation analysis between the changes in cortical thickness and changes in the Korean version of the Montreal Cognitive Assessment (MoCA) scores and cognitive performance of each cognitive subdomain using a multiple, comparison correction. There were no significant differences in age at baseline, age at disease onset, gender ratio, disease duration, clinical severity, MoCA score, or education level between the groups. The automated subcortical volumetric analysis revealed that the changes in subcortical deep gray matter volumes of the caudate, putamen, and thalamus did not differ significantly between the groups. The areas of cortical thinning over time in the placebo group were more extensive, including the frontal, temporal, and parietal areas, whereas these areas in the MSC group were less extensive. Correlation analysis indicated that declines in MoCA scores and phonemic fluency during the follow-up period were significantly correlated with cortical thinning of the frontal and posterior temporal areas and anterior temporal areas in MSA patients, respectively. In contrast, no
Statistical analysis of the W Cyg light curve
International Nuclear Information System (INIS)
Klyus, I.A.
1983-01-01
A statistical analysis of the light curve of W Cygni has been carried out. The process of brightness variations of the star is shown to be a stationary stochastic one. The hypothesis of stationarity was checked at the significance level α=0.05. Oscillations of the brightness with average durations of 131 and 250 days have been found. It is proved that the oscillations are narrow-band noise, i.e. cycles. Peaks on the power spectrum corresponding to these cycles exceed the 99% confidence interval. It has been stated that the oscillations are independent
Statistical Analysis of a Method to Predict Drug-Polymer Miscibility
DEFF Research Database (Denmark)
Knopp, Matthias Manne; Olesen, Niels Erik; Huang, Yanbin
2016-01-01
In this study, a method proposed to predict drug-polymer miscibility from differential scanning calorimetry measurements was subjected to statistical analysis. The method is relatively fast and inexpensive and has gained popularity as a result of the increasing interest in the formulation of drug...... as provided in this study. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association J Pharm Sci....
A Bifactor Approach to Model Multifaceted Constructs in Statistical Mediation Analysis
Gonzalez, Oscar; MacKinnon, David P.
2018-01-01
Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to…
Statistical Compilation of the ICT Sector and Policy Analysis | Page 4 ...
International Development Research Centre (IDRC) Digital Library (Canada)
Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...
ROOT — A C++ framework for petabyte data storage, statistical analysis and visualization
Antcheva, I; Bellenot, B; Biskup, M; Brun, R; Buncic, N; Canal, Ph; Casadei, D; Couet, O; Fine, V; Franco, L; Ganis, G; Gheata, A; Gonzalez Maline, D; Goto, M; Iwaszkiewicz, J; Kreshuk, A; Marcos Segura, D; Maunder, R; Moneta, L; Naumann, A; Offermann, E; Onuchin, V; Panacek, S; Rademakers, F; Russo, P; Tadel, M
2009-01-01
ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web, or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariat...
Quantile regression for the statistical analysis of immunological data with many non-detects.
Eilers, Paul H C; Röder, Esther; Savelkoul, Huub F J; van Wijk, Roy Gerth
2012-07-07
Immunological parameters are hard to measure. A well-known problem is the occurrence of values below the detection limit, the non-detects. Non-detects are a nuisance, because classical statistical analyses, like ANOVA and regression, cannot be applied. The more advanced statistical techniques currently available for the analysis of datasets with non-detects can only be used if a small percentage of the data are non-detects. Quantile regression, a generalization of percentiles to regression models, models the median or higher percentiles and tolerates very high numbers of non-detects. We present a non-technical introduction and illustrate it with an application to real data from a clinical trial. We show that by using quantile regression, groups can be compared and that meaningful linear trends can be computed, even if more than half of the data consists of non-detects. Quantile regression is a valuable addition to the statistical methods that can be used for the analysis of immunological datasets with non-detects.
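The key property behind this robustness, that the pinball (check) loss is minimized by the corresponding sample quantile, can be illustrated directly; the data below are invented, with non-detects recorded at a hypothetical detection limit of 0.5:

```python
# The pinball (check) loss of quantile regression: its minimizer is the sample
# quantile, which survives many non-detects piled up at the detection limit.
def pinball(tau, y, q):
    return sum(tau * (v - q) if v >= q else (1 - tau) * (q - v) for v in y)

# half of the "measurements" are non-detects recorded at the limit 0.5
y = [0.5, 0.5, 0.5, 0.5, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
candidates = [0.5, 1.0, 2.0, 3.0, 4.0, 5.0]
best = min(candidates, key=lambda q: pinball(0.75, y, q))
print(best)  # the upper quantile is still identified despite the non-detects
```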
Teo, Guoshou; Kim, Sinae; Tsou, Chih-Chiang; Collins, Ben; Gingras, Anne-Claude; Nesvizhskii, Alexey I; Choi, Hyungwon
2015-11-03
Data independent acquisition (DIA) mass spectrometry is an emerging technique that offers more complete detection and quantification of peptides and proteins across multiple samples. DIA allows fragment-level quantification, which can be considered as repeated measurements of the abundance of the corresponding peptides and proteins in the downstream statistical analysis. However, few statistical approaches are available for aggregating these complex fragment-level data into peptide- or protein-level statistical summaries. In this work, we describe a software package, mapDIA, for statistical analysis of differential protein expression using DIA fragment-level intensities. The workflow consists of three major steps: intensity normalization, peptide/fragment selection, and statistical analysis. First, mapDIA offers normalization of fragment-level intensities by total intensity sums as well as a novel alternative normalization by local intensity sums in retention time space. Second, mapDIA removes outlier observations and selects peptides/fragments that preserve the major quantitative patterns across all samples for each protein. Last, using the selected fragments and peptides, mapDIA performs model-based statistical significance analysis of protein-level differential expression between specified groups of samples. Using a comprehensive set of simulation datasets, we show that mapDIA detects differentially expressed proteins with accurate control of the false discovery rates. We also describe the analysis procedure in detail using two recently published DIA datasets generated for the 14-3-3β dynamic interaction network and the prostate cancer glycoproteome. The software was written in C++ and the source code is available for free through the SourceForge website http://sourceforge.net/projects/mapdia/. This article is part of a Special Issue entitled: Computational Proteomics. Copyright © 2015 Elsevier B.V. All rights reserved.
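The first step above, normalization by total intensity sums, can be sketched on a toy two-sample matrix (the retention-time-local variant mapDIA also offers is not shown, and this is only the idea, not mapDIA's code):

```python
# Total-intensity-sum normalization of fragment intensities across samples
# (toy matrix; a sketch of the idea, not mapDIA's implementation).
def normalize_total(samples):
    """Scale each sample's intensities so all samples share the same total."""
    totals = [sum(s) for s in samples]
    target = sum(totals) / len(totals)
    return [[x * target / t for x in s] for s, t in zip(samples, totals)]

samples = [[100.0, 300.0], [200.0, 600.0]]  # two samples, two fragments
norm = normalize_total(samples)
print([sum(s) for s in norm])  # equal totals after normalization
```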
GIS and statistical analysis for landslide susceptibility mapping in the Daunia area, Italy
Mancini, F.; Ceppi, C.; Ritrovato, G.
2010-09-01
This study focuses on landslide susceptibility mapping in the Daunia area (Apulian Apennines, Italy) and achieves this by using a multivariate statistical method and data processing in a Geographical Information System (GIS). The Logistic Regression (hereafter LR) method was chosen to produce a susceptibility map over an area of 130 000 ha where small settlements are historically threatened by landslide phenomena. By means of LR analysis, the tendency to landslide occurrences was, therefore, assessed by relating a landslide inventory (dependent variable) to a series of causal factors (independent variables) which were managed in the GIS, while the statistical analyses were performed by means of the SPSS (Statistical Package for the Social Sciences) software. The LR analysis produced a reliable susceptibility map of the investigated area and the probability level of landslide occurrence was ranked in four classes. The overall performance achieved by the LR analysis was assessed by local comparison between the expected susceptibility and an independent dataset extrapolated from the landslide inventory. Of the samples classified as susceptible to landslide occurrences, 85% correspond to areas where landslide phenomena have actually occurred. In addition, the consideration of the regression coefficients provided by the analysis demonstrated that a major role is played by the "land cover" and "lithology" causal factors in determining the occurrence and distribution of landslide phenomena in the Apulian Apennines.
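The LR step can be sketched with a plain gradient-descent logistic regression on synthetic factors; the study itself used SPSS on real causal factors such as lithology and land cover, so this is only an illustration of the technique:

```python
# Logistic regression fitted by gradient descent on two synthetic "causal
# factors" (illustrative only; the study used SPSS on a landslide inventory).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
# simulate binary "landslide occurred" labels from a logistic model
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(200)).astype(float)

w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)  # gradient of the mean log-loss

print(np.sign(w))  # signs recover the simulated effect directions
```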
Statistics 101 for Radiologists.
Anvari, Arash; Halpern, Elkan F; Samir, Anthony E
2015-10-01
Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.
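Two of the test-efficacy measures discussed, sensitivity and specificity, follow directly from a 2x2 confusion matrix; the counts below are invented:

```python
# Sensitivity and specificity from a 2x2 confusion matrix (invented counts).
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

se, sp = sens_spec(tp=90, fn=10, tn=80, fp=20)
print(se, sp)  # 0.9 0.8
```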
Statistical Analysis of Spectral Properties and Prosodic Parameters of Emotional Speech
Přibil, J.; Přibilová, A.
2009-01-01
The paper addresses the reflection of microintonation and spectral properties in male and female acted emotional speech. The microintonation component of speech melody is analyzed with regard to its spectral and statistical parameters. According to psychological research on emotional speech, different emotions are accompanied by different spectral noise. We control its amount by spectral flatness, according to which high-frequency noise is mixed into voiced frames during cepstral speech synthesis. Our experiments are aimed at statistical analysis of cepstral coefficient values and ranges of spectral flatness in three emotions (joy, sadness, anger) and a neutral state for comparison. Calculated histograms of the spectral flatness distribution are visually compared and modelled by a Gamma probability distribution. Histograms of the cepstral coefficient distribution are evaluated and compared using skewness and kurtosis. The statistical results show good agreement between male and female voices for all emotional states portrayed by several Czech and Slovak professional actors.
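Spectral flatness, the geometric-to-arithmetic mean ratio of the power spectrum (near 1 for white noise, near 0 for a pure tone), can be sketched on toy signals; this is the standard definition, not necessarily the paper's exact estimator:

```python
# Spectral flatness as the geometric/arithmetic mean ratio of the power
# spectrum (standard definition; toy signals).
import numpy as np

def spectral_flatness(x):
    psd = np.abs(np.fft.rfft(x)) ** 2 + 1e-12  # small floor avoids log(0)
    return np.exp(np.mean(np.log(psd))) / np.mean(psd)

rng = np.random.default_rng(1)
noise = rng.normal(size=1024)                            # broadband
tone = np.sin(2 * np.pi * 50 * np.arange(1024) / 1024)   # single frequency
print(spectral_flatness(noise) > spectral_flatness(tone))  # True
```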
Multivariate statistical analysis of wildfires in Portugal
Costa, Ricardo; Caramelo, Liliana; Pereira, Mário
2013-04-01
Several studies demonstrate that wildfires in Portugal present high temporal and spatial variability as well as cluster behavior (Pereira et al., 2005, 2011). This study aims to contribute to the characterization of the fire regime in Portugal through multivariate statistical analysis of the time series of the number of fires and area burned in Portugal during the 1980-2009 period. The data used in the analysis is an extended version of the Portuguese Rural Fire Database (PRFD) (Pereira et al., 2011), provided by the National Forest Authority (Autoridade Florestal Nacional, AFN), the Portuguese Forest Service, which includes information for more than 500,000 fire records. There are many advanced techniques for examining the relationships among multiple time series at the same time (e.g., canonical correlation analysis, principal components analysis, factor analysis, path analysis, multiple analysis of variance, clustering systems). This study compares and discusses the results obtained with these different techniques. Pereira, M.G., Trigo, R.M., DaCamara, C.C., Pereira, J.M.C., Leite, S.M., 2005: "Synoptic patterns associated with large summer forest fires in Portugal". Agricultural and Forest Meteorology, 129, 11-25. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).
Li, Gaoming; Yi, Dali; Wu, Xiaojiao; Liu, Xiaoyu; Zhang, Yanqi; Liu, Ling; Yi, Dong
2015-01-01
Background Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and the Statistical Analysis System (SAS) are only 55% and 15%, respectively. Conclusion The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.
Glavatskiĭ, A Ia; Guzhovskaia, N V; Lysenko, S N; Kulik, A V
2005-12-01
The authors proposed an approach to preoperative diagnosis of the degree of supratentorial brain glioma anaplasia using statistical analysis methods. It relies on a complex examination of 934 patients with degree I-IV anaplasia who had been treated at the Institute of Neurosurgery from 1990 to 2004. The use of statistical analysis methods for differential diagnostics of the degree of brain glioma anaplasia may optimize the diagnostic algorithm, increase the reliability of obtained data and, in some cases, avoid unwarranted surgical interventions. Clinically important signs for the use of statistical analysis methods aimed at preoperative diagnosis of brain glioma anaplasia have been defined.
Statistics without Tears: Complex Statistics with Simple Arithmetic
Smith, Brian
2011-01-01
One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…
ZnO crystals obtained by electrodeposition: Statistical analysis of most important process variables
International Nuclear Information System (INIS)
Cembrero, Jesus; Busquets-Mataix, David
2009-01-01
In this paper a comparative study, by means of a statistical analysis, of the main process variables affecting ZnO crystal electrodeposition is presented. ZnO crystals were deposited on two different substrates, silicon wafer and indium tin oxide. The control variables were substrate type, electrolyte concentration, temperature, exposure time and current density. The morphologies of the different substrates were observed using scanning electron microscopy. The percentage of substrate area covered by the ZnO deposit was calculated by computational image analysis. The experimental design was based on a two-level factorial analysis involving a series of 32 experiments and an analysis of variance. Statistical results reveal that the variables exerting a significant influence on the area covered by the ZnO deposit are electrolyte concentration, substrate type and deposition time, together with a combined two-factor interaction between temperature and current density. However, morphology is also influenced by the surface roughness of the substrates.
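The two-level factorial analysis described above can be sketched as follows; the factor names are taken from the abstract, but the coded 2^3 design and response values are hypothetical stand-ins for the paper's 32-run experiment:

```python
import itertools
import numpy as np

# Hypothetical 2^3 two-level factorial in coded units (-1/+1):
# factors are electrolyte concentration, temperature, and deposition time.
rng = np.random.default_rng(1)
design = np.array(list(itertools.product([-1, 1], repeat=3)))
true_coefs = np.array([4.0, 0.2, 2.5])   # concentration and time matter, temperature barely
coverage = 50 + design @ true_coefs + rng.normal(0, 0.5, len(design))

# Main effect of a factor: mean response at its high level minus at its low level.
effects = {}
for name, col in zip(["concentration", "temperature", "time"], design.T):
    effects[name] = coverage[col == 1].mean() - coverage[col == -1].mean()
    print(f"{name}: {effects[name]:+.2f}")
```

Because the design is balanced, each main effect estimate averages out the other factors; significant factors stand out from the noise, mirroring the ANOVA screening in the paper.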
[The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].
Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel
2017-01-01
The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists of drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was drawn. However, choosing the appropriate statistical test often poses a challenge for novice researchers. To choose a statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
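The decision rule described above (check the distribution, then choose a parametric or nonparametric test) can be sketched with SciPy; the two groups and their sample sizes are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical continuous measurements from two independent groups.
rng = np.random.default_rng(2)
group_a = rng.normal(5.0, 1.0, 40)
group_b = rng.normal(5.8, 1.0, 40)

# Step 1: the scale is continuous; check normality in each group (Shapiro-Wilk).
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

# Step 2: parametric test if both groups look normal, nonparametric otherwise.
if normal:
    result = stats.ttest_ind(group_a, group_b)      # Student's t-test
else:
    result = stats.mannwhitneyu(group_a, group_b)   # Mann-Whitney U test
print(type(result).__name__, f"p = {result.pvalue:.4f}")
```

This only automates one of the three aspects the paper lists (scale of measurement); the research design and number of measurements still have to be judged by the analyst.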
Per Object statistical analysis
DEFF Research Database (Denmark)
2008-01-01
of a specific class in turn, and uses a pair of PPO stages to derive the statistics and then assign them to the objects' Object Variables. It may be that this could all be done in some other, simpler way, but several other ways that were tried did not succeed. The procedure output has been tested against...
Statistical analysis of installed wind capacity in the United States
International Nuclear Information System (INIS)
Staid, Andrea; Guikema, Seth D.
2013-01-01
There is a large disparity in the amount of wind power capacity installed in each of the states in the U.S. It is often thought that the different policies of individual state governments are the main reason for these differences, but this may not necessarily be the case. The aim of this paper is to use statistical methods to study the factors that have the most influence on the amount of installed wind capacity in each state. From this analysis, we were able to use these variables to accurately predict the installed wind capacity and to gain insight into the driving factors for wind power development and the reasons behind the differences among states. Using our best model, we find that the most important variables for explaining the amount of wind capacity have to do with the physical and geographic characteristics of the state as opposed to policies in place that favor renewable energy. - Highlights: • We conduct a statistical analysis of factors influencing wind capacity in the U.S. • We find that state policies do not strongly influence the differences among states. • Driving factors are wind resources, cropland area, and available percentage of land
Statistical Learning in Specific Language Impairment and Autism Spectrum Disorder: A Meta-Analysis
Directory of Open Access Journals (Sweden)
Rita Obeid
2016-08-01
Full Text Available Impairments in statistical learning might be a common deficit among individuals with Specific Language Impairment (SLI) and Autism Spectrum Disorder (ASD). Using meta-analysis, we examined statistical learning in SLI (14 studies, 15 comparisons) and ASD (13 studies, 20 comparisons) to evaluate this hypothesis. Effect sizes were examined as a function of diagnosis across multiple statistical learning tasks (Serial Reaction Time, Contextual Cueing, Artificial Grammar Learning, Speech Stream, Observational Learning, Probabilistic Classification). Individuals with SLI showed deficits in statistical learning relative to age-matched controls, g = .47, 95% CI [.28, .66], p < .001. In contrast, statistical learning was intact in individuals with ASD relative to controls, g = –.13, 95% CI [–.34, .08], p = .22. Effect sizes did not vary as a function of task modality or participant age. Our findings inform debates about overlapping social-communicative difficulties in children with SLI and ASD by suggesting distinct underlying mechanisms. In line with the procedural deficit hypothesis (Ullman & Pierpont, 2005), impaired statistical learning may account for the phonological and syntactic difficulties associated with SLI. In contrast, impaired statistical learning fails to account for the social-pragmatic difficulties associated with ASD.
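A fixed-effect inverse-variance pooling of per-study effect sizes, the kind of computation behind summary values such as the g and confidence intervals above, can be sketched as follows (the study-level g values and standard errors are illustrative, not the meta-analysis's data):

```python
import math

# Illustrative per-study Hedges' g values and standard errors (not the paper's data).
studies = [(0.52, 0.20), (0.35, 0.15), (0.61, 0.25), (0.40, 0.18)]

# Fixed-effect inverse-variance pooling: weight each study by 1/SE^2.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(f"g = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

A random-effects model (e.g., DerSimonian-Laird) would additionally estimate between-study heterogeneity; the fixed-effect version is shown here only to make the weighting explicit.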
International Nuclear Information System (INIS)
Lacombe, J.P.
1985-12-01
A statistical study of non-homogeneous and spatial Poisson processes forms the first part of this thesis. A Neyman-Pearson type test is defined concerning the intensity measurement of these processes. Conditions are given under which the consistency of the test is assured, together with others giving the asymptotic normality of the test statistics. Then some techniques for the statistical processing of Poisson fields and their applications to a particle multidetector study are given. Quality tests of the device are proposed together with signal extraction methods [fr
Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets
Energy Technology Data Exchange (ETDEWEB)
Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-10-30
The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes when assessing the potential variability among laboratories attempting to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment, as well as the use of different instruments of varying age. The results appear to be a good representation of the broader safety testing community, given the range of methods, instruments, and environments included in the IDCA Proficiency Test.
STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.
Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron
2015-08-07
Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
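The reusable-holdout idea can be sketched via the Thresholdout mechanism from Dwork et al.: answer adaptive queries with the training-set value while it agrees with the holdout, and otherwise return a noise-perturbed holdout value. The threshold and noise scale below are illustrative choices, not the paper's exact parameterization:

```python
import numpy as np

def thresholdout(train_stat, holdout_stat, threshold=0.04, sigma=0.01, rng=None):
    # Thresholdout sketch: answer with the training value unless it deviates
    # from the holdout by more than a noisy threshold; otherwise return a
    # noise-perturbed holdout value, limiting information leakage.
    rng = rng or np.random.default_rng()
    if abs(train_stat - holdout_stat) < threshold + rng.normal(0, sigma):
        return train_stat
    return holdout_stat + rng.normal(0, sigma)

rng = np.random.default_rng(3)
print(thresholdout(0.71, 0.70, rng=rng))   # generalizing query: train and holdout agree
print(thresholdout(0.90, 0.70, rng=rng))   # overfit query: large train/holdout gap
```

Because the analyst only sees noisy answers when train and holdout disagree, the holdout set leaks little information per query and can be reused across many adaptively chosen analyses.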
International Conference on Modern Problems of Stochastic Analysis and Statistics
2017-01-01
This book brings together the latest findings in the area of stochastic analysis and statistics. The individual chapters cover a wide range of topics, from limit theorems, Markov processes, nonparametric methods, actuarial science and population dynamics to many others. The volume is dedicated to Valentin Konakov, head of the International Laboratory of Stochastic Analysis and its Applications, on the occasion of his 70th birthday. Contributions were prepared by the participants of the international conference “Modern problems of stochastic analysis and statistics”, held at the Higher School of Economics in Moscow from May 29 to June 2, 2016. It offers a valuable reference resource for researchers and graduate students interested in modern stochastics.
Statistical analysis of longitudinal quality of life data with missing measurements
Zwinderman, A. H.
1992-01-01
The statistical analysis of longitudinal quality of life data in the presence of missing data is discussed. In cancer trials missing data are generated due to the fact that patients die, drop out, or are censored. These missing data are problematic in the monitoring of the quality of life during the
A framework for the economic analysis of data collection methods for vital statistics.
Jimenez-Soto, Eliana; Hodge, Andrew; Nguyen, Kim-Huong; Dettrick, Zoe; Lopez, Alan D
2014-01-01
Over recent years there has been a strong movement towards the improvement of vital statistics and other types of health data that inform evidence-based policies. Collecting such data is not cost free. To date there is no systematic framework to guide investment decisions on methods of data collection for vital statistics or health information in general. We developed a framework to systematically assess the comparative costs and outcomes/benefits of the various data collection methods (DCMs) for vital statistics. The proposed framework is four-pronged and utilises two major economic approaches to systematically assess the available DCMs: cost-effectiveness analysis and efficiency analysis. We built a stylised example of a hypothetical low-income country to perform a simulation exercise in order to illustrate an application of the framework. Using simulated data, the results from the stylised example show that the rankings of the DCMs are not affected by the use of either cost-effectiveness or efficiency analysis. However, the rankings are affected by how quantities are measured. There have been several calls for global improvements in collecting useable data, including vital statistics, from health information systems to inform public health policies. Ours is the first study that proposes a systematic framework to assist countries in undertaking an economic evaluation of DCMs. Despite numerous challenges, we demonstrate that a systematic assessment of the outputs and costs of DCMs is not only necessary, but also feasible. The proposed framework is general enough to be easily extended to other areas of health information.
Institute of Scientific and Technical Information of China (English)
(author not listed)
2007-01-01
Gyro fault diagnosis plays a critical role in inertial navigation systems, which demand high reliability and precision. A new fault diagnosis strategy based on statistical parameter analysis (SPA) and a support vector machine (SVM) classification model was proposed for dynamically tuned gyroscopes (DTG). SPA, a kind of time-domain analysis approach, was introduced to compute a set of statistical parameters of the vibration signal as the state features of the DTG, with which the SVM model, a novel learning machine based on statistical learning theory (SLT), was constructed and applied to identify the working state of the DTG. The experimental results verify that the proposed diagnostic strategy can simply and effectively extract the state features of the DTG; it outperforms the radial-basis function (RBF) neural network based diagnostic method and can more reliably and accurately diagnose the working state of the DTG.
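The SPA-plus-SVM pipeline described above can be sketched as follows; the feature set, signal model, and two-class fault structure are hypothetical simplifications, not the authors' DTG data:

```python
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def spa_features(signal):
    # Time-domain statistical parameters of a vibration signal (a simplified SPA):
    # mean, standard deviation, skewness, kurtosis, and RMS.
    return [signal.mean(), signal.std(), stats.skew(signal),
            stats.kurtosis(signal), np.sqrt(np.mean(signal ** 2))]

# Hypothetical training data: healthy gyros emit low-variance vibration noise,
# faulty ones high-variance noise (labels: 0 = healthy, 1 = faulty).
rng = np.random.default_rng(4)
X, y = [], []
for label, scale in [(0, 1.0), (1, 3.0)]:
    for _ in range(30):
        X.append(spa_features(rng.normal(0, scale, 512)))
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([spa_features(rng.normal(0, 3.0, 512))]))
```

Reducing each raw signal to a handful of statistical parameters is what keeps the classifier small and fast, which is the practical appeal of the SPA step.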
International Nuclear Information System (INIS)
Peebles, D.E.; Ohlhausen, J.A.; Kotula, P.G.; Hutton, S.; Blomfield, C.
2004-01-01
The acquisition of spectral images for x-ray photoelectron spectroscopy (XPS) is a relatively new approach, although it has been used with other analytical spectroscopy tools for some time. This technique provides full spectral information at every pixel of an image, in order to provide a complete chemical mapping of the imaged surface area. Multivariate statistical analysis techniques applied to the spectral image data allow the determination of chemical component species, and their distribution and concentrations, with minimal data acquisition and processing times. Some of these statistical techniques have proven to be very robust and efficient methods for deriving physically realistic chemical components without input by the user other than the spectral matrix itself. The benefits of multivariate analysis of the spectral image data include significantly improved signal to noise, improved image contrast and intensity uniformity, and improved spatial resolution - which are achieved due to the effective statistical aggregation of the large number of often noisy data points in the image. This work demonstrates the improvements in chemical component determination and contrast, signal-to-noise level, and spatial resolution that can be obtained by the application of multivariate statistical analysis to XPS spectral images
On Statistical Analysis of Competing Risks with Application to the Time of First Goal
Czech Academy of Sciences Publication Activity Database
Volf, Petr
2016-01-01
Roč. 2, č. 10 (2016), s. 606-623, č. článku 2. ISSN 2411-2518 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords : survival analysis * competing risks * sports statistics Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2016/SI/volf-0466157.pdf
Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference
Energy Technology Data Exchange (ETDEWEB)
Beggs, W.J.
1981-02-01
This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.
Statistical analysis of probabilistic models of software product lines with quantitative constraints
DEFF Research Database (Denmark)
Beek, M.H. ter; Legay, A.; Lluch Lafuente, Alberto
2015-01-01
We investigate the suitability of statistical model checking for the analysis of probabilistic models of software product lines with complex quantitative constraints and advanced feature installation options. Such models are specified in the feature-oriented language QFLan, a rich process algebra...... of certain behaviour to the expected average cost of products. This is supported by a Maude implementation of QFLan, integrated with the SMT solver Z3 and the distributed statistical model checker MultiVeStA. Our approach is illustrated with a bikes product line case study....
2014-01-01
Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when the pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. The apparently greater power of ANOVA and CSA at certain imbalances is achieved at the expense of a biased treatment effect estimate. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
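The ANCOVA-versus-change-score comparison can be sketched on simulated pretest-posttest data; the treatment effect, pre-post correlation, and sample size below are illustrative choices, not the paper's 126 scenarios:

```python
import numpy as np

# Simulated balanced trial: true treatment effect 0.5, pre-post correlation 0.6.
rng = np.random.default_rng(5)
n, rho, effect = 2000, 0.6, 0.5
group = np.repeat([0, 1], n // 2)
pre = rng.standard_normal(n)
post = rho * pre + effect * group + np.sqrt(1 - rho**2) * rng.standard_normal(n)

# ANCOVA: regress posttest on group, adjusting for pretest; the group
# coefficient is the adjusted treatment effect.
X = np.column_stack([np.ones(n), group, pre])
ancova_effect = np.linalg.lstsq(X, post, rcond=None)[0][1]

# Change-score analysis: difference in mean (post - pre) between groups.
change = post - pre
csa_effect = change[group == 1].mean() - change[group == 0].mean()
print(f"ANCOVA estimate: {ancova_effect:.2f}, change-score estimate: {csa_effect:.2f}")
```

With balanced baselines both estimators are unbiased, as here; the paper's point is that under baseline imbalance only the ANCOVA estimate stays unbiased, and its residual variance of 1 - rho^2 also makes it the more precise of the two.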
International Nuclear Information System (INIS)
Chung, Y. A.; Juh, R. H.; Kim, S. H.; Park, Y. H.; Lee, S. Y.; Sohn, H. S.; Chung, S. K.
2002-01-01
The basal ganglia are usually poorly delineated on IPT images in Parkinson's disease. We have studied simultaneous dual-isotope brain SPECT using I-123-IPT and Tc-99m-HMPAO in order to overcome this limitation of IPT imaging. 17 patients (M: 7, F: 10) with Parkinson's disease (idiopathic Parkinson's disease: 12, multiple system atrophy: 5) and 4 normal volunteers (N) underwent dual-isotope brain SPECT following simultaneous injection of 370 MBq Tc-99m-HMPAO (energy window: 130-146 keV) and 111 MBq I-123-IPT (energy window: 152-168 keV). We first obtained the parameters of spatial normalization during spatial normalization of the Tc-99m-HMPAO brain SPECT using a SPECT template. Using these parameters, we could spatially normalize the I-123-IPT brain SPECT to standard space, because the two images were acquired simultaneously. The differences between the groups (N vs IPD, N vs MSA, IPD vs MSA) were compared with t-tests (p<0.01). We demonstrated decreased perfusion in the head and body of the caudate and in the globus pallidus in MSA compared with IPD. No significant hypo- or hyperperfusion areas were observed in the other analyses. The method proposed in this study can effectively evaluate dopamine function, and is easily applicable to conventional gamma camera systems with dual-energy-window acquisition modes.
International Nuclear Information System (INIS)
Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe
2013-01-01
Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physics modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and license reactor safety analyses and core reload designs, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physics modelling and licensing safety analyses. In this paper, the TE multi-physics modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistics or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
Statistical analysis of ultrasonic measurements in concrete
Chiang, Chih-Hung; Chen, Po-Chih
2002-05-01
Stress wave techniques such as measurements of ultrasonic pulse velocity are often used to evaluate concrete quality in structures. For proper interpretation of measurement results, the dependence of pulse transit time on the average acoustic impedance and the material homogeneity along the sound path needs to be examined. Semi-direct measurement of pulse velocity can be more convenient than through-transmission measurement, as it is not necessary to access both sides of concrete floors or walls. A novel measurement scheme is proposed and verified based on statistical analysis. It is shown that semi-direct measurements are very effective for gathering large amounts of pulse velocity data from concrete reference specimens. The variability of the measurements is comparable with that reported by the American Concrete Institute for either break-off or pullout tests.
Cornillon, Pierre-Andre; Husson, Francois; Jegou, Nicolas; Josse, Julie; Kloareg, Maela; Matzner-Lober, Eric; Rouviere, Laurent
2012-01-01
An Overview of R: Main Concepts; Installing R; Work Session; Help; R Objects; Functions; Packages; Exercises. Preparing Data: Reading Data from File; Exporting Results; Manipulating Variables; Manipulating Individuals; Concatenating Data Tables; Cross-Tabulation; Exercises. R Graphics: Conventional Graphical Functions; Graphical Functions with lattice; Exercises. Making Programs with R: Control Flows; Predefined Functions; Creating a Function; Exercises. Statistical Methods: Introduction to the Statistical Methods; A Quick Start with R; Installing R; Opening and Closing R; The Command Prompt; Attribution, Objects, and Function; Selection; Other Rcmdr Package; Importing (or Inputting) Data; Graphs; Statistical Analysis; Hypothesis Test; Confidence Intervals for a Mean; Chi-Square Test of Independence; Comparison of Two Means; Testing Conformity of a Proportion; Comparing Several Proportions; The Power of a Test; Regression; Simple Linear Regression; Multiple Linear Regression; Partial Least Squares (PLS) Regression; Analysis of Variance and Covariance; One-Way Analysis of Variance; Multi-Way Analysis of Varian...
Statistical analysis of management data
Gatignon, Hubert
2013-01-01
This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach with symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems
Advanced statistical methods in data science
Chen, Jiahua; Lu, Xuewen; Yi, Grace; Yu, Hao
2016-01-01
This book gathers invited presentations from the 2nd Symposium of the ICSA-CANADA Chapter, held at the University of Calgary from August 4-6, 2015. The aim of this symposium was to promote advanced statistical methods in big-data sciences, to allow researchers to exchange ideas on statistics and data science, and to embrace the challenges and opportunities of statistics and data science in the modern world. It addresses diverse themes in advanced statistical analysis in big-data sciences, including methods for administrative data analysis, survival data analysis, missing data analysis, high-dimensional and genetic data analysis, longitudinal and functional data analysis, the design and analysis of studies with response-dependent and multi-phase designs, time series and robust statistics, and statistical inference based on likelihood, empirical likelihood and estimating functions. The editorial group selected 14 high-quality presentations from this successful symposium and invited the presenters to prepare a fu...
Statistical Tutorial | Center for Cancer Research
Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. ST is designed as a follow-up to Statistical Analysis of Research Data (SARD) held in April 2018. The tutorial will apply the general principles of statistical analysis of research data, including descriptive statistics, z- and t-tests of means and mean
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with various case-control ratios. Here, we investigate the power loss problem by the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods performing equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In the simulated gene-level association studies under unbalanced settings, our method recovered up to 85% power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
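For context, the standard score-based meta-analysis that this work improves upon can be sketched as follows: each study contributes a score U and information V for H0: beta = 0, and the combined statistic (sum U)^2 / (sum V) is referred to a chi-square distribution with one degree of freedom. This is a minimal illustration of the standard method, not the authors' improved statistics; the simple linear model without covariates and all names are assumptions.

```python
import random

def score_stat(g, y):
    """Per-study score U and information V for H0: beta = 0 in the
    linear model y = alpha + beta*g + error (no covariates)."""
    n = len(y)
    ybar = sum(y) / n
    gbar = sum(g) / n
    U = sum((gi - gbar) * (yi - ybar) for gi, yi in zip(g, y))
    sigma2 = sum((yi - ybar) ** 2 for yi in y) / n  # residual variance under H0
    V = sigma2 * sum((gi - gbar) ** 2 for gi in g)
    return U, V

def meta_chi2(studies):
    """Standard score-based meta-analysis: scores and informations add."""
    parts = [score_stat(g, y) for g, y in studies]
    U = sum(p[0] for p in parts)
    V = sum(p[1] for p in parts)
    return U * U / V  # compare with a chi-square(1) distribution

# demo: two studies of very different sizes with a true effect of 0.4
random.seed(1)
studies = []
for size in (100, 1000):
    g = [random.choice([0, 1, 2]) for _ in range(size)]  # genotype dosage
    y = [0.4 * gi + random.gauss(0.0, 1.0) for gi in g]
    studies.append((g, y))
chi2 = meta_chi2(studies)
```

The power loss the abstract describes arises when case-control ratios differ across studies, which this simplified linear sketch does not model.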
Savage, Leonard J
1972-01-01
Classic analysis of the foundations of statistics and development of personal probability, one of the greatest controversies in modern statistical thought. Revised edition. Calculus, probability, statistics, and Boolean algebra are recommended.
Statistical Model of Extreme Shear
DEFF Research Database (Denmark)
Larsen, Gunner Chr.; Hansen, Kurt Schaldemose
2004-01-01
In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: Statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.
A statistical manual for chemists
Bauer, Edward
1971-01-01
A Statistical Manual for Chemists, Second Edition presents simple and fast statistical tools for the data analysis needs of working chemists. This edition is organized into nine chapters and begins with an overview of the fundamental principles of the statistical techniques used in experimental data analysis. The subsequent chapters deal with the concept of statistical average, experimental design, and analysis of variance. The discussion then shifts to control charts, with particular emphasis on variable charts that are more useful to chemists and chemical engineers. A chapter focuses on the effect
Jocelin, G.; Arivarasan, A.; Ganesan, M.; Prasad, N. Rajendra; Sasikala, G.
2016-04-01
Quantum dots (QDs) are gaining widespread recognition for their luminescence behavior and unique photophysical properties as bio-markers and inorganic fluorophores. Despite these advantages, their clinical application is hampered by surface coatings that decrease luminescence efficiency. The present study reports the preparation of CdTe QDs capped with a biologically active thiol-based material, mercaptosuccinic acid (MSA), for the diagnosis of oral cancer (KB) cells, acting as a fluorophore marking targeted tumor cells while also exhibiting certain cytotoxic effects. The synthesized MSA-coated CdTe QDs are spherical in shape with an average particle size of 3-5 nm. In vitro, the rapid uptake of MSA CdTe QDs in oral cancer cell lines was assessed through fluorescence microscopy. Further, this study evaluates the therapeutic efficiency of MSA CdTe QDs in human oral cancer cell lines using MTT analysis. MSA CdTe QDs exhibit significant cytotoxicity in oral cancer cells in a dose-dependent manner, with a low IC50 compared with other raw CdTe QDs. MSA CdTe QDs were also applied to human lymphocytes (normal cells) to assess and compare the toxicity profile of QDs in normal cells and oral tumors. The results of our present study strengthen our hypothesis of using MSA CdTe QDs as a detector for tracking and fluorescence imaging of oral cancer cells while exhibiting sufficient cytotoxicity in them.
Statistical analysis of the early phase of SBO accident for PWR
Energy Technology Data Exchange (ETDEWEB)
Kozmenkov, Yaroslav, E-mail: y.kozmenkov@hzdr.de; Jobst, Matthias, E-mail: m.jobst@hzdr.de; Kliem, Soeren, E-mail: s.kliem@hzdr.de; Schaefer, Frank, E-mail: f.schaefer@hzdr.de; Wilhelm, Polina, E-mail: p.wilhelm@hzdr.de
2017-04-01
Highlights: • Best estimate model of generic German PWR is used in ATHLET-CD simulations. • Uncertainty and sensitivity analysis of the early phase of SBO accident is presented. • Prediction intervals for occurrence of main events are evaluated. - Abstract: A statistical approach is used to analyse the early phase of a station blackout accident for a generic German PWR, with the best estimate system code ATHLET-CD as the computation tool. The analysis is mainly focused on the timescale uncertainties of the accident events which can be detected at the plant. The developed input deck allows variations of all input uncertainty parameters relevant to the case. The list of identified and quantified input uncertainties includes 30 parameters related to the simulated physical phenomena/processes. Time uncertainties of main events as well as the major contributors to these uncertainties are defined. The uncertainty in decay heat has the highest contribution to the uncertainties of the analysed events. A linear regression analysis is used for predicting times of future events from detected times of occurred/past events. The accuracy of event predictions is estimated and verified. The presented statistical approach could be helpful for assessing and improving existing, or elaborating additional, emergency operating procedures aimed at preventing severe damage of the reactor core.
Directory of Open Access Journals (Sweden)
Zaira M Alieva
2016-01-01
The article analyzes the application of mathematical and statistical methods in the analysis of socio-humanistic texts. It describes the essence of mathematical and statistical methods and presents examples of their use in the study of humanities and social phenomena. It considers the key issues faced by the expert in applying mathematical-statistical methods in the socio-humanitarian sphere, including the persistent contrast between the socio-humanitarian sciences and mathematics, the complexity of identifying the object that is the bearer of the problem, and the need for a probabilistic approach. Conclusions are drawn from the results of the study.
Research and Development on Food Nutrition Statistical Analysis Software System
Du Li; Ke Yun
2013-01-01
Designing and developing food nutrition component statistical analysis software can automate nutrition calculation, improve nutrition professionals' working efficiency, and support the informatization of nutrition education and outreach. In the software development process, software engineering methods and database technology are used to calculate daily human nutritional intake, and an intelligent system is used to evaluate the user's hea...
Applied statistical designs for the researcher
Paulson, Daryl S
2003-01-01
Research and Statistics Basic Review of Parametric Statistics Exploratory Data Analysis Two Sample Tests Completely Randomized One-Factor Analysis of Variance One and Two Restrictions on Randomization Completely Randomized Two-Factor Factorial Designs Two-Factor Factorial Completely Randomized Blocked Designs Useful Small Scale Pilot Designs Nested Statistical Designs Linear Regression Nonparametric Statistics Introduction to Research Synthesis and "Meta-Analysis" and Conclusory Remarks References Index.
Louie, Brian E; Farivar, Alexander S; Shultz, Dale; Brennan, Christina; Vallières, Eric; Aye, Ralph W
2014-08-01
In 2012 the United States Food and Drug Administration approved implantation of a magnetic sphincter to augment the native reflux barrier based on single-series data. We sought to compare our initial experience with magnetic sphincter augmentation (MSA) with laparoscopic Nissen fundoplication (LNF). A retrospective case-control study was performed of consecutive patients undergoing either procedure who had chronic gastroesophageal reflux disease (GERD) and a hiatal hernia of less than 3 cm. Sixty-six patients underwent operations (34 MSA and 32 LNF). The groups were similar in reflux characteristics and hernia size. Operative time was longer for LNF (118 vs 73 min) and resulted in 1 return to the operating room and 1 readmission. Preoperative symptoms were abolished in both groups. At 6 months or longer postoperatively, scores on the Gastroesophageal Reflux Disease Health Related Quality of Life scale improved from 20.6 to 5.0 for MSA vs 22.8 to 5.1 for LNF. Postoperative DeMeester scores (14.2 vs 5.1, p=0.0001) and the percentage of time pH was less than 4 (4.6 vs 1.1; p=0.0001) were normalized in both groups but statistically different. MSA resulted in improved gassy and bloated feelings (1.32 vs 2.36; p=0.59) and enabled belching in 67% compared with none of the LNFs. MSA results in similar objective control of GERD, symptom resolution, and improved quality of life compared with LNF. MSA seems to restore a more physiologic sphincter that allows physiologic reflux, facilitates belching, and creates less bloating and flatulence. This device has the potential to allow individualized treatment of patients with GERD and increase the surgical treatment of GERD. Copyright © 2014 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Statistical Mechanics Analysis of ATP Binding to a Multisubunit Enzyme
International Nuclear Information System (INIS)
Zhang Yun-Xin
2014-01-01
Due to inter-subunit communication, multisubunit enzymes usually hydrolyze ATP in a concerted fashion. However, so far the principle of this process remains poorly understood. In this study, from the viewpoint of statistical mechanics, a simple model is presented. In this model, we assume that the binding of ATP will change the potential of the corresponding enzyme subunit, and the degree of this change depends on the state of its adjacent subunits. The probability of the enzyme being in a given state satisfies the Boltzmann distribution. Although it looks quite simple, this model can fit the recent experimental data of chaperonin TRiC/CCT well. From this model, the dominant state of TRiC/CCT can be obtained. This study provides a new way to understand biophysical processes by statistical mechanics analysis. (interdisciplinary physics and related areas of science and technology)
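The Boltzmann-weighted state probabilities described above can be sketched numerically. The energy function below (a per-subunit binding energy plus a nearest-neighbour coupling on a ring) is an illustrative stand-in for the paper's model; the parameter names and values are assumptions.

```python
import math
from itertools import product

def state_probs(n_subunits, eps, J):
    """Boltzmann weights for ATP occupancy states of a ring of subunits.

    eps: binding energy per occupied subunit; J: coupling energy added
    for each adjacent pair of occupied subunits (both in units of kT).
    Illustrative parameters, not taken from the paper.
    """
    weights = {}
    for s in product([0, 1], repeat=n_subunits):
        E = eps * sum(s)
        # nearest-neighbour coupling around the ring models
        # inter-subunit communication
        E += J * sum(s[i] * s[(i + 1) % n_subunits] for i in range(n_subunits))
        weights[s] = math.exp(-E)
    Z = sum(weights.values())  # partition function
    return {s: w / Z for s, w in weights.items()}

# demo: favourable binding (eps < 0) with cooperative coupling (J < 0)
probs = state_probs(4, eps=-1.0, J=-0.5)
```

With both energies negative the fully occupied state dominates, which is one way concerted ("all-or-none") hydrolysis can emerge from such a model.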
Avoiding Pitfalls in the Statistical Analysis of Heterogeneous Tumors
Directory of Open Access Journals (Sweden)
Judith-Anne W. Chapman
2009-01-01
Information about tumors is usually obtained from a single assessment of a tumor sample, performed at some point in the course of the development and progression of the tumor, with patient characteristics being surrogates for natural history context. Differences between cells within individual tumors (intratumor heterogeneity and between tumors of different patients (intertumor heterogeneity may mean that a small sample is not representative of the tumor as a whole, particularly for solid tumors which are the focus of this paper. This issue is of increasing importance as high-throughput technologies generate large multi-feature data sets in the areas of genomics, proteomics, and image analysis. Three potential pitfalls in statistical analysis are discussed (sampling, cut-points, and validation and suggestions are made about how to avoid these pitfalls.
Fissure formation in coke. 3: Coke size distribution and statistical analysis
Energy Technology Data Exchange (ETDEWEB)
D.R. Jenkins; D.E. Shaw; M.R. Mahoney [CSIRO, North Ryde, NSW (Australia). Mathematical and Information Sciences
2010-07-15
A model of coke stabilization, based on a fundamental model of fissuring during carbonisation is used to demonstrate the applicability of the fissuring model to actual coke size distributions. The results indicate that the degree of stabilization is important in determining the size distribution. A modified form of the Weibull distribution is shown to provide a better representation of the whole coke size distribution compared to the Rosin-Rammler distribution, which is generally only fitted to the lump coke. A statistical analysis of a large number of experiments in a pilot scale coke oven shows reasonably good prediction of the coke mean size, based on parameters related to blend rank, amount of low rank coal, fluidity and ash. However, the prediction of measures of the spread of the size distribution is more problematic. The fissuring model, the size distribution representation and the statistical analysis together provide a comprehensive capability for understanding and predicting the mean size and distribution of coke lumps produced during carbonisation. 12 refs., 16 figs., 4 tabs.
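A two-parameter Weibull fit of a size distribution, of the kind compared here with the Rosin-Rammler form, can be sketched via the standard linearized probability plot. This is a generic illustration, not the modified Weibull form the paper proposes; the plotting position and function names are assumptions.

```python
import math
import random

def fit_weibull(sizes):
    """Fit a two-parameter Weibull to sampled sizes via the linearized
    probability plot: ln(-ln(1-F)) = k*ln(d) - k*ln(lambda).
    (Illustrative sketch; the paper uses a *modified* Weibull form.)
    """
    xs = sorted(sizes)
    n = len(xs)
    pts = []
    for i, d in enumerate(xs, 1):
        F = (i - 0.3) / (n + 0.4)  # median-rank plotting position
        pts.append((math.log(d), math.log(-math.log(1.0 - F))))
    # least-squares slope k and intercept -k*ln(lambda)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    k = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))
    lam = math.exp(mx - my / k)
    return k, lam

# demo: recover parameters from synthetic Weibull-distributed "lump sizes"
random.seed(2)
k_true, lam_true = 2.0, 50.0
sizes = [lam_true * (-math.log(1.0 - random.random())) ** (1.0 / k_true)
         for _ in range(2000)]
k, lam = fit_weibull(sizes)
```

The Rosin-Rammler form fitted only to lump coke corresponds to the same functional family applied to the upper tail, which is why a whole-distribution Weibull fit can do better.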
Statistical methods for the analysis of high-throughput metabolomics data
Directory of Open Access Journals (Sweden)
Fabian J. Theis
2013-01-01
Metabolomics is a relatively new high-throughput technology that aims at measuring all endogenous metabolites within a biological sample in an unbiased fashion. The resulting metabolic profiles may be regarded as functional signatures of the physiological state, and have been shown to comprise effects of genetic regulation as well as environmental factors. This potential to connect genotypic to phenotypic information promises new insights and biomarkers for different research fields, including biomedical and pharmaceutical research. In the statistical analysis of metabolomics data, many techniques from other omics fields can be reused. Recently, however, a number of tools specific to metabolomics data have been developed as well. The focus of this mini review will be on recent advancements in the analysis of metabolomics data, especially by utilizing Gaussian graphical models and independent component analysis.
Different loss of dopamine transporter according to subtype of multiple system atrophy
Energy Technology Data Exchange (ETDEWEB)
Kim, Hae Won [Keimyung University Dongsan Medical Center, Department of Nuclear Medicine, Daegu (Korea, Republic of); Keimyung University, Department of Nuclear Medicine, School of Medicine, Jung-gu, Daegu (Korea, Republic of); Kim, Jae Seung; Oh, Minyoung; Oh, Jungsu S.; Lee, Sang Joo; Oh, Seung Jun [University of Ulsan College of Medicine, Department of Nuclear Medicine, Asan Medical Center, Songpa-gu, Seoul (Korea, Republic of); Chung, Sun Ju; Lee, Chong Sik [University of Ulsan College of Medicine, Department of Neurology, Asan Medical Center, Seoul (Korea, Republic of)
2016-03-15
The aim of this study was to evaluate whether striatal dopamine transporter (DAT) loss as measured by [18F]-fluorinated-N-3-fluoropropyl-2-b-carboxymethoxy-3-b-(4-iodophenyl) nortropane ([18F]FP-CIT) PET differs according to the metabolic subtype of multiple system atrophy (MSA) as assessed by [18F]FDG PET. This retrospective study included 50 patients with clinically diagnosed MSA who underwent [18F]FP-CIT and [18F]FDG brain PET scans. The PET images were analysed using 12 striatal subregional volume-of-interest templates (bilateral ventral striatum, anterior caudate, posterior caudate, anterior putamen, posterior putamen, and ventral putamen). The patients were classified into three metabolic subtypes according to the [18F]FDG PET findings: MSA-P(m) (striatal hypometabolism only), MSA-mixed(m) (both striatal and cerebellar hypometabolism), and MSA-C(m) (cerebellar hypometabolism only). The subregional glucose metabolic ratio (MR(gluc)), subregional DAT binding ratio (BR(DAT)), and intersubregional ratio (ISR(DAT); defined as the BR(DAT) ratio of one striatal subregion to that of another striatal subregion) were compared according to metabolic subtype. Of the 50 patients, 13 presented with MSA-P(m), 16 with MSA-mixed(m), and 21 with MSA-C(m). The BR(DAT) of all striatal subregions in the MSA-P(m) and MSA-mixed(m) groups was significantly lower than in the MSA-C(m) group. The posterior putamen/anterior putamen ISR(DAT) and anterior putamen/ventral striatum ISR(DAT) in the MSA-P(m) and MSA-mixed(m) groups were significantly lower than those in the MSA-C(m) group. Patients with MSA-P(m) and MSA-mixed(m) showed more severe DAT loss in the striatum than patients with MSA-C(m). Patients with MSA-C(m) had more diffuse DAT loss than patients with MSA-P(m) and MSA-mixed(m). (orig.)
Directory of Open Access Journals (Sweden)
Amanda Worker
Parkinson's disease (PD), multiple system atrophy (MSA) and progressive supranuclear palsy (PSP) are neurodegenerative diseases that can be difficult to distinguish clinically. The objective of the current study was to use surface-based analysis techniques to assess cortical thickness, surface area and grey matter volume to identify unique morphological patterns of cortical atrophy in PD, MSA and PSP, and to relate these patterns of change to disease duration and clinical features. High resolution 3D T1-weighted MRI volumes were acquired from 14 PD patients, 18 MSA, 14 PSP and 19 healthy control participants. Cortical thickness, surface area and volume analyses were carried out using the automated surface-based analysis package FreeSurfer (version 5.1.0). Measures of disease severity and duration were assessed for correlation with cortical morphometric changes in each clinical group. Results show that in PSP, widespread cortical thinning and volume loss occurs within the frontal lobe, particularly the superior frontal gyrus. In addition, PSP patients also displayed increased surface area in the pericalcarine. In comparison, PD and MSA did not display significant changes in cortical morphology. These results demonstrate that patients with clinically established PSP exhibit distinct patterns of cortical atrophy, particularly affecting the frontal lobe. These results could be used in the future to develop a useful clinical application of MRI to distinguish PSP patients from PD and MSA patients.
Noel, Jean; Prieto, Juan C.; Styner, Martin
2017-03-01
Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have an intermediate knowledge in scripting languages (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps including quality control of subjects and fibers in order to setup the necessary parameters to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. This interactive chart enhances the researcher experience and facilitates the analysis of the results. FADTTSter's motivation is to improve usability and provide a new analysis tool to the community that complements FADTTS. Ultimately, by enabling FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available in NITRC.
International Nuclear Information System (INIS)
Lima, Waldir C. de; Lainetti, Paulo E.O.; Lima, Roberto M. de; Peres, Henrique G.
1996-01-01
The purpose of this work is to study the introduction of statistical control for the tests and analyses performed in the Departamento de Tecnologia de Combustiveis. The following are succinctly introduced: the theory of statistical process control, the elaboration of control charts, the definition of standard tests (or analyses), and how the standards are employed to determine the control limits of the charts. The most significant result is the form applied in practical quality control; the use of a verification standard and analysis in the control laboratory is also exemplified. (author)
Signal processing and statistical analysis of spaced-based measurements
International Nuclear Information System (INIS)
Iranpour, K.
1996-05-01
The report deals with data obtained by the ROSE rocket project. This project was designed to investigate low-altitude auroral instabilities in the electrojet region. The spectral and statistical analyses indicate the existence of unstable waves in the ionized gas in the region. An experimentally obtained dispersion relation for these waves was established. It was demonstrated that the characteristic phase velocities are much lower than what is expected from the standard theoretical results. The analysis of the ROSE data indicates a cascading of energy from lower to higher frequencies. 44 refs., 54 figs
A Statistical Analysis of Cointegration for I(2) Variables
DEFF Research Database (Denmark)
Johansen, Søren
1995-01-01
be conducted using the χ² distribution. It is shown to what extent inference on the cointegration ranks can be conducted using the tables already prepared for the analysis of cointegration of I(1) variables. New tables are needed for the test statistics to control the size of the tests. This paper...... contains a multivariate test for the existence of I(2) variables. This test is illustrated using a data set consisting of U.K. and foreign prices and interest rates as well as the exchange rate....
Using R for Data Management, Statistical Analysis, and Graphics
Horton, Nicholas J
2010-01-01
This title offers quick and easy access to key elements of documentation. It includes worked examples across a wide variety of applications, tasks, and graphics. "Using R for Data Management, Statistical Analysis, and Graphics" presents an easy way to learn how to perform an analytical task in R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation and the vast number of add-on packages. Organized into short, clear descriptive entries, the book covers many common tasks, such as data management, descriptive summaries, inferential proc
Statistical approach to partial equilibrium analysis
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, named willingness price, is highlighted and constitutes the whole theory. The supply and demand functions are formulated as the distributions of corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of excess demand function are analyzed and the necessary conditions for the existence and uniqueness of equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
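The formulation above can be discretized into a small sketch: demand at price p counts the buyers whose willingness price is at least p, supply counts the sellers willing to accept at most p, and the equilibrium price minimizes the absolute excess demand. The function names and the grid search are illustrative assumptions, not the paper's notation.

```python
def equilibrium_price(buyers, sellers, grid):
    """Locate the market-clearing price from willingness-price lists.

    demand(p): buyers willing to pay at least p; supply(p): sellers
    willing to accept at most p. An illustrative discretization of the
    distribution-based formulation in the paper.
    """
    best_p, best_gap = None, float("inf")
    for p in grid:
        demand = sum(1 for b in buyers if b >= p)   # willing exchange, buy side
        supply = sum(1 for s in sellers if s <= p)  # willing exchange, sell side
        gap = abs(demand - supply)                  # absolute excess demand
        if gap < best_gap:
            best_p, best_gap = p, gap
    return best_p

# demo: buyers and sellers with uniformly spread willingness prices
buyers = [float(i) for i in range(1, 11)]
sellers = [float(i) for i in range(1, 11)]
grid = [i * 0.5 for i in range(2, 21)]  # candidate prices 1.0 .. 10.0
eq_price = equilibrium_price(buyers, sellers, grid)
```

Away from this price, the rationing rates in the abstract describe which side of the market fails to realize its willing exchange.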
Directions for new developments on statistical design and analysis of small population group trials.
Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel
2016-06-14
Most statistical design and analysis methods for clinical trials have been developed and evaluated in settings where at least several hundred patients could be recruited. These methods may not be suitable for evaluating therapies if the sample size is unavoidably small, a situation usually termed 'small populations'. The specific sample size cut-off where the standard methods fail needs to be investigated. In this paper, the authors present their view on new developments for the design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g., because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for the design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for the design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small
Multivariate Statistical Analysis of Water Quality data in Indian River Lagoon, Florida
Sayemuzzaman, M.; Ye, M.
2015-12-01
The Indian River Lagoon, part of the longest barrier island complex in the United States, is a region of particular concern to environmental scientists because of the rapid rate of human development throughout the region and its geographical position between the colder temperate zone and the warmer sub-tropical zone. Thus, surface water quality analysis in this region continually yields new information. In the present study, multivariate statistical procedures were applied to analyze the spatial and temporal water quality in the Indian River Lagoon over the period 1998-2013. Twelve parameters were analyzed at twelve key water monitoring stations in and beside the lagoon on monthly datasets (a total of 27,648 observations). The dataset was treated using cluster analysis (CA), principal component analysis (PCA) and non-parametric trend analysis. The CA was used to cluster the twelve monitoring stations into four groups, with stations with similar surrounding characteristics being in the same group. The PCA was then applied to these groups to find the important water quality parameters. The principal components (PCs), PC1 to PC5, were considered based on explained cumulative variances of 75% to 85% in each cluster group. Nutrient species (phosphorus and nitrogen), salinity, specific conductivity and erosion factors (TSS, turbidity) were the major variables involved in the construction of the PCs. Statistically significant positive or negative trends and abrupt trend shifts were detected by applying the Mann-Kendall trend test and Sequential Mann-Kendall (SQMK) test to each individual station for the important water quality parameters. Land use/land cover change patterns, local anthropogenic activities and extreme climate events such as drought might be associated with these trends. This study presents a multivariate statistical assessment in order to obtain better information about the quality of surface water. Thus, effective pollution control/management of the surface
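The Mann-Kendall trend test applied in the study can be sketched in a few lines. This minimal version omits the tie correction that real water quality series (which often contain repeated values) would need:

```python
import math

def mann_kendall(xs):
    """Mann-Kendall trend test (no tie correction): returns the S
    statistic and the normal-approximation Z score."""
    n = len(xs)
    # S counts concordant minus discordant pairs over all i < j
    S = sum((xs[j] > xs[i]) - (xs[j] < xs[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    # continuity-corrected normal approximation
    if S > 0:
        Z = (S - 1) / math.sqrt(var)
    elif S < 0:
        Z = (S + 1) / math.sqrt(var)
    else:
        Z = 0.0
    return S, Z
```

A |Z| above 1.96 indicates a trend significant at the 5% level; the Sequential Mann-Kendall variant applies this progressively along the series to locate abrupt trend shifts.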
Diversity of Babesia bovis merozoite surface antigen genes in the Philippines.
Tattiyapong, Muncharee; Sivakumar, Thillaiampalam; Ybanez, Adrian Patalinghug; Ybanez, Rochelle Haidee Daclan; Perez, Zandro Obligado; Guswanto, Azirwan; Igarashi, Ikuo; Yokoyama, Naoaki
2014-02-01
Babesia bovis is the causative agent of fatal babesiosis in cattle. In the present study, we investigated the genetic diversity of B. bovis among Philippine cattle, based on the genes that encode merozoite surface antigens (MSAs). Forty-one B. bovis-positive blood DNA samples from cattle were used to amplify the msa-1, msa-2b, and msa-2c genes. In phylogenetic analyses, the msa-1, msa-2b, and msa-2c gene sequences generated from Philippine B. bovis-positive DNA samples were found in six, three, and four different clades, respectively. All of the msa-1 and most of the msa-2b sequences were found in clades that were formed only by Philippine msa sequences in the respective phylograms. While all the msa-1 sequences from the Philippines showed similarity to those formed by Australian msa-1 sequences, the msa-2b sequences showed similarity to either Australian or Mexican msa-2b sequences. In contrast, msa-2c sequences from the Philippines were distributed across all the clades of the phylogram, although one clade was formed exclusively by Philippine msa-2c sequences. Similarities among the deduced amino acid sequences of MSA-1, MSA-2b, and MSA-2c from the Philippines were 62.2-100, 73.1-100, and 67.3-100%, respectively. The present findings demonstrate that B. bovis populations are genetically diverse in the Philippines. This information will provide a good foundation for the future design and implementation of improved immunological preventive methodologies against bovine babesiosis in the Philippines. The study has also generated a set of data that will be useful for further understanding of the global genetic diversity of this important parasite. © 2013.
Statistical Analysis of Data for Timber Strengths
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Hoffmeyer, P.
Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull...
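A common way to fit a 2-parameter Weibull distribution to strength data of this kind is regression on a Weibull probability plot: with empirical CDF F, ln(-ln(1-F)) is linear in ln(x), with slope equal to the shape k and intercept -k·ln(λ). A sketch on synthetic strengths (the paper's 6700-specimen dataset is not available here; the shape and scale below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
# synthetic bending strengths (MPa): Weibull with shape 4, scale 40
k_true, lam_true = 4.0, 40.0
x = lam_true * rng.weibull(k_true, size=2000)

# Weibull probability plot: median-rank plotting positions for F,
# then a straight-line fit in (ln x, ln(-ln(1-F))) coordinates
xs = np.sort(x)
n = len(xs)
F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # Benard's median ranks
slope, intercept = np.polyfit(np.log(xs), np.log(-np.log(1 - F)), 1)
k_hat = slope                       # estimated shape parameter
lam_hat = np.exp(-intercept / slope)  # estimated scale parameter
```

Maximum-likelihood fitting is the usual alternative for reliability work; the probability-plot regression shown here is simpler and also provides a visual check of the distributional assumption.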
Riley, Richard D.
2017-01-01
An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
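The leave-one-out idea behind the validation statistic can be sketched generically: pool all studies but one with a random-effects model, then standardise the held-out effect against the pooled estimate's predictive distribution. This is only an illustration of the cross-validation principle with DerSimonian-Laird pooling and invented toy effects, not the paper's exact Vn statistic:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2.
    y: study effect estimates, v: their within-study variances."""
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)          # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    return mu, 1.0 / np.sum(w_star), tau2

def loo_validation(y, v):
    """For each study i: pool the remaining studies, then standardise
    the held-out effect against the leave-one-out prediction."""
    z = []
    for i in range(len(y)):
        mask = np.ones(len(y), dtype=bool)
        mask[i] = False
        mu, var_mu, tau2 = dersimonian_laird(y[mask], v[mask])
        # predictive variance for a new study: sampling variance
        # + between-study variance + pooled-estimate variance
        z.append((y[i] - mu) / np.sqrt(v[i] + tau2 + var_mu))
    return np.array(z)

# invented toy data: 6 study effects (e.g. log odds ratios) and variances
y = np.array([0.10, 0.25, -0.05, 0.30, 0.15, 0.20])
v = np.array([0.04, 0.05, 0.03, 0.06, 0.04, 0.05])
z = loo_validation(y, v)   # near-zero z-scores suggest validity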
Modular reweighting software for statistical mechanical analysis of biased equilibrium data
Sindhikara, Daniel J.
2012-07-01
Here a simple, useful, modular approach and software suite designed for statistical reweighting and analysis of equilibrium ensembles is presented. Statistical reweighting is useful and sometimes necessary for analysis of equilibrium enhanced sampling methods, such as umbrella sampling or replica exchange, and also in experimental cases where biasing factors are explicitly known. Essentially, statistical reweighting allows extrapolation of data from one or more equilibrium ensembles to another. Here, the fundamental separable steps of statistical reweighting are broken up into modules - allowing for application to the general case and avoiding the black-box nature of some “all-inclusive” reweighting programs. Additionally, the programs included are, by design, written with few dependencies. The compilers required are either pre-installed on most systems, or freely available for download with minimal trouble. Examples of the use of this suite applied to umbrella sampling and replica exchange molecular dynamics simulations will be shown along with advice on how to apply it in the general case. New version program summary: Program title: Modular reweighting version 2 Catalogue identifier: AEJH_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJH_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 179 118 No. of bytes in distributed program, including test data, etc.: 8 518 178 Distribution format: tar.gz Programming language: C++, Python 2.6+, Perl 5+ Computer: Any Operating system: Any RAM: 50-500 MB Supplementary material: An updated version of the original manuscript (Comput. Phys. Commun. 182 (2011) 2227) is available Classification: 4.13 Catalogue identifier of previous version: AEJH_v1_0 Journal reference of previous version: Comput. Phys. Commun. 182 (2011) 2227 Does the new
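The core reweighting idea can be illustrated independently of the suite itself: samples drawn under a known bias are reassigned weights exp(+βU_bias) so that averages in the unbiased ensemble can be recovered. The following self-contained sketch (not the suite's code) uses an assumed harmonic umbrella bias on a Gaussian target, chosen so the biased ensemble can be sampled exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0            # 1/kT in reduced units
k, x0 = 0.5, 0.5      # harmonic umbrella bias parameters (assumed)

# Unbiased target: U(x) = x^2/2 at beta = 1, i.e. a standard normal.
# Biased ensemble: U(x) + 0.5*k*(x - x0)^2, again Gaussian with
# precision (1 + k) and mean k*x0/(1 + k), so we sample it directly.
prec = 1.0 + k
mean_b = k * x0 / prec
samples = rng.normal(mean_b, np.sqrt(1.0 / prec), size=200_000)

# Reweighting: each biased sample gets weight exp(+beta*U_bias(x)),
# undoing the bias; unbiased averages are then weighted means.
log_w = beta * 0.5 * k * (samples - x0) ** 2
log_w -= log_w.max()                 # shift for numerical stability
w = np.exp(log_w)
mean_unbiased = np.sum(w * samples) / np.sum(w)   # recovers ~0
```

The raw sample mean sits near k·x0/(1+k), while the reweighted mean recovers the unbiased value; multi-window methods such as WHAM combine many such biased ensembles, which is where a modular pipeline becomes valuable.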
A Statistic Analysis Of Romanian Seaside Hydro Tourism
Secara Mirela
2011-01-01
Tourism represents one of the ways of spending spare time for rest, recreation, treatment and entertainment, and the specific aspect of Constanta County economy is the touristic and spa capitalization of the Romanian seaside. In order to analyze hydro tourism on the Romanian seaside we have used statistical indicators within tourism as well as statistical methods such as chronological series, interdependent statistical series, regression and statistical correlation. The major objective of this research is to rai...
Statistical Analysis of Human Body Movement and Group Interactions in Response to Music
Desmet, Frank; Leman, Marc; Lesaffre, Micheline; de Bruyn, Leen
Quantification of time series that relate to physiological data is challenging for empirical music research. Up to now, most studies have focused on time-dependent responses of individual subjects in controlled environments. However, little is known about time-dependent responses of between-subject interactions in an ecological context. This paper provides new findings on the statistical analysis of group synchronicity in response to musical stimuli. Different statistical techniques were applied to time-dependent data obtained from an experiment on embodied listening in individual and group settings. Analyses of inter-group synchronicity are described. Dynamic Time Warping (DTW) and the Cross Correlation Function (CCF) were found to be valid methods to estimate group coherence of the resulting movements. It was found that synchronicity of movements between individuals (human-human interactions) increases significantly in the social context. Moreover, Analysis of Variance (ANOVA) revealed that the type of music is the predominant factor in both the individual and the social context.
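Of the two synchrony measures named above, the cross-correlation function is the simpler to sketch: standardise two movement signals and scan the lagged correlation for its peak, whose height indicates coherence and whose lag indicates who leads. A toy illustration on invented signals (not the experiment's motion-capture data):

```python
import numpy as np

def max_crosscorr(a, b, max_lag):
    """Normalised cross-correlation of two time series, scanned over
    integer lags; returns (best_lag, peak_correlation)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    best = (0, -np.inf)
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:], b[:len(b) - lag]
        else:
            x, y = a[:len(a) + lag], b[-lag:]
        r = np.mean(x * y)           # Pearson r of the overlap
        if r > best[1]:
            best = (lag, r)
    return best

# toy movement signals: b is a copy of a delayed by 5 samples, plus noise
rng = np.random.default_rng(1)
t = np.arange(400)
a = np.sin(2 * np.pi * t / 50) + 0.1 * rng.normal(size=400)
b = np.roll(a, 5) + 0.1 * rng.normal(size=400)
lag, r = max_crosscorr(a, b, max_lag=20)
# peak at lag -5: shifting a back 5 samples aligns it with the later b
```

DTW, the other technique mentioned, additionally allows non-uniform time warping between the two series, which suits movement data where the delay is not constant.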
Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation
Rajiv D. Banker
1993-01-01
This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the…
Critical analysis of adsorption data statistically
Kaushal, Achla; Singh, S. K.
2017-10-01
Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of the experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ2 showed the results in favour of the data collected from the experiment, and this has been shown on probability charts. The K value obtained for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725, both for adsorption on mango leaf powder.
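The paired t test used in studies of this kind compares matched before/after measurements on the same samples. A minimal sketch with invented zinc concentrations (not the paper's data) and a textbook critical value:

```python
import numpy as np

def paired_t(before, after):
    """Paired t statistic for matched observations
    (e.g. metal-ion concentration before/after adsorption)."""
    d = np.asarray(before, float) - np.asarray(after, float)
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1          # statistic and degrees of freedom

# illustrative zinc concentrations (mg/L) before and after treatment;
# the numbers are made up for this sketch
before = [25.1, 24.8, 25.5, 24.9, 25.3, 25.0, 24.7, 25.2]
after  = [ 6.2,  5.9,  6.8,  6.1,  6.5,  6.0,  5.7,  6.3]
t, df = paired_t(before, after)
T_CRIT = 2.365               # two-sided 5% critical value for df = 7
significant = abs(t) > T_CRIT
```

Comparing the calculated t with the tabulated critical value, as the abstract describes, is equivalent to checking whether the p-value falls below the chosen significance level.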
Application of Statistical Tools for Data Analysis and Interpretation in Rice Plant Pathology
Directory of Open Access Journals (Sweden)
Parsuram Nayak
2018-01-01
There has been a significant advancement in the application of statistical tools in plant pathology during the past four decades. These tools include multivariate analysis of disease dynamics involving principal component analysis, cluster analysis, factor analysis, pattern analysis, discriminant analysis, multivariate analysis of variance, correspondence analysis, canonical correlation analysis, redundancy analysis, genetic diversity analysis, and stability analysis, which involve joint regression, additive main effects and multiplicative interactions, and genotype-by-environment interaction biplot analysis. The advanced statistical tools, such as non-parametric analysis of disease association, meta-analysis, Bayesian analysis, and decision theory, take an important place in analysis of disease dynamics. Disease forecasting methods using simulation models for plant diseases have great potential in practical disease control strategies. Common mathematical tools such as monomolecular, exponential, logistic, Gompertz and linked differential equations take an important place in growth curve analysis of disease epidemics. The highly informative means of displaying a range of numerical data through construction of box and whisker plots has been suggested. The probable applications of recent advanced tools of linear and non-linear mixed models like the linear mixed model, generalized linear model, and generalized linear mixed models have been presented. The most recent technologies such as micro-array analysis, though cost effective, provide estimates of gene expressions for thousands of genes simultaneously and need attention by the molecular biologists. Some of these advanced tools can be well applied in different branches of rice research, including crop improvement, crop production, crop protection, social sciences as well as agricultural engineering. The rice research scientists should take advantage of these new opportunities adequately in
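Among the growth-curve models listed above, the logistic is the classic choice for polycyclic epidemics: since logit(y) is linear in time under the logistic model, the apparent infection rate r can be estimated by a straight-line fit to logit-transformed severities. A sketch with illustrative, invented assessment data:

```python
import numpy as np

# disease severity y(t) as a proportion, at assessment times t (days);
# these values are invented for illustration
t = np.array([10, 20, 30, 40, 50, 60], dtype=float)
y = np.array([0.02, 0.08, 0.28, 0.60, 0.85, 0.96])

# Logistic model dy/dt = r*y*(1 - y)  =>  logit(y) = logit(y0) + r*t,
# so ordinary least squares on the logit scale recovers r.
logit = np.log(y / (1 - y))
r, intercept = np.polyfit(t, logit, 1)   # slope = apparent infection rate
y0 = 1 / (1 + np.exp(-intercept))        # fitted severity at t = 0

def logistic(t):
    """Fitted logistic disease-progress curve."""
    return 1 / (1 + np.exp(-(intercept + r * t)))
```

The monomolecular and Gompertz models mentioned in the abstract linearise the same way under their own transformations (log(1/(1-y)) and -log(-log y), respectively), so the model whose transform gives the straightest plot is often preferred.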
Aspects of statistical consulting not taught by academia
DEFF Research Database (Denmark)
Kenett, R.; Thyregod, Poul
2006-01-01
Education in statistics is preparing for statistical analysis but not necessarily for statistical consulting. The objective of this paper is to explore the phases that precede and follow statistical analysis. Specifically these include: problem elicitation, data collection and, following statistical data analysis, formulation of findings, presentation of findings, and recommendations. Some insights derived from a literature review and real-life case studies are provided. Areas for joint research by statisticians and cognitive scientists are outlined.
SOCR: Statistics Online Computational Resource
Directory of Open Access Journals (Sweden)
Ivo D. Dinov
2006-10-01
The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.
Green, Jeffrey J.; Stone, Courtenay C.; Zegeye, Abera; Charles, Thomas A.
2009-01-01
Because statistical analysis requires the ability to use mathematics, students typically are required to take one or more prerequisite math courses prior to enrolling in the business statistics course. Despite these math prerequisites, however, many students find it difficult to learn business statistics. In this study, we use an ordered probit…