The statistical analysis of single-subject data: a comparative examination.
Nourbakhsh, M R; Ottenbacher, K J
1994-08-01
The purposes of this study were to examine whether the use of three different statistical methods for analyzing single-subject data led to similar results and to identify components of graphed data that influence agreement (or disagreement) among the statistical procedures. Forty-two graphs containing single-subject data were examined. Twenty-one were AB charts of hypothetical data. The other 21 graphs appeared in Journal of Applied Behavior Analysis, Physical Therapy, Journal of the Association for Persons With Severe Handicaps, and Journal of Behavior Therapy and Experimental Psychiatry. Three different statistical tests--the C statistic, the two-standard deviation band method, and the split-middle method of trend estimation--were used to analyze the 42 graphs. A relatively low degree of agreement (38%) was found among the three statistical tests. The highest rate of agreement for any two statistical procedures (71%) was found for the two-standard deviation band method and the C statistic. A logistic regression analysis revealed that overlap in single-subject graphed data was the best predictor of disagreement among the three statistical tests (beta = .49, P < .03). The results indicate that interpretation of data from single-subject research designs is directly influenced by the method of data analysis selected. Variation exists across both visual and statistical methods of data reduction. The advantages and disadvantages of statistical and visual analysis are described.
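Of the three tests compared above, the two-standard deviation band method is the simplest to reproduce. The sketch below is a minimal Python illustration on hypothetical AB data; the data values and the two-consecutive-points decision rule are assumptions for the example, not taken from the study.

```python
import statistics

def two_sd_band(baseline, treatment, min_points=2):
    """Two-standard-deviation band method for AB single-subject data.

    Flags a treatment effect when at least `min_points` consecutive
    treatment-phase observations fall outside the band defined by the
    baseline mean +/- 2 standard deviations.
    """
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    lower, upper = mean - 2 * sd, mean + 2 * sd
    run = 0
    for y in treatment:
        run = run + 1 if (y < lower or y > upper) else 0
        if run >= min_points:
            return True
    return False

# Hypothetical AB data: baseline (A) phase, then intervention (B) phase
A = [12, 14, 13, 15, 14, 13]
B = [18, 19, 21, 20, 22, 21]
print(two_sd_band(A, B))  # True: consecutive B points exceed the band
```

The band here is computed from the baseline phase only, which is the usual convention for this method.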
Energy Technology Data Exchange (ETDEWEB)
Jussila, Vilho [VTT Technical Research Centre of Finland Ltd, Kemistintie 3, 02230 Espoo (Finland); Li, Yue [Dept. of Civil Engineering, Case Western Reserve University, Cleveland, OH 44106 (United States); Fülöp, Ludovic, E-mail: ludovic.fulop@vtt.fi [VTT Technical Research Centre of Finland Ltd, Kemistintie 3, 02230 Espoo (Finland)
2016-12-01
Highlights: • Floor flexibility plays a non-negligible role in amplifying horizontal vibrations. • COV of in-floor horizontal and vertical acceleration are 0.15–0.25 and 0.25–0.55, respectively. • In-floor variation of vibrations is higher in lower floors. • Floor spectra from limited nodes underestimate vibrations by a factor of 1.5–1.75. - Abstract: Floor vibration of a reactor building subjected to seismic loads was investigated, with the aim of quantifying the variability of vibrations on each floor. A detailed 3D building model founded on the bedrock was excited simultaneously in three directions by artificial accelerograms compatible with Finnish ground response spectra. Dynamic simulation for 21 s was carried out using explicit time integration. The extracted results of the simulation were accelerations at several floor locations, transformed to pseudo-acceleration (PSA) spectra in the next stage. At first, the monitored locations on the floors were chosen by engineering judgement in order to arrive at a feasible number of floor nodes for post-processing of the data. It became apparent that engineering judgement was insufficient to pinpoint the key locations with high floor vibrations, which resulted in unconservative vibration estimates. For this reason, a more systematic approach was later adopted, in which floor nodes were selected on a more refined grid of 2 m. With this method, in addition to the highest PSA peaks in all directions, the full vibration distribution on each floor can be determined. A statistical evaluation of the floor responses was also carried out in order to define floor accelerations and PSAs with high confidence of non-exceedance. The conclusion was that in-floor variability can be as high as 50–60%, and models with sufficiently dense node grids should be used in order to achieve a realistic estimate of floor vibration under seismic action. The effects of the shape of the input spectra, damping, and flexibility of the
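The in-floor variability quoted in the highlights (COV of 0.15–0.25 for horizontal acceleration) is a plain coefficient of variation across the monitored nodes of a floor. A minimal sketch, with entirely hypothetical node accelerations:

```python
import math

def cov(values):
    """Coefficient of variation (sample std / mean) of peak
    accelerations sampled at several nodes of one floor."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(var) / mean

# Hypothetical peak horizontal accelerations (m/s^2) at nodes of one floor
peaks = [2.1, 2.4, 1.9, 2.8, 3.1, 2.2, 2.6, 2.0]
print(round(cov(peaks), 2))  # ≈ 0.18, i.e. within the 0.15–0.25 range cited
```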
Statistical Analysis for Subjective and Objective Evaluations of Dental Drill Sounds.
Directory of Open Access Journals (Sweden)
Tomomi Yamada
Full Text Available The sound produced by a dental air turbine handpiece (dental drill can markedly influence the sound environment in a dental clinic. Indeed, many patients report that the sound of a dental drill elicits an unpleasant feeling. Although several manufacturers have attempted to reduce the sound pressure levels produced by dental drills during idling based on ISO 14457, the sound emitted by such drills under active drilling conditions may negatively influence the dental clinic sound environment. The physical metrics related to the unpleasant impressions associated with dental drill sounds have not been determined. In the present study, psychological measurements of dental drill sounds were conducted with the aim of facilitating improvement of the sound environment at dental clinics. Specifically, we examined the impressions elicited by the sounds of 12 types of dental drills in idling and drilling conditions using a semantic differential. The analysis revealed that the impressions of dental drill sounds varied considerably between idling and drilling conditions and among the examined drills. This finding suggests that measuring the sound of a dental drill in idling conditions alone may be insufficient for evaluating the effects of the sound. We related the results of the psychological evaluations to those of measurements of the physical metrics of equivalent continuous A-weighted sound pressure level (LAeq and sharpness. Factor analysis indicated that impressions of the dental drill sounds consisted of two factors: "metallic and unpleasant" and "powerful". LAeq had a strong relationship with "powerful impression", calculated sharpness was positively related to "metallic impression", and "unpleasant impression" was predicted by the combination of both LAeq and calculated sharpness. The present analyses indicate that, in addition to a reduction in sound pressure level, refining the frequency components of dental drill sounds is important for creating a
Tenan, Matthew S
2016-01-01
Indirect calorimetry and oxygen consumption (VO2) are accepted tools in human physiology research. It has been shown that indirect calorimetry systems exhibit differential measurement error, where the error of a device is systematically different depending on the volume of gas flow. Moreover, systems commonly report multiple decimal places of precision, giving the clinician a false sense of device accuracy. The purpose of this manuscript is to demonstrate the use of a novel statistical tool which models the reliability of two specific indirect calorimetry systems, Douglas bag and Parvomedics 2400 TrueOne, as univariate normal distributions and implements the distribution overlapping coefficient to determine the likelihood that two VO2 measures are the same. A command line implementation of the tool is available for the R programming language as well as a web-based graphical user interface (GUI). This tool is valuable for clinicians performing a single-subject analysis as well as researchers interested in determining if their observed differences exceed the error of the device.
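The distribution overlapping coefficient at the core of the tool described here is the shared area under two normal densities. A rough Python sketch using numeric Riemann integration; the VO2 means and error SDs below are made-up illustration values, not the paper's device data:

```python
import math

def npdf(x, mu, sd):
    """Normal probability density function."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def overlap_coefficient(mu1, sd1, mu2, sd2, steps=20000):
    """Overlapping coefficient (OVL) of two univariate normal densities:
    the area under min(f1, f2). 1.0 means identical distributions,
    0.0 means no overlap."""
    lo = min(mu1 - 6 * sd1, mu2 - 6 * sd2)
    hi = max(mu1 + 6 * sd1, mu2 + 6 * sd2)
    dx = (hi - lo) / steps
    return sum(min(npdf(lo + i * dx, mu1, sd1),
                   npdf(lo + i * dx, mu2, sd2)) * dx for i in range(steps))

# Two hypothetical VO2 readings (L/min), each modeled with its device-error SD
print(round(overlap_coefficient(3.20, 0.10, 3.30, 0.10), 3))  # ≈ 0.617
```

A high OVL says the two readings are plausibly the same measurement once device error is taken into account; a low OVL suggests a real difference.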
Mathematical and statistical analysis
Houston, A. Glen
1988-01-01
The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.
Applied multivariate statistical analysis
Härdle, Wolfgang Karl
2015-01-01
Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners. It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added. All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior. All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...
Statistical data analysis handbook
National Research Council Canada - National Science Library
Wall, Francis J
1986-01-01
It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...
Wijk, E.P.A. van; Wijk, R.V.; Bajpai, R.P.; Greef, J. van der
2010-01-01
Photon signals emitted spontaneously from dorsal and palm sides of both hands were recorded using 6000 time windows of size T=50 ms in 50 healthy human subjects. These photon signals demonstrated universal behaviour by variance and mean. The data sets for larger time windows up to T=50 s were
Mason, Lee L.
2010-01-01
Effect sizes for single-subject research were examined to determine to what extent they measure similar aspects of the effects of the treatment. Seventy-five articles on the reduction of problem behavior in children with autism were recharted on standard celeration charts. Pearson product-moment correlations were then conducted between two…
Lange, Catharina; Suppa, Per; Frings, Lars; Brenner, Winfried; Spies, Lothar; Buchert, Ralph
2016-01-01
Positron emission tomography (PET) with the glucose analog F-18-fluorodeoxyglucose (FDG) is widely used in the diagnosis of neurodegenerative diseases. Guidelines recommend voxel-based statistical testing to support visual evaluation of the PET images. However, the performance of voxel-based testing strongly depends on each single preprocessing step involved. The aim was to optimize the processing pipeline of voxel-based testing for the prognosis of dementia in subjects with amnestic mild cognitive impairment (MCI). The study included 108 ADNI MCI subjects grouped as 'stable MCI' (n = 77) or 'MCI-to-AD converter' according to their diagnostic trajectory over 3 years. Thirty-two ADNI normal subjects served as controls. Voxel-based testing was performed with the statistical parametric mapping software (SPM8) starting with default settings. The following modifications were added step-by-step: (i) motion correction, (ii) custom-made FDG template, (iii) different reference regions for intensity scaling, and (iv) smoothing was varied between 8 and 18 mm. The t-sum score for hypometabolism within a predefined AD mask was compared between the different settings using receiver operating characteristic (ROC) analysis with respect to differentiation between 'stable MCI' and 'MCI-to-AD converter'. The area (AUC) under the ROC curve was used as performance measure. The default setting provided an AUC of 0.728. The modifications of the processing pipeline improved the AUC up to 0.832 (p = 0.046). Improvement of the AUC was confirmed in an independent validation sample of 241 ADNI MCI subjects (p = 0.048). The prognostic value of voxel-based single subject analysis of brain FDG PET in MCI subjects can be improved considerably by optimizing the processing pipeline.
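The AUC used as the performance measure here can be computed directly from the two groups' scores via the Mann-Whitney rank formulation, without tracing the ROC curve explicitly. A small sketch with hypothetical t-sum scores (the real study's scores are not reproduced here):

```python
def auc(neg, pos):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical hypometabolism t-sum scores for the two groups
stable = [10, 12, 9, 14, 11]       # 'stable MCI'
converter = [15, 13, 18, 16]       # 'MCI-to-AD converter'
print(auc(stable, converter))      # → 0.95
```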
Wenzel, Fabian; Young, Stewart; Wilke, Florian; Apostolova, Ivayla; Arlt, Sönke; Jahn, Holger; Thiele, Frank; Buchert, Ralph
2010-04-15
A b-spline-based method 'Lobster', originally designed as a general technique for non-linear image registration, was tailored for stereotactical normalization of brain FDG PET scans. Lobster was compared with the normalization methods of SPM2 and Neurostat with respect to the impact on the accuracy of voxel-based statistical analysis. (i) Computer simulation: Seven representative patterns of cortical hypometabolism served as artificial ground truth. They were inserted into 26 normal control scans with different simulated severity levels. After stereotactical normalization and voxel-based testing, statistical maps were compared voxel-by-voxel with the ground truth. This was done at different levels of statistical significance. There was a highly significant effect of the stereotactical normalization method on the area under the resulting ROC curve. Lobster showed the best average performance and was most stable with respect to variation of the severity level. (ii) Clinical evaluation: Statistical maps were obtained for the normal controls as well as patients with Alzheimer's disease (AD, n=44), Lewy-Body disease (LBD, 9), fronto-temporal dementia (FTD, 13), and cortico-basal dementia (CBD, 4). These maps were classified as normal, AD, LBD, FTD, or CBD by two experienced readers. The stereotactical normalization method had no significant effect on classification by each of the experts, but it appeared to affect agreement between the experts. In conclusion, Lobster is appropriate for use in single-subject analysis of brain FDG PET scans in suspected dementia, both in early diagnosis (mild hypometabolism) and in differential diagnosis in advanced disease stages (moderate to severe hypometabolism). The computer simulation framework developed in the present study appears appropriate for quantitative evaluation of the impact of the different processing steps and their interaction on the performance of voxel-based single-subject analysis.
Per Object statistical analysis
DEFF Research Database (Denmark)
2008-01-01
This RS code is to do Object-by-Object analysis of each Object's sub-objects, e.g. statistical analysis of an object's individual image data pixels. Statistics, such as percentiles (so-called "quartiles"), are derived by the process, but the return of that can only be a Scene Variable, not an Object Variable. This procedure was developed in order to be able to export objects as ESRI shape data with the 90-percentile of the Hue of each object's pixels as an item in the shape attribute table. The procedure uses a sub-level single-pixel chessboard segmentation and loops for each of the objects, enabling an analysis of the values of the object's pixels in MS-Excel. The shell of the procedure could also be used for purposes other than just the derivation of Object - Sub-object statistics, e.g. rule-based assignment processes.
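The core derivation (a per-object 90th percentile of pixel Hue) can be sketched outside the RS environment, with NumPy arrays standing in for the object hierarchy. The pixel values and object labels below are invented:

```python
import numpy as np

def per_object_percentile(values, labels, q=90):
    """For each labelled object, return the q-th percentile of its
    pixel values (here standing in for the Hue of the object's pixels)."""
    return {int(lab): float(np.percentile(values[labels == lab], q))
            for lab in np.unique(labels)}

# Hypothetical flattened image: one hue value and one object label per pixel
hues = np.array([10, 20, 30, 40, 100, 110, 120, 130])
labels = np.array([1, 1, 1, 1, 2, 2, 2, 2])
print(per_object_percentile(hues, labels))
```

Each dictionary entry corresponds to one row of the exported shape attribute table in the described workflow.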
Beginning statistics with data analysis
Mosteller, Frederick; Rourke, Robert EK
2013-01-01
This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.
Handbook of statistical methods: single subject design
Satake, Eiki; Maxwell, David L
2008-01-01
This book is a practical guide to the most commonly used approaches for analyzing and interpreting single-subject data. It arranges the methodologies in a logical sequence, using an array of research studies from the published literature to illustrate specific applications. For each approach, such as the visual, inferential, and probabilistic models, the book provides a brief discussion, the applications for which it is intended, and a step-by-step illustration of the test as used in an actual research study.
Associative Analysis in Statistics
Directory of Open Access Journals (Sweden)
Mihaela Muntean
2015-03-01
Full Text Available In recent years, interest in technologies such as in-memory analytics and associative search has increased. This paper explores how in-memory analytics and an associative model can be used in statistics. The word “associative” puts the emphasis on understanding how datasets relate to one another. The paper presents the main characteristics of the associative data model and shows how to design an associative model for the analysis of labor market indicators, with the EU Labor Force Survey as the data source, and how to perform associative analysis.
Energy Technology Data Exchange (ETDEWEB)
Kim, In-Ju; Kim, Seong-Jang; Kim, Yong-Ki (Dept. of Nuclear Medicine, Pusan National Univ. Hospital, Busan (Korea); Medical Research Institute, Pusan National Univ., Busan (Korea)), e-mail: growthkim@daum.net, growthkim@pusan.ac.kr
2009-12-15
Background: The age- and sex-associated changes of brain development are unclear and controversial. Several previous studies showed conflicting results of a specific pattern of cerebral glucose metabolism or no differences of cerebral glucose metabolism in association with normal aging process and sex. Purpose: To investigate the effects of age and sex on changes in cerebral glucose metabolism in healthy subjects using fluorine-18 fluorodeoxyglucose (F-18 FDG) brain positron emission tomography (PET) and statistical parametric mapping (SPM) analysis. Material and Methods: Seventy-eight healthy subjects (32 males, mean age 46.6±18.2 years; 46 females, mean age 40.6±19.8 years) underwent F-18 FDG brain PET. Using SPM, age- and sex-associated changes in cerebral glucose metabolism were investigated. Results: In males, a negative correlation existed in several gray matter areas, including the right temporopolar (Brodmann area [BA] 38), right orbitofrontal (BA 47), left orbitofrontal gyrus (BA 10), left dorsolateral frontal gyrus (BA 8), and left insula (BA 13) areas. A positive relationship existed in the left claustrum and left thalamus. In females, negative changes existed in the left caudate body, left temporopolar area (BA 38), right orbitofrontal gyri (BA 47 and BA 10), and right dorsolateral prefrontal cortex (BA 46). A positive association was demonstrated in the left subthalamic nucleus and the left superior frontal gyrus. In white matter, an age-associated decrease in FDG uptake in males was shown in the left insula, and increased FDG uptake was found in the left corpus callosum. The female group had an age-associated negative correlation of FDG uptake only in the right corpus callosum. Conclusion: Using SPM, we found not only brain areas common to both sexes, but also sex-specific cerebral areas, showing age-associated changes of FDG uptake.
Statistical methods for astronomical data analysis
Chattopadhyay, Asis Kumar
2014-01-01
This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomena will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...
DEFF Research Database (Denmark)
Ris Hansen, Inge; Søgaard, Karen; Gram, Bibi
2015-01-01
This is the analysis plan for the multicentre randomised controlled study examining the effect of training and exercise in patients with chronic neck pain, conducted in Jutland and Funen, Denmark. The plan will be used as a work description for the analyses of the data collected.
Research design and statistical analysis
Myers, Jerome L; Lorch Jr, Robert F
2013-01-01
Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data. The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations. Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions. Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations
Kinnear, John; Jackson, Ruth
2017-07-01
Although physicians are highly trained in the application of evidence-based medicine, and are assumed to make rational decisions, there is evidence that their decision making is prone to biases. One of the biases shown to affect the accuracy of judgements is representativeness and base-rate neglect, where the saliency of a person's features leads to overestimation of their likelihood of belonging to a group. This results in the substitution of 'subjective' probability for statistical probability. This study examines clinicians' propensity to make estimations of subjective probability when presented with clinical information that is considered typical of a medical condition. The strength of the representativeness bias is tested by presenting choices in textual and graphic form. Understanding of statistical probability is also tested by omitting all clinical information. For the questions that included clinical information, 46.7% and 45.5% of clinicians made judgements of statistical probability for the textual and graphic forms, respectively. Where the question omitted clinical information, 79.9% of clinicians made a judgement consistent with statistical probability. There was a statistically significant difference in responses to the questions with and without representativeness information (χ2 (1, n=254)=54.45, p<0.001), with clinicians more often substituting subjective for statistical probability when clinical information was present. One of the causes of this representativeness bias may be the way clinical medicine is taught, where stereotypic presentations are emphasised in diagnostic decision making.
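The substitution of subjective for statistical probability described here is classic base-rate neglect, which a direct Bayes' theorem calculation makes concrete. A sketch with hypothetical numbers (not the study's vignettes): a rare condition with a 1% base rate, a presentation seen in 80% of true cases but also in 10% of people without the condition.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | typical presentation) by Bayes' theorem, showing
    how a low base rate keeps the posterior low even when the clinical
    picture looks 'typical'."""
    p_pos = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_pos

print(round(posterior(0.01, 0.80, 0.10), 3))  # → 0.075 despite the 'typical' picture
```

A clinician relying on representativeness would intuitively estimate something near 80%; the statistically correct answer is about 7.5%.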
Statistical methods for bioimpedance analysis
Directory of Open Access Journals (Sweden)
Christian Tronstad
2014-04-01
Full Text Available This paper gives a basic overview of relevant statistical methods for the analysis of bioimpedance measurements, with an aim to answer questions such as: How do I begin with planning an experiment? How many measurements do I need to take? How do I deal with large amounts of frequency sweep data? Which statistical test should I use, and how do I validate my results? Beginning with the hypothesis and the research design, the methodological framework for making inferences based on measurements and statistical analysis is explained. This is followed by a brief discussion on correlated measurements and data reduction before an overview is given of statistical methods for comparison of groups, factor analysis, association, regression and prediction, explained in the context of bioimpedance research. The last chapter is dedicated to the validation of a new method by different measures of performance. A flowchart is presented for selection of statistical method, and a table is given for an overview of the most important terms of performance when evaluating new measurement technology.
Pyrotechnic Shock Analysis Using Statistical Energy Analysis
2015-10-23
29th Aerospace Testing Seminar, October 2015 Pyrotechnic Shock Analysis Using Statistical Energy Analysis James Ho-Jin Hwang Engineering...maximum structural response due to a pyrotechnic shock input using Statistical Energy Analysis (SEA). It had been previously understood that since the...pyrotechnic shock is not a steady state event, traditional SEA method may not applicable. A new analysis methodology effectively utilizes the
Regularized Statistical Analysis of Anatomy
DEFF Research Database (Denmark)
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus cal...
Bayesian Inference in Statistical Analysis
Box, George E P
2011-01-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob
Statistical analysis of survival data.
Crowley, J; Breslow, N
1984-01-01
A general review of the statistical techniques that the authors feel are most important in the analysis of survival data is presented. The emphasis is on the study of the duration of time between any two events as applied to people and on the nonparametric and semiparametric models most often used in these settings. The unifying concept is the hazard function, variously known as the risk, the force of mortality, or the force of transition.
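A standard nonparametric starting point for duration data of the kind reviewed here is the Kaplan-Meier estimator, which is closely tied to the hazard function the authors take as the unifying concept. A minimal sketch; the durations and censoring flags below are hypothetical:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimates. events[i] is 1 if the i-th
    duration ended in the event, 0 if censored. Returns (time, S(t))
    pairs at each observed event time."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # subjects leaving at t
        if d > 0:
            s *= 1 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= m
        i += m
    return curve

# Hypothetical durations (months); 0 marks a censored observation
times = [3, 5, 5, 8, 10, 12]
events = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(times, events))
```

Each factor 1 - d/n is one minus the estimated hazard at that event time, which is exactly the hazard-function view of survival the abstract emphasizes.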
Statistical Analysis of Iberian Peninsula Megaliths Orientations
González-García, A. C.
2009-08-01
Megalithic monuments have been intensively surveyed and studied from the archaeoastronomical point of view in the past decades. We have orientation measurements for over one thousand megalithic burial monuments in the Iberian Peninsula, from several different periods. These orientations, however, are still not well understood. A way to classify and begin to understand them is through statistical analysis of the data. A first attempt is made with simple statistical variables and a straightforward comparison between the different areas. To minimise subjectivity in the process, a further, more elaborate analysis is then performed. Some interesting results linking orientation and geographical location will be presented. Finally, I will present some models comparing the orientation of the megaliths in the Iberian Peninsula with the rising of the sun and the moon at several times of the year.
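One common first statistical step for orientation data of this kind is a test of circular uniformity; the Rayleigh test is a standard choice in circular statistics (not necessarily the one used in this study). A sketch with invented azimuths, loosely clustered toward sunrise:

```python
import math

def rayleigh_test(azimuths_deg):
    """Rayleigh test for uniformity of circular data. Returns the mean
    resultant length R-bar and an approximate p-value; a small p
    suggests the orientations cluster around a preferred direction."""
    n = len(azimuths_deg)
    c = sum(math.cos(math.radians(a)) for a in azimuths_deg)
    s = sum(math.sin(math.radians(a)) for a in azimuths_deg)
    rbar = math.hypot(c, s) / n
    z = n * rbar ** 2
    p = math.exp(-z)  # first-order approximation to the Rayleigh p-value
    return rbar, p

# Hypothetical dolmen azimuths (degrees east of north)
azimuths = [88, 95, 102, 110, 93, 99, 85, 107, 90, 96]
rbar, p = rayleigh_test(azimuths)
print(round(rbar, 3), p < 0.001)
```

Note that ordinary means and standard deviations are misleading for angles (359° and 1° are close); the resultant-vector approach above is the circular-statistics remedy.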
Statistical analysis of sleep spindle occurrences.
Panas, Dagmara; Malinowska, Urszula; Piotrowski, Tadeusz; Żygierewicz, Jarosław; Suffczyński, Piotr
2013-01-01
Spindles - a hallmark of stage II sleep - are a transient oscillatory phenomenon in the EEG believed to reflect thalamocortical activity contributing to unresponsiveness during sleep. Currently spindles are often classified into two classes: fast spindles, with a frequency of around 14 Hz, occurring in the centro-parietal region; and slow spindles, with a frequency of around 12 Hz, prevalent in the frontal region. Here we aim to establish whether the spindle generation process also exhibits spatial heterogeneity. Electroencephalographic recordings from 20 subjects were automatically scanned to detect spindles and the time occurrences of spindles were used for statistical analysis. Gamma distribution parameters were fit to each inter-spindle interval distribution, and a modified Wald-Wolfowitz lag-1 correlation test was applied. Results indicate that not all spindles are generated by the same statistical process, but this dissociation is not spindle-type specific. Although this dissociation is not topographically specific, a single generator for all spindle types appears unlikely.
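The gamma fit to inter-spindle intervals can be reproduced in outline. The sketch below uses method-of-moments estimates (the study's exact fitting procedure may differ, and the intervals are invented):

```python
def gamma_moments(intervals):
    """Method-of-moments gamma fit to inter-spindle intervals: shape k
    and scale theta from the sample mean and variance. A shape near 1
    looks Poisson-like; k > 1 suggests a more regular process."""
    n = len(intervals)
    mean = sum(intervals) / n
    var = sum((x - mean) ** 2 for x in intervals) / (n - 1)
    k = mean ** 2 / var
    theta = var / mean
    return k, theta

# Hypothetical inter-spindle intervals (seconds)
isi = [8.2, 11.5, 9.8, 14.1, 10.3, 12.7, 9.1, 13.4]
k, theta = gamma_moments(isi)
print(round(k, 2), round(theta, 2))
```

By construction k * theta equals the sample mean, so the fitted distribution always reproduces the average interval; the shape parameter then carries the regularity information.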
Statistical analysis of management data
Gatignon, Hubert
2013-01-01
This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.
A Statistical Analysis of Cryptocurrencies
Stephen Chan; Jeffrey Chu; Saralees Nadarajah; Joerg Osterrieder
2017-01-01
We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal, however, no single distribution fits well jointly to all the cryptocurrencies analysed. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian distribution, generalized t distribution, and Laplace distribution give good fits. The results are important for investment and risk management purposes.
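The "which distribution fits best" comparison can be illustrated by comparing maximized log-likelihoods of two candidate families. Below, normal versus Laplace on invented heavy-tailed returns; the paper considers a wider set of distributions, and the data here are not real exchange rates:

```python
import math

def normal_loglik(x):
    """Log-likelihood of x under a normal with MLE mean and variance."""
    mu = sum(x) / len(x)
    var = sum((v - mu) ** 2 for v in x) / len(x)
    return sum(-0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
               for v in x)

def laplace_loglik(x):
    """Log-likelihood of x under a Laplace with (upper-)median location
    and mean-absolute-deviation scale (the MLEs for this family)."""
    m = sorted(x)[len(x) // 2]
    b = sum(abs(v - m) for v in x) / len(x)
    return sum(-math.log(2 * b) - abs(v - m) / b for v in x)

# Hypothetical daily log-returns: mostly tiny moves plus two large jumps
returns = [0.001, -0.002, 0.000, 0.003, -0.001, 0.090, -0.120,
           0.002, -0.003, 0.001]
print(laplace_loglik(returns) > normal_loglik(returns))  # True: fat tails favor Laplace
```

With equal parameter counts the raw log-likelihoods can be compared directly; for families of different sizes one would switch to AIC/BIC.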
Morphological Analysis for Statistical Machine Translation
National Research Council Canada - National Science Library
Lee, Young-Suk
2004-01-01
We present a novel morphological analysis technique which induces a morphological and syntactic symmetry between two languages with highly asymmetrical morphological structures to improve statistical...
Statistical Power in Meta-Analysis
Liu, Jin
2015-01-01
Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
STATISTICAL ANALYSIS OF MONETARY POLICY INDICATORS VARIABILITY
Directory of Open Access Journals (Sweden)
ANAMARIA POPESCU
2016-10-01
Full Text Available This paper attempts to characterize the available statistical data by means of statistical indicators. Its purpose is to present the statistical indicators, primary and secondary, simple and synthetic, that are frequently used for the statistical characterization of statistical series. We can thus analyze the central tendency, variability, shape, and concentration of data distributions using the analytical tools in Microsoft Excel that enable automatic calculation of descriptive statistics via the Data Analysis option in the Tools menu. The links between statistical variables can be studied using two techniques, correlation and regression. The analysis of monetary policy in the period 2003 - 2014, based on information provided by the website of the National Bank of Romania (BNR), reveals a certain tendency towards eccentricity and asymmetry in the financial data series.
Statistical analysis with Excel for dummies
Schmuller, Joseph
2013-01-01
Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro
Statistical analysis of planktic foraminifera of the surface Continental ...
African Journals Online (AJOL)
Planktic foraminiferal assemblage recorded from selected samples obtained from shallow continental shelf sediments off southwestern Nigeria were subjected to statistical analysis. The Principal Component Analysis (PCA) was used to determine variants of planktic parameters. Values obtained for these parameters were ...
Hypothesis testing and statistical analysis of microbiome
Directory of Open Access Journals (Sweden)
Yinglin Xia
2017-09-01
Full Text Available After the initiation of the Human Microbiome Project in 2008, various biostatistical and bioinformatic tools and computational methods for data analysis have been developed and applied to microbiome studies. In this review and perspective, we discuss the research and statistical hypotheses in gut microbiome studies, focusing on mechanistic concepts that underlie the complex relationships among host, microbiome, and environment. We review the currently available statistical tools and highlight recent progress in newly developed statistical methods and models. Given the current challenges and limitations in biostatistical approaches and tools, we discuss future directions in developing statistical methods and models for microbiome studies.
Statistical shape analysis with applications in R
Dryden, Ian L
2016-01-01
A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly-regarded `Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...
Spatial analysis statistics, visualization, and computational methods
Oyana, Tonny J
2015-01-01
An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...
A Statistical Framework for Single Subject Design with an Application in Post-stroke Rehabilitation
Lu, Ying; Scott, Marc; Raghavan, Preeti
2016-01-01
This paper proposes a practical yet novel solution to a longstanding statistical testing problem regarding single subject design. In particular, we aim to resolve an important clinical question: does a new patient behave the same as one from a healthy population? This question cannot be answered using the traditional single subject design when only test subject information is used, nor can it be satisfactorily resolved by comparing a single-subject's data with the mean value of a healthy popu...
Advances in statistical models for data analysis
Minerva, Tommaso; Vichi, Maurizio
2015-01-01
This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biennial meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.
Classification, (big) data analysis and statistical learning
Conversano, Claudio; Vichi, Maurizio
2018-01-01
This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...
Statistics and analysis of scientific data
Bonamente, Massimiliano
2013-01-01
Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional datasheets and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...
Comparative analysis of positive and negative attitudes toward statistics
Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah
2015-02-01
Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during a statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content. Students with negative attitudes toward statistics tend to feel depressed, especially in group assignments, are at risk of failure, are often highly emotional, and struggle to move forward. This study therefore investigates students' attitudes toward learning statistics. Six latent constructs were used to measure those attitudes: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes Toward Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP); the respondents were students taking the applied statistics course in different faculties. The analysis found the questionnaire acceptable, and the proposed relationships among the constructs were investigated. The students showed full effort to master the statistics course, found it enjoyable, were confident in their intellectual capacity, and held more positive than negative attitudes toward learning statistics. In conclusion, positive attitudes were mostly exhibited on the affect, cognitive competence, value, interest and effort constructs, while negative attitudes were mostly exhibited on the difficulty construct.
Reproducible statistical analysis with multiple languages
DEFF Research Database (Denmark)
Lenth, Russell; Højsgaard, Søren
2011-01-01
This paper describes a system for making statistical analyses reproducible. It differs from other systems for reproducible analysis in several ways, the two main differences being: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or \LaTeX. The main part of the paper is an example showing how the tools are used together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.
The implicative statistical analysis: an interdisciplinary paradigm
Iurato, Giuseppe
2012-01-01
In this brief note, which serves simply as an epistemological survey, some of the main basic elements of the Implicative Statistical Analysis (ISA) pattern are put into a possible critical comparison with some of the main aspects of probability theory, inductive inference theory, nonparametric and multivariate statistics, optimization theory and dynamical systems theory. The comparison points out the very interesting multidisciplinary nature of the ISA pattern and suggests related possible directions.
Foundation of statistical energy analysis in vibroacoustics
Le Bot, A
2015-01-01
This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.
Statistical analysis of SAMPEX PET proton measurements
Pierrard, V; Heynderickx, D; Kruglanski, M; Looper, M; Blake, B; Mewaldt, D
2000-01-01
We present a statistical study of the distributions of proton counts from the Proton-Electron Telescope aboard the low-altitude polar satellite SAMPEX. Our statistical analysis shows that histograms of observed proton counts are generally distributed according to Poisson distributions but are sometimes quite different. The observed departures from Poisson distributions can be attributed to variations of the average flux or to the non-constancy of the detector lifetimes.
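The core check described here, whether histograms of counts follow a Poisson distribution, can be sketched with the index of dispersion (variance/mean, which equals 1 for a Poisson process); the counts below are simulated, not SAMPEX measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-interval proton counts; for a constant average flux the
# counts should follow a Poisson distribution (variance equal to mean).
counts = rng.poisson(lam=7.0, size=2000)

# Index of dispersion: variance/mean, approximately 1 for Poisson data.
dispersion = counts.var(ddof=1) / counts.mean()

# A non-constant average flux (here, a mixture of two fluxes) inflates the
# variance relative to the mean, producing over-dispersed counts -- the
# kind of departure from Poisson behaviour the abstract describes.
mixed = np.concatenate([rng.poisson(4.0, 1000), rng.poisson(10.0, 1000)])
dispersion_mixed = mixed.var(ddof=1) / mixed.mean()
print(round(dispersion, 3), round(dispersion_mixed, 3))
```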
Statistical analysis of spatial and spatio-temporal point patterns
Diggle, Peter J
2013-01-01
Written by a prominent statistician and author, the first edition of this bestseller broke new ground in the then emerging subject of spatial statistics with its coverage of spatial point patterns. Retaining all the material from the second edition and adding substantial new material, Statistical Analysis of Spatial and Spatio-Temporal Point Patterns, Third Edition presents models and statistical methods for analyzing spatially referenced point process data. Reflected in the title, this third edition now covers spatio-temporal point patterns. It explores the methodological developments from th
Statistical analysis of network data with R
Kolaczyk, Eric D
2014-01-01
Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).
The Subject Analysis of Payment Systems Characteristics
Korobeynikova Olga Mikhaylovna
2015-01-01
The article deals with the analysis of payment systems aimed at identifying the categorical terminological apparatus, proving their specific features and revealing the impact of payment systems on the state of money turnover. On the basis of the subject analysis, the author formulates the definitions of a payment system (characterized by increasing speed of effecting payments, by the reduction of costs, by high degree of payments convenience for subjects of transactions, by security of paymen...
About Statistical Analysis of Qualitative Survey Data
Directory of Open Access Journals (Sweden)
Stefan Loehnert
2010-01-01
Full Text Available Gathered data are frequently not in a numerical form that allows immediate application of quantitative mathematical-statistical methods. This paper examines some basic aspects of how quantitative statistical methodology can be utilized in the analysis of qualitative data sets. The transformation of qualitative data into numeric values is considered the entrance point to quantitative analysis. Related publications and the impact of scale transformations are discussed. It is then shown how correlation coefficients can be used, in conjunction with data aggregation constraints, to construct relationship-modelling matrices. For illustration, a case study is referenced in which ordered qualitative survey answers of ordinal type are allocated to process-defining procedures as aggregation levels. Finally, options for measuring the adherence of the gathered empirical data to such derived aggregation models are introduced, and a statistically based reliability check to evaluate the reliability of the chosen model specification is outlined.
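As a minimal illustration of the transformation from qualitative to numeric values, the sketch below codes hypothetical ordinal survey answers onto a Likert scale and computes a Spearman rank correlation (one common coefficient choice for ordinal data; the paper does not prescribe a specific one):

```python
import numpy as np

# Hypothetical ordinal survey answers, coded onto a numeric Likert scale --
# the "entrance point to quantitative analysis" described in the paper.
scale = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "always": 5}
q1 = ["rarely", "often", "always", "sometimes", "never", "often", "always"]
q2 = ["never", "sometimes", "often", "sometimes", "rarely", "often", "always"]
x = np.array([scale[a] for a in q1], dtype=float)
y = np.array([scale[a] for a in q2], dtype=float)

def rank(v):
    # Average ranks; tied values receive the mean of their positions.
    order = v.argsort()
    r = np.empty(len(v))
    r[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):
        r[v == val] = r[v == val].mean()
    return r

# Spearman correlation: Pearson correlation of the ranks. It uses only the
# ordering of categories, which is all that is meaningful for ordinal data.
rho = np.corrcoef(rank(x), rank(y))[0, 1]
print(round(rho, 3))
```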
Statistics and analysis of scientific data
Bonamente, Massimiliano
2017-01-01
The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked,to improve the readabili...
Zheng, Jie; Harris, Marcelline R; Masci, Anna Maria; Lin, Yu; Hero, Alfred; Smith, Barry; He, Yongqun
2016-09-14
Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results, because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. The terms in OBCS, including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS-specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high-throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs . The Ontology…
The fuzzy approach to statistical analysis
Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.
2006-01-01
For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes. These namely are: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;
Selected papers on analysis, probability, and statistics
Nomizu, Katsumi
1994-01-01
This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.
Fundamentals of statistical experimental design and analysis
Easterling, Robert G
2015-01-01
Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...
Bayesian analysis: a new statistical paradigm for new technology.
Grunkemeier, Gary L; Payne, Nicola
2002-12-01
Full Bayesian analysis is an alternative statistical paradigm, as opposed to traditionally used methods, usually called frequentist statistics. Bayesian analysis is controversial because it requires assuming a prior distribution, which can be arbitrarily chosen; thus there is a subjective element, which is considered to be a major weakness. However, this could also be considered a strength since it provides a formal way of incorporating prior knowledge. Since it is flexible and permits repeated looks at evolving data, Bayesian analysis is particularly well suited to the evaluation of new medical technology. Bayesian analysis can refer to a range of things: from a simple, noncontroversial formula for inverting probabilities to an alternative approach to the philosophy of science. Its advantages include: (1) providing direct probability statements--which are what most people wrongly assume they are getting from conventional statistics; (2) formally incorporating previous information in statistical inference of a data set, a natural approach which we follow in everyday reasoning; and (3) flexible, adaptive research designs allowing multiple looks at accumulating study data. Its primary disadvantage is the element of subjectivity which some think is not scientific. We discuss and compare frequentist and Bayesian approaches and provide three examples of Bayesian analysis: (1) EKG interpretation, (2) a coin-tossing experiment, and (3) assessing the thromboembolic risk of a new mechanical heart valve.
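The coin-tossing example (2) has a standard closed-form treatment that also illustrates advantage (1), a direct probability statement; the prior and data below are illustrative choices, not taken from the paper:

```python
from math import gamma

# Conjugate Beta-Binomial update for the coin-tossing example: a Beta(a, b)
# prior on the heads probability p, combined with h heads and t tails,
# yields a Beta(a + h, b + t) posterior in closed form.
a, b = 1.0, 1.0    # uniform prior (no prior knowledge about the coin)
h, t = 7, 3        # hypothetical data: 7 heads in 10 tosses
a_post, b_post = a + h, b + t
posterior_mean = a_post / (a_post + b_post)

# Posterior density of Beta(a_post, b_post).
const = gamma(a_post + b_post) / (gamma(a_post) * gamma(b_post))
def beta_pdf(p):
    return const * p ** (a_post - 1) * (1 - p) ** (b_post - 1)

# A direct probability statement, P(p > 0.5 | data), via midpoint
# integration of the posterior density over (0.5, 1).
n = 100_000
prob_heads_biased = sum(beta_pdf(0.5 + 0.5 * (i + 0.5) / n)
                        for i in range(n)) * (0.5 / n)
print(round(posterior_mean, 3), round(prob_heads_biased, 3))
```

The output is exactly the kind of statement the abstract highlights: "given the data, the probability that the coin is biased toward heads is about 0.89", rather than a p-value.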
Statistical prediction of energy input to buildings subjected to wind force
洪, 起; Koh, Ki
2011-01-01
It is necessary to assess quantitatively the statistics of wind force in order to control the damage to buildings from wind forces predicted in the future. The purpose of this paper is to develop, in the frequency domain, a mathematical formula for the statistical prediction of energy input to buildings subjected to wind force, which will become the fundamental formula for establishing a rational wind-resistance design method.
Statistical Analysis Of Reconnaissance Geochemical Data From ...
African Journals Online (AJOL)
Five factors, whose structures were similar to the subjective groupings derived from the correlation matrix, were derived from R-mode factor analysis and have been interpreted in terms of underlying rock lithology, potential mineralization, and physico-chemical conditions in the environment. A high possibility of occurrence ...
Statistical inference of Minimum Rank Factor Analysis
Shapiro, A; Ten Berge, JMF
For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the
Statistical analysis of next generation sequencing data
Nettleton, Dan
2014-01-01
Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...
Statistical Tools for Forensic Analysis of Toolmarks
Energy Technology Data Exchange (ETDEWEB)
David Baldwin; Max Morris; Stan Bajic; Zhigang Zhou; James Kreiser
2004-04-22
Recovery and comparison of toolmarks, footprint impressions, and fractured surfaces connected to a crime scene are of great importance in forensic science. The purpose of this project is to provide statistical tools for the validation of the proposition that particular manufacturing processes produce marks on the work-product (or tool) that are substantially different from tool to tool. The approach to validation involves the collection of digital images of toolmarks produced by various tool manufacturing methods on produced work-products and the development of statistical methods for data reduction and analysis of the images. The developed statistical methods provide a means to objectively calculate a ''degree of association'' between matches of similarly produced toolmarks. The basis for statistical method development relies on ''discriminating criteria'' that examiners use to identify features and spatial relationships in their analysis of forensic samples. The developed data reduction algorithms utilize the same rules used by examiners for classification and association of toolmarks.
Spatial Statistical Analysis of Large Astronomical Datasets
Szapudi, Istvan
2002-12-01
The future of astronomy will be dominated by large and complex databases. Megapixel CMB maps, joint analyses of surveys across several wavelengths, as envisioned in the planned National Virtual Observatory (NVO), and the TByte/day data rate of future surveys (Pan-STARRS) put stringent constraints on future data analysis methods: they have to achieve at least N log N scaling to be viable in the long term. This warrants special attention to computational requirements, which were ignored during the initial development of current analysis tools in favor of statistical optimality. Even an optimal measurement, however, has residual errors due to statistical sample variance. Hence a suboptimal technique with measurement errors significantly smaller than the unavoidable sample variance produces results nearly identical to those of a statistically optimal technique. For instance, for analyzing CMB maps, I present a suboptimal alternative, indistinguishable from the standard optimal method with N^3 scaling, that can be rendered N log N with a hierarchical representation of the data: a speed-up of a trillion times compared to other methods. In this spirit I will present a set of novel algorithms and methods for spatial statistical analysis of future large astronomical databases, such as galaxy catalogs, megapixel CMB maps, or any point-source catalog.
Multivariate analysis: A statistical approach for computations
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, in education, in evaluating clusters in finance, and, more recently, in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database based on their content; the problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.
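A minimal sketch of anomaly detection via the correlation coefficient matrix, using simulated traffic features rather than real network data; the baseline-and-distance scheme below is one plausible reading of the method, not the authors' exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

def corr_matrix(window):
    # Correlation coefficient matrix of the traffic features in one window.
    return np.corrcoef(window, rowvar=False)

# Hypothetical traffic windows: 3 features (e.g. packet rate, byte rate,
# distinct destinations), 200 samples each. Normal windows share one
# correlation structure; the "attack" window has a different one.
base_cov = np.array([[1.0, 0.8, 0.6],
                     [0.8, 1.0, 0.7],
                     [0.6, 0.7, 1.0]])
normal = [rng.multivariate_normal(np.zeros(3), base_cov, 200)
          for _ in range(20)]
attack = rng.multivariate_normal(np.zeros(3), np.eye(3), 200)

# Baseline: average correlation matrix over the normal windows.
baseline = np.mean([corr_matrix(w) for w in normal], axis=0)

def score(window):
    # Anomaly score: Frobenius distance from the baseline correlation matrix.
    return np.linalg.norm(corr_matrix(window) - baseline)

normal_scores = [score(w) for w in normal]
print(round(max(normal_scores), 3), round(score(attack), 3))
```

An attack that decorrelates the features scores far above the normal windows, so a simple threshold on the score separates the two.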
Vapor Pressure Data Analysis and Statistics
2016-12-01
…there were flaws in the original data prior to its publication. … 3. FITTING METHODS. Our process for correlating experimental vapor pressure … 2. Penski, E.C. Vapor Pressure Data Analysis Methodology, Statistics, and Applications; CRDEC-TR-386; U.S. Army Chemical Research, Development, and … Chemical Biological Center: Aberdeen Proving Ground, MD, 2006; UNCLASSIFIED Report (ADA447993). 11. Kemme, H.R.; Kreps, S.I. Vapor Pressure of …
Statistical Challenges of Big Data Analysis in Medicine
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2015-01-01
Roč. 3, č. 1 (2015), s. 24-27 ISSN 1805-8698 R&D Projects: GA ČR GA13-23940S Grant - others:CESNET Development Fund(CZ) 494/2013 Institutional support: RVO:67985807 Keywords : big data * variable selection * classification * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research http://www.ijbh.org/ijbh2015-1.pdf
Statistical analysis of brake squeal noise
Oberst, S.; Lai, J. C. S.
2011-06-01
Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.
The CALORIES trial: statistical analysis plan.
Harvey, Sheila E; Parrott, Francesca; Harrison, David A; Mythen, Michael; Rowan, Kathryn M
2014-12-01
The CALORIES trial is a pragmatic, open, multicentre, randomised controlled trial (RCT) of the clinical effectiveness and cost-effectiveness of early nutritional support via the parenteral route compared with early nutritional support via the enteral route in unplanned admissions to adult general critical care units (CCUs) in the United Kingdom. The trial derives from the need for a large, pragmatic RCT to determine the optimal route of delivery for early nutritional support in the critically ill. To describe the proposed statistical analyses for the evaluation of the clinical effectiveness in the CALORIES trial. With the primary and secondary outcomes defined precisely and the approach to safety monitoring and data collection summarised, the planned statistical analyses, including prespecified subgroups and secondary analyses, were developed and are described. The primary outcome is all-cause mortality at 30 days. The primary analysis will be reported as a relative risk and absolute risk reduction and tested with the Fisher exact test. Prespecified subgroup analyses will be based on age, degree of malnutrition, acute severity of illness, mechanical ventilation at admission to the CCU, presence of cancer and time from CCU admission to commencement of early nutritional support. Secondary analyses include adjustment for baseline covariates. In keeping with best trial practice, we have developed, described and published a statistical analysis plan for the CALORIES trial and are placing it in the public domain before inspecting data from the trial.
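The primary analysis described in this plan (relative risk, absolute risk reduction, Fisher exact test) can be sketched in stdlib Python. The mortality counts below are hypothetical illustrations, not trial data, and the exact-test implementation is a minimal sketch of the textbook two-sided definition (summing tables no more probable than the observed one), not a validated statistical routine.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed one.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):  # probability that cell (1,1) equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    probs = [p_table(x) for x in range(lo, hi + 1)]
    p_obs = p_table(a)
    return min(1.0, sum(p for p in probs if p <= p_obs * (1 + 1e-9)))

# Hypothetical 30-day mortality counts (NOT trial data): 150/600 deaths in
# the parenteral arm vs 180/600 in the enteral arm.
rr = (150 / 600) / (180 / 600)   # relative risk, ~0.83
arr = 180 / 600 - 150 / 600      # absolute risk reduction, 0.05
p = fisher_exact_two_sided(150, 450, 180, 420)
print(rr, arr, p)
```

The same estimands (relative risk and absolute risk reduction with the exact test for significance) are what the plan prespecifies for the primary outcome.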
Sensitivity analysis and related analysis : A survey of statistical techniques
Kleijnen, J.P.C.
1995-01-01
This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical
Coupling strength assumption in statistical energy analysis
Lafont, T.; Totaro, N.; Le Bot, A.
2017-04-01
This paper is a discussion of the hypothesis of weak coupling in statistical energy analysis (SEA). The examples of coupled oscillators and statistical ensembles of coupled plates excited by broadband random forces are discussed. In each case, a reference calculation is compared with the SEA calculation. First, it is shown that the main SEA relation, the coupling power proportionality, is always valid for two oscillators irrespective of the coupling strength. But the case of three subsystems, consisting of oscillators or ensembles of plates, indicates that the coupling power proportionality fails when the coupling is strong. Strong coupling leads to non-zero indirect coupling loss factors and, sometimes, even to a reversal of the energy flow direction from low to high vibrational temperature.
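The coupling power proportionality discussed above can be written, in standard SEA notation (symbols assumed here, not taken from the paper), as

```latex
P_{12} = \omega\left(\eta_{12} E_1 - \eta_{21} E_2\right)
       = \omega\,\eta_{12}\, n_1 \left(\frac{E_1}{n_1} - \frac{E_2}{n_2}\right),
```

where \(\omega\) is the band centre frequency, \(\eta_{ij}\) are the coupling loss factors, \(n_i\) the modal densities, and \(E_i/n_i\) the modal energies (vibrational temperatures); the second equality uses the reciprocity relation \(\eta_{12} n_1 = \eta_{21} n_2\). Weak coupling is the regime in which this relation, with vanishing indirect coupling loss factors, remains accurate.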
Statistical analysis of hydroclimatic time series: Uncertainty and insights
Koutsoyiannis, Demetris; Montanari, Alberto
2007-05-01
Today, hydrologic research and modeling depend largely on climatological inputs, whose physical and statistical behavior are the subject of many debates in the scientific community. A relevant ongoing discussion is focused on long-term persistence (LTP), a natural behavior identified in several studies of instrumental and proxy hydroclimatic time series, which, nevertheless, is neglected in some climatological studies. LTP may reflect a long-term variability of several factors and thus can support a more complete physical understanding and uncertainty characterization of climate. The implications of LTP in hydroclimatic research, especially in statistical questions and problems, may be substantial but appear to be not fully understood or recognized. To offer insights on these implications, we demonstrate by using analytical methods that the characteristics of temperature series, which appear to be compatible with the LTP hypothesis, imply a dramatic increase of uncertainty in statistical estimation and a reduction of significance in statistical testing, in comparison with classical statistics. Therefore we maintain that statistical analysis in hydroclimatic research should be revisited in order to avoid misleading results, and simultaneously that merely statistical arguments do not suffice to verify or falsify the LTP (or any other) climatic hypothesis.
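The uncertainty inflation referred to above has a compact quantitative sketch. Under the LTP (Hurst) hypothesis with coefficient H, the standard error of the mean of n values scales as sigma * n**(H - 1) instead of the classical sigma * n**(-1/2); the numbers below are an illustration, not results from the paper.

```python
# Sketch: how long-term persistence (LTP) inflates the uncertainty of a
# sample mean relative to classical (independent-sample) statistics.
def equivalent_sample_size(n, H):
    """Number of independent observations classically equivalent to n LTP ones."""
    return n ** (2 - 2 * H)

def stderr_inflation(n, H):
    """Ratio of the LTP standard error of the mean to the classical one."""
    return n ** (H - 0.5)

n, H = 100, 0.8   # e.g. a 100-value record with a plausible Hurst coefficient
print(equivalent_sample_size(n, H))  # ~6.3 "effective" values
print(stderr_inflation(n, H))        # standard error ~4x the classical value
```

At H = 0.5 (no persistence) both ratios reduce to the classical case, which is a useful sanity check on the formulas.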
Lehmann, Thomas; Redies, Christoph
2017-01-01
For centuries, oil paintings have been a major segment of the visual arts. The JenAesthetics data set consists of a large number of high-quality images of oil paintings of Western provenance from different art periods. With this database, we studied the relationship between objective image measures and subjective evaluations of the images, especially evaluations on aesthetics (defined as artistic value) and beauty (defined as individual liking). The objective measures represented low-level statistical image properties that have been associated with aesthetic value in previous research. Subjective rating scores on aesthetics and beauty correlated not only with each other but also with different combinations of the objective measures. Furthermore, we found that paintings from different art periods vary with regard to the objective measures, that is, they exhibit specific patterns of statistical image properties. In addition, clusters of participants preferred different combinations of these properties. In conclusion, the results of the present study provide evidence that statistical image properties vary between art periods and subject matters and, in addition, they correlate with the subjective evaluation of paintings by the participants. PMID:28694958
Statistical trend analysis methods for temporal phenomena
Energy Technology Data Exchange (ETDEWEB)
Lehtinen, E.; Pulkkinen, U. [VTT Automation, (Finland); Poern, K. [Poern Consulting, Nykoeping (Sweden)
1997-04-01
We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specified model and method in advance. Therefore, particularly if the Poisson process assumption is questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we present four non-parametric tests: the Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we also consider Bayesian trend analysis. First we discuss a Bayesian model based on a power law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods. 14 refs, 10 figs.
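Of the non-parametric procedures surveyed above, the Cox-Stuart test is the simplest to sketch. The stdlib-only implementation below follows the textbook form of the test (pairing convention and exact two-sided binomial p-value); the series is hypothetical, not from the report.

```python
from math import comb

def cox_stuart_trend_test(x):
    """Cox-Stuart sign test for monotonic trend (two-sided, exact).

    Pairs each value in the first half with the value half a series later,
    counts positive differences, and computes an exact binomial p-value
    under H0: P(increase) = 1/2 (the middle value is dropped for odd n).
    """
    n = len(x)
    c = (n + 1) // 2                               # pairing offset
    pairs = [(x[i], x[i + c]) for i in range(n // 2)]
    diffs = [b - a for a, b in pairs if b != a]    # ties are discarded
    k, t = sum(d > 0 for d in diffs), len(diffs)
    tail = min(k, t - k)
    p = 2 * sum(comb(t, j) for j in range(tail + 1)) / 2 ** t
    return k, t, min(p, 1.0)

# A short series with an upward drift (hypothetical data):
k, t, p = cox_stuart_trend_test([1, 2, 1, 3, 2, 4, 5, 4, 6, 7])
print(k, t, p)  # 5 5 0.0625
```

Because it reduces the data to signs of paired differences, the test is valid for any process with exchangeable noise, which is exactly the robustness property the report asks for when the Poisson assumption is in doubt.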
Analysis of Preference Data Using Intermediate Test Statistic ...
African Journals Online (AJOL)
Intermediate statistic is a link between Friedman test statistic and the multinomial statistic. The statistic is based on ranking in a selected number of treatments, not necessarily all alternatives. We show that this statistic is transitive to well-known test statistic being used for analysis of preference data. Specifically, it is shown ...
Statistical analysis of solar proton events
Directory of Open Access Journals (Sweden)
V. Kurt
2004-06-01
A new catalogue of 253 solar proton events (SPEs) with energy >10 MeV and peak intensity >10 protons/(cm² s sr) (pfu) at the Earth's orbit for three complete 11-year solar cycles (1970-2002) is given. A statistical analysis of this data set of SPEs and their associated flares that occurred during this time period is presented. It is outlined that 231 of these proton events are flare related and only 22 of them are not associated with Hα flares. It is also noteworthy that 42 of these events are registered as Ground Level Enhancements (GLEs) in neutron monitors. The longitudinal distribution of the associated flares shows that a great number of these events are connected with western flares. This analysis enables one to understand the long-term dependence of the SPEs and the related flare characteristics on the solar cycle, which is useful for space weather prediction.
Wavelet and statistical analysis for melanoma classification
Nimunkar, Amit; Dhawan, Atam P.; Relue, Patricia A.; Patwardhan, Sachin V.
2002-05-01
The present work focuses on spatial/frequency analysis of epiluminescence images of dysplastic nevus and melanoma. A three-level wavelet decomposition was performed on skin-lesion images to obtain coefficients in the wavelet domain. A total of 34 features were obtained by computing ratios of the mean, variance, energy and entropy of the wavelet coefficients along with the mean and standard deviation of image intensity. An unpaired t-test for normally distributed features and the Wilcoxon rank-sum test for non-normally distributed features were performed to select statistically significant features. For our data set, the statistical analysis reduced the feature set from 34 to 5 features. For classification, the discriminant functions were computed in the feature space using the Mahalanobis distance. ROC curves were generated and evaluated for false positive fractions from 0.1 to 0.4. Most of the discriminant functions provided a true positive rate for melanoma of 93% with a false positive rate of up to 21%.
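The feature-selection step described above can be sketched as follows. This is a stdlib-only, normal-approximation version of the Wilcoxon rank-sum test (ties are assumed absent for simplicity), and the two feature samples are hypothetical, not the paper's data.

```python
from statistics import NormalDist

def rank_sum_test(a, b):
    """Wilcoxon rank-sum test, normal approximation, assuming no ties."""
    n1, n2 = len(a), len(b)
    ranks = {v: r for r, v in enumerate(sorted(a + b), start=1)}
    w = sum(ranks[v] for v in a)                  # rank sum of the first group
    mu = n1 * (n1 + n2 + 1) / 2                   # mean of W under H0
    sigma = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (w - mu) / sigma
    return 2 * (1 - NormalDist().cdf(abs(z)))     # two-sided p-value

# Hypothetical wavelet-energy ratios for two lesion classes:
healthy = [0.12, 0.15, 0.11, 0.18, 0.14]
lesion = [0.22, 0.25, 0.21, 0.27, 0.24]
print(rank_sum_test(healthy, lesion))  # small p (~0.009): feature retained
```

Applied to each of the 34 candidate features in turn, keeping only those with small p-values is the kind of screening that reduced the feature set to 5 in the study above.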
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules implement the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports two modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior studies of the epidemiological measurement of diseases caused by either heavy metal exposure in the environment or clinical complications in hospital. The simulation validity was confirmed with commercial statistics software. The model was installed in a stand-alone computer and in a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10⁵ sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
Statistical analysis of tourism destination competitiveness
Directory of Open Access Journals (Sweden)
Attilio Gardini
2013-05-01
The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds and we develop a statistical test of the model on sample data based on Italian tourist destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October-December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to its share of inclusion, because the share of inclusion is determined by the brand image, while the selection of the effective holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points to a risk of market share reduction for the Italian tourism system in the global market. We also find a snowball effect which helps the most popular destinations, mainly in the northern Italian regions.
Multivariate statistical analysis of wildfires in Portugal
Costa, Ricardo; Caramelo, Liliana; Pereira, Mário
2013-04-01
Several studies demonstrate that wildfires in Portugal present high temporal and spatial variability as well as cluster behavior (Pereira et al., 2005, 2011). This study aims to contribute to the characterization of the fire regime in Portugal with a multivariate statistical analysis of the time series of the number of fires and area burned in Portugal during the 1980-2009 period. The data used in the analysis is an extended version of the Rural Fire Portuguese Database (PRFD) (Pereira et al., 2011), provided by the National Forest Authority (Autoridade Florestal Nacional, AFN), the Portuguese Forest Service, which includes information for more than 500,000 fire records. There are many advanced techniques for examining the relationships among multiple time series at the same time (e.g., canonical correlation analysis, principal components analysis, factor analysis, path analysis, multivariate analysis of variance, clustering methods). This study compares and discusses the results obtained with these different techniques. Pereira, M.G., Trigo, R.M., DaCamara, C.C., Pereira, J.M.C., Leite, S.M., 2005: "Synoptic patterns associated with large summer forest fires in Portugal". Agricultural and Forest Meteorology, 129, 11-25. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).
Statistical Analysis of Bus Networks in India
Chatterjee, Atanu; Ramadurai, Gitakrishnan
2015-01-01
Through the past decade the field of network science has established itself as a common ground for the cross-fertilization of exciting interdisciplinary studies, motivating researchers to model almost every physical system as an interacting network consisting of nodes and links. Although public transport networks such as airline and railway networks have been extensively studied, the status of bus networks still remains in obscurity. In developing countries like India, where bus networks play an important role in day-to-day commutation, it is of significant interest to analyze their topological structure and answer some of the basic questions on their evolution, growth, robustness and resiliency. In this paper, we model the bus networks of major Indian cities as graphs in L-space, and evaluate their various statistical properties using concepts from network science. Our analysis reveals a wide spectrum of network topology with the common underlying feature of small-world property. We observe tha...
Höller, Yvonne; Kronbichler, Martin; Bergmann, Jürgen; Crone, Julia Sophia; Schmid, Elisabeth Verena; Golaszewski, Stefan; Ladurner, Gunther
2011-06-01
In previous studies event-related potentials and oscillations in response to subject's own name have been analyzed extensively on group-level in healthy subjects and in patients with a disorder of consciousness. Subject's own name as a deviant produces a P3. With equiprobable stimuli, non-phase-locked alpha oscillations are smaller in response to subject's own name compared to other names or subject's own name backwards. However, little is known about replicability on a single-subject level. Seventeen healthy subjects were assessed in an own-name paradigm with equiprobable stimuli of subject's own name, another name, and subject's own name backwards. Event-related potentials and non-phase locked oscillations were analyzed with single-subject, non-parametric statistics. No consistent results were found either for ERPs or for the non-phase locked changes of oscillatory activities. Only 4 subjects showed a robust effect as expected, that is, a lower activity in the alpha-beta range to subject's own name compared to other conditions. Four subjects elicited a higher activity for subject's own name. Thus, analyzing the EEG reactivity in the own-name paradigm with equiprobable stimuli on a single-subject level yields a high variance between subjects. In future research, single-subject statistics should be applied for examining the validity of physiologic measurements in other paradigms and for examining the pattern of reactivity in patients. Copyright © 2011 Elsevier B.V. All rights reserved.
Statistics Analysis Measures Painting of Cooling Tower
Directory of Open Access Journals (Sweden)
A. Zacharopoulou
2013-01-01
This study refers to the cooling tower of Megalopolis (constructed in 1975) and its protection from a corrosive environment. The maintenance of the cooling tower took place in 2008. The cooling tower was badly damaged by corrosion of the reinforcement. The parabolic cooling towers (of an electrical power plant) are a typical example of construction exposed to a particularly aggressive environment. The protection of cooling towers is usually achieved through organic coatings. Because of the different environmental impacts on the internal and external side of the cooling tower, a different system of paint application is required. The present study refers to the damages caused by the corrosion process. The corrosive environments, the application of the painting, the quality control process, the measurements and statistical analysis, and the results are discussed in this study. In the process of quality control the following measurements were taken into consideration: (1) examination of the adhesion with the cross-cut test, (2) examination of the film thickness, and (3) control of the pull-off resistance for concrete substrates and paintings. Finally, this study refers to the correlations of measurements, analysis of failures in relation to the quality of repair, and rehabilitation of the cooling tower. This study also made a first attempt to apply specific corrosion inhibitors in such a large structure.
Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)
2004-12-01
The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...
Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)
2005-12-01
The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...
Statistical network analysis for analyzing policy networks
DEFF Research Database (Denmark)
Robins, Garry; Lewis, Jenny; Wang, Peng
2012-01-01
To analyze social network data using standard statistical approaches is to risk incorrect inference. The dependencies among observations implied in a network conceptualization undermine standard assumptions of the usual general linear models. One of the most quickly expanding areas of social and policy network methodology is the development of statistical modeling approaches that can accommodate such dependent data. In this article, we review three network statistical methods commonly used in the current literature: quadratic assignment procedures, exponential random graph models (ERGMs) ...
The Subject Analysis of Payment Systems Characteristics
Directory of Open Access Journals (Sweden)
Korobeynikova Olga Mikhaylovna
2015-09-01
The article deals with the analysis of payment systems aimed at identifying the categorical terminological apparatus, proving their specific features and revealing the impact of payment systems on the state of money turnover. On the basis of the subject analysis, the author formulates the definitions of a payment system (characterized by increasing speed of effecting payments, by the reduction of costs, by high degree of payments convenience for subjects of transactions, by security of payments, by acceptable level of risks and by social efficiency), a national payment system, and a local payment system (characterized by the growth of economic and social efficiency of system participants and by the optimization of money turnover on the basis of saving transaction costs and increasing the speed of money flows within the local payment system). According to economic levels, payment systems are divided into macrosystems (national payment systems), mezosystems (payment systems localized on an operational and territorial basis) and microsystems (payments by individual economic subjects). The establishment of qualitative features of payment systems, which forms the basis of the author's terminological interpretation, made it possible to reveal the cause-effect relations of the influence of payment systems on the state of money turnover of the involved subjects and on the economy as a whole. The result of the present research consists in revealing the influence of payment systems on the state of money turnover, which is significant: at the state and regional level – in the optimization of budget and inter-budgetary relations, in acceleration of the money turnover, in deceleration of the money supply and inflation rate, and in reduced need for money emission; at the level of economic entities – in accelerating the money turnover and accounts receivable, in the reduction of debit and credit loans, and in the growth of profit (turnover); at the household level – in
Statistical Analysis of Bus Networks in India.
Chatterjee, Atanu; Manohar, Manju; Ramadurai, Gitakrishnan
2016-01-01
In this paper, we model the bus networks of six major Indian cities as graphs in L-space, and evaluate their various statistical properties. While airline and railway networks have been extensively studied, a comprehensive study on the structure and growth of bus networks is lacking. In India, where bus transport plays an important role in day-to-day commutation, it is of significant interest to analyze its topological structure and answer basic questions on its evolution, growth, robustness and resiliency. Although the common feature of small-world property is observed, our analysis reveals a wide spectrum of network topologies arising due to significant variation in the degree-distribution patterns in the networks. We also observe that these networks, although robust and resilient to random attacks, are particularly degree-sensitive. Unlike real-world networks such as the Internet, WWW and airline networks, which are virtual, bus networks are physically constrained. Our findings therefore throw light on the evolution of such geographically constrained networks, which will help us in designing more efficient bus networks in the future.
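The L-space statistics behind the small-world diagnosis above (average shortest-path length and clustering coefficient) can be sketched with stdlib Python on a toy graph; the four-stop network below is hypothetical, not one of the paper's city networks.

```python
from collections import deque
from itertools import combinations

def avg_path_length(adj):
    """Mean shortest-path length of a small connected undirected graph (BFS)."""
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in nodes:
            if t != s:
                total += dist[t]
                pairs += 1
    return total / pairs

def clustering(adj):
    """Mean local clustering coefficient (degree < 2 contributes 0)."""
    cs = []
    for u, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        cs.append(2 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

# Toy L-space graph: consecutive stops on a route are linked (hypothetical).
g = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(avg_path_length(g), clustering(g))  # ~1.33 and ~0.58
```

A small-world network is one whose average path length stays close to that of a random graph while its clustering remains much higher, which is the combination reported for the bus networks above.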
Developments in statistical analysis in quantitative genetics
DEFF Research Database (Denmark)
Sorensen, Daniel
2009-01-01
A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and ...
Petti, M; Pichiorri, F; Toppi, J; Cincotti, F; Salinari, S; Babiloni, F; Mattia, D; Astolfi, L
2014-01-01
One of the main limitations commonly encountered when dealing with the estimation of brain connectivity is the difficulty to perform a statistical assessment of significant changes in brain networks at a single-subject level. This is mainly due to the lack of information about the distribution of the connectivity estimators at different conditions. While group analysis is commonly adopted to perform a statistical comparison between conditions, it may impose major limitations when dealing with the heterogeneity expressed by a given clinical condition in patients. This holds true particularly for stroke when seeking for quantitative measurements of the efficacy of any rehabilitative intervention promoting recovery of function. The need is then evident of an assessment which may account for individual pathological network configuration associated with different level of patients' response to treatment; such network configuration is highly related to the effect that a given brain lesion has on neural networks. In this study we propose a resampling-based approach to the assessment of statistically significant changes in cortical connectivity networks at a single subject level. First, we provide the results of a simulation study testing the performances of the proposed approach under different conditions. Then, to show the sensitivity of the method, we describe its application to electroencephalographic (EEG) data recorded from two post-stroke patients who showed different clinical recovery after a rehabilitative intervention.
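A minimal sketch of the resampling idea, assuming the single-subject data reduce to two samples of connectivity values (one per condition; the numbers below are hypothetical): the observed difference of means is compared with its distribution under random relabelling of trials.

```python
import random

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sample permutation test on the absolute difference of means.

    x and y would be connectivity values estimated from trials of two
    conditions (hypothetical single-subject data).
    """
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                       # random relabelling
        xs, ys = pooled[:len(x)], pooled[len(x):]
        if abs(sum(xs) / len(xs) - sum(ys) / len(ys)) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)             # add-one p-value estimate

p = permutation_test([0.8, 0.7, 0.9, 0.85], [0.3, 0.4, 0.35, 0.25])
print(p)  # small p: the two conditions differ
```

The add-one correction keeps the estimated p-value strictly positive, a standard convention for Monte Carlo permutation tests.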
Analysis of Preference Data Using Intermediate Test Statistic Abstract
African Journals Online (AJOL)
PROF. O. E. OSUAGWU
2013-06-01
We show that this statistic is transitive to well-known test statistic being used for analysis of preference data. Specifically, it is shown that our link is equivalent to the ... Keywords: Preference data, Friedman statistic, multinomial test statistic, intermediate test ... favourable ones would not be a big issue in.
Statistical Analysis of Data for Timber Strengths
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard; Hoffmeyer, P.
Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data (100%) and the lower tail (30%) of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. 8 different databases are analysed. The results show that the 2-parameter Weibull (and Normal) distributions give the best fits to the data available, especially if tail fits are used, whereas the LogNormal distribution generally gives poor fit and larger coefficients of variation, especially if tail fits are used.
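A Maximum Likelihood fit of the 2-parameter Weibull distribution, as used above, can be sketched in stdlib Python. The strength values are hypothetical, not drawn from the paper's databases, and the damped fixed-point solver is a simple illustrative scheme rather than a production fitting routine.

```python
from math import log

def weibull_mle(x, iters=2000):
    """Maximum-likelihood fit of a 2-parameter Weibull (shape k, scale lam).

    Solves the profile likelihood equation for the shape by a damped
    fixed-point iteration, then recovers the scale in closed form.
    """
    n = len(x)
    mean_log = sum(log(v) for v in x) / n
    k = 1.0
    for _ in range(iters):
        s0 = sum(v ** k for v in x)
        s1 = sum(v ** k * log(v) for v in x)
        k = 0.5 * k + 0.5 / (s1 / s0 - mean_log)   # damped update for stability
    lam = (sum(v ** k for v in x) / n) ** (1.0 / k)
    return k, lam

# Hypothetical bending strengths in MPa (not the paper's data):
strengths = [28.1, 33.5, 36.0, 39.2, 41.7, 44.0, 47.3, 51.8]
k, lam = weibull_mle(strengths)
print(k, lam)
```

For tail fits as described above, the same likelihood would be restricted to the lower 30% of the ordered sample (a censored-data likelihood), which is why tail estimates can differ noticeably from whole-sample ones.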
Statistical convergence, selection principles and asymptotic analysis
Energy Technology Data Exchange (ETDEWEB)
Di Maio, G. [Dipartimento di Matematica, Seconda Universita di Napoli, Via Vivaldi 43, 81100 Caserta (Italy)], E-mail: giuseppe.dimaio@unina2.it; Djurcic, D. [Technical Faculty, University of Kragujevac, Svetog Save 65, 32000 Cacak (Serbia)], E-mail: dragandj@tfc.kg.ac.yu; Kocinac, Lj.D.R. [Faculty of Sciences and Mathematics, University of Nis, Visegradska 33, 18000 Nis (Serbia)], E-mail: lkocinac@ptt.rs; Zizovic, M.R. [Technical Faculty, University of Kragujevac, Svetog Save 65, 32000 Cacak (Serbia)], E-mail: zizo@tfc.kg.ac.yu
2009-12-15
We consider the set S of sequences of positive real numbers in the context of statistical convergence/divergence and show that some subclasses of S have certain nice selection and game-theoretic properties.
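For reference, a sequence \((x_n)\) of reals is statistically convergent to \(L\) when the indices at which it strays from \(L\) have natural density zero:

```latex
\forall \varepsilon > 0:\qquad
\lim_{m \to \infty} \frac{1}{m}\,
\bigl|\{\, n \le m \;:\; |x_n - L| \ge \varepsilon \,\}\bigr| = 0 .
```

Ordinary convergence implies statistical convergence, but not conversely: a sequence may behave arbitrarily on a set of indices of density zero without affecting its statistical limit.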
Statistical analysis of microbiological diagnostic tests
Directory of Open Access Journals (Sweden)
C P Baveja
2017-01-01
No study in medical science is complete without application of statistical principles. Incorrect application of statistical tests causes incorrect interpretation of the study results obtained through hard work. Yet statistics remains one of the most neglected and loathed areas, probably due to a lack of understanding of the basic principles. In microbiology, rapid progress is being made in the field of diagnostic tests, and a huge number of studies being conducted are related to the evaluation of these tests. Therefore, a good knowledge of statistical principles will aid a microbiologist to plan, conduct and interpret the results. The initial part of this review discusses study designs, types of variables, principles of sampling, calculation of sample size, types of errors and the power of a study. Subsequently, the performance characteristics of a diagnostic test, the receiver operating characteristic curve and tests of significance are described. Lack of a perfect gold standard test against which our test is being compared can hamper the study results; thus, it becomes essential to apply the remedial measures described here. Rapid computerisation has made statistical calculations much simpler, obviating the need for the routine researcher to rote-learn derivations and apply complex formulae. Thus, greater focus has been laid on developing an understanding of principles. Finally, it should be kept in mind that a diagnostic test may show exemplary statistical results, yet it may not be useful in the routine laboratory or in the field; thus, its operational characteristics are as important as its statistical results.
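The performance characteristics discussed above follow directly from a 2x2 table of test results against the gold standard; the counts below are hypothetical.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Performance characteristics of a diagnostic test from a 2x2 table
    (tp/fp/fn/tn: true/false positives and negatives vs the gold standard)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical evaluation of a new assay against a gold standard:
m = diagnostic_metrics(tp=90, fp=15, fn=10, tn=185)
print(m)  # sensitivity 0.90, specificity 0.925, PPV ~0.857, NPV ~0.949
```

Note that, unlike sensitivity and specificity, the predictive values depend on disease prevalence in the sample, which is one reason a statistically impressive test can still disappoint in routine use.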
Statistical damage constitutive model for rocks subjected to cyclic stress and cyclic temperature
Zhou, Shu-Wei; Xia, Cai-Chu; Zhao, Hai-Bin; Mei, Song-Hua; Zhou, Yu
2017-10-01
A constitutive model of rocks subjected to cyclic stress-temperature was proposed. Based on statistical damage theory, the damage constitutive model with Weibull distribution was extended. Influence of model parameters on the stress-strain curve for rock reloading after stress-temperature cycling was then discussed. The proposed model was initially validated by rock tests for cyclic stress-temperature and only cyclic stress. Finally, the total damage evolution induced by stress-temperature cycling and reloading after cycling was explored and discussed. The proposed constitutive model is reasonable and applicable, describing well the stress-strain relationship during stress-temperature cycles and providing a good fit to the test results. Elastic modulus in the reference state and the damage induced by cycling affect the shape of reloading stress-strain curve. Total damage induced by cycling and reloading after cycling exhibits three stages: initial slow increase, mid-term accelerated increase, and final slow increase.
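The abstract does not reproduce the model's equations; a common single-axis form of a Weibull statistical damage law, which this family of models extends, takes the damage variable as D = 1 - exp(-(eps/F)^m) and the stress as sigma = E*eps*(1 - D). The parameters below are illustrative, not the paper's fitted values:

```python
import numpy as np

def stress(strain, E=20e9, F=0.004, m=2.0):
    """Axial stress under a Weibull statistical damage law.

    D = 1 - exp(-(eps/F)^m) is the damage variable; sigma = E*eps*(1 - D).
    E (elastic modulus), F and m (Weibull parameters) are illustrative only.
    """
    D = 1.0 - np.exp(-(strain / F) ** m)
    return E * strain * (1.0 - D)

eps = np.linspace(0.0, 0.01, 200)
sig = stress(eps)
# For this law the peak stress occurs at eps = F * (1/m)**(1/m).
peak_strain = eps[np.argmax(sig)]
```

The post-peak softening branch of the resulting curve is what the damage variable contributes; cycling-induced damage would enter as an additional factor on the modulus.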
Statistical Analysis of Data for Timber Strengths
DEFF Research Database (Denmark)
Sørensen, John Dalsgaard
2003-01-01
. The statistical fits have generally been made using all data and the lower tail of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. The results show that the 2-parameter Weibull distribution gives the best...... fits to the data available, especially if tail fits are used whereas the Log Normal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors...
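The Maximum Likelihood fit of a 2-parameter Weibull distribution described above can be sketched with scipy; the "strength" sample below is synthetic, standing in for the timber data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic bending-strength sample (MPa) standing in for real timber data.
strength = stats.weibull_min.rvs(c=4.0, scale=40.0, size=500, random_state=rng)

# 2-parameter Weibull: fix the location at zero, estimate shape/scale by ML.
shape, loc, scale = stats.weibull_min.fit(strength, floc=0)

# Lognormal fit for comparison; the K-S statistic gauges the quality of fit.
s, loc_ln, scale_ln = stats.lognorm.fit(strength, floc=0)
ks_weibull = stats.kstest(strength, 'weibull_min', args=(shape, loc, scale)).statistic
ks_lognorm = stats.kstest(strength, 'lognorm', args=(s, loc_ln, scale_ln)).statistic
```

A tail fit, as used in the paper, would instead apply the same estimators to only the observations below some lower quantile, since the lower tail governs structural reliability.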
Toppi, J; Anzolin, A; Petti, M; Cincotti, F; Mattia, D; Salinari, S; Babiloni, F; Astolfi, L
2014-01-01
Methods based on the multivariate autoregressive (MVAR) approach are commonly used for effective connectivity estimation, as they allow all available sources to be included in a single model. To ensure high accuracy for high model dimensions, all the observations are used to provide a single estimate of the model, and thus of the network and its properties. The unavailability of a distribution of connectivity values for a single experimental condition makes it impossible to perform statistical comparisons between different conditions at the single-subject level. This is a major limitation, especially when dealing with the heterogeneity of clinical conditions presented by patients. In the present paper we propose a novel approach to constructing a distribution of connectivity values in the single-subject case. The proposed approach is based on small perturbations of the network's properties and allows significant changes in brain connectivity indexes derived from graph theory to be assessed. Its feasibility and applicability were investigated by means of a simulation study and an application to real EEG data.
Gregor Mendel's Genetic Experiments: A Statistical Analysis after 150 Years
Czech Academy of Sciences Publication Activity Database
Kalina, Jan
2016-01-01
Roč. 12, č. 2 (2016), s. 20-26 ISSN 1801-5603 Institutional support: RVO:67985807 Keywords : genetics * history of science * biostatistics * design of experiments Subject RIV: BB - Applied Statistics, Operational Research
Information sources of company's competitive environment statistical analysis
Khvostenko, O.
2010-01-01
The article is dedicated to a problem of the company's competitive environment statistical analysis and its information sources. The main features of information system and its significance in the competitive environment statistical research have been considered.
Statistical analysis of lineaments of Goa, India
Digital Repository Service at National Institute of Oceanography (India)
Iyer, S.D.; Banerjee, G.; Wagle, B.G.
statistically to obtain the nonlinear pattern in the form of a cosine wave. Three distinct peaks were found at azimuths of 40-45 degrees, 90-95 degrees and 140-145 degrees, which have peak values of 5.85, 6.80 respectively. These three peaks are correlated...
Statistical models and methods for reliability and survival analysis
Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo
2013-01-01
Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical
Statistical analysis of medical data using SAS
Der, Geoff
2005-01-01
An Introduction to SAS; Describing and Summarizing Data; Basic Inference; Scatterplots, Correlation, Simple Regression and Smoothing; Analysis of Variance and Covariance; Multiple Regression; Logistic Regression; The Generalized Linear Model; Generalized Additive Models; Nonlinear Regression Models; The Analysis of Longitudinal Data I; The Analysis of Longitudinal Data II: Models for Normal Response Variables; The Analysis of Longitudinal Data III: Non-Normal Response; Survival Analysis; Analysis of Multivariate Data: Principal Components and Cluster Analysis; References
Common misconceptions about data analysis and statistics.
Motulsky, Harvey J
2015-02-01
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word "significant". (4) Overreliance on standard errors, which are often misunderstood.
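The first mistake listed, P-hacking by adding replicates and re-testing, can be demonstrated with a short simulation on null data (both groups drawn from the same distribution, so every "significant" result is a false positive); the sample sizes and trial count below are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def p_hacked_experiment(n_start=10, n_max=30, alpha=0.05):
    """Two null groups; keep adding observations and re-testing until p < alpha."""
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < alpha or len(a) >= n_max:
            return p < alpha          # True = (false) positive claimed
        a.append(rng.normal())        # "collect a few more replicates"
        b.append(rng.normal())

false_pos = sum(p_hacked_experiment() for _ in range(1000)) / 1000
# With this optional stopping, the false-positive rate exceeds the nominal 5%.
```

Each individual test is valid at the 5% level; it is the repeated looking that inflates the overall error rate, which is the point of the article's first misconception.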
Common misconceptions about data analysis and statistics.
Motulsky, Harvey J
2014-11-01
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1. P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. 2. Overemphasis on P values rather than on the actual size of the observed effect. 3. Overuse of statistical hypothesis testing, and being seduced by the word "significant". 4. Overreliance on standard errors, which are often misunderstood.
Directory of Open Access Journals (Sweden)
Priya Ranganathan
2015-01-01
Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.
Commentary Discrepancy between statistical analysis method and ...
African Journals Online (AJOL)
to strive for compatibility between study design and analysis plan. Many authors have reported on common discrepancies in medical research, specifically between analysis methods and study design.4,5 For instance, after reviewing several published studies, Varnell et al. observed that many studies had applied.
Confidence Levels in Statistical Analyses. Analysis of Variances. Case Study.
Directory of Open Access Journals (Sweden)
Ileana Brudiu
2010-05-01
Full Text Available Applying a statistical test to check statistical assumptions offers a positive or negative response regarding the veracity of the issued hypothesis. In the case of analysis of variance, it is necessary to apply a post hoc test to determine differences within the group. Statistical estimation using confidence levels provides more information than a statistical test: it shows the high degree of uncertainty resulting from small samples and builds conclusions in terms of "marginally significant" or "almost significant" (p close to 0.05). The case study shows how statistical estimation complements the application of the analysis of variance and Tukey tests.
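The combination described above, an analysis-of-variance test followed by interval estimation for a pairwise difference, can be sketched with scipy; the three groups below are invented illustrative data, not the case-study measurements:

```python
import numpy as np
from scipy import stats

# Three hypothetical treatment groups (invented data).
g1 = np.array([4.1, 4.5, 3.9, 4.3, 4.0])
g2 = np.array([5.0, 5.4, 4.8, 5.2, 5.1])
g3 = np.array([4.2, 4.6, 4.0, 4.4, 4.1])

# One-way ANOVA: does at least one group mean differ?
F, p = stats.f_oneway(g1, g2, g3)

# A 95% CI for one pairwise difference (g2 - g1). The interval conveys both
# effect size and uncertainty, not just "significant or not".
diff = g2.mean() - g1.mean()
se = np.sqrt(g1.var(ddof=1) / len(g1) + g2.var(ddof=1) / len(g2))
t_crit = stats.t.ppf(0.975, df=len(g1) + len(g2) - 2)
ci = (diff - t_crit * se, diff + t_crit * se)
```

A full post hoc analysis would compute such intervals for every pair with a Tukey-type adjustment for multiple comparisons; the single unadjusted interval here only illustrates the estimation idea the abstract emphasises.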
Why Flash Type Matters: A Statistical Analysis
Mecikalski, Retha M.; Bitzer, Phillip M.; Carey, Lawrence D.
2017-09-01
While the majority of research only differentiates between intracloud (IC) and cloud-to-ground (CG) flashes, there exists a third flash type, known as hybrid flashes. These flashes have extensive IC components as well as return strokes to ground but are misclassified as CG flashes in current flash type analyses due to the presence of a return stroke. In an effort to show that IC, CG, and hybrid flashes should be separately classified, the two-sample Kolmogorov-Smirnov (KS) test was applied to the flash sizes, flash initiation, and flash propagation altitudes for each of the three flash types. The KS test statistically showed that IC, CG, and hybrid flashes do not have the same parent distributions and thus should be separately classified. Separate classification of hybrid flashes will lead to improved lightning-related research, because unambiguously classified hybrid flashes occur on the same order of magnitude as CG flashes for multicellular storms.
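The two-sample Kolmogorov-Smirnov comparison described above can be sketched in a few lines; the flash-size samples below are synthetic lognormal draws standing in for the real lightning-mapping data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-ins for flash-size samples of two flash types (arbitrary
# units); real LMA-derived samples would be used in practice.
ic_sizes = rng.lognormal(mean=2.0, sigma=0.5, size=300)
hybrid_sizes = rng.lognormal(mean=2.4, sigma=0.5, size=300)

# Two-sample K-S: are the samples drawn from the same parent distribution?
res = stats.ks_2samp(ic_sizes, hybrid_sizes)
# A small p-value rejects the hypothesis of a common parent distribution,
# which is the paper's argument for classifying the flash types separately.
```

The same call would be repeated for each pair of flash types and each parameter (size, initiation altitude, propagation altitude).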
Statistics over features: EEG signals analysis.
Derya Ubeyli, Elif
2009-08-01
This paper presents the use of statistics over the set of features representing electroencephalogram (EEG) signals. Since classification is more accurate when the pattern is simplified through representation by important features, feature extraction and selection play an important role in classifying systems such as neural networks. Multilayer perceptron neural network (MLPNN) architectures were formulated and used as the basis for detection of electroencephalographic changes. Three types of EEG signals (EEG signals recorded from healthy volunteers with eyes open, epilepsy patients in the epileptogenic zone during a seizure-free interval, and epilepsy patients during epileptic seizures) were classified. The selected Lyapunov exponents, wavelet coefficients and the power levels of power spectral density (PSD) values obtained by eigenvector methods of the EEG signals were used as inputs to the MLPNN trained with the Levenberg-Marquardt algorithm. The classification results confirmed that the proposed MLPNN has potential in detecting electroencephalographic changes.
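The classification step can be sketched with a small multilayer perceptron; the features below are synthetic Gaussian stand-ins for the Lyapunov/wavelet/PSD features, and the lbfgs solver replaces Levenberg-Marquardt training, which scikit-learn does not provide:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic 4-dimensional "feature vectors" for two well-separated classes,
# standing in for features extracted from EEG segments.
healthy = rng.normal(0.0, 1.0, size=(100, 4))
seizure = rng.normal(3.0, 1.0, size=(100, 4))
X = np.vstack([healthy, seizure])
y = np.array([0] * 100 + [1] * 100)

# A small MLP; lbfgs stands in for the Levenberg-Marquardt training used
# in the paper.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver='lbfgs',
                    max_iter=500, random_state=0)
clf.fit(X, y)
acc = clf.score(X, y)   # training accuracy on this easily separable data
```

In a real evaluation the accuracy would of course be measured on held-out test segments, not on the training data.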
Statistical power analysis for the behavioral sciences
National Research Council Canada - National Science Library
Cohen, Jacob
1988-01-01
.... A chapter has been added for power analysis in set correlation and multivariate methods (Chapter 10). Set correlation is a realization of the multivariate general linear model, and incorporates the standard multivariate methods...
Statistical methods for categorical data analysis
Powers, Daniel
2008-01-01
This book provides a comprehensive introduction to methods and models for categorical data analysis and their applications in social science research. Companion website also available, at https://webspace.utexas.edu/dpowers/www/
Statistical power analysis for the behavioral sciences
National Research Council Canada - National Science Library
Cohen, Jacob
1988-01-01
... offers a unifying framework and some new data-analytic possibilities. 2. A new chapter (Chapter 11) considers some general topics in power analysis in more integrated form than is possible in the earlier...
Statistical Analysis of the Flexographic Printing Quality
Directory of Open Access Journals (Sweden)
Agnė Matulaitienė
2014-02-01
Full Text Available Analysis of flexographic printing output quality was performed using the SPSS software package. Samples of defective products were collected for one year in an existing flexographic printing company. All defective product examples were described in detail and analyzed. The SPSS software package was chosen because of the large amount of data. Data-flaw-based hypotheses were formulated and then approved or rejected in the analysis. The results obtained are presented in the charts.
A statistical package for computing time and frequency domain analysis
Brownlow, J.
1978-01-01
The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
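The SPA program's pipeline (trend removal, then time- and frequency-domain characterization) can be sketched with scipy's signal tools; the signal below is synthetic, a 5 Hz sine with an added linear trend and noise:

```python
import numpy as np
from scipy import signal

# 10 s of a 5 Hz sine plus a linear trend and noise, sampled at 100 Hz.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
x = (np.sin(2 * np.pi * 5 * t) + 0.3 * t
     + 0.1 * np.random.default_rng(0).normal(size=t.size))

x_detrended = signal.detrend(x)             # linear trend removal
mean, std = x_detrended.mean(), x_detrended.std()  # time-domain statistics
f, psd = signal.welch(x_detrended, fs=fs)   # frequency-domain characterization
peak_hz = f[np.argmax(psd)]                 # dominant frequency, near 5 Hz
```

Digital filtering, the other preparation step the abstract mentions, would slot in between the detrend and the spectral estimate (e.g. `signal.butter` plus `signal.filtfilt`).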
Hayslett, H T
1991-01-01
Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
Statistical analysis of Hasegawa-Wakatani turbulence
Anderson, Johan; Hnat, Bogdan
2017-06-01
Resistive drift wave turbulence is a multipurpose paradigm that can be used to understand transport at the edge of fusion devices. The Hasegawa-Wakatani model captures the essential physics of drift turbulence while retaining the simplicity needed to gain a qualitative understanding of this process. We provide a theoretical interpretation of numerically generated probability density functions (PDFs) of intermittent events in Hasegawa-Wakatani turbulence with enforced equipartition of energy in large scale zonal flows, and small scale drift turbulence. We find that for a wide range of adiabatic index values, the stochastic component representing the small scale turbulent eddies of the flow, obtained from the autoregressive integrated moving average model, exhibits super-diffusive statistics, consistent with intermittent transport. The PDFs of large events (above one standard deviation) are well approximated by the Laplace distribution, while small events often exhibit a Gaussian character. Furthermore, there exists a strong influence of zonal flows, for example, via shearing and then viscous dissipation maintaining a sub-diffusive character of the fluxes.
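The distribution-comparison step, deciding whether a Laplace or Gaussian law better describes the fluctuations, can be sketched by fitting both by maximum likelihood and comparing log-likelihoods. The flux series below is synthetic Laplace noise; the paper performs this separately for events above and below one standard deviation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Synthetic heavy-tailed "flux" series standing in for the intermittent
# transport events of the paper.
flux = stats.laplace.rvs(scale=1.0, size=5000, random_state=rng)

# Maximum-likelihood fits of both candidate distributions.
loc_l, scale_l = stats.laplace.fit(flux)
loc_n, scale_n = stats.norm.fit(flux)

# Compare total log-likelihoods; the larger one fits better.
ll_laplace = stats.laplace.logpdf(flux, loc_l, scale_l).sum()
ll_normal = stats.norm.logpdf(flux, loc_n, scale_n).sum()
```

Selecting events above one standard deviation (`flux[np.abs(flux) > flux.std()]`) before fitting would reproduce the abstract's large-event analysis.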
Book review: Statistical Analysis and Modelling of Spatial Point Patterns
DEFF Research Database (Denmark)
Møller, Jesper
2009-01-01
Statistical Analysis and Modelling of Spatial Point Patterns by J. Illian, A. Penttinen, H. Stoyan and D. Stoyan. Wiley (2008), ISBN 9780470014912.
Statistical Modelling of Wind Profiles - Data Analysis and Modelling
DEFF Research Database (Denmark)
Jónsson, Tryggvi; Pinson, Pierre
The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.
Sensitivity analysis of ranked data: from order statistics to quantiles
Heidergott, B.F.; Volk-Makarewicz, W.
2015-01-01
In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
The Statistical Analysis of Failure Time Data
Kalbfleisch, John D
2011-01-01
Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulations in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.
NUCLEI SHAPE ANALYSIS, A STATISTICAL APPROACH
Directory of Open Access Journals (Sweden)
Alberto Nettel-Aguirre
2011-05-01
Full Text Available The method presented in our paper suggests the use of Functional Data Analysis (FDA techniques in an attempt to characterise the nuclei of two types of cells: Cancer and non-cancer, based on their 2 dimensional profiles. The characteristics of the profile itself, as traced by its X and Y coordinates, their first and second derivatives, their variability and use in characterization are the main focus of this approach which is not constrained to star shaped nuclei. Findings: Principal components created from the coordinates relate to shape with significant differences between nuclei type. Characterisations for each type of profile were found.
Baltic sea algae analysis using Bayesian spatial statistics methods
Directory of Open Access Journals (Sweden)
Eglė Baltmiškytė
2013-03-01
Full Text Available Spatial statistics is the field of statistics dealing with the analysis of spatially distributed data. Recently, Bayesian methods have often been applied to statistical data analysis. A spatial data model for predicting algae quantity in the Baltic Sea is built and described in this article. Black Carrageen is the dependent variable, and depth, sand, pebble and boulders are the independent variables in the described model. Two models with different covariance functions (Gaussian and exponential) are built to determine the better-fitting model for algae quantity prediction. Unknown model parameters are estimated and the Bayesian kriging posterior predictive distribution is computed in the OpenBUGS modelling environment using Bayesian spatial statistics methods.
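The classical (non-Bayesian) kriging predictor that underlies such a model can be sketched in plain numpy; in the full Bayesian treatment the covariance parameters `sigma2` and `rho` would be posterior draws rather than fixed values, and the coordinates and values below are invented:

```python
import numpy as np

def simple_krige(coords, values, targets, sigma2=1.0, rho=2.0, mu=None):
    """Simple kriging with an exponential covariance C(h) = sigma2*exp(-h/rho).

    Non-Bayesian counterpart of the article's Bayesian kriging; sigma2 and
    rho are assumed known here, whereas the article estimates them.
    """
    if mu is None:
        mu = values.mean()
    # Covariance matrix among observation sites (tiny jitter for stability).
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    K = sigma2 * np.exp(-d / rho) + 1e-10 * np.eye(len(coords))
    # Covariances between observation sites and prediction targets.
    d0 = np.linalg.norm(coords[:, None, :] - targets[None, :, :], axis=-1)
    k = sigma2 * np.exp(-d0 / rho)
    w = np.linalg.solve(K, k)            # kriging weights
    return mu + w.T @ (values - mu)      # predicted field at the targets

coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
values = np.array([2.0, 3.0, 1.0, 2.5])
pred = simple_krige(coords, values, np.array([[0.5, 0.5]]))
```

By symmetry, the prediction at the centre of this unit square equals the mean of the four corner values, and predicting at an observation site returns (essentially) the observed value.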
A statistical analysis of UK financial networks
Chu, J.; Nadarajah, S.
2017-04-01
In recent years, with a growing interest in big or large datasets, there has been a rise in the application of large graphs and networks to financial big data. Much of this research has focused on the construction and analysis of the network structure of stock markets, based on the relationships between stock prices. Motivated by Boginski et al. (2005), who studied the characteristics of a network structure of the US stock market, we construct network graphs of the UK stock market using the same method. We fit four distributions to the degree density of the vertices from these graphs, the Pareto I, Fréchet, lognormal, and generalised Pareto distributions, and assess the goodness of fit. Our results show that the degree density of the complements of the market graphs, constructed using a negative threshold value close to zero, can be fitted well with the Fréchet and lognormal distributions.
Statistical Performance Analysis and Modeling Techniques for Nanometer VLSI Designs
Shen, Ruijing; Yu, Hao
2012-01-01
Since process variation and chip performance uncertainties have become more pronounced as technologies scale down into the nanometer regime, accurate and efficient modeling or characterization of variations from the device to the architecture level have become imperative for the successful design of VLSI chips. This book provides readers with tools for variation-aware design methodologies and computer-aided design (CAD) of VLSI systems, in the presence of process variations at the nanometer scale. It presents the latest developments for modeling and analysis, with a focus on statistical interconnect modeling, statistical parasitic extractions, statistical full-chip leakage and dynamic power analysis considering spatial correlations, statistical analysis and modeling for large global interconnects and analog/mixed-signal circuits. Provides readers with timely, systematic and comprehensive treatments of statistical modeling and analysis of VLSI systems with a focus on interconnects, on-chip power grids and ...
Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.
Rudert, Thomas; Lohmann, Gabriele
2008-12-01
To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrasts. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined into a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment and the methods are compared to conjunction analysis approaches for test-statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test-statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
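The second-level idea, evaluating propositional logic over posterior probability maps, can be sketched elementwise. Under an independence assumption between the two effects (a simplification; the article works with the joint posterior), logical connectives map to probability algebra:

```python
import numpy as np

rng = np.random.default_rng(9)
# Two hypothetical posterior probability maps (PPMs) over 1000 voxels,
# i.e. P(effect present) per voxel for two contrasts (invented values).
ppm_a = rng.uniform(size=1000)
ppm_b = rng.uniform(size=1000)

# Assuming independence of the two effects per voxel:
p_conj = ppm_a * ppm_b                     # A AND B
p_disj = ppm_a + ppm_b - ppm_a * ppm_b     # A OR B
p_impl = 1 - ppm_a + ppm_a * ppm_b         # A IMPLIES B (= NOT A OR (A AND B))

# Voxels where the conjunction of both effects is highly probable.
active = p_conj > 0.95
```

Any other propositional formula reduces to combinations of these three operations, which is what makes the approach a full propositional logic rather than conjunction only.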
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
CORSSA: The Community Online Resource for Statistical Seismicity Analysis
Michael, Andrew J.; Wiemer, Stefan
2010-01-01
Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.
Data Analysis & Statistical Methods for Command File Errors
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by them. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
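The multiple-regression step mentioned above can be sketched with numpy least squares; the per-month "operations" data below are entirely hypothetical stand-ins for the JPL dataset, with known coefficients so the recovery can be checked:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60  # hypothetical months of operations data
X = np.column_stack([
    np.ones(n),                   # intercept
    rng.integers(20, 120, n),     # files radiated
    rng.integers(5, 60, n),       # mean commands per file
    rng.uniform(1, 10, n),        # subjective workload score
])
beta_true = np.array([0.5, 0.02, 0.01, 0.3])
errors = X @ beta_true + rng.normal(0, 0.5, n)   # observed error counts

# Ordinary least-squares multiple regression and its R^2.
beta, *_ = np.linalg.lstsq(X, errors, rcond=None)
resid = errors - X @ beta
r2 = 1 - resid.var() / errors.var()
```

For genuine count data a Poisson (maximum-likelihood) regression would usually be preferred over ordinary least squares; the abstract mentions both techniques.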
Statistical Analysis of Research Data | Center for Cancer Research
Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 12-13, 2017 from 9:00 AM – 5:00 PM at the Natcher Conference Center, Balcony A on the Bethesda campus. SARD is designed to provide an overview of the general principles of statistical analysis of research data. The course will be taught by Paul W. Thurman of Columbia University.
Method for statistical data analysis of multivariate observations
Gnanadesikan, R
1997-01-01
A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of inte
Statistical evaluation of diagnostic performance topics in ROC analysis
Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E
2016-01-01
Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...
Online Statistical Modeling (Regression Analysis) for Independent Responses
Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus
2017-06-01
Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. Rich varieties of advanced and recent statistical models are mostly available in open source software (one of them being R). However, these advanced statistical modelling tools are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including modelling using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible in our online Virtual Statistics Laboratory. The web interface makes statistical modelling easier to apply and easier to compare models in order to find the most appropriate one for the data.
An Industrial Analysis for Integrating Business Subjects.
Kapusinski, Albert T.
1986-01-01
Describes the industrial analysis seminar at Caldwell College (New Jersey), which was designed to be a capstone course for undergraduate business majors, allowing them to bring business topics into focus by using all their collected business acumen: accounting, marketing, management, economics, law, etc. (CT)
Directory of Open Access Journals (Sweden)
Jesús Vega Encabo
2015-11-01
Full Text Available In this paper, I claim that subjectivity is a way of being that is constituted through a set of practices in which the self is subject to the dangers of fictionalizing and plotting her life and self-image. I examine some ways of becoming subject through narratives and through theatrical performance before others. Through these practices, a real and active subjectivity is revealed, capable of self-knowledge and self-transformation.
Analysis of thrips distribution: application of spatial statistics and Kriging
John Aleong; Bruce L. Parker; Margaret Skinner; Diantha Howard
1991-01-01
Kriging is a statistical technique that provides predictions for spatially and temporally correlated data. Observations of thrips distribution and density in Vermont soils are made in both space and time. Traditional statistical analysis of such data assumes that the counts taken over space and time are independent, which is not necessarily true. Therefore, to analyze...
Attitudes and Achievement in Statistics: A Meta-Analysis Study
Emmioglu, Esma; Capa-Aydin, Yesim
2012-01-01
This study examined the relationships among statistics achievement and four components of attitudes toward statistics (Cognitive Competence, Affect, Value, and Difficulty) as assessed by the SATS. Meta-analysis results revealed that the size of relationships differed by the geographical region in which the studies were conducted as well as by the…
The Importance of Statistical Modeling in Data Analysis and Inference
Rollins, Derrick, Sr.
2017-01-01
Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…
Guidelines for Statistical Analysis of Percentage of Syllables Stuttered Data
Jones, Mark; Onslow, Mark; Packman, Ann; Gebski, Val
2006-01-01
Purpose: The purpose of this study was to develop guidelines for the statistical analysis of percentage of syllables stuttered (%SS) data in stuttering research. Method: Data on %SS from various independent sources were used to develop a statistical model to describe this type of data. On the basis of this model, %SS data were simulated with…
THE SUBJECT OF STATISTICS IN NATURAL SCIENCE CURRICULA: A CASE STUDY
Directory of Open Access Journals (Sweden)
HYBŠOVÁ, Aneta
2015-03-01
Statistics is considered to be an indispensable part of a wide range of curricula across the globe, natural science curricula included. Teachers and curriculum developers are typically confronted with four questions with regard to the role and position of statistics in a curriculum: (1) how to integrate statistics in the curriculum; (2) which topics to cover and in what detail; (3) how much time to allocate to statistics in a curriculum; and (4) how to organize a course and which study materials to select. This paper addresses these four questions through a case study: four curricula at Charles University, Prague, Czech Republic, are compared in terms of how they address these four questions. Placing this comparison in a framework of cognitive load theory and two decades of research inspired by this theory, this paper concludes with a number of guidelines for addressing the aforementioned four questions when designing a curriculum.
Statistical Analysis of the Exchange Rate of Bitcoin: e0133678
National Research Council Canada - National Science Library
Jeffrey Chu; Saralees Nadarajah; Stephen Chan
2015-01-01
Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar...
Statistical Analysis for Grinding Mechanism of Fine Ceramic Material
National Research Council Canada - National Science Library
NISHIOKA, Takao; TANAKA, Yoshio; YAMAKAWA, Akira; MIYAKE, Masaya
1994-01-01
… Statistical analysis was conducted on the specific grinding energy and stock removal rate with respect to the maximum grain depth of cut by a new method of directly evaluating successive cutting point spacing…
Statistical Analysis of a Method to Predict Drug-Polymer Miscibility
DEFF Research Database (Denmark)
Knopp, Matthias Manne; Olesen, Niels Erik; Huang, Yanbin
2016-01-01
In this study, a method proposed to predict drug-polymer miscibility from differential scanning calorimetry measurements was subjected to statistical analysis. The method is relatively fast and inexpensive and has gained popularity as a result of the increasing interest in the formulation of drugs as amorphous solid dispersions. However, it does not include a standard statistical assessment of the experimental uncertainty by means of a confidence interval. In addition, it applies a routine mathematical operation known as "transformation to linearity", which previously has been shown to be subject to a substantial bias. The statistical analysis performed in this present study revealed that the mathematical procedure associated with the method is not only biased, but also too uncertain to predict drug-polymer miscibility at room temperature. Consequently, the statistical inference based on the mathematical…
[Statistical analysis using freely-available "EZR (Easy R)" software].
Kanda, Yoshinobu
2015-10-01
Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical function of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and assure that the statistical process is overseen by a supervisor.
Using multivariate statistical analysis to assess changes in water ...
African Journals Online (AJOL)
Multivariate statistical analysis was used to investigate changes in water chemistry at 5 river sites in the Vaal Dam catchment … analysis (CCA) showed that the environmental variables used in the analysis, discharge and month of sampling, explained …
Statistical Learning in Specific Language Impairment: A Meta-Analysis
Lammertink, Imme; Boersma, Paul; Wijnen, Frank; Rispens, Judith
2017-01-01
Purpose: The current meta-analysis provides a quantitative overview of published and unpublished studies on statistical learning in the auditory verbal domain in people with and without specific language impairment (SLI). The database used for the meta-analysis is accessible online and open to updates (Community-Augmented Meta-Analysis), which…
Detecting errors in micro and trace analysis by using statistics
DEFF Research Database (Denmark)
Heydorn, K.
1993-01-01
… to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied to results for chlorine in freshwater from BCR certification analyses by highly competent analytical laboratories in the EC. Titration showed systematic errors of several percent, while radiochemical neutron activation analysis produced results without detectable bias.
[Appropriate usage of statistical analysis in eye research].
Ge, Jian
2013-02-01
To avoid data bias in clinical research, it is essential to carefully select statistical analyses suited to the particular research purpose and design. Ideally, teamwork among a statistician, a scientist, and a clinical specialist assures a reliable and scientific analysis of a study. The best way to analyze a study is to select appropriate statistical methods rather than unnecessarily complicated ones.
Advanced data analysis in neuroscience integrating statistical and computational models
Durstewitz, Daniel
2017-01-01
This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerful…
Basic statistical tools in research and data analysis
Directory of Open Access Journals (Sweden)
Zulfiqar Ali
2016-01-01
Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretation and reporting of the research findings. The statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.
Numeric computation and statistical data analysis on the Java platform
Chekanov, Sergei V
2016-01-01
Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...
Directory of Open Access Journals (Sweden)
Julio Michael Stern
2011-04-01
This article analyzes the role of entropy in Bayesian statistics, focusing on its use as a tool for detection, recognition and validation of eigen-solutions. “Objects as eigen-solutions” is a key metaphor of the cognitive constructivism epistemological framework developed by the philosopher Heinz von Foerster. Special attention is given to some objections to the concepts of probability, statistics and randomization posed by George Spencer-Brown, a figure of great influence in the field of radical constructivism.
Statistical Reform: Evidence-Based Practice, Meta-Analyses, and Single Subject Designs
Jenson, William R.; Clark, Elaine; Kircher, John C.; Kristjansson, Sean D.
2007-01-01
Evidence-based practice approaches to interventions have come of age and promise to provide a new standard of excellence for school psychologists. This article describes several definitions of evidence-based practice and the problems associated with traditional statistical analyses that rely on rejection of the null hypothesis for the…
Simulation Experiments in Practice: Statistical Design and Regression Analysis
Kleijnen, J.P.C.
2007-01-01
In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic DOE and regression analysis assume a single simulation response that is normally and independen...
THE SYSTEM OF STATISTICAL OBJECTIVE AND SUBJECTIVE INDICATORS OF MEASURING QUALITY OF LIFE
I. Romaniuk
2015-01-01
This article examines approaches to defining and measuring quality of life. Each approach to measuring the quality of life contains information that is not contained in the other measures. It describes the economic, subjective and social indicators. The strengths and weaknesses of those indicators are also analyzed.
ALGORITHM OF PRIMARY STATISTICAL ANALYSIS OF ARRAYS OF EXPERIMENTAL DATA
Directory of Open Access Journals (Sweden)
LAUKHIN D. V.
2017-02-01
Purpose. Construction of an algorithm for preliminary (primary) estimation of arrays of experimental data for further obtaining a mathematical model of the process under study. Methodology. The use of the main regularities of the theory of processing arrays of experimental values in the initial analysis of data. Originality. An algorithm for performing a primary statistical analysis of arrays of experimental data is given. Practical value. Development of methods for revealing statistically unreliable values in arrays of experimental data for the purpose of their subsequent detailed analysis and construction of a mathematical model of the studied processes.
Statistical Analysis of Data with Non-Detectable Values
Energy Technology Data Exchange (ETDEWEB)
Frome, E.L.
2004-08-26
Environmental exposure measurements are, in general, positive and may be subject to left censoring, i.e. the measured value is less than a "limit of detection". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. A basic problem of interest in environmental risk assessment is to determine if the mean concentration of an analyte is less than a prescribed action level. Parametric methods, used to determine acceptable levels of exposure, are often based on a two parameter lognormal distribution. The mean exposure level and/or an upper percentile (e.g. the 95th percentile) are used to characterize exposure levels, and upper confidence limits are needed to describe the uncertainty in these estimates. In certain situations it is of interest to estimate the probability of observing a future (or "missed") value of a lognormal variable. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left censored lognormal data are described and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on the 95th percentile (i.e. the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known but computational complexity has limited their use in routine data analysis with left censored data. The recent development of the R environment for statistical…
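The maximum likelihood approach described in this report can be sketched as follows: the log-likelihood sums the normal log-density on the log scale for detects, plus the normal log-CDF at the detection limit for non-detects. This is a minimal illustration on simulated data, not the report's actual code; `censored_lognormal_mle` and its arguments are assumed names.

```python
import numpy as np
from scipy import stats, optimize

def censored_lognormal_mle(values, detected):
    """ML fit of a two-parameter lognormal to left-censored data.
    `values` holds the measurement for detects and the detection limit
    for non-detects; `detected` is a boolean mask."""
    y = np.log(values)
    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)          # parameterize sigma > 0
        ll = stats.norm.logpdf(y[detected], mu, sigma).sum()
        ll += stats.norm.logcdf(y[~detected], mu, sigma).sum()  # censored part
        return -ll
    res = optimize.minimize(nll, x0=[y.mean(), 0.0], method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])      # (mu_hat, sigma_hat)

rng = np.random.default_rng(0)
true_mu, true_sigma, lod = 0.0, 1.0, 0.5   # LOD censors roughly a quarter of draws
sample = rng.lognormal(true_mu, true_sigma, 500)
detected = sample > lod
values = np.where(detected, sample, lod)
mu_hat, sigma_hat = censored_lognormal_mle(values, detected)
```

The fitted parameters then feed the usual lognormal summaries (mean exposure, 95th percentile) via the standard formulas.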
PRECISE - pregabalin in addition to usual care: Statistical analysis plan
S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)
2016-01-01
Background: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double…
HistFitter software framework for statistical data analysis
Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.
2015-01-01
We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...
A Divergence Statistics Extension to VTK for Performance Analysis
Energy Technology Data Exchange (ETDEWEB)
Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
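The kind of discrepancy measure such a divergence engine computes can be illustrated with a discrete Kullback-Leibler divergence, one standard divergence between an empirical and a theoretical distribution. This is a generic sketch, not the VTK engine's API.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q) in nats: quantifies
    the discrepancy between an empirical distribution p and a theoretical
    one q. Akin to a distance, but asymmetric and without the triangle
    inequality, so not a true metric."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()        # accept raw counts or probabilities
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

observed = np.array([18, 22, 31, 29])          # empirical bin counts
uniform = np.array([0.25, 0.25, 0.25, 0.25])   # theoretical "ideal" model
d = kl_divergence(observed, uniform)           # 0 iff the distributions match
```

D(p || q) is zero exactly when the two distributions coincide and grows as the observed histogram departs from the model, which is the behavior a divergence statistic needs.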
Longitudinal data analysis a handbook of modern statistical methods
Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert
2008-01-01
Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint
Directory of Open Access Journals (Sweden)
Т. М. Бикова
2016-03-01
The purpose of this article is to consider questions of electronic subject analysis of documents and the methodical support of editing subject headings in the electronic catalog. Its main objective is to present a technique for editing the dictionary of subject headings, and to study and apply this technique in the work of higher education institution libraries. The object of research is the thesaurus of subject headings of the electronic catalog of the Scientific Library of Odessa I. I. Mechnikov National University. To improve the efficiency and quality of the search capabilities of the electronic catalog, constant work on its optimization is needed: technical editing of subject headings and the opening of new subject headings and subheadings. The Scientific Library developed and put into practice an instruction that regulates and standardizes the technique of editing subject headings. The main outcome of the work should be an improved level of bibliographic service to users and a rationalized workflow for systematizers. The research findings have practical value for library employees.
Adler, I D; Bootman, J; Favor, J; Hook, G; Schriever-Schwemmer, G; Welzl, G; Whorton, E; Yoshimura, I; Hayashi, M
1998-09-01
A workshop was held on September 13 and 14, 1993, at the GSF, Neuherberg, Germany, to start a discussion of experimental design and statistical analysis issues for three in vivo mutagenicity test systems, the micronucleus test in mouse bone marrow/peripheral blood, the chromosomal aberration tests in mouse bone marrow/differentiating spermatogonia, and the mouse dominant lethal test. The discussion has now come to conclusions which we would like to make generally known. Rather than dwell upon specific statistical tests which could be used for data analysis, serious consideration was given to test design. However, the test design, its power of detecting a given increase of adverse effects and the test statistics are interrelated. Detailed analyses of historical negative control data led to important recommendations for each test system. Concerning the statistical sensitivity parameters, a type I error of 0.05 (one tailed), a type II error of 0.20 and a dose related increase of twice the background (negative control) frequencies were generally adopted. It was recommended that sufficient observations (cells, implants) be planned for each analysis unit (animal) so that at least one adverse outcome (micronucleus, aberrant cell, dead implant) would likely be observed. The treated animal was the smallest unit of analysis allowed. On the basis of these general considerations the sample size was determined for each of the three assays. A minimum of 2000 immature erythrocytes/animal should be scored for micronuclei from each of at least 4 animals in each comparison group in the micronucleus assays. A minimum of 200 cells should be scored for chromosomal aberrations from each of at least 5 animals in each comparison group in the aberration assays. In the dominant lethal test, a minimum of 400 implants (40-50 pregnant females) are required per dose group for each mating period. The analysis unit for the dominant lethal test would be the treated male unless the background…
Towards proper sampling and statistical analysis of defects
Directory of Open Access Journals (Sweden)
Cetin Ali
2014-06-01
Advancements in applied statistics with great relevance to defect sampling and analysis are presented. Three main issues are considered: (i) proper handling of multiple defect types, (ii) relating sample data originating from polished inspection surfaces (2D) to finite material volumes (3D), and (iii) application of advanced extreme value theory in statistical analysis of block maximum data. Original and rigorous, but practical mathematical solutions are presented. Finally, these methods are applied to make predictions regarding defect sizes in a steel alloy containing multiple defect types.
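Issue (iii), block-maximum analysis, can be sketched with scipy: simulate defect sizes, take per-block maxima, fit a generalized extreme value (GEV) distribution, and read off a high quantile as a predicted extreme defect size. The data and parameter values below are invented for illustration and are not the paper's steel-alloy data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# 200 hypothetical inspection "blocks" of 50 simulated defect sizes each
defects = rng.exponential(scale=2.0, size=(200, 50))
block_max = defects.max(axis=1)

# Fit a generalized extreme value (GEV) distribution to the block maxima;
# exponential parents fall in the Gumbel domain (shape parameter near 0)
shape, loc, scale = stats.genextreme.fit(block_max)

# Predicted defect size exceeded by only 1 in 1000 blocks (a "return level")
ret_level = stats.genextreme.ppf(0.999, shape, loc, scale)
```

The fitted GEV quantile extrapolates beyond the observed maxima, which is the point of extreme value theory in defect-size prediction.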
Data analysis using the Gnu R system for statistical computation
Energy Technology Data Exchange (ETDEWEB)
Simone, James; /Fermilab
2011-07-01
R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.
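The chi-square minimization fits mentioned at the end of this abstract can be sketched in Python (the report itself works in R); the exponential decay model stands in for the generic shape of a 2-pt correlation function, and all names and values here are illustrative assumptions.

```python
import numpy as np
from scipy import optimize

def model(t, A, m):
    """Single exponential decay, the generic form of a 2-pt correlator."""
    return A * np.exp(-m * t)

def chi2(params, t, y, cinv):
    """chi^2 = (y - f)^T Cinv (y - f) for inverse covariance Cinv."""
    r = y - model(t, *params)
    return float(r @ cinv @ r)

# Synthetic "correlator" data with 1% relative noise
t = np.arange(1, 11, dtype=float)
true_A, true_m = 1.5, 0.4
rng = np.random.default_rng(3)
sigma = 0.01 * model(t, true_A, true_m)
y = model(t, true_A, true_m) + rng.normal(0.0, sigma)
cinv = np.diag(1.0 / sigma**2)   # diagonal covariance for this sketch

res = optimize.minimize(chi2, x0=[1.0, 0.5], args=(t, y, cinv),
                        method="Nelder-Mead")
A_hat, m_hat = res.x
```

A real lattice fit would use the full (non-diagonal) covariance estimated from the ensemble; the diagonal form here keeps the sketch short.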
Statistical analysis of fNIRS data: a comprehensive review.
Tak, Sungho; Ye, Jong Chul
2014-01-15
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
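The core of the standard t-test/SPM-style inference reviewed above is an ordinary least-squares GLM with a contrast t-statistic. The sketch below omits the serial-correlation and physiological-noise corrections a real fNIRS pipeline would add, and the regressor and channel signal are simulated.

```python
import numpy as np
from scipy import stats

def glm_t_test(y, X, c):
    """OLS fit of y = X beta + noise with a t-test on contrast c
    (massively-univariate GLM; no serial-correlation correction)."""
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof                       # residual variance
    se = np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)
    t = float(c @ beta / se)
    return t, 2 * stats.t.sf(abs(t), dof)              # two-sided p-value

rng = np.random.default_rng(4)
n = 200
stimulus = (np.arange(n) % 40 < 20).astype(float)      # boxcar task regressor
X = np.column_stack([stimulus, np.ones(n)])            # design matrix + intercept
y = 0.5 * stimulus + rng.normal(0, 0.3, n)             # simulated channel signal
t_stat, p = glm_t_test(y, X, np.array([1.0, 0.0]))     # test the task effect
```

In practice the boxcar would be convolved with a hemodynamic response function and the noise modeled as serially correlated, exactly the refinements the review surveys.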
Analysis of room transfer function and reverberant signal statistics
DEFF Research Database (Denmark)
Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn
2008-01-01
For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional octave smoothing, as with 1/3 octave analysis, may lead to RTF simplifications that can be useful for several audio applications, like room compensation, room modeling, and auralisation purposes. The aim of this work is to identify the relationship of optimal response … "anechoic" and "reverberant" audio speech signals, in order to model the alterations due to room acoustics. The above results are obtained from both in-situ room response measurements and controlled acoustical response simulations.
Content analysis of subjective experiences in partial epileptic seizures.
Johanson, Mirja; Valli, Katja; Revonsuo, Antti; Wedlund, Jan-Eric
2008-01-01
A new content analysis method for systematically describing the phenomenology of subjective experiences in connection with partial epileptic seizures is described. Forty patients provided 262 descriptions of subjective experience relative to their partial epileptic seizures. The results revealed that subjective experiences during seizures consist mostly of sensory and bodily sensations, hallucinatory experiences, and thinking. The majority of subjective experiences during seizures are bizarre and distorted; nevertheless, the patients are able to engage in adequate behavior. To the best of our knowledge, this is the first study for which detailed subjective seizure descriptions were collected immediately after each seizure and the first study in which the content of verbal reports of subjective experiences during seizures, including both the ictal and postictal experiences, has been analyzed in detail.
A novel statistic for genome-wide interaction analysis.
Directory of Open Access Journals (Sweden)
Xuesen Wu
2010-09-01
Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR<0.001 and 0.001…
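The classical logistic-regression benchmark that the novel statistic is compared against can be sketched as a likelihood-ratio test on an interaction term. This is the baseline method, not the authors' new statistic, and the genotype data are simulated for illustration.

```python
import numpy as np
from scipy import optimize, stats

def logistic_nll(beta, X, y):
    """Negative log-likelihood of a logistic regression model."""
    eta = X @ beta
    return float(np.sum(np.logaddexp(0.0, eta)) - y @ eta)

def lr_interaction_test(g1, g2, y):
    """Likelihood-ratio test for interaction between two loci: compare a
    main-effects model against one with a g1*g2 interaction term."""
    base = np.column_stack([np.ones_like(g1, float), g1, g2])
    full = np.column_stack([base, g1 * g2])
    fit = lambda X: optimize.minimize(logistic_nll, np.zeros(X.shape[1]),
                                      args=(X, y), method="BFGS").fun
    lr = 2.0 * (fit(base) - fit(full))      # chi-square with 1 df under H0
    return lr, stats.chi2.sf(lr, df=1)

rng = np.random.default_rng(2)
g1 = rng.integers(0, 3, 600)                # allele counts at locus 1
g2 = rng.integers(0, 3, 600)                # allele counts at locus 2
logit = -1.0 + 0.8 * g1 * g2                # risk driven purely by interaction
y = (rng.random(600) < 1 / (1 + np.exp(-logit))).astype(float)
lr_stat, p_value = lr_interaction_test(g1, g2, y)
```

Scanning all SNP pairs this way is what makes genome-wide interaction analysis computationally heavy, hence the interest in faster, more powerful statistics.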
Statistical Compilation of the ICT Sector and Policy Analysis | IDRC ...
International Development Research Centre (IDRC) Digital Library (Canada)
Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...
Statistical analysis of wind speed for electrical power generation in ...
African Journals Online (AJOL)
are employed to fit wind speed data of some selected sites in Northern Nigeria. This is because the design of wind energy conversion systems depends on the correct analysis of the site renewable energy resources. [13]. In addition, the statistical judgements are based on the accuracy in fitting the available data at the sites.
Using multivariate statistical analysis to assess changes in water ...
African Journals Online (AJOL)
Multivariate statistical analysis was used to investigate changes in water chemistry at 5 river sites in the Vaal Dam catchment, draining the Highveld grasslands. These grasslands receive more than 8 kg sulphur (S) ha-1·year-1 and 6 kg nitrogen (N) ha-1·year-1 via atmospheric deposition. It was hypothesised that between ...
A Statistical Analysis of Women's Perceptions on Politics and Peace ...
African Journals Online (AJOL)
This article is a statistical analysis of the perception that more women in politics would enhance peace building. The data was drawn from a comparative survey of 325 women and four men (community leaders) in the regions of the Niger Delta (Nigeria) and KwaZulu-Natal (South Africa). According to the findings, the ...
Statistical Analysis of Human Body Movement and Group Interactions in Response to Music
Desmet, Frank; Leman, Marc; Lesaffre, Micheline; de Bruyn, Leen
Quantification of time series that relate to physiological data is challenging for empirical music research. Up to now, most studies have focused on time-dependent responses of individual subjects in controlled environments. However, little is known about time-dependent responses of between-subject interactions in an ecological context. This paper provides new findings on the statistical analysis of group synchronicity in response to musical stimuli. Different statistical techniques were applied to time-dependent data obtained from an experiment on embodied listening in individual and group settings. Analyses of inter-group synchronicity are described. Dynamic Time Warping (DTW) and Cross Correlation Function (CCF) were found to be valid methods to estimate group coherence of the resulting movements. It was found that synchronicity of movements between individuals (human-human interactions) increases significantly in the social context. Moreover, Analysis of Variance (ANOVA) revealed that the type of music is the predominant factor in both the individual and the social context.
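As a sketch of the distance measure named in the abstract, the following is a minimal textbook DTW implementation applied to toy movement signals; the signals and names are illustrative, not the authors' data or code.

```python
# Minimal Dynamic Time Warping (DTW) sketch: a time-shifted copy of a
# gesture should be closer (in DTW distance) than unrelated movement.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 100)
s1 = np.sin(t)
s2 = np.sin(t + 0.3)                                 # same gesture, time-shifted
s3 = np.random.default_rng(0).normal(size=100)       # unrelated movement
# dtw_distance(s1, s2) is much smaller than dtw_distance(s1, s3)
```

In a group-synchronicity setting, such pairwise distances would be computed between all movement signals and aggregated into a coherence measure.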
Allen, Kirk
The Statistics Concept Inventory (SCI) is a multiple choice test designed to assess students' conceptual understanding of topics typically encountered in an introductory statistics course. This dissertation documents the development of the SCI from Fall 2002 up to Spring 2006. The first phase of the project essentially sought to answer the question: "Can you write a test to assess topics typically encountered in introductory statistics?" Book One presents the results utilized in answering this question in the affirmative. The bulk of the results present the development and evolution of the items, primarily relying on objective metrics to gauge effectiveness but also incorporating student feedback. The second phase boils down to: "Now that you have the test, what else can you do with it?" This includes an exploration of Cronbach's alpha, the most commonly used measure of test reliability in the literature. An online version of the SCI was designed, and its equivalency to the paper version is assessed. Adding an extra wrinkle to the online SCI, subjects rated their answer confidence. These results show a general positive trend between confidence and correct responses. However, some items buck this trend, revealing potential sources of misunderstandings, with comparisons offered to the extant statistics and probability educational research. The third phase is a re-assessment of the SCI: "Are you sure?" A factor analytic study favored a uni-dimensional structure for the SCI, although maintaining the likelihood of a deeper structure if more items can be written to tap similar topics. A shortened version of the instrument is proposed and shown to maintain a reliability nearly identical to that of the full instrument. Incorporating student feedback and a faculty topics survey, improvements to the items and recommendations for further research are proposed. The state of the concept inventory movement is assessed, to offer a comparison to the work presented
Multivariate statistical analysis of atom probe tomography data.
Parish, Chad M; Miller, Michael K
2010-10-01
The application of spectrum imaging multivariate statistical analysis methods, specifically principal component analysis (PCA), to atom probe tomography (APT) data has been investigated. The mathematical method of analysis is described and the results for two example datasets are analyzed and presented. The first dataset is from the analysis of a PM 2000 Fe-Cr-Al-Ti steel containing two different ultrafine precipitate populations. PCA properly describes the matrix and precipitate phases in a simple and intuitive manner. A second APT example is from the analysis of an irradiated reactor pressure vessel steel. Fine, nm-scale Cu-enriched precipitates having a core-shell structure were identified and qualitatively described by PCA. Advantages, disadvantages, and future prospects for implementing these data analysis methodologies for APT datasets, particularly with regard to quantitative analysis, are also discussed.
Using Multivariate Statistical Analysis for Grouping of State Forest Enterprises
Directory of Open Access Journals (Sweden)
Atakan Öztürk
2010-11-01
Full Text Available The purpose of this study was to investigate the possible uses of multivariate statistical analysis methods for grouping forest enterprises. This study involved 24 Forest Enterprises in the Eastern Black Sea Region. A total of 69 variables, classified as physical, economic, social, rural settlement, technical-managerial, and functional variables, were developed. Multivariate statistics such as factor, cluster, and discriminant analyses were used to classify the 24 Forest Enterprises. These enterprises were classified into 2 groups: 22 enterprises fell into the first group, while the remaining 2 formed the second group.
Network similarity and statistical analysis of earthquake seismic data
Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban
2017-09-01
We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We calculate the conditional probability of forthcoming occurrences of earthquakes in each region. The conditional probability of each event is compared with its stationary distribution.
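A minimal sketch of the spectral-divergence ingredient, assuming the common recipe of comparing Laplacian-spectrum histograms with the Jensen-Shannon divergence; the binning and toy graphs are illustrative choices, not the authors'.

```python
# Jensen-Shannon divergence between Laplacian-spectrum histograms of two
# graphs: a ring and a complete graph on 10 nodes give clearly different
# spectra, while a graph compared with itself gives divergence 0.
import numpy as np

def laplacian_spectrum_hist(A, bins=20):
    d = A.sum(axis=1)
    L = np.diag(d) - A
    w = np.clip(np.linalg.eigvalsh(L), 0.0, None)   # clip tiny negatives
    h, _ = np.histogram(w, bins=bins, range=(0, len(A)))
    return h / h.sum()

def jensen_shannon(p, q):
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

n = 10
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
complete = np.ones((n, n)) - np.eye(n)

h_ring = laplacian_spectrum_hist(ring)
h_complete = laplacian_spectrum_hist(complete)
```

Pairwise divergences of this kind can then feed a standard hierarchical clustering.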
MOSI: Multimodal Corpus of Sentiment Intensity and Subjectivity Analysis in Online Opinion Videos
Zadeh, Amir; Zellers, Rowan; Pincus, Eli; Morency, Louis-Philippe
2016-01-01
People are sharing their opinions, stories and reviews through online video sharing websites every day. Studying sentiment and subjectivity in these opinion videos is attracting growing attention from academia and industry. While sentiment analysis has been successful for text, it is an understudied research question for videos and multimedia content. The biggest setbacks for studies in this direction are the lack of a proper dataset, methodology, baselines and statistical analysis of how inf...
Otake, Y.; Murphy, R. J.; Grupp, R. B.; Sato, Y.; Taylor, R. H.; Armand, M.
2015-03-01
A robust atlas-to-subject registration using a statistical deformation model (SDM) is presented. The SDM uses statistics of voxel-wise displacement learned from pre-computed deformation vectors of a training dataset. This allows an atlas instance to be directly translated into an intensity volume and compared with a patient's intensity volume. Rigid and nonrigid transformation parameters were simultaneously optimized via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), with image similarity used as the objective function. The algorithm was tested on CT volumes of the pelvis from 55 female subjects. A performance comparison of the CMA-ES and Nelder-Mead downhill simplex optimization algorithms with the mutual information and normalized cross correlation similarity metrics was conducted. Simulation studies using synthetic subjects were performed, as well as leave-one-out cross validation studies. Both studies suggested that mutual information and CMA-ES achieved the best performance. The leave-one-out test demonstrated, on average, 4.13 mm error with respect to the true displacement field and 26,102 function evaluations in 180 seconds.
Explorations in statistics: the analysis of ratios and normalized data.
Curran-Everett, Douglas
2013-09-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques, including analysis of covariance, are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
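The peril the abstract describes can be reproduced numerically: when the line does not pass through the origin, the ratio drifts with the denominator while ordinary regression recovers the relationship. Synthetic data, illustrative only.

```python
# If y = a + b*x with a != 0, the "normalised" ratio y/x varies
# systematically with x, while least-squares regression recovers the
# slope and intercept directly.
import numpy as np

rng = np.random.default_rng(42)
x = rng.uniform(1, 10, 200)                       # denominator variable
y = 3.0 + 2.0 * x + rng.normal(0, 0.1, 200)       # line NOT through the origin

ratio = y / x                                     # misleading: decreases with x
slope, intercept = np.polyfit(x, y, 1)            # recovers ~2.0 and ~3.0
```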
Statistical analysis and interpolation of compositional data in materials science.
Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M
2015-02-09
Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
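One standard CDA device for the constant-sum problem described here is the centred log-ratio (clr) transform, which moves compositions off the simplex before Euclidean statistics are applied. A minimal sketch with toy compositions, not the paper's thin-film data:

```python
# Centred log-ratio (clr) transform: log(x_i / geometric_mean(x)).
# clr coordinates sum to zero per composition and are invariant to
# rescaling, so Euclidean statistics become meaningful.
import numpy as np

def clr(x):
    """Rows of x are compositions of strictly positive parts."""
    x = np.asarray(x, float)
    g = np.exp(np.mean(np.log(x), axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)

comp = np.array([[0.70, 0.20, 0.10],
                 [0.55, 0.30, 0.15]])   # e.g. atomic fractions, rows sum to 1
z = clr(comp)
```

Rescaling a composition (e.g. reporting it in percent instead of fractions) leaves the clr coordinates unchanged, which is exactly the invariance the abstract calls for.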
Feature-Based Statistical Analysis of Combustion Simulation Data
Energy Technology Data Exchange (ETDEWEB)
Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T
2011-11-18
We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval
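The kind of frequency-domain estimate SPA computes can be sketched with a plain NumPy periodogram of a noisy sinusoid; SPA itself is a separate package, so this is only an illustration of the idea.

```python
# Raw periodogram of a noisy 5 Hz sinusoid sampled at 100 Hz: the
# dominant spectral peak recovers the signal frequency.
import numpy as np

fs = 100.0                              # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)            # 10 s record
x = (np.sin(2 * np.pi * 5.0 * t)
     + 0.5 * np.random.default_rng(0).normal(size=t.size))

X = np.fft.rfft(x - x.mean())           # remove mean, one-sided FFT
power = np.abs(X) ** 2 / x.size         # raw periodogram
freqs = np.fft.rfftfreq(x.size, d=1 / fs)

peak = freqs[np.argmax(power)]          # ~5 Hz
```

In practice the raw periodogram would be smoothed with a window (Hamming, Bartlett, etc.), as the abstract notes, to reduce estimator variance.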
Open Access Publishing Trend Analysis: Statistics beyond the Perception
Poltronieri, Elisabetta; Bravo, Elena; Curti, Moreno; Ferri, Maurizio; Mancini, Cristina
2016-01-01
Introduction: The purpose of this analysis was twofold: to track the number of open access journals acquiring impact factor, and to investigate the distribution of subject categories pertaining to these journals. As a case study, journals in which the researchers of the National Institute of Health (Istituto Superiore di Sanità) in Italy have…
Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)
Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee
2010-12-01
Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review
CORSSA: Community Online Resource for Statistical Seismicity Analysis
Zechar, J. D.; Hardebeck, J. L.; Michael, A. J.; Naylor, M.; Steacy, S.; Wiemer, S.; Zhuang, J.
2011-12-01
Statistical seismology is critical to the understanding of seismicity, the evaluation of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology, especially to those aspects with great impact on public policy, statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA, www.corssa.org). We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. CORSSA now includes seven articles with an additional six in draft form, along with forums for discussion, a glossary, and news about upcoming meetings, special issues, and recent papers. Each article is peer-reviewed and presents a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles include: introductions to both CORSSA and statistical seismology; basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. We have also begun curating a collection of statistical seismology software packages.
Statistical Analysis of SAR Sea Clutter for Classification Purposes
Directory of Open Access Journals (Sweden)
Jaime Martín-de-Nicolás
2014-09-01
Full Text Available Statistical analysis of radar clutter has long been one of the topics attracting the most research effort. These studies usually focused on finding the statistical models that best fitted the clutter distribution; however, the goal of this work is not the modeling of the clutter but the study of the suitability of statistical parameters for carrying out sea state classification. In order to achieve this objective and provide some relevance to this study, an important set of maritime and coastal Synthetic Aperture Radar (SAR) data is considered. Due to the nature of data acquisition by SAR sensors, speckle noise is inherent in these data, and a specific study of how this noise affects the clutter distribution is also performed in this work. A thorough study of the most suitable statistical parameters, as well as the most adequate classifier, is carried out, achieving excellent results in terms of classification success rates. These concluding results confirm that sea state classification is not only viable but also successful using statistical parameters different from those of the best modeling distribution and applying a speckle filter, which allows a better characterization of the parameters used to distinguish between different sea states.
Wavelet analysis in ecology and epidemiology: impact of statistical tests.
Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario
2014-02-06
Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from the original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise, which are nevertheless the methods currently used in the wide majority of wavelet applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used, such as the hidden Markov model algorithm and the 'beta-surrogate' method.
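A red-noise null model of the kind discussed here can be sketched as an AR(1) surrogate matched to the series' variance and lag-1 autocorrelation; this is a generic recipe, not the authors' implementation.

```python
# AR(1) "red noise" surrogate: synthetic series sharing the original's
# variance and lag-1 autocorrelation, a common (and, per the paper,
# often inadequate) null model for wavelet significance testing.
import numpy as np

def ar1_surrogate(x, rng):
    x = np.asarray(x, float) - np.mean(x)
    r1 = np.corrcoef(x[:-1], x[1:])[0, 1]           # lag-1 autocorrelation
    innov_sd = np.std(x) * np.sqrt(1.0 - r1**2)     # matching innovation scale
    s = np.empty(x.size)
    s[0] = rng.normal(0.0, np.std(x))
    for t in range(1, x.size):
        s[t] = r1 * s[t - 1] + rng.normal(0.0, innov_sd)
    return s

rng = np.random.default_rng(3)
x = np.empty(500)
x[0] = 0.0
for t in range(1, 500):                  # toy "observed" series: AR(1), phi = 0.8
    x[t] = 0.8 * x[t - 1] + rng.normal()
surrogate = ar1_surrogate(x, rng)
```

Many such surrogates would be generated to build a null distribution for the wavelet statistic of interest.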
Vector-field statistics for the analysis of time varying clinical gait data.
Donnelly, C J; Alexander, C; Pataky, T C; Stannage, K; Reid, S; Robinson, M A
2017-01-01
In clinical settings, the time varying analysis of gait data relies heavily on the experience of the individual(s) assessing these biological signals. Though three-dimensional kinematics are recognised as time varying waveforms (1D), exploratory statistical analysis of these data are commonly carried out with multiple discrete or 0D dependent variables. In the absence of an a priori 0D hypothesis, clinicians are at risk of making type I and II errors in their analysis of time varying gait signatures in the event statistics are used in concert with preferred subjective clinical assessment methods. The aim of this communication was to determine if vector field waveform statistics were capable of providing quantitative corroboration to practically significant differences in time varying gait signatures as determined by two clinically trained gait experts. The case study was a left hemiplegic Cerebral Palsy (GMFCS I) gait patient following a botulinum toxin (BoNT-A) injection to their left gastrocnemius muscle. When comparing subjective clinical gait assessments between two testers, they were in agreement with each other for 61% of the joint degrees of freedom and phases of motion analysed. Tester 1 and tester 2 were in agreement with the vector-field analysis for 78% and 53% of the kinematic variables analysed, respectively. When the subjective analyses of tester 1 and tester 2 were pooled together and then compared to the vector-field analysis, they were in agreement for 83% of the time varying kinematic variables analysed. These outcomes demonstrate that, in principle, vector-field statistics corroborate what a team of clinical gait experts would classify as practically meaningful pre- versus post-injection time varying kinematic differences. The potential for vector-field statistics to be used as a useful clinical tool for the objective analysis of time varying clinical gait data is established. Future research is recommended to assess the usefulness of vector-field analyses
Univariate statistical analysis of environmental (compositional) data: problems and possibilities.
Filzmoser, Peter; Hron, Karel; Reimann, Clemens
2009-11-15
For almost 30 years it has been known that compositional (closed) data have special geometrical properties. In environmental sciences, where the concentration of chemical elements in different sample materials is investigated, almost all datasets are compositional. In general, compositional data are parts of a whole which only give relative information. Data that sum up to a constant, e.g. 100 wt.% or 1,000,000 mg/kg, are the best-known example. It is widely neglected that the "closure" characteristic remains even if only one of all possible elements is measured; it is an inherent property of compositional data. No variable is free to vary independently of all the others. Existing transformations to "open" closed data are seldom applied. They are more complicated than a log transformation, and the relationship to the original data unit is lost. Results obtained when using classical statistical techniques for data analysis appeared reasonable, and the possible consequences of working with closed data were rarely questioned. Here the simple univariate case of data analysis is investigated. It can be demonstrated that data closure must be overcome prior to calculating even simple statistical measures like the mean or standard deviation, or prior to plotting graphs of the data distribution, e.g. a histogram. Some measures like the standard deviation (or the variance) make no statistical sense with closed data, and all statistical tests building on the standard deviation (or variance) will thus provide erroneous results if used with the original data.
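The closure effect described above is easy to demonstrate numerically: independent positive variables acquire spurious negative correlation once they are closed to a constant sum. Synthetic data, illustrative only.

```python
# Closure effect: three independent lognormal parts are essentially
# uncorrelated, but after closing each row to sum to 1 the parts
# become clearly negatively correlated.
import numpy as np

rng = np.random.default_rng(11)
raw = rng.lognormal(mean=0.0, sigma=0.5, size=(1000, 3))   # independent parts
closed = raw / raw.sum(axis=1, keepdims=True)              # rows now sum to 1

r_raw = np.corrcoef(raw[:, 0], raw[:, 1])[0, 1]            # ~ 0
r_closed = np.corrcoef(closed[:, 0], closed[:, 1])[0, 1]   # pushed negative
```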
STATISTICAL ANALYSIS OF THE HEAVY NEUTRAL ATOMS MEASURED BY IBEX
Energy Technology Data Exchange (ETDEWEB)
Park, Jeewoo; Kucharek, Harald; Möbius, Eberhard [Space Science Center and Department of Physics, University of New Hampshire, 8 College Road, Durham, NH 03824 (United States); Galli, André [Physics Institute, University of Bern, Bern 3012 (Switzerland); Livadiotis, George; Fuselier, Steve A.; McComas, David J., E-mail: jtl29@wildcats.unh.edu [Southwest Research Institute, P.O. Drawer 28510, San Antonio, TX 78228 (United States)
2015-10-15
We investigate the directional distribution of heavy neutral atoms in the heliosphere by using heavy neutral maps generated with the IBEX-Lo instrument over three years from 2009 to 2011. The interstellar neutral (ISN) O and Ne gas flow was found in the first-year heavy neutral map at 601 eV and its flow direction and temperature were studied. However, due to the low counting statistics, researchers have not treated the full sky maps in detail. The main goal of this study is to evaluate the statistical significance of each pixel in the heavy neutral maps to get a better understanding of the directional distribution of heavy neutral atoms in the heliosphere. Here, we examine three statistical analysis methods: the signal-to-noise filter, the confidence limit method, and the cluster analysis method. These methods allow us to exclude background from areas where the heavy neutral signal is statistically significant. These methods also allow the consistent detection of heavy neutral atom structures. The main emission feature expands toward lower longitude and higher latitude from the observational peak of the ISN O and Ne gas flow. We call this emission the extended tail. It may be an imprint of the secondary oxygen atoms generated by charge exchange between ISN hydrogen atoms and oxygen ions in the outer heliosheath.
The Digital Divide in Romania – A Statistical Analysis
Directory of Open Access Journals (Sweden)
Daniela BORISOV
2012-06-01
Full Text Available The digital divide is a subject of major importance in the current economic circumstances, in which Information and Communication Technologies (ICT) are seen as a significant determinant of domestic competitiveness and a contributor to better quality of life. The latest international reports regarding various aspects of ICT usage in modern society reveal a decrease of overall digital disparity towards the average trends of the worldwide ICT sector; this relates to the latest advances in mobile and computer penetration rates, both for personal use and for households/business. In Romania, the low starting point in the development of the economy and society in the ICT direction was, to some extent, compensated by the rapid annual growth of the last decade. Even with these dynamic developments, the statistical data still indicate a poor position in the European Union hierarchy; in this respect, the prospects of a rapid recovery from the low performance of Romanian ICT endowment and usage continue to be regarded as a challenge for progress in economic and societal terms. The paper presents several methods for assessing the current state of ICT-related aspects in terms of Internet usage, based on the latest data provided by international databases. The current position of the Romanian economy is judged using statistical methods based on variability measurements: descriptive statistics indicators, static measures of disparities and distance metrics.
GNSS Spoofing Detection Based on Signal Power Measurements: Statistical Analysis
Directory of Open Access Journals (Sweden)
V. Dehghanian
2012-01-01
Full Text Available A threat to GNSS receivers is posed by a spoofing transmitter that emulates authentic signals but with randomized code phase and Doppler values over a small range. Such spoofing signals can result in large navigational solution errors that are passed on to the unsuspecting user with potentially dire consequences. An effective spoofing detection technique is developed in this paper based on signal power measurements; it can be readily applied to present consumer-grade GNSS receivers with minimal firmware changes. An extensive statistical analysis is carried out based on formulating a multi-hypothesis detection problem. Expressions are developed to devise a set of thresholds required for signal detection and identification. The detection processing methods developed are further manipulated to exploit incidental antenna motion arising from user interaction with a GNSS handheld receiver to further enhance the detection performance of the proposed algorithm. The statistical analysis supports the effectiveness of the proposed spoofing detection technique under various multipath conditions.
Statistics in experimental design, preprocessing, and analysis of proteomics data.
Jung, Klaus
2011-01-01
High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.
Lifetime statistics of quantum chaos studied by a multiscale analysis
Di Falco, A.; Krauss, T. F.; Fratalocchi, A.
2012-04-30
In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.
Statistical Analysis of the Exchange Rate of Bitcoin.
Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen
2015-01-01
Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
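The distribution-fitting approach described in this abstract can be illustrated with a short sketch. The data below are synthetic heavy-tailed returns (not actual Bitcoin prices), only three of the many candidate families are fitted, and AIC is used as the comparison criterion; the generalized hyperbolic family itself is omitted for brevity.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-in for log-returns: heavy-tailed Student-t noise.
log_returns = rng.standard_t(df=3, size=2000) * 0.02

candidates = {"normal": stats.norm, "laplace": stats.laplace, "student_t": stats.t}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(log_returns)                 # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(log_returns, *params))
    aic[name] = 2 * len(params) - 2 * loglik       # lower AIC = better fit

best = min(aic, key=aic.get)
```

On data simulated from a t distribution, the heavy-tailed family dominates the normal, mirroring the paper's finding that flexible, heavy-tailed families outperform the Gaussian on exchange-rate returns.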
Statistical analysis of the students' behavior in algebra
Bisson, Gilles; Bronner, Alain; Gordon, Mirta; Nicaud, Jean-François; Renaudie, David
2003-01-01
We present an analysis of behaviors of students solving algebra exercises with the Aplusix software. We built a set of statistics from the protocols (records of the interactions between the student and the software) in order to evaluate the correctness of the calculation steps and of the corresponding solutions. We have particularly studied the activities of college students (sixteen and seventeen years old). This study emphasizes the didactic variables which are relevant for the types of sel...
Development of statistical analysis for single dose bronchodilators.
Salsburg, D
1981-12-01
When measurements developed for the diagnosis of patients are used to detect treatment effects in clinical trials with chronic disease, problems in definition of response and in the statistical distributions of those measurements within patients have to be resolved before the results of clinical studies can be analyzed. An example of this process is shown in the development of the analysis of single-dose bronchodilator trials.
Statistical and machine learning approaches for network analysis
Dehmer, Matthias
2012-01-01
Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation
Directory of Open Access Journals (Sweden)
Mark William Perlin
2015-01-01
Background: DNA mixtures of two or more people are a common type of forensic crime scene evidence. A match statistic that connects the evidence to a criminal defendant is usually needed for court. Jurors rely on this strength of match to help decide guilt or innocence. However, the reliability of unsophisticated match statistics for DNA mixtures has been questioned. Materials and Methods: The most prevalent match statistic for DNA mixtures is the combined probability of inclusion (CPI), used by crime labs for over 15 years. When testing 13 short tandem repeat (STR) genetic loci, the CPI^-1 value is typically around a million, regardless of DNA mixture composition. However, actual identification information, as measured by a likelihood ratio (LR), spans a much broader range. This study examined probability of inclusion (PI) mixture statistics for 517 locus experiments drawn from 16 reported cases and compared them with LR locus information calculated independently on the same data. The log(PI^-1) values were examined and compared with corresponding log(LR) values. Results: The LR and CPI methods were compared in case examples of false inclusion, false exclusion, a homicide, and criminal justice outcomes. Statistical analysis of crime laboratory STR data shows that inclusion match statistics exhibit a truncated normal distribution having zero center, with little correlation to actual identification information. By the law of large numbers (LLN), CPI^-1 increases with the number of tested genetic loci, regardless of DNA mixture composition or match information. These statistical findings explain why CPI is relatively constant, with implications for DNA policy, criminal justice, cost of crime, and crime prevention. Conclusions: Forensic crime laboratories have generated CPI statistics on hundreds of thousands of DNA mixture evidence items. However, this commonly used match statistic behaves like a random generator of inclusionary values, following the LLN
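The law-of-large-numbers effect the abstract describes, i.e. CPI^-1 growing multiplicatively with the number of tested loci regardless of mixture information, can be sketched numerically. The allele frequencies below are hypothetical, chosen only for illustration.

```python
# Combined probability of inclusion (CPI) for a DNA mixture:
# at each locus, PI = (sum of frequencies of all alleles seen in the mixture)^2,
# and CPI is the product of the per-locus PI values.
def cpi_inverse(per_locus_allele_freqs):
    cpi = 1.0
    for freqs in per_locus_allele_freqs:
        pi = sum(freqs) ** 2      # probability a random person is "included" at this locus
        cpi *= pi
    return 1.0 / cpi

# Each locus shows alleles covering ~60% of the population (hypothetical values).
locus = [0.25, 0.35]
print(cpi_inverse([locus] * 5))    # 5 loci
print(cpi_inverse([locus] * 13))   # 13 loci: CPI^-1 approaches a million
```

The inverse CPI grows with locus count alone, independently of how informative the match actually is, which is the paper's central criticism of the statistic.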
Metz, Anneke M
2008-01-01
There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunities for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study.
A statistical analysis of human lymphocyte transformation data.
Harina, B M; Gill, T J; Rabin, B S; Taylor, F H
1979-06-01
The lymphocytes from 107 maternal-foetal pairs were examined for their in vitro responsiveness, as determined by the incorporation of tritiated thymidine following stimulation with phytohaemagglutinin (PHA), candida, varicella, mumps, streptokinase-streptodornase (SKSD) and tetanus toxoid. The data were collected and analysed in two sequential groups (forty-seven and sixty) in order to determine whether the results were reproducible. The variable chosen for analysis was the difference (d) between the square roots of the isotope incorporation in the stimulated and control cultures because it gave the most symmetrical distribution of the data. The experimental error in the determination of maternal lymphocyte stimulation was 1.4-8.6% and of the foetal lymphocytes, 1.0-16.6%, depending upon the antigen or mitogen and its concentration. The data in the two sets of patients were statistically the same in forty-eight of the fifty-six analyses (fourteen antigen or mitogen concentrations in autologous and AB plasma for maternal and foetal lymphocytes). The statistical limits of the distribution of responses for stimulation or suppression were set by an analysis of variance taking two standard deviations from the mean as the limits. When these limits were translated into stimulation indices, they varied for each antigen or mitogen and for different concentrations of the same antigen. Thus, a detailed statistical analysis of a large volume of lymphocyte transformation data indicates that the technique is reproducible and offers a reliable method for determining when significant differences from control values are present.
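The analysis variable and the two-standard-deviation limits described above can be sketched as follows; the counts are simulated (Poisson) stand-ins for tritiated-thymidine incorporation, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical incorporation counts (cpm) for stimulated and control cultures.
# The analysis variable is d = sqrt(stimulated) - sqrt(control), chosen in the
# study because it gave the most symmetrical distribution of the data.
stimulated = rng.poisson(4000, size=60).astype(float)
control = rng.poisson(1000, size=60).astype(float)

d = np.sqrt(stimulated) - np.sqrt(control)

# Two standard deviations from the mean set the limits of the distribution
# of responses; values outside the band indicate stimulation or suppression.
lower = d.mean() - 2 * d.std(ddof=1)
upper = d.mean() + 2 * d.std(ddof=1)
```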
Directory of Open Access Journals (Sweden)
M. S. Karyaeva
2015-01-01
The paper is devoted to the analysis of the body of terms and terminological sources for further automation of constructing the thesaurus of a subject area, which in our work is poetics. Preliminary systematization of the terminology with a combined linguistic and statistical approach forms a body of semantically related concepts, used to automate the extraction of semantic relationships between terms that define the structure of the thesaurus of the specified field.
Carriot, Jérome; Jamali, Mohsen; Cullen, Kathleen E; Chacron, Maurice J
2017-01-01
There is accumulating evidence that the brain's neural coding strategies are constrained by natural stimulus statistics. Here we investigated the statistics of the time-varying envelope (i.e. a second-order stimulus attribute that is related to variance) of rotational and translational self-motion signals experienced by human subjects during everyday activities. We found that envelopes can reach large values across all six motion dimensions (~450 deg/s for rotations and ~4 G for translations). Unlike results obtained in other sensory modalities, the spectral power of envelope signals decreased slowly for low (<2 Hz) temporal frequencies and thus was not well fit by a power law. We next compared the spectral properties of envelope signals resulting from active and passive self-motion, as well as those resulting from signals obtained when the subject is absent (i.e. external stimuli). Our data suggest that different mechanisms underlie deviation from scale invariance in rotational and translational self-motion envelopes. Specifically, active self-motion and filtering by the human body cause deviation from scale invariance primarily for translational and rotational envelope signals, respectively. Finally, we used well-established models in order to predict the responses of peripheral vestibular afferents to natural envelope stimuli. We found that irregular afferents responded more strongly to envelopes than their regular counterparts. Our findings have important consequences for understanding the coding strategies used by the vestibular system to process natural second-order self-motion signals.
SAS and R data management, statistical analysis, and graphics
Kleinman, Ken
2009-01-01
An All-in-One Resource for Using SAS and R to Carry out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applications.
Institutions Function and Failure Statistic and Analysis of Wind Turbine
yang, Ma; Chengbing, He; Xinxin, Feng
Recently, as the installed capacity of wind turbines has grown continuously, wind power operation and research on reliability, maintenance, and repair have become key concerns. Failure analysis can support the operation of wind plants, the management of spare components and accessories, and the maintenance and repair of wind turbines. In this paper, from the perspective of wind plant structure and function, the common faults of each part of the plant are compiled and analyzed statistically in order to identify fault patterns, causes, and effects, from which corresponding countermeasures are put forward.
Statistical Analysis of Hypercalcaemia Data related to Transferability
DEFF Research Database (Denmark)
Frølich, Anne; Nielsen, Bo Friis
2005-01-01
In this report we describe statistical analysis related to a study of hypercalcaemia carried out in the Copenhagen area in the ten-year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters at conferences during the late eighties and early nineties. In this report we give a more detailed description of many of the analyses and provide some new results, primarily from simultaneous studies of several databases.
Statistical Analysis of 30 Years Rainfall Data: A Case Study
Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.
2017-07-01
Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. The daily rainfall data for a period of 30 years are used to characterize the normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall of the selected circle headquarters. Further, the various plotting-position formulae available are used to evaluate the return period of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The scientific results and the analysis paved the way to determine the proper onset and withdrawal of the monsoon, which are used for land preparation and sowing.
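As an example of the plotting-position approach mentioned above, the Weibull formula T = (n + 1)/m assigns a return period to each annual maximum of rank m (in descending order); the rainfall values below are hypothetical.

```python
# Weibull plotting-position estimate of return period for annual rainfall
# maxima: T = (n + 1) / m, where m is the rank in descending order.
# The rainfall values (mm) are hypothetical, for illustration only.
annual_max = [310.0, 280.5, 402.1, 265.0, 350.7, 295.3]

n = len(annual_max)
ranked = sorted(annual_max, reverse=True)
return_periods = {value: (n + 1) / rank
                  for rank, value in enumerate(ranked, start=1)}

# The largest observed annual maximum gets the longest return period.
print(return_periods[max(annual_max)])  # (6 + 1) / 1 = 7.0
```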
Validation of statistical models for creep rupture by parametric analysis
Energy Technology Data Exchange (ETDEWEB)
Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)
2012-01-15
Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: ► The paper discusses the validation of creep rupture models derived from statistical analysis. ► It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. ► The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. ► The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).
HistFitter: a flexible framework for statistical data analysis
Besjes, G J; Côté, D; Koutsman, A; Lorenz, J M; Short, D
2015-01-01
HistFitter is a software framework for statistical data analysis that has been used extensively in the ATLAS Collaboration to analyze data of proton-proton collisions produced by the Large Hadron Collider at CERN. Most notably, HistFitter has become a de-facto standard in searches for supersymmetric particles since 2012, with some usage for Exotic and Higgs boson physics. HistFitter coherently combines several statistics tools in a programmable and flexible framework that is capable of bookkeeping hundreds of data models under study using thousands of generated input histograms. HistFitter interfaces with the statistics tools HistFactory and RooStats to construct parametric models and to perform statistical tests of the data, and extends these tools in four key areas. The key innovations are to weave the concepts of control, validation and signal regions into the very fabric of HistFitter, and to treat these with rigorous methods. Multiple tools to visualize and interpret the results through a simple configura...
Through the Camera's Eye: A Phenomenological Analysis of Teacher Subjectivity
Greenwalt, Kyle A.
2008-01-01
The purpose of this study is to understand how preservice teachers experience a common university assignment: the videotaping and analysis of their own instruction. Using empirical data and the thought of the French philosophers Michel Foucault and Emmanuel Levinas, the study examines the difficulties in transitioning from student subjectivity to…
Multivariate statistical analysis a high-dimensional approach
Serdobolskii, V
2000-01-01
In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution in dependence of data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommendations ...
The bivariate statistical analysis of environmental (compositional) data.
Filzmoser, Peter; Hron, Karel; Reimann, Clemens
2010-09-01
Environmental sciences usually deal with compositional (closed) data. Whenever the concentration of chemical elements is measured, the data will be closed, i.e. the relevant information is contained in the ratios between the variables rather than in the data values reported for the variables. Data closure has severe consequences for statistical data analysis. Most classical statistical methods are based on the usual Euclidean geometry - compositional data, however, do not plot into Euclidean space because they have their own geometry which is not linear but curved in the Euclidean sense. This has severe consequences for bivariate statistical analysis: correlation coefficients computed in the traditional way are likely to be misleading, and the information contained in scatterplots must be used and interpreted differently from sets of non-compositional data. As a solution, the ilr transformation applied to a variable pair can be used to display the relationship and to compute a measure of stability. This paper discusses how this measure is related to the usual correlation coefficient and how it can be used and interpreted. Moreover, recommendations are provided for how the scatterplot can still be used, and which alternatives exist for displaying the relationship between two variables. Copyright 2010 Elsevier B.V. All rights reserved.
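For a single variable pair, the ilr transformation mentioned above reduces to a scaled log-ratio, on which relationships and stability can be assessed instead of on the raw (closed) concentrations. A minimal sketch, with hypothetical concentration values:

```python
import math

# For a pair of compositional variables (x, y), the ilr transform reduces to
# z = (1/sqrt(2)) * ln(x/y). Analysis is then done on z rather than on the
# raw values, whose ratios carry the relevant information.
def ilr_pair(x, y):
    return [math.log(xi / yi) / math.sqrt(2.0) for xi, yi in zip(x, y)]

# Hypothetical element concentrations measured on the same four samples:
cu = [1.2, 0.8, 2.5, 1.9]
zn = [3.4, 2.9, 5.1, 4.2]
z = ilr_pair(cu, zn)
```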
Statistical Analysis of Sport Movement Observations: the Case of Orienteering
Amouzandeh, K.; Karimipour, F.
2017-09-01
Study of movement observations is becoming more popular in several applications. Particularly, analyzing sport movement time series has been considered as a demanding area. However, most of the attempts made on analyzing movement sport data have focused on spatial aspects of movement to extract some movement characteristics, such as spatial patterns and similarities. This paper proposes statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.
The NIRS Analysis Package: noise reduction and statistical inference.
Directory of Open Access Journals (Sweden)
Tomer Fekete
Near infrared spectroscopy (NIRS) is a non-invasive optical imaging technique that can be used to measure cortical hemodynamic responses to specific stimuli or tasks. While analyses of NIRS data are normally adapted from established fMRI techniques, there are nevertheless substantial differences between the two modalities. Here, we investigate the impact of NIRS-specific noise, e.g., systemic (physiological) noise, motion-related artifacts, and serial autocorrelations, upon the validity of statistical inference within the framework of the general linear model. We present a comprehensive framework for noise reduction and statistical inference, which is custom-tailored to the noise characteristics of NIRS. These methods have been implemented in a public domain Matlab toolbox, the NIRS Analysis Package (NAP). Finally, we validate NAP using both simulated and actual data, showing marked improvement in the detection power and reliability of NIRS.
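As one concrete instance of the serial-autocorrelation problem raised above, a general linear model with AR(1) noise can be handled by estimating the autocorrelation from OLS residuals and prewhitening both the data and the design. This is a generic sketch on simulated data, not the NAP toolbox's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy GLM for an imaging time series: design matrix X (intercept + boxcar
# stimulus regressor) and AR(1)-autocorrelated noise, which violates the
# independence assumption behind naive OLS inference.
n = 200
box = np.tile(np.repeat([0.0, 1.0], 20), 5)        # on/off stimulus blocks
X = np.column_stack([np.ones(n), box])
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.7 * noise[t - 1] + rng.normal(scale=0.5)
y = X @ np.array([1.0, 2.0]) + noise               # true effect size = 2.0

beta = np.linalg.lstsq(X, y, rcond=None)[0]        # naive OLS fit
resid = y - X @ beta
rho = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])  # AR(1) estimate

yw = y[1:] - rho * y[:-1]                          # prewhitened data
Xw = X[1:] - rho * X[:-1]                          # prewhitened design
beta_w = np.linalg.lstsq(Xw, yw, rcond=None)[0]    # GLM on whitened series
```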
Statistical Analysis of Radio Propagation Channel in Ruins Environment
Directory of Open Access Journals (Sweden)
Jiao He
2015-01-01
The cellphone-based localization system for search and rescue in complex high-density ruins has attracted great interest in recent years, where the radio channel characteristics are critical for the design and development of such a system. This paper presents a spatial smoothing estimation via rotational invariance technique (SS-ESPRIT) for radio channel characterization of high-density ruins. The radio propagations at three typical mobile communication bands (0.9, 1.8, and 2 GHz) are investigated in two different scenarios. Channel parameters, such as arrival time, delays, and complex amplitudes, are statistically analyzed. Furthermore, a channel simulator is built based on these statistics. By comparative analysis of average excess delay and delay spread, the validation results show a good agreement between the measurements and channel modeling results.
Parametric analysis of the statistical model of the stick-slip process
Lima, Roberta; Sampaio, Rubens
2017-06-01
In this paper, a parametric analysis of the statistical model of the response of a dry-friction oscillator is performed. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-friction force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces in the system stochastic stick-slip oscillations. The system response is composed of a random sequence alternating stick- and slip-modes. With realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables: for example, the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to perform a parametric analysis of the statistical model of the stick-slip process.
International Conference on Modern Problems of Stochastic Analysis and Statistics
2017-01-01
This book brings together the latest findings in the area of stochastic analysis and statistics. The individual chapters cover a wide range of topics from limit theorems, Markov processes, nonparametric methods, actuarial science, population dynamics, and many others. The volume is dedicated to Valentin Konakov, head of the International Laboratory of Stochastic Analysis and its Applications, on the occasion of his 70th birthday. Contributions were prepared by the participants of the international conference "Modern problems of stochastic analysis and statistics", held at the Higher School of Economics in Moscow from May 29 to June 2, 2016. It offers a valuable reference resource for researchers and graduate students interested in modern stochastics.
Statistical analysis plan for the EuroHYP-1 trial
DEFF Research Database (Denmark)
Winkel, Per; Bath, Philip M; Gluud, Christian
2017-01-01
Score; (4) brain infarct size at 48 +/- 24 hours; (5) EQ-5D-5L score; and (6) WHODAS 2.0 score. Other outcomes are: the primary safety outcome, serious adverse events; and the incremental cost-effectiveness and cost-utility ratios. The analysis sets include (1) the intention-to-treat population and (2) the per-protocol population. The sample size is estimated to 800 patients (5% type 1 and 20% type 2 errors). All analyses are adjusted for the protocol-specified stratification variables (nationality of centre) and the minimisation variables. In the analysis, we use ordinal regression (the primary outcome), logistic regression (binary outcomes), the general linear model (continuous outcomes), and the Poisson or negative binomial model (rate outcomes). DISCUSSION: Major adjustments compared with the original statistical analysis plan encompass: (1) adjustment of analyses by nationality; (2) power...
Composition and Statistical Analysis of Biophenols in Apulian Italian EVOOs
Centonze, Carla; Grasso, Maria Elena; Latronico, Maria Francesca; Mastrangelo, Pier Francesco; Maffia, Michele; Ragusa, Andrea
2017-01-01
Extra-virgin olive oil (EVOO) is among the basic constituents of the Mediterranean diet. Its nutraceutical properties are due mainly, but not only, to a plethora of molecules with antioxidant activity known as biophenols. In this article, several biophenols were measured in EVOOs from South Apulia, Italy. Hydroxytyrosol, tyrosol and their conjugated structures to elenolic acid in different forms were identified and quantified by high performance liquid chromatography (HPLC) together with lignans, luteolin and α-tocopherol. The concentration of the analyzed metabolites was quite high in all the cultivars studied, but it was still possible to discriminate them through multivariate statistical analysis (MVA). Furthermore, principal component analysis (PCA) and orthogonal partial least-squares discriminant analysis (OPLS-DA) were also exploited for determining variances among samples depending on the interval time between harvesting and milling, on the age of the olive trees, and on the area where the olive trees were grown. PMID:29057813
Composition and Statistical Analysis of Biophenols in Apulian Italian EVOOs
Directory of Open Access Journals (Sweden)
Andrea Ragusa
2017-10-01
Full Text Available Extra-virgin olive oil (EVOO) is among the basic constituents of the Mediterranean diet. Its nutraceutical properties are due mainly, but not only, to a plethora of molecules with antioxidant activity known as biophenols. In this article, several biophenols were measured in EVOOs from South Apulia, Italy. Hydroxytyrosol, tyrosol and their conjugated structures to elenolic acid in different forms were identified and quantified by high performance liquid chromatography (HPLC) together with lignans, luteolin and α-tocopherol. The concentration of the analyzed metabolites was quite high in all the cultivars studied, but it was still possible to discriminate them through multivariate statistical analysis (MVA). Furthermore, principal component analysis (PCA) and orthogonal partial least-squares discriminant analysis (OPLS-DA) were also exploited for determining variances among samples depending on the interval time between harvesting and milling, on the age of the olive trees, and on the area where the olive trees were grown.
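The multivariate workflow in the two records above starts from PCA on concentration profiles. As a rough illustration (the data and variables below are hypothetical, not from the study), the first principal component of a tiny two-variable data set can be extracted with power iteration on the covariance matrix, in plain Python:

```python
import math

def principal_component(data):
    """Return the first principal component (a unit vector) of a small
    dataset via power iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix (d x d)
    cov = [[sum(centered[i][a] * centered[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    # Power iteration: repeated multiplication converges to the
    # eigenvector of the largest eigenvalue
    v = [1.0] * d
    for _ in range(200):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical example: two correlated "concentration" variables
samples = [[1.0, 2.1], [2.0, 3.9], [3.0, 6.2], [4.0, 8.1], [5.0, 9.8]]
pc1 = principal_component(samples)
```

For real chemometric data one would use a linear-algebra library; power iteration is shown only to keep the sketch self-contained.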
STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.
Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron
2015-08-07
Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
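The reusable-holdout mechanism described above (often called Thresholdout) can be sketched in a few lines. The threshold and noise scale below are illustrative choices, not the paper's calibrated parameters:

```python
import random

def thresholdout(train_vals, holdout_vals, threshold=0.04, sigma=0.01):
    """Reusable-holdout answer to a statistical query (here, a mean):
    report the training estimate unless it differs from the holdout
    estimate by more than a noisy threshold; in that case report a
    noised holdout estimate instead, limiting information leakage."""
    mean_t = sum(train_vals) / len(train_vals)
    mean_h = sum(holdout_vals) / len(holdout_vals)
    if abs(mean_t - mean_h) > threshold + random.gauss(0, sigma):
        return mean_h + random.gauss(0, sigma)
    return mean_t

random.seed(0)
# When training and holdout estimates agree, the training value is released
agreeing = thresholdout([0.52, 0.48, 0.50, 0.51], [0.49, 0.52, 0.50, 0.50])
# When they disagree (a symptom of overfitting), only a noisy holdout value leaks
disagreeing = thresholdout([0.90] * 8, [0.50] * 8)
```

The noise is what allows the same holdout set to be queried many times while preserving statistical validity.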
GIS-BASED SPATIAL STATISTICAL ANALYSIS OF COLLEGE GRADUATES EMPLOYMENT
Directory of Open Access Journals (Sweden)
R. Tang
2012-07-01
Full Text Available It is urgently necessary to be aware of the distribution and employment status of college graduates for the proper allocation of human resources and the overall arrangement of strategic industry. This study provides empirical evidence regarding the use of geocoding and spatial analysis for the distribution and employment status of college graduates, based on 2004-2008 data from the Wuhan Municipal Human Resources and Social Security Bureau, China. The spatio-temporal distribution of employment units was analyzed with geocoding using ArcGIS software, and the stepwise multiple linear regression method in SPSS software was used to predict employment and to identify spatially associated enterprise and professional demand in the future. The results show that the number of enterprises in the Wuhan East Lake High and New Technology Development Zone increased dramatically from 2004 to 2008 and tended to be distributed southeastward. Furthermore, the models built by statistical analysis suggest that the specialty graduates major in has an important impact on the number of graduates employed and the number of graduates engaging in pillar industries. In conclusion, the combination of GIS and statistical analysis, which helps to simulate the spatial distribution of employment status, is a potential tool for human resource development research.
Gis-Based Spatial Statistical Analysis of College Graduates Employment
Tang, R.
2012-07-01
It is urgently necessary to be aware of the distribution and employment status of college graduates for the proper allocation of human resources and the overall arrangement of strategic industry. This study provides empirical evidence regarding the use of geocoding and spatial analysis for the distribution and employment status of college graduates, based on 2004-2008 data from the Wuhan Municipal Human Resources and Social Security Bureau, China. The spatio-temporal distribution of employment units was analyzed with geocoding using ArcGIS software, and the stepwise multiple linear regression method in SPSS software was used to predict employment and to identify spatially associated enterprise and professional demand in the future. The results show that the number of enterprises in the Wuhan East Lake High and New Technology Development Zone increased dramatically from 2004 to 2008 and tended to be distributed southeastward. Furthermore, the models built by statistical analysis suggest that the specialty graduates major in has an important impact on the number of graduates employed and the number of graduates engaging in pillar industries. In conclusion, the combination of GIS and statistical analysis, which helps to simulate the spatial distribution of employment status, is a potential tool for human resource development research.
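The regression step in the two records above reduces, in its simplest form, to ordinary least squares on a single predictor. A minimal sketch with hypothetical counts (not the Wuhan data):

```python
def linregress(xs, ys):
    """Ordinary least-squares fit y = a + b*x, the building block of
    the stepwise multiple regression used in the study."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical data: year index vs. number of graduate-hiring enterprises
years = [0, 1, 2, 3, 4]            # 2004..2008
firms = [120, 150, 185, 230, 270]  # counts per year (illustrative)
a, b = linregress(years, firms)
predicted_2009 = a + b * 5
```

Stepwise regression repeats this kind of fit while adding or dropping predictors according to a significance criterion; the single-predictor case is shown only for compactness.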
Consolidity analysis for fully fuzzy functions, matrices, probability and statistics
Directory of Open Access Journals (Sweden)
Walaa Ibrahim Gabr
2015-03-01
Full Text Available The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in fully fuzzy environment. The solving of systems consolidity theory included its development for handling new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fraction of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and solving least-squares fuzzy linear equations. The approach demonstrated to be also applicable in a systematic way in handling new fuzzy probabilistic and statistical problems. This included extending the conventional probabilistic and statistical analysis for handling fuzzy random data. Application also covered the consolidity of fuzzy optimization problems. Various numerical examples solved have demonstrated that the new consolidity concept is highly effective in solving in a compact form the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the implementation of the suggested fuzzy mathematics can be easily embedded within normal mathematics through building special fuzzy functions library inside the computational Matlab Toolbox or using other similar software languages.
Statistical analysis of C/NOFS planar Langmuir probe data
Directory of Open Access Journals (Sweden)
E. Costa
2014-07-01
Full Text Available The planar Langmuir probe (PLP) onboard the Communication/Navigation Outage Forecasting System (C/NOFS) satellite has been monitoring ionospheric plasma densities and their irregularities with high resolution almost seamlessly since May 2008. Considering the recent changes in status of the C/NOFS mission, it may be interesting to summarize some statistical results from these measurements. PLP data from 2 different years (1 October 2008–30 September 2009 and 1 January 2012–31 December 2012) were selected for analysis. The first data set corresponds to solar minimum conditions and the second one is as close to solar maximum conditions of solar cycle 24 as possible at the time of the analysis. The results from the analysis show how the values of the standard deviation of the ion density which are greater than specified thresholds are statistically distributed as functions of several combinations of the following geophysical parameters: (i) solar activity, (ii) altitude range, (iii) longitude sector, (iv) local time interval, (v) geomagnetic latitude interval, and (vi) season.
The features of Drosophila core promoters revealed by statistical analysis
Directory of Open Access Journals (Sweden)
Trifonov Edward N
2006-06-01
Full Text Available Abstract Background Experimental investigation of transcription is still a very labor- and time-consuming process. Only a few transcription initiation scenarios have been studied in detail. The mechanism of interaction between basal machinery and promoter, in particular core promoter elements, is not known for the majority of identified promoters. In this study, we reveal various transcription initiation mechanisms by statistical analysis of 3393 nonredundant Drosophila promoters. Results Using Drosophila-specific position-weight matrices, we identified promoters containing TATA box, Initiator, Downstream Promoter Element (DPE), and Motif Ten Element (MTE), as well as core elements discovered in Human (TFIIB Recognition Element (BRE) and Downstream Core Element (DCE)). Promoters utilizing known synergetic combinations of two core elements (TATA_Inr, Inr_MTE, Inr_DPE, and DPE_MTE) were identified. We also establish the existence of promoters with potentially novel synergetic combinations: TATA_DPE and TATA_MTE. Our analysis revealed several motifs with the features of promoter elements, including possible novel core promoter element(s). Comparison of Human and Drosophila showed consistent percentages of promoters with TATA, Inr, DPE, and synergetic combinations thereof, as well as most of the same functional and mutual positions of the core elements. No statistical evidence of MTE utilization in Human was found. Distinct nucleosome positioning in particular promoter classes was revealed. Conclusion We present lists of promoters that potentially utilize the aforementioned elements/combinations. The number of these promoters is two orders of magnitude larger than the number of promoters in which transcription initiation was experimentally studied. The sequences are ready to be experimentally tested or used for further statistical analysis. The developed approach may be utilized for other species.
Kleijnen, J.P.C.
1995-01-01
This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
SAS and R data management, statistical analysis, and graphics
Kleinman, Ken
2014-01-01
An Up-to-Date, All-in-One Resource for Using SAS and R to Perform Frequent TasksThe first edition of this popular guide provided a path between SAS and R using an easy-to-understand, dictionary-like approach. Retaining the same accessible format, SAS and R: Data Management, Statistical Analysis, and Graphics, Second Edition explains how to easily perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferentia
Using R for Data Management, Statistical Analysis, and Graphics
Horton, Nicholas J
2010-01-01
This title offers quick and easy access to key element of documentation. It includes worked examples across a wide variety of applications, tasks, and graphics. "Using R for Data Management, Statistical Analysis, and Graphics" presents an easy way to learn how to perform an analytical task in R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation and vast number of add-on packages. Organized by short, clear descriptive entries, the book covers many common tasks, such as data management, descriptive summaries, inferential proc
STATISTIC ANALYSIS OF INTERNATIONAL TOURISM ON ROMANIAN SEASIDE
Directory of Open Access Journals (Sweden)
MIRELA SECARĂ
2010-01-01
Full Text Available In order to meet European and international touristic competition standards, the modernization, re-establishment and development of Romanian tourism are necessary, as well as the creation of modern touristic products that are competitive on this market. The use of modern methods of statistical analysis in the field of tourism facilitates the achievement of systems of information that are the instruments for: the evaluation of touristic demand and touristic supply, the follow-up of the touristic services of each touring form, the follow-up of transportation services, leisure activities, hotel accommodation, touristic market study, and a complex flexible system of management and accountancy.
Statistical Analysis of Strength Data for an Aerospace Aluminum Alloy
Neergaard, L.; Malone, T.
2001-01-01
Aerospace vehicles are produced in limited quantities that do not always allow development of MIL-HDBK-5 A-basis design allowables. One method of examining production and composition variations is to perform 100% lot acceptance testing for aerospace Aluminum (Al) alloys. This paper discusses statistical trends seen in strength data for one Al alloy. A four-step approach reduced the data to residuals, visualized residuals as a function of time, grouped data with quantified scatter, and conducted analysis of variance (ANOVA).
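The ANOVA step of the four-step approach above can be sketched as a one-way F statistic over grouped data. The lot values below are hypothetical, not the alloy data from the paper:

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic for a list of groups (each a list of
    strength measurements, e.g. one list per production lot)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical lot-strength data (ksi) for three production lots
lots = [[61.2, 60.8, 61.5], [60.1, 59.9, 60.4], [61.0, 61.3, 60.9]]
F = one_way_anova_F(lots)
```

A large F relative to the F distribution's critical value for (k-1, n-k) degrees of freedom indicates lot-to-lot variation beyond within-lot scatter.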
Spatial Analysis Along Networks Statistical and Computational Methods
Okabe, Atsuyuki
2012-01-01
In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process
Statistical Analysis of Designed Experiments Theory and Applications
Tamhane, Ajit C
2012-01-01
A indispensable guide to understanding and designing modern experiments The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the
Improved statistical model checking methods for pathway analysis.
Koh, Chuan Hock; Palaniappan, Sucheendra K; Thiagarajan, P S; Wong, Limsoon
2012-01-01
Statistical model checking techniques have been shown to be effective for approximate model checking on large stochastic systems, where explicit representation of the state space is impractical. Importantly, these techniques ensure the validity of results with statistical guarantees on errors. There is an increasing interest in these classes of algorithms in computational systems biology since analysis using traditional model checking techniques does not scale well. In this context, we present two improvements to existing statistical model checking algorithms. Firstly, we construct an algorithm which removes the need of the user to define the indifference region, a critical parameter in previous sequential hypothesis testing algorithms. Secondly, we extend the algorithm to account for the case when there may be a limit on the computational resources that can be spent on verifying a property; i.e, if the original algorithm is not able to make a decision even after consuming the available amount of resources, we resort to a p-value based approach to make a decision. We demonstrate the improvements achieved by our algorithms in comparison to current algorithms first with a straightforward yet representative example, followed by a real biological model on cell fate of gustatory neurons with microRNAs.
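For context, the abstract's first improvement removes the indifference-region parameter of conventional sequential hypothesis testing. A minimal sketch of that conventional algorithm, Wald's sequential probability ratio test, follows; the probabilities and error bounds here are illustrative, not from the paper:

```python
import math
import random

def sprt(sample, p0, p1, alpha=0.05, beta=0.05, max_samples=100000):
    """Wald's SPRT for H0: p = p0 versus H1: p = p1 (p0 < p1); the gap
    (p0, p1) is the indifference region the user must supply.
    `sample()` returns True when a simulated run satisfies the property
    being model-checked."""
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0
    for _ in range(max_samples):
        x = sample()
        # Log-likelihood-ratio increment for a Bernoulli observation
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "H0"
        if llr >= upper:
            return "H1"
    return "undecided"

random.seed(2)
# A property that holds with true probability 0.9, tested against 0.5 vs 0.7
verdict = sprt(lambda: random.random() < 0.9, p0=0.5, p1=0.7)
```

The error bounds alpha and beta are the statistical guarantees the abstract refers to; the sensitivity of the verdict to the choice of (p0, p1) is exactly what motivates removing the indifference region.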
Vibroacoustic optimization using a statistical energy analysis model
Culla, Antonio; D`Ambrogio, Walter; Fregolent, Annalisa; Milana, Silvia
2016-08-01
In this paper, an optimization technique for medium-high frequency dynamic problems based on Statistical Energy Analysis (SEA) method is presented. Using a SEA model, the subsystem energies are controlled by internal loss factors (ILF) and coupling loss factors (CLF), which in turn depend on the physical parameters of the subsystems. A preliminary sensitivity analysis of subsystem energy to CLF's is performed to select CLF's that are most effective on subsystem energies. Since the injected power depends not only on the external loads but on the physical parameters of the subsystems as well, it must be taken into account under certain conditions. This is accomplished in the optimization procedure, where approximate relationships between CLF's, injected power and physical parameters are derived. The approach is applied on a typical aeronautical structure: the cabin of a helicopter.
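The SEA power balance mentioned above can be made concrete for the smallest possible case. The sketch below solves the two-subsystem steady-state balance for the subsystem energies; the loss factors and injected powers are hypothetical values, not the helicopter-cabin model:

```python
def sea_energies(omega, eta, eta_c, P):
    """Solve the two-subsystem SEA power-balance equations
        P1 = omega * ((eta1 + eta12) * E1 - eta21 * E2)
        P2 = omega * ((eta2 + eta21) * E2 - eta12 * E1)
    for the energies E1, E2.  eta = (eta1, eta2) are internal loss
    factors, eta_c = (eta12, eta21) coupling loss factors, and
    P = (P1, P2) the injected powers at angular frequency omega."""
    a11 = omega * (eta[0] + eta_c[0])
    a12 = -omega * eta_c[1]
    a21 = -omega * eta_c[0]
    a22 = omega * (eta[1] + eta_c[1])
    det = a11 * a22 - a12 * a21
    # Cramer's rule on the 2x2 system A @ [E1, E2] = P
    E1 = (P[0] * a22 - P[1] * a12) / det
    E2 = (a11 * P[1] - a21 * P[0]) / det
    return E1, E2

# Hypothetical values: excitation injected into subsystem 1 only
E1, E2 = sea_energies(omega=1000.0, eta=(0.01, 0.02),
                      eta_c=(0.001, 0.002), P=(1.0, 0.0))
```

The dependence of E1 and E2 on the coupling loss factors is what the paper's sensitivity analysis exploits when selecting which CLFs to optimize.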
Topics in statistical data analysis for high-energy physics
Cowan, G.
2013-06-27
These lectures concern two topics that are becoming increasingly important in the analysis of High Energy Physics (HEP) data: Bayesian statistics and multivariate methods. In the Bayesian approach we extend the interpretation of probability to cover not only the frequency of repeatable outcomes but also to include a degree of belief. In this way we are able to associate probability with a hypothesis and thus to answer directly questions that cannot be addressed easily with traditional frequentist methods. In multivariate analysis we try to exploit as much information as possible from the characteristics that we measure for each event to distinguish between event types. In particular we will look at a method that has gained popularity in HEP in recent years: the boosted decision tree (BDT).
On Understanding Statistical Data Analysis in Higher Education
Montalbano, Vera
2012-01-01
Data analysis is a powerful tool in all experimental sciences. Statistical methods, such as sampling theory, computer technologies necessary for handling large amounts of data, and skill in analysing information contained in different types of graphs are all competences necessary for achieving an in-depth data analysis. In higher education, these topics are usually fragmented across different courses, interdisciplinary integration can be lacking, and caution in the use of these topics can be missing or misunderstood. Students are often obliged to acquire these skills by themselves during the preparation of the final experimental thesis. A proposal for a learning path on nuclear phenomena is presented in order to develop these scientific competences in physics courses. An introduction to radioactivity and nuclear phenomenology is followed by measurements of natural radioactivity. Background and weak sources can be monitored for a long time in a physics laboratory. The data are collected and analyzed in a computer lab i...
Statistical learning analysis in neuroscience: aiming for transparency
Directory of Open Access Journals (Sweden)
Michael Hanke
2010-05-01
Full Text Available Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires "neuroscience-aware" technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here we review its features and applicability to various neural data modalities.
First statistical analysis of Geant4 quality software metrics
Ronchieri, Elisabetta; Grazia Pia, Maria; Giacomini, Francesco
2015-12-01
Geant4 is a simulation system of particle transport through matter, widely used in several experimental areas from high energy physics and nuclear experiments to medical studies. Some of its applications may involve critical use cases; therefore they would benefit from an objective assessment of the software quality of Geant4. In this paper, we provide a first statistical evaluation of software metrics data related to a set of Geant4 physics packages. The analysis aims at identifying risks for Geant4 maintainability, which would benefit from being addressed at an early stage. The findings of this pilot study set the grounds for further extensions of the analysis to the whole of Geant4 and to other high energy physics software systems.
Using Statistical Analysis Software to Advance Nitro Plasticizer Wettability
Energy Technology Data Exchange (ETDEWEB)
Shear, Trevor Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-29
Statistical analysis in science is an extremely powerful tool that is often underutilized. Additionally, it is frequently the case that data is misinterpreted or not used to its fullest extent. Utilizing the advanced software JMP®, many aspects of experimental design and data analysis can be evaluated and improved. This overview will detail the features of JMP® and how they were used to advance a project, resulting in time and cost savings, as well as the collection of scientifically sound data. The project analyzed in this report addresses the inability of a nitro plasticizer to coat a gold coated quartz crystal sensor used in a quartz crystal microbalance. Through the use of the JMP® software, the wettability of the nitro plasticizer was increased by over 200% using an atmospheric plasma pen, ensuring good sample preparation and reliable results.
Improvement of Information and Methodical Provision of Macro-economic Statistical Analysis
Directory of Open Access Journals (Sweden)
Tiurina Dina M.
2014-02-01
Full Text Available The article generalises and analyses the main shortcomings of the modern system of macro-statistical analysis based on the use of the system of national accounts and the balance of the national economy. On the basis of a historical analysis of the formation of the indicators of the system of national accounts, the article argues that the problems with its practical use have both regional and global causes. To address the inability of these indicators to account for quality of life, the article offers a system of quality indicators based on the general perception of wellbeing as the population's assurance of its own solvency, drawing on a representative sample of economic subjects.
[Analysis of complaints in primary care using statistical process control].
Valdivia Pérez, Antonio; Arteaga Pérez, Lourdes; Escortell Mayor, Esperanza; Monge Corella, Susana; Villares Rodríguez, José Enrique
2009-08-01
To analyze patient complaints in a Primary Health Care District (PHCD) using statistical process control methods, compared to multivariate methods, as regards their results and feasibility of application in this context. Descriptive study based on an aggregate analysis of administrative complaints. Complaints received between January 2005 and August 2008 in the Customer Management Department in the 3rd PHCD Management Office, Madrid Health Services. Complaints are registered through Itrack, a computer software tool used throughout the whole Community of Madrid. Total number of complaints, complaints sorted by reason and by Primary Health Care Team (PHCT), total number of patient visits (including visits on demand, appointment visits and home visits), and visits by PHCT and per month and year. Multivariate analysis and control charts were used. A 44-month time series was obtained, with a mean of 76 complaints per month, an increasing trend in the first three years and a decrease during summer months. Poisson regression detected an excess of complaints in 8 out of the 44 months in the series. The control chart detected the same 8 months plus two additional ones. Statistical process control can be useful for detecting an excess of complaints in a PHCD and enables comparisons to be made between different PHC teams. As it is a simple technique, it can be used for ongoing monitoring of customer perceived quality.
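A control chart for monthly complaint counts like the one in the study above can be sketched with a c-chart under a Poisson assumption. The counts below are hypothetical, not the PHCD data:

```python
import math

def c_chart_limits(counts):
    """Control limits for a c-chart of event counts per period:
    centre line at the mean count, limits at mean +/- 3*sqrt(mean)
    (Poisson assumption), with the lower limit truncated at 0."""
    mean = sum(counts) / len(counts)
    ucl = mean + 3 * math.sqrt(mean)
    lcl = max(0.0, mean - 3 * math.sqrt(mean))
    return lcl, mean, ucl

# Hypothetical monthly complaint counts
monthly = [70, 82, 75, 68, 90, 74, 79, 71, 85, 66, 73, 77]
lcl, centre, ucl = c_chart_limits(monthly)
# Months whose count falls outside the control limits signal special causes
out_of_control = [i for i, c in enumerate(monthly) if c < lcl or c > ucl]
```

This is the simplicity the authors point to: a signal is just a point outside limits computed from the series' own mean, with no model fitting required.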
Using zebrafish to learn statistical analysis and Mendelian genetics.
Lindemann, Samantha; Senkler, Jon; Auchter, Elizabeth; Liang, Jennifer O
2011-06-01
This project was developed to promote understanding of how mathematics and statistical analysis are used as tools in genetic research. It gives students the opportunity to carry out hypothesis-driven experiments in the classroom: students generate hypotheses about Mendelian and non-Mendelian inheritance patterns, gather raw data, and test their hypotheses using chi-square statistical analysis. In the first protocol, students are challenged to analyze inheritance patterns using GloFish, brightly colored, commercially available, transgenic zebrafish that express Green, Yellow, or Red Fluorescent Protein throughout their muscles. In the second protocol, students learn about genetic screens, microscopy, and developmental biology by analyzing the inheritance patterns of mutations that cause developmental defects. The difficulty of the experiments can be adapted for middle school to upper level undergraduate students. Since the GloFish experiments use only fish and materials that can be purchased from pet stores, they should be accessible to many schools. For each protocol, we provide detailed instructions, ideas for how the experiments fit into an undergraduate curriculum, raw data, and example analyses. Our plan is to have these protocols form the basis of a growing and adaptable educational tool available on the Zebrafish in the Classroom Web site.
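The chi-square analysis used in these classroom protocols is easy to reproduce. As an illustration (the offspring counts below are hypothetical, not data from the article), a goodness-of-fit statistic for a 3:1 Mendelian ratio:

```python
def chi_square(observed, expected_ratios):
    """Chi-square goodness-of-fit statistic for observed counts against
    expected Mendelian ratios (e.g. 3:1 for a monohybrid cross)."""
    total = sum(observed)
    ratio_sum = sum(expected_ratios)
    expected = [total * r / ratio_sum for r in expected_ratios]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical cross: 152 fluorescent vs 48 non-fluorescent offspring
chi2 = chi_square([152, 48], [3, 1])
# With 1 degree of freedom, the 5% critical value is 3.841
consistent_with_3_to_1 = chi2 < 3.841
```

Students compare the statistic to the critical value for the appropriate degrees of freedom (number of classes minus one) to accept or reject the hypothesized inheritance pattern.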
A biologist's guide to statistical thinking and analysis.
Fay, David S; Gerow, Ken
2013-07-09
The proper understanding and use of statistical tools are essential to the scientific enterprise. This is true both at the level of designing one's own experiments as well as for critically evaluating studies carried out by others. Unfortunately, many researchers who are otherwise rigorous and thoughtful in their scientific approach lack sufficient knowledge of this field. This methods chapter is written with such individuals in mind. Although the majority of examples are drawn from the field of Caenorhabditis elegans biology, the concepts and practical applications are also relevant to those who work in the disciplines of molecular genetics and cell and developmental biology. Our intent has been to limit theoretical considerations to a necessary minimum and to use common examples as illustrations for statistical analysis. Our chapter includes a description of basic terms and central concepts and also contains in-depth discussions on the analysis of means, proportions, ratios, probabilities, and correlations. We also address issues related to sample size, normality, outliers, and non-parametric approaches.
Pattern recognition in menstrual bleeding diaries by statistical cluster analysis
Directory of Open Access Journals (Sweden)
Wessel Jens
2009-07-01
Full Text Available Abstract Background The aim of this paper is to empirically identify a treatment-independent statistical method to describe clinically relevant bleeding patterns by using bleeding diaries of clinical studies on various sex-hormone-containing drugs. Methods We used four cluster analysis methods, single, average and complete linkage as well as the method of Ward, for pattern recognition in menstrual bleeding diaries. The optimal number of clusters was determined using the semi-partial R², the cubic clustering criterion, and the pseudo-F and pseudo-t² statistics. Finally, the interpretability of the results from a gynecological point of view was assessed. Results The method of Ward yielded distinct clusters of the bleeding diaries. The other methods successively chained the observations into one cluster. The optimal number of distinctive bleeding patterns was six. We found two desirable and four undesirable bleeding patterns. Cyclic and non-cyclic bleeding patterns were well separated. Conclusion Using this cluster analysis with the method of Ward, medications and devices having an impact on bleeding can be easily compared and categorized.
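Ward's criterion, which the study above found superior to the chaining linkage methods, can be sketched in a few lines: at each step, merge the pair of clusters whose union least increases the within-cluster sum of squares. The one-dimensional "scores" below are hypothetical, not diary data:

```python
def ward_cluster(points, k):
    """Agglomerative clustering with Ward's criterion on 1-D values:
    repeatedly merge the pair of clusters whose union gives the
    smallest increase in within-cluster sum of squares."""
    clusters = [[p] for p in points]

    def sse(c):
        m = sum(c) / len(c)
        return sum((x - m) ** 2 for x in c)

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Ward merging cost: increase in total within-cluster SSE
                inc = (sse(clusters[i] + clusters[j])
                       - sse(clusters[i]) - sse(clusters[j]))
                if best is None or inc < best[0]:
                    best = (inc, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Hypothetical daily bleeding scores forming two well-separated patterns
scores = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]
groups = ward_cluster(scores, 2)
```

Because the merging cost grows with cluster size, Ward's method resists the chaining behaviour that made the single, average and complete linkage results uninterpretable in the study.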
Design and statistical analysis of oral medicine studies: common pitfalls.
Baccaglini, L; Shuster, J J; Cheng, J; Theriaque, D W; Schoenbach, V J; Tomar, S L; Poole, C
2010-04-01
A growing number of articles are emerging in the medical and statistics literature that describe epidemiologic and statistical flaws of research studies. Many examples of these deficiencies are encountered in the oral, craniofacial, and dental literature. However, only a handful of methodologic articles have been published in the oral literature warning investigators of potential errors that may arise early in the study and that can irreparably bias the final results. In this study, we briefly review some of the most common pitfalls that our team of epidemiologists and statisticians has identified during the review of submitted or published manuscripts and research grant applications. We use practical examples from the oral medicine and dental literature to illustrate potential shortcomings in the design and analysis of research studies, and how these deficiencies may affect the results and their interpretation. A good study design is essential, because errors in the analysis can be corrected if the design was sound, but flaws in study design can lead to data that are not salvageable. We recommend consultation with an epidemiologist or a statistician during the planning phase of a research study to optimize study efficiency, minimize potential sources of bias, and document the analytic plan.
Short-run and Current Analysis Model in Statistics
Directory of Open Access Journals (Sweden)
Constantin Anghelache
2006-01-01
Full Text Available Using short-run statistical indicators is a compulsory requirement implied in current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect, recommended for utilization by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be, at any moment, extended or restricted, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis, there is also the World Bank system of indicators of conjuncture, which is utilized relying on the data sources offered by the World Bank, the World Institute for Resources, or other international organizations' statistics. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.
Theoretical assessment of image analysis: statistical vs structural approaches
Lei, Tianhu; Udupa, Jayaram K.
2003-05-01
Statistical and structural methods are two major approaches commonly used in image analysis and have demonstrated considerable success. The former is based on statistical properties and stochastic models of the image and the latter utilizes geometric and topological models. In this study, Markov random field (MRF) theory/model-based image segmentation and Fuzzy Connectedness (FC) theory/fuzzy connected object delineation are chosen as the representatives of these two approaches, respectively. The comparative study is focused on their theoretical foundations and main operative procedures. The MRF is defined on a lattice and the associated neighborhood system and is based on the Markov property. The FC method is defined on a fuzzy digital space and is based on fuzzy relations. Locally, MRF is characterized by potentials of cliques, and FC is described by fuzzy adjacency and affinity relations. Globally, MRF is characterized by the Gibbs distribution, and FC is described by fuzzy connectedness. The task of MRF model-based image segmentation is to seek a realization of the embedded MRF through a two-level operation: partitioning and labeling. The task of FC object delineation is to extract a fuzzy object from a given scene through a two-step operation: recognition and delineation. The theoretical foundations that underlie the statistical and structural approaches, and the principles of the main operative procedures in image segmentation by these two approaches, demonstrate more similarities than differences between them. The two approaches can also complement each other, particularly in seed selection, scale formation, affinity and object membership function design for FC, and neighbor set selection and clique potential design for MRF.
The system for statistical analysis of logistic information
Directory of Open Access Journals (Sweden)
Khayrullin Rustam Zinnatullovich
2015-05-01
Full Text Available The current problem for managers in logistic and trading companies is the task of improving the operational business performance and developing the logistics support of sales. The development of logistics sales supposes development and implementation of a set of works for the development of the existing warehouse facilities, including both a detailed description of the work performed, and the timing of their implementation. Logistics engineering of warehouse complex includes such tasks as: determining the number and the types of technological zones, calculation of the required number of loading-unloading places, development of storage structures, development and pre-sales preparation zones, development of specifications of storage types, selection of loading-unloading equipment, detailed planning of warehouse logistics system, creation of architectural-planning decisions, selection of information-processing equipment, etc. The currently used ERP and WMS systems did not allow us to solve the full list of logistics engineering problems. In this regard, the development of specialized software products, taking into account the specifics of warehouse logistics, and subsequent integration of these software with ERP and WMS systems seems to be a current task. In this paper we suggest a system of statistical analysis of logistics information, designed to meet the challenges of logistics engineering and planning. The system is based on the methods of statistical data processing.The proposed specialized software is designed to improve the efficiency of the operating business and the development of logistics support of sales. The system is based on the methods of statistical data processing, the methods of assessment and prediction of logistics performance, the methods for the determination and calculation of the data required for registration, storage and processing of metal products, as well as the methods for planning the reconstruction and development
Statistical analysis of the autoregressive modeling of reverberant speech.
Gaubitch, Nikolay D; Ward, Darren B; Naylor, Patrick A
2006-12-01
Hands-free speech input is required in many modern telecommunication applications that employ autoregressive (AR) techniques such as linear predictive coding. When the hands-free input is obtained in enclosed reverberant spaces such as typical office rooms, the speech signal is distorted by the room transfer function. This paper utilizes theoretical results from statistical room acoustics to analyze the AR modeling of speech under these reverberant conditions. Three cases are considered: (i) AR coefficients calculated from a single observation; (ii) AR coefficients calculated jointly from an M-channel observation (M > 1); and (iii) AR coefficients calculated from the output of a delay-and-sum beamformer. The statistical analysis, with supporting simulations, shows that the spatial expectations of the AR coefficients for cases (i) and (ii) are approximately equal to those from the original speech, while for case (iii) there is a discrepancy, due to spatial correlation between the microphones, which can be significant. It is subsequently demonstrated that at each individual source-microphone position (without spatial expectation), the M-channel AR coefficients from case (ii) provide the best approximation to the clean speech coefficients when the microphones are closely spaced (<0.3 m).
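The AR coefficients discussed here can, for a single-channel observation, be estimated by solving the Yule-Walker equations on the signal's autocorrelation. The sketch below is a generic single-channel illustration, not the paper's multichannel method; the AR(2) coefficients 0.75 and -0.5 are arbitrary test values.

```python
import numpy as np

def ar_coefficients(x, order):
    """Estimate AR coefficients from a 1-D signal by solving the
    Yule-Walker equations with biased autocorrelation estimates."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# Synthetic AR(2) process: x[t] = 0.75 x[t-1] - 0.5 x[t-2] + e[t]
rng = np.random.default_rng(0)
e = rng.standard_normal(20000)
x = np.zeros_like(e)
for t in range(2, len(x)):
    x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]

a = ar_coefficients(x, 2)   # approximately [0.75, -0.5]
```

With 20000 samples the estimates recover the generating coefficients to within a few hundredths, which is the clean-speech baseline against which reverberant estimates would be compared.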
Statistical analysis of the breaking processes of Ni nanowires
Energy Technology Data Exchange (ETDEWEB)
Garcia-Mochales, P [Departamento de Fisica de la Materia Condensada, Facultad de Ciencias, Universidad Autonoma de Madrid, c/ Francisco Tomas y Valiente 7, Campus de Cantoblanco, E-28049-Madrid (Spain); Paredes, R [Centro de Fisica, Instituto Venezolano de Investigaciones CientIficas, Apartado 20632, Caracas 1020A (Venezuela); Pelaez, S; Serena, P A [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones CientIficas, c/ Sor Juana Ines de la Cruz 3, Campus de Cantoblanco, E-28049-Madrid (Spain)], E-mail: pedro.garciamochales@uam.es
2008-06-04
We have performed a massive statistical analysis of the breaking behaviour of Ni nanowires using molecular dynamics simulations. Three stretching directions, five initial nanowire sizes and two temperatures have been studied. We have constructed minimum cross-section histograms and analysed for the first time the role played by monomers and dimers. The shape of such histograms and the absolute number of monomers and dimers strongly depend on the stretching direction and the initial size of the nanowire. In particular, the statistical behaviour of the final breakage stages of narrow nanowires differs strongly from that obtained for large nanowires. We have analysed the structure around monomers and dimers. Their most probable local configurations differ from those usually appearing in static electron transport calculations. Their non-local environments show disordered regions along the nanowire if the stretching direction is [100] or [110]. Additionally, we have found that, at room temperature, the [100] and [110] stretching directions favour the appearance of non-crystalline staggered pentagonal structures. These pentagonal Ni nanowires are reported in this work for the first time. This set of results suggests that experimental Ni conductance histograms could show a strong dependence on orientation and temperature.
Analysis of filament statistics in fast camera data on MAST
Farley, Tom; Militello, Fulvio; Walkden, Nick; Harrison, James; Silburn, Scott; Bradley, James
2017-10-01
Coherent filamentary structures have been shown to play a dominant role in turbulent cross-field particle transport [D'Ippolito 2011]. An improved understanding of filaments is vital in order to control scrape off layer (SOL) density profiles and thus control first wall erosion, impurity flushing and coupling of radio frequency heating in future devices. The Elzar code [T. Farley, 2017 in prep.] is applied to MAST data. The code uses information about the magnetic equilibrium to calculate the intensity of light emission along field lines as seen in the camera images, as a function of the field lines' radial and toroidal locations at the mid-plane. In this way a `pseudo-inversion' of the intensity profiles in the camera images is achieved from which filaments can be identified and measured. In this work, a statistical analysis of the intensity fluctuations along field lines in the camera field of view is performed using techniques similar to those typically applied in standard Langmuir probe analyses. These filament statistics are interpreted in terms of the theoretical ergodic framework presented by F. Militello & J.T. Omotani, 2016, in order to better understand how time averaged filament dynamics produce the more familiar SOL density profiles. This work has received funding from the RCUK Energy programme (Grant Number EP/P012450/1), from Euratom (Grant Agreement No. 633053) and from the EUROfusion consortium.
A Statistical Analysis of Cointegration for I(2) Variables
DEFF Research Database (Denmark)
Johansen, Søren
1995-01-01
This paper discusses inference for I(2) variables in a VAR model. The estimation procedure suggested consists of two reduced rank regressions. The asymptotic distribution of the proposed estimators of the cointegrating coefficients is mixed Gaussian, which implies that asymptotic inference can be conducted using the χ² distribution. It is shown to what extent inference on the cointegration ranks can be conducted using the tables already prepared for the analysis of cointegration of I(1) variables. New tables are needed for the test statistics to control the size of the tests. This paper contains a multivariate test for the existence of I(2) variables. This test is illustrated using a data set consisting of U.K. and foreign prices and interest rates as well as the exchange rate.
Analysis of Official Suicide Statistics in Spain (1910-2011)
Directory of Open Access Journals (Sweden)
2017-01-01
Full Text Available In this article we examine the evolution of suicide rates in Spain from 1910 to 2011. As a novel contribution, we use standardised suicide rates, which are perfectly comparable geographically and over time because they no longer reflect population structure. Using historical data from a series of socioeconomic variables for all of Spain's provinces and applying new techniques for the statistical analysis of panel data, we are able to confirm many of the hypotheses established by Durkheim at the end of the 19th century, especially those related to fertility and marriage rates, age, sex and the aging index. Our findings, however, contradict Durkheim's approach regarding the impact of urbanisation processes and poverty on suicide.
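The direct standardisation used to make such rates comparable can be illustrated in a few lines: age-specific rates are weighted by the shares of a fixed standard population, so differences in age structure drop out. The age bands, rates and standard population below are invented for illustration.

```python
# Hypothetical age bands with age-specific rates per 100,000 and a
# fixed standard (reference) population by age band.
observed_rates = [2.0, 8.0, 25.0]       # rates per 100,000 by age band
standard_pop   = [30000, 50000, 20000]  # standard population by age band

total = sum(standard_pop)
weights = [p / total for p in standard_pop]
standardised_rate = sum(w * r for w, r in zip(weights, observed_rates))
# 0.3*2.0 + 0.5*8.0 + 0.2*25.0 = 9.6 per 100,000
```

Two regions (or years) standardised against the same reference population can then be compared directly, regardless of how their own age pyramids differ.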
Statistical energy analysis for a compact refrigeration compressor
Lim, Ji Min; Bolton, J. Stuart; Park, Sung-Un; Hwang, Seon-Woong
2005-09-01
Traditionally the prediction of the vibrational energy level of the components in a compressor is accomplished by using a deterministic model such as a finite element model. While a deterministic approach requires much detail and computational time for a complete dynamic analysis, statistical energy analysis (SEA) requires much less information and computing time. All of these benefits can be obtained by using data averaged over the frequency and spatial domains instead of the direct use of deterministic data. In this paper, SEA will be applied to a compact refrigeration compressor for the prediction of dynamic behavior of each subsystem. Since the compressor used in this application is compact and stiff, the modal densities of its various components are low, especially in the low frequency ranges, and most energy transfers in these ranges are achieved through the indirect coupling paths instead of via direct coupling. For this reason, experimental SEA (ESEA), a good tool for the consideration of the indirect coupling, was used to derive an SEA formulation. Direct comparison of SEA results and experimental data for an operating compressor will be introduced. The power transfer path analysis at certain frequencies made possible by using SEA will be also described to show the advantage of SEA in this application.
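The steady-state SEA power balance underlying such predictions can be sketched for two directly coupled subsystems: the power injected into each subsystem equals the power it dissipates through its damping loss factor plus the net power flowing to its neighbour through the coupling loss factors. All loss factors and the injected power below are assumed values, not the compressor's.

```python
import numpy as np

# Steady-state SEA power balance for two coupled subsystems:
#   P1 = w * (eta1*E1 + eta12*E1 - eta21*E2)
#   P2 = w * (eta2*E2 + eta21*E2 - eta12*E1)
w = 2 * np.pi * 500.0          # band-centre angular frequency (500 Hz)
eta1, eta2 = 0.01, 0.02        # damping loss factors (assumed)
eta12, eta21 = 0.005, 0.003    # coupling loss factors (assumed)
P = np.array([1.0, 0.0])       # 1 W injected into subsystem 1 only

A = w * np.array([[eta1 + eta12, -eta21],
                  [-eta12,        eta2 + eta21]])
E = np.linalg.solve(A, P)      # subsystem energies E1, E2
```

Experimental SEA inverts this relation: measured energies and injected powers are used to back out the loss-factor matrix, which is how indirect coupling paths in a stiff, low-modal-density structure are identified.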
Spectral signature verification using statistical analysis and text mining
DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.
2016-05-01
In the spectral science community, numerous spectral signatures are stored in databases representing many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database (SigDB) are proposed. The numerical data comprising a sample material's spectrum is validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum is qualitatively analyzed using lexical analysis text mining. This technique analyzes the syntax of the meta-data to provide local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have been successfully implemented for security (text encryption/decryption), biomedical, and marketing applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need of an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is
Classification of Malaysia aromatic rice using multivariate statistical analysis
Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A.; Omar, O.
2015-05-01
Aromatic rice (Oryza sativa L.) is considered the best quality premium rice. Its varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice because of the special growth conditions it requires, for instance a specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, the use of human sensory panels has significant drawbacks: lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. GC-MS analysis, on the other hand, requires detailed procedures and lengthy analysis and is quite costly. This paper presents the application of an in-house developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots shows that the instrument was able to separate the samples into different clusters accordingly. The results of LDA and KNN, with low misclassification error, support the above findings, and we may conclude that the e-nose was successfully applied to the classification of the aromatic rice varieties.
Classification of Malaysia aromatic rice using multivariate statistical analysis
Energy Technology Data Exchange (ETDEWEB)
Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A. [School of Mechatronic Engineering, Universiti Malaysia Perlis, Kampus Pauh Putra, 02600 Arau, Perlis (Malaysia); Omar, O. [Malaysian Agriculture Research and Development Institute (MARDI), Persiaran MARDI-UPM, 43400 Serdang, Selangor (Malaysia)
2015-05-15
Aromatic rice (Oryza sativa L.) is considered the best quality premium rice. Its varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice because of the special growth conditions it requires, for instance a specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, the use of human sensory panels has significant drawbacks: lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. GC-MS analysis, on the other hand, requires detailed procedures and lengthy analysis and is quite costly. This paper presents the application of an in-house developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots shows that the instrument was able to separate the samples into different clusters accordingly. The results of LDA and KNN, with low misclassification error, support the above findings, and we may conclude that the e-nose was successfully applied to the classification of the aromatic rice varieties.
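The PCA plus KNN-with-LOO pipeline described in these abstracts can be sketched on toy data. The sensor count, class centres and noise level below are invented; the real study used e-nose sensor measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "e-nose" data: 3 rice varieties x 10 samples x 4 sensor features,
# each variety clustered around its own sensor response profile.
centres = np.array([[1.0, 0.0, 0.0, 0.5],
                    [0.0, 1.0, 0.5, 0.0],
                    [0.5, 0.5, 1.0, 1.0]])
X = np.vstack([c + 0.05 * rng.standard_normal((10, 4)) for c in centres])
y = np.repeat([0, 1, 2], 10)

# PCA via SVD of the mean-centred data, keeping 2 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Leave-one-out validation of a 1-nearest-neighbour classifier
# on the PCA scores.
correct = 0
for i in range(len(X)):
    d = np.linalg.norm(scores - scores[i], axis=1)
    d[i] = np.inf                      # leave the test sample out
    correct += y[np.argmin(d)] == y[i]
loo_accuracy = correct / len(X)
```

With well-separated clusters the LOO accuracy is essentially perfect, mirroring the "low misclassification error" the study reports for its separable varieties.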
Analysis of acoustic emission data for bearings subject to unbalance
Directory of Open Access Journals (Sweden)
Rapinder Sawhney
2013-01-01
Full Text Available Acoustic Emission (AE) is an effective nondestructive method for investigating the behavior of materials under stress. In recent decades, AE applications in structural health monitoring have been extended to other areas such as rotating machinery and cutting tools. This research investigates the application of acoustic emission data for unbalance analysis and detection in rotary systems. The AE parameter of interest in this study is a discrete variable that captures the significance of the count, duration and amplitude of AE signals. A statistical model based on Zero-Inflated Poisson (ZIP) regression is proposed to handle the over-dispersion and excess zeros of the count data. The ZIP model indicates that faulty bearings generate more transient waves in the AE waveform. Control charts can easily detect the faulty bearing using the parameters of the ZIP model. Categorical data analysis based on generalized linear models (GLM) is also presented. The results demonstrate the significance of the couple unbalance.
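A Zero-Inflated Poisson model mixes a point mass at zero with a Poisson count distribution, which is what lets it absorb excess zeros that an ordinary Poisson cannot. A minimal sketch of the ZIP probability mass function, with arbitrary parameter values:

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: with probability `pi` the count is a
    structural zero, otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

lam, pi = 2.0, 0.3                # arbitrary illustration values
p0 = zip_pmf(0, lam, pi)          # inflated zero probability
total = sum(zip_pmf(k, lam, pi) for k in range(50))
```

The zero probability `p0` exceeds the plain Poisson value `exp(-lam)`, which is exactly the excess-zeros behavior the model is fitted to capture; a ZIP regression then lets both `lam` and `pi` depend on covariates such as bearing condition.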
Subjective facial analysis and its correlation with dental relationships
Directory of Open Access Journals (Sweden)
Gustavo Silva Siécola
Full Text Available ABSTRACT INTRODUCTION: Subjective facial analysis is a diagnostic method that provides a morphological analysis of the face. The aim of the present study was to compare facial and dental diagnoses and investigate their relationship. METHODS: The sample consisted of 151 children (7 to 13 years old), without previous orthodontic treatment, analyzed by an orthodontist. Standardized extraoral and intraoral photographs were taken for subjective facial classification according to the Facial Pattern classification and for occlusal analyses. The occurrence of different Facial Patterns was investigated, together with the relationship between Facial Pattern classifications in frontal and profile views, the relationship between Facial Patterns and the Angle classification, and the relationship between anterior open bite and the Long Face Pattern. RESULTS: Facial Pattern I was verified in 64.24% of the children, Pattern II in 21.29%, Pattern III in 6.62%, Long Face Pattern in 5.96% and Short Face Pattern in 1.99%. A substantial strength of agreement of approximately 84% between frontal and profile classifications of Facial Pattern was observed (Kappa = 0.69). Agreement between the Angle classification and the Facial Pattern was seen in approximately 63% of the cases (Kappa = 0.27). The Long Face Pattern did not present a higher open bite prevalence. CONCLUSION: Facial Patterns I and II were the most prevalent in children, and the least prevalent was the Short Face Pattern. A significant concordance was observed between profile and frontal subjective facial analysis. There was slight concordance between the Facial Pattern and the sagittal dental relationships. The anterior open bite (AOB) was not significantly prevalent in any Facial Pattern.
Subjective facial analysis and its correlation with dental relationships
Siécola, Gustavo Silva; Capelozza, Leopoldino; Lorenzoni, Diego Coelho; Janson, Guilherme; Henriques, José Fernando Castanha
2017-01-01
ABSTRACT INTRODUCTION: Subjective facial analysis is a diagnostic method that provides a morphological analysis of the face. The aim of the present study was to compare facial and dental diagnoses and investigate their relationship. METHODS: The sample consisted of 151 children (7 to 13 years old), without previous orthodontic treatment, analyzed by an orthodontist. Standardized extraoral and intraoral photographs were taken for subjective facial classification according to the Facial Pattern classification and for occlusal analyses. The occurrence of different Facial Patterns was investigated, together with the relationship between Facial Pattern classifications in frontal and profile views, the relationship between Facial Patterns and the Angle classification, and the relationship between anterior open bite and the Long Face Pattern. RESULTS: Facial Pattern I was verified in 64.24% of the children, Pattern II in 21.29%, Pattern III in 6.62%, Long Face Pattern in 5.96% and Short Face Pattern in 1.99%. A substantial strength of agreement of approximately 84% between frontal and profile classifications of Facial Pattern was observed (Kappa = 0.69). Agreement between the Angle classification and the Facial Pattern was seen in approximately 63% of the cases (Kappa = 0.27). The Long Face Pattern did not present a higher open bite prevalence. CONCLUSION: Facial Patterns I and II were the most prevalent in children, and the least prevalent was the Short Face Pattern. A significant concordance was observed between profile and frontal subjective facial analysis. There was slight concordance between the Facial Pattern and the sagittal dental relationships. The anterior open bite (AOB) was not significantly prevalent in any Facial Pattern. PMID:28658360
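The Kappa statistics quoted above correct the observed agreement for the agreement expected by chance. A minimal sketch of Cohen's kappa from a square agreement table; the table entries are hypothetical, chosen to give 84% observed agreement, and are not intended to reproduce the study's Kappa = 0.69.

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table
    (rows: rater/view 1, columns: rater/view 2)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n        # observed agreement
    row = [sum(table[i]) / n for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    pe = sum(row[i] * col[i] for i in range(k))        # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 table with 84% observed agreement (84 of 100 on
# the diagonal).
table = [[70, 10],
         [6, 14]]
kappa = cohens_kappa(table)
```

Note how the same 84% raw agreement can map to different kappa values depending on the marginal distributions, which is why kappa is reported alongside the percentage.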
A Statistic Analysis Of Romanian Seaside Hydro Tourism
Secara Mirela
2011-01-01
Tourism represents one of the ways of spending spare time for rest, recreation, treatment and entertainment, and a specific aspect of the Constanta County economy is the touristic and spa capitalization of the Romanian seaside. In order to analyze hydro tourism on the Romanian seaside we have used statistical indicators of tourism as well as statistical methods such as chronological series, interdependent statistical series, regression and statistical correlation. The major objective of this research is to rai...
Analysis of sagittal condylar inclination in subjects with temporomandibular disorders
Directory of Open Access Journals (Sweden)
Dodić Slobodan
2010-01-01
Full Text Available Background/Aim. Disturbance of mandibular border movements is considered to be one of the major signs of temporomandibular disorders (TMD). The purpose of this study was to evaluate the possible association between disturbances of mandibular border movements and the presence of symptoms of TMD in the young. Methods. This study included two groups of volunteers between 18 and 26 years of age. The study group included 30 examinees with signs (symptoms) of TMD, and the control group also included 30 persons without any signs (symptoms) of TMD. The presence of TMD was confirmed according to the craniomandibular index (Helkimo). The functional analysis of mandibular movements was performed in each subject using a computer pantograph. Results. The results of this study did not confirm any significant differences between the values of the condylar variables (sagittal condylar inclination, length of the sagittal condylar guidance) in the control and the study group. Conclusion. The study did not confirm significant differences in the length and inclination of the protrusive condylar guidance, or in the values of the sagittal condylar inclination, between the subjects with signs and symptoms of TMD and the normal asymptomatic subjects.
Statistical Analysis Of Tank 5 Floor Sample Results
Energy Technology Data Exchange (ETDEWEB)
Shine, E. P.
2012-08-01
Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements
STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS
Energy Technology Data Exchange (ETDEWEB)
Shine, E.
2012-03-14
Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their
Statistical Analysis of Tank 5 Floor Sample Results
Energy Technology Data Exchange (ETDEWEB)
Shine, E. P.
2013-01-31
Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide1, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements
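The UCL95 bound used throughout these reports is, for approximately normal data, the one-sided Student-t upper confidence limit on the mean. A minimal sketch with three hypothetical replicate measurements; the EPA guidance also covers non-normal cases (Chebyshev, bootstrap UCLs), which this sketch does not.

```python
import math

def ucl95(measurements, t_crit):
    """One-sided Student-t UCL95 for a mean:
    x_bar + t_{0.95, n-1} * s / sqrt(n)."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)
    return mean + t_crit * math.sqrt(var / n)

# Three replicate concentration measurements (hypothetical values);
# the one-sided critical value t_{0.95, df=2} = 2.920 is from
# standard t tables.
ucl = ucl95([10.2, 11.0, 10.6], t_crit=2.920)
```

With only three replicates per composite, the t critical value is large (2.920), so the UCL95 sits well above the sample mean, which is the intended conservatism.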
Statistical analysis of the operating parameters which affect cupola emissions
Energy Technology Data Exchange (ETDEWEB)
Davis, J.W.; Draper, A.B.
1977-12-01
A sampling program was undertaken to determine the operating parameters which affected air pollution emissions from gray iron foundry cupolas. The experimental design utilized the analysis of variance routine. Four independent variables were selected for examination on the basis of previous work reported in the literature. These were: (1) blast rate; (2) iron-coke ratio; (3) blast temperature; and (4) cupola size. The last variable was chosen since it most directly affects melt rate. The cupola emissions of concern are particulate matter and carbon monoxide. The dependent variables were, therefore, particle loading, particle size distribution, and carbon monoxide concentration. Seven production foundries were visited and samples taken under conditions prescribed by the experimental plan. The data obtained from these tests were analyzed using the analysis of variance and other statistical techniques where applicable. The results indicated that blast rate, blast temperature, and cupola size affected particle emissions, and the latter two also affected the particle size distribution. The particle size information was also unique in that it showed a consistent particle size distribution at all seven foundries, with a sizable fraction of the particles less than 1.0 micrometers in diameter.
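The analysis-of-variance routine referred to above partitions total variation into between-group and within-group parts and compares them with an F statistic. A minimal one-way ANOVA on invented emission data grouped by one factor; the actual study used a multi-factor design.

```python
def one_way_anova_f(groups):
    """F statistic for a one-way analysis of variance."""
    all_x = [x for g in groups for x in g]
    n, k = len(all_x), len(groups)
    grand = sum(all_x) / n
    # Between-group sum of squares (group sizes weight the deviations)
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Toy data: three "blast rate" settings with different mean emission
# levels (values invented).
f = one_way_anova_f([[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [6.0, 7.0, 8.0]])
```

A large F (here the between-group mean square is 21 times the within-group mean square) is what would flag an operating parameter as significantly affecting emissions.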
Criminal victimization in Ukraine: analysis of statistical data
Directory of Open Access Journals (Sweden)
Serhiy Nezhurbida
2007-12-01
Full Text Available The article is based on the analysis of statistical data provided by law-enforcement, judicial and other bodies of Ukraine. The analysis allows us to give an accurate quantitative picture of the current status of criminal victimization in Ukraine and to characterize its basic features (level, rate, structure, dynamics, etc.).
A statistical design for testing apomictic diversification through linkage analysis.
Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling
2014-03-01
The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels, from genotypes to species. The design is based on two reciprocal crosses between two individuals, each chosen from a hermaphrodite or monoecious species. A multinomial likelihood is constructed by combining marker information from the two crosses. The EM algorithm is implemented to estimate the rate of apomixis and to test its difference between the two parental plant populations or species. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the practical usefulness of the design. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.
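The EM step behind such a rate-of-apomixis estimate can be sketched for a single hypothetical marker in an Aa (maternal) x aa cross: apomictic offspring copy the maternal genotype Aa, while sexual offspring segregate 1:1 Aa:aa. The counts below are invented; the paper's actual design combines many markers across two reciprocal crosses.

```python
# EM estimate of the apomixis rate `a` from hypothetical offspring counts.
n_Aa, n_aa = 130, 70          # invented genotype counts; P(Aa) = a + (1-a)/2

a = 0.5                       # initial guess
for _ in range(200):
    # E-step: posterior probability that an Aa offspring arose apomictically
    p_apo = a / (a + (1 - a) / 2)
    # M-step: expected apomictic fraction among all offspring
    a = n_Aa * p_apo / (n_Aa + n_aa)

# Closed-form check for this toy model: P(Aa) = a + (1-a)/2 => a = 2*P(Aa) - 1
a_closed = 2 * n_Aa / (n_Aa + n_aa) - 1
```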
On the Optimality of Trust Network Analysis with Subjective Logic
Directory of Open Access Journals (Sweden)
PARK, Y.
2014-08-01
Full Text Available Building and measuring trust is one of the crucial aspects of e-commerce, social networking, and computer security. Trust networks are widely used to formalize trust relationships and to conduct formal reasoning about trust values. Diverse trust network analysis methods have been developed, and one of the most widely used schemes is TNA-SL (Trust Network Analysis with Subjective Logic). Recent papers claimed that TNA-SL always finds the optimal solution by producing the least uncertainty. In this paper, we present counter-examples which imply that TNA-SL is not an optimal algorithm. Furthermore, we present a probabilistic edge-splitting algorithm to minimize uncertainty.
Statistical analysis of cone penetration resistance of railway ballast
Directory of Open Access Journals (Sweden)
Saussine Gilles
2017-01-01
Full Text Available Dynamic penetrometer tests are widely used in geotechnical studies for soil characterization, but their implementation tends to be difficult. The light penetrometer test gives information about a cone resistance that is useful in geotechnics and was recently validated as a parameter for coarse granular materials. In order to characterize the railway ballast on track and the sublayers of ballast directly, a large test campaign was carried out over more than 5 years on the French railway network, building up a database of 19,000 penetration tests including endoscopic video records. The main objective of this work is to give a first statistical analysis of cone resistance in the coarse granular layer that represents a major component of the railway track: the ballast. The results show that the cone resistance (qd) increases with depth and presents strong variations corresponding to layers of different natures identified using the endoscopic records. In the first zone, corresponding to the top 30 cm, qd increases linearly with a slope of around 1 MPa/cm for both fresh and fouled ballast. In the second zone, below 30 cm deep, qd increases more slowly, with a slope of around 0.3 MPa/cm, and decreases below 50 cm. These results show that there is no clear difference between fresh and fouled ballast. The variability of qd is, however, substantial and increases with depth. The qd distribution for a set of tests does not follow a normal distribution. In the upper 30 cm ballast layer of the track, statistical treatment of the data shows that train load and speed do not have any significant impact on the qd distribution for clean ballast; for fouled ballast they increase the average value of qd by 50% and increase the layer thickness as well. Below the upper 30 cm layer, train load and speed have a clear impact on the qd distribution.
Statistical analysis of cone penetration resistance of railway ballast
Saussine, Gilles; Dhemaied, Amine; Delforge, Quentin; Benfeddoul, Selim
2017-06-01
Dynamic penetrometer tests are widely used in geotechnical studies for soil characterization, but their implementation tends to be difficult. The light penetrometer test gives information about a cone resistance that is useful in geotechnics and was recently validated as a parameter for coarse granular materials. In order to characterize the railway ballast on track and the sublayers of ballast directly, a large test campaign was carried out over more than 5 years on the French railway network, building up a database of 19,000 penetration tests including endoscopic video records. The main objective of this work is to give a first statistical analysis of cone resistance in the coarse granular layer that represents a major component of the railway track: the ballast. The results show that the cone resistance (qd) increases with depth and presents strong variations corresponding to layers of different natures identified using the endoscopic records. In the first zone, corresponding to the top 30 cm, qd increases linearly with a slope of around 1 MPa/cm for both fresh and fouled ballast. In the second zone, below 30 cm deep, qd increases more slowly, with a slope of around 0.3 MPa/cm, and decreases below 50 cm. These results show that there is no clear difference between fresh and fouled ballast. The variability of qd is, however, substantial and increases with depth. The qd distribution for a set of tests does not follow a normal distribution. In the upper 30 cm ballast layer of the track, statistical treatment of the data shows that train load and speed do not have any significant impact on the qd distribution for clean ballast; for fouled ballast they increase the average value of qd by 50% and increase the layer thickness as well. Below the upper 30 cm layer, train load and speed have a clear impact on the qd distribution.
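The reported slopes can be read as a piecewise-linear depth profile of cone resistance; the sketch below encodes them directly (about 1 MPa/cm over the top 30 cm, about 0.3 MPa/cm from 30 to 50 cm), with the surface value qd0 as a free assumption and the behavior below 50 cm simply capped:

```python
# Piecewise-linear qd(z) profile built from the slopes reported above.
def qd_profile(depth_cm, qd0=0.0):
    """Approximate cone resistance (MPa) at a given depth (cm)."""
    if depth_cm <= 30:
        return qd0 + 1.0 * depth_cm            # ~1 MPa/cm in the top 30 cm
    # ~0.3 MPa/cm between 30 and 50 cm; below 50 cm qd reportedly decreases,
    # so this sketch simply caps the profile there.
    return qd0 + 30.0 + 0.3 * min(depth_cm - 30, 20)
```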
Tucker Tensor analysis of Matern functions in spatial statistics
Litvinenko, Alexander
2017-11-18
In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix and the spatially averaged estimation variance, and computing a quadratic form, determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations reduce the computing and storage costs substantially. For example, the storage cost is reduced from exponential O(n^d) to linear O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for the applicability of the proposed techniques are the assumptions that the data locations and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance, ||x-y||.
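The quoted O(n^d) versus O(drn) storage scaling is easy to make concrete; the grid size and rank below are arbitrary illustration values, not those of the paper:

```python
# Storage (number of stored entries) for a dense covariance factor on an
# n^d tensor grid versus low-rank tensor surrogates. Values are illustrative.
d, n, r = 3, 100, 10
full_entries = n ** d                    # dense: one entry per grid point
canonical_entries = d * r * n            # canonical (CP): d factor matrices
tucker_entries = d * r * n + r ** d      # Tucker: factors plus an r^d core
savings = full_entries / canonical_entries
```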
STATISTICAL ANALYSIS OF RAW SUGAR MATERIAL FOR SUGAR PRODUCER COMPLEX
Directory of Open Access Journals (Sweden)
A. A. Gromkovskii
2015-01-01
Full Text Available The article examines statistical data on the development of the average weight and average sugar content of sugar beet roots. Successful forecasting of these raw-material indices is essential for the control of a sugar production complex. By calculating the autocorrelation function, it is demonstrated that a trend component predominates in the growth of the raw-material characteristics. First- and second-order autoregressive models are proposed for constructing the forecast. It is shown that, despite the small amount of experimental data provided by the laboratories of sugar producing enterprises, the use of autoregression is justified. The proposed model correctly reproduces the dynamics of the raw-material indices over time, which is confirmed by the estimates. Where the trend component predominates in the dynamics of the studied sugar beet characteristics, the proposed models provide a better quality of forecast. Where the curve describing the raw-material indices contains oscillating portions, a larger number of measurements is required to construct a good forecast. The article also presents results of applying Brown's adaptive prediction model to forecasting the sugar beet raw-material indices. The statistical analysis supports conclusions about the level of forecast quality sufficient to describe changes in the raw-material indices. Optimal data discount rates are identified, determined by the shape of the growth curves of root sugar content and mass during maturation. Conclusions about forecast quality are formulated as a function of these factors, to be weighed by the expert forecaster. Finally, expressions derived from the experimental data are given that allow the raw-material characteristics of sugar beet to be calculated during maturation.
[Use and comprehensibility of statistical analysis in Archivos de Bronconeumología (1970-1999)].
De Granda Orive, J I; García Río, F; Gutiérrez Jiménez, T; Escobar Sacristán, J; Gallego Rodríguez, V; Sáez Valls, R
2002-08-01
To describe the types of statistical analysis used most often in original research articles published in Archivos de Bronconeumología, the evolution of statistical analysis over time in terms of complexity, and comprehensibility, taking bivariate analysis as the reference threshold. All articles published in the original research section of Archivos de Bronconeumología from 1970 through 1999 were reviewed manually. For each article we recorded the category or categories of statistical analysis and its comprehensibility (with a reference threshold set at category 7). We studied the following factors: year of publication, type of analysis, comprehensibility, maximum category achieved, subject area, and number of authors. Eight hundred sixty original articles, with a mean of 5 ± 2 authors per article, were examined. The maximum category reached was a mean of 4.15 ± 4.61. The three types of analysis used most often were category 1 (descriptive only) at 49.4%, category 2 (t and z tests) at 26.4%, and category 3 (bivariate tables) at 19.1%. Among the more complex analytical categories, the most often used were analysis of variance (category 8) at 9%, survival analysis (category 16) at 6.2%, and non-parametric correlations at 3.4%. Comparing results by decade, the proportion of articles with only descriptive analysis fell from 74% in the seventies to 63.9% in the eighties and to 36.1% in the nineties (90s vs. 80s). The complexity of the statistical analysis used in Archivos de Bronconeumología increased over time, while comprehensibility decreased.
Directory of Open Access Journals (Sweden)
Jerneja Pikelj
2015-06-01
Full Text Available The paper has two practical purposes. The first one is to analyze how successfully R can be used for data analysis on surveys carried out by the Statistical Office of the Republic of Slovenia. In order to achieve this goal, we analyzed the data of the Monthly Statistical Survey on Earnings Paid by Legal Persons. The second purpose is to analyze how the assumption on the nonresponse mechanism, which occurs in the sample, impacts the estimated values of the unknown statistics in the survey. Depending on these assumptions, different approaches to adjust the problem caused by unit nonresponse are presented. We conclude the paper with the results of the analysis of the data and the main issues connected with the usage of R in official statistics.
Directory of Open Access Journals (Sweden)
Bokang Ncube
2015-08-01
Full Text Available A large proportion of students at institutions of higher learning show an aversion to statistics, and these attitudes impede their performance. Among the factors affecting students' achievement in the subject are self-efficacy, self-concept, anxiety, and low self-perception. This study sought to explore students' perceptions of and attitudes towards statistics. The data were collected through SATS-36 and MPSP questionnaires from students attending lectures in first-year statistics and statistics-related courses at a university in South Africa. The findings show that students' perceived academic and professional relevance of statistics relates to their statistics proficiency. Students with low statistics self-perception are liable to develop negative attitudes towards the subject. Interest, mathematics and statistics self-efficacy, enjoyment, worth, relevance, and effort were identified as precursors of statistics course achievement.
The Inappropriate Symmetries of Multivariate Statistical Analysis in Geometric Morphometrics.
Bookstein, Fred L
In today's geometric morphometrics the commonest multivariate statistical procedures, such as principal component analysis or regressions of Procrustes shape coordinates on Centroid Size, embody a tacit roster of symmetries: axioms concerning the homogeneity of the multiple spatial domains or descriptor vectors involved that do not correspond to actual biological fact. These techniques are hence inappropriate for any application regarding which we have a priori biological knowledge to the contrary (e.g., genetic/morphogenetic processes common to multiple landmarks, the range of normal in anatomy atlases, the consequences of growth or function for form). But nearly every morphometric investigation is motivated by prior insights of this sort. We therefore need new tools that explicitly incorporate these elements of knowledge, should they be quantitative, to break the symmetries of the classic morphometric approaches. Some of these are already available in our literature but deserve to be known more widely: deflated (spatially adaptive) reference distributions of Procrustes coordinates, Sewall Wright's century-old variant of factor analysis, and the geometric algebra of importing explicit biomechanical formulas into Procrustes space. Other methods, not yet fully formulated, might involve parameterized models for strain in idealized forms under load, principled approaches to the separation of functional from Brownian aspects of shape variation over time, and, in general, a better understanding of how the formalism of landmarks interacts with the many other approaches to quantification of anatomy. To more powerfully organize inferences from the high-dimensional measurements that characterize so much of today's organismal biology, tomorrow's toolkit must rely neither on principal component analysis nor on the Procrustes distance formula, but instead on sound prior biological knowledge as expressed in formulas whose coefficients are not all the same. I describe the problems of
Statistical methods for the forensic analysis of striated tool marks
Energy Technology Data Exchange (ETDEWEB)
Hoeksema, Amy Beth [Iowa State Univ., Ames, IA (United States)
2013-01-01
In forensics, fingerprints can be used to uniquely identify suspects in a crime. Similarly, a tool mark left at a crime scene can be used to identify the tool that was used. However, the current practice of identifying matching tool marks involves visual inspection of marks by forensic experts which can be a very subjective process. As a result, declared matches are often successfully challenged in court, so law enforcement agencies are particularly interested in encouraging research in more objective approaches. Our analysis is based on comparisons of profilometry data, essentially depth contours of a tool mark surface taken along a linear path. In current practice, for stronger support of a match or non-match, multiple marks are made in the lab under the same conditions by the suspect tool. We propose the use of a likelihood ratio test to analyze the difference between a sample of comparisons of lab tool marks to a field tool mark, against a sample of comparisons of two lab tool marks. Chumbley et al. (2010) point out that the angle of incidence between the tool and the marked surface can have a substantial impact on the tool mark and on the effectiveness of both manual and algorithmic matching procedures. To better address this problem, we describe how the analysis can be enhanced to model the effect of tool angle and allow for angle estimation for a tool mark left at a crime scene. With sufficient development, such methods may lead to more defensible forensic analyses.
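One building block of such profilometry comparisons is a similarity score between two depth profiles. As an illustration (not the paper's statistic, which is a likelihood ratio over samples of such comparisons), a common choice is the maximum normalized cross-correlation over a window of lags; the traces below are synthetic:

```python
# Maximum normalized cross-correlation of two profilometry traces over a
# small range of lags. Profiles are synthetic, for illustration only.
def ncorr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def best_match(a, b, max_lag=2):
    # Slide the shorter trace b along a and keep the best correlation.
    return max(ncorr(a[l:l + len(b)], b) for l in range(max_lag + 1))

mark = [0.1, 0.5, 0.3, 0.9, 0.2, 0.7, 0.4]   # synthetic field mark
lab = mark[1:6]                              # same mark, offset by one sample
score = best_match(mark, lab)
```

A declared match would then be supported by comparing such scores against the spread of lab-to-lab comparison scores, as the likelihood ratio test described above formalizes.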
Mitchell, Patrick; Krotish, Debra; Shin, Yong-June; Hirth, Victor
2010-12-01
The effects of hypertension are chronic and continuous; it affects gait, balance, and fall risk. It is therefore desirable to assess gait health across hypertensive and nonhypertensive subjects in order to prevent or reduce the risk of falls. Analysis of electromyography (EMG) signals can identify age-related changes of neuromuscular activation due to various neuropathies and myopathies, but it is difficult to translate these changes into a clinical diagnosis. To examine and compare geriatric patients with these gait-altering diseases, we acquire EMG muscle activation signals and, by use of a time-synchronized mat capable of recording pressure information, we localize the EMG data to the gait cycle, ensuring identical comparison across subjects. Using time-frequency analysis of the EMG signal, in conjunction with several parameters obtained from the time-frequency analyses, we can determine the statistical discrepancy between diseases. We base these parameters on physiological manifestations caused by hypertension, as well as other comorbidities that affect the geriatric community. Using these metrics in a small population, we identify a statistical discrepancy between a control group and subjects with hypertension, neuropathy, diabetes, osteoporosis, arthritis, and several other common diseases that severely affect the geriatric community.
Gudmestad, Aarnes; House, Leanna; Geeslin, Kimberly L.
2013-01-01
This study constitutes the first statistical analysis to employ a Bayesian multinomial probit model in the investigation of subject expression in first and second language (L2) Spanish. The study analyzes the use of third-person subject-expression forms and demonstrates that the following variables are important for subject expression:…
Statistical analysis as approach to conductive heat transfer modelling
Antonyová, A.; Antony, P.
2013-04-01
The main inspiration for this article was the problem of the high investment needed to install building insulation. The question of its effectiveness and reliability after a period of 10 or 15 years was the topic of the international research project "Detection and Management of Risk Processes in Building Insulation" (SRDA SK-AT-0008-10), carried out at the University of Prešov in Prešov and the Vienna University of Technology. Detecting the moisture problem in the space between the wall and the insulation as a risk process led to the construction of new measuring equipment that tests moisture and temperature without destroying the insulation, and thus describes the real situation in old buildings as well. The subsequent investigation allowed us to analyse a data set of 1680 measurements and to express conductive heat transfer using the methods of statistical analysis. The modelling comprises relationships among the environmental properties inside the building, in the space between the wall and the insulation, and in the ambient surroundings of the building. A radial distribution function also characterizes the connection of the temperature differences.
Utility green pricing programs: A statistical analysis of program effectiveness
Energy Technology Data Exchange (ETDEWEB)
Wiser, Ryan; Olson, Scott; Bird, Lori; Swezey, Blair
2004-02-01
Green pricing programs allow electricity customers to support the development of renewable energy. Such programs have grown in number in recent years. The design features and effectiveness of these programs vary considerably, however, leading a variety of stakeholders to suggest specific marketing and program design features that might improve customer response and renewable energy sales. This report analyzes actual utility green pricing program data to provide further insight into which program features might help maximize both customer participation in green pricing programs and the amount of renewable energy purchased by customers in those programs. Statistical analysis is performed on both the residential and non-residential customer segments. Data come from information gathered through a questionnaire completed for 66 utility green pricing programs in early 2003. The questionnaire specifically gathered data on residential and non-residential participation, the amount of renewable energy sold, program length, the type of renewable supply used, program price/cost premiums, the types of consumer research and program evaluation performed, the different sign-up options available, program marketing efforts, and the ancillary benefits offered to participants.
Measurement of Plethysmogram and Statistical Method for Analysis
Shimizu, Toshihiro
The plethysmogram is measured at different points of the human body by using a photo interrupter and depends sensitively on the physical and mental state of the body. In this paper, statistical methods of data analysis are investigated to discuss the dependence of the plethysmogram on stress and aging. The first is a representation based on the return map, which provides useful information on the waveform, the fluctuation in phase, and the fluctuation in amplitude. The return map makes it possible to understand the fluctuation of the plethysmogram in amplitude and in phase more clearly and globally than the conventional power spectrum method. The second is the Lissajous plot and the correlation function, used to analyze the phase difference between the plethysmograms of the right and left finger tips. The third is the R-index, from which we can estimate “the age of the blood flow”. The R-index is defined by the global character of the plethysmogram, which is different from the usual APG index. The stress- and age-dependence of the plethysmogram is discussed using these methods.
Corrected Statistical Energy Analysis Model for Car Interior Noise
Directory of Open Access Journals (Sweden)
A. Putra
2015-01-01
Full Text Available Statistical energy analysis (SEA) is a well-known method for analyzing the flow of acoustic and vibration energy in a complex structure. In an acoustic space where significant absorptive materials are present, the direct field component from the sound source dominates the total sound field rather than the reverberant field, whereas the latter is the basis of the conventional SEA model. Such an environment can be found in a car interior, and thus a corrected SEA model is proposed here to handle this situation. The model is developed by eliminating the direct field component from the total sound field, so that only the power after the first reflection is considered. A test car cabin was divided into two subsystems and, using a loudspeaker as a sound source, the power injection method of SEA was employed to obtain the corrected coupling loss factor and damping loss factor of the corrected SEA model. These parameters were then used to predict the sound pressure level in the interior cabin from the input power injected by the engine. The results show satisfactory agreement with the directly measured SPL.
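The power injection method named above can be sketched for two subsystems: injecting a known power into each subsystem in turn and recording the resulting energy matrix E lets one solve the SEA power balance P = omega * L * E column-wise for the loss-factor matrix L. All numbers below are invented for illustration, not measurements from the test cabin.

```python
# Two-subsystem power injection sketch: recover the loss-factor matrix L
# from measured energies E (rows: subsystem, cols: experiment) and injected
# powers P, via L = P * inv(E) / omega. Values are hypothetical.
omega = 2 * 3.141592653589793 * 500.0    # band centre frequency (rad/s)
E = [[2.0, 0.5],
     [0.4, 1.5]]
P = [[1.0, 0.0],
     [0.0, 1.0]]

det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
Einv = [[ E[1][1] / det, -E[0][1] / det],
        [-E[1][0] / det,  E[0][0] / det]]

# L contains the damping loss factors (diagonal) and coupling terms.
L = [[sum(P[i][k] * Einv[k][j] for k in range(2)) / omega
      for j in range(2)]
     for i in range(2)]
```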
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Energy Technology Data Exchange (ETDEWEB)
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and by a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts with the smallest NRMSE in all parameters. The NRMSE of the solar irradiance forecasts of the ensemble model was 28.10% lower than that of the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
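The NRMSE metric named above can be written down in a few lines; normalizing the RMSE by the mean of the observations is one common convention and is an assumption here, since the abstract does not spell out the normalization used:

```python
# Normalized root mean squared error of a forecast against observations.
def nrmse(forecast, observed):
    n = len(observed)
    rmse = (sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n) ** 0.5
    mean_obs = sum(observed) / n
    return rmse / mean_obs     # normalization convention assumed here
```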
Statistical analysis of CSP plants by simulating extensive meteorological series
Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana
2017-06-01
The feasibility analysis of any power plant project needs an estimation of the amount of energy the plant will be able to deliver to the grid during its lifetime. To achieve this, the feasibility study requires precise knowledge of the solar resource over a long-term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several statistical probability-of-exceedance scenarios of the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by simulating the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
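Probability-of-exceedance levels such as P50 and P90 are empirical quantiles of the annual series; a minimal nearest-rank sketch follows, using a synthetic 11-year series rather than the 34-year data of the study:

```python
# Empirical probability-of-exceedance level: the value exceeded in p% of
# years, i.e. the (100 - p)th percentile of the annual series.
def p_exceed(values, p):
    s = sorted(values)
    idx = int((100 - p) / 100 * (len(s) - 1))   # nearest rank, no interpolation
    return s[idx]

annual_dni = [2000 + 10 * i for i in range(11)]  # synthetic annual DNI values
p50 = p_exceed(annual_dni, 50)    # exceeded in half of the years
p90 = p_exceed(annual_dni, 90)    # conservative low-resource scenario
```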
Statistical Analysis of Loss of Offsite Power Events
Directory of Open Access Journals (Sweden)
Andrija Volkanovski
2016-01-01
Full Text Available This paper presents the results of a statistical analysis of the loss of offsite power (LOOP) events registered in four reviewed databases: the IRSN (Institut de Radioprotection et de Sûreté Nucléaire) SAPIDE database and the GRS (Gesellschaft für Anlagen- und Reaktorsicherheit mbH) VERA database, reviewed over the period from 1992 to 2011, and the US NRC (Nuclear Regulatory Commission) Licensee Event Reports (LERs) database and the IAEA International Reporting System (IRS) database, screened for relevant events registered over the period from 1990 to 2013. The number of LOOP events in each year of the analysed period and the mode of operation were assessed during the screening. The LOOP frequencies obtained for the French and German nuclear power plants (NPPs) during critical operation are of the same order of magnitude, with plant-related events as the dominant contributor. A frequency of one LOOP event per shutdown year is obtained for German NPPs in the shutdown mode of operation. For the US NPPs, the obtained LOOP frequency for critical and shutdown modes is comparable to the one assessed in NUREG/CR-6890. A decreasing trend is obtained for the LOOP events registered in three of the databases (IRSN, GRS, and NRC).
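A LOOP frequency of the kind compared above is a simple rate estimate, events per reactor-year, with a Poisson counting uncertainty; the numbers below are hypothetical, not taken from the reviewed databases:

```python
# Point estimate and rough 1-sigma Poisson uncertainty of a LOOP frequency.
events, reactor_years = 12, 400          # hypothetical screening totals
freq = events / reactor_years            # events per reactor-year
sigma = events ** 0.5 / reactor_years    # sqrt(N) counting uncertainty
```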
Statistical Analysis of Development Trends in Global Renewable Energy
Directory of Open Access Journals (Sweden)
Marina D. Simonova
2016-01-01
Full Text Available The article focuses on the economic and statistical analysis of industries associated with the use of renewable energy sources (RES) in several countries. The dynamic development and implementation of technologies based on RES is the defining trend of world energy development. The uneven distribution of hydrocarbon reserves, the increasing demand of developing countries, and the environmental risks associated with the production and consumption of fossil resources have led to an increasing interest of many states in this field. Creating low-carbon economies involves implementing plans to increase the proportion of clean energy through renewable energy sources and energy efficiency and to reduce greenhouse gas emissions. The priority of this sector is a characteristic feature of the modern development of developed (USA, EU, Japan) and emerging (China, India, Brazil, etc.) economies, as evidenced by the inclusion of this segment in state energy strategies and the revision of existing approaches to energy security. The analysis of the use of renewable energy and its contribution to the value added of producer countries is of particular interest. Over the last decade, the share of energy produced from renewable sources in the energy balances of the world's largest economies has increased significantly. Every year the power-generating capacity based on renewable energy grows; this trend is especially apparent in China, the USA, and the European Union countries. There is a significant increase in direct investment in renewable energy: total investment over the past ten years increased by 5.6 times. The most rapidly developing sectors are solar energy and wind power.
Statistical analysis and optimization of igbt manufacturing flow
Directory of Open Access Journals (Sweden)
Baranov V. V.
2015-02-01
Full Text Available The use of computer simulation for the design and optimization of the technological processes by which power electronic devices are formed can significantly reduce development time, improve the accuracy of calculations, and allow the best implementation options to be chosen on the basis of strict mathematical analysis. One of the most common power electronic devices is the insulated gate bipolar transistor (IGBT), which combines the advantages of the MOSFET and the bipolar transistor. The demanding requirements for these devices can only be met by optimizing the device design and the manufacturing process parameters. An important and necessary step in the modern cycle of IC design and manufacturing is therefore statistical analysis. A procedure for optimizing the IGBT threshold voltage was implemented. Through screening experiments according to the Plackett-Burman design, the most important input parameters (factors) having the greatest impact on the output characteristic were detected. The coefficients of an approximation polynomial adequately describing the relationship between the input parameters and the investigated output characteristic were determined. Using the calculated approximation polynomial, a series of repeated Monte Carlo calculations was carried out to determine the spread of threshold voltage values over selected ranges of input parameter deviation. Combinations of input process parameter values were drawn randomly from a normal distribution within a given range of variation. The optimization of the IGBT process parameters amounts to the mathematical problem of determining the range of values of the significant structural and technological input parameters that keeps the IGBT threshold voltage within a given interval. The presented results demonstrate the effectiveness of the proposed optimization techniques.
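The Monte Carlo step described, propagating normally distributed process parameters through a fitted response polynomial, can be sketched as follows; the polynomial coefficients, the parameter names (oxide thickness, implant dose), and their ranges are all invented for illustration:

```python
# Monte Carlo spread of a threshold voltage computed from a hypothetical
# approximation polynomial in two process parameters.
import random

random.seed(0)

def vth(tox_nm, dose):
    # Invented response polynomial; a real one would come from the
    # Plackett-Burman screening and regression described above.
    return 2.0 + 0.15 * tox_nm + 0.08 * dose - 0.01 * tox_nm * dose

samples = [vth(random.gauss(10.0, 0.3), random.gauss(5.0, 0.2))
           for _ in range(10_000)]
mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

Checking whether the resulting spread stays within the specified threshold-voltage interval is then the acceptance criterion for the chosen parameter ranges.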
Orthogonal separations: Comparison of orthogonality metrics by statistical analysis.
Schure, Mark R; Davis, Joe M
2015-10-02
Twenty orthogonality metrics (OMs) derived from convex hull, information theory, fractal dimension, correlation coefficient, nearest neighbor distance and bin-density techniques were calculated from a diverse group of 47 experimental two-dimensional (2D) chromatograms. These chromatograms comprise two datasets; one dataset is a collection of 2D chromatograms from Peter Carr's laboratory at the University of Minnesota, and the other dataset is based on pairs of one-dimensional chromatograms previously published by Martin Gilar and coworkers (Waters Corp.). The chromatograms were pooled to make a third, combined dataset. Cross-correlation results suggest that specific OMs are correlated within families of nearest neighbor methods, correlation coefficients and the information theory methods. Principal component analysis of the OMs shows that no OM stands out as clearly better at explaining the data variance than any other OM. Principal component analysis of individual chromatograms shows that different OMs favor certain chromatograms. The chromatograms exhibit a range of quality, as subjectively graded by nine experts experienced in 2D chromatography. The subjective (grading) evaluations were taken at two intervals per expert and demonstrated excellent consistency for each expert. Excellent agreement for both very good and very bad chromatograms was seen across the range of experts. However, evaluation uncertainty increased for chromatograms that were judged as average to mediocre. The grades were converted to numbers (percentages) for numerical computations. The percentages were correlated with the OMs to establish good OMs for evaluating the quality of 2D chromatograms. Certain metrics correlate better than others; however, these results are not consistent across all chromatograms examined. Most of the nearest neighbor methods were observed to correlate poorly with the percentages. However, one method, devised by Clark and Evans, appeared to work
Ekobo Akoa, Brice; Simeu, Emmanuel; Lebowsky, Fritz
2014-01-01
This paper proposes two novel approaches to Video Quality Assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This new combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process. The second method evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work done on synthetic video artifacts. The results obtained by each method are compared with scores from a database resulting from subjective experiments.
TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION
Directory of Open Access Journals (Sweden)
А. А. Vershinina
2014-01-01
The technique of the statistical analysis of the investment appeal of a region for direct foreign investment is presented in this article. A definition of the technique of statistical analysis is given, the stages of the analysis are described, and the mathematical-statistical tools are considered.
The subjective meaning of cognitive architecture: a Marrian analysis.
Varma, Sashank
2014-01-01
Marr famously decomposed cognitive theories into three levels. Newell, Pylyshyn, and Anderson offered parallel decompositions of cognitive architectures, which are psychologically plausible computational formalisms for expressing computational models of cognition. These analyses focused on the objective meaning of each level - how it supports computational models that correspond to cognitive phenomena. This paper develops a complementary analysis of the subjective meaning of each level - how it helps cognitive scientists understand cognition. It then argues against calls to eliminatively reduce higher levels to lower levels, for example, in the name of parsimony. Finally, it argues that the failure to attend to the multiple meanings and levels of cognitive architecture contributes to the current, disunified state of theoretical cognitive science.
Bayesian analysis of factors associated with fibromyalgia syndrome subjects
Jayawardana, Veroni; Mondal, Sumona; Russek, Leslie
2015-01-01
Factors contributing to movement-related fear in subjects with fibromyalgia (FM) were assessed by Russek et al. (2014), based on data collected in a national internet survey of community-based individuals. The study focused on the Activities-Specific Balance Confidence scale (ABC), the Primary Care Post-Traumatic Stress Disorder screen (PC-PTSD), the Tampa Scale of Kinesiophobia (TSK), a Joint Hypermobility Syndrome screen (JHS), the Vertigo Symptom Scale (VSS-SF), Obsessive-Compulsive Personality Disorder (OCPD), and pain, work status and physical activity drawn from the Revised Fibromyalgia Impact Questionnaire (FIQR). The study presented in this paper revisits the same data with a Bayesian analysis in which appropriate priors are introduced for the variables selected in Russek's paper.
Statistical analysis of the ambiguities in the asteroid period determinations
Butkiewicz, M.; Kwiatkowski, T.; Bartczak, P.; Dudziński, G.
2014-07-01
A synodic period of an asteroid can be derived from its lightcurve by standard methods like Fourier-series fitting. A problem appears when the observations cover less than a full lightcurve and/or suffer from a high level of noise. Long gaps between individual lightcurves also create an ambiguity in the cycle count, which leads to aliases. Excluding binary systems and objects with non-principal-axis rotation, the rotation period is usually identical to the period of the second Fourier harmonic of the lightcurve. There are cases, however, where it may be connected with the 1st, 3rd, or 4th harmonic, and it is difficult to choose among them when searching for the period. To help remove such uncertainties, we analysed asteroid lightcurves for a range of shapes and observing/illuminating geometries. We simulated them using a modified internal code from the ISAM service (Marciniak et al. 2012, A&A 545, A131). In our computations, asteroid shapes were modeled as Gaussian random spheres (Muinonen 1998, A&A, 332, 1087), and a combination of the Lommel-Seeliger and Lambert scattering laws was assumed. For each of the 100 shapes, we randomly selected 1000 positions of the spin axis, systematically changing the solar phase angle with a step of 5°. For each lightcurve, we determined its peak-to-peak amplitude, fitted a 6th-order Fourier series and derived the amplitudes of its harmonics. Instead of the number of lightcurve extrema, which in many cases is subjective, we characterized each lightcurve by the order of its highest-amplitude Fourier harmonic. The goal of our simulations was to derive statistically significant conclusions (based on the underlying assumptions) about the dominance of different harmonics in lightcurves of a specified amplitude and phase angle. The results, presented in the Figure, can be used in individual cases to estimate the probability that the obtained lightcurve is dominated by a specified Fourier harmonic. Some of the
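The harmonic characterization described above can be sketched as follows. The projection assumes an evenly sampled, full-coverage lightcurve, and the synthetic curve is illustrative only:

```python
import math

def harmonic_amplitudes(mags, n_harmonics=6):
    """Project an evenly sampled, full-coverage lightcurve onto sine and
    cosine terms and return the amplitude of each Fourier harmonic."""
    n = len(mags)
    amps = []
    for k in range(1, n_harmonics + 1):
        a = 2.0 / n * sum(m * math.cos(2 * math.pi * k * i / n)
                          for i, m in enumerate(mags))
        b = 2.0 / n * sum(m * math.sin(2 * math.pi * k * i / n)
                          for i, m in enumerate(mags))
        amps.append(math.hypot(a, b))
    return amps

# Synthetic double-peaked lightcurve: the 2nd harmonic dominates, as is
# typical for an elongated asteroid rotating about its principal axis.
phase = [i / 200 for i in range(200)]
lc = [0.30 * math.sin(4 * math.pi * p) + 0.05 * math.sin(2 * math.pi * p)
      for p in phase]
amps = harmonic_amplitudes(lc)
dominant = amps.index(max(amps)) + 1
print("dominant harmonic:", dominant)  # prints 2 for this lightcurve
```

Characterizing a lightcurve by `dominant` rather than by counting extrema removes the subjective element mentioned in the abstract.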
A note on the statistical analysis of point judgment matrices
African Journals Online (AJOL)
There is scope for further research into statistical approaches for analyzing judgment matrices. In particular statistically based methods address rank reversal since standard errors are associated with estimates of the weights and thus the rankings are not stated with certainty. However, the weights are constrained to lie in a ...
Noel, Jean; Prieto, Juan C.; Styner, Martin
2017-03-01
Functional Analysis of Diffusion Tensor Tract Statistics (FADTTS) is a toolbox for analysis of white matter (WM) fiber tracts. It allows associating diffusion properties along major WM bundles with a set of covariates of interest, such as age, diagnostic status and gender, and the structure of the variability of these WM tract properties. However, to use this toolbox, a user must have intermediate knowledge of a scripting language (MATLAB). FADTTSter was created to overcome this issue and make the statistical analysis accessible to any non-technical researcher. FADTTSter is actively being used by researchers at the University of North Carolina. FADTTSter guides non-technical users through a series of steps, including quality control of subjects and fibers, in order to set up the parameters needed to run FADTTS. Additionally, FADTTSter implements interactive charts for FADTTS' outputs. These interactive charts enhance the researcher's experience and facilitate the analysis of the results. FADTTSter's motivation is to improve usability and provide the community with a new analysis tool that complements FADTTS. Ultimately, by bringing FADTTS to a broader audience, FADTTSter seeks to accelerate hypothesis testing in neuroimaging studies involving heterogeneous clinical data and diffusion tensor imaging. This work is submitted to the Biomedical Applications in Molecular, Structural, and Functional Imaging conference. The source code of this application is available on NITRC.
Directory of Open Access Journals (Sweden)
Kelly H. Zou
2010-01-01
Magnetization transfer (MT) imaging may have considerable promise for early detection and monitoring of subtle brain changes before they are apparent on conventional magnetic resonance images. At 3 Tesla (T), MT affords higher resolution and increased tissue contrast associated with macromolecules. The reliability and reproducibility of a new high-resolution MT strategy were assessed in brain images acquired from 9 healthy subjects. Repeated measures were taken for 12 brain regions of interest (ROIs): genu, splenium, and the left and right hemispheres of the hippocampus, caudate, putamen, thalamus, and cerebral white matter. Spearman's correlation coefficient, the coefficient of variation, and the intraclass correlation coefficient (ICC) were computed. Multivariate mixed-effects regression models were used to fit the mean ROI values and to test the significance of the effects due to region, subject, observer, time, and manual repetition. A sensitivity analysis of various model specifications and the corresponding ICCs was conducted. Our statistical methods may be generalized to many similar evaluative studies of the reliability and reproducibility of various imaging modalities.
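One of the reliability indices mentioned above, the intraclass correlation coefficient, can be sketched with a simple one-way random-effects formula, rather than the study's multivariate mixed-effects models; the repeated measurements below are invented:

```python
import statistics

def icc_oneway(subjects):
    """One-way random-effects ICC(1,1) for repeated measures: `subjects`
    is a list of per-subject measurement lists of equal length k. A
    simplified agreement index, not the study's multivariate model."""
    k = len(subjects[0])
    n = len(subjects)
    grand = statistics.mean(m for s in subjects for m in s)
    # Between-subject and within-subject mean squares
    msb = k * sum((statistics.mean(s) - grand) ** 2 for s in subjects) / (n - 1)
    msw = sum((m - statistics.mean(s)) ** 2
              for s in subjects for m in s) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Two repeated MT-ratio measurements per subject (illustrative values)
data = [[0.41, 0.43], [0.38, 0.37], [0.45, 0.46], [0.50, 0.49], [0.33, 0.34]]
icc = icc_oneway(data)
print(f"ICC = {icc:.3f}")
```

Values near 1 indicate that between-subject differences dominate measurement noise, i.e. high reproducibility.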
Willard, Melissa A Bodnar; McGuffin, Victoria L; Smith, Ruth Waddell
2012-01-01
Salvia divinorum is a hallucinogenic herb that is internationally regulated. In this study, salvinorin A, the active compound in S. divinorum, was extracted from S. divinorum plant leaves using a 5-min extraction with dichloromethane. Four additional Salvia species (Salvia officinalis, Salvia guaranitica, Salvia splendens, and Salvia nemorosa) were extracted using this procedure, and all extracts were analyzed by gas chromatography-mass spectrometry. Differentiation of S. divinorum from other Salvia species was successful based on visual assessment of the resulting chromatograms. To provide a more objective comparison, the total ion chromatograms (TICs) were subjected to principal components analysis (PCA). Prior to PCA, the TICs were subjected to a series of data pretreatment procedures to minimize non-chemical sources of variance in the data set. Successful discrimination of S. divinorum from the other four Salvia species was possible based on visual assessment of the PCA scores plot. To provide a numerical assessment of the discrimination, a series of statistical procedures such as Euclidean distance measurement, hierarchical cluster analysis, Student's t tests, Wilcoxon rank-sum tests, and Pearson product moment correlation were also applied to the PCA scores. The statistical procedures were then compared to determine the advantages and disadvantages for forensic applications.
AMA Statistical Information Based Analysis of a Compressive Imaging System
Hope, D.; Prasad, S.
We present a statistical information-based analysis of a compressive imaging system, using a new, highly efficient and robust method for evaluating statistical entropies. Our method is based on the notion of density of states (DOS), which plays a major role in statistical mechanics by allowing one to express macroscopic thermal averages in terms of the number of configuration states of a system at a certain energy level. Instead of computing the number of states at a certain energy level, however, we compute the number of possible configurations (states) of a particular image scene that correspond to a certain probability value. This allows us to compute the probability of each possible state, or configuration, of the scene being imaged. We assess the performance of a single-pixel compressive imaging system by the amount of information encoded and transmitted in parameters that characterize the information in the scene. Among many examples, we study the problem of faint companion detection. Here, we show how the information in the recorded images depends on the choice of basis for representing the scene and on the amount of measurement noise. The noise creates confusion when associating a recorded image with the correct member of the ensemble that produced the image. We show that multiple measurements enable one to mitigate this confusion noise.
Statistical analysis of compressive low rank tomography with random measurements
Acharya, Anirudh; Guţă, Mădălin
2017-05-01
We consider the statistical problem of 'compressive' estimation of low rank states (r ≪ d) with random basis measurements, where r and d are the rank and dimension of the state, respectively. We investigate whether, for a fixed sample size N, the estimation error associated with a 'compressive' measurement setup is 'close' to that of the setting where a large number of bases are measured. We generalise and extend previous results, and show that the mean square error (MSE) associated with the Frobenius norm attains the optimal rate rd/N with only O(r log d) random basis measurements for all states. An important tool in the analysis is the concentration of the Fisher information matrix (FIM). We demonstrate that although a concentration of the MSE follows from a concentration of the FIM for most states, the FIM fails to concentrate for states with eigenvalues close to zero. We analyse this phenomenon in the case of a single qubit and demonstrate a concentration of the MSE about its optimal value despite a lack of concentration of the FIM for states close to the boundary of the Bloch sphere. We also consider the estimation error in terms of a different metric: the quantum infidelity. We show that a concentration in the mean infidelity (MINF) does not exist uniformly over all states, highlighting the importance of loss function choice. Specifically, we show that for states that are nearly pure, the MINF scales as 1/√N, but the constant converges to zero as the number of settings is increased. This demonstrates a lack of 'compressive' recovery for nearly pure states in this metric.
Tutorial on Biostatistics: Statistical Analysis for Correlated Binary Eye Data.
Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard
2018-02-01
To describe and demonstrate methods for analyzing correlated binary eye data, we describe non-model-based (McNemar's test, Cochran-Mantel-Haenszel test) and model-based methods (generalized linear mixed effects model, marginal model) for analyses involving both eyes. These methods were applied to: (1) CAPT (Complications of Age-related Macular Degeneration Prevention Trial), where one eye was treated and the other observed (paired design); (2) ETROP (Early Treatment for Retinopathy of Prematurity), where bilaterally affected infants had one eye treated conventionally and the other treated early, and unilaterally affected infants had treatment assigned randomly; and (3) AREDS (Age-Related Eye Disease Study), where treatment was systemic and the outcome was eye-specific (both eyes in the same treatment group). In the CAPT (n = 80), the treatment effect (30% vision loss in treated vs. 44% in observed eyes) was not statistically significant (p = 0.07) when inter-eye correlation was ignored, but was significant (p = 0.01) with McNemar's test and the marginal model. Using standard logistic regression for unfavorable vision in ETROP, standard errors and p-values were larger for person-level covariates and smaller for ocular covariates than under models accounting for inter-eye correlation. For risk factors of geographic atrophy in AREDS, two-eye analyses accounting for inter-eye correlation yielded more power than one-eye analyses and provided larger standard errors and p-values than invalid two-eye analyses ignoring inter-eye correlation. Ignoring inter-eye correlation can lead to larger p-values for paired designs and smaller p-values when both eyes are in the same group. Marginal models or mixed effects models using the eye as the unit of analysis provide valid inference.
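McNemar's test for a paired design, as applied to the CAPT example above, can be sketched as follows; the discordant-pair counts are illustrative, not the trial's data:

```python
import math

def mcnemar(b, c):
    """McNemar's test for paired binary outcomes: b and c are the counts
    of discordant pairs (event in one eye only). Concordant pairs do not
    enter the statistic. Returns the chi-square value (1 df) and p-value."""
    chi2 = (b - c) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2.0))  # chi-square(1 df) survival function
    return chi2, p

# Illustrative paired-eye counts (not the CAPT data): 20 discordant pairs
# favour the treated eye, 7 favour the observed eye.
chi2, p = mcnemar(20, 7)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

Because only discordant pairs carry information about the within-pair treatment effect, the test automatically accounts for the inter-eye correlation that a naive two-sample comparison ignores.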
Probability and Statistics Questions and Tests : a critical analysis
Directory of Open Access Journals (Sweden)
Fabrizio Maturo
2015-06-01
In probability and statistics courses, a popular method for evaluating students is to assess them using multiple-choice tests. The use of these tests makes it possible to evaluate certain types of skills, such as fast response, short-term memory, mental clarity and the ability to compete. In our opinion, verification through testing can certainly be useful for the analysis of certain aspects and for speeding up the assessment process, but we should be aware of the limitations of such a standardized procedure and therefore rule out that the assessment of pupils, classes and schools can be reduced to the processing of test results. To support this thesis, this article discusses the main limitations of tests in detail, presents some recent models proposed in the literature, and suggests some alternative assessment methods. Keywords: item response theory, assessment, tests, probability
Statistical analysis of unstructured amino acid residues in protein structures.
Lobanov, M Yu; Garbuzynskiy, S O; Galzitskaya, O V
2010-02-01
We have performed a statistical analysis of unstructured amino acid residues in the protein structures available in the databank of protein structures. Data on the occurrence of disordered regions at the ends and in the middle part of protein chains have been obtained: the regions near the ends (at a distance of less than 30 residues from the N- or C-terminus) contain 66% of the unstructured residues (38% near the N-terminus and 28% near the C-terminus), although these terminal regions include only 23% of all amino acid residues. The frequencies of occurrence of unstructured residues have been calculated for each of the 20 residue types in different positions in the protein chain. It has been shown that the relative frequencies of occurrence of unstructured residues of the 20 types at the termini of protein chains differ from those in the middle part of the chain; amino acid residues of the same type have different probabilities of being unstructured in the terminal regions and in the middle part of the protein chain. The obtained frequencies of occurrence of unstructured residues in the middle part of the protein chain have been used as a scale for predicting disordered regions from the amino acid sequence using the method (FoldUnfold) previously developed by us. This scale of frequencies correlates with the contact scale (previously developed by us and used for the same purpose) at a level of 95%. Testing the new scale on a database of 427 unstructured proteins and 559 completely structured proteins has shown that it can be successfully used for the prediction of disordered regions in protein chains.
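Building a frequency scale of this kind reduces to counting, per residue type, how often that type is flagged as unstructured. A minimal sketch with invented sequences and disorder masks (not ENSDF-style real data):

```python
from collections import Counter

def disorder_frequencies(chains):
    """Frequency with which each residue type is unstructured, computed
    from (sequence, disorder-mask) pairs. A toy version of building a
    FoldUnfold-style frequency scale; the data below are invented."""
    total, disordered = Counter(), Counter()
    for seq, mask in chains:
        for aa, dis in zip(seq, mask):
            total[aa] += 1
            if dis:
                disordered[aa] += 1
    return {aa: disordered[aa] / total[aa] for aa in total}

# Two mock chains with disorder flags (1 = unstructured residue)
chains = [
    ("GSSPAL", [1, 1, 1, 0, 0, 0]),
    ("PLAGSS", [1, 0, 0, 0, 1, 1]),
]
freqs = disorder_frequencies(chains)
print(freqs["S"], freqs["L"])  # prints 1.0 0.0
```

In the actual study, such counts would be restricted to terminal or middle-chain positions to obtain the position-dependent scales described above.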
Statistical Analysis of Upper Ocean Time Series of Vertical Shear.
1982-05-01
Recovered fragments of the report: Section 5 covers the statistical analysis of shear; preliminary statistical tests (Section 5.1) include an autocorrelation and run test for randomness (Section 5.1.1), with test parameters based on the statistical model for S7(NTz) from Section 4. Two approaches to estimating the decorrelation interval are described: one estimates the interval directly from shear autocorrelation functions, and the other uses the run test.
Gini s ideas: new perspectives for modern multivariate statistical analysis
Directory of Open Access Journals (Sweden)
Angela Montanari
2013-05-01
Corrado Gini (1884-1964) may be considered the greatest Italian statistician. We believe that his important contributions to statistics, although mainly limited to the univariate context, may be profitably employed in modern multivariate statistical methods aimed at overcoming the curse of dimensionality by decomposing multivariate problems into a series of suitably posed univariate ones. In this paper we critically summarize Gini's proposals and consider their impact on multivariate statistical methods, both reviewing already well established applications and suggesting new perspectives. Particular attention is devoted to classification and regression trees, multiple linear regression, linear dimension reduction methods and transvariation-based discrimination.
DEFF Research Database (Denmark)
Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.
Geometrically non-linear multi-degree-of-freedom (MDOF) systems subject to random excitation are considered. New semi-analytical approximate forward difference equations for the lower-order non-stationary statistical moments of the response are derived from the stochastic differential equations of motion, and the accuracy of these equations is numerically investigated. For stationary excitations, the proposed method computes the stationary statistical moments of the response from the solution of non-linear algebraic equations.
Analysis of statistical model properties from discrete nuclear structure data
Firestone, Richard B.
2012-02-01
Experimental M1, E1, and E2 photon strengths have been compiled from experimental data in the Evaluated Nuclear Structure Data File (ENSDF) and the Evaluated Gamma-ray Activation File (EGAF). Over 20,000 Weisskopf reduced transition probabilities were recovered from the ENSDF and EGAF databases. These transition strengths have been analyzed for their dependence on transition energies, initial and final level energies, spin/parity dependence, and nuclear deformation. ENSDF BE1W values were found to increase exponentially with energy, possibly consistent with the Axel-Brink hypothesis, although considerable excess strength was observed for transitions between 4 and 8 MeV. No similar energy dependence was observed in EGAF or ARC data. BM1W average values were nearly constant at all energies above 1 MeV, with substantial excess strength below 1 MeV and between 4 and 8 MeV. BE2W values decreased exponentially by a factor of 1000 from 0 to 16 MeV. The distribution of ENSDF transition probabilities for all multipolarities could be described by a lognormal statistical distribution. BE1W, BM1W, and BE2W strengths all increased substantially for initial transition level energies between 4 and 8 MeV, possibly due to the dominance of spin-flip and Pygmy resonance transitions at those excitations. Analysis of the average resonance capture data indicated no transition probability dependence on final level spins or energies between 0 and 3 MeV. The comparison of favored to unfavored transition probabilities for odd-A or odd-Z targets indicated only partial support for the expected branching intensity ratios, with many unfavored transitions having nearly the same strength as favored ones. Average resonance capture BE2W transition strengths generally increased with greater deformation. Analysis of ARC data suggests that there is a large E2 admixture in M1 transitions, with the mixing ratio δ ≈ 1.0. The ENSDF reduced transition strengths were considerably stronger than those derived from capture gamma ray
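Fitting a lognormal model to a set of reduced transition strengths, as reported above, amounts to estimating normal parameters on the log scale. A sketch on synthetic strengths (not the ENSDF data):

```python
import math
import random
import statistics

# Synthetic "reduced transition strengths" drawn from a lognormal law,
# standing in for the compiled Weisskopf-reduced values.
rng = random.Random(42)
strengths = [math.exp(rng.gauss(-3.0, 1.2)) for _ in range(5000)]

# Fitting a lognormal = estimating normal parameters on the log scale
logs = [math.log(s) for s in strengths]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)
median = math.exp(mu)  # the lognormal median
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, median strength = {median:.4f}")
```

A goodness-of-fit check on the log-transformed sample (e.g. a normality test) would then decide whether the lognormal description is adequate, as the abstract asserts for the ENSDF distribution.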
A statistical framework for differential network analysis from microarray data
Directory of Open Access Journals (Sweden)
Datta Somnath
2010-02-01
Background: It has long been well known that genes do not act alone; rather, groups of genes act in concert during a biological process. Consequently, the expression levels of genes are dependent on each other. Experimental techniques to detect such interacting pairs of genes have been in place for quite some time. With the advent of microarray technology, newer computational techniques to detect such interaction or association between gene expressions are being proposed, leading to an association network. While most microarray analyses look for genes that are differentially expressed, it is of potentially greater significance to identify how entire association network structures change between two or more biological settings, say normal versus diseased cell types. Results: We provide a recipe for conducting a differential analysis of networks constructed from microarray data under two experimental settings. At the core of our approach lies a connectivity score that represents the strength of genetic association or interaction between two genes. We use this score to propose formal statistical tests for each of the following queries: (i) whether the overall modular structures of the two networks are different, (ii) whether the connectivity of a particular set of "interesting genes" has changed between the two networks, and (iii) whether the connectivity of a given single gene has changed between the two networks. A number of examples of this score are provided. We carried out our method on two types of simulated data: Gaussian networks and networks based on differential equations. We show that, for appropriate choices of the connectivity scores and tuning parameters, our method works well on simulated data. We also analyze a real data set involving normal versus heavy mice and identify an interesting set of genes that may play key roles in obesity. Conclusions: Examining changes in network structure can provide valuable information about the
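A hedged sketch of a test in the spirit of query (iii) above, asking whether the connectivity of a single gene pair changes between two conditions, using Pearson correlation as the connectivity score and a permutation test. The expression values are simulated, and the score and test are generic choices, not necessarily the paper's:

```python
import random
import statistics

def pearson(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def differential_connectivity(g1_a, g2_a, g1_b, g2_b, n_perm=1000, seed=0):
    """Permutation test for a change in the connectivity (here, Pearson
    correlation) of one gene pair between conditions A and B: sample pairs
    are shuffled across conditions under the null of no difference."""
    rng = random.Random(seed)
    observed = abs(pearson(g1_a, g2_a) - pearson(g1_b, g2_b))
    pairs = list(zip(g1_a, g2_a)) + list(zip(g1_b, g2_b))
    n_a, hits = len(g1_a), 0
    for _ in range(n_perm):
        rng.shuffle(pairs)
        pa, pb = pairs[:n_a], pairs[n_a:]
        d = abs(pearson([u for u, _ in pa], [v for _, v in pa])
                - pearson([u for u, _ in pb], [v for _, v in pb]))
        hits += d >= observed
    return observed, (hits + 1) / (n_perm + 1)

# Simulated expression: the pair is co-expressed in A, independent in B
rng = random.Random(7)
g1_a = [rng.gauss(0, 1) for _ in range(40)]
g2_a = [x + rng.gauss(0, 0.4) for x in g1_a]
g1_b = [rng.gauss(0, 1) for _ in range(40)]
g2_b = [rng.gauss(0, 1) for _ in range(40)]
diff, p = differential_connectivity(g1_a, g2_a, g1_b, g2_b)
print(f"connectivity change = {diff:.2f}, permutation p = {p:.4f}")
```

The same shuffle-and-recompute pattern extends to gene sets and whole-network scores, at correspondingly higher computational cost.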
Olive mill wastewater characteristics: modelling and statistical analysis
Directory of Open Access Journals (Sweden)
Martins-Dias, Susete
2004-09-01
A synthesis of the work carried out on olive mill wastewater (OMW) characterisation is given, covering articles published over the last 50 years. Data on OMW characterisation found in the literature are summarised, and correlations between them and with the phenolic compounds content are sought. This permits the characteristics of an OMW to be estimated from one simple measurement: the phenolic compounds concentration. A model based on OMW characterisations from six countries was developed, along with a model for Portuguese OMW. The statistical analysis of the correlations obtained indicates that the chemical oxygen demand (COD) of a given OMW is a second-degree polynomial function of its phenolic compounds concentration. Tests to evaluate the significance of the regressions were carried out, based on multivariable ANOVA analysis and on visual inspection of the standardised residuals distribution and their means, for confidence levels of 95 and 99%, clearly validating these models. This modelling work will help in the future planning, operation and monitoring of an OMW treatment plant.
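The reported second-degree polynomial relationship can be illustrated by an ordinary least-squares quadratic fit; the COD and phenolic-concentration values below are fabricated for the sketch, not the paper's measurements:

```python
def fit_quadratic(x, y):
    """Least-squares fit of y = c0 + c1*x + c2*x^2 via the normal
    equations, solved by Gaussian elimination with partial pivoting."""
    s = [sum(xi ** k for xi in x) for k in range(5)]
    a = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    b = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(a[r][i]))
        a[i], a[piv] = a[piv], a[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, 3):
            f = a[r][i] / a[i][i]
            for col in range(i, 3):
                a[r][col] -= f * a[i][col]
            b[r] -= f * b[i]
    c = [0.0, 0.0, 0.0]
    for i in range(2, -1, -1):
        c[i] = (b[i] - sum(a[i][j] * c[j] for j in range(i + 1, 3))) / a[i][i]
    return c

# Synthetic data following COD = 10 + 15*P + 2*P^2, with P the phenolic
# compounds concentration (units are notional g/L)
phenols = [0.5, 1.0, 2.0, 3.0, 4.5, 6.0, 7.5]
cod = [10 + 15 * p + 2 * p ** 2 for p in phenols]
c0, c1, c2 = fit_quadratic(phenols, cod)
print(f"COD = {c0:.2f} + {c1:.2f}*P + {c2:.2f}*P^2")
```

With real scattered data, the recovered coefficients would come with standard errors, which is where the ANOVA-based significance tests mentioned above enter.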
Petocz, Agnes; Newbery, Glenn
2010-01-01
Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…
Descriptive Analysis of Single Subject Research Designs: 1983-2007
Hammond, Diana; Gast, David L.
2010-01-01
Single subject research methodology is commonly used and cited in special education courses and journals. This article reviews the types of single subject research designs published in eight refereed journals between 1983 and 2007 used to answer applied research questions. Single subject designs were categorized as withdrawal/reversal, time…
STATISTICAL ANALYSIS OF THE DEMOLITION OF THE HITCH DEVICES ELEMENTS
Directory of Open Access Journals (Sweden)
V. V. Artemchuk
2009-03-01
The results of a statistical study of the wear of automatic coupler body butts and thrust plates of electric locomotives are presented in the article. Because of their increased wear, these elements require special attention.
Climate time series analysis classical statistical and bootstrap methods
Mudelsee, Manfred
2010-01-01
This book presents bootstrap resampling as a computationally intensive method able to meet the challenges posed by the complexities of analysing climate data. It shows how the bootstrap performs reliably in the most important statistical estimation techniques.
3D geometry analysis of the medial meniscus – a statistical shape modeling approach
Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N
2014-01-01
The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to design well-functioning, anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation, and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be primarily due to size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between clusters 1, 2 and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence
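The statistical-shape-model pipeline described above (mean shape, principal modes of variation, reconstruction of each shape as a linear combination of components) can be sketched in a few lines. Everything here is a toy stand-in: random numbers replace the segmented meniscus surfaces, and numpy is an assumed tool, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 35-subject dataset: each row is one shape
# flattened into 12 coordinates (real meniscus surfaces have far more).
shapes = rng.normal(size=(35, 12))

# Statistical shape model: mean shape plus principal modes of variation.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
u, s, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = modes
explained = s ** 2 / (s ** 2).sum()                      # variance fraction per mode

# Each original shape is exactly a linear combination of the components.
scores = centered @ vt.T
reconstructed = mean_shape + scores @ vt
```

The `explained` vector is what lets the study attribute 53.8% of the variation to a size-like first mode; clustering would then operate on the per-subject `scores`.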
Gait analysis in demented subjects: Interests and perspectives
Directory of Open Access Journals (Sweden)
Olivier Beauchet
2008-03-01
Olivier Beauchet (Department of Geriatrics, Angers University Hospital, France); Gilles Allali and Frédéric Assal (Department of Neurology, Geneva University Hospital, Switzerland); Gilles Berrut (Department of Geriatrics, Nantes University Hospital, France); Caroline Hommet (Department of Internal Medicine and Geriatrics, Tours University Hospital, France); Véronique Dubost (Department of Geriatrics, Dijon University Hospital, France). Abstract: Gait disorders are more prevalent in dementia than in normal aging and are related to the severity of cognitive decline. Dementia-related gait changes (DRGC) mainly comprise a decrease in walking speed, driven by a shorter stride length and a longer support phase. More recently, dual-task-related changes in gait were found in Alzheimer's disease (AD) and non-Alzheimer dementia, even at an early stage. An increase in stride-to-stride variability during usual walking and dual-tasking has been shown to be more specific and sensitive than any change in mean values in subjects with dementia. These data show that DRGC are associated not only with motor disorders but also with problems in central processing of information, and they highlight that dysfunction of the temporal and frontal lobes may in part explain gait impairment among demented subjects. Gait assessment, and more particularly dual-task analysis, is therefore crucial in the early diagnosis of dementia and/or related syndromes in the elderly. Moreover, dual-task disturbances could be a specific marker of falling at a pre-dementia stage. Keywords: gait, prediction of dementia, risk of falling, older adult
Corriveau, H; Arsenault, A B; Dutil, E; Lepage, Y
1992-01-01
An evaluation based on the Bobath approach to treatment has previously been developed and partially validated. The purpose of the present study was to verify the content validity of this evaluation using a statistical approach known as principal components analysis. Thirty-eight hemiplegic subjects participated in the study. Scores on each of six parameters (sensorium, active movements, muscle tone, reflex activity, postural reactions, and pain) were evaluated on three occasions across a 2-month period. Each analysis produced three factors that together contained 70% of the variation in the data set. The first component mainly reflected variations in mobility, the second mainly variations in muscle tone, and the third mainly variations in sensorium and pain. The results of this exploratory analysis highlight the fact that some of the parameters are not only important but also interrelated. These results seem to partially support the conceptual framework substantiating the Bobath approach to treatment.
Alonso-Prieto, Esther; Pancaroglu, Raika; Dalrymple, Kirsten A; Handy, Todd; Barton, Jason J S; Oruc, Ipek
2015-01-01
Prior event-related potential studies using group statistics within a priori selected time windows have yielded conflicting results about familiarity effects in face processing. Our goal was to evaluate the temporal dynamics of the familiarity effect at all time points at the single-subject level. Ten subjects were shown faces of anonymous people or celebrities. Individual results were analysed using a point-by-point bootstrap analysis. While familiarity effects were less consistent at later epochs, all subjects showed them between 130 and 195 ms in occipitotemporal electrodes. However, the relation between the time course of familiarity effects and the peak latency of the N170 was variable. We concluded that familiarity effects between 130 and 195 ms are robust and can be shown in single subjects. The variability of their relation to the timing of the N170 potential may lead to underestimation of familiarity effects in studies that use group-based statistics.
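The point-by-point bootstrap the record above applies at the single-subject level amounts to resampling single trials and asking whether the condition difference's confidence interval excludes zero at each time point. The sketch below illustrates one time point with invented amplitudes; the trial counts, effect size, and numpy implementation are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_diff_ci(cond_a, cond_b, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the mean amplitude difference between
    two conditions at a single time point, for one subject's trials."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        a = rng.choice(cond_a, size=cond_a.size, replace=True)
        b = rng.choice(cond_b, size=cond_b.size, replace=True)
        diffs[i] = a.mean() - b.mean()
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Synthetic single-trial amplitudes (microvolts): familiar faces evoke a
# larger (more negative) deflection in this toy example.
familiar = rng.normal(-4.0, 1.0, size=60)
anonymous = rng.normal(-2.0, 1.0, size=60)
lo, hi = bootstrap_diff_ci(familiar, anonymous)
significant = not (lo <= 0.0 <= hi)  # CI excluding zero -> familiarity effect
```

Running this test at every sample between, say, 130 and 195 ms yields the per-subject time course of the familiarity effect without committing to an a priori window.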
Harmonic analysis of peripheral pulse for screening subjects at high risk of diabetes.
Jindal, G D; Jain, Rajesh Kumar; Bhat, Sushma N; Pande, Jyoti A; Sawant, Manasi S; Jindal, Sameer K; Deshpande, Alaka K
2017-08-01
The power spectral density (PSD) of peripheral pulses in humans has been investigated in the past for its clinical applications. Continuing these efforts, data acquired using the Peripheral Pulse Analyser in research projects sponsored by the Board of Research in Nuclear Sciences, covering 207 control subjects, 18 descendants of diabetic patients and 22 patients with systemic hypertension, were subjected to PSD analysis for a study of harmonics. Application software named Pulse Harmonic Analyser, developed specifically for this work, selected 131,072 samples from each data file, obtained the PSD, derived 52 PHA parameters and saved them in an Excel sheet. The coefficient of variation in the control data was reduced significantly by application of the Central Limit Theorem, which enabled the use of parametric methods for statistical analysis of the observations. Data in hypertensive patients showed significant differences from controls in eight parameters at low values of α and β. Data in offspring of diabetic patients also showed a significant difference in one parameter, indicating its usefulness in screening subjects with a genetic predisposition to Type-II diabetes. The PHA analysis also yielded sub-harmonic components, which are related to combined variability in heart rate, pulse volume and pulse morphology, and it has the potential to become the method of choice for real-time variability monitoring.
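The harmonic decomposition underlying the record above can be illustrated with a synthetic pulse waveform: the power spectrum of a periodic pulse concentrates at the heart-rate fundamental and its integer harmonics. The signal, sampling rate, and harmonic amplitudes below are invented for demonstration and do not reproduce the Pulse Harmonic Analyser's actual parameters.

```python
import numpy as np

fs = 100.0                     # sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)   # 30 s of signal
heart_rate_hz = 1.2            # fundamental, ~72 beats/min

# Synthetic peripheral pulse: fundamental plus weaker 2nd and 3rd harmonics.
pulse = (np.sin(2 * np.pi * heart_rate_hz * t)
         + 0.50 * np.sin(2 * np.pi * 2 * heart_rate_hz * t)
         + 0.25 * np.sin(2 * np.pi * 3 * heart_rate_hz * t))

# Periodogram-style PSD estimate; the strongest peak is the fundamental,
# and power at integer multiples gives the harmonic content.
spectrum = np.abs(np.fft.rfft(pulse)) ** 2
freqs = np.fft.rfftfreq(pulse.size, d=1 / fs)
fundamental = freqs[np.argmax(spectrum)]
```

Ratios of harmonic powers to the fundamental are the kind of derived quantity a harmonic-analysis parameter set could be built from.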
Analysis of the cephalometric pattern of Brazilian achondroplastic adult subjects
Directory of Open Access Journals (Sweden)
Renato Cardoso
2012-12-01
OBJECTIVE: The aim of this study was to assess the position of the cranial base, maxilla, and mandible of Brazilian achondroplastic adult subjects through cephalometric measurements of the cranio-dento-facial complex, and to compare the results to normal patterns established in the literature. METHODS: Fourteen achondroplastic adult subjects were evaluated based on their radiographic cephalometric measurements, which were obtained using the tracings proposed by Downs, Steiner, Björk, Ricketts and McNamara. Statistical comparison of the means was performed with Student's t test. RESULTS: When compared to normal patterns, the cranial base was smaller in both its anterior and posterior portions, the cranial base angle was acute, and there was an anterior projection of the porion; the maxilla was smaller in both the anteroposterior and transverse directions, inclined anteriorly with anterior vertical excess, and retropositioned in relation to the cranial base and the mandible; the mandible presented a normal-sized ramus, a decreased body and transverse dimension, a tendency towards vertical growth and clockwise rotation, and was slightly protruded in relation to the cranial base and maxilla. CONCLUSION: Although wide individual variation was observed in some parameters, it was possible to identify significant differences responsible for the phenotypical characteristics of achondroplastic patients.
Zeman, Philip M; Till, Bernie C; Livingston, Nigel J; Tanaka, James W; Driessen, Peter F
2007-12-01
To evaluate the effectiveness of a new method of using Independent Component Analysis (ICA) and k-means clustering to increase the signal-to-noise ratio of Event-Related Potential (ERP) measurements while permitting standard statistical comparisons to be made despite the inter-subject variations characteristic of ICA. Per-subject ICA results were used to create a channel pool, with unequal weights, that could be applied consistently across subjects. Signals derived from this and other pooling schemes, and from unpooled electrodes, were subjected to identical statistical analysis of the N170 own-face effect in a Joe/No Joe face recognition paradigm wherein participants monitored for a target face (Joe) presented amongst other unfamiliar faces and their own face. Results between the Joe, unfamiliar face and own face conditions were compared using Cohen's d statistic (square root of signal-to-noise ratio) to measure effect size. When the own-face condition was compared to the Joe and unfamiliar-face conditions, the channel map method increased effect size by a factor ranging from 1.2 to 2.2. These results stand in contrast to previous findings, where conventional pooling schemes failed to reveal an N170 effect to the own-face stimulus (Tanaka JW, Curran T, Porterfield A, Collins D. The activation of pre-existing and acquired face representations: the N250 ERP as an index of face familiarity. J Cogn Neurosci 2006;18:1488-97). Consistent with conventional pooling schemes, the channel map approach showed no reliable differences between the Joe and Unfamiliar face conditions, yielding a decrease in effect size ranging from 0.13 to 0.75. By increasing the signal-to-noise ratio in the measured waveforms, the channel pool method demonstrated an enhanced sensitivity to the neurophysiological response to own-face relative to other faces. By overcoming the characteristic inter-subject variations of ICA, this work allows classic ERP analysis methods to exploit the improved
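The record above quantifies its gains with Cohen's d, described as the square root of the signal-to-noise ratio. A minimal sketch of that effect-size measure follows; the pooled-SD form is one standard definition, and the example values are invented, not the study's amplitudes.

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation: the standardized mean
    difference between two samples (one common definition)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.size, y.size
    pooled_var = ((nx - 1) * x.var(ddof=1)
                  + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

# Toy per-trial ERP amplitudes for two conditions (hypothetical values).
own_face = [1, 2, 3, 4, 5]
other_face = [3, 4, 5, 6, 7]
d = cohens_d(own_face, other_face)
```

In the study's terms, a channel-pooling scheme that raises d by a factor of 1.2 to 2.2 is increasing exactly this standardized separation between conditions.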
Cramer, Gregory D; Cantu, Joe A; Pocius, Judith D; Cambron, Jerrilyn A; McKinnis, Ray A
2010-01-01
The purpose of this study was to assess the reliability of measurements of the zygapophysial (Z) joint space made from the magnetic resonance imaging scans of subjects with acute low back pain, using new equipment and 2 different methods of statistical analysis. If found to be reliable, the methods of Z joint measurement can be applied to scans taken before and after spinal manipulation in a larger study of acute low back pain subjects. Three observers measured the central anterior-to-posterior distance of the left and right L4/L5 and L5/S1 Z joint spaces from 5 subject scans (20 digitizer measurements, rounded to 0.1 mm) on 2 separate occasions separated by 4 weeks. Observers were blinded to each other and to their previous work. Intra- and interobserver reliability was calculated by means of intraclass correlation coefficients and also by mean differences using the methods of Bland and Altman (1986). A mean difference of less than +/-0.4 mm was considered clinically acceptable. Intraclass correlation coefficients showed intraobserver reliabilities of 0.95 (95% confidence interval, 0.87-0.98), 0.83 (0.62-0.92), and 0.92 (0.83-0.96) for each of the 3 observers and interobserver reliabilities of 0.90 (0.82-0.95), 0.79 (0.61-0.90), and 0.84 (0.75-0.90) for the first and second measurements and overall reliability, respectively. The mean difference between the first and second measurements was -0.04 mm (+/-1.96 SD = -0.37 to 0.29), 0.23 (-0.48 to 0.94), 0.25 (-0.24 to 0.75), and 0.15 (-0.44 to 0.74) for each of the 3 observers and the overall agreement, respectively. Both statistical methods were found to be useful and complementary, and showed the measurements to be highly reliable. Copyright 2010 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.
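The Bland and Altman (1986) agreement statistics reported above (bias and bias ± 1.96 SD limits of agreement) are straightforward to compute. In this sketch only the ±0.4 mm clinical threshold comes from the abstract; the paired measurements are invented.

```python
import numpy as np

def bland_altman(first, second):
    """Bland & Altman (1986) agreement statistics for paired repeated
    measurements: bias (mean difference) and 95% limits of agreement."""
    first, second = np.asarray(first, float), np.asarray(second, float)
    diff = first - second
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical repeated Z joint-space measurements (mm) by one observer.
m1 = [2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.3, 2.5]
m2 = [2.0, 2.5, 1.8, 2.7, 2.1, 2.1, 2.2, 2.4]
bias, lo, hi = bland_altman(m1, m2)
acceptable = abs(bias) < 0.4  # the study's clinical-acceptability threshold
```

Intraclass correlation coefficients, the study's other method, answer a complementary question (consistency relative to between-subject variation) and are typically computed from an ANOVA decomposition rather than from differences.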
Linley, Heather S; Sled, Elizabeth A; Culham, Elsie G; Deluzio, Kevin J
2010-12-01
Trunk lean over the stance limb during gait has been linked to a reduction in the knee adduction moment, which is associated with joint loading. We examined differences in knee adduction moments and frontal plane trunk lean during gait between subjects with knee osteoarthritis and a control group of healthy adults. Gait analysis was performed on 80 subjects (40 with osteoarthritis). Lateral trunk lean was defined in two ways: the line connecting the midpoint between two reference points on the pelvis and the midpoint between the acromion processes was projected onto the lab frontal plane and onto the pelvis frontal plane. Pelvic tilt was also measured in the frontal plane as the angle between the pelvic and lab coordinate systems. Angles were calculated across the stance phase of gait. We analyzed the data (i) by extracting discrete parameters (mean and peak waveform values), and (ii) by using principal component analysis to extract shape and magnitude differences between the waveforms. Osteoarthritis subjects had a higher knee adduction moment than the control group (α=0.05). Although the discrete parameters for trunk lean did not show differences between groups, principal component analysis did detect characteristic waveform differences between the control and osteoarthritis groups. A thorough biomechanical analysis revealed small differences in the pattern of motion of the pelvis and the trunk between subjects with knee osteoarthritis and control subjects; however, these differences were only detectable using principal component analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
Statistics and data analysis for financial engineering with R examples
Ruppert, David
2015-01-01
The new edition of this influential textbook, geared towards graduate or advanced undergraduate students, teaches the statistics necessary for financial engineering. In doing so, it illustrates concepts using financial markets and economic data, R Labs with real-data exercises, and graphical and analytic methods for modeling and diagnosing modeling errors. Financial engineers now have access to enormous quantities of data. To make use of these data, the powerful methods in this book, particularly about volatility and risks, are essential. Strengths of this fully-revised edition include major additions to the R code and the advanced topics covered. Individual chapters cover, among other topics, multivariate distributions, copulas, Bayesian computations, risk management, multivariate volatility and cointegration. Suggested prerequisites are basic knowledge of statistics and probability, matrices and linear algebra, and calculus. There is an appendix on probability, statistics and linear algebra. Practicing fina...
Statistical analysis of natural disasters and related losses
Pisarenko, VF
2014-01-01
The study of disaster statistics and disaster occurrence is a complicated interdisciplinary field involving the interplay of new theoretical findings from several scientific fields like mathematics, physics, and computer science. Statistical studies on the mode of occurrence of natural disasters largely rely on fundamental findings in the statistics of rare events, which were derived in the 20th century. With regard to natural disasters, it is not so much the fact that the importance of this problem for mankind was recognized during the last third of the 20th century - the myths one encounters in ancient civilizations show that the problem of disasters has always been recognized - rather, it is the fact that mankind now possesses the necessary theoretical and practical tools to effectively study natural disasters, which in turn supports effective, major practical measures to minimize their impact. All the above factors have resulted in considerable progress in natural disaster research. Substantial accrued ma...
Hydroelastic analysis of a rectangular plate subjected to slamming loads
Wang, Shan; Guedes Soares, C.
2017-12-01
A hydroelastic analysis of a rectangular plate subjected to slamming loads is presented. An analytical model based on Wagner theory is used for calculations of transient slamming load on the ship plate. A thin isotropic plate theory is considered for determining the vibration of a rectangular plate excited by an external slamming force. The forced vibration of the plate is calculated by the modal expansion method. Analytical results of the transient response of a rectangular plate induced by slamming loads are compared with numerical calculations from finite element method. The theoretical slamming pressure based on Wagner model is applied on the finite element model of a plate. Good agreement is obtained between the analytical and numerical results for the structural deflection of a rectangular plate due to slamming pressure. The effects of plate dimension and wave profile on the structural vibration are discussed as well. The results show that a low impact velocity and a small wetted radial length of wave yield negligible effects of hydroelasticity.
Nanotechnology in concrete: Critical review and statistical analysis
Glenn, Jonathan
This thesis presents an extensive literature review on the use of nanotechnology in the field of cement and concrete, together with a summary. The research was divided into two categories: (1) nanoparticles and (2) nanofibers and nanotubes. The successes and challenges of each category are documented in this thesis. The data from the literature search are analyzed using statistical prediction, by means of Monte Carlo and Bayesian methods. The thesis shows how statistical prediction can be used to analyze patterns and trends and to discover optimal additive dosages for concrete mixes.
Gene Identification Algorithms Using Exploratory Statistical Analysis of Periodicity
Mukherjee, Shashi Bajaj; Sen, Pradip Kumar
2010-10-01
Studying periodic patterns is a standard line of attack for characterising DNA sequences in gene identification and similar problems, yet surprisingly little significant work has been done in this direction. This paper studies statistical properties of DNA sequences of a complete genome using a new technique. A DNA sequence is converted to a numeric sequence using various types of mappings, and standard Fourier techniques are applied to study the periodicity. Distinct statistical behaviour of the periodicity parameters is found in coding and non-coding sequences, which can be used to distinguish between these parts. Here, DNA sequences of Drosophila melanogaster were analyzed with significant accuracy.
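The map-then-Fourier idea in the record above can be sketched with one common choice of mapping. The binary indicator (Voss) representation and the period-3 statistic below are standard illustrations of the approach, not the paper's specific technique; the toy sequences are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def period3_power(seq):
    """Peak-to-average spectral power at period 3 for a DNA string.

    Each nucleotide is mapped to a 0/1 indicator sequence (the Voss
    mapping, one common numeric encoding) and the four power spectra are
    summed. Coding DNA tends to peak at frequency 1/3 (codon structure).
    """
    n = len(seq)
    total = np.zeros(n // 2 + 1)
    for base in "ACGT":
        indicator = np.array([1.0 if c == base else 0.0 for c in seq])
        total += np.abs(np.fft.rfft(indicator - indicator.mean())) ** 2
    k = round(n / 3)                  # FFT bin closest to frequency 1/3
    return total[k] / total[1:].mean()

coding_like = "ATG" * 100            # perfect codon-like periodicity (toy)
random_like = "".join(rng.choice(list("ACGT"), size=300))
```

A large ratio flags codon-like periodicity; thresholding such statistics along a genome is one way to separate coding from non-coding stretches.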
Categorical and nonparametric data analysis choosing the best statistical technique
Nussbaum, E Michael
2014-01-01
Featuring in-depth coverage of categorical and nonparametric statistics, this book provides a conceptual framework for choosing the most appropriate type of test in various research scenarios. Class tested at the University of Nevada, the book's clear explanations of the underlying assumptions, computer simulations, and Exploring the Concept boxes help reduce reader anxiety. Problems inspired by actual studies provide meaningful illustrations of the techniques. The underlying assumptions of each test and the factors that impact validity and statistical power are reviewed so readers can explain
Subjective dimension in the analysis of human development
Directory of Open Access Journals (Sweden)
LÓPEZ NOVAL, Borja
2012-06-01
In recent years, subjective evaluations of one's own quality of life, summarised as levels of life satisfaction or happiness, have been gaining importance as indicators of development. Some authors state that subjective well-being is a necessary and sufficient condition for human development. In this work the arguments of these authors are explained, and the role subjective evaluations should play in development studies is discussed. The main conclusion is that although it is necessary to integrate subjective well-being into human development studies, we cannot equate subjective well-being with development.
Lio, Guillaume; Boulinguez, Philippe
2013-02-15
A mandatory assumption in blind source separation (BSS) of the human electroencephalogram (EEG) is that the mixing matrix remains invariant, i.e., that the sources, electrodes and geometry of the head do not change during the experiment. In practice, this is often not the case. For instance, it is common for some electrodes to move slightly during EEG recording. This issue is even more critical for group independent component analysis (gICA), a method of growing interest, in which only one mixing matrix is estimated for several subjects. Indeed, because of interindividual anatomo-functional variability, this method violates the mandatory principle of invariance. Here, using simulated (experiments 1 and 2) and real (experiment 3) EEG data, we test how eleven current BSS algorithms cope with distortions of the mixing matrix. We show that this usual kind of perturbation creates non-Gaussian features that are virtually added to all sources, impairing the estimation of real higher order statistics (HOS) features of the actual sources by HOS algorithms (e.g., Ext-INFOMAX, FASTICA). HOS-based methods are likely to identify more components (with similar properties) than actual neurological sources, a problem frequently encountered by BSS users. In practice, the quality of the recovered signal and the efficiency of subsequent source localization are substantially impaired. Performing dimensionality reduction before applying HOS-based BSS does not seem to be a safe strategy to circumvent the problem. Second order statistics (SOS)-based BSS methods belonging to the less popular SOBI family class are much less sensitive to this bias. Copyright © 2012 Elsevier Inc. All rights reserved.
Simulation modelling analysis for small sets of single-subject data collected over time.
Borckardt, Jeffrey J; Nash, Michael R
2014-01-01
The behavioural data yielded by single subjects in naturalistic and controlled settings likely contain valuable information for scientists and practitioners alike. Although some of the properties unique to these data complicate statistical analysis, progress has been made in developing specialised techniques for rigorous data evaluation. There are no perfect tests currently available to analyse short autocorrelated data streams, but there are some promising approaches that warrant further development. Although many approaches have been proposed, and some appear better than others, they all have some limitations. When data sets are large enough (∼30 data points per phase), the researcher has a reasonably rich palette of statistical tools from which to choose. However, when the data set is sparse, the analytical options dwindle. Simulation modelling analysis (SMA; described in this article) is a relatively new technique that appears to offer acceptable Type-I and Type-II error rate control with short streams of autocorrelated data. However, at this point it is probably too early to endorse any specific statistical approach for short, autocorrelated time-series data streams. While SMA shows promise, more work is needed to verify that it is capable of reliable Type-I and Type-II error performance with short, serially dependent streams of data.
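The core idea of SMA, comparing an observed phase difference against streams simulated from a model that preserves the data's autocorrelation, can be sketched roughly as follows. This is a simplified illustration in the spirit of SMA, not Borckardt and Nash's actual implementation, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def sma_pvalue(baseline, treatment, n_sim=500):
    """Rough SMA-style test: estimate lag-1 autocorrelation from the
    observed stream, simulate AR(1) null streams of the same length, and
    report how often a simulated phase-mean difference is at least as
    large as the observed one."""
    baseline = np.asarray(baseline, float)
    treatment = np.asarray(treatment, float)
    data = np.concatenate([baseline, treatment])
    centered = data - data.mean()
    phi = float(np.clip(np.corrcoef(centered[:-1], centered[1:])[0, 1],
                        -0.95, 0.95))
    innovation_sd = data.std(ddof=1) * np.sqrt(max(1.0 - phi ** 2, 1e-6))
    observed = abs(treatment.mean() - baseline.mean())

    n_a = baseline.size
    hits = 0
    for _ in range(n_sim):
        sim = np.empty(data.size)
        sim[0] = rng.normal(0.0, data.std(ddof=1))
        for t in range(1, sim.size):
            sim[t] = phi * sim[t - 1] + rng.normal(0.0, innovation_sd)
        if abs(sim[n_a:].mean() - sim[:n_a].mean()) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)
```

With identical phases the observed difference is zero and the p-value is 1 by construction; a clear level shift pushes it down. Estimating autocorrelation from a stream that contains the level shift inflates the estimate, which is one reason short-series tests like this need careful validation, as the abstract argues.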
A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect
de Vos, A.F.; Tol, R.S.J.
1998-01-01
This paper demonstrates that there is a robust statistical relationship between the records of the global mean surface air temperature and the atmospheric concentration of carbon dioxide over the period 1870-1991. As such, the enhanced greenhouse effect is a plausible explanation for the observed
A note on the statistical analysis of point judgment matrices
African Journals Online (AJOL)
… by Saaty in the 1970s, which has received considerable attention in the mathematical and statistical literature [11, 18]. The core … question is how to determine the weights associated with the objects. … 3 Distributional approaches … Research Foundation of South Africa for financial support. The authors are also grateful …
Statistical analysis of DNT detection using chemically functionalized microcantilever arrays
DEFF Research Database (Denmark)
Bosco, Filippo; Bache, M.; Hwu, E.-T.
2012-01-01
from 1 to 2 cantilevers have been reported, without any information on repeatability and reliability of the presented data. In explosive detection high reliability is needed and thus a statistical measurement approach needs to be developed and implemented. We have developed a DVD-based read-out system...
Statistical analysis and optimization of copper biosorption capability ...
African Journals Online (AJOL)
2012-03-01
% glucose, 0.5% yeast extract, supplemented with 20 ml/L apple juice) with 15% ... represented by "+" sign, while dead cells, temperature of 25°C, dry weight of 0.13 ... optimum. Using the Microsoft Excel program, statistical t-.
A statistical analysis on the leak detection performance of ...
Indian Academy of Sciences (India)
This paper attempts to provide statistical insight into the leak detection performance of WSNs when deployed on overground and underground pipelines. The approach in the study employs the hypothesis testing problem to formulate a solution for the detection plan. Through the hypothesis test, the maximum ...
Did Tanzania Achieve the Second Millennium Development Goal? Statistical Analysis
Magoti, Edwin
2016-01-01
Development Goal "Achieve universal primary education", the challenges faced, along with the way forward towards achieving the fourth Sustainable Development Goal "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all". Statistics show that Tanzania has made very promising steps…
Herbal gardens of India: A statistical analysis report | Rao | African ...
African Journals Online (AJOL)
A knowledge system of the herbal garden in India was developed and these herbal gardens' information was statistically classified for efficient data processing, sharing and retrieving of information, which could act as a decision tool to the farmers, researchers, decision makers and policy makers in the field of medicinal ...
Statistical analysis of stream sediment geochemical data from Oyi ...
African Journals Online (AJOL)
Ife Journal of Science ... The results of concentrations of twenty-four elements treated with both univariate and multivariate statistical analytical techniques revealed that all the elements analyzed except Co, Cr, Fe and V ... The cumulative probability plots of the elements showed that Mn and Cu consisted of one population.
Differences in subjective well-being within households: An analysis ...
African Journals Online (AJOL)
We investigate differences in subjective well-being (life satisfaction) within the household using matched data on co-resident couples drawn from the 2008 National Income Dynamics Study for South Africa. The majority of men and women in co-resident partnerships report different levels of subjective well-being. We use ...
Taylor, Sandra L; Ruhaak, L Renee; Weiss, Robert H; Kelly, Karen; Kim, Kyoungmi
2017-01-01
High-throughput mass spectrometry (MS) is now being used to profile small molecular compounds across multiple biological sample types from the same subjects with the goal of leveraging information across biospecimens. Multivariate statistical methods that combine information from all biospecimens could be more powerful than the usual univariate analyses. However, missing values are common in MS data and imputation can impact between-biospecimen correlation and multivariate analysis results. We propose two multivariate two-part statistics that accommodate missing values and combine data from all biospecimens to identify differentially regulated compounds. Statistical significance is determined using a multivariate permutation null distribution. Relative to univariate tests, the multivariate procedures detected more significant compounds in three biological datasets. In a simulation study, we showed that multi-biospecimen testing procedures were more powerful than single-biospecimen methods when compounds are differentially regulated in multiple biospecimens, but univariate methods can be more powerful if compounds are differentially regulated in only one biospecimen. We provide R functions to implement and illustrate our method as supplementary information. Contact: sltaylor@ucdavis.edu. Supplementary data are available at Bioinformatics online.
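The permutation logic behind such significance testing can be sketched in a simplified, univariate form (an illustration of building a permutation null distribution in general, not the authors' multivariate two-part R functions; all names and data are hypothetical):

```python
import random

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Builds a permutation null distribution by repeatedly relabeling
    the pooled observations, then counts how often the shuffled
    statistic is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n_x = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                          # random relabeling
        x_p, y_p = pooled[:n_x], pooled[n_x:]
        stat = abs(sum(x_p) / n_x - sum(y_p) / len(y_p))
        if stat >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)                 # add-one smoothing
```

The same machinery extends to any statistic: compute it on the observed labeling, recompute it under many random relabelings, and read the p-value off the tail.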
Gender influence on white matter microstructure: a tract-based spatial statistics analysis.
Directory of Open Access Journals (Sweden)
Richard A Kanaan
Full Text Available BACKGROUND: Sexual dimorphism in human brain structure is well recognised, but less is known about gender differences in white matter microstructure. We used diffusion tensor imaging to explore gender differences in fractional anisotropy (FA), an index of microstructural integrity. We previously found increased FA in the corpus callosum in women, and increased FA in the cerebellum and left superior longitudinal fasciculus (SLF) in men, using a whole-brain voxel-based analysis. METHODS: A whole-brain tract-based spatial statistics analysis of 120 matched subjects from the previous analysis, and 134 new subjects (147 men and 107 women in total), using a 1.5T scanner, with division into tract-based regions of interest. RESULTS: Men had higher FA in the superior cerebellar peduncles and women had higher FA in the corpus callosum in both the first and second samples. The higher SLF FA in men was not found in either sample. DISCUSSION: We confirmed our previous, controversial finding of increased FA in the corpus callosum in women, and increased cerebellar FA in men. The corpus callosum FA difference offers some explanation for the otherwise puzzling advantage in inter-callosal transfer time shown in women; the cerebellar FA difference may be associated with the developmental motor advantage shown in men.
An Analysis of Attitudes toward Statistics: Gender Differences among Advertising Majors.
Fullerton, Jami A.; Umphrey, Don
This study measures advertising students' attitudes toward statistics. Subjects, 275 undergraduate advertising students from two southwestern United States universities, completed a questionnaire used to gauge students' attitudes toward statistics by measuring 6 underlying factors: (1) students' interest and future applicability; (2) relationship…
Single-subject analysis reveals variation in knee mechanics during step landing.
Scholes, Corey J; McDonald, Michael D; Parker, Anthony W
2012-08-09
Evidence concerning the alteration of knee function during landing suffers from a lack of consensus. This uncertainty can be attributed to methodological flaws, particularly in relation to the statistical analysis of variable human movement data. The aim of this study was to compare single-subject and group analyses in detecting changes in knee stiffness and coordination during step landing that occur independent of an experimental intervention. A group of healthy men (N=12) stepped down from a knee-high platform for 60 consecutive trials, each trial separated by a 1-minute rest. The magnitude and within-participant variability of sagittal stiffness and coordination of the landing knee were evaluated with both group and single-subject analyses. The group analysis detected significant changes in knee coordination. However, the single-subject analyses detected changes in all dependent variables, which included increases in variability with task repetition. Between-individual variation was also present in the timing, size and direction of alterations. The results have important implications for the interpretation of existing information regarding the adaptation of knee mechanics to interventions such as fatigue, footwear or landing height. It is proposed that a participant's natural variation in knee mechanics should be analysed prior to an intervention in future experiments.
Analysis of embedded waste storage tanks subjected to seismic loading
Energy Technology Data Exchange (ETDEWEB)
Zaslawsky, M.; Sammaddar, S.; Kennedy, W.N.
1991-12-31
At the Savannah River Site, High Activity Wastes are stored in carbon steel tanks that sit within reinforced concrete vaults. These soil-embedded tank/vault structures are approximately 80 ft. in diameter and 40 ft. deep. The tanks were studied to determine the governing variables and to reduce the problem to the least number of governing cases, optimizing the analysis effort without introducing excessive conservatism. The problem was reduced to a limited number of soil-structure interaction and fluid (tank contents)-structure interaction cases. It was theorized that substantially reduced input would be realized from soil-structure interaction (SSI), but it was also possible that tank-to-tank proximity would result in (re)amplification of the input. To determine the governing seismic input motion, the three-dimensional SSI code SASSI was used. Significant among the issues relative to waste tanks is the determination of fluid response and tank behavior as a function of tank contents viscosity. Tank seismic analyses and studies have been based on low-viscosity fluids (water), whose behavior is quite well understood. Typical wastes (salts, sludge), which are highly viscous, have not been the subject of studies to understand the effect of viscosity on seismic response. The computer code DYNA3D was used to study how viscosity alters tank wall pressure distribution and tank base shear and overturning moments. A parallel hand calculation was performed using standard procedures. Conclusions based on the study provide insight into the quantification of the reduction of seismic inputs for soil-structure interaction at a "soft" soil site.
Statistical Analysis of Questionnaire on Physical Rehabilitation in Multiple Sclerosis
Czech Academy of Sciences Publication Activity Database
Martinková, Patrícia; Řasová, K.
-, č. 3 (2010), S340 ISSN 1210-7859. [Obnovené neuroimunologické a likvorologické dny. 21.05.2010-22.05.2010, Praha] R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords: questionnaire * physical rehabilitation * multiple sclerosis Subject RIV: IN - Informatics, Computer Science
Statistical methods for data analysis in particle physics
Lista, Luca
2017-01-01
This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). First, the book provides an introduction to probability theory and basic statistics, mainly intended as a refresher from readers’ advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on both discoveries and upper limits, as many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as to eventually be able to distinguish between competing hypotheses, or to rule out some of them altogether. Many worked-out examples will help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical conc...
Statistical analysis of motion contrast in optical coherence tomography angiography
Cheng, Yuxuan; Pan, Cong; Lu, Tongtong; Hong, Tianyu; Ding, Zhihua; Li, Peng
2015-01-01
Optical coherence tomography angiography (Angio-OCT), mainly based on the temporal dynamics of OCT scattering signals, has found a range of potential applications in clinical and scientific research. In this work, based on the model of random phasor sums, temporal statistics of the complex-valued OCT signals are mathematically described. Statistical distributions of the amplitude differential (AD) and complex differential (CD) Angio-OCT signals are derived. The theories are validated through the flow phantom and live animal experiments. Using the model developed in this work, the origin of the motion contrast in Angio-OCT is mathematically explained, and the implications in the improvement of motion contrast are further discussed, including threshold determination and its residual classification error, averaging method, and scanning protocol. The proposed mathematical model of Angio-OCT signals can aid in the optimal design of the system and associated algorithms.
Common misconceptions about data analysis and statistics
Motulsky, Harvey J
2015-01-01
Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word “significant”. (4) Overreliance on standard errors, which are often misunderstood. PMID:25692012
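Mistake (1) can be made concrete with a small simulation: testing the same growing null dataset at several interim looks, and stopping at the first "significant" result, inflates the false-positive rate well above the nominal 5% (a hedged sketch with hypothetical names; a z test with known variance stands in for the usual t test):

```python
import math
import random

def false_positive_rates(n_sims=2000, peeks=(10, 20, 30, 40, 50), seed=1):
    """Compare an honest single test at the final sample size against
    'peeking' (re-testing after each batch, stopping at the first
    p < .05). Data are pure noise, so every rejection is a false positive.
    """
    rng = random.Random(seed)
    z_crit = 1.96                          # two-sided 5% critical value
    n_final = max(peeks)
    fixed_hits = peeking_hits = 0
    for _ in range(n_sims):
        data = [rng.gauss(0.0, 1.0) for _ in range(n_final)]
        # Honest analysis: one pre-registered test at n_final.
        if abs(sum(data)) / math.sqrt(n_final) > z_crit:
            fixed_hits += 1
        # P-hacked analysis: test at every interim look, stop when "significant".
        for n in peeks:
            if abs(sum(data[:n])) / math.sqrt(n) > z_crit:
                peeking_hits += 1
                break
    return fixed_hits / n_sims, peeking_hits / n_sims
```

With five looks, the peeking rate typically lands well above 10%, more than double the nominal level, which is exactly the inflation the abstract warns about.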
Learning to Translate: A Statistical and Computational Analysis
Directory of Open Access Journals (Sweden)
Marco Turchi
2012-01-01
Full Text Available We present an extensive experimental study of Phrase-based Statistical Machine Translation, from the point of view of its learning capabilities. Very accurate Learning Curves are obtained, using high-performance computing, and extrapolations of the projected performance of the system under different conditions are provided. Our experiments confirm existing and mostly unpublished beliefs about the learning capabilities of statistical machine translation systems. We also provide insight into the way statistical machine translation learns from data, including the respective influence of translation and language models, the impact of phrase length on performance, and various unlearning and perturbation analyses. Our results support and illustrate the fact that performance improves by a constant amount for each doubling of the data, across different language pairs, and different systems. This fundamental limitation seems to be a direct consequence of Zipf's law governing textual data. Although the rate of improvement may depend on both the data and the estimation method, it is unlikely that the general shape of the learning curve will change without major changes in the modeling and inference phases. Possible research directions that address this issue include the integration of linguistic rules or the development of active learning procedures.
Performance Analysis of Statistical Time Division Multiplexing Systems
Directory of Open Access Journals (Sweden)
Johnson A. AJIBOYE
2010-12-01
Full Text Available Multiplexing is a way of accommodating many input sources of low capacity over a high-capacity outgoing channel. Statistical Time Division Multiplexing (STDM) is a technique that allows more users to be multiplexed over the channel than the channel could otherwise support. STDM exploits time slots left unused by non-active users and allocates those slots to active users, so it is appropriate for bursty sources. In this way STDM normally utilizes channel bandwidth better than traditional Time Division Multiplexing (TDM). In this work, the statistical multiplexer is viewed as an M/M/1 queuing system and the performance is measured by comparing analytical results to simulation results using Matlab. The index used to determine the performance of the statistical multiplexer is the number of packets both in the system and in the queue. Analytical comparisons were also made between M/M/1 and M/M/2, and between M/M/1 and M/D/1 queue systems. At high utilizations, M/M/2 performs better than M/M/1. M/D/1 also outperforms M/M/1.
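The closed-form queue-length formulas underlying the analytical side of such a comparison can be sketched as follows (standard textbook M/M/1, M/D/1 and M/M/c results, not the paper's Matlab code; function names are illustrative):

```python
import math

def erlang_c(servers, offered_load):
    """Probability that an arriving packet must wait in an M/M/c queue."""
    a, c = offered_load, servers
    idle = sum(a**n / math.factorial(n) for n in range(c))
    busy = a**c / (math.factorial(c) * (1 - a / c))
    return busy / (idle + busy)

def mean_queue_length(model, rho, servers=1):
    """Mean number of packets waiting (excluding those in service)."""
    if model == "M/M/1":
        return rho**2 / (1 - rho)
    if model == "M/D/1":
        # Deterministic service halves the M/M/1 mean queue length.
        return rho**2 / (2 * (1 - rho))
    if model == "M/M/c":
        a = rho * servers                  # offered load in Erlangs
        return erlang_c(servers, a) * rho / (1 - rho)
    raise ValueError(model)
```

Comparing at equal per-server utilization ρ = 0.9, these give roughly 8.1 packets waiting for M/M/1, 4.05 for M/D/1 and about 7.67 for M/M/2, one way of reproducing the abstract's ranking at high utilization.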
Directory of Open Access Journals (Sweden)
Antonio Carlos da Silva Senra Filho
2017-08-01
Full Text Available Abstract Introduction: The search for human brain templates has progressed in the past decades, and in order to understand disease patterns a need for a standard diffusion tensor imaging (DTI) dataset was raised. For this purpose, some DTI templates were developed which assist group analysis studies. In this study, complementary information to the most commonly used DTI template is proposed in order to offer a patient-specific statistical analysis on diffusion-weighted data. Methods: 131 normal subjects were used to reconstruct a population-averaged template. After image pre-processing, reconstruction and diagonalization, the eigenvalues and eigenvectors were used to reconstruct the quantitative DTI maps, namely fractional anisotropy (FA), mean diffusivity (MD), relative anisotropy (RA), and radial diffusivity (RD). The mean absolute error (MAE) was calculated using a voxel-wise procedure, which informs the global error regarding the mean intensity value for each quantitative map. Results: The MAE values presented a low MAE estimate (max(MAE) = 0.112), showing a reasonable error measure between our DTI-USP-131 template and the classical DTI-JHU-81 approach, which also shows a statistical equivalence (p < 0.05) with the classical DTI template. Hence, the complementary standard deviation (SD) maps for each quantitative DTI map can be added to the classical DTI-JHU-81 template. Conclusion: In this study, variability DTI maps (SD maps) were reconstructed, providing the possibility of a voxel-wise statistical analysis in a patient-specific approach. Finally, the brain template (DTI-USP-131) described here was made available for research purposes on the web site (http://dx.doi.org/10.17632/br7bhs4h7m.1), being valuable for research and clinical applications.
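The voxel-wise statistics described above reduce, in essence, to per-voxel means and standard deviations across subjects plus a global mean absolute error between maps; a minimal sketch over flat voxel lists (illustrative pure Python, not the template authors' pipeline):

```python
import math

def voxelwise_stats(subject_maps):
    """Voxel-wise mean and standard-deviation (SD) maps across subjects.

    subject_maps: one flat list of voxel values per subject, all the
    same length (i.e. already co-registered to a common grid).
    """
    n = len(subject_maps)
    means, sds = [], []
    for voxel in zip(*subject_maps):            # iterate voxel positions
        m = sum(voxel) / n
        var = sum((v - m) ** 2 for v in voxel) / (n - 1)
        means.append(m)
        sds.append(math.sqrt(var))
    return means, sds

def mean_absolute_error(map_a, map_b):
    """Global MAE between two co-registered quantitative maps."""
    return sum(abs(a - b) for a, b in zip(map_a, map_b)) / len(map_a)
```

The SD map is what enables the patient-specific comparison: a patient's value at each voxel can be expressed in SD units relative to the population mean at that voxel.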
Advances in Statistical Methods for Meta-Analysis.
Hedges, Larry V.
1984-01-01
The adequacy of traditional effect size measures for research synthesis is challenged. Analogues to analysis of variance and multiple regression analysis for effect sizes are presented. The importance of tests for the consistency of effect sizes in interpreting results, and problems in obtaining well-specified models for meta-analysis are…
Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach
Directory of Open Access Journals (Sweden)
Martin M Monti
2011-03-01
Full Text Available Functional Magnetic Resonance Imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a General Linear Model (GLM) approach to separate stimulus induced signals from noise. Crucially, this approach relies on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to analysis of fMRI time-series, focusing in particular on the degree to which such data abides by the assumptions of the GLM framework, and on the methods that have been developed to correct for any violation of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power and false positive rates. Furthermore, this bias can have pervasive effects on both individual subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making.
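The GLM step the review scrutinizes can be illustrated in miniature: with a single boxcar regressor plus an intercept, the effect estimate is just the ordinary least-squares slope (a toy sketch ignoring autocorrelation, drift and the other assumption violations the paper discusses; names and data are hypothetical):

```python
def glm_beta(y, regressor):
    """Effect estimate for one regressor plus an implicit intercept:
    the ordinary least-squares slope beta = cov(x, y) / var(x)."""
    n = len(y)
    mx = sum(regressor) / n
    my = sum(y) / n
    sxx = sum((x - mx) ** 2 for x in regressor)
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(regressor, y))
    return sxy / sxx

# A boxcar regressor: 10 'rest' scans then 10 'task' scans, twice over.
boxcar = ([0.0] * 10 + [1.0] * 10) * 2
# Synthetic time-series: baseline 100, true effect size 2, plus a small
# deterministic perturbation standing in for noise.
signal = [100.0 + 2.0 * x + (0.1 if i % 2 == 0 else -0.1)
          for i, x in enumerate(boxcar)]
```

The review's point is that this estimate of beta stays unbiased even when GLM assumptions fail; it is the variance of the estimate, and hence the test statistic, that gets distorted.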
Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach.
Monti, Martin M
2011-01-01
Functional magnetic resonance imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a general linear model (GLM) approach to separate stimulus induced signals from noise. Crucially, this approach relies on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to analysis of fMRI time-series, focusing in particular on the degree to which such data abides by the assumptions of the GLM framework, and on the methods that have been developed to correct for any violation of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power, and false positive rates. Furthermore, this bias can have pervasive effects on both individual subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making.
Toward a theory of statistical tree-shape analysis
DEFF Research Database (Denmark)
Feragen, Aasa; Lo, Pechin Chien Pau; de Bruijne, Marleen
2013-01-01
In order to develop statistical methods for shapes with a tree-structure, we construct a shape space framework for tree-shapes and study metrics on the shape space. This shape space has singularities, which correspond to topological transitions in the represented trees. We study two closely related...... metrics on the shape space, TED and QED. QED is a quotient Euclidean distance arising naturally from the shape space formulation, while TED is the classical tree edit distance. Using Gromov's metric geometry we gain new insight into the geometries defined by TED and QED. We show that the new metric QED...
Introduction to statistical data analysis for the life sciences
Ekstrom, Claus Thorn
2014-01-01
This text provides a computational toolbox that enables students to analyze real datasets and gain the confidence and skills to undertake more sophisticated analyses. Although accessible with any statistical software, the text encourages a reliance on R. For those new to R, an introduction to the software is available in an appendix. The book also includes end-of-chapter exercises as well as an entire chapter of case exercises that help students apply their knowledge to larger datasets and learn more about approaches specific to the life sciences.
Improving the Conduct and Reporting of Statistical Analysis in Psychology.
Sijtsma, Klaas; Veldkamp, Coosje L S; Wicherts, Jelte M
2016-03-01
We respond to the commentaries Waldman and Lilienfeld (Psychometrika, 2015) and Wigboldus and Dotch (Psychometrika, 2015) provided in response to Sijtsma's (Sijtsma in Psychometrika, 2015) discussion article on questionable research practices. Specifically, we discuss the fear of an increased dichotomy between substantive and statistical aspects of research that may arise when the latter aspects are laid entirely in the hands of a statistician, remedies for false positives and replication failure, and the status of data exploration, and we provide a re-definition of the concept of questionable research practices.
Bayesian statistical analysis of censored data in geotechnical engineering
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob; Denver, Hans
2000-01-01
The geotechnical engineer is often faced with the problem of how to assess the statistical properties of a soil parameter on the basis of a sample measured in-situ or in the laboratory with the defect that some values have been replaced by interval bounds because the corresponding soil parameter values...... is available about the soil parameter distribution. The present paper shows how a characteristic value can be assessed systematically by computer calculations from the actual sample of censored data supplemented with prior information from a soil parameter data base....
An invariant approach to statistical analysis of shapes
Lele, Subhash R
2001-01-01
INTRODUCTION: A Brief History of Morphometrics; Foundations for the Study of Biological Forms; Description of the Data Sets. MORPHOMETRIC DATA: Types of Morphometric Data; Landmark Homology and Correspondence; Collection of Landmark Coordinates; Reliability of Landmark Coordinate Data; Summary. STATISTICAL MODELS FOR LANDMARK COORDINATE DATA: Statistical Models in General; Models for Intra-Group Variability; Effect of Nuisance Parameters; Invariance and Elimination of Nuisance Parameters; A Definition of Form; Coordinate System Free Representation of Form; Est...
JAWS data collection, analysis highlights, and microburst statistics
Mccarthy, J.; Roberts, R.; Schreiber, W.
1983-01-01
Organization, equipment, and the current status of the Joint Airport Weather Studies project initiated in relation to the microburst phenomenon are summarized. Some data collection techniques and preliminary statistics on microburst events recorded by Doppler radar are discussed as well. Radar studies show that microbursts occur much more often than expected, with the majority of the events being potentially dangerous to landing or departing aircraft. Seventy events were registered, with differential velocities ranging from 10 to 48 m/s; headwind/tailwind velocity differentials over 20 m/s are considered seriously hazardous. It is noted that a correlation is yet to be established between the velocity differential and incoherent radar reflectivity.
Data analysis of asymmetric structures advanced approaches in computational statistics
Saito, Takayuki
2004-01-01
Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications, offering a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.
Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis
Konrad, T. G.; Kropfli, R. A.
1975-01-01
Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.
The R software fundamentals of programming and statistical analysis
Lafaye de Micheaux, Pierre; Liquet, Benoit
2013-01-01
The contents of The R Software are presented so as to be both comprehensive and easy for the reader to use. Besides its application as a self-learning text, this book can support lectures on R at any level from beginner to advanced. This book can serve as a textbook on R for beginners as well as more advanced users, working on Windows, macOS or Linux. The first part of the book deals with the heart of the R language and its fundamental concepts, including data organization, import and export, various manipulations, documentation, plots, programming and maintenance. The last chapter in this part deals with object-oriented programming as well as interfacing R with C/C++ or Fortran, and contains a section on debugging techniques. This is followed by the second part of the book, which provides detailed explanations on how to perform many standard statistical analyses, mainly in the Biostatistics field. Topics from mathematical and statistical settings that are included are matrix operations, integration, o...
A COMPARISON OF SOME STATISTICAL TECHNIQUES FOR ROAD ACCIDENT ANALYSIS
OPPE, S INST ROAD SAFETY RES, SWOV
1992-01-01
At the TRRL/SWOV Workshop on Accident Analysis Methodology, held in Amsterdam in 1988, the need to establish a methodology for the analysis of road accidents was firmly stated by all participants. Data from different countries cannot be compared because there is no agreement on research methodology,
Using multivariate statistical analysis to assess changes in water ...
African Journals Online (AJOL)
Canonical correspondence analysis (CCA) showed that the environmental variables used in the analysis, discharge and month of sampling, explained a small proportion of the total variance in the data set – less than 10% at each site. However, the total data set variance, explained by the 4 hypothetical axes generated by ...
Sealed-bid auction of Netherlands mussels: statistical analysis
Kleijnen, J.P.C.; van Schaik, F.D.J.
2011-01-01
This article presents an econometric analysis of the many data on the sealed-bid auction that sells mussels in Yerseke town, the Netherlands. The goals of this analysis are obtaining insight into the important factors that determine the price of these mussels, and quantifying the performance of an
Griffiths, Dawn
2009-01-01
Wouldn't it be great if there were a statistics book that made histograms, probability distributions, and chi square analysis more enjoyable than going to the dentist? Head First Statistics brings this typically dry subject to life, teaching you everything you want and need to know about statistics through engaging, interactive, and thought-provoking material, full of puzzles, stories, quizzes, visual aids, and real-world examples. Whether you're a student, a professional, or just curious about statistical analysis, Head First's brain-friendly formula helps you get a firm grasp of statistics
Subjective Analysis and Objective Characterization of Adaptive Bitrate Videos
DEFF Research Database (Denmark)
Søgaard, Jacob; Tavakoli, Samira; Brunnström, Kjell
2016-01-01
The HTTP Adaptive Streaming (HAS) technology allows video service providers to improve network utilization and thereby increase the end-users' Quality of Experience (QoE). This has made HAS a widely used approach for audiovisual delivery. There are several previous studies aiming to identify...... the factors influencing the subjective QoE of adaptation events. However, adapting the video quality typically lasts on a time scale much longer than what current standardized subjective testing methods are designed for, thus making the full matrix design of the experiment on an event level hard to achieve....... In this study, we investigated the overall subjective QoE of 6-minute-long video sequences containing different sequential adaptation events. This was compared to a data set from our previous work performed to evaluate the individual adaptation events. We could then derive a relationship between the overall
A Unified Analysis for Subject Topics in Brazilian Portuguese
Directory of Open Access Journals (Sweden)
Aroldo de Andrade
2014-06-01
Full Text Available In this paper we discuss the phenomenon of subject topics, consisting of the movement of either a genitive or a locative constituent into subject position in Brazilian Portuguese. This construction occurs with different verb classes, shows subject-verb agreement and precludes a resumptive pronoun. The goal of the present text is to account for its distribution. To do so, we argue that the two subclasses of unaccusative verbs found with genitive and locative topics instantiate some sort of secondary predication, and that only specific configurations allow for the movement of a constituent out of the argument structure domain. Finally, we address the comparative issue involved in explaining why the derivation of such a construction is not possible in European Portuguese.
Shouno, Hayaru; Kido, Shoji; Okada, Masato
2004-09-01
Bidirectional associative memory (BAM) is a kind of artificial neural network used to memorize and retrieve heterogeneous pattern pairs. Many efforts have been made to improve BAM from the viewpoint of computer application, but few theoretical studies have been done. We investigated the theoretical characteristics of BAM using a framework of statistical-mechanical analysis. To investigate the equilibrium state of BAM, we applied self-consistent signal-to-noise analysis (SCSNA) and obtained macroscopic parameter equations and the relative capacity. Moreover, to investigate not only the equilibrium state but also the retrieval process of reaching the equilibrium state, we applied statistical neurodynamics to the update rule of BAM and obtained evolution equations for the macroscopic parameters. These evolution equations are consistent with the results of SCSNA in the equilibrium state.
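The layer-alternating BAM update rule that the statistical neurodynamics is applied to can be sketched in a few lines (a minimal illustration with invented bipolar pattern pairs; the paper's macroscopic analysis is not reproduced here):

```python
import numpy as np

def train_bam(pairs):
    """Hebbian weights W = sum_k y_k x_k^T for bipolar pattern pairs."""
    n_x, n_y = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n_y, n_x))
    for x, y in pairs:
        W += np.outer(y, x)
    return W

def recall(W, x, max_steps=10):
    """Alternate updates between the two layers until the state stops changing."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    y = sign(W @ x)
    for _ in range(max_steps):
        x = sign(W.T @ y)
        y_next = sign(W @ x)
        if np.array_equal(y_next, y):
            break
        y = y_next
    return x, y

pairs = [(np.array([1, -1, 1, -1]), np.array([1, 1, -1])),
         (np.array([-1, -1, 1, 1]), np.array([-1, 1, 1]))]
W = train_bam(pairs)
# Probe with one bit flipped in the first stored x-pattern
x_rec, y_rec = recall(W, np.array([1, -1, -1, -1]))
```

With only two stored pairs the noisy probe settles onto the first stored pair, illustrating the retrieval dynamics whose macroscopic limit the paper analyzes.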
Statistical analysis of questionnaires a unified approach based on R and Stata
Bartolucci, Francesco; Gnaldi, Michela
2015-01-01
Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing. The book covers the foundations of classical test theory (CTT), test reliability, va
Statistical Analysis of Conductor Motion in LHC Superconducting Dipole Magnets
Calvi, M; Pugnat, P; Siemko, A
2004-01-01
Premature training quenches are usually caused by the transient energy release within the magnet coil as it is energised. The dominant disturbances originate in cable motion and produce observable rapid variations in voltage signals called spikes. The experimental set-up and the raw data treatment to detect these phenomena are briefly recalled. The statistical properties of different features of spikes are presented, such as the maximal amplitude, the energy, the duration and the time correlation between events. The parameterisation of the mechanical activity of magnets is addressed. The mechanical activity of full-scale prototype and first pre-series LHC dipole magnets is analysed, and correlations with magnet manufacturing procedures and quench performance are established. The predictability of quench occurrence is discussed and examples are presented.
PHAST: Protein-like heteropolymer analysis by statistical thermodynamics
Frigori, Rafael B.
2017-06-01
PHAST is a software package written in standard Fortran, with MPI and CUDA extensions, able to efficiently perform parallel multicanonical Monte Carlo simulations of single or multiple heteropolymeric chains, as coarse-grained models for proteins. The outcome data can be straightforwardly analyzed within its microcanonical Statistical Thermodynamics module, which allows for computing the entropy, caloric curve, specific heat and free energies. As a case study, we investigate the aggregation of heteropolymers bioinspired by Aβ25-33 fragments and their cross-seeding with IAPP20-29 isoforms. Excellent parallel scaling is observed, even under numerically difficult first-order-like phase transitions, which are properly described by the built-in fully reconfigurable force fields. Moreover, the package is free and open source, which should motivate users to readily adapt it to specific purposes.
Detailed statistical analysis plan for the pulmonary protection trial
DEFF Research Database (Denmark)
Buggeskov, Katrine B; Jakobsen, Janus C; Secher, Niels H
2014-01-01
BACKGROUND: Pulmonary dysfunction complicates cardiac surgery that includes cardiopulmonary bypass. The pulmonary protection trial evaluates the effect of pulmonary perfusion on pulmonary function in patients suffering from chronic obstructive pulmonary disease. This paper presents the statistical plan...... for the main publication to avoid the risk of outcome reporting bias, selective reporting, and data-driven results, as an update to the published design and method for the trial. RESULTS: The pulmonary protection trial is a randomized, parallel group clinical trial that assesses the effect of pulmonary perfusion......: The pulmonary protection trial investigates the effect of pulmonary perfusion during cardiopulmonary bypass in chronic obstructive pulmonary disease patients. A preserved oxygenation index following pulmonary perfusion may indicate an effect and inspire a multicenter confirmatory trial to assess a more...
Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels
Energy Technology Data Exchange (ETDEWEB)
Alamino, Roberto C; Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)
2007-10-12
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.
Statistical analysis of complex systems with nonclassical invariant measures
Fratalocchi, Andrea
2011-02-28
I investigate the problem of finding a statistical description of a complex many-body system whose invariant measure cannot be constructed stemming from classical thermodynamics ensembles. By taking solitons as a reference system and by employing a general formalism based on the Ablowitz-Kaup-Newell-Segur scheme, I demonstrate how to build an invariant measure and, within a one-dimensional phase space, how to develop a suitable thermodynamics. A detailed example is provided with a universal model of wave propagation, with reference to a transparent potential sustaining gray solitons. The system shows a rich thermodynamic scenario, with a free-energy landscape supporting phase transitions and controllable emergent properties. I finally discuss the origin of such behavior, trying to identify common denominators in the area of complex dynamics.
Statistical Analysis of Upper Bound using Data with Uncertainties
Tng, Barry Jia Hao
2014-01-01
Let $F$ be the unknown distribution of a non-negative continuous random variable. We would like to determine if $supp(F) \\subseteq [0,c]$ where $c$ is a constant (a proposed upper bound). Instead of directly observing $X_1,...,X_n i.i.d. \\sim F$, we only get to observe as data $Y_1,...,Y_n$ where $Y_i = X_i + \\epsilon_i$, with $\\epsilon_i$ being random variables representing errors. In this paper, we will explore methods to handle this statistical problem for two primary cases - parametric and nonparametric. The data from deep inelastic scattering experiments on measurements of $R=\\sigma_L / \\sigma_T$ would be used to test code which has been written to implement the discussed methods.
Statistical analysis of NOMAO customer votes for spots of France
Palovics, Robert; Benczur, Andras; Pap, Julia; Ermann, Leonardo; Phan, Samuel; Chepelianskii, Alexei D; Shepelyansky, Dima L
2015-01-01
We investigate the statistical properties of votes of customers for spots of France collected by the startup company NOMAO. The frequencies of votes per spot and per customer are characterized by power law distributions which remain stable on a time scale of a decade when the number of votes is varied by almost two orders of magnitude. Using computer science methods we explore the spectrum and the eigenvalues of a matrix containing user ratings of geolocalized items. Eigenvalues nicely map to large towns and regions but show a certain level of instability as we modify the interpretation of the underlying matrix. We evaluate imputation strategies that provide improved prediction performance by reaching geographically smooth eigenvectors. We point out possible links between the distribution of votes and the phenomenon of self-organized criticality.
Statistical Analysis of Complexity Generators for Cost Estimation
Rowell, Ginger Holmes
1999-01-01
Predicting the cost of cutting edge new technologies involved with spacecraft hardware can be quite complicated. A new feature of the NASA Air Force Cost Model (NAFCOM), called the Complexity Generator, is being developed to model the complexity factors that drive the cost of space hardware. This parametric approach is also designed to account for the differences in cost, based on factors that are unique to each system and subsystem. The cost driver categories included in this model are weight, inheritance from previous missions, technical complexity, and management factors. This paper explains the Complexity Generator framework, the statistical methods used to select the best model within this framework, and the procedures used to find the region of predictability and the prediction intervals for the cost of a mission.
Statistical Lineament Analysis in South Greenland Based on Landsat Imagery
DEFF Research Database (Denmark)
Conradsen, Knut; Nilsson, Gert; Thyrsted, Tage
1986-01-01
Linear features, mapped visually from MSS channel-7 photoprints (1: 1 000 000) of Landsat images from South Greenland, were digitized and analyzed statistically. A sinusoidal curve was fitted to the frequency distribution which was then divided into ten significant classes of azimuthal trends. Maps...... showing the density of linear features for each of the ten classes indicate that many of the classes are distributed in zones defined by elongate maxima or rows of maxima. In cases where the elongate maxima and the linear feature direction of the class in question are parallel, a zone of major crustal...... discontinuity is inferred. In the area investigated, such zones coincide with geochemical boundaries and graben structures, and the intersections of some zones seem to control intrusion sites. In cases where there is no parallelism between the elongate maxima and the linear feature direction, an en echelon...
Automated longitudinal intra-subject analysis (ALISA) for diffusion MRI tractography
DEFF Research Database (Denmark)
Aarnink, Saskia H; Vos, Sjoerd B; Leemans, Alexander
2014-01-01
the inter-subject and intra-subject automation in this situation are intended for subjects without gross pathology. In this work, we propose such an automated longitudinal intra-subject analysis (dubbed ALISA) approach, and assessed whether ALISA could preserve the same level of reliability as obtained...
Comparative Analysis of Kernel Methods for Statistical Shape Learning
National Research Council Canada - National Science Library
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
2006-01-01
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Consolidity analysis for fully fuzzy functions, matrices, probability and statistics
Walaa Ibrahim Gabr
2015-01-01
The paper presents a comprehensive review of the know-how for developing systems consolidity theory for modeling, analysis, optimization and design in a fully fuzzy environment. The development of systems consolidity theory included its extension to handling new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fractions of fuzzy polynomials. On the other hand, ...
A Web Survey Analysis of Subjective Well-being
Guzi, M.; de Pedraza García, P.
2015-01-01
Purpose - This paper explores the role of work conditions and job characteristics with respect to three subjective well-being indicators: life satisfaction, job satisfaction and satisfaction with work-life balance. From a methodological point of view, the paper shows how social sciences can benefit
Performance, Pinned Down: A Lacanian Analysis of Subjectivity at Work
C.M.W. Hoedemaekers (Casper)
2008-01-01
This study seeks to create an account of how the performing subject comes into being within a specific organizational context. It looks at some of the ways in which managerial practices impact upon the selfhood of employees by means of the language in which they are couched. Drawing
Analysis of Idiom Variation in the Framework of Linguistic Subjectivity
Liu, Zhengyuan
2012-01-01
Idiom variation is a ubiquitous linguistic phenomenon which has raised a lot of research questions. The past approach was either formal or functional. Both of them did not pay much attention to cognitive factors of language users. By putting idiom variation in the framework of linguistic subjectivity, we have offered a new perspective in the…
Pressure transient analysis of a horizontal well subject to four ...
African Journals Online (AJOL)
Reservoir characterization is essential for effective reservoir and wellbore management. But when a horizontal well is subject to constant-pressure external boundaries, the extent of reservoir characterization that is possible depends on the flow regimes that are encountered in a given flow time. In this paper dimensionless ...
Analysis of Subjective Conceptualizations Towards Collective Conceptual Modelling
DEFF Research Database (Denmark)
Glückstad, Fumiko Kano; Herlau, Tue; Schmidt, Mikkel Nørgaard
2013-01-01
This work is conducted as a preliminary study for a project where individuals' conceptualizations of domain knowledge will thoroughly be analyzed across 150 subjects from 6 countries. The project aims at investigating how humans' conceptualizations differ according to different types of mother la...
Analysis of Subjective Conceptualizations Towards Collective Conceptual Modelling
DEFF Research Database (Denmark)
Kano Glückstad, Fumiko; Herlau, Tue; Schmidt, Mikkel N.
2013-01-01
This work is conducted as a preliminary study for a project where individuals' conceptualizations of domain knowledge will thoroughly be analyzed across 150 subjects from 6 countries. The project aims at investigating how humans' conceptualizations differ according to different types of mother langua...
AN ANALYSIS OF SUBJECT AGREEMENT ERRORS IN ENGLISH ...
African Journals Online (AJOL)
however, continuing prevalence of a wide range of errors in students' writing. ... were written before. In English, as in many other languages, one of the grammar rules is that the subjects and the verbs must agree both in number and in person. .... The incorrect sentences which were picked were the ones which had types of.
Multivariate Statistical Analysis of the Tularosa-Hueco Basin
Agrawala, G.; Walton, J. C.
2006-12-01
The border region is growing rapidly and experiencing a sharp decline both in water quality and availability, putting a strain on the quickly diminishing resource. Since water is used primarily for agriculture, domestic and commercial supply, livestock, mining and power generation, its rapid depletion is of major concern in the region. Tools such as Principal Component Analysis (PCA), Correspondence Analysis and Cluster Analysis have the potential to present new insight into this problem. The Tularosa-Hueco Basin is analyzed here using some of these multivariate analysis methods. PCA is applied to geochemical data from the region, and a Cluster Analysis is applied to the results in order to group wells with similar characteristics. The derived principal axes and well groups are presented as biplots and overlaid on a digital elevation map of the region, providing a visualization of potential interactions and flow paths between surface water and ground water. Simulation by this modeling technique gives valuable insight into the water chemistry and the potential pollution threats to the already diminishing water resources.
Statistical and Spatial Analysis of Borderland Ground Water Geochemistry
Agrawala, G. K.; Woocay, A.; Walton, J. C.
2007-12-01
The border region is growing rapidly and experiencing a sharp decline both in water quality and availability, putting a strain on the quickly diminishing resource. Since water is used primarily for agriculture, domestic and commercial supply, livestock, mining and power generation, its rapid depletion is of major concern in the region. Tools such as Principal Component Analysis (PCA), Correspondence Analysis and Cluster Analysis have the potential to present new insight into this problem. The Borderland groundwater is analyzed here using some of these multivariate analysis methods. PCA is applied to geochemical data from the region, and a Cluster Analysis is applied to the results in order to group wells with similar characteristics. The derived principal axes and well groups are presented as biplots and overlaid on a digital elevation map of the region, providing a visualization of potential interactions and flow paths between surface water and ground water. Simulation by this modeling technique gives valuable insight into the water chemistry and the potential pollution threats to the already diminishing water resources.
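The PCA-plus-clustering pipeline described in the two abstracts above can be sketched on synthetic well chemistry (all concentrations and the two hydrochemical regimes below are invented for illustration; the original studies used measured geochemical data):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic concentrations (columns: Na, Cl, SO4 in mg/L) for 30 wells,
# drawn from two invented hydrochemical regimes
fresh = rng.normal([50, 20, 5], [10, 5, 1], size=(15, 3))
saline = rng.normal([400, 150, 40], [50, 20, 5], size=(15, 3))
X = np.vstack([fresh, saline])

# PCA on standardized data: principal axes are eigenvectors of the
# correlation matrix; wells are projected onto the first two axes
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
scores = Z @ eigvec[:, order[:2]]

# A one-dimensional split on PC1 stands in for the cluster-analysis step
labels = (scores[:, 0] > scores[:, 0].mean()).astype(int)
```

Plotting `scores` with well-group labels over a base map would correspond to the biplot overlays the abstracts describe.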
New Statistical Approach to the Analysis of Hierarchical Data
Neuman, S. P.; Guadagnini, A.; Riva, M.
2014-12-01
Many variables possess a hierarchical structure reflected in how their increments vary in space and/or time. Quite commonly the increments (a) fluctuate in a highly irregular manner; (b) possess symmetric, non-Gaussian frequency distributions characterized by heavy tails that often decay with separation distance or lag; (c) exhibit nonlinear power-law scaling of sample structure functions in a midrange of lags, with breakdown in such scaling at small and large lags; (d) show extended power-law scaling (ESS) at all lags; and (e) display nonlinear scaling of power-law exponent with order of sample structure function. Some interpret this to imply that the variables are multifractal, which explains neither breakdowns in power-law scaling nor ESS. We offer an alternative interpretation consistent with all above phenomena. It views data as samples from stationary, anisotropic sub-Gaussian random fields subordinated to truncated fractional Brownian motion (tfBm) or truncated fractional Gaussian noise (tfGn). The fields are scaled Gaussian mixtures with random variances. Truncation of fBm and fGn entails filtering out components below data measurement or resolution scale and above domain scale. Our novel interpretation of the data allows us to obtain maximum likelihood estimates of all parameters characterizing the underlying truncated sub-Gaussian fields. These parameters in turn make it possible to downscale or upscale all statistical moments to situations entailing smaller or larger measurement or resolution and sampling scales, respectively. They also allow one to perform conditional or unconditional Monte Carlo simulations of random field realizations corresponding to these scales. Aspects of our approach are illustrated on field and laboratory measured porous and fractured rock permeabilities, as well as soil texture characteristics and neural network estimates of unsaturated hydraulic parameters in a deep vadose zone near Phoenix, Arizona. We also use our approach
Power flow as a complement to statistical energy analysis and finite element analysis
Cuschieri, J. M.
1987-01-01
Present methods for analyzing structural response and the structure-borne transmission of vibrational energy use either finite element (FE) techniques or statistical energy analysis (SEA). FE methods are a very useful tool at low frequencies, where the number of resonances involved in the analysis is rather small. SEA methods, on the other hand, can predict with acceptable accuracy the response and energy transmission between coupled structures at relatively high frequencies, where the structural modal density is high and a statistical approach is the appropriate solution. In the mid-frequency range, a relatively large number of resonances exist, which makes the finite element method too costly, while SEA methods can only predict an average response level. In this mid-frequency range a possible alternative is to use power flow techniques, where the input and flow of vibrational energy to excited and coupled structural components can be expressed in terms of input and transfer mobilities. The power flow technique can be extended from low to high frequencies, and can be integrated with established FE models at low frequencies and SEA models at high frequencies to verify the method. This method of structural analysis using power flow and mobility methods, and its integration with SEA and FE analysis, is applied to the case of two thin beams joined together at right angles.
Ockham's razor and Bayesian analysis. [statistical theory for systems evaluation
Jefferys, William H.; Berger, James O.
1992-01-01
'Ockham's razor', the ad hoc principle enjoining the greatest possible simplicity in theoretical explanations, is presently shown to be justifiable as a consequence of Bayesian inference; Bayesian analysis can, moreover, clarify the nature of the 'simplest' hypothesis consistent with the given data. By choosing the prior probabilities of hypotheses, it becomes possible to quantify the scientific judgment that simpler hypotheses are more likely to be correct. Bayesian analysis also shows that a hypothesis with fewer adjustable parameters intrinsically possesses an enhanced posterior probability, due to the clarity of its predictions.
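The claim that a hypothesis with fewer adjustable parameters intrinsically gains posterior probability can be illustrated with a classic toy comparison (a coin model, not an example taken from the paper): H0 fixes p = 0.5, while H1 leaves p free with a uniform prior, so H1's marginal likelihood is spread thinly over all possible data sets.

```python
from math import comb

# Toy Ockham comparison for a coin observed for n flips with k heads:
# H0 has no free parameter (p fixed at 0.5); H1 spreads a uniform prior over p.
def marginal_h0(k, n):
    return comb(n, k) * 0.5 ** n

def marginal_h1(k, n):
    # Uniform prior on p: integral of C(n,k) p^k (1-p)^(n-k) dp = 1 / (n + 1)
    return 1.0 / (n + 1)

k, n = 52, 100                      # near-even data, consistent with H0
bayes_factor = marginal_h0(k, n) / marginal_h1(k, n)
# bayes_factor > 1: the simpler hypothesis is favoured for unsurprising data,
# even though H1 can fit any outcome at least as well at its best-fit p
```

For strongly biased data (say 90 heads in 100 flips) the same ratio drops below 1, so the extra parameter earns its keep only when the data demand it.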
SEDA: A software package for the Statistical Earthquake Data Analysis
Lombardi, A. M.
2017-03-01
In this paper, the first version of the software SEDA (SEDAv1.0), designed to help seismologists statistically analyze earthquake data, is presented. The package consists of a user-friendly Matlab-based interface, which allows the user to easily interact with the application, and a computational core of Fortran codes, to guarantee maximum speed. The primary factor driving the development of SEDA is to guarantee research reproducibility, which is a growing movement among scientists and highly recommended by the most important scientific journals. SEDAv1.0 is mainly devoted to producing accurate and fast outputs. Less care has been taken over the graphical appeal, which will be improved in the future. The main part of SEDAv1.0 is devoted to ETAS modeling. SEDAv1.0 contains a set of consistent tools on ETAS, allowing the estimation of parameters, the testing of the model on data, the simulation of catalogs, the identification of sequences and the calculation of forecasts. The peculiarities of the routines inside SEDAv1.0 are discussed in this paper. More specific details on the software are presented in the manual accompanying the program package.
Statistical language analysis for automatic exfiltration event detection.
Energy Technology Data Exchange (ETDEWEB)
Robinson, David Gerald
2010-04-01
This paper discusses the recent development of a statistical approach for the automatic identification of anomalous network activity that is characteristic of exfiltration events. This approach is based on the language processing method referred to as latent Dirichlet allocation (LDA). Cyber security experts currently depend heavily on a rule-based framework for initial detection of suspect network events. The application of the rule set typically results in an extensive list of suspect network events that are then further explored manually for suspicious activity. The ability to identify anomalous network events is heavily dependent on the experience of the security personnel wading through the network log. Limitations of this approach are clear: rule-based systems only apply to exfiltration behavior that has previously been observed, and experienced cyber security personnel are rare commodities. Since the new methodology is not a discrete rule-based approach, it is more difficult for an insider to disguise the exfiltration events. A further benefit is that the methodology provides a risk-based approach that can be implemented in a continuous, dynamic or evolutionary fashion. This permits suspect network activity to be identified early, with a quantifiable risk associated with decision making when responding to suspicious activity.
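A minimal sketch of the LDA-based scoring idea, using scikit-learn's LatentDirichletAllocation on invented event-token "documents" (the paper does not specify an implementation, and the tokens and the deviation-from-baseline flagging heuristic here are assumptions):

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy network-log "documents": bags of event tokens (all names invented)
logs = [
    "login read read read logout",
    "login read write read logout",
    "login scan scan upload upload upload logout",  # exfiltration-like
    "login read read logout",
]
counts = CountVectorizer().fit_transform(logs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
theta = lda.transform(counts)           # per-log topic mixture, rows sum to 1
# Flag logs whose topic mixture deviates most from the corpus average
risk = np.abs(theta - theta.mean(axis=0)).sum(axis=1)
```

Ranking logs by `risk` gives the kind of continuous, risk-based ordering the abstract contrasts with discrete rule matching.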
Directory of Open Access Journals (Sweden)
Anders Eklund
2011-01-01
Full Text Available Parametric statistical methods, such as Z-, t-, and F-values, are traditionally employed in functional magnetic resonance imaging (fMRI) for identifying areas in the brain that are active with a certain degree of statistical significance. These parametric methods, however, have two major drawbacks. First, it is assumed that the observed data are Gaussian distributed and independent; assumptions that generally are not valid for fMRI data. Second, the statistical test distribution can be derived theoretically only for very simple linear detection statistics. With nonparametric statistical methods, the two limitations described above can be overcome. The major drawback of non-parametric methods is the computational burden, with processing times ranging from hours to days, which so far have made them impractical for routine use in single-subject fMRI analysis. In this work, it is shown how the computational power of cost-efficient graphics processing units (GPUs) can be used to speed up random permutation tests. A test with 10000 permutations takes less than a minute, making statistical analysis of advanced detection methods in fMRI practically feasible. To exemplify the permutation-based approach, brain activity maps generated by the general linear model (GLM) and canonical correlation analysis (CCA) are compared at the same significance level.
Eklund, Anders; Andersson, Mats; Knutsson, Hans
2011-01-01
Parametric statistical methods, such as Z-, t-, and F-values, are traditionally employed in functional magnetic resonance imaging (fMRI) for identifying areas in the brain that are active with a certain degree of statistical significance. These parametric methods, however, have two major drawbacks. First, it is assumed that the observed data are Gaussian distributed and independent; assumptions that generally are not valid for fMRI data. Second, the statistical test distribution can be derived theoretically only for very simple linear detection statistics. With nonparametric statistical methods, the two limitations described above can be overcome. The major drawback of non-parametric methods is the computational burden with processing times ranging from hours to days, which so far have made them impractical for routine use in single-subject fMRI analysis. In this work, it is shown how the computational power of cost-efficient graphics processing units (GPUs) can be used to speed up random permutation tests. A test with 10000 permutations takes less than a minute, making statistical analysis of advanced detection methods in fMRI practically feasible. To exemplify the permutation-based approach, brain activity maps generated by the general linear model (GLM) and canonical correlation analysis (CCA) are compared at the same significance level. PMID:22046176
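The random permutation test being accelerated in the two records above can be sketched for a simple two-sample case (plain NumPy on synthetic samples; the GPU implementation and the actual fMRI detection statistics are beyond this illustration):

```python
import numpy as np

def permutation_test(x, y, n_perm=2000, seed=0):
    """Two-sample permutation p-value for a difference-of-means statistic."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[: len(x)].mean() - perm[len(x):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)    # add-one correction avoids p = 0

rng = np.random.default_rng(1)
active = rng.normal(1.0, 1.0, 20)       # synthetic "task" samples
rest = rng.normal(0.0, 1.0, 20)         # synthetic baseline samples
p = permutation_test(active, rest)
```

The inner loop is embarrassingly parallel across permutations, which is exactly what makes the GPU speed-up reported in the abstract possible.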
Performatising the knower: On semiotic analysis of subject and knowledge
Directory of Open Access Journals (Sweden)
Artuković Kristina
2013-01-01
Full Text Available This paper considers epistemological implications of the concept of performative, starting from the elaborate conception provided by Judith Butler’s theories. The primary postulate of this work is that various interpretations of the performative, with their semiotic shifting from the notions of truth-evaluability and the descriptive nature of meaning, form a line of abandoning traditional epistemological distinction between subject and object. Through other semiotic concepts which will be presented and analysed, this line reveals the key epistemological issues in the light of semiology, while Judith Butler’s concept of performativity is viewed as a possible outcome of this course of semiology of knowledge, resulting in final transcending of the category of subject.
Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis
Reston, Enriqueta; Krishnan, Saras; Idris, Noraini
2014-01-01
This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…
Spatial statistical analysis of dissatisfaction with the performance of ...
African Journals Online (AJOL)
The analysis reveals spatial clustering in the level of dissatisfaction with the performance of local government. It also reveals percentage of respondents dissatisfied with dwelling, mean sense of safety index, and percentage agree the country is going in the wrong direction, as significant predictors of the level of local ...
The statistical analysis of results of solidification of fly ash
Directory of Open Access Journals (Sweden)
Pliešovská Natália
1996-09-01
Full Text Available The analysis shows that there is no statistical dependence between the contents of heavy metals in fly ash on the one hand, and the leaching characteristics of heavy metals from the stabilized waste and from the waste itself on the other.
Statistical Analysis of Hit/Miss Data (Preprint)
2012-07-01
HDBK-1823A, 2009). Other agencies and industries have also made use of this guidance (Gandossi et al., 2010; Drury et al., 2006). ... Drury, Ghylin, and Holness, Error Analysis and Threat Magnitude for Carry-on Bag Inspection, Proceedings of the Human Factors and Ergonomics Society.
Statistical Analysis Of Trace Element Concentrations In Shale ...
African Journals Online (AJOL)
Principal component and regression analysis of geochemical data in sampled shale-carbonate sediments in Guyuk, Northeastern Nigeria reveal enrichments of four predictor elements, Ni, Co, Cr and Cu, relative to gypsum mineralisation. Ratios of their enrichments are Cu (10:1), Ni (8:1), Co (58:1) and Cr (30:1). The >70% ...
Multivariate statistical analysis of a multi-step industrial processes
DEFF Research Database (Denmark)
Reinikainen, S.P.; Høskuldsson, Agnar
2007-01-01
multivariate multi-step processes, where results from each step are used to evaluate future results, is presented. The methods presented are based on Priority PLS Regression. The basic idea is to compute the weights in the regression analysis for given steps, but adjust all data by the resulting score vectors...
A statistical inference method for the stochastic reachability analysis
Bujorianu, L.M.
2005-01-01
Many control systems have large, infinite state space that can not be easily abstracted. One method to analyse and verify these systems is reachability analysis. It is frequently used for air traffic control and power plants. Because of lack of complete information about the environment or
Statistical analysis of geodetic networks for detecting regional events
Granat, Robert
2004-01-01
We present an application of hidden Markov models (HMMs) to analysis of geodetic time series in Southern California. Our model fitting method uses a regularized version of the deterministic annealing expectation-maximization algorithm to ensure that model solutions are both robust and of high quality.
Sealed-Bid Auction of Dutch Mussels : Statistical Analysis
Kleijnen, J.P.C.; van Schaik, F.D.J.
2007-01-01
This article presents an econometric analysis of the many data on the sealed-bid auction that sells mussels in Yerseke town, the Netherlands. The goals of this analy- sis are obtaining insight into the important factors that determine the price of these mussels, and quantifying the performance of an
Experimental research and statistic analysis of polymer composite adhesive joints strength
Rudawska, Anna; Miturska, Izabela; Szabelski, Jakub; Skoczylas, Agnieszka; Droździel, Paweł; Bociąga, Elżbieta; Madleňák, Radovan; Kasperek, Dariusz
2017-05-01
The aim of this paper is to determine the effect of the arrangement of fibreglass fabric plies in a polymer composite on adhesive joint strength. Based on the experimental results, the real effect of ply arrangement and the most favourable configuration with respect to strength is determined. The experiments were performed on 3 types of composites which had different fibre orientations. The composites had three plies of fabric. The ply arrangement in Composite I was unchanged, in Composite II the central ply had the 45° orientation, while in Composite III the outside ply (tangential to the adhesive layer) was oriented at 45°. Composite plates were first cut into smaller specimens and then adhesive-bonded in different combinations with Epidian 61/Z1/100:10 epoxy adhesive. After stabilizing, the single-lap adhesive joints were subjected to shear strength tests. It was noted that the ply arrangement in composite materials affects the strength of adhesive joints made of these composites (0.95 confidence level). The statistical analysis of the results also showed that there are no statistically significant differences in average values of surface free energy (0.95 confidence level).
Dynamic Analysis of an Inflatable Dam Subjected to a Flood
Lowery, Kristen Mary
1997-01-01
A dynamic simulation of the response of an inflatable dam subjected to a flood was carried out to determine the survivability envelope of the dam where it can operate without rupture, or overflow. A fully nonlinear free-surface flow was applied in two dimensions using a mixed Eulerian-Lagrangian formulation. An ABAQUS finite element model was used to determine the dynamic structural response of the dam. The problem was solved in the time domain which allows the prediction of a number ...
Zhao, Wenle; Mu, Yunming; Tayama, Darren; Yeatts, Sharon D.
2015-01-01
Large multicenter acute stroke trials demand a randomization procedure with a high level of treatment allocation randomness, effective control of overall and within-site imbalances, and a minimized time delay of study treatment caused by the randomization procedure. Driven by the randomization algorithm design of A Study of the Efficacy and Safety of Activase (Alteplase) in Patients With Mild Stroke (PRISMS) (NCT02072226), this paper compares operational and statistical properties of different randomization algorithms in local, central, and step-forward randomization settings. Results show that step-forward randomization with a block urn design performs better than the others. If the concern about potential time delay is not serious and a central randomization system is available, the minimization method with an imbalance control threshold and a biased coin probability could be a better choice. PMID:25638754
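The imbalance-control idea behind block-based allocation can be illustrated with a minimal permuted-block randomizer. This is an illustrative sketch only (the function name and parameters are hypothetical), not the block urn or minimization algorithms evaluated in the paper:

```python
import random

def permuted_blocks(arms=("A", "B"), block_size=4, seed=42):
    """Yield treatment assignments in shuffled blocks: within every
    completed block each arm appears equally often, so imbalance is
    bounded by half the block size at any point in enrollment."""
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    while True:
        block = list(arms) * per_arm
        rng.shuffle(block)   # randomize order within the block
        yield from block

gen = permuted_blocks()
assignments = [next(gen) for _ in range(40)]
# 40 draws = 10 complete blocks, so the arms are exactly balanced
print(assignments.count("A"), assignments.count("B"))  # 20 20
```

The block urn design studied in the paper keeps a similar imbalance bound while reducing the predictability of the final assignments within each block.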
Analysis of prototypical narratives produced by aphasic individuals and cognitively healthy subjects
Directory of Open Access Journals (Sweden)
Gabriela Silveira
Aphasia can globally or selectively affect comprehension and production of verbal and written language. Discourse analysis can aid language assessment and diagnosis. Objective: [1] To explore narratives that produce a number of valid indicators for diagnosing aphasia in speakers of Brazilian Portuguese. [2] To analyze the macrostructural aspects of the discourse of normal individuals. [3] To analyze the macrostructural aspects of the discourse of aphasic individuals. Methods: The macrostructural aspects of three narratives produced by aphasic individuals and cognitively healthy subjects were analyzed. Results: A total of 30 volunteers were examined, comprising 10 aphasic individuals (AG) and 20 healthy controls (CG). The CG included 5 males. The CG had a mean age of 38.9 years (SD=15.61) and mean schooling of 13 years (SD=2.67), whereas the AG had a mean age of 51.7 years (SD=17.3) and mean schooling of 9.1 years (SD=3.69). Participants were asked to narrate three fairy tales as a basis for analyzing the macrostructure of discourse. Comparison of the three narratives revealed no statistically significant difference in the number of propositions produced by the groups. A significant negative correlation was found between age and number of propositions produced. Also, statistically significant differences were observed in the number of propositions produced by the individuals in the CG and the AG for the three tales. Conclusion: It was concluded that the three tales are applicable for discourse assessment, containing a similar number of propositions and differentiating aphasic individuals from cognitively healthy subjects based on analysis of the macrostructure of discourse.
A comparative assessment of statistical methods for extreme weather analysis
Schlögl, Matthias; Laaha, Gregor
2017-04-01
Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, results question the general assumption of the threshold excess approach (employing partial duration series, PDS) being superior to the block maxima approach (employing annual maxima series, AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas an opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may outperform the possible gain of information from including additional extreme events by far. This effect was neither visible from the square-root criterion, nor from standardly used graphical diagnosis (mean residual life plot), but from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing AMS and PDS approaches simultaneously in order to select the best suited approach. This will make the analyses more robust, in cases where threshold selection and dependency introduces biases to the PDS approach, but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of extreme events we recommend conditional performance measures that focus
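The robust L-moment estimation favoured above can be sketched for the simplest annual-maxima case: a Gumbel fit from sample L-moments. This is a minimal sketch assuming a Gumbel (EV1) distribution rather than the full GEV, with hypothetical function names, not the authors' code:

```python
import math
import random

def gumbel_lmom_fit(annual_maxima):
    """Fit a Gumbel distribution by L-moments: l1 = b0, l2 = 2*b1 - b0,
    then scale = l2 / ln 2 and location = l1 - EulerGamma * scale."""
    xs = sorted(annual_maxima)
    n = len(xs)
    b0 = sum(xs) / n                                           # first PWM
    b1 = sum(i * x for i, x in enumerate(xs)) / (n * (n - 1))  # second PWM
    l1, l2 = b0, 2 * b1 - b0                                   # sample L-moments
    scale = l2 / math.log(2)
    loc = l1 - 0.5772156649 * scale                            # Euler-Mascheroni constant
    return loc, scale

def return_level(loc, scale, T):
    """Value exceeded on average once every T years (Gumbel quantile)."""
    return loc - scale * math.log(-math.log(1 - 1 / T))

# Demo on synthetic annual maxima drawn from Gumbel(loc=10, scale=2)
rng = random.Random(7)
sample = [10 - 2 * math.log(-math.log(rng.random())) for _ in range(5000)]
loc, scale = gumbel_lmom_fit(sample)   # recovers roughly loc ~ 10, scale ~ 2
```

A PDS counterpart would fit a generalized Pareto distribution to threshold excesses instead; the paper's recommendation is to compare both fits in quantile plots before trusting either.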
Detailed statistical analysis plan for the pulmonary protection trial.
Buggeskov, Katrine B; Jakobsen, Janus C; Secher, Niels H; Jonassen, Thomas; Andersen, Lars W; Steinbrüchel, Daniel A; Wetterslev, Jørn
2014-12-23
Pulmonary dysfunction complicates cardiac surgery that includes cardiopulmonary bypass. The pulmonary protection trial evaluates effect of pulmonary perfusion on pulmonary function in patients suffering from chronic obstructive pulmonary disease. This paper presents the statistical plan for the main publication to avoid risk of outcome reporting bias, selective reporting, and data-driven results as an update to the published design and method for the trial. The pulmonary protection trial is a randomized, parallel group clinical trial that assesses the effect of pulmonary perfusion with oxygenated blood or Custodiol™ HTK (histidine-tryptophan-ketoglutarate) solution versus no pulmonary perfusion in 90 chronic obstructive pulmonary disease patients. Patients, the statistician, and the conclusion drawers are blinded to intervention allocation. The primary outcome is the oxygenation index from 10 to 15 minutes after the end of cardiopulmonary bypass until 24 hours thereafter. Secondary outcome measures are oral tracheal intubation time, days alive outside the intensive care unit, days alive outside the hospital, and 30- and 90-day mortality, and one or more of the following selected serious adverse events: pneumothorax or pleural effusion requiring drainage, major bleeding, reoperation, severe infection, cerebral event, hyperkaliemia, acute myocardial infarction, cardiac arrhythmia, renal replacement therapy, and readmission for a respiratory-related problem. The pulmonary protection trial investigates the effect of pulmonary perfusion during cardiopulmonary bypass in chronic obstructive pulmonary disease patients. A preserved oxygenation index following pulmonary perfusion may indicate an effect and inspire to a multicenter confirmatory trial to assess a more clinically relevant outcome. ClinicalTrials.gov identifier: NCT01614951, registered on 6 June 2012.
Statistical Analysis of the Grid Connected Photovoltaic System Performance Ratio
Directory of Open Access Journals (Sweden)
Javier Vilariño-García
2017-05-01
A methodology based on analysis of variance and Tukey's method is applied to a data set of solar radiation in the plane of the photovoltaic modules and the corresponding values of power delivered to the grid, recorded at 10-minute intervals from sunrise to sunset during the 52 weeks of the year 2013. These data were obtained through a monitoring system located in a photovoltaic plant of 10 MW rated power in Cordoba, consisting of 16 transformers and 98 inverters. Analysis of variance is applied to the mean performance indices of the processing centers to detect, at a 5% significance level, whether at least one differs from the rest; Tukey's test is then used to identify which center or centers fall below average due to a fault, so that it can be detected and corrected.
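The first stage described, comparing group means with an analysis of variance at the 5% level, reduces to a one-way ANOVA F statistic. A minimal sketch follows (the subsequent Tukey HSD step additionally needs studentized-range critical values, omitted here):

```python
def one_way_anova(groups):
    """One-way ANOVA: returns (F, df_between, df_within) for a list of
    samples, partitioning total variance into between- and within-group."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
    return f_stat, k - 1, n - k

# Third group clearly differs from the first two, so F is large (73.0 here)
F, dfb, dfw = one_way_anova([[1, 2, 3], [2, 3, 4], [10, 11, 12]])
```

A large F relative to the F(df_between, df_within) critical value signals that at least one group mean differs; Tukey's test then localizes which pairs differ.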
Statistical analysis of aerosol species, trace gasses, and meteorology in Chicago.
Binaku, Katrina; O'Brien, Timothy; Schmeling, Martina; Fosco, Tinamarie
2013-09-01
Both canonical correlation analysis (CCA) and principal component analysis (PCA) were applied to atmospheric aerosol and trace gas concentrations and meteorological data collected in Chicago during the summer months of 2002, 2003, and 2004. Concentrations of ammonium, calcium, nitrate, sulfate, and oxalate particulate matter, as well as the meteorological parameters temperature, wind speed, wind direction, and humidity, were subjected to CCA and PCA. Ozone and nitrogen oxide mixing ratios were also included in the data set. The purpose of the statistical analysis was to determine the extent of existing linear relationship(s), or lack thereof, between meteorological parameters and pollutant concentrations, in addition to reducing the dimensionality of the original data to determine sources of pollutants. In CCA, the first three canonical variate pairs derived were statistically significant at the 0.05 level. The canonical correlation between the first canonical variate pair was 0.821, while correlations of the second and third canonical variate pairs were 0.562 and 0.461, respectively. The first canonical variate pair indicated that increasing temperatures resulted in high ozone mixing ratios, while the second canonical variate pair showed the influence of wind speed and humidity on local ammonium concentrations. No new information was uncovered in the third variate pair. Canonical loadings were also interpreted for information regarding relationships between data sets. Four principal components (PCs), expressing 77.0% of the original data variance, were derived in PCA. Interpretation of the PCs suggested significant production and/or transport of secondary aerosols in the region (PC1). Furthermore, photochemical production of ozone and the influence of wind speed on pollutants were expressed (PC2), along with an overall measure of local meteorology (PC3). In summary, the combined CCA and PCA results were successful in uncovering linear relationships between meteorology and air pollutants in Chicago and
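The PCA step can be sketched in the two-variable case, where the 2x2 covariance matrix has a closed-form eigendecomposition. This is an illustrative sketch (hypothetical function name), not the study's code:

```python
import math

def pca_2d(x, y):
    """First principal component of two variables from the closed-form
    eigendecomposition of their 2x2 covariance matrix [[a, b], [b, c]].
    Returns (explained-variance ratio of PC1, PC1 angle in radians)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) ** 2 for xi in x) / (n - 1)                     # var(x)
    c = sum((yi - my) ** 2 for yi in y) / (n - 1)                     # var(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (n - 1)  # cov(x, y)
    d = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    lam1, lam2 = (a + c) / 2 + d, (a + c) / 2 - d                     # eigenvalues
    angle = 0.5 * math.atan2(2 * b, a - c)                            # PC1 direction
    return lam1 / (lam1 + lam2), angle

# Perfectly collinear data: PC1 explains all variance, direction has slope 2
ratio, angle = pca_2d([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

With more variables (as in the study) the same idea applies via SVD of the centered data matrix; components with large explained-variance ratios are then interpreted as pollutant sources.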
Determinants of ICT Infrastructure: A Cross-Country Statistical Analysis
Jens J. Krüger; Rhiel, Mathias
2016-01-01
We investigate economic and institutional determinants of ICT infrastructure for a broad cross section of more than 100 countries. The ICT variable is constructed from a principal components analysis. The explanatory variables are selected by variants of the Lasso estimator from the machine learning literature. In addition to least squares, we also apply robust and semiparametric regression estimators. The results show that the regressions are able to explain ICT infrastructure very well. Maj...
Practical guidance for statistical analysis of operational event data
Energy Technology Data Exchange (ETDEWEB)
Atwood, C.L.
1995-10-01
This report presents ways to avoid mistakes that are sometimes made in analysis of operational event data. It then gives guidance on what to do when a model is rejected, a list of standard types of models to consider, and principles for choosing one model over another. For estimating reliability, it gives advice on which failure modes to model, and moment formulas for combinations of failure modes. The issues are illustrated with many examples and case studies.
Statistical analysis of joint toxicity in biological growth experiments
DEFF Research Database (Denmark)
Spliid, Henrik; Tørslev, J.
1994-01-01
The authors formulate a model for the analysis of designed biological growth experiments where a mixture of toxicants is applied to biological target organisms. The purpose of such experiments is to assess the toxicity of the mixture in comparison with the toxicity observed when the toxicants are...... is applied on data from an experiment where inhibition of the growth of the bacteria Pseudomonas fluorescens caused by different mixtures of pentachlorophenol and aniline was studied....
Analysis of tensile bond strengths using Weibull statistics.
Burrow, Michael F; Thomas, David; Swain, Mike V; Tyas, Martin J
2004-09-01
Tensile strength tests of restorative resins bonded to dentin, and the resultant strengths of interfaces between the two, exhibit wide variability. Many variables can affect test results, including specimen preparation and storage, test rig design and experimental technique. However, the more fundamental source of variability, that associated with the brittle nature of the materials, has received little attention. This paper analyzes results from micro-tensile tests on unfilled resins and adhesive bonds between restorative resin composite and dentin in terms of reliability using the Weibull probability of failure method. Results for the tensile strengths of Scotchbond Multipurpose Adhesive (3M) and Clearfil LB Bond (Kuraray) bonding resins showed Weibull moduli (m) of 6.17 (95% confidence interval, 5.25-7.19) and 5.01 (95% confidence interval, 4.23-5.8). Analysis of results for micro-tensile tests on bond strengths to dentin gave moduli between 1.81 (Clearfil Liner Bond 2V) and 4.99 (Gluma One Bond, Kulzer). Material systems with m in this range do not have a well-defined strength. The Weibull approach also enables the size dependence of the strength to be estimated. An example where the bonding area was changed from 3.1 to 1.1 mm diameter is shown. Weibull analysis provides a method for determining the reliability of strength measurements in the analysis of data from bond strength and tensile tests on dental restorative materials.
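The Weibull probability-of-failure analysis used in the paper can be sketched as a linearized least-squares fit of ordered strength data. This is a minimal sketch using a midpoint plotting position (not the authors' code, which may differ in estimator details):

```python
import math
import random

def weibull_fit(strengths):
    """Estimate Weibull modulus m and characteristic strength s0 by
    fitting ln(-ln(1 - F_i)) = m*ln(s_i) - m*ln(s0) with least squares,
    using the plotting position F_i = (i + 0.5) / n on sorted data."""
    xs = sorted(strengths)
    n = len(xs)
    pts = [(math.log(s), math.log(-math.log(1 - (i + 0.5) / n)))
           for i, s in enumerate(xs)]
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    m = (sum((x - mx) * (y - my) for x, y in pts)
         / sum((x - mx) ** 2 for x, _ in pts))      # slope = Weibull modulus
    s0 = math.exp(mx - my / m)                      # intercept at the means
    return m, s0

# Demo: synthetic strengths from a Weibull with m = 5, s0 = 50 (MPa)
rng = random.Random(1)
sample = [50 * (-math.log(1 - rng.random())) ** (1 / 5) for _ in range(2000)]
m_est, s0_est = weibull_fit(sample)   # recovers roughly m ~ 5, s0 ~ 50
```

A low modulus, as reported for the dentin bonds (m between about 1.8 and 5), means individual strength values scatter widely around the characteristic strength.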
Meta-analysis and The Cochrane Collaboration: 20 years of the Cochrane Statistical Methods Group.
McKenzie, Joanne E; Salanti, Georgia; Lewis, Steff C; Altman, Douglas G
2013-11-26
The Statistical Methods Group has played a pivotal role in The Cochrane Collaboration over the past 20 years. The Statistical Methods Group has determined the direction of statistical methods used within Cochrane reviews, developed guidance for these methods, provided training, and continued to discuss and consider new and controversial issues in meta-analysis. The contribution of Statistical Methods Group members to the meta-analysis literature has been extensive and has helped to shape the wider meta-analysis landscape. In this paper, marking the 20th anniversary of The Cochrane Collaboration, we reflect on the history of the Statistical Methods Group, beginning in 1993 with the identification of aspects of statistical synthesis for which consensus was lacking about the best approach. We highlight some landmark methodological developments that Statistical Methods Group members have contributed to in the field of meta-analysis. We discuss how the Group implements and disseminates statistical methods within The Cochrane Collaboration. Finally, we consider the importance of robust statistical methodology for Cochrane systematic reviews, note research gaps, and reflect on the challenges that the Statistical Methods Group faces in its future direction.
Dickinson, Elizabeth M.; And Others
Directed toward the eradication of sexual and racial bias in bibliographic systems, the subcommittee reports its progress in the identification of areas of classification systems and subject headings requiring change. A policy statement and six guidelines establish a framework for three categories of projects: (1) the need for changes in Library…
Statistical Analysis Software for the TRS-80 Microcomputer.
1981-09-01
[Abstract not recoverable: the source text is a garbled scan of a statistical program listing; the only legible fragments are the routine titles "Linear Regression" and "Analysis of Variance".]
Ball lightning diameter-lifetime statistical analysis of SKB databank
Amirov, Anvar Kh; Bychkov, Vladimir L.
1995-03-01
The significance of diameter as a factor in the lifetime of Ball Lightning (BL) was assessed for the different modes of BL disappearance. Nonparametric regression methods were applied to diameter-radiation loss pairs classified by mode of BL disappearance. BL diameter turned out to be a significant factor for BL lifetime in the cases of explosion and decay, and insignificant in the case of extinction. The dependence of the logarithm of radiation losses on the logarithm of BL volume, obtained by nonparametric regression, differed according to the mode of BL disappearance.
Analysis of compressive fracture in rock using statistical techniques
Energy Technology Data Exchange (ETDEWEB)
Blair, S.C.
1994-12-01
Fracture of rock in compression is analyzed using a field-theory model, and the processes of crack coalescence and fracture formation and the effect of grain-scale heterogeneities on macroscopic behavior of rock are studied. The model is based on observations of fracture in laboratory compression tests, and incorporates assumptions developed using fracture mechanics analysis of rock fracture. The model represents grains as discrete sites, and uses superposition of continuum and crack-interaction stresses to create cracks at these sites. The sites are also used to introduce local heterogeneity. Clusters of cracked sites can be analyzed using percolation theory. Stress-strain curves for simulated uniaxial tests were analyzed by studying the location of cracked sites, and partitioning of strain energy for selected intervals. Results show that the model implicitly predicts both development of shear-type fracture surfaces and a strength-vs-size relation that are similar to those observed for real rocks. Results of a parameter-sensitivity analysis indicate that heterogeneity in the local stresses, attributed to the shape and loading of individual grains, has a first-order effect on strength, and that increasing local stress heterogeneity lowers compressive strength following an inverse power law. Peak strength decreased with increasing lattice size and decreasing mean site strength, and was independent of site-strength distribution. A model for rock fracture based on a nearest-neighbor algorithm for stress redistribution is also presented and used to simulate laboratory compression tests, with promising results.
Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak
2016-06-01
Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-on program, which will be supported by MS Excel 2016 soon, would eliminate some limitations of using statistical formulas within MS Excel.
INNOVATIVE TEACHING IN ACCOUNTING SUBJECTS: ANALYSIS OF THE FLIPPED CLASSROOM
Directory of Open Access Journals (Sweden)
E. Lubbe
2016-07-01
Accounting students often have a negative attitude towards the subject and struggle to understand core concepts of accounting standards. A large percentage of accounting students do not prepare for class, and homework is either not done or neglected. Many factors contribute to students struggling to prepare for class and complete homework assignments. The flipped classroom approach has grown at a rapid pace and has been perceived as very successful in many subjects. Little research has been done on the effectiveness of this approach for accounting students. Videos were created in which accounting theory was explained and questions with examples were given and explained. All contact sessions were transformed into an active learning environment. During contact sessions, students were provided with questions. Guidance was given with regard to the interpretation of a practical case study. Students had to analyze questions before feedback was provided to them. Contact sessions commenced with easy questions and progressed to more difficult questions. Research was conducted to determine whether a flipped classroom method could improve the learning experience of accounting students at a higher education institution. The study indicated that students watched the videos before contact sessions, felt more positive about their performance in accounting, and improved their time management. The majority of students who completed the survey preferred the flipped classroom method. It enables students to learn from their own mistakes in class.
Comparability of mixed IC₅₀ data - a statistical analysis.
Directory of Open Access Journals (Sweden)
Tuomo Kalliokoski
The biochemical half maximal inhibitory concentration (IC50) is the most commonly used metric for on-target activity in lead optimization. It is used to guide lead optimization and to build large-scale chemogenomics analyses and off-target activity and toxicity models based on public data. However, the use of public biochemical IC50 data is problematic, because they are assay specific and comparable only under certain conditions. For large-scale analysis it is not feasible to check each data entry manually, and it is very tempting to mix all available IC50 values from public databases even if assay information is not reported. As previously reported for Ki database analysis, we first analyzed the types of errors, the redundancy, and the variability that can be found in the ChEMBL IC50 database. For assessing the variability of IC50 data independently measured in two different labs, at least ten IC50 data for identical protein-ligand systems against the same target were searched in ChEMBL. As an insufficient number of cases of this type are available, the variability of IC50 data was assessed by comparing all pairs of independent IC50 measurements on identical protein-ligand systems. The standard deviation of IC50 data is only 25% larger than the standard deviation of Ki data, suggesting that mixing IC50 data from different assays, even without knowing assay condition details, adds only a moderate amount of noise to the overall data. The standard deviation of public ChEMBL IC50 data was, as expected, greater than the standard deviation of in-house intra-laboratory/inter-day IC50 data. Augmenting mixed public IC50 data with public Ki data does not deteriorate the quality of the mixed IC50 data, if the Ki is corrected by an offset. For a broad dataset such as the ChEMBL database, a Ki-IC50 conversion factor of 2 was found to be the most reasonable.
Statistical Power Analysis with Missing Data A Structural Equation Modeling Approach
Davey, Adam
2009-01-01
Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread application. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions, such as: how missing data affect the statistical power of a study; how much power is likely with different amounts and types
The art of data analysis how to answer almost any question using basic statistics
Jarman, Kristin H
2013-01-01
A friendly and accessible approach to applying statistics in the real world. With an emphasis on critical thinking, The Art of Data Analysis: How to Answer Almost Any Question Using Basic Statistics presents fun and unique examples, guides readers through the entire data collection and analysis process, and introduces basic statistical concepts along the way. Leaving proofs and complicated mathematics behind, the author portrays the more engaging side of statistics and emphasizes its role as a problem-solving tool. In addition, light-hearted case studies
GIS application on spatial landslide analysis using statistical based models
Pradhan, Biswajeet; Lee, Saro; Buchroithner, Manfred F.
2009-09-01
This paper presents the assessment results of three spatially based probabilistic models using Geoinformation Techniques (GIT) for landslide susceptibility analysis on Penang Island in Malaysia. Landslide locations within the study area were identified by interpreting aerial photographs and satellite images, supported by field surveys. Maps of the topography, soil type, lineaments, and land cover were constructed from the spatial data sets. Ten landslide-related factors were extracted from the spatial database, and the frequency ratio, fuzzy logic, and bivariate logistic regression coefficients of each factor were computed. Finally, landslide susceptibility maps were drawn for the study area using the frequency ratio, fuzzy logic, and bivariate logistic regression models. For verification, the results of the analyses were compared with actual landslide locations in the study area. The verification results show that the bivariate logistic regression model provides slightly higher prediction accuracy than the frequency ratio and fuzzy logic models.
First metatarsocuneiform motion: a radiographic and statistical analysis.
Fritz, G R; Prieskorn, D
1995-03-01
Fifty volunteers with 100 asymptomatic feet were evaluated by physical examination, radiographic analysis, and questionnaire. This investigation was used to evaluate first metatarsocuneiform motion and establish normal values at this joint. Normal first ray sagittal range of motion was 4.37 degrees (SD, +/- 3.4 degrees). The shape of the distal cuneiform was then categorized by three classification methods. Multiple independent variables were cross-referenced to determine their relationship with motion and shape at the distal cuneiform. Hyperflexibility of the thumb correlated with first ray hypermobility. No correlation was found between first ray motion and sex, age, intermetatarsal angle, side, skin stretch, hyperextension of the knee, hyperextension of the elbow, or shape of the distal cuneiform.
Bayesian statistics, factor analysis, and PET images I. Mathematical background.
Phillips, P R
1989-01-01
The problem of image reconstruction in positron emission tomography (PET) is examined, although the approach is quite general and may have other applications. The approach is based on the maximum-likelihood method of L.A. Shepp and Y. Vardi (1982), with their assumption that the number of image pixels is greater than the number of data points. In this situation a (nonunique) solution can be written down directly, although it is not guaranteed to be positive definite. The arbitrariness in this solution can be precisely characterized by a geometric argument. A unique solution can be obtained only by introducing prior information. It is suggested that factor analysis is an efficient way to do this. In the simplest application of the method, the solution is written as the sum of two parts, r(alpha) + t(alpha), where r(alpha) is determined solely by the data and t(alpha) is determined by r(alpha) and the prior information.
Statistical Analysis of Temple Orientation in Ancient India
Aller, Alba; Belmonte, Juan Antonio
2015-05-01
The great diversity of religions that have been followed in India for over 3000 years is the reason why there are hundreds of temples built to worship dozens of different divinities. In this work, more than one hundred temples geographically distributed over the whole Indian land have been analyzed, obtaining remarkable results. For this purpose, a deep analysis of the main deities who are worshipped in each of them, as well as of the different dynasties (or cultures) who built them has also been conducted. As a result, we have found that the main axes of the temples dedicated to Shiva seem to be oriented to the east cardinal point while those temples dedicated to Vishnu would be oriented to both the east and west cardinal points. To explain these cardinal directions we propose to look back to the origins of Hinduism. Besides these cardinal orientations, clear solar orientations have also been found, especially at the equinoctial declination.
[Laryngeal Papillomatosis: A Statistical Analysis of 60 Cases].
Kurita, Takashi; Umeno, Hirohito; Chitose, Shun-ichi; Ueda, Yoshihisa; Mihashi, Ryouta; Nakashima, Tadashi
2015-03-01
Laryngeal papillomatosis is the most common benign neoplasm of the larynx. Juvenile onset laryngeal papillomatosis tends to recur. In patients with adult onset laryngeal papillomatosis, laryngeal cancer rarely develops. This paper reports a clinical analysis of 60 patients with laryngeal papillomatosis who were treated at our clinic between January 1971 and September 2009. We analyzed the sex ratio, age at the onset of papilloma, type of developing papilloma (single or multiple type), site of developing papilloma, recurrence rate, and therapeutic modalities. Furthermore, the clinical characteristics of the patients with malignant transformation were examined. The patients were classified according to their age at the onset of the papilloma and the type of developing papilloma. The patients were grouped into a juvenile-onset group and an adult-onset group according to their age at the onset of the papilloma. They were also classified into single-type or multiple-type according to whether the initial papilloma appeared singly or multiply. The male to female sex ratios were 1.2 in the juvenile-onset group and 5.1 in the adult-onset group. Among the patients who developed papilloma at an age of under 10 years old, most of the juvenile cases had experienced onset by 4 years of age. Furthermore, the frequency of multiple-type papilloma was significantly higher in the juvenile-onset group, compared with the adult-onset group. The vocal fold was the most frequent site of the papilloma. The recurrence rate in the juvenile-onset group was significantly higher than that of the adult-onset group. A stratified analysis according to the type of papilloma occurrence, however, showed no significant difference in recurrences between the juvenile-onset and adult-onset groups. A stratified analysis according to the age at the onset of papilloma showed that the recurrence rate of multiple-type papilloma was significantly higher than that of single-type papilloma in the adult
Analysis of Subjective Evaluation of User Experience with Headphones
DEFF Research Database (Denmark)
Jensen, Rasmus; Lauridsen, Nikolaj; Poulsen, Andreas
2016-01-01
The aspects of what provides a good user experience with headphones are initially investigated by an exploratory study (experiment I). Using the KJ-Technique, 5 workshop teams of 4-6 participants each provide a number of aspects influencing their experience with headphones. Analysing the aspects for uniqueness and relatedness provides 144 aspects of user experience with headphones, arranged in 12 categories. The 144 influencing aspects from experiment I are condensed, and 24 attributes regarding user experience with headphones are selected. These attributes are tested in regard to their correlation with and effects on overall evaluation of headphones in a second experiment, thus investigating which attributes are most influential for user experience. Using a within-subject design, eight different headphones are evaluated according to the attributes along with an overall evaluation. The attributes are listed...
Proteomic Analysis of Trypanosoma cruzi Epimastigotes Subjected to Heat Shock
Directory of Open Access Journals (Sweden)
Deyanira Pérez-Morales
2012-01-01
Full Text Available Trypanosoma cruzi is exposed to sudden temperature changes during its life cycle. Adaptation to these variations is crucial for parasite survival, reproduction, and transmission. Some of these conditions may change the pattern of genetic expression of proteins involved in homeostasis in the course of stress treatment. In the present study, the proteome of T. cruzi epimastigotes subjected to heat shock was compared with that of epimastigotes grown normally by two-dimensional gel electrophoresis followed by mass spectrometry for protein identification. Twenty-four spots differing in abundance were identified. Of the twenty-four changed spots, nineteen showed a greater intensity and five a lower intensity relative to the control. Several functional categories of the identified proteins were determined: metabolism, cell defense, hypothetical proteins, protein fate, protein synthesis, cellular transport, and cell cycle. Proteins involved in the interaction with the cellular environment were also identified, and the implications of these changes are discussed.
Statistical correlation analysis for comparing vibration data from test and analysis
Butler, T. G.; Strang, R. F.; Purves, L. R.; Hershfeld, D. J.
1986-01-01
A theory was developed to compare vibration modes obtained by NASTRAN analysis with those obtained experimentally. Because many more analytical modes can be obtained than experimental modes, the analytical set was treated as expansion functions for putting both sources in comparative form. The dimensional symmetry was developed for three general cases: a nonsymmetric whole model compared with a nonsymmetric whole structural test, a symmetric analytical portion compared with a symmetric experimental portion, and an analytical symmetric portion compared with a whole experimental test. The theory was coded and a statistical correlation program was installed as a utility. The theory is established with small classical structures.
Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis
Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan
2017-10-01
This research paper proposes a hybrid of the ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for choosing relevant features from customer review datasets. Information gain (IG), genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process statistically demonstrated that the ACO-KNN algorithm improved significantly over the baseline algorithms. In addition, the experimental results showed that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a high-quality, optimal feature subset that can represent the actual data in customer review datasets.
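The significance testing of paired performance scores that the abstract describes can be illustrated with a minimal sketch. The per-fold F-scores below are hypothetical, and the paper's exact test statistic may differ from this plain paired t statistic.

```python
# Hedged sketch: comparing two feature-selection pipelines with a paired
# t statistic, in the spirit of the ACO-KNN vs. baseline comparison.
# All scores are hypothetical, not the paper's data.
from statistics import mean
from math import sqrt

def paired_t_statistic(a, b):
    """Paired t statistic for two matched score lists (e.g. per-fold F-scores)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    d_bar = mean(diffs)
    s = sqrt(sum((d - d_bar) ** 2 for d in diffs) / (n - 1))
    return d_bar / (s / sqrt(n))

# Hypothetical per-fold F-scores for the proposed method vs. one baseline
aco_knn = [0.84, 0.86, 0.83, 0.88, 0.85]
baseline = [0.79, 0.80, 0.78, 0.82, 0.80]
t = paired_t_statistic(aco_knn, baseline)
print(round(t, 2))
```

A large positive t here would be compared against the t distribution with n-1 degrees of freedom to obtain the p-value.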
Statistical identification and analysis of defect development in digital imagers
Leung, Jenny; Chapman, Glenn H.; Koren, Zahava; Koren, Israel
2009-01-01
The lifetime of solid-state image sensors is limited by the appearance of defects, particularly hot pixels, which we have previously shown to develop continuously over the sensor lifetime. Analysis based on the spatial distribution and temporal growth of defects showed no evidence that the defects are caused by material degradation; instead, high radiation appears to accelerate defect development in image sensors. It is important to detect these faulty pixels before image enhancement algorithms are applied, to avoid spreading the error to neighboring pixels. The date on which a defect first developed can be extracted from past images. Previously, an automatic defect detection algorithm using Bayesian probability accumulation was introduced and tested. We performed extensive testing of this Bayes-based algorithm by detecting defects in image datasets obtained from four cameras. Our results indicated that the Bayes detection scheme identified all defects in these cameras with less than 3% difference from visually inspected results. In this paper, we introduce an alternative technique, the maximum likelihood detection algorithm, and evaluate its performance using Monte Carlo simulations based on three criteria: image exposure, defect parameters, and pixel estimation. Preliminary results show that the maximum likelihood detection algorithm achieves higher accuracy than the Bayes detection algorithm, with 90% perfect detection in images captured at long exposures (>0.125 s).
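As an illustration of maximum-likelihood estimation of a defect's onset date from past images, here is a minimal sketch assuming a Gaussian mean-shift model for a single pixel's value history; the paper's actual model, parameters, and data differ.

```python
# Hedged sketch of maximum-likelihood changepoint estimation for a hot pixel:
# given a pixel's dark-value history, find the frame index where the mean jumps.
# Gaussian noise with a constant variance is an assumption for illustration.
def ml_defect_onset(values):
    """Return the index k maximizing the likelihood of a mean shift at k."""
    n = len(values)
    best_k, best_score = 1, float("-inf")
    for k in range(1, n):  # defect assumed to appear at some frame 1..n-1
        pre, post = values[:k], values[k:]
        m1 = sum(pre) / len(pre)
        m2 = sum(post) / len(post)
        # Negative residual sum of squares is the Gaussian log-likelihood
        # up to constants, so maximizing it is ML estimation of the split.
        score = -(sum((v - m1) ** 2 for v in pre) + sum((v - m2) ** 2 for v in post))
        if score > best_score:
            best_k, best_score = k, score
    return best_k

history = [3, 2, 4, 3, 2, 48, 51, 50, 49]  # hot pixel appears at index 5
print(ml_defect_onset(history))  # → 5
```

In practice each frame would contribute an exposure-dependent likelihood term rather than a single raw value, but the argmax-over-split-points structure is the same.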
Shiavi, Richard
2007-01-01
Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this background, the reader will move quickly through the practical introduction and on to signal analysis techniques commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical
Energy Technology Data Exchange (ETDEWEB)
Frome, EL
2005-09-20
Environmental exposure measurements are, in general, positive and may be subject to left censoring; i.e., the measured value is less than a "detection limit". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. Parametric methods used to determine acceptable levels of exposure are often based on a two-parameter lognormal distribution. The mean exposure level, an upper percentile, and the exceedance fraction are used to characterize exposure levels, and confidence limits are used to describe the uncertainty in these estimates. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left-censored lognormal data are described, and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left-censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on an upper percentile (i.e., the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known, but computational complexity has limited their use in routine data analysis with left-censored data. The recent development of the R environment for statistical data analysis and graphics has greatly enhanced the availability of high-quality nonproprietary (open source) software that serves as the basis for implementing the methods in this paper.
Modeling gallic acid production rate by empirical and statistical analysis
Directory of Open Access Journals (Sweden)
Bratati Kar
2000-01-01
Full Text Available For predicting the rate of an enzymatic reaction, empirical correlations based on experimental results obtained under various operating conditions have been developed. The models represent both the activation and the deactivation conditions of enzymatic hydrolysis, and the results have been analyzed by analysis of variance (ANOVA). The tannase activity was found to be maximal at an incubation time of 5 min, reaction temperature of 40ºC, pH 4.0, initial enzyme concentration of 0.12 v/v, initial substrate concentration of 0.42 mg/ml, and ionic strength of 0.2 M; under these optimal conditions, the maximum rate of gallic acid production was 33.49 µmoles/ml/min.
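The one-way ANOVA used to compare rates across operating conditions can be sketched as below. The group data are hypothetical stand-ins, not the paper's measurements.

```python
# Hedged sketch of a one-way ANOVA F statistic, as used in the abstract to
# compare reaction rates across conditions. Rates below are hypothetical.
def one_way_anova_f(groups):
    """F = (between-group mean square) / (within-group mean square)."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical gallic acid production rates at three temperatures
rates_35C = [28.1, 29.0, 27.5]
rates_40C = [33.2, 33.6, 33.0]
rates_45C = [30.4, 31.1, 30.0]
f = one_way_anova_f([rates_35C, rates_40C, rates_45C])
print(round(f, 1))
```

A large F relative to the F(df_between, df_within) distribution indicates the condition means differ, as the ANOVA in the abstract concludes for temperature, pH, and the other factors.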
Quantitative analysis and IBM SPSS statistics a guide for business and finance
Aljandali, Abdulkader
2016-01-01
This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting, while Part II will present multivariate statistical methods and more advanced forecasting methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. Just a small subset of users of the package includes the major clearing banks, the BBC, British Gas, British Airway...
Effect Size Measure and Analysis of Single Subject Designs
Society for Research on Educational Effectiveness, 2013
2013-01-01
One of the vexing problems in the analysis of SSD is the assessment of the effect of intervention. Serial dependence notwithstanding, the linear model approach that has been advanced involves, in general, fitting regression lines (or curves) to the set of observations within each phase of the design and comparing the parameters of these…
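The per-phase regression approach the abstract describes can be sketched minimally: fit a least-squares line to each phase of an AB design and compare the fitted parameters. The observations below are hypothetical.

```python
# Hedged sketch of the linear-model approach to single-subject designs:
# fit a least-squares line within each phase and compare slopes.
# Data points are hypothetical (session, score) pairs.
def ols_line(xs, ys):
    """Return (slope, intercept) of the least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

baseline = [(1, 5), (2, 6), (3, 5), (4, 6)]      # phase A observations
treatment = [(5, 8), (6, 9), (7, 11), (8, 12)]   # phase B observations
slope_a, _ = ols_line(*zip(*baseline))
slope_b, _ = ols_line(*zip(*treatment))
print(round(slope_b - slope_a, 2))  # change in trend between phases
```

A full analysis would also compare intercepts (level change) and account for the serial dependence the abstract notes, which ordinary least squares ignores.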
Comparative analysis of actigraphy performance in healthy young subjects
Directory of Open Access Journals (Sweden)
Giannina J. Bellone
2016-10-01
Although data accessibility and ease of use were very different for the diverse devices, there were no significant differences for sleep onset, total sleep time, and sleep efficiency recordings, where applicable. In conclusion, depending on the type of study and analysis desired (as well as cost and compliance of use), we propose some relative advantages for the different actigraphy/temperature recording devices.
Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems
He, Yuning; Davies, Misty Dawn
2014-01-01
The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
An overview of the mathematical and statistical analysis component of RICIS
Hallum, Cecil R.
1987-01-01
Mathematical and statistical analysis components of RICIS (Research Institute for Computing and Information Systems) can be used in the following problem areas: (1) quantification and measurement of software reliability; (2) assessment of changes in software reliability over time (reliability growth); (3) analysis of software-failure data; and (4) decision logic for whether to continue or stop testing software. Other areas of interest to NASA/JSC where mathematical and statistical analysis can be successfully employed include: math modeling of physical systems, simulation, statistical data reduction, evaluation methods, optimization, algorithm development, and mathematical methods in signal processing.
Finite Element Analysis of Saferooms Subjected to Tornado Impact Loads
Parfilko, Y.; Amaral de Arruda, F.; Varela, B.
2017-10-01
A tornado is one of the most dreadful and unpredictable events in nature. Unfortunately, weather and geographic conditions make a large portion of the United States prone to this phenomenon. Tornado saferooms are monolithic reinforced concrete protective structures engineered to guard against these natural disasters. Saferooms must withstand impacts and wind loads from EF-5 tornadoes, where the wind speed reaches up to 150 m/s (300 mph) and airborne projectiles can reach up to 50 m/s (100 mph). The objective of this work is to evaluate the performance of a saferoom under impact from tornado-generated debris and tornado-dragged vehicles. Numerical simulations were performed to model the impact problem using explicit dynamics and energy methods. Finite element models of the saferoom, windborne debris, and vehicles were studied using the LS-DYNA software. The RHT concrete material model was used to model the saferoom, and vehicle models from NCAC were used to characterize damage from impacts at various speeds. Simulation results indicate good performance of the saferoom structure at vehicle impact speeds up to 25 m/s. Damage is more significant and increases nonlinearly starting at impact velocities of 35 m/s (78 mph). Results of this study give valuable insight into the dynamic response of saferooms subjected to projectile impacts, and provide design considerations for civilian protective structures. Further work is being done to validate the models with experimental measurements.
Dulama, Maria Eliza; Magdaş, Ioana
2014-01-01
In this paper, we analyze some aspects related to the "Mathematics and Environmental Exploration" subject syllabus for the preparatory grade, approved by the Minister of National Education of Romania. The analysis aims at the place of the subject syllabus in the Framework Plan; the syllabus structure and the argumentation for studying this subject; the…
Valledor, Luis; Romero-Rodríguez, M Cristina; Jorrin-Novo, Jesus V
2014-01-01
Two-dimensional gel electrophoresis remains the most widely used technique for protein separation in plant proteomics experiments. Despite the continuous technical advances and improvements in current 2-DE protocols, an adequate and correct experimental design and statistical analysis of the data tend to be ignored or not properly documented in the current literature. Both proper experimental design and appropriate statistical analysis are required in order to confidently discuss our results and to draw conclusions from experimental data. In this chapter, we describe a model procedure for a correct experimental design and a complete statistical analysis of a proteomic dataset. Our model procedure covers all of the steps in data mining and processing, starting with data preprocessing (transformation, missing value imputation, definition of outliers) and univariate statistics (parametric and nonparametric tests), and finishing with multivariate statistics (clustering, heat-mapping, PCA, ICA, PLS-DA).
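The preprocessing steps listed in the chapter (transformation, missing-value imputation, outlier definition) can be sketched for one spot's volumes across gels. The MAD-based outlier rule and the 3.5 cutoff are common robust choices assumed here, not necessarily the chapter's.

```python
# Hedged sketch of 2-DE spot-volume preprocessing: log2 transformation,
# median imputation of missing values (None), and robust outlier flagging
# via the median absolute deviation (MAD). Volumes are hypothetical.
import math

def preprocess(spot_volumes):
    """Return (imputed log2 values, outlier flags) for one spot across gels."""
    logged = [math.log2(v) if v is not None else None for v in spot_volumes]
    present = sorted(v for v in logged if v is not None)
    n = len(present)
    median = (present[n // 2] + present[(n - 1) // 2]) / 2
    imputed = [v if v is not None else median for v in logged]
    devs = sorted(abs(v - median) for v in present)
    mad = (devs[n // 2] + devs[(n - 1) // 2]) / 2
    # 1.4826 * MAD estimates the standard deviation under normality;
    # a robust z-score above 3.5 is a conventional outlier cutoff.
    outliers = [abs(v - median) > 3.5 * 1.4826 * mad for v in imputed]
    return imputed, outliers

volumes = [1200.0, 1350.0, None, 1280.0, 9800.0]  # one missing, one extreme
values, flags = preprocess(volumes)
print(flags)  # → [False, False, False, False, True]
```

Univariate tests and the multivariate methods (PCA, clustering) would then run on such cleaned, transformed matrices.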
Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets
Energy Technology Data Exchange (ETDEWEB)
Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-10-30
The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small-Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment, as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community, based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.
Stress ulcer prophylaxis in the intensive care unit trial : detailed statistical analysis plan
Krag, M.; Perner, A.; Wetterslev, J.; Lange, T.; Wise, M. P.; Borthwick, M.; Bendel, S.; Pelosi, P.; Keus, F.; Guttormsen, A. B.; Schefold, J. C.; Meyhoff, T. S.; Marker, S.; Moller, M. H.
Background: In this statistical analysis plan, we aim to provide details of the pre-defined statistical analyses of the Stress Ulcer Prophylaxis in the Intensive Care Unit (SUP-ICU) trial. The aim of the SUP-ICU trial is to assess benefits and harms of stress ulcer prophylaxis with a proton pump
van Krimpen-Stoop, Edith; Meijer, R.R.
1998-01-01
In this study a cumulative-sum (CUSUM) procedure from the theory of Statistical Process Control was modified and applied in the context of person-fit analysis in a computerized adaptive testing (CAT) environment. Six person-fit statistics were proposed using the CUSUM procedure, and three of them
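A CUSUM person-fit procedure of the kind the abstract describes can be sketched minimally: accumulate residuals between observed item scores and model-expected probabilities, tracking upper and lower sums. The reference value k, the probabilities, and the responses below are all hypothetical; the paper's six statistics differ in detail.

```python
# Hedged sketch of a CUSUM person-fit check in a CAT setting: standardized
# here only loosely, using raw (observed - expected) residuals. The reference
# value k = 0.1 is an assumption for illustration.
def cusum_person_fit(responses, probs, k=0.1):
    """Return (max upper sum, min lower sum) of (x - p) residuals."""
    upper = lower = 0.0
    max_upper, min_lower = 0.0, 0.0
    for x, p in zip(responses, probs):
        resid = x - p
        upper = max(0.0, upper + resid - k)   # drift above expectation
        lower = min(0.0, lower + resid + k)   # drift below expectation
        max_upper = max(max_upper, upper)
        min_lower = min(min_lower, lower)
    return max_upper, min_lower

# An examinee who unexpectedly fails easy items (high p, score x = 0)
probs = [0.9, 0.85, 0.8, 0.9, 0.85]
responses = [0, 0, 1, 0, 1]
hi, lo = cusum_person_fit(responses, probs)
print(round(lo, 2))  # a large negative lower sum signals misfit
```

In a Statistical Process Control framing, the sums would be compared against control limits; crossing a limit flags the response pattern as aberrant.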
Measuring the Success of an Academic Development Programme: A Statistical Analysis
Smith, L. C.
2009-01-01
This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…
Papazoglou, Sebastian; Würfel, Jens; Paul, Friedemann; Brandt, Alexander U; Scheel, Michael
2017-04-22
Non-quantitative MRI is prone to intersubject intensity variation, rendering signal-intensity-based analyses limited. Here, we propose a method that fuses non-quantitative routine T1-weighted (T1w), T2w, and T2w fluid-attenuated inversion recovery sequences using independent component analysis, and validate it on age- and sex-matched healthy controls. The proposed method leads to consistent and independent components with a significantly reduced coefficient of variation across subjects, suggesting potential to serve as automatic intensity normalization and thus to enhance the power of intensity-based statistical analyses. To exemplify this, we show that voxelwise statistical testing on single-subject independent components reveals in particular a widespread sex difference in white matter, which was previously shown using, for example, diffusion tensor imaging but was unobservable in the native MRI contrasts. In conclusion, our study shows that single-subject independent component analysis can be applied to routine sequences, thereby enhancing comparability between subjects. Unlike quantitative MRI, which requires specific sequences during acquisition, our method is applicable to existing MRI data. Hum Brain Mapp, 2017. © 2017 Wiley Periodicals, Inc.
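The abstract's key quantitative claim, a reduced coefficient of variation (CV) across subjects after the ICA step, can be sketched as a simple comparison. The per-subject intensity values below are hypothetical, not the study's data.

```python
# Hedged sketch of the across-subject coefficient-of-variation comparison the
# abstract reports: CV of a tissue's mean intensity for a native contrast vs.
# a derived ICA component. All values are hypothetical.
def coefficient_of_variation(values):
    """Sample standard deviation divided by the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return (var ** 0.5) / mean

# Hypothetical per-subject mean white-matter intensities
native_t1w = [420.0, 510.0, 365.0, 480.0, 445.0]   # arbitrary scanner units
ica_component = [1.02, 0.99, 1.01, 0.98, 1.00]      # near-normalized scale
print(coefficient_of_variation(native_t1w) >
      coefficient_of_variation(ica_component))  # → True
```

A lower CV on the component scale is what makes voxelwise statistics across subjects feasible without quantitative acquisition.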
Nonlinear Analysis of Frame Structures Subjected to Blast Overpressures
1977-05-01
[Abstract garbled in scanning; recoverable fragments indicate that the blast shock front pressure traverses the plan of the roof and walls from one framing bay to the next, that a full treatment of these second-order effects would significantly increase the complexity of the frame analysis, and that boundary-condition cases are enumerated, e.g. Case 5, yield at (A) with pin at (B), and Case 6, elastic restraint at (A) with yield at (B).]
DEFF Research Database (Denmark)
Jones, Allan; Sommerlund, Bo
2007-01-01
The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power analysis in estimating the potential error introduced by small and large samples is advocated. Power analysis is not recommended as a replacement for NHST but as an additional source of information about the phenomena under investigation. Moreover, the importance of conceptual analysis in relation to statistical analysis of hypothesis...
National Research Council Canada - National Science Library
Bakraji, Elias Hanna; Abboud, Rana; Issa, Haissm
2014-01-01
Thermoluminescence (TL) dating and multivariate statistical methods based on radioisotope X-ray fluorescence analysis have been utilized to date and classify Syrian archaeological ceramics fragment from Tel Jamous site...
National Research Council Canada - National Science Library
Chunmei Guan; Rui Dang; Yu Cui; Liyan Liu; Xiaobei Chen; Xiaoyu Wang; Jingli Zhu; Donggang Li; Junwei Li; Decai Wang
2017-01-01
.... We have used an analytical approach, based on inductively coupled plasma mass spectrometry coupled with multivariate statistical analysis, to study the profiles of a wide range of metals in AD...
Analysis of Subjective Evaluation of User Experience with Headphones
DEFF Research Database (Denmark)
Jensen, Rasmus; Lauridsen, Nikolaj; Poulsen, Andreas
2016-01-01
The aspects of what provides a good user experience with headphones are initially investigated by an exploratory study (experiment I). Using KJ-Technique, 5 workshop teams of 4-6 participants each provide a number of aspects influencing their experience with headphones. Analysing the aspects for uniqueness and relatedness provides 144 aspects of user experience with headphones, arranged in 12 categories. The 144 influencing aspects from experiment I are condensed, and 24 attributes regarding user experience with headphones are selected. These attributes are tested in regard to their correlation with and effects on overall evaluation of headphones in a second experiment, thus investigating which attributes are most influential for user experience. The attributes are listed in the following categories: sound quality, comfort, build quality, design and brand. A factor analysis shows that the categories fit the attributes. Furthermore, some attributes show high correlations with the overall evaluation, suggesting that these attributes are important for user experience with headphones.
Subjective experience analysis in women with breast cancer
Directory of Open Access Journals (Sweden)
Miriam Belber-Gómez
2018-01-01
Full Text Available In this article, the psychological experience and needs shown in the discourse of women diagnosed with breast cancer in a psychological group intervention were analyzed. The sessions were transcribed and a discourse analysis was performed, selecting the most prevalent topics. The main psychological difficulties perceived by the participants are the following: body identity change, sexuality changes, a new quality of interpersonal relationships, implications of the positive-thinking culture, fear of recurrence, the relationship with the hospital staff, and change after diagnosis. The aspects of the group considered to be helpful are also addressed, i.e., feeling understood by the others, seeing the rest of the participants as coping models, and changing their relationship with the illness. Several clinical implications are highlighted in order to improve comprehensive care.
Finite element analysis for dental implants subjected to thermal loads
Directory of Open Access Journals (Sweden)
Mohamad Reza Khalili
2013-10-01
Full Text Available Background and Aims: Dental implants have been studied for the replacement of missing teeth for many years. The success of implants is strongly related to their stability and resistance under applied loads and to minimal stress in the jaw bone. The purpose of this study was to numerically study a 3D model of an implant under thermal loads. Materials and Methods: The bone and the ITI implant were modeled in Solidworks software. To obtain an exact model, the bone was assumed to be a linear orthotropic material. The implant system, including implant, abutment, framework, and crown, was modeled and located in the bone. After importing the model into Abaqus software, the material properties, boundary conditions, and loads were applied, and after meshing, the model was analyzed. In this analysis, the loads were applied in two steps. In the first step, the mechanical load was applied as a tightening torque to the abutment, and the abutment was tightened in the implant with 35 N.cm torque. In the second step, the thermal load originating from drinking cold and hot water was applied as thermal flux on the ceramic crown surface of the model. Results: The thermal analysis showed that the thermal gradient in the bone was about 5.5 and 4.9 degrees centigrade when drinking cold and hot water, respectively, although the maximum gradient of the whole system reached 14 degrees, which occurred in the crown when drinking cold water. Conclusion: Thermal stresses were small because of the low thermal gradient. The maximum stresses in the abutment were due to the tension preloads originating from the tightening torque.
Psalm 151 of the Septuagint: a Subject Analysis
Directory of Open Access Journals (Sweden)
Veviurko Il'ia
2016-01-01
Full Text Available By its location in the Septuagint canon, Psalm 151 serves as a kind of epilogue to the whole Psalter. Due to its isolation in the Greek text and its non-canonicity from the masoretic point of view, as well as its apparent simplicity and triviality, the Psalm did not often attract the attention of biblical scholars. Only after the discovery of a longer Hebrew version of the Psalm at Qumran did the situation begin to change. The Greek Ps 151 has now been engaged in comparative study, but generally remains in the shadow of the Qumran text. This article deals principally with the Septuagint version of the Psalm, which underlies the Slavonic and other recensions adopted in the Christian tradition. A thorough thematic analysis, beginning with the presumption of the text's meaningfulness, will then allow a comparison with the longer Qumran version, the latter being found to be a poetical interpretation of the former. The analysis reveals the David of Ps 151 to be an archetypal rather than a historical personality. This is enough to explain the almost complete withdrawal of the emotional «colours», though it does not deprive the Psalm of its poetical expressiveness. The hero of Ps 151 is a silhouette with some traits of the Anointed One that cometh. This conclusion leads us to a proper estimation of the significance of the Psalm in the history of religious ideas: by examining this text we can determine more exactly which characteristics of the Psalter's David, collected in its 'epilogue', were perceived by readers as protomessianic: the stainless moral purity, the unfamiliarity to the world, the mysterious conversation with God, the natural possession of power as a mode of direct divine activity, and the readiness to become a ransomer for the people.
Directory of Open Access Journals (Sweden)
Н. В. Лещук
2017-12-01
Full Text Available Purpose. To define statistical methods and tools (application packages) for creating a decision support system (DSS) for the qualifying examination of plant varieties suitable for dissemination (VSD) in the context of data processing tasks, and to substantiate the selection of software for processing statistical data from the field and laboratory investigations included in the qualifying examination for VSD. Methods. An analytical method based on the comparison of methods of descriptive and multivariate statistics and tools for intellectual analysis of data obtained during the qualifying examination for VSD; a comparative analysis of software tools for processing statistical data in order to prepare proposals for the final decision on a plant variety application; and a decomposition of the tasks included in the decision support system for the qualifying examination of variety candidates for VSD. Results. The statistical package SPSS, the analysis package included in MS Excel, and the programming language R were compared on the following criteria: interface usability, functionality, quality of calculation result presentation, visibility of graphical information, and software cost. Both packages are widely used in the world for statistical data processing and have similar functions for computing statistics. Conclusion. The tasks of VSD examination were separated out and tools recommended for tackling them. The programming language R is the recommended tool; its main advantage over the package IBM SPSS Statistics is that R is open source software.
TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.
Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han
2017-03-01
High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline and are difficult to integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.
EMOTIONAL AND VOLITIONAL RELIABILITY AS A SUBJECT OF SCIENTIFIC ANALYSIS
Directory of Open Access Journals (Sweden)
Xia Juan
2016-11-01
Full Text Available The article analyses scientists' views on the phenomenon of “reliability”. The author pays special attention to the fact that researchers treat “reliability” as a systemic characteristic, defined by a specific set of professional, psychological, and physiological qualities and functions at the different levels of a person's activity, which together ensure stable and dependable performance. Quality is regarded as the most important property conferring definiteness on any phenomenon. The personal qualities that affect the maturity of emotional and volitional reliability are established, and a scientific interpretation of the concept of “reliability” and of the main features of performing reliability is given. The analysis of the pedagogical and psychological literature shows that emotional qualities are formed throughout life according to a person's environmental and genetic conditions. Emotional irritability, emotional stability, emotional tone, and emotional reactions are qualities that depend upon the type of a person's higher nervous activity. A person's activity, especially musical activity, cannot exist without emotions and feelings. Music plays an important role here: through it, emotions become conscious processes, and a person's higher emotions (moral, intellectual, esthetic) are formed. Individual differences in the expression of emotions depend on a person's volitional qualities; volition is understood as the psychological activity that determines the purposefulness of actions. The author concludes that the maturity of emotional and volitional reliability depends on the direction and stability of socially significant motives (needs, interests, values, attitudes) and on the personality's psychophysical traits (abilities, capacities), which provide the required level and effectiveness.
Diagnosis checking of statistical analysis in RCTs indexed in PubMed.
Lee, Paul H; Tse, Andy C Y
2017-11-01
Statistical analysis is essential for reporting the results of randomized controlled trials (RCTs) and for evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether its assumptions hold. The aim was to review all RCTs published in journals indexed in PubMed during December 2014 to provide a complete picture of how RCTs handle the assumptions of statistical analysis. We reviewed all RCTs published in December 2014 in journals indexed in PubMed, identified using the Cochrane highly sensitive search strategy; the journals' 2014 impact factors were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were recorded. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (27·2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted: assumptions were checked in only 20%, 8·6% and 7% of papers for generalized linear models, Cox proportional hazards models and multilevel models, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of the CONSORT guidelines) were not associated with the reporting of diagnosis checking. Diagnosis checking of statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of diagnosis checking of assumptions. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.
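The normality check that the survey looked for can be carried out from sample moments alone. The sketch below is illustrative only (the paper does not prescribe a particular test): it computes the Jarque-Bera statistic, a common screen for the normality assumption of a crude analysis, from sample skewness and kurtosis.

```python
import math

def jarque_bera(xs):
    """Jarque-Bera normality check from sample skewness and kurtosis.

    Returns the JB statistic; values above ~5.99 (chi-square with 2 df,
    alpha = 0.05) suggest the normality assumption is doubtful.
    """
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    kurt = m4 / m2 ** 2            # Pearson kurtosis; 3 under normality
    return n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)

# A small symmetric sample close to normal yields a small statistic.
sample = [-2, -1, -1, 0, 0, 0, 0, 1, 1, 2]
jb = jarque_bera(sample)
```

In practice one would use a library routine (e.g. a dedicated implementation with proper small-sample behaviour); the point here is only that the check is cheap to perform and report.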
Tariq, Saadia R; Shah, Munir H; Shaheen, Nazia
2009-09-30
Two tanning centres of Pakistan, namely Kasur and Mian Channun, were investigated with respect to their tanning processes (chrome and vegetable, respectively) and the effects of the tanning agents on the quality of soil in the vicinity of the tanneries. Effluent and soil samples were collected from 16 tanneries each in Kasur and Mian Channun. The levels of selected metals (Na, K, Ca, Mg, Fe, Cr, Mn, Co, Cd, Ni, Pb and Zn) were determined using a flame atomic absorption spectrophotometer under optimum analytical conditions. The data thus obtained were subjected to univariate and multivariate statistical analyses. Most of the metals exhibited considerably higher concentrations in the effluents and soils of Kasur than in those of Mian Channun. The soil of Kasur was highly contaminated by Na, K, Ca and Mg emanating from various processes of leather manufacture; furthermore, Cr was present at levels much above its background concentration owing to the adoption of chrome tanning. The levels of Cr in soil samples collected from the vicinity of the Mian Channun tanneries were almost comparable to background levels; the soil of this city was found to be contaminated only by metals originating from pre-tanning processes. The apportionment of the selected metals in the effluent and soil samples was determined by multivariate cluster analysis, which revealed significant differences between the chrome and vegetable tanning processes.
Tutorial: survival analysis--a statistic for clinical, efficacy, and theoretical applications.
Gruber, F A
1999-04-01
Current demands for increased research attention to therapeutic efficacy and efficiency, as well as for improved developmental models, call for analysis of longitudinal outcome data. Statistical treatment of longitudinal speech and language data is difficult, but there is a family of statistical techniques in common use in medicine, actuarial science, manufacturing, and sociology that has not been used in speech or language research. Survival analysis is introduced as a method that avoids many of the statistical problems of other techniques because it treats time as the outcome. In survival analysis, probabilities are calculated not just for groups but also for individuals in a group, which is a major advantage for clinical work. This paper provides a basic introduction to nonparametric and semiparametric survival analysis using speech outcomes as examples. A brief discussion of potential conflicts between actuarial analysis and clinical intuition is also provided.
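The core nonparametric tool the tutorial introduces, the Kaplan-Meier estimator, is compact enough to sketch in plain Python. This is a minimal illustration, not the paper's own code, and the observation times below are invented; censored subjects leave the risk set without contributing an event.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : observation time for each subject
    events : 1 if the event occurred, 0 if the subject was censored
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk   # product-limit step
            curve.append((t, surv))
        n_at_risk -= ties                       # everyone at t leaves the risk set
        i += ties
    return curve

# Six hypothetical subjects; '+' marks censoring: 1, 2+, 3, 4, 4, 5+
curve = kaplan_meier([1, 2, 3, 4, 4, 5], [1, 0, 1, 1, 1, 0])
```

The estimate drops only at event times (here 1, 3, and 4), which is what gives survival curves their characteristic step shape.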
Directory of Open Access Journals (Sweden)
Carvalho, Bettina
2012-01-01
Full Text Available Introduction: Anthropometric proportions and symmetry are considered determinants of beauty. These parameters have significant importance in facial plastic surgery, particularly in rhinoplasty. As the central organ of the face, the nose is especially important in determining facial symmetry, both through the perception of a crooked nose and through the determination of facial growth. Evaluating the presence of facial asymmetry preoperatively is therefore highly relevant, both for surgical planning and for counseling. Aim/Objective: To evaluate and document the presence of facial asymmetry in patients during rhinoplasty planning and to correlate the anthropometric measures with the perception of facial symmetry or asymmetry, assessing whether facial asymmetry is more prevalent in these patients than in volunteers without nasal complaints. Methods: This prospective study compared photographs of patients in rhinoplasty planning with those of volunteers (controls), n = 201 in total, evaluating anthropometric measurements taken from a line through the center of the face to the tragus, medial canthus, nasal alar margin, and oral commissure on each side, using statistical analysis (Z test and odds ratio). Results: None of the patients or volunteers had completely symmetric values. Subjectively, 59% of patients were perceived as asymmetric, against 54% of volunteers. Objectively, more than 89% of participants had asymmetrical measures. Patients had greater RLMTr (midline-tragus ratio) asymmetry than volunteers, a difference that was statistically significant. Discussion/Conclusion: Facial asymmetries are very common in patients seeking rhinoplasty, and special attention should be paid to these aspects both for surgical planning and for counseling of patients.
Statistical power analysis a simple and general model for traditional and modern hypothesis tests
Murphy, Kevin R; Wolach, Allen
2014-01-01
Noted for its accessible approach, this text applies the latest approaches of power analysis to both null-hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work, and how they can be used to choose the appropriate criterion for defining statistically significant outcomes, are sprinkled throughout. The book presents a simple and general model for statistical power analysis.
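The kind of calculation the book walks readers through can be sketched with a normal approximation. This is an illustrative simplification of the exact noncentral-t computation, not the book's own procedure, and the effect size, SD, and group size below are hypothetical.

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def power_two_sample(delta, sigma, n):
    """Approximate power of a two-sided, two-sample z-test at alpha = 0.05.

    delta : true mean difference, sigma : common SD, n : subjects per group.
    Normal approximation (ignores the noncentral-t correction, so it is
    slightly optimistic for small n).
    """
    z_crit = 1.959963984540054               # Phi^{-1}(0.975)
    ncp = abs(delta) / (sigma * math.sqrt(2.0 / n))
    return phi(ncp - z_crit) + phi(-ncp - z_crit)

# Detecting a half-SD difference with 64 subjects per group
p = power_two_sample(delta=0.5, sigma=1.0, n=64)
```

The same function also answers the planning question in reverse: increasing n until the returned power crosses a target such as 0.80.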
Liu, Na; Li, Jun; Li, Bao-Guo
2014-11-01
The quality control of Chinese medicine has always been a focus, and a difficulty, in the development of traditional Chinese medicine (TCM); it is also one of the key problems restricting the modernization and internationalization of Chinese medicine. Multivariate statistical analysis is an analytical approach well suited to the characteristics of TCM data and has been widely used in quality-control studies. Applied to the multiple, mutually correlated indicators and variables that arise in quality control, it can uncover hidden regularities and relationships in the data, thereby supporting decision-making and enabling effective quality evaluation of TCM. This paper summarizes the application of multivariate statistical analysis in the quality control of Chinese medicine, providing a basis for further study.
Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis
DEFF Research Database (Denmark)
Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.
2007-01-01
The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategy analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation-matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow i) determining natural groups or clusters of control strategies with similar behaviour, ii) finding and interpreting hidden, complex and causal relational features in the data set and iii) identifying important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both the analysis and the interpretation of plant-wide control strategies.
Analysis of Variance with Summary Statistics in Microsoft® Excel®
Larson, David A.; Hsu, Ko-Cheng
2010-01-01
Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
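The computation the students are asked to perform — a single-factor ANOVA from summary statistics only — is straightforward once written out. The sketch below uses Python rather than Excel purely as an illustration; the three example groups are hypothetical.

```python
def anova_from_summary(ns, means, sds):
    """One-way ANOVA F statistic from per-group summary statistics.

    ns    : number of observations per group
    means : group means
    sds   : group sample standard deviations (n-1 denominator)
    Returns (F, df_between, df_within).
    """
    k = len(ns)
    n_total = sum(ns)
    grand = sum(n * m for n, m in zip(ns, means)) / n_total
    # Between-group sum of squares from the group means alone
    ss_between = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    # Within-group sum of squares recovered from the standard deviations
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_b, df_w = k - 1, n_total - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

F, df_b, df_w = anova_from_summary([10, 10, 10], [5.0, 6.0, 7.0], [1.0, 1.0, 1.0])
```

The key observation is that raw data are never needed: the within-group variability is fully recoverable from each category's standard deviation.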
Distribution and Statistical Analysis of Bacteria in Lake Alau in the ...
African Journals Online (AJOL)
Distribution and Statistical Analysis of Bacteria in Lake Alau in the Arid Northern Nigeria. ... Journal of Applied Sciences and Environmental Management ... probable number (MPN) and membrane filtration (MF) for the analysis of the distribution of bacteria in water samples from Lake Alau Dam in the arid Northern Nigeria.
Directory of Open Access Journals (Sweden)
Benedetti Maria
2012-08-01
Full Text Available Abstract Background Self-reported gait unsteadiness is often a problem in neurological patients without any clinical evidence of ataxia, because it leads to reduced activity and limitations in function. However, only a few papers in the literature address this disorder. The aim of this study is to objectively identify subclinical abnormal gait strategies in these patients. Methods Eleven patients affected by self-reported unsteadiness during gait (4 TBI and 7 MS) and ten healthy subjects underwent gait analysis while walking back and forth along a 15-m corridor. Time-distance parameters, ankle sagittal motion, and muscular activity during gait were acquired by a wearable gait analysis system (Step32, DemItalia, Italy) over a large number of successive strides in the same walk and statistically processed. Both self-selected and high gait speeds were tested under relatively unconstrained conditions. Non-parametric statistical analysis (Mann–Whitney and Wilcoxon tests) was carried out on the means of the data of the two examined groups. Results The main findings, with data adjusted for velocity of progression, show that increased double support and reduced velocity of progression are the main parameters discriminating patients with self-reported unsteadiness from healthy controls. Muscular intervals of activation showed a significant increase in the activity duration of the rectus femoris and tibialis anterior in patients with respect to the control group at high speed. Conclusions Patients with a subjective sensation of instability, not clinically documented, walk with altered strategies, especially at high gait speed. This is thought to depend on the mechanisms of postural control and coordination. The gait anomalies detected might explain the symptoms reported by the patients and allow for a more focused treatment design. The wearable gait analysis system used for long-distance statistical walking assessment was able to detect these subtle alterations.
PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool
AlTurki, Musab
2011-01-01
Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
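The core of statistical model checking — estimating the probability that a property holds by sampling system runs until a prescribed error bound is met — can be sketched in a few lines. This is a minimal Hoeffding-bound illustration, not PVeStA's actual algorithm (which uses sequential and parallel schemes), and the toy "system" below is invented.

```python
import math
import random

def estimate_probability(sample_once, eps, delta, seed=0):
    """Monte Carlo probability estimate in the style of statistical model
    checking: draw enough i.i.d. samples that, by Hoeffding's inequality,
    the estimate is within eps of the true probability with confidence
    1 - delta. Returns (estimate, number of samples drawn).
    """
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    rng = random.Random(seed)
    hits = sum(sample_once(rng) for _ in range(n))
    return hits / n, n

# Toy "system run": the property holds when a simulated component survives
# three exponential failure stages (the rates are made up for illustration).
def run(rng):
    return int(all(rng.expovariate(1.0) > 0.2 for _ in range(3)))

p_hat, n = estimate_probability(run, eps=0.01, delta=0.05)
```

Each sample is independent, which is exactly why parallelization pays off so well in tools like PVeStA: the n runs can be distributed across workers with no coordination beyond summing the hits.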
Analysis and classification of ECG-waves and rhythms using circular statistics and vector strength
Directory of Open Access Journals (Sweden)
Janßen Jan-Dirk
2017-09-01
Full Text Available The most common way to analyse heart rhythm is to calculate the RR interval and the heart rate variability; for further evaluation, descriptive statistics are often used. Here we introduce a new and more natural heart-rhythm analysis tool based on circular statistics and vector strength. Vector strength is a tool to measure the periodicity, or lack of periodicity, of a signal. We divide the signal into non-overlapping window segments and project the detected R-waves onto the unit circle using the complex exponential function and the median RR interval. In addition, we calculate the vector strength and apply circular statistics as well as an angular histogram to the R-wave vectors. This approach enables an intuitive visualization and analysis of rhythmicity. Our results show that ECG waves and rhythms can be easily visualized, analysed and classified by circular statistics and vector strength.
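The vector-strength measure described above has a direct implementation: map each event time to a phase on the unit circle via the complex exponential and take the modulus of the mean phasor. This is a minimal sketch of the general technique, not the authors' code, and the R-wave times and the 0.8 s reference period below are made up.

```python
import cmath
import math

def vector_strength(event_times, period):
    """Vector strength of event times relative to a reference period.

    Each event is mapped to a phase on the unit circle; the modulus of
    the mean phasor is 1 for perfectly periodic events and near 0 for
    events spread uniformly around the cycle.
    """
    phasors = [cmath.exp(2j * math.pi * t / period) for t in event_times]
    return abs(sum(phasors) / len(phasors))

# R-waves locked to a hypothetical 0.8 s median RR interval -> strength ~1
vs_regular = vector_strength([0.8, 1.6, 2.4, 3.2], period=0.8)
# Events spread evenly over the cycle -> strength ~0
vs_irregular = vector_strength([0.0, 0.2, 0.4, 0.6], period=0.8)
```

The same phasors can be binned by angle to produce the angular histogram mentioned in the abstract.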
Introduction to statistics and data analysis with exercises, solutions and applications in R
Heumann, Christian; Shalabh
2016-01-01
This introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. In the experimental sciences and interdisciplinary research, data analysis has become an integral part of any scientific study. Issues such as judging the credibility of data, analyzing the data, evaluating the reliability of the obtained results and finally drawing the correct and appropriate conclusions from the results are vital. The text is primarily intended for undergraduate students in disciplines like business administration, the social sciences, medicine, politics, macroeconomics, etc. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R as well as supplementary material that will enable the reader to quickly adapt all methods to their own applications.
Directory of Open Access Journals (Sweden)
Jason H. Moore
2007-01-01
Full Text Available The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occurring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. EVA is available at no charge to academic users and can be found at http://www.epistasis.org.
Analysis of health in health centers area in Depok using correspondence analysis and scan statistic
Basir, C.; Widyaningsih, Y.; Lestari, D.
2017-07-01
Hotspots indicate areas with a higher case intensity than others. In health problems, for example, the number of cases in a region can serve as a parameter describing the severity of the area's condition; if this condition is detected early, it can be addressed preventively. Many factors affect the severity level of an area. The health factors considered in this study are the numbers of infants with low birth weight, malnourished children under five years old, deaths of children under five, maternal deaths, births without the help of health personnel, infants without health care, and infants without basic immunization. Case counts are based on each public health center area in Depok. Correspondence analysis provides graphical information about the relationship between two nominal variables: it creates a plot based on row and column scores and shows strongly related categories at close distances. The scan statistic method is used to detect hotspots based on selected variables occurring in the study area, and correspondence analysis is used to picture the associations between regions and variables. Using the SaTScan software, the Sukatani health center was identified as a hotspot, and the correspondence analysis showed that the health centers and the seven variables have a very significant relationship, with the majority of health centers close to all variables except Cipayung, which is distantly related to the number of maternal deaths. These results can serve as input for government agencies to improve the health level in the area.
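The scan statistic idea — scoring each candidate region by how much its observed case count exceeds its population-based expectation — can be sketched with Kulldorff's Poisson likelihood-ratio score, the model underlying SaTScan's discrete Poisson analysis. This is a simplified single-region illustration, not the SaTScan implementation; the region names, case counts, and expectations below are hypothetical.

```python
import math

def poisson_llr(cases, expected, total_cases, total_expected):
    """Log-likelihood ratio for one candidate hotspot region under the
    Poisson scan statistic: compares observed vs. expected counts inside
    and outside the region. Non-excess regions score 0."""
    c, e = cases, expected
    C, E = total_cases, total_expected
    if c <= e * C / E:              # no excess risk inside the region
        return 0.0
    inside = c * math.log(c / e)
    outside = 0.0 if c == C else (C - c) * math.log((C - c) / (E - e))
    baseline = C * math.log(C / E)
    return inside + outside - baseline

# Hypothetical health-center case counts and population-based expectations
regions = {"A": (30, 12.0), "B": (10, 11.0), "C": (8, 9.0)}
total_c = sum(c for c, _ in regions.values())
total_e = sum(e for _, e in regions.values())
scores = {name: poisson_llr(c, e, total_c, total_e)
          for name, (c, e) in regions.items()}
hotspot = max(scores, key=scores.get)
```

In a full analysis the maximum score is compared against its Monte Carlo null distribution (randomly reassigned cases) to obtain a p-value, which is what software such as SaTScan automates.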
Directory of Open Access Journals (Sweden)
Thomas Koenig
2011-01-01
Full Text Available We present a program (Ragu; Randomization Graphical User interface) for statistical analyses of multichannel event-related EEG and MEG experiments. Based on measures of scalp field differences that include all sensors, and using powerful, assumption-free randomization statistics, the program yields robust, physiologically meaningful conclusions based on the entire, untransformed, and unbiased set of measurements. Ragu accommodates up to two within-subject factors and one between-subject factor, with multiple levels each. Significance is computed as a function of time and can be controlled for type II errors with overall analyses. Results are displayed in an intuitive visual interface that allows further exploration of the findings. A sample analysis of an ERP experiment illustrates the different possibilities offered by Ragu. The aim of Ragu is to maximize statistical power while minimizing the need for a-priori choices of models and parameters (like inverse models or sensors of interest) that interact with and bias statistics.