WorldWideScience

Sample records for inappropriate statistical analysis

  1. The Inappropriate Symmetries of Multivariate Statistical Analysis in Geometric Morphometrics.

    Science.gov (United States)

    Bookstein, Fred L

In today's geometric morphometrics the commonest multivariate statistical procedures, such as principal component analysis or regressions of Procrustes shape coordinates on Centroid Size, embody a tacit roster of symmetries (axioms concerning the homogeneity of the multiple spatial domains or descriptor vectors involved) that do not correspond to actual biological fact. These techniques are hence inappropriate for any application regarding which we have a priori biological knowledge to the contrary (e.g., genetic/morphogenetic processes common to multiple landmarks, the range of normal in anatomy atlases, the consequences of growth or function for form). But nearly every morphometric investigation is motivated by prior insights of this sort. We therefore need new tools that explicitly incorporate these elements of knowledge, should they be quantitative, to break the symmetries of the classic morphometric approaches. Some of these are already available in our literature but deserve to be known more widely: deflated (spatially adaptive) reference distributions of Procrustes coordinates, Sewall Wright's century-old variant of factor analysis, the geometric algebra of importing explicit biomechanical formulas into Procrustes space. Other methods, not yet fully formulated, might involve parameterized models for strain in idealized forms under load, principled approaches to the separation of functional from Brownian aspects of shape variation over time, and, in general, a better understanding of how the formalism of landmarks interacts with the many other approaches to quantification of anatomy. To more powerfully organize inferences from the high-dimensional measurements that characterize so much of today's organismal biology, tomorrow's toolkit must rely neither on principal component analysis nor on the Procrustes distance formula, but instead on sound prior biological knowledge as expressed in formulas whose coefficients are not all the same. I describe the problems …
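The symmetry this abstract criticizes can be made concrete in a few lines. The sketch below is a hypothetical illustration, not the author's method: ordinary PCA of centered shape coordinates uses only the pooled covariance matrix, so every coordinate direction enters on exactly the same footing and no prior knowledge about particular landmarks can influence the axes. All data and dimensions are invented.

```python
import numpy as np

# Hypothetical sketch: PCA of Procrustes-aligned shape coordinates.
# PCA sees only the covariance matrix -- every coordinate is treated
# symmetrically, which is the "tacit symmetry" the abstract objects to.
rng = np.random.default_rng(0)
n_specimens, n_landmarks = 50, 5
shapes = rng.normal(size=(n_specimens, n_landmarks * 2))  # stand-in for aligned coords
shapes -= shapes.mean(axis=0)                             # center each coordinate

cov = shapes.T @ shapes / (n_specimens - 1)
eigvals, eigvecs = np.linalg.eigh(cov)                    # symmetric eigendecomposition
order = np.argsort(eigvals)[::-1]                         # sort PCs by variance
pc_scores = shapes @ eigvecs[:, order]                    # specimen scores on the PCs

print(pc_scores.shape)  # → (50, 10)
```

Any prior structure (e.g. that some landmarks share a morphogenetic process) is invisible to this computation, which is exactly the point of the abstract.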

  2. Cost analysis of inappropriate treatments for suspected dermatomycoses

    Directory of Open Access Journals (Sweden)

    Emanuela Fiammenghi

    2015-06-01

Superficial mycoses are estimated to affect more than 20-25% of the world’s population, with a consistent increase over the years. Most patients referred to our clinic for suspected dermatomycoses have already been treated with pharmacotherapy, without a previous mycological examination, and many show changes in the clinical manifestations. Indeed, some medications, such as steroids, antivirals, antibiotics and antihistamines, cannot eradicate a fungal infection, and they can also cause atypical clinical manifestations. The consequences of inappropriate treatment include delayed diagnosis, prolonged healing time, and additional costs. The aims of this study were (1) to evaluate the incidence of increased costs attributable to inappropriate therapy sustained by the National Health Service and patients and (2) to highlight the importance of mycological evaluation before starting treatment, in order to improve diagnostic accuracy. An observational retrospective and prospective study was performed from September 2013 to February 2014 in 765 patients referred to our center (University Hospital “Federico II” in Naples, Italy) for suspected mycological infection. The following treatments (alone or in combination) were defined as inappropriate: (1) cortisone in a patient with at least one positive site; (2) antifungals in (a) patients with all negative sites or (b) ineffective antifungal treatment (in terms of drug chosen, dose or duration) in those with all positive sites; or (3) antibiotics, (4) antivirals or (5) antihistamines in patients with ≥ 1 positive site. Five hundred and fifty patients were using medications before the assessment visit. The total amount of avoidable costs related to inappropriate previous treatments was €121,417, representing 74% of the total treatment costs. 253/550 patients also received drugs after the visit. For these patients, the cost of treatment prescribed after mycological testing was €42,952, with a decrease …

  3. Functional Analysis and Treatment of Multiply Controlled Inappropriate Mealtime Behavior

    Science.gov (United States)

    Bachmeyer, Melanie H.; Piazza, Cathleen C.; Fredrick, Laura D.; Reed, Gregory K.; Rivas, Kristi D.; Kadey, Heather J.

    2009-01-01

    Functional analyses identified children whose inappropriate mealtime behavior was maintained by escape and adult attention. Function-based extinction procedures were tested individually and in combination. Attention extinction alone did not result in decreases in inappropriate mealtime behavior or a significant increase in acceptance. By contrast,…

  4. Understanding Factors Contributing to Inappropriate Critical Care: A Mixed-Methods Analysis of Medical Record Documentation.

    Science.gov (United States)

    Neville, Thanh H; Tarn, Derjung M; Yamamoto, Myrtle; Garber, Bryan J; Wenger, Neil S

    2017-11-01

Factors leading to inappropriate critical care, that is, treatment that should not be provided because it does not offer the patient meaningful benefit, have not been rigorously characterized. We explored medical record documentation about patients who received inappropriate critical care and those who received appropriate critical care to examine factors associated with the provision of inappropriate treatment. Medical records were abstracted from 123 patients who were assessed as receiving inappropriate treatment and 66 patients who were assessed as receiving appropriate treatment but died within six months of intensive care unit (ICU) admission. We used mixed methods, combining qualitative analysis of medical record documentation with multivariable analysis, to examine the relationship between patient and communication factors and the receipt of inappropriate treatment, and present these within a conceptual model. The setting was a single academic health system. Medical records revealed 21 themes pertaining to prognosis and factors influencing treatment aggressiveness. Four themes were independently associated with patients receiving inappropriate treatment according to physicians. When decision making was not guided by physicians (odds ratio [OR] 3.76, 95% confidence interval [CI] 1.21-11.70) or was delayed by patient/family (OR 4.52, 95% CI 1.69-12.04), patients were more likely to receive inappropriate treatment. Documented communication about goals of care (OR 0.29, 95% CI 0.10-0.84) and patient preferences driving decision making (OR 0.02, 95% CI 0.00-0.27) were associated with lower odds of receiving inappropriate treatment. Medical record documentation suggests that inappropriate treatment occurs in the setting of communication and decision-making patterns that may be amenable to intervention.
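The associations in this abstract are reported as odds ratios with 95% confidence intervals. As a minimal illustration (not the study's multivariable model), an unadjusted OR and its Wald confidence interval can be computed from a 2x2 table; the counts below are invented:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Invented counts, for illustration only.
or_, lo, hi = odds_ratio_ci(20, 30, 10, 40)
print(or_, lo, hi)
```

A CI that excludes 1.0 corresponds to a statistically significant association, which is how the ORs quoted in the abstract should be read.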

  5. Correlation analysis between team communication characteristics and frequency of inappropriate communications

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Lee, Seung Woo; Park, Jinkyun; Kang, Hyun Gook; Seong, Poong Hyun

    2013-01-01

Highlights: • We propose a method to evaluate team communication characteristics based on social network analysis. • We compare team communication characteristics with the frequency of inappropriate communications. • The frequency of inappropriate communications decreased when more operators performed the same type of role as others. • The frequency of inappropriate communications decreased for teams that provided more acknowledgments. - Abstract: The characteristics of team communications are important, since large process systems such as nuclear power plants, airlines, and railways are operated by operating teams. In such situations, inappropriate communications can cause a lack of situational information and lead to serious consequences for the systems. As a result, the communication characteristics of operating teams should be understood in order to extract meaningful insights into the nature of inappropriate communications. The purpose of this study was to develop a method to evaluate the characteristics of team communications based on social network analysis and to compare them with the frequency of inappropriate communications. For the analysis, verbal protocol data, audio-visually recorded during training sessions of operating teams, were used, and interfacing-system loss-of-coolant accident scenarios were selected. As a result of the study, it was found that the frequency of inappropriate communications decreased when more operators performed the same type of role as other operators, since they can easily and effectively back each other up. Also, the frequency of inappropriate communications decreased for teams whose communications contained relatively more content that acknowledged or confirmed another communication.
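One ingredient of such a social-network analysis is simply counting, from a communication log, how often each operator speaks and how often utterances are acknowledged. The sketch below is hypothetical; the operator roles (RO, SRO, TO) and the log entries are invented for illustration:

```python
from collections import Counter

# Invented communication log: (speaker, addressee, utterance type).
log = [
    ("RO",  "SRO", "report"),
    ("SRO", "RO",  "acknowledge"),
    ("TO",  "SRO", "report"),
    ("SRO", "TO",  "acknowledge"),
    ("SRO", "RO",  "command"),
]

out_degree = Counter(speaker for speaker, _, _ in log)   # utterances per operator
acks = sum(1 for _, _, kind in log if kind == "acknowledge")
ack_rate = acks / len(log)                               # share of acknowledgments
print(out_degree, ack_rate)
```

Metrics of this kind (degree per operator, acknowledgment rate per team) are then correlated with the observed frequency of inappropriate communications.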

  6. Reduction in inappropriate hospital use based on analysis of the causes

    Directory of Open Access Journals (Sweden)

    Soria-Aledo Víctor

    2012-10-01

Abstract Background The aim was to reduce inappropriate admissions and stays through the application of an improvement cycle in patients admitted to a University Hospital. The secondary objective was to analyze the hospital cost saved by reducing inadequacy after the implementation of measures proposed by the improvement group. Methods Pre- and post-analysis of a sample of clinical histories studied retrospectively, in which the Appropriateness Evaluation Protocol (AEP) was applied to a representative hospital sample of 1350 clinical histories in two phases. In the first phase the AEP was applied retrospectively to 725 admissions and 1350 stays. The factors associated with inappropriateness were analysed together with the causes, and specific measures were implemented in a bid to reduce inappropriateness. In the second phase the AEP was reapplied to a similar group of clinical histories and the results of the two groups were compared. The cost of inappropriate stays was calculated by cost accounting. Setting: General University Hospital with 426 beds serving a population of 320,000 inhabitants in the centre of Murcia, a city in south-eastern Spain. Results Inappropriate admissions were reduced significantly: 7.4% in the control group versus 3.2% in the intervention group. Likewise, inappropriate stays decreased significantly, from 24.6% to 10.4%. The cost of inappropriateness in the study sample fell from 147,044 euros to 66,642 euros. The causes of inappropriateness for which corrective measures were adopted were those that showed the most significant decrease. Conclusions It is possible to reduce inadequacy by applying measures based on prior analysis of the situation in each hospital.
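The significance of a drop such as 7.4% versus 3.2% is typically checked with a two-proportion z-test. The counts below are hypothetical, chosen only to match the reported rates over roughly 725 admissions per phase; this is a sketch, not the study's actual computation:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z statistic for the difference of two proportions,
    using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1/n1 + 1/n2))
    return p1, p2, (p1 - p2) / se

# Hypothetical counts: 54/725 ≈ 7.4% before, 23/725 ≈ 3.2% after.
p1, p2, z = two_proportion_z(54, 725, 23, 725)
print(round(p1, 3), round(p2, 3), round(z, 2))
```

A |z| above 1.96 corresponds to p < 0.05, consistent with the abstract's claim that the reduction was significant.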

  7. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

It must be emphasized that this is not a textbook on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  8. Per Object statistical analysis

    DEFF Research Database (Denmark)

    2008-01-01

This RS code performs Object-by-Object analysis of each Object's sub-objects, e.g. statistical analysis of an object's individual image data pixels. Statistics such as percentiles (so-called "quartiles") are derived by the process, but the return can only be a Scene Variable, not an Object Variable. This procedure was developed in order to be able to export objects as ESRI shape data with the 90-percentile of the Hue of each object's pixels as an item in the shape attribute table. The procedure uses a sub-level single-pixel chessboard segmentation, loops over each of the objects, and performs an analysis of the values of the object's pixels in MS-Excel. The shell of the procedure could also be used for purposes other than the derivation of Object/Sub-object statistics, e.g. rule-based assignment processes.
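The per-object statistic described here (the 90th percentile of each object's pixel hues) is easy to sketch outside the original RS environment. The object ids and hue values below are invented; in real use they would come from segmented imagery:

```python
import numpy as np

# Invented data: a pixel-to-object assignment and per-pixel hue values.
object_ids = np.array([1, 1, 1, 2, 2, 2, 2])
hue        = np.array([10., 20., 30., 5., 6., 7., 100.])

# 90th percentile of hue, computed per object (linear interpolation).
stats = {
    int(oid): float(np.percentile(hue[object_ids == oid], 90))
    for oid in np.unique(object_ids)
}
print(stats)
```

Each object's single summary value could then be written out as an attribute-table item, as the abstract describes for the ESRI shape export.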

  9. Cost savings associated with improving appropriate and reducing inappropriate preventive care: cost-consequences analysis

    Directory of Open Access Journals (Sweden)

    Baskerville Neill

    2005-03-01

Abstract Background Outreach facilitation has been proven successful in improving the adoption of clinical preventive care guidelines in primary care practice. The net costs and savings of delivering such an intensive intervention need to be understood. We wanted to estimate the proportion of the facilitation intervention cost that is offset, and the potential for savings, by reducing inappropriate screening tests and increasing appropriate screening tests in 22 intervention primary care practices serving a population of 90,283 patients. Methods A cost-consequences analysis of one successful outreach facilitation intervention was done, taking into account the estimated cost savings to the health system of reducing five inappropriate tests and increasing seven appropriate tests. Multiple data sources were used to calculate costs and cost savings to the government. The cost of the intervention and the costs of performing appropriate testing were calculated. Costs averted were calculated from the number of tests not performed as a result of the intervention. Further downstream cost savings were determined by calculating the direct costs associated with the number of false-positive test follow-ups avoided. Treatment costs averted as a result of increasing appropriate testing were similarly calculated. Results The total cost of the intervention over 12 months was $238,388 and the cost of increasing the delivery of appropriate care was $192,912, for a total cost of $431,300. The savings from the reduction in inappropriate testing were $148,568 and from avoided treatment costs as a result of appropriate testing were $455,464, for total savings of $604,032. On a yearly basis the net cost saving to the government is $191,733 per year (2003 $Can), equating to $3,687 per physician or $63,911 per facilitator, an estimated return on the investment in intervention and delivery of appropriate preventive care of 40%. Conclusion Outreach facilitation is more expensive …

  10. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  11. Functional Analysis of Inappropriate Social Interactions in Students with Asperger's Syndrome

    Science.gov (United States)

    Roantree, Christina F.; Kennedy, Craig H.

    2012-01-01

    We analyzed the inappropriate social interactions of 3 students with Asperger's syndrome whose behavior was maintained by social positive reinforcement. We tested whether inappropriate social behavior was sensitive to social positive reinforcement contingencies and whether such contingencies could be reversed to increase the probability of…

  12. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  13. Statistical finite element analysis.

    Science.gov (United States)

    Khalaji, Iman; Rahemifar, Kaamran; Samani, Abbas

    2008-01-01

    A novel technique is introduced for tissue deformation and stress analysis. Compared to the conventional Finite Element method, this technique is orders of magnitude faster and yet still very accurate. The proposed technique uses preprocessed data obtained from FE analyses of a number of similar objects in a Statistical Shape Model framework as described below. This technique takes advantage of the fact that the body organs have limited variability, especially in terms of their geometry. As such, it is well suited for calculating tissue displacements of body organs. The proposed technique can be applied in many biomedical applications such as image guided surgery, or virtual reality environment development where tissue behavior is simulated for training purposes.

  14. Clinical analysis of asthenopia caused by wearing inappropriate glasses in college students

    Directory of Open Access Journals (Sweden)

    Li Wang

    2015-01-01

AIM: To propose control measures by exploring the visual fatigue caused by college students wearing inappropriate glasses. METHODS: A total of 124 students with asthenopia underwent optometric examination, and their original spectacles were checked; a TOPCON CL-100 computerized lensmeter was used to check the original glasses (lens power and the distance between the optical centers); near vergence and the near point of accommodation were examined with a standard near-vision chart, and visual function was checked. RESULTS: All 124 cases (248 eyes) had refractive errors; 77% had spherical and 69% had cylindrical errors of ≥ ±0.50 D, and the distance between the optical centers of the lenses differed significantly from the pupillary distance (U = 5.27). CONCLUSION: Wearing inappropriate spectacles is one of the main causes of asthenopia in students; scientifically fitted glasses can effectively control asthenopia.

  15. Statistical data analysis

    International Nuclear Information System (INIS)

    Hahn, A.A.

    1994-11-01

    The complexity of instrumentation sometimes requires data analysis to be done before the result is presented to the control room. This tutorial reviews some of the theoretical assumptions underlying the more popular forms of data analysis and presents simple examples to illuminate the advantages and hazards of different techniques

  16. Applied multivariate statistical analysis

    National Research Council Canada - National Science Library

    Johnson, Richard Arnold; Wichern, Dean W

    1988-01-01

.... The authors hope that their discussions will meet the needs of experimental scientists, in a wide variety of subject matter areas, as a readable introduction to the statistical analysis of multivariate observations...

  17. Inappropriate analysis does not reveal the ecological causes of evolution of stickleback armour: a critique of Spence et al. 2013.

    Science.gov (United States)

    MacColl, Andrew D C; Aucott, Beth

    2014-09-01

    In a recent paper in this journal, Spence et al. (2013) sought to identify the ecological causes of morphological evolution in three-spined sticklebacks Gasterosteus aculeatus, by examining phenotypic and environmental variation between populations on the island of North Uist, Scotland. However, by using simple qualitative assessments of phenotype and inappropriate measures of environmental variation, Spence et al. have come to a conclusion that is diametrically opposite to that which we have arrived at in studying the same populations. Our criticisms of their paper are threefold: (1) using a binomial qualitative measure of the variation in stickleback armour ("low" versus "minimal" (i.e., "normal" low-plated freshwater sticklebacks versus spineless and/or plateless fish)) does not represent the full range of phenotypes that can be described by quantitative measures of the individual elements of armour. (2) Their use of unspecified test kits, with a probable accuracy of 4 ppm, may not be accurate in the range of water chemistry on North Uist (1 to 30 ppm calcium). (3) Their qualitative assessment of the abundance of brown trout Salmo trutta as the major predator of sticklebacks does not accurately describe the variation in brown trout abundance that is revealed by catch-per-unit-effort statistics. Repeating Spence et al.'s analysis using our own measurements, we find, in direct contradiction to them, that variation in stickleback bony armour is strongly correlated with variation in trout abundance, and unrelated to variation in the concentration of calcium in the lochs in which they live. Field studies in ecology and evolution seldom address the same question in the same system at the same time, and it is salutary that in this rare instance two such studies arrived at diametrically opposite answers.

  18. Statistical Analysis Plan

    DEFF Research Database (Denmark)

    Ris Hansen, Inge; Søgaard, Karen; Gram, Bibi

    2015-01-01

This is the analysis plan for the multicentre randomised controlled study looking at the effect of training and exercises in chronic neck pain patients that is being conducted in Jutland and Funen, Denmark. This plan will be used as a work description for the analyses of the data collected.

  19. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data. The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations. Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions. Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations …

  20. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus … and mind. Statistics represents a quintessential part of such investigations, as they are preluded by a clinical hypothesis that must be verified based on observed data. The massive amounts of image data produced in each examination pose an important and interesting statistical challenge … efficient algorithms which make the analysis of large data sets feasible, and gives examples of applications …

  1. Pyrotechnic Shock Analysis Using Statistical Energy Analysis

    Science.gov (United States)

    2015-10-23


  2. Statistical methods for bioimpedance analysis

    Directory of Open Access Journals (Sweden)

    Christian Tronstad

    2014-04-01

This paper gives a basic overview of relevant statistical methods for the analysis of bioimpedance measurements, with the aim of answering questions such as: How do I begin with planning an experiment? How many measurements do I need to take? How do I deal with large amounts of frequency sweep data? Which statistical test should I use, and how do I validate my results? Beginning with the hypothesis and the research design, the methodological framework for making inferences based on measurements and statistical analysis is explained. This is followed by a brief discussion of correlated measurements and data reduction, before an overview is given of statistical methods for comparison of groups, factor analysis, association, regression and prediction, explained in the context of bioimpedance research. The last chapter is dedicated to the validation of a new method by different measures of performance. A flowchart is presented for selection of a statistical method, and a table gives an overview of the most important terms of performance when evaluating new measurement technology.

  3. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob …

  4. Statistical considerations on safety analysis

    International Nuclear Information System (INIS)

    Pal, L.; Makai, M.

    2004-01-01

The authors have investigated the statistical methods applied to the safety analysis of nuclear reactors and arrived at alarming conclusions: a series of calculations with the generally appreciated safety code ATHLET was carried out to ascertain the stability of the results against input uncertainties in a simple experimental situation. Scrutinizing those calculations, they came to the conclusion that the ATHLET results may exhibit chaotic behavior. A further conclusion is that the technological limits are incorrectly set when the output variables are correlated. Another formerly unnoticed conclusion of the previous ATHLET calculations is that certain innocent-looking parameters (like the wall roughness factor, the number of bubbles per unit volume, or the number of droplets per unit volume) can considerably influence such output parameters as water levels. The authors are concerned with the statistical foundation of present-day safety analysis practices and can only hope that their own misjudgment will be dispelled. Until then, they suggest applying correct statistical methods in safety analysis even if it makes the analysis more expensive. It would be desirable to continue exploring the role of internal parameters (wall roughness factor, steam-water surface in thermal-hydraulics codes, homogenization methods in neutronics codes) in system safety codes and to study their effects on the analysis. In the validation and verification process of a code one carries out a series of computations. The input data are not precisely determined, because measured data have errors and calculated data are often obtained from a more or less accurate model. Some users of large codes are content with comparing the nominal output obtained from the nominal input, whereas all possible inputs should be taken into account when judging safety. At the same time, any statement concerning safety must be aleatory, and its merit can be judged only when the probability is known with which …

  5. Statistical analysis of JET disruptions

    International Nuclear Information System (INIS)

    Tanga, A.; Johnson, M.F.

    1991-07-01

In the operation of JET, and of any tokamak, many discharges are terminated by a major disruption. The disruptive termination of a discharge is usually an unwanted event which may cause damage to the structure of the vessel. In a reactor, disruptions are potentially a very serious problem, hence the importance of studying them and devising methods to avoid disruptions. Statistical information has been collected about the disruptions which have occurred at JET over a long span of operations. The analysis is focused on the operational aspects of the disruptions rather than on the underlying physics. (Author)

  6. Statistical data analysis using SAS intermediate statistical methods

    CERN Document Server

    Marasinghe, Mervyn G

    2018-01-01

    The aim of this textbook (previously titled SAS for Data Analytics) is to teach the use of SAS for statistical analysis of data for advanced undergraduate and graduate students in statistics, data science, and disciplines involving analyzing data. The book begins with an introduction beyond the basics of SAS, illustrated with non-trivial, real-world, worked examples. It proceeds to SAS programming and applications, SAS graphics, statistical analysis of regression models, analysis of variance models, analysis of variance with random and mixed effects models, and then takes the discussion beyond regression and analysis of variance to conclude. Pedagogically, the authors introduce theory and methodological basis topic by topic, present a problem as an application, followed by a SAS analysis of the data provided and a discussion of results. The text focuses on applied statistical problems and methods. Key features include: end of chapter exercises, downloadable SAS code and data sets, and advanced material suitab...

  7. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts, and different methodologies are used, namely the likelihood ratio criterion, and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.
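A minimal worked example helps fix ideas for the change point problem described above. This sketch (not taken from the book) finds a single change point in the mean of a sequence by choosing the split that minimizes the pooled residual sum of squares, which for a mean shift with common known variance is equivalent to the likelihood ratio criterion; the data are invented:

```python
import numpy as np

def best_split(x):
    """Index k (1 <= k < len(x)) minimizing RSS of the two segments
    x[:k] and x[k:] around their own means."""
    x = np.asarray(x, dtype=float)
    best_k, best_rss = None, np.inf
    for k in range(1, len(x)):                       # candidate change points
        left, right = x[:k], x[k:]
        rss = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if rss < best_rss:
            best_k, best_rss = k, rss
    return best_k

# Invented series whose mean jumps from ~0 to ~5 after the 4th value.
data = [0.1, -0.2, 0.0, 0.2, 5.1, 4.8, 5.2, 4.9]
print(best_split(data))  # → 4
```

In practice the minimized statistic is then compared against a threshold to decide whether a change point is present at all.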

  8. Statistical analysis of management data

    CERN Document Server

    Gatignon, Hubert

    2013-01-01

    This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.

  9. Statistical Analysis by Statistical Physics Model for the STOCK Markets

    Science.gov (United States)

    Wang, Tiansong; Wang, Jun; Fan, Bingli

A new stochastic stock price model of stock markets, based on the contact process of statistical physics systems, is presented in this paper. The contact model is a continuous-time Markov process; one interpretation of this model is as a model for the spread of an infection. Through this model, the statistical properties of the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE) are studied. In the present paper, the data of the SSE Composite Index and the SZSE Component Index are analyzed, and the corresponding simulations are carried out by computer. Further, we investigate the statistical properties, fat-tail phenomena, power-law distributions, and long memory of returns for these indices. The techniques of the skewness-kurtosis test, the Kolmogorov-Smirnov test, and R/S analysis are applied to study the fluctuation characteristics of the stock price returns.
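R/S (rescaled range) analysis, one of the techniques named above, estimates the Hurst exponent of a return series from the slope of log(R/S) against log(window size). The sketch below is a generic illustration, not the paper's implementation; the returns are simulated i.i.d. noise, so the estimate should land near 0.5 (no long memory):

```python
import numpy as np

def rescaled_range(x):
    """Classic R/S statistic: range of cumulative mean deviations
    divided by the standard deviation of the window."""
    dev = np.cumsum(x - x.mean())
    return (dev.max() - dev.min()) / x.std()

rng = np.random.default_rng(42)
returns = rng.normal(size=4096)                       # simulated i.i.d. returns

sizes = np.array([64, 128, 256, 512, 1024])
avg_rs = np.array([
    np.mean([rescaled_range(chunk) for chunk in returns.reshape(-1, n)])
    for n in sizes
])
hurst = np.polyfit(np.log(sizes), np.log(avg_rs), 1)[0]  # slope = Hurst estimate
print(round(hurst, 2))
```

Values well above 0.5 on real index returns would indicate the long memory the abstract investigates (R/S on finite i.i.d. samples has a known mild upward bias, so estimates slightly above 0.5 are expected even for noise).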

  10. Impact of legislation and a prescription monitoring program on the prevalence of potentially inappropriate prescriptions for monitored drugs in Ontario: a time series analysis.

    Science.gov (United States)

    Gomes, Tara; Juurlink, David; Yao, Zhan; Camacho, Ximena; Paterson, J Michael; Singh, Samantha; Dhalla, Irfan; Sproule, Beth; Mamdani, Muhammad

    2014-10-01

    The increased use of opioid analgesics, sedative hypnotics and stimulants, coupled with the associated risks of overdose, has raised concerns around the inappropriate prescribing of these monitored drugs. We assessed the impact of new legislation, the Narcotics Safety and Awareness Act, and a centralized Narcotics Monitoring System (implemented November 2011 and May 2012, respectively), on the dispensing of prescriptions suggestive of misuse. We conducted a time series analysis of publicly funded prescriptions for opioids, benzodiazepines and stimulants dispensed monthly in Ontario from January 2007 to May 2013, based on information in the Ontario Public Drug Benefit Database. In the primary analysis, a prescription was deemed potentially inappropriate if it was dispensed within 7 days of an earlier prescription and was for at least 30 tablets of a drug in the same class as the earlier prescription, but originated from a different physician and a different pharmacy. After enactment of the new legislation, the prevalence of potentially inappropriate opioid prescriptions decreased by 12.5% in 6 months (from 1.6% in October 2011 to 1.4% in April 2012; p = 0.01). No further significant change was observed after the introduction of the narcotic monitoring system (p = 0.8). By May 2013, the prevalence had dropped to 1.0%. Inappropriate benzodiazepine prescribing was significantly influenced by both the legislation and the monitoring system (p = 0.05), which together reduced potentially inappropriate prescribing by 50.0% between October 2011 and May 2013 (from 0.4% to 0.2%). The prevalence of potentially inappropriate prescribing of stimulants was significantly influenced by the introduction of the monitoring system in May 2012, falling from 0.7% in April 2012 to 0.3% in May 2013 (p = 0.02). For a select group of drugs prone to misuse and diversion, legislation and a prescription monitoring program reduced the prevalence of prescriptions suggestive of misuse. This suggests that regulatory interventions can promote appropriate prescribing which could potentially be applied to other jurisdictions and drugs of concern.
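
    The study's primary definition of a potentially inappropriate prescription is effectively a flagging rule, which can be sketched as follows. The record fields and values here are hypothetical and illustrative only, not the Ontario Public Drug Benefit Database schema:

    ```python
    from datetime import date

    # Hypothetical dispensing records; field names are illustrative only.
    rxs = [
        {"patient": 1, "drug_class": "opioid", "date": date(2012, 3, 1),
         "tablets": 60, "physician": "A", "pharmacy": "P1"},
        {"patient": 1, "drug_class": "opioid", "date": date(2012, 3, 5),
         "tablets": 30, "physician": "B", "pharmacy": "P2"},
    ]

    def potentially_inappropriate(rx, earlier):
        """Flag a prescription dispensed within 7 days of an earlier one, for
        at least 30 tablets of a drug in the same class, but originating from
        a different physician and a different pharmacy."""
        return any(
            e["patient"] == rx["patient"]
            and e["drug_class"] == rx["drug_class"]
            and 0 < (rx["date"] - e["date"]).days <= 7
            and rx["tablets"] >= 30
            and e["physician"] != rx["physician"]
            and e["pharmacy"] != rx["pharmacy"]
            for e in earlier
        )

    flags = [potentially_inappropriate(rx, rxs[:i]) for i, rx in enumerate(rxs)]
    print(flags)  # the first fill is unflagged; the second meets every criterion
    ```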

  11. Instant Replay: Investigating statistical Analysis in Sports

    OpenAIRE

    Sidhu, Gagan

    2011-01-01

    Technology has had an unquestionable impact on the way people watch sports. Along with this technological evolution has come a higher standard to ensure a good viewing experience for the casual sports fan. It can be argued that the pervasiveness of statistical analysis in sports serves to satiate the fan's desire for detailed sports statistics. The goal of statistical analysis in sports is a simple one: to eliminate subjective analysis. In this paper, we review previous work that attempts to anal...

  12. A Statistical Analysis of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Stephen Chan

    2017-05-01

    Full Text Available We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal; however, no single distribution fits well jointly to all the cryptocurrencies analysed. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian distribution, generalized t distribution, and Laplace distribution give good fits. The results are important for investment and risk management purposes.
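
    Fitting candidate parametric distributions by maximum likelihood and comparing them, as described above, can be sketched with scipy. The data here are simulated heavy-tailed returns, not actual cryptocurrency exchange rates, and only two candidate families are compared (the generalized hyperbolic and normal inverse Gaussian fits from the paper are omitted):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Heavy-tailed stand-in for a return series.
    x = stats.t.rvs(df=4, size=1500, random_state=rng) * 0.05

    aic = {}
    for name, dist in [("normal", stats.norm), ("Student t", stats.t)]:
        params = dist.fit(x)                      # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(x, *params))
        aic[name] = 2 * len(params) - 2 * loglik  # Akaike information criterion

    best = min(aic, key=aic.get)
    print(best)
    ```

    For fat-tailed data the Student t fit dominates the normal fit by a wide AIC margin, mirroring the paper's finding that returns are clearly non-normal.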

  13. Morphological Analysis for Statistical Machine Translation

    National Research Council Canada - National Science Library

    Lee, Young-Suk

    2004-01-01

    We present a novel morphological analysis technique which induces a morphological and syntactic symmetry between two languages with highly asymmetrical morphological structures to improve statistical...

  14. Impact of legislation and a prescription monitoring program on the prevalence of potentially inappropriate prescriptions for monitored drugs in Ontario: a time series analysis

    Science.gov (United States)

    Gomes, Tara; Juurlink, David; Yao, Zhan; Camacho, Ximena; Paterson, J. Michael; Singh, Samantha; Dhalla, Irfan; Sproule, Beth; Mamdani, Muhammad

    2014-01-01

    Background The increased use of opioid analgesics, sedative hypnotics and stimulants, coupled with the associated risks of overdose, has raised concerns around the inappropriate prescribing of these monitored drugs. We assessed the impact of new legislation, the Narcotics Safety and Awareness Act, and a centralized Narcotics Monitoring System (implemented November 2011 and May 2012, respectively), on the dispensing of prescriptions suggestive of misuse. Methods We conducted a time series analysis of publicly funded prescriptions for opioids, benzodiazepines and stimulants dispensed monthly in Ontario from January 2007 to May 2013, based on information in the Ontario Public Drug Benefit Database. In the primary analysis, a prescription was deemed potentially inappropriate if it was dispensed within 7 days of an earlier prescription and was for at least 30 tablets of a drug in the same class as the earlier prescription, but originated from a different physician and a different pharmacy. Results After enactment of the new legislation, the prevalence of potentially inappropriate opioid prescriptions decreased by 12.5% in 6 months (from 1.6% in October 2011 to 1.4% in April 2012; p = 0.01). No further significant change was observed after the introduction of the narcotic monitoring system (p = 0.8). By May 2013, the prevalence had dropped to 1.0%. Inappropriate benzodiazepine prescribing was significantly influenced by both the legislation and the monitoring system (p = 0.05), which together reduced potentially inappropriate prescribing by 50.0% between October 2011 and May 2013 (from 0.4% to 0.2%). The prevalence of potentially inappropriate prescribing of stimulants was significantly influenced by the introduction of the monitoring system in May 2012, falling from 0.7% in April 2012 to 0.3% in May 2013 (p = 0.02). Interpretation For a select group of drugs prone to misuse and diversion, legislation and a prescription monitoring program reduced the prevalence of prescriptions suggestive of misuse. This suggests that regulatory interventions can promote appropriate prescribing which could potentially be applied to other jurisdictions and drugs of concern. PMID:25485251

  15. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  16. STATISTICAL ANALYSIS OF MONETARY POLICY INDICATORS VARIABILITY

    Directory of Open Access Journals (Sweden)

    ANAMARIA POPESCU

    2016-10-01

    Full Text Available This paper attempts to characterize, through statistical indicators, the statistical data we have available. The purpose of this paper is to present primary and secondary, simple and synthetic statistical indicators that are frequently used for the statistical characterization of statistical series. We can thus analyze central tendency, data variability, and the form and concentration of distributions, using the analytical tools in Microsoft Excel that enable automatic calculation of descriptive statistics via the Data Analysis option in the Tools menu. The links that exist between statistical variables can be studied using two techniques: correlation and regression. From the analysis of monetary policy in the period 2003 - 2014 and the information provided by the website of the National Bank of Romania (BNR), the financial data series appear to show a certain tendency towards eccentricity and asymmetry.

  17. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomena will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  18. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro

  19. Collecting operational event data for statistical analysis

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-09-01

    This report gives guidance for collecting operational data to be used for statistical analysis, especially analysis of event counts. It discusses how to define the purpose of the study, the unit (system, component, etc.) to be studied, events to be counted, and demand or exposure time. Examples are given of classification systems for events in the data sources. A checklist summarizes the essential steps in data collection for statistical analysis

  20. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. It differs from other systems for reproducible analysis in several ways. The two main differences are: (1) Several statistics programs can be used in the same document. (2) Documents can be prepared using OpenOffice or ...

  1. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis. Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly-regarded ‘Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  2. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  3. Statistical distribution analysis of rubber fatigue data

    Science.gov (United States)

    DeRudder, J. L.

    1981-10-01

    Average rubber fatigue resistance has previously been related to such factors as elastomer type, cure system, cure temperature, and stress history. This paper extends this treatment to a full statistical analysis of rubber fatigue data. Analyses of laboratory fatigue data are used to predict service life. Particular emphasis is given to the prediction of early tire splice failures, and to adaptations of statistical fatigue analysis for the particular service conditions of the rubber industry.

  4. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biannual meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  5. Classification, (big) data analysis and statistical learning

    CERN Document Server

    Conversano, Claudio; Vichi, Maurizio

    2018-01-01

    This edited book focuses on the latest developments in classification, statistical learning, data analysis and related areas of data science, including statistical analysis of large datasets, big data analytics, time series clustering, integration of data from different sources, as well as social networks. It covers both methodological aspects as well as applications to a wide range of areas such as economics, marketing, education, social sciences, medicine, environmental sciences and the pharmaceutical industry. In addition, it describes the basic features of the software behind the data analysis results, and provides links to the corresponding codes and data sets where necessary. This book is intended for researchers and practitioners who are interested in the latest developments and applications in the field. The peer-reviewed contributions were presented at the 10th Scientific Meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in Santa Margherita di Pul...

  6. Rweb:Web-based Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Jeff Banfield

    1999-03-01

    Full Text Available Rweb is a freely accessible statistical analysis environment that is delivered through the World Wide Web (WWW). It is based on R, a well known statistical analysis package. The only requirement to run the basic Rweb interface is a WWW browser that supports forms. If you want graphical output you must, of course, have a browser that supports graphics. The interface provides access to WWW accessible data sets, so you may run Rweb on your own data. Rweb can provide a four window statistical computing environment (code input, text output, graphical output, and error information) through browsers that support Javascript. There is also a set of point and click modules under development for use in introductory statistics courses.

  7. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2013-01-01

    Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional datasets and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...

  8. Semiclassical analysis, Witten Laplacians, and statistical mechanics

    CERN Document Server

    Helffer, Bernard

    2002-01-01

    This important book explains how the technique of Witten Laplacians may be useful in statistical mechanics. It considers the problem of analyzing the decay of correlations, after presenting its origin in statistical mechanics. In addition, it compares the Witten Laplacian approach with other techniques, such as the transfer matrix approach and its semiclassical analysis. The author concludes by providing a complete proof of the uniform Log-Sobolev inequality. Contents: Witten Laplacians Approach; Problems in Statistical Mechanics with Discrete Spins; Laplace Integrals and Transfer Operators; S

  9. A statistical approach to plasma profile analysis

    International Nuclear Information System (INIS)

    Kardaun, O.J.W.F.; McCarthy, P.J.; Lackner, K.; Riedel, K.S.

    1990-05-01

    A general statistical approach to the parameterisation and analysis of tokamak profiles is presented. The modelling of the profile dependence on both the radius and the plasma parameters is discussed, and pertinent, classical as well as robust, methods of estimation are reviewed. Special attention is given to statistical tests for discriminating between the various models, and to the construction of confidence intervals for the parameterised profiles and the associated global quantities. The statistical approach is shown to provide a rigorous approach to the empirical testing of plasma profile invariance. (orig.)

  10. Foundation of statistical energy analysis in vibroacoustics

    CERN Document Server

    Le Bot, A

    2015-01-01

    This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.

  11. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to lesser known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper, Tiku. Thanks to the component-based design and the usage of the standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. This Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit, and the code validation.
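
    Several of the goodness-of-fit tests named here (Kolmogorov-Smirnov, Anderson-Darling) are also available in common statistics libraries. The sketch below uses scipy rather than the Toolkit itself, which is a separate piece of software; the samples are synthetic:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    gauss = rng.normal(5.0, 2.0, size=500)
    expo = rng.exponential(1.0, size=500)

    # Kolmogorov-Smirnov: the Gaussian sample against a mis-specified N(0, 1).
    ks_bad = stats.kstest(gauss, "norm", args=(0.0, 1.0))

    # Anderson-Darling normality test on exponential data; scipy reports the
    # statistic together with critical values rather than a p-value.
    ad = stats.anderson(expo, dist="norm")

    print(ks_bad.pvalue, ad.statistic, ad.critical_values)
    ```

    Both tests reject their null here: the KS p-value is essentially zero, and the Anderson-Darling statistic far exceeds the 5% critical value (`critical_values[2]`), illustrating the sensitivity difference the abstract alludes to.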

  12. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  13. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data and the lower tail of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. The results show that the 2-parameter Weibull distribution gives the best fits to the data available, especially if tail fits are used, whereas the Log Normal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors
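
    The distribution fitting described above can be sketched with maximum-likelihood fits in scipy. The strengths below are simulated from a known 2-parameter Weibull, not the actual timber data, so the recovered shape parameter should land near its true value of 4:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Simulated bending strengths (MPa) from a 2-parameter Weibull(shape=4, scale=40).
    strengths = stats.weibull_min.rvs(c=4.0, scale=40.0, size=400, random_state=rng)

    # Maximum-likelihood fits with the location fixed at zero (2-parameter forms).
    shape, _, scale = stats.weibull_min.fit(strengths, floc=0)
    ln_params = stats.lognorm.fit(strengths, floc=0)

    # Kolmogorov-Smirnov distance as a rough goodness-of-fit summary.
    d_weib, _ = stats.kstest(strengths, stats.weibull_min.cdf, args=(shape, 0, scale))
    d_lnorm, _ = stats.kstest(strengths, stats.lognorm.cdf, args=ln_params)
    print(f"shape={shape:.2f} scale={scale:.1f} "
          f"KS(weibull)={d_weib:.3f} KS(lognorm)={d_lnorm:.3f}")
    ```

    Fixing `floc=0` is what makes these the 2-parameter forms the abstract compares; dropping it would fit the 3-parameter variants instead.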

  14. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2017-01-01

    The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked, to improve the readabili...

  15. The fuzzy approach to statistical analysis

    NARCIS (Netherlands)

    Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.

    2006-01-01

    For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established with different purposes, namely: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;

  16. Vapor Pressure Data Analysis and Statistics

    Science.gov (United States)

    2016-12-01

    Subject terms: vapor pressure; Antoine equation; statistical analysis; Clausius–Clapeyron equation; standard deviation; volatility; enthalpy of volatilization. The report tabulates Antoine constants, standard deviations, and vapor pressures for 1-tetradecanol and DEM.

  17. Plasma data analysis using statistical analysis system

    International Nuclear Information System (INIS)

    Yoshida, Z.; Iwata, Y.; Fukuda, Y.; Inoue, N.

    1987-01-01

    Multivariate factor analysis has been applied to a plasma data base of REPUTE-1. The characteristics of the reverse field pinch plasma in REPUTE-1 are shown to be explained by four independent parameters which are described in the report. The well known scaling laws F_χ ∝ I_p, T_e ∝ I_p, and τ_E ∝ N_e are also confirmed. 4 refs., 8 figs., 1 tab.

  18. Selected papers on analysis, probability, and statistics

    CERN Document Server

    Nomizu, Katsumi

    1994-01-01

    This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.

  19. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  20. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  1. Robust statistics and geochemical data analysis

    International Nuclear Information System (INIS)

    Di, Z.

    1987-01-01

    Advantages of robust procedures over ordinary least-squares procedures in geochemical data analysis are demonstrated using NURE data from the Hot Springs Quadrangle, South Dakota, USA. Robust principal components analysis with 5% multivariate trimming successfully guarded the analysis against perturbations by outliers and increased the number of interpretable factors. Regression with SINE estimates significantly increased the goodness-of-fit of the regression and improved the correspondence of delineated anomalies with known uranium prospects. Because of the ubiquitous existence of outliers in geochemical data, robust statistical procedures are suggested as routine procedures to replace ordinary least-squares procedures.
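
    The point about robust procedures resisting outliers can be illustrated with a simple regression comparison. This sketch uses the Theil-Sen estimator from scipy as a generic robust slope estimate (not the paper's SINE estimator or its trimmed PCA) on synthetic data:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=50)
    y[45:] += 30.0  # a few gross outliers, as are ubiquitous in geochemical data

    ols_slope = stats.linregress(x, y).slope    # ordinary least squares
    robust_slope = stats.theilslopes(y, x)[0]   # robust Theil-Sen estimate

    print(f"OLS slope={ols_slope:.2f}, Theil-Sen slope={robust_slope:.2f} (true 2.0)")
    ```

    Five contaminated points out of fifty are enough to pull the OLS slope well away from 2.0, while the median-of-pairwise-slopes estimator stays close to it.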

  2. Multivariate analysis: A statistical approach for computations

    Science.gov (United States)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, and, more recently, in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.

  3. Statistical Methods for Conditional Survival Analysis.

    Science.gov (United States)

    Jung, Sin-Ho; Lee, Ho Yun; Chow, Shein-Chung

    2017-11-29

    We investigate the survival distribution of the patients who have survived over a certain time period. This is called a conditional survival distribution. In this paper, we show that one-sample estimation, two-sample comparison and regression analysis of conditional survival distributions can be conducted using the regular methods for unconditional survival distributions that are provided by the standard statistical software, such as SAS and SPSS. We conduct extensive simulations to evaluate the finite sample property of these conditional survival analysis methods. We illustrate these methods with real clinical data.
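
    Conditional survival is just the unconditional survival curve renormalized at the conditioning time: S(t | t0) = S(t) / S(t0). A self-contained Kaplan-Meier sketch on toy data (not from the paper, which uses standard software such as SAS and SPSS) illustrates this:

    ```python
    def kaplan_meier(times, events):
        """Kaplan-Meier estimate S(t); events: 1 = event observed, 0 = censored."""
        pairs = sorted(zip(times, events))
        n_at_risk = len(pairs)
        surv, s, i = {}, 1.0, 0
        while i < len(pairs):
            t = pairs[i][0]
            ties = [e for tt, e in pairs if tt == t]
            deaths = sum(ties)
            if deaths:
                s *= 1.0 - deaths / n_at_risk
                surv[t] = s
            n_at_risk -= len(ties)
            i += len(ties)
        return surv

    def survival_at(surv, t):
        """Step-function lookup of S(t) from the event-time table."""
        s = 1.0
        for ti in sorted(surv):
            if ti <= t:
                s = surv[ti]
        return s

    # Toy follow-up data in months.
    times  = [2, 3, 3, 5, 6, 8, 9, 10, 12, 14]
    events = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
    surv = kaplan_meier(times, events)

    # P(survive to 12 months | alive at 6 months) = S(12) / S(6)
    cond = survival_at(surv, 12) / survival_at(surv, 6)
    print(round(cond, 3))
    ```

    Because the ratio cancels every survival factor before the conditioning time, the conditional estimate depends only on events after 6 months, which is exactly why the paper can reuse unconditional methods on the surviving subgroup.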

  4. Statistical analysis of brake squeal noise

    Science.gov (United States)

    Oberst, S.; Lai, J. C. S.

    2011-06-01

    Despite substantial research efforts applied to the prediction of brake squeal noise since the early 20th century, the mechanisms behind its generation are still not fully understood. Squealing brakes are of significant concern to the automobile industry, mainly because of the costs associated with warranty claims. In order to remedy the problems inherent in designing quieter brakes and, therefore, to understand the mechanisms, a design of experiments study, using a noise dynamometer, was performed by a brake system manufacturer to determine the influence of geometrical parameters (namely, the number and location of slots) of brake pads on brake squeal noise. The experimental results were evaluated with a noise index and ranked for warm and cold brake stops. These data are analysed here using statistical descriptors based on population distributions, and a correlation analysis, to gain greater insight into the functional dependency between the time-averaged friction coefficient as the input and the peak sound pressure level data as the output quantity. The correlation analysis between the time-averaged friction coefficient and peak sound pressure data is performed by applying a semblance analysis and a joint recurrence quantification analysis. Linear measures are compared with complexity measures (nonlinear) based on statistics from the underlying joint recurrence plots. Results show that linear measures cannot be used to rank the noise performance of the four test pad configurations. On the other hand, the ranking of the noise performance of the test pad configurations based on the noise index agrees with that based on nonlinear measures: the higher the nonlinearity between the time-averaged friction coefficient and peak sound pressure, the worse the squeal. These results highlight the nonlinear character of brake squeal and indicate the potential of using nonlinear statistical analysis tools to analyse disc brake squeal.
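
    The paper's contrast between linear and nonlinear dependence measures can be shown in miniature (this is not the semblance or joint recurrence analysis used in the study): a Pearson correlation can sit near zero even when the output depends deterministically on the input, here through a hypothetical non-monotone, quadratic relation between friction coefficient and peak sound pressure level:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    mu = rng.uniform(0.3, 0.6, size=200)   # time-averaged friction coefficient
    spl = 85.0 + 400.0 * (mu - 0.45) ** 2 + rng.normal(0.0, 0.2, size=200)  # dB

    r_linear = stats.pearsonr(mu, spl)[0]                   # linear measure
    r_quadratic = stats.pearsonr((mu - 0.45) ** 2, spl)[0]  # matches true form

    print(f"linear r={r_linear:.2f}, quadratic-feature r={r_quadratic:.2f}")
    ```

    The linear coefficient hovers near zero while the feature matched to the true functional form correlates almost perfectly, which is the sense in which linear measures can fail to rank strongly nonlinear squeal behavior.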

  5. The CALORIES trial: statistical analysis plan.

    Science.gov (United States)

    Harvey, Sheila E; Parrott, Francesca; Harrison, David A; Mythen, Michael; Rowan, Kathryn M

    2014-12-01

    The CALORIES trial is a pragmatic, open, multicentre, randomised controlled trial (RCT) of the clinical effectiveness and cost-effectiveness of early nutritional support via the parenteral route compared with early nutritional support via the enteral route in unplanned admissions to adult general critical care units (CCUs) in the United Kingdom. The trial derives from the need for a large, pragmatic RCT to determine the optimal route of delivery for early nutritional support in the critically ill. To describe the proposed statistical analyses for the evaluation of the clinical effectiveness in the CALORIES trial. With the primary and secondary outcomes defined precisely and the approach to safety monitoring and data collection summarised, the planned statistical analyses, including prespecified subgroups and secondary analyses, were developed and are described. The primary outcome is all-cause mortality at 30 days. The primary analysis will be reported as a relative risk and absolute risk reduction and tested with the Fisher exact test. Prespecified subgroup analyses will be based on age, degree of malnutrition, acute severity of illness, mechanical ventilation at admission to the CCU, presence of cancer and time from CCU admission to commencement of early nutritional support. Secondary analyses include adjustment for baseline covariates. In keeping with best trial practice, we have developed, described and published a statistical analysis plan for the CALORIES trial and are placing it in the public domain before inspecting data from the trial.

  6. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  7. Prevalence and Predictors of Inappropriate Medications Prescribing ...

    African Journals Online (AJOL)

Data analysis involved use of World Health Organization (WHO) prescribing indicators, the updated 2002 Beers criteria and the DRUG-REAX® system software package of the MICROMEDEX® Healthcare Series to assess the prescribing pattern, identify potentially inappropriate medications and potential drug-drug interactions, ...

  8. Syndrome of inappropriate antidiuretic hormone secretion (SIADH) or hyponatraemia associated with valproic acid: four case reports from the Netherlands and a case/non-case analysis of VigiBase

    NARCIS (Netherlands)

    Beers, Erna; van Puijenbroek, Eugène P; Bartelink, Imke H; van der Linden, Carolien M J; Jansen, Paul A F

    The Netherlands Pharmacovigilance Centre Lareb received four cases of severe symptomatic hyponatraemia or syndrome of inappropriate antidiuretic hormone secretion (SIADH) in association with valproic acid use, in which a causal relationship was suspected. This study describes these cases and gives

  9. On the Statistical Validation of Technical Analysis

    Directory of Open Access Journals (Sweden)

    Rosane Riera Freire

    2007-06-01

    Full Text Available Technical analysis, or charting, aims at visually identifying geometrical patterns in price charts in order to anticipate price "trends". In this paper we revisit the issue of technical analysis validation, which has been tackled in the literature without taking care of (i) the presence of heterogeneity and (ii) statistical dependence in the analyzed data - various agglutinated return time series from distinct financial securities. The main purpose here is to address the first cited problem by suggesting a validation methodology that also "homogenizes" the securities according to the finite-dimensional probability distribution of their return series. The general steps go through the identification of the stochastic processes for the securities returns, the clustering of similar securities and, finally, the identification of the presence, or absence, of informational content obtained from those price patterns. We illustrate the proposed methodology with a real-data exercise including several securities of the global market. Our investigation shows that there is statistically significant informational content in two out of three common patterns usually found through technical analysis, namely: triangle, rectangle, and head and shoulders.

  10. Statistical trend analysis methods for temporal phenomena

    International Nuclear Information System (INIS)

    Lehtinen, E.; Pulkkinen, U.; Poern, K.

    1997-04-01

    We consider point events occurring in a random way in time. In many applications the pattern of occurrence is of intrinsic interest as indicating a trend or some other systematic feature in the rate of occurrence. The purpose of this report is to survey briefly different statistical trend analysis methods and illustrate their applicability to temporal phenomena in particular. The trend testing of point events is usually seen as the testing of hypotheses concerning the intensity of the occurrence of events. When the intensity function is parametrized, the testing of trend is a typical parametric testing problem. In industrial applications the operational experience generally does not suggest any specific model or method in advance. Therefore, particularly if the Poisson process assumption is questionable, it is desirable to apply tests that are valid for a wide variety of possible processes. The alternative approach for trend testing is to use some non-parametric procedure. In this report we have presented four non-parametric tests: the Cox-Stuart test, the Wilcoxon signed ranks test, the Mann test, and the exponential ordered scores test. In addition to the classical parametric and non-parametric approaches we have also considered Bayesian trend analysis. First we discuss a Bayesian model based on a power-law intensity model. The Bayesian statistical inferences are based on the analysis of the posterior distribution of the trend parameters, and the probability of trend is immediately seen from these distributions. We applied some of the methods discussed in an example case. It should be noted that this report is a feasibility study rather than a scientific evaluation of statistical methods, and the examples can only be seen as demonstrations of the methods
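The non-parametric procedures surveyed in this report are simple to implement directly. As an illustration, here is a minimal sketch of the Cox-Stuart trend test in plain Python; the function name and the list of yearly event counts are hypothetical, not taken from the report:

```python
from math import comb

def cox_stuart_test(series):
    """Cox-Stuart test for a monotonic trend in a time series.

    Pairs each observation with the one half a series later and counts
    the signs of the differences; under H0 (no trend) the number of
    positive signs is Binomial(m, 1/2).  Returns (n_pos, m, p_value).
    """
    n = len(series)
    c = n // 2
    # drop the middle observation when n is odd
    first, second = series[:c], series[n - c:]
    diffs = [b - a for a, b in zip(first, second) if b != a]  # ties discarded
    m = len(diffs)
    n_pos = sum(d > 0 for d in diffs)
    # two-sided p-value from the Binomial(m, 1/2) tail
    k = max(n_pos, m - n_pos)
    p = sum(comb(m, i) for i in range(k, m + 1)) / 2 ** m * 2
    return n_pos, m, min(p, 1.0)

# a clearly increasing series of hypothetical yearly event counts
print(cox_stuart_test([1, 2, 2, 4, 5, 7, 8, 9]))  # → (4, 4, 0.125)
```

A small p-value indicates a monotonic trend; the Wilcoxon and Mann tests mentioned above refine the same idea by also using the ranks of the observations rather than just the signs.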

  11. STATISTICS, Program System for Statistical Analysis of Experimental Data

    International Nuclear Information System (INIS)

    Helmreich, F.

    1991-01-01

    1 - Description of problem or function: The package is composed of 83 routines, the most important of which are the following: BINDTR: binomial distribution; HYPDTR: hypergeometric distribution; POIDTR: Poisson distribution; GAMDTR: gamma distribution; BETADTR: Beta-1 and Beta-2 distributions; NORDTR: normal distribution; CHIDTR: chi-square distribution; STUDTR: distribution of Student's t; FISDTR: distribution of F; EXPDTR: exponential distribution; WEIDTR: Weibull distribution; FRAKTIL: calculation of the fractiles of the normal, chi-square, Student's, and F distributions; VARVGL: test for equality of variance for several sample observations; ANPAST: Kolmogorov-Smirnov test and chi-square test of goodness of fit; MULIRE: multiple linear regression analysis for a dependent variable and a set of independent variables; STPRG: performs a stepwise multiple linear regression analysis for a dependent variable and a set of independent variables. At each step, the variable entered into the regression equation is the one which has the greatest amount of variance between it and the dependent variable. Any independent variable can be forced into or deleted from the regression equation, irrespective of its contribution to the equation. LTEST: tests the hypothesis of linearity of the data. SPRANK: calculates the Spearman rank correlation coefficient. 2 - Method of solution: VARVGL: Bartlett's test, Cochran's test, and Hartley's test are performed in the program. MULIRE: The Gauss-Jordan method is used in the solution of the normal equations. STPRG: The abbreviated Doolittle method is used to (1) determine variables to enter into the regression, and (2) complete regression coefficient calculation. 3 - Restrictions on the complexity of the problem: VARVGL: Hartley's test is performed only if the sample observations are all of the same size
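The kind of goodness-of-fit check that the ANPAST routine provides can be sketched in a few lines. The following illustrative Python (not part of the package; normal-distribution case only, with a made-up sample) computes the one-sample Kolmogorov-Smirnov statistic:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of the normal distribution, via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def ks_statistic(sample, cdf=normal_cdf):
    """One-sample Kolmogorov-Smirnov statistic D = sup |F_n(x) - F(x)|."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = cdf(x)
        # the empirical CDF jumps from (i-1)/n to i/n at x
        d = max(d, abs(i / n - f), abs(f - (i - 1) / n))
    return d

# small symmetric sample, roughly standard normal
print(round(ks_statistic([-1.5, -0.5, 0.0, 0.5, 1.5]), 3))  # → 0.133
```

The statistic is then compared against a critical value depending on n and the chosen significance level, which is what a table-driven routine like ANPAST would supply.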

  12. Statistical analysis of solar proton events

    Directory of Open Access Journals (Sweden)

    V. Kurt

    2004-06-01

    Full Text Available A new catalogue of 253 solar proton events (SPEs) with energy >10 MeV and peak intensity >10 protons/(cm² s sr) (pfu) at the Earth's orbit for three complete 11-year solar cycles (1970-2002) is given. A statistical analysis of this data set of SPEs and their associated flares that occurred during this time period is presented. It is outlined that 231 of these proton events are flare-related and only 22 of them are not associated with Hα flares. It is also noteworthy that 42 of these events are registered as Ground Level Enhancements (GLEs) in neutron monitors. The longitudinal distribution of the associated flares shows that a great number of these events are connected with western flares. This analysis enables one to understand the long-term dependence of the SPEs and the related flare characteristics on the solar cycle, which is useful for space weather prediction.

  13. STATISTICAL ANALYSIS OF PUBLIC ADMINISTRATION PAY

    Directory of Open Access Journals (Sweden)

    Elena I. Dobrolyubova

    2014-01-01

    Full Text Available This article reviews the progress achieved in improving the pay system in public administration and outlines the key issues to be resolved. The cross-country comparisons presented in the article suggest high differentiation in pay levels depending on position held. In fact, this differentiation in Russia exceeds that in OECD countries almost twofold. The analysis of the internal pay structure demonstrates that the low share of base pay leads to the perverse nature of the 'stimulation elements' of the pay system, which in fact appear to be used mostly for compensation purposes. The analysis of regional statistical data demonstrates that, despite high differentiation among regions in terms of their revenue potential, average public official pay is strongly correlated with the average regional pay.

  14. Inappropriate prescribing in the elderly.

    LENUS (Irish Health Repository)

    Gallagher, P

    2012-02-03

    BACKGROUND AND OBJECTIVE: Drug therapy is necessary to treat acute illness, maintain current health and prevent further decline. However, optimizing drug therapy for older patients is challenging and sometimes, drug therapy can do more harm than good. Drug utilization review tools can highlight instances of potentially inappropriate prescribing to those involved in elderly pharmacotherapy, i.e. doctors, nurses and pharmacists. We aim to provide a review of the literature on potentially inappropriate prescribing in the elderly and also to review the explicit criteria that have been designed to detect potentially inappropriate prescribing in the elderly. METHODS: We performed an electronic search of the PUBMED database for articles published between 1991 and 2006 and a manual search through major journals for articles referenced in those located through PUBMED. Search terms were elderly, inappropriate prescribing, prescriptions, prevalence, Beers criteria, health outcomes and Europe. RESULTS AND DISCUSSION: Prescription of potentially inappropriate medications to older people is highly prevalent in the United States and Europe, ranging from 12% in community-dwelling elderly to 40% in nursing home residents. Inappropriate prescribing is associated with adverse drug events. Limited data exists on health outcomes from use of inappropriate medications. There are no prospective randomized controlled studies that test the tangible clinical benefit to patients of using drug utilization review tools. Existing drug utilization review tools have been designed on the basis of North American and Canadian drug formularies and may not be appropriate for use in European countries because of the differences in national drug formularies and prescribing attitudes. CONCLUSION: Given the high prevalence of inappropriate prescribing despite the widespread use of drug-utilization review tools, prospective randomized controlled trials are necessary to identify useful interventions. Drug

  15. Statistical analysis of tourism destination competitiveness

    Directory of Open Access Journals (Sweden)

    Attilio Gardini

    2013-05-01

    Full Text Available The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds, and we develop a statistical test of the model on sample data based on Italian tourist destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October-December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to its share of inclusion, because the share of inclusion is determined by the brand image, while the selection of the effective holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points to a risk of market-share reduction for the Italian tourism system in the global market. We also find a snowball effect which helps the most popular destinations, mainly in the northern Italian regions.

  16. [Inappropriate test methods in allergy].

    Science.gov (United States)

    Kleine-Tebbe, J; Herold, D A

    2010-11-01

    Inappropriate test methods are increasingly utilized to diagnose allergy. They fall into two categories: I. Tests with an obscure theoretical basis, missing validity and lacking reproducibility, such as bioresonance, electroacupuncture, applied kinesiology and the ALCAT test. These methods lack both the technical and clinical validation needed to justify their use. II. Tests with real data but misleading interpretation: detection of IgG or IgG4 antibodies or lymphocyte proliferation tests to foods do not allow healthy subjects to be separated from diseased ones, whether in case of food intolerance, allergy or other diagnoses. The absence of diagnostic specificity induces many false positive findings in healthy subjects. As a result, unjustified diets might limit quality of life and lead to malnutrition. Proliferation of lymphocytes in response to foods can show elevated rates in patients with allergies. These values do not allow individual diagnosis of hypersensitivity because of their broad variation. Successful internet marketing, infiltration of academic programs and superficial reporting by the media promote the popularity of unqualified diagnostic tests, in allergy as elsewhere. Therefore, critical observation and prompt analysis of, and clear comments on, unqualified methods by the scientific medical societies are more important than ever.

  17. A statistical analysis of electrical cerebral activity

    International Nuclear Information System (INIS)

    Bassant, Marie-Helene

    1971-01-01

    The aim of this work was to study the statistical properties of the amplitude of the electroencephalographic signal. The experimental method is described (implantation of electrodes, acquisition and treatment of data). The program of the mathematical analysis is given (calculation of probability density functions, study of stationarity) and the validity of the tests discussed. The results concerned ten rabbits. Strips of EEG were sampled for 40 s at very short intervals (500 μs). The probability density functions established for different brain structures (especially the dorsal hippocampus) and areas were compared during sleep, arousal and visual stimulus. Using a χ² test, it was found that the Gaussian distribution assumption was rejected in 96.7 per cent of the cases. For a given physiological state, there was no mathematical reason to reject the assumption of stationarity (in 96 per cent of the cases). (author) [fr
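A χ² test of the Gaussian assumption of the kind applied to these amplitude distributions can be sketched as follows. This is plain illustrative Python, not the study's own program; the equiprobable-bin scheme and the toy sample are assumptions for the example:

```python
from math import erf, sqrt

def chi2_normality_stat(sample, n_bins=4):
    """Chi-square goodness-of-fit statistic for normality, using bins
    that are equiprobable under a normal fitted to the sample."""
    n = len(sample)
    mu = sum(sample) / n
    sd = sqrt(sum((x - mu) ** 2 for x in sample) / (n - 1))
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sd * sqrt(2.0))))
    observed = [0] * n_bins
    for x in sample:
        # mapping through the fitted CDF makes the bins equiprobable
        observed[min(int(cdf(x) * n_bins), n_bins - 1)] += 1
    expected = n / n_bins
    return sum((o - expected) ** 2 / expected for o in observed)

# symmetric toy sample: one observation falls in each quartile bin
print(chi2_normality_stat([-1.0, -0.5, 0.5, 1.0]))  # → 0.0
```

A large statistic relative to the χ² critical value (with degrees of freedom reduced for the two estimated parameters) leads to rejecting the Gaussian assumption, as happened in 96.7 per cent of the cases above.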

  18. SMACS, Probabilistic Seismic Analysis Chain with Statistics

    International Nuclear Information System (INIS)

    Johnson, J.J.; Maslenikov, O.R.; Tiong, L.W.; Mraz, M.J.; Bumpus, S.; Gerhard, M.A.

    1989-01-01

    1 - Description of program or function: The SMACS (Seismic Methodology Analysis Chain with Statistics) system of computer programs is one of the major computational tools of the U.S. NRC Seismic Safety Margins Research Program (SSMRP). SMACS is comprised of the core program SMAX, which performs the SSI response analyses, five pre-processing programs, and two post-processors. The pre-processing programs include: GLAY and CLAN, which generate the nominal impedance matrices and wave scattering vectors for surface-founded structures; INSSIN, which projects the dynamic properties of structures to the foundation in the form of modal participation factors and mass matrices; SAPPAC, which projects the dynamic and pseudo-static properties of multiply-supported piping systems to the support locations; and LNGEN, which can be used to generate the multiplication factors to be applied to the nominal soil, structural, and subsystem properties for each of the response calculations in accounting for random variations of these properties. The post-processors are: PRESTO, which performs statistical operations on the raw data from the response vectors that SMAX produces to calculate best-fit lognormal distributions for each response location, and CHANGO, which manipulates the data produced by PRESTO to produce other results of interest to the user. Also included is the computer program SAP4 (a modified version of the University of California, Berkeley SAPIV program), a general linear structural analysis program used for eigenvalue extractions and pseudo-static mode calculations of the models of major structures and subsystems. SAP4 is used to prepare input to the INSSIN and SAPPAC preprocessing programs. The GLAY and CLAN programs were originally developed by J.E. Luco (UCSD) and H.L. Wong (USC). 2 - Method of solution: SMACS performs repeated deterministic analyses, each analysis simulating an earthquake occurrence. Uncertainty is accounted for by performing many such analyses.

  19. POPI (Pediatrics: Omission of Prescriptions and Inappropriate prescriptions): development of a tool to identify inappropriate prescribing.

    Directory of Open Access Journals (Sweden)

    Sonia Prot-Labarthe

    Full Text Available INTRODUCTION: Rational prescribing for children is an issue for all countries and has been inadequately studied. Inappropriate prescriptions, including drug omissions, are one of the main causes of medication errors in this population. Our aim is to develop a screening tool to identify omissions and inappropriate prescriptions in pediatrics based on French and international guidelines. METHODS: A selection of diseases was included in the tool using data from social security and hospital statistics. A literature review was done to obtain criteria which could be included in the tool, called POPI. A 2-round Delphi consensus technique was used to establish the content validity of POPI; panelists were asked to rate their level of agreement with each proposition on a 9-point Likert scale and add suggestions if necessary. RESULTS: 108 explicit criteria (80 inappropriate prescriptions and 28 omissions) were obtained and submitted to a 16-member expert panel (8 pharmacists and 8 pediatricians, half hospital-based and half working in the community). Criteria were categorized according to the main physiological systems (gastroenterology, respiratory infections, pain, neurology, dermatology and miscellaneous). Each criterion was accompanied by a concise explanation as to why the practice is potentially inappropriate in pediatrics (including references). Two rounds of the Delphi process were completed via an online questionnaire. 104 of the 108 criteria submitted to experts were selected after the 2 Delphi rounds (79 inappropriate prescriptions and 25 omissions). DISCUSSION AND CONCLUSION: POPI is the first screening tool developed to detect inappropriate prescriptions and omissions in pediatrics based on explicit criteria. An inter-user reliability study is necessary before using the tool, and a prospective study to assess the effectiveness of POPI is also necessary.

  20. R: a statistical environment for hydrological analysis

    Science.gov (United States)

    Zambrano-Bigiarini, Mauricio; Bellin, Alberto

    2010-05-01

    The free software environment for statistical computing and graphics "R" has been developed and is maintained by statistical programmers, with the support of an increasing community of users with many different backgrounds, which allows access to both well-established and experimental techniques. Hydrological modelling practitioners spend large amounts of time pre- and post-processing data and results with traditional instruments. In this work "R" and some of its packages are presented as powerful tools to explore and extract patterns from raw information, to pre-process input data of hydrological models, and to post-process their results. In particular, examples are taken from analysing 30 years of daily data for a basin of 85,000 km², saving a large amount of time that could be better spent in doing analysis. In doing so, vectorial and raster GIS files were imported for carrying out spatial and geostatistical analysis. Thousands of raw text files with time series of precipitation, temperature and streamflow were summarized and organized. Gauging stations to be used in the modelling process are selected according to the number of days with information, and missing time series data are filled in using spatial interpolation. Time series on the gauging stations are summarized through daily, monthly and annual plots. Input files in dBase format are automatically created in a batch process. Results of a hydrological model are compared with observed values through plots and numerical goodness-of-fit indexes. Two packages specifically developed to assist hydrologists in the previous tasks are briefly presented.
At the end, we think the "R" environment would be a valuable tool to support undergraduate and graduate education in hydrology, because it is helpful to capture the main features of large amounts of data; it is a flexible and fully functional programming language, able to be interfaced to existing Fortran and C code and well suited to the ever growing demands
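One of the numerical goodness-of-fit indexes commonly used when comparing model results with observed streamflow is the Nash-Sutcliffe efficiency. The sketch below is plain illustrative Python with hypothetical flow values, not code from the packages described above:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    NSE = 1 means a perfect fit; NSE <= 0 means the model is no better
    than simply predicting the observed mean.
    """
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    sst = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / sst

obs = [10.0, 12.0, 30.0, 24.0, 14.0]  # hypothetical observed daily flows, m3/s
sim = [11.0, 13.0, 26.0, 22.0, 15.0]  # hypothetical model output, same days
print(round(nash_sutcliffe(obs, sim), 3))  # → 0.922
```

In practice one would compute such an index per gauging station and per calibration period, alongside the visual comparison plots mentioned in the abstract.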

  1. An R package for statistical provenance analysis

    Science.gov (United States)

    Vermeesch, Pieter; Resentini, Alberto; Garzanti, Eduardo

    2016-05-01

    This paper introduces provenance, a software package within the statistical programming environment R, which aims to facilitate the visualisation and interpretation of large amounts of sedimentary provenance data, including mineralogical, petrographic, chemical and isotopic provenance proxies, or any combination of these. provenance comprises functions to: (a) calculate the sample size required to achieve a given detection limit; (b) plot distributional data such as detrital zircon U-Pb age spectra as Cumulative Age Distributions (CADs) or adaptive Kernel Density Estimates (KDEs); (c) plot compositional data as pie charts or ternary diagrams; (d) correct the effects of hydraulic sorting on sandstone petrography and heavy mineral composition; (e) assess the settling equivalence of detrital minerals and grain-size dependence of sediment composition; (f) quantify the dissimilarity between distributional data using the Kolmogorov-Smirnov and Sircombe-Hazelton distances, or between compositional data using the Aitchison and Bray-Curtis distances; (g) interpret multi-sample datasets by means of (classical and nonmetric) Multidimensional Scaling (MDS) and Principal Component Analysis (PCA); and (h) simplify the interpretation of multi-method datasets by means of Generalised Procrustes Analysis (GPA) and 3-way MDS. All these tools can be accessed through an intuitive query-based user interface, which does not require knowledge of the R programming language. provenance is free software released under the GPL-2 licence and will be further expanded based on user feedback.
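The Kolmogorov-Smirnov dissimilarity used for distributional data is simply the largest vertical gap between two empirical CDFs. A minimal two-sample sketch (plain Python rather than the package's R code, with made-up detrital ages in Ma) is:

```python
import bisect

def ks_dissimilarity(a, b):
    """Two-sample Kolmogorov-Smirnov distance: the largest vertical
    gap between the empirical CDFs of the two samples."""
    sa, sb = sorted(a), sorted(b)
    na, nb = len(sa), len(sb)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        fa = bisect.bisect_right(sa, x) / na  # empirical CDF of a at x
        fb = bisect.bisect_right(sb, x) / nb  # empirical CDF of b at x
        d = max(d, abs(fa - fb))
    return d

# two hypothetical detrital zircon U-Pb age samples (Ma)
sample1 = [250, 300, 310, 1020, 1050]
sample2 = [240, 260, 1000, 1040, 1060, 1080]
print(round(ks_dissimilarity(sample1, sample2), 3))  # → 0.333
```

Computing this distance for every pair of samples yields the dissimilarity matrix that MDS then projects into a two-dimensional map of samples.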

  2. Statistical analysis in MSW collection performance assessment.

    Science.gov (United States)

    Teixeira, Carlos Afonso; Avelino, Catarina; Ferreira, Fátima; Bentes, Isabel

    2014-09-01

    The increase of Municipal Solid Waste (MSW) generated over the last years forces waste managers pursuing more effective collection schemes, technically viable, environmentally effective and economically sustainable. The assessment of MSW services using performance indicators plays a crucial role for improving service quality. In this work, we focus on the relevance of regular system monitoring as a service assessment tool. In particular, we select and test a core-set of MSW collection performance indicators (effective collection distance, effective collection time and effective fuel consumption) that highlights collection system strengths and weaknesses and supports pro-active management decision-making and strategic planning. A statistical analysis was conducted with data collected in mixed collection system of Oporto Municipality, Portugal, during one year, a week per month. This analysis provides collection circuits' operational assessment and supports effective short-term municipality collection strategies at the level of, e.g., collection frequency and timetables, and type of containers. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Analysis of Inappropriate Medication Use in Older Adults Discharged From Hospitals Affiliated With Tehran University of Medical Sciences (TUMS Using the Beers Criteria in 2010

    Directory of Open Access Journals (Sweden)

    Leila Vali

    2011-10-01

    Full Text Available Objectives: Studies demonstrate that chronic diseases are more frequent among the elderly than other age groups. It is therefore reasonable to assume that more pharmaceuticals are consumed by this age group than by others, and that older patients are more prone to pharmaceutical side effects and complications due to such higher drug consumption rates. Changes in pharmacokinetics and pharmacodynamics, among others, are considered major causes of medication-related complications among the elderly. Another factor worth noting is the inappropriate choice of medications prescribed for such patients, who can benefit from the identification of such medications and better care in their prescription. These issues are among the well-known factors discussed in the recent relevant literature and may inflict significant harm on the health and well-being of the elderly population. Methods & Materials: For the purpose of the present study, 212 patients aged 60 years and over (mean age: 69.32 years) discharged from 4 (2 teaching and 2 non-teaching) general hospitals affiliated with TUMS were selected. The Beers Criteria was employed to assess inappropriate use of pharmaceuticals by the sample population. Results: Findings reveal that there was a significant relation between the level of income and the inappropriate use of medications among the sample population (P=0.041). The most frequent inappropriately used medications, in order of frequency, included alprazolam (16.66%), chlordiazepoxide (14.28%), fluoxetine (11.90%), and oxazepam (11.90%). The highest rate of drug interactions was observed for clopidogrel (29.4%). Benzodiazepines were the most frequent class of pharmaceuticals consumed by the patients (49.98%). There was no significant relationship between income rates and the amount of inappropriate drug use (P=0.041).
Conclusion: Inappropriate consumption of pharmaceuticals was relatively high among the study population, in comparison to similar

  4. Transit safety & security statistics & analysis 2002 annual report (formerly SAMIS)

    Science.gov (United States)

    2004-12-01

    The Transit Safety & Security Statistics & Analysis 2002 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  5. Transit safety & security statistics & analysis 2003 annual report (formerly SAMIS)

    Science.gov (United States)

    2005-12-01

    The Transit Safety & Security Statistics & Analysis 2003 Annual Report (formerly SAMIS) is a compilation and analysis of mass transit accident, casualty, and crime statistics reported under the Federal Transit Administration's (FTA's) National Tr...

  6. Statistical Analysis of Bus Networks in India.

    Science.gov (United States)

    Chatterjee, Atanu; Manohar, Manju; Ramadurai, Gitakrishnan

    2016-01-01

    In this paper, we model the bus networks of six major Indian cities as graphs in L-space and evaluate their various statistical properties. While airline and railway networks have been extensively studied, a comprehensive study on the structure and growth of bus networks is lacking. In India, where bus transport plays an important role in day-to-day commutation, it is of significant interest to analyze its topological structure and answer basic questions on its evolution, growth, robustness and resiliency. Although the common feature of the small-world property is observed, our analysis reveals a wide spectrum of network topologies arising due to significant variation in the degree-distribution patterns in the networks. We also observe that these networks, although robust and resilient to random attacks, are particularly degree-sensitive. Unlike real-world networks such as the Internet, WWW and airline networks, which are virtual, bus networks are physically constrained. Our findings therefore throw light on the evolution of such geographically constrained networks and will help us in designing more efficient bus networks in the future.
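The degree-distribution analysis behind the degree-sensitivity finding can be illustrated without a graph library. The sketch below is plain Python on a made-up toy L-space network (stops are nodes; consecutive stops on a route are linked), not data from the paper:

```python
from collections import Counter

def degree_distribution(edges):
    """Return {degree: node count} for an undirected graph given as edge pairs."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return dict(Counter(deg.values()))

# toy route network: stop "B" acts as a hub with degree 4
routes = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E"), ("E", "F"), ("B", "F")]
print(degree_distribution(routes))  # → {1: 2, 4: 1, 2: 3}
```

A heavy-tailed version of this distribution is what makes a network degree-sensitive: removing the few high-degree hubs (like "B" here) fragments it far faster than removing random nodes.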

  7. Developments in statistical analysis in quantitative genetics

    DEFF Research Database (Denmark)

    Sorensen, Daniel

    2009-01-01

    A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and ...

  8. Statistical network analysis for analyzing policy networks

    DEFF Research Database (Denmark)

    Robins, Garry; Lewis, Jenny; Wang, Peng

    2012-01-01

    …and policy network methodology is the development of statistical modeling approaches that can accommodate such dependent data. In this article, we review three network statistical methods commonly used in the current literature: quadratic assignment procedures, exponential random graph models (ERGMs) … has much to offer in analyzing the policy process…

  9. Surface Properties of TNOs: Preliminary Statistical Analysis

    Science.gov (United States)

    Antonieta Barucci, Maria; Fornasier, S.; Alvarez-Cantal, A.; de Bergh, C.; Merlin, F.; DeMeo, F.; Dumas, C.

    2009-09-01

    An overview of the surface properties based on the last results obtained during the Large Program performed at ESO-VLT (2007-2008) will be presented. Simultaneous high quality visible and near-infrared spectroscopy and photometry have been carried out on 40 objects with various dynamical properties, using FORS1 (V), ISAAC (J) and SINFONI (H+K bands) mounted respectively at UT2, UT1 and UT4 VLT-ESO telescopes (Cerro Paranal, Chile). For spectroscopy we computed the spectral slope for each object and searched for possible rotational inhomogeneities. A few objects show features in their visible spectra such as Eris, whose spectral bands are displaced with respect to pure methane-ice. We identify new faint absorption features on 10199 Chariklo and 42355 Typhon, possibly due to the presence of aqueous altered materials. The H+K band spectroscopy was performed with the new instrument SINFONI which is a 3D integral field spectrometer. While some objects show no diagnostic spectral bands, others reveal surface deposits of ices of H2O, CH3OH, CH4, and N2. To investigate the surface properties of these bodies, a radiative transfer model has been applied to interpret the entire 0.4-2.4 micron spectral region. The diversity of the spectra suggests that these objects represent a substantial range of bulk compositions. These different surface compositions can be diagnostic of original compositional diversity, interior source and/or different evolution with different physical processes affecting the surfaces. A statistical analysis is in progress to investigate the correlation of the TNOs’ surface properties with size and dynamical properties.

  10. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data (100%) and the lower tail (30%) of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. 8 different databases are analysed. The results show that the 2-parameter Weibull (and Normal) distributions give the best fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives poor fit and larger coefficients of variation, especially if tail fits are used.
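
    A minimal sketch of the 2-parameter Weibull maximum-likelihood step described above. The strength values are made up for illustration, and the tail-fit censoring and Least Square variant are not reproduced:

```python
import math

def weibull_mle(data, lo=0.05, hi=50.0, iters=100):
    """2-parameter Weibull maximum-likelihood fit: bisect the profile score
    equation for the shape k, then recover the scale in closed form."""
    logs = [math.log(x) for x in data]
    mean_log = sum(logs) / len(logs)

    def score(k):
        # Derivative of the profile log-likelihood with respect to k
        xk = [x ** k for x in data]
        return sum(a * b for a, b in zip(xk, logs)) / sum(xk) - 1.0 / k - mean_log

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    scale = (sum(x ** k for x in data) / len(data)) ** (1.0 / k)
    return k, scale

# Made-up "bending strength" sample in MPa (illustrative only)
sample = [22.1, 30.5, 27.3, 35.2, 24.8, 31.7, 28.9, 26.4, 33.0, 29.5]
k, lam = weibull_mle(sample)
```

    A tail fit in the spirit of the abstract would restrict the likelihood to the lowest 30% of the sorted observations, treating the rest as right-censored.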

  11. Analysis of Preference Data Using Intermediate Test Statistic Abstract

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    Jun 1, 2013 ... [5] Hill, I.D., Some Aspects of Election-to-fill One Seat or Many, Journal of the Royal Statistical Society A, No. 151, pp. 310-314. [6] Myers, R.H., A First Course in the Theory of Linear Statistical Models, PWS-KENT, Boston, 1991. [7] Taplin, R.H., The Statistical Analysis of Preference Data, Applied Statistics, No.

  12. Computer-Assisted Statistical Analysis: Mainframe or Microcomputer.

    Science.gov (United States)

    Shannon, David M.

    1993-01-01

    Describes a study that was designed to examine whether the computer attitudes of graduate students in a beginning statistics course differed based on their prior computer experience and the type of statistical analysis package used. Versions of statistical analysis packages using a mainframe and a microcomputer are compared. (14 references) (LRW)

  13. Statistical Analysis of Research Data | Center for Cancer Research

    Science.gov (United States)

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. The Statistical Analysis of Research Data (SARD) course will be held on April 5-6, 2018 from 9 a.m.-5 p.m. at the National Institutes of Health's Natcher Conference Center, Balcony C on the Bethesda Campus. SARD is designed to provide an overview on the general principles of statistical analysis of research data.  The first day will feature univariate data analysis, including descriptive statistics, probability distributions, one- and two-sample inferential statistics.

  14. On statistical analysis of compound point process

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2006-01-01

    Roč. 35, 2-3 (2006), s. 389-396 ISSN 1026-597X R&D Projects: GA ČR(CZ) GA402/04/1294 Institutional research plan: CEZ:AV0Z10750506 Keywords: counting process * compound process * hazard function * Cox model Subject RIV: BB - Applied Statistics, Operational Research

  15. [Statistical analysis on andrological patients. I. Frequencies].

    Science.gov (United States)

    Nebe, K H; Schirren, C

    1980-01-01

    Based on a collective of 1619 andrological patients from the year 1975, some statistical data are given: age distribution, frequencies, frequency of sexual intercourse, contraception and relation to age, coitus frequency and relation to age, impotence and relation to age, and previous andrological treatment.

  16. Commentary Discrepancy between statistical analysis method and ...

    African Journals Online (AJOL)

    Malawi University of Science and Technology, Thyolo, Malawi. 2. Department of Statistical Sciences, University of Cape Town, Cape Town, South Africa. 3. Malawi College of Medicine–Johns Hopkins University Research Project, College of Medicine, University of Malawi, Blantyre, Malawi. 4. Mahidol–Oxford Research Unit ...

  17. Uncertainty analysis with statistically correlated failure data

    International Nuclear Information System (INIS)

    Modarres, M.; Dezfuli, H.; Roush, M.L.

    1987-01-01

    The likelihood of occurrence of the top event of a fault tree, or of sequences of an event tree, is estimated from the failure probabilities of the components that constitute the events of the fault/event tree. Component failure probabilities are subject to statistical uncertainties. In addition, there are cases where the failure data are statistically correlated. At present most fault tree calculations are based on uncorrelated component failure data. This chapter describes a methodology for assessing the probability intervals for the top event failure probability of fault trees, or for the frequency of occurrence of event tree sequences, when event failure data are statistically correlated. To estimate the mean and variance of the top event, a second-order system moment method based on Taylor series expansion is presented, which provides an alternative to the normally used Monte Carlo method. For cases where component failure probabilities are statistically correlated, the Taylor expansion terms are treated properly. A moment-matching technique is used to obtain the probability distribution function of the top event by fitting the Johnson S_B distribution. The computer program CORRELATE was developed to perform the calculations necessary for the implementation of the method. (author)
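
    As a simplified stand-in for the second-order moment method described above, a first-order Taylor (delta-method) propagation through a hypothetical two-component OR gate shows how correlated failure data enter the top-event variance; the gate, means, and covariance matrix below are invented for illustration:

```python
# Hypothetical two-component OR gate: the top event occurs if either fails.
def top_event(p):
    p1, p2 = p
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def delta_method(f, mean, cov, h=1e-6):
    """First-order Taylor (delta-method) mean and variance of f given the
    input means and covariance matrix, using central-difference gradients."""
    n = len(mean)
    grad = []
    for i in range(n):
        up, dn = list(mean), list(mean)
        up[i] += h
        dn[i] -= h
        grad.append((f(up) - f(dn)) / (2.0 * h))
    var = sum(grad[i] * grad[j] * cov[i][j]
              for i in range(n) for j in range(n))
    return f(mean), var

means = [0.01, 0.02]
# Positive correlation between the component failure probabilities
cov = [[1e-6, 5e-7],
       [5e-7, 4e-6]]
m, v = delta_method(top_event, means, cov)

# Ignoring the correlation (zero off-diagonals) understates the variance
cov0 = [[1e-6, 0.0], [0.0, 4e-6]]
_, v0 = delta_method(top_event, means, cov0)
```

    The chapter's method carries the expansion to second order and fits a Johnson S_B distribution to the resulting moments; the sketch stops at the first-order terms.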

  18. Statistical Analysis Of Reconnaissance Geochemical Data From ...

    African Journals Online (AJOL)

    , Co, Mo, Hg, Sb, Tl, Sc, Cr, Ni, La, W, V, U, Th, Bi, Sr and Ga in 56 stream sediment samples collected from Orle drainage system were subjected to univariate and multivariate statistical analyses. The univariate methods used include ...

  19. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  20. Closed-loop spontaneous baroreflex transfer function is inappropriate for system identification of neural arc but partly accurate for peripheral arc: predictability analysis

    Science.gov (United States)

    Kamiya, Atsunori; Kawada, Toru; Shimizu, Shuji; Sugimachi, Masaru

    2011-01-01

    . Furthermore, the predictabilities of the neural arc transfer functions obtained in open-loop and closed-loop conditions were validated by closed-loop pharmacological (phenylephrine and nitroprusside infusions) pressure interventions. Time-series SNA responses to drug-induced AP changes predicted by the open-loop transfer function matched closely the measured responses (r2, 0.9 ± 0.1), whereas SNA responses predicted by closed-loop-spontaneous transfer function deviated greatly and were the inverse of measured responses (r, −0.8 ± 0.2). These results indicate that although the spontaneous baroreflex transfer function obtained by closed-loop analysis has been believed to represent the neural arc function, it is inappropriate for system identification of the neural arc but is essentially appropriate for the peripheral arc under resting conditions, when compared with open-loop analysis. PMID:21486839

  1. Statistical analysis of medical data using SAS

    CERN Document Server

    Der, Geoff

    2005-01-01

    An Introduction to SAS; Describing and Summarizing Data; Basic Inference; Scatterplots, Correlation, Simple Regression and Smoothing; Analysis of Variance and Covariance; Multiple Regression; Logistic Regression; The Generalized Linear Model; Generalized Additive Models; Nonlinear Regression Models; The Analysis of Longitudinal Data I; The Analysis of Longitudinal Data II: Models for Normal Response Variables; The Analysis of Longitudinal Data III: Non-Normal Response; Survival Analysis; Analysis of Multivariate Data: Principal Components and Cluster Analysis; References

  2. Methods of statistical analysis of fluctuating asymmetry

    Directory of Open Access Journals (Sweden)

    Zorina Anastasia

    2012-10-01

    Full Text Available. Methodological problems concerning the practical use of fluctuating asymmetry levels in biological objects are considered. The questions connected with the variety of asymmetry calculation methods and the efficiency of asymmetry indicators and integrated indexes are discussed in detail. Discrepancies in research results when several estimates of asymmetry are used are connected with their statistical properties and the peculiarities of their normal variability, which define the sensitivity and operability of the indicators. Concrete examples illustrate the negative influence of arithmetic transformations on the revealing properties of indicators: disturbance of the normal distribution and the consequent need for rough nonparametric criteria, the increased weight of rare chance deviations, and the introduction of additional variability components into the asymmetry level. Problems that arise in calculating integrated asymmetry indexes when combining traits with different levels of statistical parameters are considered separately. It is recommended to use a fluctuating-asymmetry indicator based on the normalized deviation.

  3. Statistical uncertainty analysis in reactor risk estimation

    International Nuclear Information System (INIS)

    Modarres, M.; Cadman, T.

    1985-01-01

    Two promising methods of statistical uncertainty evaluation for use in probabilistic risk assessment (PRA) are described, tested, and compared in this study. These two methods are the Bootstrap technique and the System Reduction technique. Both of these methods use binomial distributions to model all probability estimates. Necessary modifications to these two methods are discussed. These modifications are necessary for an objective use of the methods in PRAs. The methods are applied to important generic pressurized water reactor transient and loss-of-coolant-accident event trees. The results of this application are presented and compared. Finally, conclusions are drawn regarding the applicability of the methods and the results obtained in the study. It is concluded that both of the methods yield results which are comparable and that both can be used in statistical uncertainty evaluations under certain specified conditions. (orig.)
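
    The Bootstrap technique applied to binomially modelled failure data can be sketched as follows; the 3-failures-in-50-demands record is invented for illustration and the System Reduction technique is not reproduced:

```python
import random

random.seed(42)

# Hypothetical record: 3 failures observed in 50 demands of a component.
n_demands, n_failures = 50, 3
outcomes = [1] * n_failures + [0] * (n_demands - n_failures)

# Nonparametric bootstrap: resample the demands with replacement and
# collect the failure-probability estimate from each resample.
reps = 2000
boot = []
for _ in range(reps):
    resample = [random.choice(outcomes) for _ in range(n_demands)]
    boot.append(sum(resample) / n_demands)
boot.sort()

point = n_failures / n_demands
lower = boot[int(0.025 * reps)]   # 2.5th percentile
upper = boot[int(0.975 * reps)]   # 97.5th percentile
```

    The percentile interval (lower, upper) is the kind of probability interval that would then be propagated through the event trees.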

  4. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  5. Network analysis based on large deviation statistics

    Science.gov (United States)

    Miyazaki, Syuji

    2007-07-01

    A chaotic piecewise linear map whose statistical properties are identical to those of a random walk on directed graphs such as the world wide web (WWW) is constructed, and the dynamic quantity is analyzed in the framework of large deviation statistics. Gibbs measures include the weight factor appearing in the weighted average of the dynamic quantity, which can also quantitatively measure the importance of web sites. Currently used levels of importance in the commercial search engines are independent of search terms, which correspond to the stationary visiting frequency of each node obtained from a random walk on the network or equivalent chaotic dynamics. Levels of importance based on the Gibbs measure depend on each search term which is specified by the searcher. The topological conjugate transformation between one dynamical system with a Gibbs measure and another dynamical system whose standard invariant probability measure is identical to the Gibbs measure is also discussed.

  6. Common misconceptions about data analysis and statistics.

    Science.gov (United States)

    Motulsky, Harvey J

    2014-11-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1. P-hacking: reanalyzing a data set in many different ways, or reanalyzing with additional replicates, until you get the result you want. 2. Overemphasis on P values rather than on the actual size of the observed effect. 3. Overuse of statistical hypothesis testing, and being seduced by the word "significant". 4. Overreliance on standard errors, which are often misunderstood.
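
    Mistake 1 (adding replicates and retesting until p < .05) is easy to demonstrate by simulation. The z-test with known unit variance below is a simplification of the t-test an investigator would actually run, and the group sizes are arbitrary:

```python
import math
import random

random.seed(1)

def p_two_sample(xs, ys):
    """Two-sided p-value for a two-sample z-test, assuming each group has
    unit variance (a simplification of the t-test, for illustration)."""
    nx, ny = len(xs), len(ys)
    z = (sum(xs) / nx - sum(ys) / ny) / math.sqrt(1 / nx + 1 / ny)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))

def experiment(peek):
    """Both groups come from the SAME null distribution, so any 'significant'
    result is a false positive. With peek=True, the investigator keeps adding
    replicates and retesting until p < .05 or n reaches 50."""
    xs = [random.gauss(0, 1) for _ in range(10)]
    ys = [random.gauss(0, 1) for _ in range(10)]
    while True:
        significant = p_two_sample(xs, ys) < 0.05
        if significant or not peek or len(xs) >= 50:
            return significant
        xs.append(random.gauss(0, 1))
        ys.append(random.gauss(0, 1))

trials = 2000
honest = sum(experiment(peek=False) for _ in range(trials)) / trials
hacked = sum(experiment(peek=True) for _ in range(trials)) / trials
# honest stays near the nominal 5% rate; hacked is inflated well above it
```
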

  7. Statistical analysis of radioactivity in the environment

    International Nuclear Information System (INIS)

    Barnes, M.G.; Giacomini, J.J.

    1980-05-01

    The pattern of radioactivity in surface soils of Area 5 of the Nevada Test Site is analyzed statistically by means of kriging. The 1962 event code-named Smallboy affected the greatest proportion of the area sampled, but some of the area was also affected by a number of other events. The data for this study were collected on a regular grid to take advantage of the efficiency of grid sampling.

  8. Multinomial analysis of behavior: statistical methods.

    Science.gov (United States)

    Koster, Jeremy; McElreath, Richard

    2017-01-01

    Behavioral ecologists frequently use observational methods, such as instantaneous scan sampling, to record the behavior of animals at discrete moments in time. We develop and apply multilevel, multinomial logistic regression models for analyzing such data. These statistical methods correspond to the multinomial character of the response variable while also accounting for the repeated observations of individuals that characterize behavioral datasets. Correlated random effects potentially reveal individual-level trade-offs across behaviors, allowing for models that reveal the extent to which individuals who regularly engage in one behavior also exhibit relatively more or less of another behavior. Using an example dataset, we demonstrate the estimation of these models using Hamiltonian Monte Carlo algorithms, as implemented in the RStan package in the R statistical environment. The supplemental files include a coding script and data that demonstrate auxiliary functions to prepare the data, estimate the models, summarize the posterior samples, and generate figures that display model predictions. We discuss possible extensions to our approach, including models with random slopes to allow individual-level behavioral strategies to vary over time and the need for models that account for temporal autocorrelation. These models can potentially be applied to a broad class of statistical analyses by behavioral ecologists, focusing on other polytomous response variables, such as behavior, habitat choice, or emotional states.

  9. Effects of inappropriate empirical antibiotic therapy on mortality in patients with healthcare-associated methicillin-resistant Staphylococcus aureus bacteremia: a propensity-matched analysis.

    Science.gov (United States)

    Yoon, Young Kyung; Park, Dae Won; Sohn, Jang Wook; Kim, Hyo Youl; Kim, Yeon-Sook; Lee, Chang-Seop; Lee, Mi Suk; Ryu, Seong-Yeol; Jang, Hee-Chang; Choi, Young Ju; Kang, Cheol-In; Choi, Hee Jung; Lee, Seung Soon; Kim, Shin Woo; Kim, Sang Il; Kim, Eu Suk; Kim, Jeong Yeon; Yang, Kyung Sook; Peck, Kyong Ran; Kim, Min Ja

    2016-07-15

    The purported value of empirical therapy to cover methicillin-resistant Staphylococcus aureus (MRSA) has been debated for decades. The purpose of this study was to evaluate the effects of inappropriate empirical antibiotic therapy on clinical outcomes in patients with healthcare-associated MRSA bacteremia (HA-MRSAB). A prospective, multicenter, observational study was conducted in 15 teaching hospitals in the Republic of Korea from February 2010 to July 2011. The study subjects included adult patients with HA-MRSAB. Covariate adjustment using the propensity score was performed to control for bias in treatment assignment. The predictors of in-hospital mortality were determined by multivariate logistic regression analyses. In total, 345 patients with HA-MRSAB were analyzed. The overall in-hospital mortality rate was 33.0 %. Appropriate empirical antibiotic therapy was given to 154 (44.6 %) patients. The vancomycin minimum inhibitory concentrations of the MRSA isolates ranged from 0.5 to 2 mg/L by E-test. There was no significant difference in mortality between propensity-matched patient pairs receiving inappropriate or appropriate empirical antibiotics (odds ratio [OR] = 1.20; 95 % confidence interval [CI] = 0.71-2.03). Among patients with severe sepsis or septic shock, there was no significant difference in mortality between the treatment groups. In multivariate analyses, severe sepsis or septic shock (OR = 5.45; 95 % CI = 2.14-13.87), Charlson's comorbidity index (per 1-point increment; OR = 1.52; 95 % CI = 1.27-1.83), and prior receipt of glycopeptides (OR = 3.24; 95 % CI = 1.08-9.67) were independent risk factors for mortality. Inappropriate empirical antibiotic therapy was not associated with clinical outcome in patients with HA-MRSAB. Prudent use of empirical glycopeptide therapy should be justified even in hospitals with high MRSA prevalence.

  10. Statistical analysis of random duration times

    International Nuclear Information System (INIS)

    Engelhardt, M.E.

    1996-04-01

    This report presents basic statistical methods for analyzing data obtained by observing random time durations. It gives nonparametric estimates of the cumulative distribution function, reliability function and cumulative hazard function. These results can be applied with either complete or censored data. Several models which are commonly used with time data are discussed, and methods for model checking and goodness-of-fit tests are discussed. Maximum likelihood estimates and confidence limits are given for the various models considered. Some results for situations where repeated durations such as repairable systems are also discussed
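
    The nonparametric reliability-function estimate mentioned above is the Kaplan-Meier product-limit estimator, sketched here for a hypothetical mix of complete and right-censored durations (the numbers are invented):

```python
# Hypothetical duration data: (time, observed) pairs; observed=False means
# right-censored (the duration was still ongoing when observation stopped).
data = [(2, True), (3, True), (3, False), (5, True), (8, False), (9, True)]

def kaplan_meier(data):
    """Product-limit estimate of the survival (reliability) function:
    at each observed event time, multiply by (1 - deaths / at_risk)."""
    event_times = sorted({t for t, obs in data if obs})
    survival, s = [], 1.0
    for t in event_times:
        at_risk = sum(1 for u, _ in data if u >= t)
        deaths = sum(1 for u, obs in data if u == t and obs)
        s *= 1.0 - deaths / at_risk
        survival.append((t, s))
    return survival

km = kaplan_meier(data)
```

    Censored observations never trigger a step in the curve, but they do count in the at-risk set at every event time up to their censoring time, which is how the estimator uses incomplete durations.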

  11. Statistical learning methods in high-energy and astrophysics analysis

    International Nuclear Information System (INIS)

    Zimmermann, J.; Kiesling, C.

    2004-01-01

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application

  12. Statistical learning methods in high-energy and astrophysics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zimmermann, J. [Forschungszentrum Juelich GmbH, Zentrallabor fuer Elektronik, 52425 Juelich (Germany) and Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)]. E-mail: zimmerm@mppmu.mpg.de; Kiesling, C. [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Munich (Germany)

    2004-11-21

    We discuss several popular statistical learning methods used in high-energy- and astro-physics analysis. After a short motivation for statistical learning we present the most popular algorithms and discuss several examples from current research in particle- and astro-physics. The statistical learning methods are compared with each other and with standard methods for the respective application.

  13. Why Flash Type Matters: A Statistical Analysis

    Science.gov (United States)

    Mecikalski, Retha M.; Bitzer, Phillip M.; Carey, Lawrence D.

    2017-09-01

    While the majority of research only differentiates between intracloud (IC) and cloud-to-ground (CG) flashes, there exists a third flash type, known as hybrid flashes. These flashes have extensive IC components as well as return strokes to ground but are misclassified as CG flashes in current flash type analyses due to the presence of a return stroke. In an effort to show that IC, CG, and hybrid flashes should be separately classified, the two-sample Kolmogorov-Smirnov (KS) test was applied to the flash sizes, flash initiation, and flash propagation altitudes for each of the three flash types. The KS test statistically showed that IC, CG, and hybrid flashes do not have the same parent distributions and thus should be separately classified. Separate classification of hybrid flashes will lead to improved lightning-related research, because unambiguously classified hybrid flashes occur on the same order of magnitude as CG flashes for multicellular storms.
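
    A self-contained sketch of the two-sample KS test the study applies. The exponential samples are illustrative stand-ins for two flash-size populations with different scales, not the paper's lightning data:

```python
import random

random.seed(0)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the two empirical distribution functions."""
    pts = sorted(set(a) | set(b))
    def ecdf(xs, t):
        return sum(1 for x in xs if x <= t) / len(xs)
    return max(abs(ecdf(a, t) - ecdf(b, t)) for t in pts)

# Invented stand-ins for two flash-size populations (exponential samples
# with different means).
n = 300
ic = [random.expovariate(1 / 4.0) for _ in range(n)]
cg = [random.expovariate(1 / 12.0) for _ in range(n)]

d = ks_statistic(ic, cg)
# Asymptotic 5% critical value for two equal samples of size n
crit = 1.36 * (2.0 / n) ** 0.5
```

    When d exceeds crit, the hypothesis that the two samples share a parent distribution is rejected, which is the paper's argument for classifying hybrid flashes separately.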

  14. Statistical models for competing risk analysis

    International Nuclear Information System (INIS)

    Sather, H.N.

    1976-08-01

    Research results on three new models for potential applications in competing risks problems. One section covers the basic statistical relationships underlying the subsequent competing risks model development. Another discusses the problem of comparing cause-specific risk structure by competing risks theory in two homogeneous populations, P1 and P2. Weibull models which allow more generality than the Berkson and Elveback models are studied for the effect of time on the hazard function. The use of concomitant information for modeling single-risk survival is extended to the multiple failure mode domain of competing risks. The model used to illustrate the use of this methodology is a life table model which has constant hazards within pre-designated intervals of the time scale. Two parametric models for bivariate dependent competing risks, which provide interesting alternatives, are proposed and examined

  15. Statistical analysis of the Martian surface

    Science.gov (United States)

    Landais, F.; Schmidt, F.; Lovejoy, S.

    2015-10-01

    We investigate the scaling properties of the topography of Mars [10]. Planetary topographic fields are well known to exhibit (mono)fractal behavior. Indeed, the fractal formalism is efficient at reproducing the variability observed in topography. Still, a single fractal dimension is not enough to explain the huge variability and intermittency. Previous studies have shown that fractal dimensions might differ from one region to another, excluding a general description at the planetary scale. In this project, we analyze the Martian topographic data with a multifractal formalism to study the scaling intermittency. In the multifractal paradigm, the local variation of the fractal dimension is interpreted as a statistical property of multifractal fields. The results suggest multifractal behaviour from the planetary scale down to 10 km. From 10 km down to 600 m, the topography seems to be simply monofractal. This transition indicates a significant change in the geological processes governing the Red Planet's surface.

  16. Statistical analysis of earthquake ground motion parameters

    International Nuclear Information System (INIS)

    1979-12-01

    Several earthquake ground response parameters that define the strength, duration, and frequency content of the motions are investigated using regression analyses techniques; these techniques incorporate statistical significance testing to establish the terms in the regression equations. The parameters investigated are the peak acceleration, velocity, and displacement; Arias intensity; spectrum intensity; bracketed duration; Trifunac-Brady duration; and response spectral amplitudes. The study provides insight into how these parameters are affected by magnitude, epicentral distance, local site conditions, direction of motion (i.e., whether horizontal or vertical), and earthquake event type. The results are presented in a form so as to facilitate their use in the development of seismic input criteria for nuclear plants and other major structures. They are also compared with results from prior investigations that have been used in the past in the criteria development for such facilities

  17. Statistical analysis of earthquake ground motion parameters

    Energy Technology Data Exchange (ETDEWEB)

    1979-12-01

    Several earthquake ground response parameters that define the strength, duration, and frequency content of the motions are investigated using regression analyses techniques; these techniques incorporate statistical significance testing to establish the terms in the regression equations. The parameters investigated are the peak acceleration, velocity, and displacement; Arias intensity; spectrum intensity; bracketed duration; Trifunac-Brady duration; and response spectral amplitudes. The study provides insight into how these parameters are affected by magnitude, epicentral distance, local site conditions, direction of motion (i.e., whether horizontal or vertical), and earthquake event type. The results are presented in a form so as to facilitate their use in the development of seismic input criteria for nuclear plants and other major structures. They are also compared with results from prior investigations that have been used in the past in the criteria development for such facilities.

  18. Statistical analysis of concrete creep effects

    International Nuclear Information System (INIS)

    Floris, C.

    1989-01-01

    The principal sources of uncertainty in concrete creep effects are the following: uncertainty in the stochastic evolution in time of the mechanism of creep (internal uncertainty); uncertainty in the prediction of the properties of the materials; uncertainty in the stochastic evolution of environmental conditions; uncertainty of the theoretical models; and errors of measurement. Interest in the random nature of concrete creep (and shrinkage) effects is discussed. The late start of studies on this subject is perhaps due to their theoretical and computational complexity; nevertheless, since creep and shrinkage affect such features of concrete structures as the residual prestressing force in prestressed sections, stress redistribution in steel-concrete composite beams, deflections and deformations, stress distributions in non-homogeneous structures, reactions due to delayed restraints, and creep buckling, these studies are very important. This paper aims to find the statistics of some of these effects, taking into account the third type of uncertainty source.

  19. Statistical power analysis for the behavioral sciences

    National Research Council Canada - National Science Library

    Cohen, Jacob

    1988-01-01

    ... offers a unifying framework and some new data-analytic possibilities. 2. A new chapter (Chapter 11) considers some general topics in power analysis in more integrated form than is possible in the earlier...

  20. Statistical power analysis for the behavioral sciences

    National Research Council Canada - National Science Library

    Cohen, Jacob

    1988-01-01

    .... A chapter has been added for power analysis in set correlation and multivariate methods (Chapter 10). Set correlation is a realization of the multivariate general linear model, and incorporates the standard multivariate methods...

  1. Statistical methods for categorical data analysis

    CERN Document Server

    Powers, Daniel

    2008-01-01

    This book provides a comprehensive introduction to methods and models for categorical data analysis and their applications in social science research. Companion website also available, at https://webspace.utexas.edu/dpowers/www/

  2. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the
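
    The normal approximation to the binomial covered by the text can be checked numerically; with a continuity correction it tracks the exact CDF closely for moderate n (the n = 100, p = 0.5 case below is just an example):

```python
import math

def binom_cdf_exact(n, k, p):
    """Exact binomial CDF P(X <= k) by summing the pmf."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def binom_cdf_normal(n, k, p):
    """Normal approximation to P(X <= k) with continuity correction:
    Phi((k + 0.5 - np) / sqrt(np(1-p)))."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    z = (k + 0.5 - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

exact = binom_cdf_exact(100, 55, 0.5)
approx = binom_cdf_normal(100, 55, 0.5)
```

    The usual rule of thumb is that the approximation is adequate when np and n(1-p) are both at least about 5.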

  3. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulations in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  4. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  5. Sensitivity analysis of ranked data: from order statistics to quantiles

    NARCIS (Netherlands)

    Heidergott, B.F.; Volk-Makarewicz, W.

    2015-01-01

    In this paper we provide the mathematical theory for sensitivity analysis of order statistics of continuous random variables, where the sensitivity is with respect to a distributional parameter. Sensitivity analysis of order statistics over a finite number of observations is discussed before
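    The gradient-estimation idea can be illustrated numerically. The sketch below (an illustration under assumed settings, not the authors' estimator) approximates the sensitivity of the median of an Exponential(rate) distribution with respect to the rate by a central finite difference with common random numbers, and compares it with the closed-form value -ln(2)/rate^2.

```python
import numpy as np

# Sensitivity of the median (an order statistic / quantile) of an
# Exponential(rate) distribution with respect to the rate parameter.
# Analytically the median is ln(2)/rate, so d(median)/d(rate) = -ln(2)/rate**2.

def mc_median(rate, u):
    # Inverse-transform sampling; the common random numbers u are reused
    # across parameter values so the finite difference has low variance.
    samples = -np.log(1.0 - u) / rate
    return np.median(samples)

rng = np.random.default_rng(0)
u = rng.random(200_000)

rate, h = 2.0, 1e-4
fd = (mc_median(rate + h, u) - mc_median(rate - h, u)) / (2 * h)
analytic = -np.log(2) / rate**2
```

Because the same uniforms are used on both sides of the difference, the Monte Carlo noise largely cancels and the estimate lands close to the analytic derivative.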

  6. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF ... smoothing (e.g., as in complex smoothing) with respect to the original RTF statistics. More specifically, the RTF statistics, derived after the complex smoothing calculation, are compared to the original statistics across space inside typical rooms, by varying the source, the receiver position ... and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical ...

  7. Statistical Image Analysis of Longitudinal RAVENS Images

    Directory of Open Access Journals (Sweden)

    Seonjoo eLee

    2015-10-01

    Regional Analysis of Volumes Examined in Normalized Space (RAVENS) images are transformation images used in the study of brain morphometry. In this paper, RAVENS images are analyzed using a longitudinal variant of voxel-based morphometry (VBM) and longitudinal functional principal component analysis (LFPCA) for high-dimensional images. We demonstrate that the latter overcomes the limitations of standard longitudinal VBM analyses, which do not separate registration errors from other longitudinal changes and baseline patterns. This is especially important in contexts where longitudinal changes are only a small fraction of the overall observed variability, which is typical in normal aging and many chronic diseases. Our simulation study shows that LFPCA effectively separates registration error from baseline and longitudinal signals of interest by decomposing RAVENS images measured at multiple visits into three components: a subject-specific imaging random intercept that quantifies the cross-sectional variability, a subject-specific imaging slope that quantifies the irreversible changes over multiple visits, and a subject-visit-specific imaging deviation. We describe strategies to identify baseline/longitudinal variation and registration errors combined with covariates of interest. Our analysis suggests that specific regional brain atrophy and ventricular enlargement are associated with multiple sclerosis (MS) disease progression.

  8. An analysis of UK wind farm statistics

    International Nuclear Information System (INIS)

    Milborrow, D.J.

    1995-01-01

    An analysis of key data for 22 completed wind projects shows 134 MW of plant cost Pound 152 million, giving an average cost of Pound 1136/kW. The energy generation potential of these windfarms is around 360 GWh, derived from sites with windspeeds between 6.2 and 8.8 m/s. Relationships between wind speed, energy production and cost were examined and it was found that costs increased with wind speed, due to the difficulties of access in hilly regions. It also appears that project costs fell with time and wind energy prices have fallen much faster than electricity prices. (Author)

  9. Domain analysis and modeling to improve comparability of health statistics.

    Science.gov (United States)

    Okada, M; Hashimoto, H; Ohida, T

    2001-01-01

    Health statistics is an essential element in improving the ability of managers of health institutions, healthcare researchers, policy makers, and health professionals to formulate appropriate courses of action and to make decisions based on evidence. To ensure adequate health statistics, standards are of critical importance. A study on healthcare statistics domain analysis is underway in an effort to improve the usability and comparability of health statistics. The ongoing study focuses on structuring the domain knowledge and making the knowledge explicit, with a data element dictionary as the core. Supplemental to the dictionary are a domain term list, a terminology dictionary, and a data model to help organize the concepts constituting the health statistics domain.

  10. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  11. Statistical analysis of radial interface growth

    International Nuclear Information System (INIS)

    Masoudi, A A; Hosseinabadi, S; Khorrami, M; Davoudi, J; Kohandel, M

    2012-01-01

    Recent studies have questioned the application of standard scaling analysis to study radially growing interfaces (Escudero 2008 Phys. Rev. Lett. 100 116101; 2009 Ann. Phys. 324 1796). We show that the radial Edwards–Wilkinson (EW) equation belongs to the same universality class as that obtained in the planar geometry. In addition, we use numerical simulations to calculate the interface width for both random deposition with surface relaxation (RDSR) and restricted solid on solid (RSOS) models, assuming that the system size increases linearly with time (due to radial geometry). By applying appropriate rules for each model, we show that the interface width increases with time as t^β, where the exponent β is the same as those obtained from the corresponding planar geometries. (letter)
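    The growth law cited above is easy to reproduce in the planar case. The following sketch (a minimal fixed-size planar RDSR model on a ring, with illustrative parameters rather than the letter's growing-system setup) deposits particles, relaxes each to its lowest neighbouring column, and checks that the interface width grows with time.

```python
import numpy as np

# Random deposition with surface relaxation (RDSR) on a 1D ring of L sites.
# Well before saturation the interface width w(t) grows roughly as t**beta
# (beta = 1/4 for planar RDSR), so a late-time width should exceed an
# early-time width.

def rdsr_widths(L=256, deposits_per_site=200, seed=1):
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=int)
    total = L * deposits_per_site
    checkpoints = {total // 100, total}   # an early and a late time
    widths = []
    for n in range(1, total + 1):
        i = rng.integers(L)
        left, right = (i - 1) % L, (i + 1) % L
        # relax: the particle sticks at the lowest of {i, left, right}
        j = min((i, left, right), key=lambda k: h[k])
        h[j] += 1
        if n in checkpoints:
            widths.append(h.std())    # interface width = std of heights
    return widths

w_early, w_late = rdsr_widths()
```

Fitting log(w) against log(t) over many checkpoints would recover an estimate of the exponent β itself.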

  12. A statistical analysis of UK financial networks

    Science.gov (United States)

    Chu, J.; Nadarajah, S.

    2017-04-01

    In recent years, with a growing interest in big or large datasets, there has been a rise in the application of large graphs and networks to financial big data. Much of this research has focused on the construction and analysis of the network structure of stock markets, based on the relationships between stock prices. Motivated by Boginski et al. (2005), who studied the characteristics of a network structure of the US stock market, we construct network graphs of the UK stock market using the same method. We fit four distributions to the degree density of the vertices from these graphs, the Pareto I, Fréchet, lognormal, and generalised Pareto distributions, and assess the goodness of fit. Our results show that the degree density of the complements of the market graphs, constructed using a negative threshold value close to zero, can be fitted well with the Fréchet and lognormal distributions.
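    The lognormal case mentioned above has a particularly simple fit. As a sketch (on synthetic degree data, not the authors' market graphs), the maximum-likelihood lognormal parameters are just the mean and standard deviation of the log-degrees:

```python
import numpy as np

# For a lognormal variable, log(degree) is normal, so the maximum-likelihood
# fit reduces to the sample mean and standard deviation of the log-degrees.

def fit_lognormal(degrees):
    logs = np.log(degrees)
    return logs.mean(), logs.std()

# Synthetic "degree" data drawn from a known lognormal, for illustration only.
rng = np.random.default_rng(42)
degrees = rng.lognormal(mean=1.5, sigma=0.6, size=50_000)
mu_hat, sigma_hat = fit_lognormal(degrees)
```

Goodness of fit could then be assessed by comparing the empirical distribution of the degrees against the fitted lognormal, e.g. with a Kolmogorov-Smirnov statistic.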

  13. Inappropriate prescribing in geriatric patients.

    LENUS (Irish Health Repository)

    Barry, Patrick J

    2012-02-03

    Inappropriate prescribing in older people is a common condition associated with significant morbidity, mortality, and financial costs. Medication use increases with age, and this, in conjunction with an increasing disease burden, is associated with adverse drug reactions. This review outlines why older people are more likely to develop adverse drug reactions and how common the problem is. The use of different tools to identify and measure the problem is reviewed. Common syndromes seen in older adults (eg, falling, cognitive impairment, sleep disturbance) are considered, and recent evidence in relation to medication use for these conditions is reviewed. Finally, we present a brief summary of significant developments in the recent literature for those caring for older people.

  14. Comparative analysis of positive and negative attitudes toward statistics

    Science.gov (United States)

    Ghulami, Hassan Rahnaward; Ab Hamid, Mohd Rashid; Zakaria, Roslinazairimah

    2015-02-01

    Many statistics lecturers and statistics education researchers are interested in their students' attitudes toward statistics during a statistics course. A positive attitude toward statistics is vital because it encourages students to take an interest in the course and to master its core content, whereas students with negative attitudes toward statistics may feel depressed, especially in group assignments, are at risk of failure, are often highly emotional, and struggle to move forward. This study therefore investigates students' attitudes towards learning statistics. Six latent constructs measure students' attitudes toward learning statistics: affect, cognitive competence, value, difficulty, interest, and effort. The questionnaire was adopted and adapted from the reliable and validated Survey of Attitudes towards Statistics (SATS) instrument. The study was conducted among undergraduate engineering students at Universiti Malaysia Pahang (UMP); the respondents were students taking the applied statistics course in different faculties. The analysis shows that the questionnaire is acceptable, and the proposed relationships among the constructs were investigated. Students show full effort to master the statistics course, find the course enjoyable, are confident in their intellectual capacity, and hold more positive than negative attitudes towards learning statistics. In conclusion, positive attitudes were mostly exhibited in the affect, cognitive competence, value, interest, and effort constructs, while negative attitudes were mostly exhibited in the difficulty construct.

  15. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools makes it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  16. Statistics

    International Nuclear Information System (INIS)

    2005-01-01

    For the years 2004 and 2005 the figures shown in the tables of Energy Review are partly preliminary. The annual statistics published in Energy Review are presented in more detail in a publication called Energy Statistics that comes out yearly. Energy Statistics also includes historical time-series over a longer period of time (see e.g. Energy Statistics, Statistics Finland, Helsinki 2004.) The applied energy units and conversion coefficients are shown in the back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO2 emissions, Supplies and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes, precautionary stock fees and oil pollution fees

  17. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...

  18. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. Rich varieties of advanced and recent statistical modelling are mostly available in open source software (one of them is R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical modelling is readily available, accessible and applicable on the web. We have previously made an interface in the form of an e-tutorial for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location, scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using Computer Intensive Statistics (Bootstrap and Markov Chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes the statistical modelling easier to apply and easier to compare, in order to find the most appropriate model for the data.
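    A minimal sketch of the kind of model such an interface wraps is an ordinary least-squares fit (LM) combined with a case-resampling bootstrap interval for the slope; the data and parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# LM fit plus a bootstrap confidence interval for the slope,
# on synthetic data y = 2 + 1.5*x + noise.

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 + 1.5 * x + rng.normal(0.0, 1.0, 200)

def slope(xs, ys):
    # least-squares fit of intercept + slope; return the slope
    X = np.column_stack([np.ones_like(xs), xs])
    return np.linalg.lstsq(X, ys, rcond=None)[0][1]

boot = []
for _ in range(1000):
    idx = rng.integers(0, len(x), len(x))   # resample cases with replacement
    boot.append(slope(x[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile bootstrap interval
```

GLM, GAM and GAMLSS fits differ in the model being refit, but the same case-resampling loop applies around any of them.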

  19. statistical analysis of wind speed for electrical power generation

    African Journals Online (AJOL)

    HOD

    Keywords: wind speed, probability density function, wind energy conversion system, statistical analyses. 1. INTRODUCTION. In order ... "Statistical analysis of wind speed distribution based on six Weibull Methods for wind power evaluation in Garoua, Cameroon," Revue des Energies Renouvelables, vol. 18, no. 1, pp.
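    A common core of such analyses is a maximum-likelihood Weibull fit to the wind-speed sample. The sketch below (synthetic wind speeds and a standard damped fixed-point update for the shape parameter; not code from the paper) recovers the shape k and scale c from simulated data.

```python
import numpy as np

# Two-parameter Weibull fit by maximum likelihood: the shape k solves a
# fixed-point equation, and the scale c then follows in closed form.

def fit_weibull(v, iters=200):
    ln_v = np.log(v)
    k = 1.0
    for _ in range(iters):
        vk = v ** k
        k_new = 1.0 / ((vk * ln_v).sum() / vk.sum() - ln_v.mean())
        k = 0.5 * k + 0.5 * k_new        # damped MLE fixed-point update
    c = (v ** k).mean() ** (1.0 / k)     # closed-form scale given k
    return k, c

# Synthetic wind speeds (m/s) from a Weibull with known k = 2, c = 7.
rng = np.random.default_rng(7)
v = 7.0 * rng.weibull(2.0, size=100_000)
k_hat, c_hat = fit_weibull(v)
```

With the fitted (k, c), quantities such as mean wind power density follow from the Weibull moments.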

  20. Statistical Compilation of the ICT Sector and Policy Analysis | IDRC ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Final technical report / statistical compilation of the ICT sector and policy analysis : a communication for development approach to scientific training and research and its extension digital transformations; seeking applied frameworks and indicators.

  1. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g., Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-March 2000, Energy exports by recipient country in January-March 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  2. Statistics

    International Nuclear Information System (INIS)

    1999-01-01

    For the years 1998 and 1999, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1998, Statistics Finland, Helsinki 1999, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 1999, Energy exports by recipient country in January-June 1999, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  3. Statistics

    International Nuclear Information System (INIS)

    2001-01-01

    For the year 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review appear in more detail from the publication Energiatilastot - Energy Statistics issued annually, which also includes historical time series over a longer period (see e.g. Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions from the use of fossil fuels, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in 2000, Energy exports by recipient country in 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  4. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Science.gov (United States)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-06-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  5. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    International Nuclear Information System (INIS)

    Glascock, M. D.; Neff, H.; Vaughn, K. J.

    2004-01-01

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  6. Instrumental Neutron Activation Analysis and Multivariate Statistics for Pottery Provenance

    Energy Technology Data Exchange (ETDEWEB)

    Glascock, M. D.; Neff, H. [University of Missouri, Research Reactor Center (United States); Vaughn, K. J. [Pacific Lutheran University, Department of Anthropology (United States)

    2004-06-15

    The application of instrumental neutron activation analysis and multivariate statistics to archaeological studies of ceramics and clays is described. A small pottery data set from the Nasca culture in southern Peru is presented for illustration.

  7. Inappropriate oophorectomy at time of benign premenopausal hysterectomy.

    Science.gov (United States)

    Mahal, Amandeep S; Rhoads, Kim F; Elliott, Christopher S; Sokol, Eric R

    2017-08-01

    We assessed rates of oophorectomy during benign hysterectomy around the release of the American College of Obstetricians and Gynecologists 2008 practice bulletin on prophylactic oophorectomy, and evaluated predictors of inappropriate premenopausal oophorectomy. A cross-sectional administrative database analysis was performed utilizing the California Office of Statewide Health Planning and Development Patient Discharge Database for the years 2005 to 2011. After identifying all premenopausal women undergoing hysterectomy for benign conditions, International Classification of Diseases (ICD)-9 diagnosis codes were reviewed to create a master list of indications for oophorectomy. We defined appropriate oophorectomy as cases with concomitant coding for ovarian cyst, breast cancer susceptibility gene carrier status, and other diagnoses. A logistic regression model was created using patient demographics and hospital characteristics to predict inappropriate oophorectomy. We identified 57,776 benign premenopausal hysterectomies with oophorectomies during the period studied. Of the premenopausal oophorectomies, 37.7% (21,783) were deemed "inappropriate", with no documented reason for removal. The total number of premenopausal inpatient hysterectomies with oophorectomy decreased yearly (12,227/y in 2005 to 5,930/y in 2011); however, the percentage of inappropriate oophorectomies remained stable. In multivariate analysis, Hispanic and African American ethnicity/race was associated with increased odds of inappropriate oophorectomy, as were urban hospitals and hospitals with low Medi-Cal utilization. In premenopausal women undergoing benign hysterectomy, over one-third undergo oophorectomy without an appropriate indication documented. The rate of inappropriate oophorectomy in California has not changed since the 2008 American College of Obstetricians and Gynecologists guidelines. Whereas the absolute number of inpatient hysterectomies for benign

  8. [Statistical analysis using freely-available "EZR (Easy R)" software].

    Science.gov (United States)

    Kanda, Yoshinobu

    2015-10-01

    Clinicians must often perform statistical analyses for purposes such as evaluating preexisting evidence and designing or executing clinical studies. R is a free software environment for statistical computing. R supports many statistical analysis functions, but does not incorporate a statistical graphical user interface (GUI). The R commander provides an easy-to-use basic-statistics GUI for R. However, the statistical functionality of the R commander is limited, especially in the field of biostatistics. Therefore, the author added several important statistical functions to the R commander and named it "EZR (Easy R)", which is now being distributed on the following website: http://www.jichi.ac.jp/saitama-sct/. EZR allows the application of statistical functions that are frequently used in clinical studies, such as survival analyses, including competing risk analyses and the use of time-dependent covariates, by point-and-click access. In addition, by saving the script automatically created by EZR, users can learn R script writing, maintain the traceability of the analysis, and ensure that the statistical process is overseen by a supervisor.
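    EZR itself is an R GUI, but the core of the survival analyses it automates, the Kaplan-Meier estimator, is easy to sketch. The following Python illustration (hypothetical data, not part of EZR) computes the product-limit survival curve from follow-up times and event indicators.

```python
import numpy as np

# Kaplan-Meier estimator: at each distinct event time t, multiply the
# running survival estimate by (1 - deaths_at_t / number_at_risk_at_t).
# events[i] is 1 for an observed event, 0 for a censored observation.

def kaplan_meier(times, events):
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    surv, curve = 1.0, {}
    i = 0
    while i < n:
        t = times[i]
        at_risk = n - i          # everyone with follow-up >= t
        deaths = 0
        while i < n and times[i] == t:
            deaths += events[i]  # censored cases count toward at_risk only
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve[t] = surv
    return curve

km = kaplan_meier([1, 2, 3, 4, 4], [1, 1, 0, 1, 0])
```

For this toy data the curve steps at the event times 1, 2 and 4; the censored observations at 3 and 4 reduce the risk set without producing steps.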

  9. Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers

    Science.gov (United States)

    Keiffer, Greggory L.; Lane, Forrest C.

    2016-01-01

    Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
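    The matching step of PSA can be sketched compactly. The example below (synthetic data with a single confounder, a plain gradient-ascent logistic model, and greedy nearest-neighbour matching with replacement, all chosen for brevity rather than taken from the paper) estimates propensity scores and checks that matching reduces the covariate imbalance between treated and control groups.

```python
import numpy as np

# Propensity-score matching sketch: fit a logistic model for treatment
# assignment, then match each treated unit to the control with the
# nearest propensity score (with replacement).

def logistic_ps(X, t, steps=2000, lr=0.1):
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (t - p) / len(t)   # average log-likelihood gradient
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(3)
x = rng.normal(size=2000)                                # single confounder
t = (rng.random(2000) < 1 / (1 + np.exp(-x))).astype(float)  # biased assignment
ps = logistic_ps(x[:, None], t)

treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
pairs = [(i, control[np.argmin(np.abs(ps[control] - ps[i]))]) for i in treated]

# Covariate imbalance before vs after matching.
gap_before = abs(x[treated].mean() - x[control].mean())
gap_after = abs(x[treated].mean() - x[[j for _, j in pairs]].mean())
```

The treatment effect would then be estimated on the matched sample; in a real analysis one would also check overlap and use a caliper rather than unconditional nearest-neighbour matching.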

  10. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering. Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to also convey an understanding of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerfu...

  11. Statistics

    International Nuclear Information System (INIS)

    2003-01-01

    For the year 2002, part of the figures shown in the tables of the Energy Review are preliminary. The annual statistics of the Energy Review also include historical time-series over a longer period (see e.g. Energiatilastot 2001, Statistics Finland, Helsinki 2002). The applied energy units and conversion coefficients are shown in the inside back cover of the Review. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-June 2003, Energy exports by recipient country in January-June 2003, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Excise taxes, precautionary stock fees and oil pollution fees on energy products

  12. Statistics

    International Nuclear Information System (INIS)

    2000-01-01

    For the years 1999 and 2000, part of the figures shown in the tables of the Energy Review are preliminary or estimated. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g., Energiatilastot 1999, Statistics Finland, Helsinki 2000, ISSN 0785-3165). The inside of the Review's back cover shows the energy units and the conversion coefficients used for them. Explanatory notes to the statistical tables can be found after tables and figures. The figures present: Changes in the volume of GNP and energy consumption, Changes in the volume of GNP and electricity, Coal consumption, Natural gas consumption, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices for heat production, Fuel prices for electricity production, Carbon dioxide emissions, Total energy consumption by source and CO2 emissions, Electricity supply, Energy imports by country of origin in January-June 2000, Energy exports by recipient country in January-June 2000, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Average electricity price by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources and Energy taxes and precautionary stock fees on oil products

  13. Statistics

    International Nuclear Information System (INIS)

    2004-01-01

    For the years 2003 and 2004, the figures shown in the tables of the Energy Review are partly preliminary. The annual statistics of the Energy Review also include historical time series over a longer period (see e.g. Energiatilastot, Statistics Finland, Helsinki 2003, ISSN 0785-3165). The applied energy units and conversion coefficients are shown on the inside back cover of the Review. Explanatory notes to the statistical tables can be found after the tables and figures. The figures present: Changes in GDP, energy consumption and electricity consumption, Carbon dioxide emissions from fossil fuel use, Coal consumption, Consumption of natural gas, Peat consumption, Domestic oil deliveries, Import prices of oil, Consumer prices of principal oil products, Fuel prices in heat production, Fuel prices in electricity production, Price of electricity by type of consumer, Average monthly spot prices at the Nord Pool power exchange, Total energy consumption by source and CO2 emissions, Supply and total consumption of electricity (GWh), Energy imports by country of origin in January-March 2004, Energy exports by recipient country in January-March 2004, Consumer prices of liquid fuels, Consumer prices of hard coal, natural gas and indigenous fuels, Price of natural gas by type of consumer, Price of electricity by type of consumer, Price of district heating by type of consumer, Excise taxes, value added taxes and fiscal charges and fees included in consumer prices of some energy sources, and Excise taxes, precautionary stock fees and oil pollution fees

  14. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...
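The descriptive-statistics workflow the book covers can be sketched in plain Python using only the standard library; the data values below are hypothetical and stand in for any measured observable:

```python
import statistics

# Hypothetical sample data standing in for a measured observable.
data = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]

summary = {
    "mean": statistics.mean(data),
    "median": statistics.median(data),
    "stdev": statistics.stdev(data),  # sample standard deviation (n-1 denominator)
}
print(summary)
```

The same computation runs unchanged under Jython, which is the point the book makes: one Python code base on the Java platform.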

  15. Statistical analysis of dynamic parameters of the core

    International Nuclear Information System (INIS)

    Ionov, V.S.

    2007-01-01

    Transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored with statistical analysis tools. The recordings have sufficient duration and comprise a few channels for chamber currents and reactivity, together with some channels for technological parameters. From these values the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and from them the measured dynamic parameters were reconstructed as a result of the analysis. The mathematical methods of statistical analysis used were: approximation (A), filtration (F), rejection (R), estimation of descriptive statistics parameters (DSP), correlation characteristics (kk), regression analysis (KP), prognosis (P) and statistical criteria (SC). The calculation procedures were implemented in MATLAB. The sources of methodical and statistical errors are presented: inadequacy of the model, precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the processing technique for the registered data, etc. Examples of the results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered (Authors)

  16. Statistical learning in specific language impairment : A meta-analysis

    NARCIS (Netherlands)

    Lammertink, Imme; Boersma, Paul; Wijnen, Frank; Rispens, Judith

    2017-01-01

    Purpose: The current meta-analysis provides a quantitative overview of published and unpublished studies on statistical learning in the auditory verbal domain in people with and without specific language impairment (SLI). The database used for the meta-analysis is accessible online and open to

  17. Statistical analysis of planktic foraminifera of the surface Continental ...

    African Journals Online (AJOL)

    Planktic foraminiferal assemblages recorded from selected samples obtained from shallow continental shelf sediments off southwestern Nigeria were subjected to statistical analysis. Principal Component Analysis (PCA) was used to determine variants of planktic parameters. Values obtained for these parameters were ...

  18. PRECISE - pregabalin in addition to usual care: Statistical analysis plan

    NARCIS (Netherlands)

    S. Mathieson (Stephanie); L. Billot (Laurent); C. Maher (Chris); A.J. McLachlan (Andrew J.); J. Latimer (Jane); B.W. Koes (Bart); M.J. Hancock (Mark J.); I. Harris (Ian); R.O. Day (Richard O.); J. Pik (Justin); S. Jan (Stephen); C.-W.C. Lin (Chung-Wei Christine)

    2016-01-01

    textabstractBackground: Sciatica is a severe, disabling condition that lacks high quality evidence for effective treatment strategies. This a priori statistical analysis plan describes the methodology of analysis for the PRECISE study. Methods/design: PRECISE is a prospectively registered, double

  19. Using multivariate statistical analysis to assess changes in water ...

    African Journals Online (AJOL)

    2000; Evans et al., 2001; Kernan and Helliwell, 2001; Wright et al., 2001 .... Statistical analysis was used to examine the water quality at the five sites for ... An analysis of covariance (ANCOVA) was used to test for site (spatial) differences in water quality. To assess differences between sites, the ANCOVA compared the ...

  20. Directions for new developments on statistical design and analysis of small population group trials.

    Science.gov (United States)

    Hilgers, Ralf-Dieter; Roes, Kit; Stallard, Nigel

    2016-06-14

    Most statistical design and analysis methods for clinical trials have been developed and evaluated where at least several hundred patients could be recruited. These methods may not be suitable for evaluating therapies if the sample size is unavoidably small, a situation usually termed small populations. The specific sample size cut-off, where the standard methods fail, needs to be investigated. In this paper, the authors present their view on new developments for the design and analysis of clinical trials in small population groups, where conventional statistical methods may be inappropriate, e.g. because of lack of power or poor adherence to asymptotic approximations due to sample size restrictions. Following the EMA/CHMP guideline on clinical trials in small populations, we consider directions for new developments in the area of statistical methodology for the design and analysis of small population clinical trials. We relate the findings to the research activities of three projects, Asterix, IDeAl, and InSPiRe, which have received funding since 2013 within the FP7-HEALTH-2013-INNOVATION-1 framework of the EU. As not all aspects of the wide research area of small population clinical trials can be addressed, we focus on areas where we feel advances are needed and feasible. The general framework of the EMA/CHMP guideline on small population clinical trials stimulates a number of research areas. These serve as the basis for the three projects, Asterix, IDeAl, and InSPiRe, which use various approaches to develop new statistical methodology for the design and analysis of small population clinical trials. Small population clinical trials refer to trials with a limited number of patients. Small populations may result from rare diseases or specific subtypes of more common diseases. New statistical methodology needs to be tailored to these specific situations. The main results from the three projects will constitute a useful toolbox for improved design and analysis of small

  1. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  2. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
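The core idea of a divergence statistic, quantifying the discrepancy between an observed empirical distribution and a theoretical "ideal" one, can be sketched with the Kullback-Leibler divergence. This is an illustrative stand-in, not the VTK engine's actual API, and the counts below are hypothetical:

```python
import math
from collections import Counter

def kl_divergence(empirical, theoretical):
    """D_KL(P || Q) for discrete distributions given as dicts of probabilities."""
    return sum(p * math.log(p / theoretical[k])
               for k, p in empirical.items() if p > 0)

# Hypothetical observed counts versus a uniform "ideal" model over 4 bins.
counts = Counter({"a": 30, "b": 25, "c": 25, "d": 20})
n = sum(counts.values())
empirical = {k: c / n for k, c in counts.items()}
uniform = {k: 0.25 for k in counts}

print(kl_divergence(empirical, uniform))  # zero only when the distributions match
```

Like a distance, the divergence is non-negative and vanishes exactly when the observed distribution matches the model, though it is not symmetric.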

  3. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA means-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
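A means-comparison test of the kind listed in this record can be sketched with a hand-rolled Welch t statistic. The survey scores below are hypothetical, and this is only an illustration of the idea; the paper's actual analysis used standard statistical software:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing two group means (unequal variances)."""
    ma, mb = statistics.mean(sample_a), statistics.mean(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    return (ma - mb) / se

# Hypothetical Likert-scale safety-culture scores for two departments.
dept_a = [4, 5, 4, 3, 5, 4, 4, 5]
dept_b = [3, 3, 2, 4, 3, 2, 3, 3]
print(welch_t(dept_a, dept_b))  # large |t| suggests the group means differ
```

In practice the statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value.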

  4. Longitudinal data analysis a handbook of modern statistical methods

    CERN Document Server

    Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert

    2008-01-01

    Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint

  5. Highly Robust Statistical Methods in Medical Image Analysis

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2012-01-01

    Roč. 32, č. 2 (2012), s. 3-16 ISSN 0208-5216 R&D Projects: GA MŠk(CZ) 1M06014 Institutional research plan: CEZ:AV0Z10300504 Keywords : robust statistics * classification * faces * robust image analysis * forensic science Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.208, year: 2012 http://www.ibib.waw.pl/bbe/bbefulltext/BBE_32_2_003_FT.pdf

  6. Network similarity and statistical analysis of earthquake seismic data

    OpenAIRE

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We cal...

  7. Towards proper sampling and statistical analysis of defects

    Directory of Open Access Journals (Sweden)

    Cetin Ali

    2014-06-01

    Advancements in applied statistics with great relevance to defect sampling and analysis are presented. Three main issues are considered: (i) proper handling of multiple defect types, (ii) relating sample data originating from polished inspection surfaces (2D) to finite material volumes (3D), and (iii) application of advanced extreme value theory in the statistical analysis of block-maximum data. Original and rigorous, but practical, mathematical solutions are presented. Finally, these methods are applied to make predictions regarding defect sizes in a steel alloy containing multiple defect types.
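The block-maximum approach can be illustrated with a method-of-moments Gumbel fit, a simple special case of extreme value analysis (the paper itself uses more advanced theory). The defect sizes below are hypothetical:

```python
import math
import statistics

# In extreme value theory, the largest defect per inspection block is often
# modeled with a Gumbel (extreme value type I) distribution.
EULER_GAMMA = 0.5772156649

def gumbel_fit(block_maxima):
    """Method-of-moments estimates (location mu, scale beta) from mean and stdev."""
    beta = statistics.stdev(block_maxima) * math.sqrt(6) / math.pi
    mu = statistics.mean(block_maxima) - EULER_GAMMA * beta
    return mu, beta

def gumbel_quantile(mu, beta, p):
    """Size not exceeded with probability p (a return-level estimate)."""
    return mu - beta * math.log(-math.log(p))

# Hypothetical maximum defect size (arbitrary units) per polished block.
maxima = [42.0, 55.3, 38.7, 61.2, 47.9, 50.4, 44.8, 58.1, 41.5, 53.0]
mu, beta = gumbel_fit(maxima)
print(gumbel_quantile(mu, beta, 0.99))  # size exceeded in roughly 1 block in 100
```

The 0.99 quantile extrapolates beyond the observed maxima, which is exactly the kind of prediction block-maximum analysis is used for.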

  8. Adaptive strategy for the statistical analysis of connectomes.

    Directory of Open Access Journals (Sweden)

    Djalel Eddine Meskaldji

    We study an adaptive statistical approach to analyze brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach sits at a middle level between a global analysis and a single-connection analysis, by considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two brain anatomical regions or the intra-connectivity within the same brain anatomical region. An appropriate summary statistic, one that characterizes a meaningful feature of the subnetwork, is evaluated. Based on this summary statistic, a statistical test is performed to derive the corresponding p-value. The reformulation of the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy with the one based on individual measures in terms of power. We show that this strategy has great potential, in particular in cases where the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare structural brain connection matrices of two groups of subjects with a 22q11.2 deletion syndrome, distinguished by their IQ scores.
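The false-discovery-rate correction step described in this record can be sketched with the Benjamini-Hochberg step-up procedure, a standard FDR-control method; the record does not specify which correction the authors used, and the p-values below are hypothetical:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha (BH step-up)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k = 0
    # Step-up: find the largest rank whose p-value clears its BH threshold.
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])

# Hypothetical p-values, one per subnetwork summary statistic.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.74]
print(benjamini_hochberg(pvals))  # → [0, 1]
```

Only the significant subnetworks surviving this correction would then be opened up for the local, connection-level investigation.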

  9. Statistical margin to DNB safety analysis approach for LOFT

    International Nuclear Information System (INIS)

    Atkinson, S.A.

    1982-01-01

    A method was developed and used for LOFT thermal safety analysis to estimate the statistical margin to DNB for the hot rod, and to base safety analysis on desired DNB probability limits. This method is an advanced approach using response surface analysis methods, a very efficient experimental design, and a 2nd-order response surface equation with a 2nd-order error propagation analysis to define the MDNBR probability density function. Calculations for limiting transients were used in the response surface analysis thereby including transient interactions and trip uncertainties in the MDNBR probability density

  10. Inappropriate self-medication among adolescents and its association with lower medication literacy and substance use

    OpenAIRE

    Lee, Chun-Hsien; Chang, Fong-Ching; Hsu, Sheng-Der; Chi, Hsueh-Yun; Huang, Li-Jung; Yeh, Ming-Kung

    2017-01-01

    Background While self-medication is common, inappropriate self-medication has potential risks. This study assesses inappropriate self-medication among adolescents and examines the relationships among medication literacy, substance use, and inappropriate self-medication. Method In 2016, a national representative sample of 6,226 students from 99 primary, middle, and high schools completed an online self-administered questionnaire. Multiple logistic regression analysis was used to examine factor...

  11. Data analysis using the Gnu R system for statistical computation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, James; /Fermilab

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.

  12. Statistical analysis of Geopotential Height (GH) timeseries based on Tsallis non-extensive statistical mechanics

    Science.gov (United States)

    Karakatsanis, L. P.; Iliopoulos, A. C.; Pavlos, E. G.; Pavlos, G. P.

    2018-02-01

    In this paper, we perform statistical analysis of time series deriving from Earth's climate. The time series concern Geopotential Height (GH) and correspond to temporal and spatial components of the global distribution of monthly average values during the period 1948-2012. The analysis is based on Tsallis non-extensive statistical mechanics and in particular on the estimation of Tsallis' q-triplet, namely {qstat, qsens, qrel}, the reconstructed phase space, and the estimation of the correlation dimension and of the Hurst exponent from rescaled range analysis (R/S). The deviation of the Tsallis q-triplet from unity indicates a non-Gaussian (Tsallis q-Gaussian) non-extensive character, with heavy-tailed probability density functions (PDFs), multifractal behavior and long-range dependences for all the time series considered. Noticeable differences in the q-triplet estimates were also found between time series from distinct spatial or temporal regions. Moreover, the reconstructed phase space revealed a lower-dimensional fractal set in the GH dynamical phase space (strong self-organization), and the estimation of the Hurst exponent indicated multifractality, non-Gaussianity and persistence. The analysis gives significant information for identifying and characterizing the dynamical characteristics of Earth's climate.
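The Hurst-exponent estimation via rescaled range (R/S) analysis mentioned here can be sketched with a simplified estimator; this is an illustration on synthetic white noise, not the authors' implementation:

```python
import math
import random

def rescaled_range(window):
    """R/S statistic for one window: range of cumulative deviations over stdev."""
    n = len(window)
    mean = sum(window) / n
    cum, devs = 0.0, []
    for x in window:
        cum += x - mean
        devs.append(cum)
    r = max(devs) - min(devs)
    s = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    return r / s

def hurst(series, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent H as the slope of log(R/S) versus log(n)."""
    xs, ys = [], []
    for w in window_sizes:
        rs = [rescaled_range(series[i:i + w])
              for i in range(0, len(series) - w + 1, w)]
        xs.append(math.log(w))
        ys.append(math.log(sum(rs) / len(rs)))
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

random.seed(1)
white_noise = [random.gauss(0, 1) for _ in range(1024)]
print(hurst(white_noise))  # theory: H near 0.5 for uncorrelated noise
```

Persistent series (as reported for the GH data) would instead yield H above 0.5.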

  13. Three Cases With Inappropriate TSH Syndrome

    Directory of Open Access Journals (Sweden)

    Hatice Sebila Dökmetaş

    2012-12-01

    Inappropriate thyroid-stimulating hormone (TSH) syndrome, or central hyperthyroidism, is a rare disorder characterized by inappropriately normal or elevated levels of TSH and elevated levels of T3 and T4. The syndrome is associated with TSH-secreting pituitary adenoma (TSHoma) or thyroid hormone resistance (THR). The thyrotropin-releasing hormone (TRH) stimulation test and the T3 suppression test can be useful for the differential diagnosis of central hyperthyroidism. In the present study, we report three cases of inappropriate TSH syndrome diagnosed after TRH stimulation and T3 suppression tests. Turk Jem 2012; 16: 105-8

  14. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful

  15. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

    Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR < 0.001 and 0.001, respectively. Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.

  16. Using multivariate statistical analysis to assess changes in water ...

    African Journals Online (AJOL)

    Multivariate statistical analysis was used to investigate changes in water chemistry at 5 river sites in the Vaal Dam catchment, draining the Highveld grasslands. These grasslands receive more than 8 kg sulphur (S) ha-1·year-1 and 6 kg nitrogen (N) ha-1·year-1 via atmospheric deposition. It was hypothesised that between ...

  17. Statistical Compilation of the ICT Sector and Policy Analysis | CRDI ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Statistical Compilation of the ICT Sector and Policy Analysis. As the presence and influence of information and communication technologies (ICTs) continues to widen and deepen, so too does its impact on economic development. However, much work needs to be done before the linkages between economic development ...

  18. Multivariate statistical analysis of major and trace element data for ...

    African Journals Online (AJOL)

    Multivariate statistical analysis of major and trace element data for niobium exploration in the peralkaline granites of the anorogenic ring-complex province of Nigeria. PO Ogunleye, EC Ike, I Garba. No abstract available. Journal of Mining and Geology Vol. 40(2) 2004: 107-117.

  19. Multiple defect distributions on weibull statistical analysis of fatigue ...

    African Journals Online (AJOL)

    By relaxing the assumptions of a single cast defect distribution, of uniformity throughout the material and of uniformity from specimen to specimen, Weibull statistical analysis for multiple defect distributions has been applied to correctly describe the fatigue life data of aluminium alloy castings having multiple cast defects ...

  20. Toward a theory of statistical tree-shape analysis

    DEFF Research Database (Denmark)

    Feragen, Aasa; Lo, Pechin Chien Pau; de Bruijne, Marleen

    2013-01-01

    has nice geometric properties which are needed for statistical analysis: geodesics always exist, and are generically locally unique. Following this we can also show existence and generic local uniqueness of average trees for QED. TED, while having some algorithmic advantages, does not share...

  1. French University Libraries 1988-1998: A Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Gernot U. Gabel

    2001-07-01

    Based on an analysis of statistical data from the past decade, published annually by the French Ministry of Education (Annuaire des bibliothèques universitaires), the article gives an overview of developments with regard to buildings, personnel, services, acquisitions and collections of French university libraries during the last decade.

  2. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Thermal conductivity measurements of nanofluids were analysed via two-factor completely randomized design and comparison of data means is carried out with Duncan's multiple-range test. Statistical analysis of experimental data show that temperature and weight fraction have a reasonable impact on the thermal ...

  3. Statistical analysis of thermal conductivity of nanofluid containing ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, we report for the first time the statistical analysis of thermal conductivity of nanofluids containing TiO2 nanoparticles, pristine MWCNTs and decorated MWCNTs with different amounts of TiO2 nanoparticles. The functionalized MWCNT and synthesized hybrid of MWCNT–TiO2 were characterized using ...

  4. Evaluation of Statistical Models for Analysis of Insect, Disease and ...

    African Journals Online (AJOL)

    It is concluded that LMMs and GLMs simultaneously consider the effect of treatments and heterogeneity of variance and hence are more appropriate for analysis of abundance and incidence data than ordinary ANOVA. Keywords: Mixed Models; Generalized Linear Models; Statistical Power East African Journal of Sciences ...

  5. A Statistical Analysis of Women's Perceptions on Politics and Peace ...

    African Journals Online (AJOL)

    This article is a statistical analysis of the perception that more women in politics would enhance peace building. The data was drawn from a comparative survey of 325 women and four men (community leaders) in the regions of the Niger Delta (Nigeria) and KwaZulu-Natal (South Africa). According to the findings, the ...

  6. Implementation and statistical analysis of Metropolis algorithm for SU(3)

    International Nuclear Information System (INIS)

    Katznelson, E.; Nobile, A.

    1984-12-01

    In this paper we study the statistical properties of an implementation of the Metropolis algorithm for SU(3) gauge theory. It is shown that the results have a normal distribution. We demonstrate that in this case error analysis can be carried out in a simple way, and we show that applying it to both the measurement strategy and the output data analysis has an important influence on the performance and reliability of the simulation. (author)

  7. Reducing bias in the analysis of counting statistics data

    International Nuclear Information System (INIS)

    Hammersley, A.P.; Antoniadis, A.

    1997-01-01

    In the analysis of counting statistics data it is common practice to estimate the variance of each measured data point by the data point itself. This practice introduces a bias into the results of further analysis which may be significant, and under certain circumstances lead to false conclusions. In the case of normal weighted least squares fitting this bias is quantified, and methods to avoid it are proposed. (orig.)

  8. Statistical analysis of stretch film production process capabilities

    OpenAIRE

    Kovačić, Goran; Kondić, Živko

    2012-01-01

    The basic concept of statistical process control is the comparison of data collected from the process with calculated control limits, and drawing conclusions about the process from that comparison. It is recognized as a modern method for the analysis of process capabilities via different capability indexes. This paper describes the application of this method in the monitoring and analysis of stretch film production process capabilities.
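The capability indexes mentioned here can be sketched as the usual Cp and Cpk, computed from the sample mean and standard deviation against specification limits; the thickness data and limits below are hypothetical, not the paper's measurements:

```python
import statistics

def capability_indexes(data, lsl, usl):
    """Cp and Cpk for a process sample against lower/upper specification limits."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)                 # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)    # capability allowing for centering
    return cp, cpk

# Hypothetical film-thickness measurements (microns) with specification 20 +/- 3.
thickness = [19.8, 20.1, 20.4, 19.9, 20.0, 20.2, 19.7, 20.3, 20.0, 19.6]
print(capability_indexes(thickness, lsl=17.0, usl=23.0))
```

Cpk equals Cp only when the process is perfectly centered between the limits; a Cpk well above 1.33 is a common acceptance criterion.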

  9. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  10. Inappropriate eating behavior: a longitudinal study with female adolescents

    Directory of Open Access Journals (Sweden)

    Leonardo de Sousa Fortes

    2014-03-01

    Full Text Available Objective: To evaluate the inappropriate eating behaviors (IEB of female adolescents over a one-year period. Methods: 290 adolescents aged between 11 and 14 years old participated in the three research stages (T1: first four months, T2: second four months and T3: third four months. The Eating Attitudes Test (EAT-26 was applied to assess the IEB. Weight and height were measured to calculate body mass index (BMI in the three study periods. Analysis of variance for repeated measures was used to analyze the data, adjusted for the scores of the Body Shape Questionnaire and the Brazil Economic Classification Criteria. Results: Girls at T1 showed a higher frequency of IEB compared to T2 (p=0.001 and T3 (p=0.001. The findings also indicated higher values for BMI in T3 in relation to T1 (p=0.04. The other comparisons did not show statistically significant differences. Conclusions: IEB scores of female adolescents declined over one year.

  11. Primary Sjogren's syndrome associated with inappropriate ...

    African Journals Online (AJOL)

    A patient in whom primary Sjogren's syndrome and inappropriate antidiuretic hormone secretion were associated is reported. This is the first report of such an association. The possible pathophysiological mechanisms are discussed and vasculitis proposed as the underlying pathogenetic mechanism.

  12. Statistical analysis of absorptive laser damage in dielectric thin films

    Energy Technology Data Exchange (ETDEWEB)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-09-11

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze time-to-damage and intensity-to-damage statistics obtained when irradiating thin-film-coated samples of SiO₂, ZrO₂, and Al₂O₃ with tightly focused laser beams. The data used are furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained.
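    As a sketch of the kind of fit described above (not the authors' data, which came from Milam), a two-parameter Weibull can be fitted to synthetic time-to-damage values with scipy; the shape and scale used to generate the sample are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic "time-to-damage" sample from a Weibull with shape 1.8, scale 5.0
# (arbitrary units); stands in for the laser-damage data in the record.
times = rng.weibull(1.8, size=200) * 5.0

# Two-parameter maximum-likelihood fit: location fixed at 0,
# as is usual for lifetime data.
shape, loc, scale = stats.weibull_min.fit(times, floc=0)
```

    With a couple of hundred samples the fitted shape and scale land close to the generating values, which is the sense in which such fits are "excellent" for well-behaved lifetime data.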

  13. Statistical analysis of absorptive laser damage in dielectric thin films

    International Nuclear Information System (INIS)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-01-01

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze time-to-damage and intensity-to-damage statistics obtained when irradiating thin-film-coated samples of SiO₂, ZrO₂, and Al₂O₃ with tightly focused laser beams. The data used are furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained.

  14. Inappropriate prescribing: criteria, detection and prevention.

    LENUS (Irish Health Repository)

    O'Connor, Marie N

    2012-06-01

    Inappropriate prescribing is highly prevalent in older people and is a major healthcare concern because of its association with negative healthcare outcomes including adverse drug events, related morbidity and hospitalization. With changing population demographics resulting in increasing proportions of older people worldwide, improving the quality and safety of prescribing in older people poses a global challenge. To date a number of different strategies have been used to identify potentially inappropriate prescribing in older people. Over the last two decades, a number of criteria have been published to assist prescribers in detecting inappropriate prescribing, the majority of which have been explicit sets of criteria, though some are implicit. The majority of these prescribing indicators pertain to overprescribing and misprescribing, with only a minority focussing on the underprescribing of indicated medicines. Additional interventions to optimize prescribing in older people include comprehensive geriatric assessment, clinical pharmacist review, and education of prescribers as well as computerized prescribing with clinical decision support systems. In this review, we describe the inappropriate prescribing detection tools or criteria most frequently cited in the literature and examine their role in preventing inappropriate prescribing and other related healthcare outcomes. We also discuss other measures commonly used in the detection and prevention of inappropriate prescribing in older people and the evidence supporting their use and their application in everyday clinical practice.

  15. Drug Utilization and Inappropriate Prescribing in Centenarians.

    Science.gov (United States)

    Hazra, Nisha C; Dregan, Alex; Jackson, Stephen; Gulliford, Martin C

    2016-05-01

    To use primary care electronic health records (EHRs) to evaluate prescriptions and inappropriate prescribing in men and women at age 100. Population-based cohort study. Primary care database in the United Kingdom, 1990 to 2013. Individuals reaching the age of 100 between 1990 and 2013 (N = 11,084; n = 8,982 women, n = 2,102 men). Main drug classes prescribed and potentially inappropriate prescribing according to the 2012 American Geriatrics Society Beers Criteria. At the age of 100, 73% of individuals (79% of women, 54% of men) had received one or more prescription drugs, with a median of 7 (interquartile range 0-12) prescription items. The most frequently prescribed drug classes were cardiovascular (53%), central nervous system (CNS) (53%), and gastrointestinal (47%). Overall, 32% of participants (28% of men, 32% of women) who received drug prescriptions may have received one or more potentially inappropriate prescriptions, with temazepam and amitriptyline being the most frequent. CNS prescriptions were potentially inappropriate in 23% of individuals, and anticholinergic prescriptions were potentially inappropriate in 18% of individuals. The majority of centenarians are prescribed one or more drug therapies, and the prescription may be inappropriate for up to one-third of these individuals. Research using EHRs offers opportunities to understand prescribing trends and improve pharmacological care of the oldest adults. © 2016 The Authors. The Journal of the American Geriatrics Society published by Wiley Periodicals, Inc. on behalf of The American Geriatrics Society.

  16. Statistical analysis for coded aperture γ-ray telescope

    International Nuclear Information System (INIS)

    Ducros, G.; Ducros, R.

    1984-01-01

    We have developed a statistical analysis of the image recorded by a position-sensitive detector associated with a coded mask for the French gamma-ray satellite SIGMA, in the energy range 20-2000 keV. The aperture of the telescope is not limited to the size of the mask. In the first part, we describe the principle of the image analysis, based on the least-squares method with a fit function generated and tested term after term. The statistical test is based on the F distribution, applied to the relative improvement of χ² when the fit function gains an additional term. The second part deals with digital processing aspects: the adjustment of the method to reduce computation time, and the analysis results of two simulated images. (orig.)
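    The term-by-term F test sketched in the abstract can be illustrated on synthetic data: fit polynomials of increasing degree and test whether the drop in χ² justifies the extra term. The data and noise level here are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
# Truth is quadratic; compare a linear fit against a quadratic fit.
y = 1.0 + 2.0 * x + 3.0 * x ** 2 + rng.normal(0, 0.05, x.size)

def chi2(deg):
    """Residual sum of squares for a degree-`deg` polynomial fit."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return np.sum(resid ** 2)

chi2_lin, chi2_quad = chi2(1), chi2(2)

# F statistic for one added parameter; n - 3 residual degrees of freedom
# remain after fitting the three quadratic coefficients.
n = x.size
F = (chi2_lin - chi2_quad) / (chi2_quad / (n - 3))
p = stats.f.sf(F, 1, n - 3)
```

    A small p-value says the added term improves the fit beyond what chance alone would allow, so the term is kept; otherwise term generation stops.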

  17. Data management and statistical analysis for environmental assessment

    International Nuclear Information System (INIS)

    Wendelberger, J.R.; McVittie, T.I.

    1995-01-01

    Data management and statistical analysis for environmental assessment are important issues on the interface of computer science and statistics. Data collection for environmental decision making can generate large quantities of various types of data. A database/GIS system is described which provides efficient data storage as well as visualization tools that may be integrated into the data analysis process. FIMAD is a living database and GIS system. The system has changed and developed over time to meet the needs of the Los Alamos National Laboratory Restoration Program. The system provides a repository for data which may be accessed by different individuals for different purposes. The database structure is driven by the large amount and varied types of data required for environmental assessment. The integration of the database with the GIS system provides the foundation for powerful visualization and analysis capabilities.

  18. Explorations in statistics: the analysis of ratios and normalized data.

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-09-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of Explorations in Statistics explores the analysis of ratios and normalized (or standardized) data. As researchers, we compute a ratio (a numerator divided by a denominator) to compute a proportion for some biological response or to derive some standardized variable. In each situation, we want to control for differences in the denominator when the thing we really care about is the numerator. But there is peril lurking in a ratio: only if the relationship between numerator and denominator is a straight line through the origin will the ratio be meaningful. If not, the ratio will misrepresent the true relationship between numerator and denominator. In contrast, regression techniques, which include analysis of covariance, are versatile: they can accommodate an analysis of the relationship between numerator and denominator when a ratio is useless.
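    The peril described above is easy to demonstrate: when the numerator-denominator relationship is a straight line that misses the origin, the ratio drifts with the denominator while regression recovers the true slope and intercept. The numbers below are invented.

```python
import numpy as np

# Straight-line relationship that does NOT pass through the origin.
x = np.linspace(1.0, 10.0, 50)   # denominator (e.g. body mass)
y = 4.0 + 2.0 * x                # numerator (e.g. response)

# The ratio drifts with x: 6.0 at x = 1 down toward 2.4 at x = 10,
# even though the underlying relationship is perfectly linear.
ratios = y / x

# Regression recovers the true slope (2.0) and intercept (4.0).
slope, intercept = np.polyfit(x, y, 1)
```

    Because the intercept is nonzero, ranking subjects by the ratio would systematically penalise those with large denominators, which is exactly the misrepresentation the installment warns about.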

  19. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  20. SMART: Statistical Metabolomics Analysis-An R Tool.

    Science.gov (United States)

    Liang, Yu-Jen; Lin, Yu-Ting; Chen, Chia-Wei; Lin, Chien-Wei; Chao, Kun-Mao; Pan, Wen-Harn; Yang, Hsin-Chou

    2016-06-21

    Metabolomics data provide unprecedented opportunities to decipher metabolic mechanisms by analyzing hundreds to thousands of metabolites. Data quality concerns and complex batch effects in metabolomics must be appropriately addressed through statistical analysis. This study developed an integrated analysis tool for metabolomics studies to streamline the complete analysis flow from initial data preprocessing to downstream association analysis. We developed Statistical Metabolomics Analysis-An R Tool (SMART), which can analyze input files with different formats, visually represent various types of data features, implement peak alignment and annotation, conduct quality control for samples and peaks, explore batch effects, and perform association analysis. A pharmacometabolomics study of antihypertensive medication was conducted and data were analyzed using SMART. Neuromedin N was identified as a metabolite significantly associated with angiotensin-converting-enzyme inhibitors in our metabolome-wide association analysis (p = 1.56 × 10⁻⁴ in an analysis of covariance (ANCOVA) with an adjustment for unknown latent groups and p = 1.02 × 10⁻⁴ in an ANCOVA with an adjustment for hidden substructures). This endogenous neuropeptide is highly related to neurotensin and neuromedin U, which are involved in blood pressure regulation and smooth muscle contraction. The SMART software, a user guide, and example data can be downloaded from http://www.stat.sinica.edu.tw/hsinchou/metabolomics/SMART.htm.

  1. Higher Education in Persons with Disabilities: Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Arzhanykh E.V.,

    2017-08-01

    Full Text Available The paper presents statistical research data on teaching and learning in individuals with disabilities enrolled in higher education programmes. The analysis is based on the information drawn from a statistical form VPO-1 “Information on educational organization offering bachelor’s, master’s and specialist programmes in higher education”. The following indicators were analysed: the dynamics of the number of students with disabilities studying at universities; distribution of students according to the level of higher education and the type of their disability; distribution of students according to the chosen profession; and the data collected in the Russian regions. The paper concludes that even though the available statistical data do not allow for a fully comprehensive exploration of the subject of higher education in students with disabilities, the scope of the accessible information is reasonably wide.

  2. Statistical analysis of surgical pathology data using the R program.

    Science.gov (United States)

    Cuff, Justin; Higgins, John P T

    2012-05-01

    An understanding of statistics is essential for analysis of many types of data including data sets typically reported in surgical pathology research papers. Fortunately, a relatively small number of statistical tests apply to data relevant to surgical pathologists. An understanding of when to apply these tests would greatly benefit surgical pathologists who read and/or write papers. In this review, we show how the publicly available statistical program R can be used to analyze recently published surgical pathology papers to replicate the p-values and survival curves presented in these papers. Areas covered include: T-test, chi-square and Fisher exact tests of proportionality, Kaplan-Meier survival curves, the log rank test, and Cox proportional hazards.
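    The same small toolkit of tests the review replicates in R is available elsewhere; as a sketch, the scipy equivalents applied to an invented 2×2 contingency table and two invented measurement groups:

```python
from scipy import stats

# Hypothetical 2x2 table: marker-positive/negative vs tumour recurrence.
table = [[12, 5],
         [7, 18]]

# Chi-square and Fisher exact tests of proportionality.
chi2, chi2_p, dof, expected = stats.chi2_contingency(table)
odds_ratio, fisher_p = stats.fisher_exact(table)

# Two-sample t-test on made-up continuous measurements
# (e.g. a biomarker level in two patient groups).
a = [1.2, 1.4, 1.1, 1.5, 1.3]
b = [1.8, 2.0, 1.7, 1.9, 2.1]
t_stat, t_p = stats.ttest_ind(a, b)
```

    For small tables the Fisher exact p-value is preferred over the chi-square approximation; survival methods (Kaplan-Meier, log rank, Cox) need a dedicated package in Python, whereas base R's `survival` library covers them directly.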

  3. Building the Community Online Resource for Statistical Seismicity Analysis (CORSSA)

    Science.gov (United States)

    Michael, A. J.; Wiemer, S.; Zechar, J. D.; Hardebeck, J. L.; Naylor, M.; Zhuang, J.; Steacy, S.; Corssa Executive Committee

    2010-12-01

    Statistical seismology is critical to the understanding of seismicity, the testing of proposed earthquake prediction and forecasting methods, and the assessment of seismic hazard. Unfortunately, despite its importance to seismology - especially to those aspects with great impact on public policy - statistical seismology is mostly ignored in the education of seismologists, and there is no central repository for the existing open-source software tools. To remedy these deficiencies, and with the broader goal to enhance the quality of statistical seismology research, we have begun building the Community Online Resource for Statistical Seismicity Analysis (CORSSA). CORSSA is a web-based educational platform that is authoritative, up-to-date, prominent, and user-friendly. We anticipate that the users of CORSSA will range from beginning graduate students to experienced researchers. More than 20 scientists from around the world met for a week in Zurich in May 2010 to kick-start the creation of CORSSA: the format and initial table of contents were defined; a governing structure was organized; and workshop participants began drafting articles. CORSSA materials are organized with respect to six themes, each containing between four and eight articles. The CORSSA web page, www.corssa.org, officially unveiled on September 6, 2010, debuts with an initial set of approximately 10 to 15 articles available online for viewing and commenting with additional articles to be added over the coming months. Each article will be peer-reviewed and will present a balanced discussion, including illustrative examples and code snippets. Topics in the initial set of articles will include: introductions to both CORSSA and statistical seismology, basic statistical tests and their role in seismology; understanding seismicity catalogs and their problems; basic techniques for modeling seismicity; and methods for testing earthquake predictability hypotheses. A special article will compare and review

  4. Software for statistical data analysis used in Higgs searches

    International Nuclear Information System (INIS)

    Gumpert, Christian; Moneta, Lorenzo; Cranmer, Kyle; Kreiss, Sven; Verkerke, Wouter

    2014-01-01

    The analysis and interpretation of data collected by the Large Hadron Collider (LHC) requires advanced statistical tools in order to quantify the agreement between observation and theoretical models. RooStats is a project providing a statistical framework for data analysis with the focus on discoveries, confidence intervals and combination of different measurements in both Bayesian and frequentist approaches. It employs the RooFit data modelling language where mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. RooStats and RooFit rely on the persistency technology of the ROOT framework. The usage of a common data format enables the concept of digital publishing of complicated likelihood functions. The statistical tools have been developed in close collaboration with the LHC experiments to ensure their applicability to real-life use cases. Numerous physics results have been produced using the RooStats tools, with the discovery of the Higgs boson by the ATLAS and CMS experiments being certainly the most popular among them. We will discuss tools currently used by LHC experiments to set exclusion limits, to derive confidence intervals and to estimate discovery significances based on frequentist statistics and the asymptotic behaviour of likelihood functions. Furthermore, new developments in RooStats and performance optimisation necessary to cope with complex models depending on more than 1000 variables will be reviewed

  5. Statistical analysis of the Ft. Calhoun reactor coolant pump system

    International Nuclear Information System (INIS)

    Heising, Carolyn D.

    1998-01-01

    In engineering science, statistical quality control techniques have traditionally been applied to control manufacturing processes. An application to commercial nuclear power plant maintenance and control is presented that can greatly improve plant safety. As a demonstration of such an approach to plant maintenance and control, a specific system is analyzed: the reactor coolant pumps (RCPs) of the Ft. Calhoun nuclear power plant. This research uses capability analysis, Shewhart X-bar and R charts, canonical correlation methods, and design of experiments to analyze the process for the state of statistical control. The results obtained show that six out of ten parameters are within control specification limits and four parameters are not in a state of statistical control. The analysis shows that statistical process control methods can be applied as an early warning system capable of identifying significant equipment problems well in advance of traditional control room alarm indicators. Such a system would provide operators with ample time to respond to possible emergency situations and thus improve plant safety and reliability. (author)
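    The Shewhart X-bar and R charts used in the study reduce to simple limit calculations; the subgroup data here are invented, and the constants A2, D3, D4 are the standard tabulated values for subgroups of size five.

```python
# Invented subgroups of five measurements each (e.g. a pump parameter
# sampled per shift).
subgroups = [
    [5.1, 5.0, 5.2, 4.9, 5.0],
    [5.0, 5.1, 5.0, 5.2, 5.1],
    [4.9, 5.0, 5.1, 5.0, 4.8],
    [5.2, 5.1, 5.0, 5.1, 5.2],
]
A2, D3, D4 = 0.577, 0.0, 2.114  # tabulated control-chart constants, n = 5

xbars = [sum(g) / len(g) for g in subgroups]      # subgroup means
ranges = [max(g) - min(g) for g in subgroups]     # subgroup ranges
xbarbar = sum(xbars) / len(xbars)                 # grand mean (centre line)
rbar = sum(ranges) / len(ranges)                  # mean range

# Control limits for the X-bar chart and the R chart.
ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar
```

    A parameter is flagged as out of statistical control when subgroup means or ranges fall outside these limits, which is the early-warning signal the study proposes for the RCP system.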

  6. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms-A Nonlinear Statistical Analysis with Cross Validation.

    Science.gov (United States)

    Adams, James; Howsmon, Daniel P; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. "Leave-one-out" cross-validation was used to ensure statistical independence of results. Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate Speech), but significant associations were found
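    Leave-one-out cross-validation, as used in the study above, refits the classifier once per sample with that sample held out, so each prediction is statistically independent of the data that produced it. A minimal sketch on an invented one-feature data set with a nearest-centroid classifier (a stand-in, not the authors' multivariate method):

```python
# Invented (feature, label) pairs; labels 0 and 1 play the roles of
# "neurotypical" and "ASD" groups.
data = [(0.9, 0), (1.1, 0), (1.0, 0), (0.8, 0),
        (2.0, 1), (2.2, 1), (2.1, 1), (1.9, 1)]

correct = 0
for i, (x, y) in enumerate(data):
    train = data[:i] + data[i + 1:]            # hold out sample i
    c0 = [v for v, lab in train if lab == 0]
    c1 = [v for v, lab in train if lab == 1]
    m0, m1 = sum(c0) / len(c0), sum(c1) / len(c1)  # class centroids
    pred = 0 if abs(x - m0) < abs(x - m1) else 1   # nearest centroid
    correct += (pred == y)

accuracy = correct / len(data)
```

    The Type I/II error rates reported in the record are one minus the per-class versions of this held-out accuracy.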

  7. Significant Association of Urinary Toxic Metals and Autism-Related Symptoms—A Nonlinear Statistical Analysis with Cross Validation

    Science.gov (United States)

    Adams, James; Kruger, Uwe; Geis, Elizabeth; Gehn, Eva; Fimbres, Valeria; Pollard, Elena; Mitchell, Jessica; Ingram, Julie; Hellmers, Robert; Quig, David; Hahn, Juergen

    2017-01-01

    Introduction A number of previous studies examined a possible association of toxic metals and autism, and over half of those studies suggest that toxic metal levels are different in individuals with Autism Spectrum Disorders (ASD). Additionally, several studies found that those levels correlate with the severity of ASD. Methods In order to further investigate these points, this paper performs the most detailed statistical analysis to date of a data set in this field. First morning urine samples were collected from 67 children and adults with ASD and 50 neurotypical controls of similar age and gender. The samples were analyzed to determine the levels of 10 urinary toxic metals (UTM). Autism-related symptoms were assessed with eleven behavioral measures. Statistical analysis was used to distinguish participants on the ASD spectrum and neurotypical participants based upon the UTM data alone. The analysis also included examining the association of autism severity with toxic metal excretion data using linear and nonlinear analysis. “Leave-one-out” cross-validation was used to ensure statistical independence of results. Results and Discussion Average excretion levels of several toxic metals (lead, tin, thallium, antimony) were significantly higher in the ASD group. However, ASD classification using univariate statistics proved difficult due to large variability, but nonlinear multivariate statistical analysis significantly improved ASD classification with Type I/II errors of 15% and 18%, respectively. These results clearly indicate that the urinary toxic metal excretion profiles of participants in the ASD group were significantly different from those of the neurotypical participants. Similarly, nonlinear methods determined a significantly stronger association between the behavioral measures and toxic metal excretion. The association was strongest for the Aberrant Behavior Checklist (including subscales on Irritability, Stereotypy, Hyperactivity, and Inappropriate

  8. Collagen morphology and texture analysis: from statistics to classification

    Science.gov (United States)

    Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.

    2013-07-01

    In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage.
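    The first-order statistics and GLCM features described above can be sketched on a toy image; the 4×4 array and the horizontal offset (0, 1) are invented for illustration.

```python
import numpy as np

# Toy 4x4 grey-level image with 4 levels (0..3).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
levels = 4

# First-order statistics: computed from grey levels alone, ignoring position.
mean, var = img.mean(), img.var()

# GLCM for offset (0, 1): count co-occurring horizontal neighbour pairs.
glcm = np.zeros((levels, levels))
for r in range(img.shape[0]):
    for c in range(img.shape[1] - 1):
        glcm[img[r, c], img[r, c + 1]] += 1
glcm /= glcm.sum()  # normalise to joint probabilities

# Two common second-order texture measures.
i, j = np.indices((levels, levels))
contrast = ((i - j) ** 2 * glcm).sum()
energy = (glcm ** 2).sum()
```

    Feature vectors of such FOS and GLCM values, computed per SHG image, are what feed the multi-group classification step.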

  9. Statistical analysis of first period of operation of FTU Tokamak

    International Nuclear Information System (INIS)

    Crisanti, F.; Apruzzese, G.; Frigione, D.; Kroegler, H.; Lovisetto, L.; Mazzitelli, G.; Podda, S.

    1996-09-01

    On the FTU Tokamak, plasma physics operations started on 20/4/90. The first plasma had a plasma current Ip=0.75 MA for about a second. The experimental phase lasted until 7/7/94, when a long shut-down began for installing the toroidal limiter on the inner side of the vacuum vessel. In these four years of operation, plasma experiments were successfully carried out, e.g. experiments of single and multiple pellet injection; full current drive up to Ip=300 kA was obtained by using waves at the frequency of the Lower Hybrid; analysis of ohmic plasma parameters with different materials (from low-Z silicon to high-Z tungsten) as the plasma-facing element was performed. In this work a statistical analysis of the full period of operation is presented. Moreover, a comparison with statistical data from other Tokamaks is attempted.

  10. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.

  11. GNSS Spoofing Detection Based on Signal Power Measurements: Statistical Analysis

    Directory of Open Access Journals (Sweden)

    V. Dehghanian

    2012-01-01

    Full Text Available A threat to GNSS receivers is posed by a spoofing transmitter that emulates authentic signals but with randomized code phase and Doppler values over a small range. Such spoofing signals can result in large navigational solution errors that are passed on to the unsuspecting user with potentially dire consequences. An effective spoofing detection technique, based on signal power measurements, is developed in this paper; it can be readily applied to present consumer-grade GNSS receivers with minimal firmware changes. An extensive statistical analysis is carried out based on formulating a multihypothesis detection problem. Expressions are developed to devise a set of thresholds required for signal detection and identification. The detection processing methods developed are further manipulated to exploit incidental antenna motion arising from user interaction with a GNSS handheld receiver to further enhance the detection performance of the proposed algorithm. The statistical analysis supports the effectiveness of the proposed spoofing detection technique under various multipath conditions.

  12. Statistical analysis of the determinations of the Sun's Galactocentric distance

    Science.gov (United States)

    Malkin, Zinovy

    2013-02-01

    Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
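    The metrological task described above typically starts from an inverse-variance weighted mean rather than simple averaging; the R0 values and uncertainties below are invented stand-ins, not the 53 published measurements.

```python
# Hypothetical Galactocentric-distance estimates in kpc: (value, sigma).
estimates = [(8.0, 0.5), (8.3, 0.4), (8.2, 0.3), (7.9, 0.6)]

# Inverse-variance weighting: more precise measurements count for more.
weights = [1.0 / s ** 2 for _, s in estimates]
wmean = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
werr = (1.0 / sum(weights)) ** 0.5  # formal error of the combined estimate
```

    Consistency checks like those in the record then ask whether the scatter of the individual values around `wmean` is compatible with the quoted sigmas, and whether the residuals show any trend with publication date.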

  13. Lifetime statistics of quantum chaos studied by a multiscale analysis

    KAUST Repository

    Di Falco, A.

    2012-04-30

    In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.

  14. Maximum Likelihood, Consistency and Data Envelopment Analysis: A Statistical Foundation

    OpenAIRE

    Rajiv D. Banker

    1993-01-01

    This paper provides a formal statistical basis for the efficiency evaluation techniques of data envelopment analysis (DEA). DEA estimators of the best practice monotone increasing and concave production function are shown to be also maximum likelihood estimators if the deviation of actual output from the efficient output is regarded as a stochastic variable with a monotone decreasing probability density function. While the best practice frontier estimator is biased below the theoretical front...

  15. Statistical Analysis of the Exchange Rate of Bitcoin.

    Directory of Open Access Journals (Sweden)

    Jeffrey Chu

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
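The workflow of computing log-returns and ranking candidate distributions by goodness of fit can be sketched as follows. The prices here are synthetic (generated from a heavy-tailed process), not actual BTC/USD data, and only two candidate distributions are compared by AIC rather than the paper's fifteen:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic daily prices from a heavy-tailed random walk (illustrative only)
prices = 500 * np.exp(np.cumsum(rng.standard_t(df=3, size=1000) * 0.02))
log_returns = np.diff(np.log(prices))

# Fit candidate distributions and compare by AIC (lower is better)
candidates = {"normal": stats.norm, "student-t": stats.t}
aic = {}
for name, dist in candidates.items():
    params = dist.fit(log_returns)
    log_likelihood = np.sum(dist.logpdf(log_returns, *params))
    aic[name] = 2 * len(params) - 2 * log_likelihood

best = min(aic, key=aic.get)
```

On genuinely heavy-tailed returns, a fat-tailed family (here Student-t) should outscore the normal by a wide AIC margin.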

  16. Statistical Challenges of Big Data Analysis in Medicine

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2015-01-01

    Roč. 3, č. 1 (2015), s. 24-27 ISSN 1805-8698 R&D Projects: GA ČR GA13-23940S Grant - others:CESNET Development Fund(CZ) 494/2013 Institutional support: RVO:67985807 Keywords : big data * variable selection * classification * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research http://www.ijbh.org/ijbh2015-1.pdf

  17. Statistical and machine learning approaches for network analysis

    CERN Document Server

    Dehmer, Matthias

    2012-01-01

    Explore the multidisciplinary nature of complex networks through machine learning techniques Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation

  18. Statistical Analysis of the Exchange Rate of Bitcoin

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate. PMID:26222702

  19. Analysis of spectral data with rare events statistics

    International Nuclear Information System (INIS)

    Ilyushchenko, V.I.; Chernov, N.I.

    1990-01-01

    The case is considered of analyzing experimental data, when the results of individual experimental runs cannot be summed due to large systematic errors. A statistical analysis of the hypothesis about the persistent peaks in the spectra has been performed by means of the Neyman-Pearson test. The computations demonstrate the confidence level for the hypothesis about the presence of a persistent peak in the spectrum is proportional to the square root of the number of independent experimental runs, K. 5 refs

  20. Statistical Analysis of the Exchange Rate of Bitcoin.

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.

  1. Neutron activation and statistical analysis of pottery from Thera, Greece

    International Nuclear Information System (INIS)

    Kilikoglou, V.; Grimanis, A.P.; Karayannis, M.I.

    1990-01-01

    Neutron activation analysis, in combination with multivariate analysis of the generated data, was used for the chemical characterization of prehistoric pottery from the Greek islands of Thera, Melos (islands with similar geology) and Crete. The statistical procedure which proved that Theran pottery could be distinguished from Melian is described. This discrimination, attained for the first time, was mainly based on the concentrations of the trace elements Sm, Yb, Lu and Cr. Also, Cretan imports to both Thera and Melos were clearly separable from local products. (author) 22 refs.; 1 fig.; 4 tabs

  2. Statistical Analysis of Hypercalcaemia Data related to Transferability

    DEFF Research Database (Denmark)

    Frølich, Anne; Nielsen, Bo Friis

    2005-01-01

    In this report we describe statistical analyses related to a study of hypercalcaemia carried out in the Copenhagen area in the ten-year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters...... at conferences during the late eighties and early nineties. In this report we give a more detailed description of many of the analyses and provide some new results, primarily by simultaneous studies of several databases....

  3. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2009-01-01

    An All-in-One Resource for Using SAS and R to Carry out Common TasksProvides a path between languages that is easier than reading complete documentationSAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applicat

  4. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    Science.gov (United States)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall at the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return period of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and plan storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The scientific results and the analysis paved the way to determine the proper onset and withdrawal of the monsoon, which were used for land preparation and sowing.
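The return-period calculation via a plotting position formula can be illustrated with the common Weibull formula T = (n + 1)/m, where m is the descending rank; the annual rainfall totals below are hypothetical, not the Trichy record:

```python
def weibull_return_periods(annual_rainfall):
    """Return period T = (n + 1) / m via the Weibull plotting position,
    where m is the rank of each value sorted in descending order."""
    n = len(annual_rainfall)
    ranked = sorted(annual_rainfall, reverse=True)
    return [(value, (n + 1) / rank) for rank, value in enumerate(ranked, start=1)]

def coefficient_of_variation(values):
    """Sample CV: relative variability of the record."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return (var ** 0.5) / mean

# Hypothetical annual rainfall totals (mm) for a 10-year record
rain = [820, 1010, 760, 1150, 900, 680, 1240, 870, 950, 790]
table = weibull_return_periods(rain)
cv = coefficient_of_variation(rain)
```

The largest annual total in an n-year record gets a return period of n + 1 years under this formula.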

  5. HistFitter: a flexible framework for statistical data analysis

    CERN Document Server

    Besjes, G J; Côté, D; Koutsman, A; Lorenz, J M; Short, D

    2015-01-01

    HistFitter is a software framework for statistical data analysis that has been used extensively in the ATLAS Collaboration to analyze data of proton-proton collisions produced by the Large Hadron Collider at CERN. Most notably, HistFitter has become a de-facto standard in searches for supersymmetric particles since 2012, with some usage for Exotic and Higgs boson physics. HistFitter coherently combines several statistics tools in a programmable and flexible framework that is capable of bookkeeping hundreds of data models under study using thousands of generated input histograms. HistFitter interfaces with the statistics tools HistFactory and RooStats to construct parametric models and to perform statistical tests of the data, and extends these tools in four key areas. The key innovations are to weave the concepts of control, validation and signal regions into the very fabric of HistFitter, and to treat these with rigorous methods. Multiple tools to visualize and interpret the results through a simple configura...

  6. A Laboratory Exercise in Statistical Analysis of Data

    Science.gov (United States)

    Vitha, Mark F.; Carr, Peter W.

    1997-08-01

    An undergraduate laboratory exercise in statistical analysis of data has been developed based on facile weighings of vitamin E pills. The use of electronic top-loading balances allows for very rapid data collection. Therefore, students obtain a sufficiently large number of replicates to provide statistically meaningful data sets. Through this exercise, students explore the effects of sample size and different types of sample averaging on the standard deviation of the average weight per pill. An emphasis is placed on the difference between the standard deviation of the mean and the standard deviation of the population. Students also perform the Q-test and t-test and are introduced to the χ²-test. In this report, the class data from two consecutive offerings of the course are compared and reveal a statistically significant increase in the average weight per pill, presumably due to the absorption of water over time. Histograms of the class data are shown and used to illustrate the importance of plotting the data. Overall, through this brief laboratory exercise, students are exposed to many important statistical tests and concepts which are then used and further developed throughout the remainder of the course.
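The distinction the exercise emphasizes, between the standard deviation of the population and the standard deviation of the mean, fits in a few lines; the pill weights below are invented for illustration:

```python
import statistics

# Hypothetical replicate weighings (g) of vitamin E pills
weights = [251.2, 249.8, 250.5, 250.9, 249.5, 250.1, 250.7, 249.9]

mean = statistics.mean(weights)
sd = statistics.stdev(weights)       # sample std dev: spread of individual pills
sem = sd / len(weights) ** 0.5       # std dev of the mean: shrinks as n grows
```

Averaging more replicates narrows `sem` by a factor of sqrt(n) while `sd` stays an estimate of the fixed pill-to-pill spread.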

  7. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  8. Teaching statistics in biology: using inquiry-based learning to strengthen understanding of statistical analysis in biology laboratory courses.

    Science.gov (United States)

    Metz, Anneke M

    2008-01-01

    There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9%, improvement p biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study.

  9. The Effects of Statistical Analysis Software and Calculators on Statistics Achievement

    Science.gov (United States)

    Christmann, Edwin P.

    2009-01-01

    This study compared the effects of microcomputer-based statistical software and hand-held calculators on the statistics achievement of university males and females. The subjects, 73 graduate students enrolled in univariate statistics classes at a public comprehensive university, were randomly assigned to groups that used either microcomputer-based…

  10. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution in dependence of data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  11. Inappropriate shocks in the subcutaneous ICD

    DEFF Research Database (Denmark)

    Olde Nordkamp, Louise R A; Brouwer, Tom F; Barr, Craig

    2015-01-01

    shocks have been reported. METHODS: We analyzed the incidence, predictors and management of inappropriate shocks in the EFFORTLESS S-ICD Registry, which collects S-ICD implantation information and follow-up data from clinical centers in Europe and New Zealand. RESULTS: During a follow-up of 21 ± 13...... months, 48 out of 581 S-ICD patients (71% male, age 49 ± 18 years) experienced 101 inappropriate shocks (8.3%). The most common cause was cardiac signal oversensing (73%), such as T-wave oversensing. Eighteen shocks (18%) were due to supraventricular tachycardias (SVT), of which 15 occurred in the shock......-only zone. Cox-proportional hazard modeling using time-dependent covariates demonstrated that patients with a history of atrial fibrillation (HR 2.4) and patients with hypertrophic cardiomyopathy (HR 4.6) had an increased risk for inappropriate shocks, while programming the primary vector for sensing (from...

  12. Bayesian Sensitivity Analysis of Statistical Models with Missing Data.

    Science.gov (United States)

    Zhu, Hongtu; Ibrahim, Joseph G; Tang, Niansheng

    2014-04-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures.

  13. Reactor noise analysis by statistical pattern recognition methods

    International Nuclear Information System (INIS)

    Howington, L.C.; Gonzalez, R.C.

    1976-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis is presented. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, updating, and data compacting capabilities. System design emphasizes control of the false-alarm rate. Its abilities to learn normal patterns, to recognize deviations from these patterns, and to reduce the dimensionality of data with minimum error were evaluated by experiments at the Oak Ridge National Laboratory (ORNL) High-Flux Isotope Reactor. Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the pattern recognition system

  14. Multivariate statistical pattern recognition system for reactor noise analysis

    International Nuclear Information System (INIS)

    Gonzalez, R.C.; Howington, L.C.; Sides, W.H. Jr.; Kryter, R.C.

    1975-01-01

    A multivariate statistical pattern recognition system for reactor noise analysis was developed. The basis of the system is a transformation for decoupling correlated variables and algorithms for inferring probability density functions. The system is adaptable to a variety of statistical properties of the data, and it has learning, tracking, and updating capabilities. System design emphasizes control of the false-alarm rate. The ability of the system to learn normal patterns of reactor behavior and to recognize deviations from these patterns was evaluated by experiments at the ORNL High-Flux Isotope Reactor (HFIR). Power perturbations of less than 0.1 percent of the mean value in selected frequency ranges were detected by the system. 19 references

  15. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.
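Thurstone Case V scaling, as referenced in this abstract, converts pairwise win proportions to z-scores and averages them into a perceptual scale. A sketch with hypothetical preference counts for three enhancement levels:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical pairwise-preference counts: wins[i, j] = number of trials
# in which enhancement level i was preferred over level j (20 trials/pair)
wins = np.array([[0, 14, 17],
                 [6,  0, 13],
                 [3,  7,  0]])
trials = wins + wins.T

# Win proportions; diagonal (self-comparisons) set to 0.5 by convention
p = np.full(wins.shape, 0.5, dtype=float)
nonzero = trials > 0
p[nonzero] = wins[nonzero] / trials[nonzero]

# Thurstone Case V: z-score the proportions, average each row
z = norm.ppf(np.clip(p, 0.01, 0.99))   # clip to avoid infinite z-scores
scale = z.mean(axis=1)                 # perceptual scale value per level
```

As the abstract notes, this yields an arbitrary interval scale; the clipping and the zero-centering are conventions, and the scale carries no built-in significance test.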

  16. Statistical Analysis of Sport Movement Observations: the Case of Orienteering

    Science.gov (United States)

    Amouzandeh, K.; Karimipour, F.

    2017-09-01

    The study of movement observations is becoming more popular in several applications. In particular, analyzing sport movement time series has been considered a demanding area. However, most of the attempts made at analyzing sport movement data have focused on spatial aspects of movement to extract movement characteristics, such as spatial patterns and similarities. This paper proposes a statistical analysis of sport movement observations, which refers to analyzing changes in the spatial movement attributes (e.g. distance, altitude and slope) and non-spatial movement attributes (e.g. speed and heart rate) of athletes. As the case study, an example dataset of movement observations acquired during the "orienteering" sport is presented and statistically analyzed.

  17. Statistical analysis of nanoparticle dosing in a dynamic cellular system.

    Science.gov (United States)

    Summers, Huw D; Rees, Paul; Holton, Mark D; Brown, M Rowan; Chappell, Sally C; Smith, Paul J; Errington, Rachel J

    2011-03-01

    The delivery of nanoparticles into cells is important in therapeutic applications and in nanotoxicology. Nanoparticles are generally targeted to receptors on the surfaces of cells and internalized into endosomes by endocytosis, but the kinetics of the process and the way in which cell division redistributes the particles remain unclear. Here we show that the chance of success or failure of nanoparticle uptake and inheritance is random. Statistical analysis of nanoparticle-loaded endosomes indicates that particle capture is described by an over-dispersed Poisson probability distribution that is consistent with heterogeneous adsorption and internalization. Partitioning of nanoparticles in cell division is random and asymmetric, following a binomial distribution with mean probability of 0.52-0.72. These results show that cellular targeting of nanoparticles is inherently imprecise due to the randomness of nature at the molecular scale, and the statistical framework offers a way to predict nanoparticle dosage for therapy and for the study of nanotoxins.
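The over-dispersion the authors report can be checked with a variance-to-mean (dispersion) index, and the binomial partitioning at cell division simulated directly. The distributions and parameters below are illustrative assumptions, not fits to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nanoparticle counts per endosome: a negative binomial
# (an over-dispersed Poisson mixture) versus a pure Poisson for contrast
nb_counts = rng.negative_binomial(n=2, p=0.3, size=5000)
poisson_counts = rng.poisson(lam=nb_counts.mean(), size=5000)

def dispersion_index(counts):
    """Variance-to-mean ratio: ~1 for Poisson, >1 when over-dispersed."""
    return counts.var(ddof=1) / counts.mean()

# Random, asymmetric partitioning at division: each daughter cell
# inherits each particle independently (here with probability 0.6)
inherited = rng.binomial(nb_counts, 0.6)
```

A dispersion index well above 1 is the signature of the heterogeneous capture the abstract describes; a mean inheritance probability between 0.5 and 1 reproduces the asymmetric partitioning.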

  18. Statistical analysis of effective singular values in matrix rank determination

    Science.gov (United States)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using SVD (singular-value decomposition) as a tool for determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbation of singular values and statistical significance testing. Threshold bounds for perturbations due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. The results are applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
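One simple noise-dependent threshold on singular values, in the spirit of this analysis (the paper's exact bounds differ), might look like the following; the scaling factor and test matrix are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def effective_rank(A, sigma_noise, level=3.0):
    """Count singular values above a noise-dependent threshold.
    The heuristic threshold grows with the noise std dev and matrix size."""
    s = np.linalg.svd(A, compute_uv=False)
    threshold = level * sigma_noise * np.sqrt(max(A.shape))
    return int(np.sum(s > threshold))

# A rank-2 matrix plus i.i.d. Gaussian observation noise
true = np.outer(rng.normal(size=50), rng.normal(size=40))
true += np.outer(rng.normal(size=50), rng.normal(size=40))
noisy = true + rng.normal(scale=0.01, size=true.shape)
```

Because the largest singular value of pure i.i.d. noise concentrates near sigma * (sqrt(m) + sqrt(n)), a threshold a few noise standard deviations above that separates signal from noise when the signal singular values are well clear of it.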

  19. Statistical Analysis Of Tank 19F Floor Sample Results

    International Nuclear Information System (INIS)

    Harris, S.

    2010-01-01

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
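The UCL95% described here is the sample mean plus a one-sided Student-t quantile times the standard error of the mean. A minimal sketch with hypothetical concentrations (the t critical value for 5 degrees of freedom is hardcoded to stay dependency-free; `scipy.stats.t.ppf(0.95, n - 1)` would generalize it):

```python
from math import sqrt
from statistics import mean, stdev

def ucl95(results):
    """Upper 95% confidence limit on the mean:
    mean + t(0.95, n-1) * s / sqrt(n)."""
    n = len(results)
    # One-sided 95% t critical value for n - 1 = 5 df (six samples)
    t_crit = {5: 2.015}[n - 1]
    return mean(results) + t_crit * stdev(results) / sqrt(n)

# Hypothetical analyte concentrations (mg/kg) from six scrape samples
conc = [12.1, 11.4, 13.0, 12.6, 11.9, 12.3]
limit = ucl95(conc)
```

As the report states, the limit is a function only of the number of samples, their average, and their standard deviation.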

  20. STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis samples results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL{sub 95%}) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL{sub 95%}) on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL{sub 95%} was based entirely on the six current scrape sample results (each averaged across three analytical determinations).

  1. A Statistical-Probabilistic Pattern for Determination of Tunnel Advance Step by Quantitative Risk Analysis

    Directory of Open Access Journals (Sweden)

    sasan ghorbani

    2017-12-01

    One of the main challenges faced in the design and construction phases of tunneling projects is the determination of the maximum allowable advance step, so as to maximize the excavation rate and reduce project delivery time. Considering the complexity of determining this factor and the unexpected risks associated with its inappropriate determination, it is necessary to employ a method capable of accounting for the interactions among uncertain geotechnical parameters and the advance step. The main objective of the present research is to optimize and manage the risk of the advance step length in the water diversion tunnel at Shahriar Dam based on the uncertainty of geotechnical parameters, following a statistical-probabilistic approach. To determine the optimum advance step for the excavation operation, two hybrid methods were used: strength reduction method-discrete element method-Monte Carlo simulation (SRM/DEM/MCS) and strength reduction method-discrete element method-point estimate method (SRM/DEM/PEM). Moreover, Taguchi analysis was used to investigate the sensitivity of the advance step to changes in the statistical distribution functions of the input parameters under three tunneling scenarios at sections of poor to good quality (as per the RMR classification system). The final results implied the optimality of the advance step defined in scenario 2, which proposed a 2 m advance per excavation round, according to the shear strain criterion and SRM/DEM/MCS, with a minimum failure probability of 8.05% and risk of $75,281.56 at the 95% confidence level. Moreover, for each of the normal, lognormal, and gamma distributions, as the advance step increased from scenario 1 to 2, the failure probability increased at a lower rate than when the advance step in scenario 2 was increased to that in scenario 3.
In addition, the Taguchi tests were subjected to signal-to-noise analysis and the results indicated that, considering the three statistical
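The Monte Carlo component of the hybrid method (the MCS in SRM/DEM/MCS) reduces to estimating a failure probability as the fraction of sampled load/resistance pairs in which load exceeds resistance, and risk as probability times consequence. The distributions and the consequence cost below are illustrative assumptions, not the paper's geotechnical model:

```python
import numpy as np

rng = np.random.default_rng(3)

def mc_failure_probability(n_sim=100_000):
    """Monte Carlo estimate of P(load > resistance) for an advance step,
    with hypothetical lognormal resistance and normal load (both arbitrary
    units chosen for illustration)."""
    resistance = rng.lognormal(mean=2.0, sigma=0.25, size=n_sim)
    load = rng.normal(loc=5.0, scale=1.0, size=n_sim)
    return np.mean(load > resistance)

p_fail = mc_failure_probability()
# Risk = failure probability x consequence (hypothetical failure cost, $)
expected_cost = p_fail * 75_000
```

The point estimate method (PEM) replaces the random sampling above with a small set of weighted evaluation points of the same input distributions, trading accuracy for far fewer model runs.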

  2. Composition and Statistical Analysis of Biophenols in Apulian Italian EVOOs.

    Science.gov (United States)

    Ragusa, Andrea; Centonze, Carla; Grasso, Maria Elena; Latronico, Maria Francesca; Mastrangelo, Pier Francesco; Fanizzi, Francesco Paolo; Maffia, Michele

    2017-10-18

    Extra-virgin olive oil (EVOO) is among the basic constituents of the Mediterranean diet. Its nutraceutical properties are due mainly, but not only, to a plethora of molecules with antioxidant activity known as biophenols. In this article, several biophenols were measured in EVOOs from South Apulia, Italy. Hydroxytyrosol, tyrosol and their conjugated structures to elenolic acid in different forms were identified and quantified by high performance liquid chromatography (HPLC) together with lignans, luteolin and α-tocopherol. The concentration of the analyzed metabolites was quite high in all the cultivars studied, but it was still possible to discriminate them through multivariate statistical analysis (MVA). Furthermore, principal component analysis (PCA) and orthogonal partial least-squares discriminant analysis (OPLS-DA) were also exploited for determining variances among samples depending on the interval time between harvesting and milling, on the age of the olive trees, and on the area where the olive trees were grown.

  3. STATISTICS. The reusable holdout: Preserving validity in adaptive data analysis.

    Science.gov (United States)

    Dwork, Cynthia; Feldman, Vitaly; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Roth, Aaron

    2015-08-07

    Misapplication of statistical data analysis is a common cause of spurious discoveries in scientific research. Existing approaches to ensuring the validity of inferences drawn from data assume a fixed procedure to be performed, selected before the data are examined. In common practice, however, data analysis is an intrinsically adaptive process, with new analyses generated on the basis of data exploration, as well as the results of previous analyses on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from privacy-preserving data analysis. As an application, we show how to safely reuse a holdout data set many times to validate the results of adaptively chosen analyses. Copyright © 2015, American Association for the Advancement of Science.
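The reusable holdout of record 3 answers each adaptive query through a noisy comparison of training and holdout estimates. A simplified sketch of one Thresholdout query follows; the parameter values are illustrative, not those of the paper.

```python
import numpy as np

def thresholdout(train_val, holdout_val, threshold=0.04, sigma=0.01, rng=None):
    """One reusable-holdout (Thresholdout) query, simplified sketch.

    If the training and holdout estimates agree to within a noisy threshold,
    answer with the training value; otherwise answer from the holdout, plus
    Laplace noise so the holdout is not overfit by repeated queries.
    """
    rng = rng or np.random.default_rng()
    if abs(train_val - holdout_val) < threshold + rng.laplace(0.0, sigma):
        return train_val
    return holdout_val + rng.laplace(0.0, sigma)
```

For example, `thresholdout(0.71, 0.52, rng=np.random.default_rng(0))` detects the disagreement and answers from the holdout side.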

  4. International Conference on Modern Problems of Stochastic Analysis and Statistics

    CERN Document Server

    2017-01-01

    This book brings together the latest findings in the area of stochastic analysis and statistics. The individual chapters cover a wide range of topics from limit theorems, Markov processes, nonparametric methods, acturial science, population dynamics, and many others. The volume is dedicated to Valentin Konakov, head of the International Laboratory of Stochastic Analysis and its Applications on the occasion of his 70th birthday. Contributions were prepared by the participants of the international conference of the international conference “Modern problems of stochastic analysis and statistics”, held at the Higher School of Economics in Moscow from May 29 - June 2, 2016. It offers a valuable reference resource for researchers and graduate students interested in modern stochastics.

  5. Statistical analysis of C/NOFS planar Langmuir probe data

    Directory of Open Access Journals (Sweden)

    E. Costa

    2014-07-01

    Full Text Available The planar Langmuir probe (PLP) onboard the Communication/Navigation Outage Forecasting System (C/NOFS) satellite has been monitoring ionospheric plasma densities and their irregularities with high resolution almost seamlessly since May 2008. Considering the recent changes in status of the C/NOFS mission, it may be interesting to summarize some statistical results from these measurements. PLP data from 2 different years (1 October 2008–30 September 2009 and 1 January 2012–31 December 2012) were selected for analysis. The first data set corresponds to solar minimum conditions and the second one is as close to solar maximum conditions of solar cycle 24 as possible at the time of the analysis. The results from the analysis show how the values of the standard deviation of the ion density which are greater than specified thresholds are statistically distributed as functions of several combinations of the following geophysical parameters: (i) solar activity, (ii) altitude range, (iii) longitude sector, (iv) local time interval, (v) geomagnetic latitude interval, and (vi) season.
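The exceedance statistics described here reduce to counting, per geophysical bin, the fraction of samples whose standard deviation passes a threshold. A toy sketch with synthetic data and a single binning variable (local time):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical ion-density standard deviations and local times for 10^4 samples
sigma = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
local_time = rng.uniform(0.0, 24.0, size=10_000)
sector = np.digitize(local_time, bins=[6.0, 12.0, 18.0])  # four 6 h local-time sectors
threshold = 2.0
occurrence = np.array([(sigma[sector == k] > threshold).mean() for k in range(4)])
```

The real analysis crosses several such binning variables (altitude, longitude, latitude, season) rather than one.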

  6. Consolidity analysis for fully fuzzy functions, matrices, probability and statistics

    Directory of Open Access Journals (Sweden)

    Walaa Ibrahim Gabr

    2015-03-01

    Full Text Available The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in fully fuzzy environment. The solving of systems consolidity theory included its development for handling new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fraction of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and solving least-squares fuzzy linear equations. The approach demonstrated to be also applicable in a systematic way in handling new fuzzy probabilistic and statistical problems. This included extending the conventional probabilistic and statistical analysis for handling fuzzy random data. Application also covered the consolidity of fuzzy optimization problems. Various numerical examples solved have demonstrated that the new consolidity concept is highly effective in solving in a compact form the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the implementation of the suggested fuzzy mathematics can be easily embedded within normal mathematics through building special fuzzy functions library inside the computational Matlab Toolbox or using other similar software languages.

  7. Signal processing and statistical analysis of spaced-based measurements

    International Nuclear Information System (INIS)

    Iranpour, K.

    1996-05-01

    The report deals with data obtained by the ROSE rocket project. This project was designed to investigate the low-altitude auroral instabilities in the electrojet region. The spectral and statistical analyses indicate the existence of unstable waves in the ionized gas in the region. An experimentally obtained dispersion relation for these waves was established. It was demonstrated that the characteristic phase velocities are much lower than what is expected from the standard theoretical results. The analysis of the ROSE data indicates a cascading of energy from lower to higher frequencies. 44 refs., 54 figs
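The spectral side of such an analysis, picking a wave line out of probe noise, can be sketched with Welch's PSD estimate on synthetic data; the 120 Hz line and the sampling rate are invented, not ROSE values.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(10)
# Hypothetical probe time series: a coherent wave near 120 Hz buried in noise
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 120.0 * t) + rng.normal(0.0, 1.0, t.size)
f, psd = welch(x, fs=fs, nperseg=1024)   # averaged periodogram (Welch's method)
peak_freq = f[np.argmax(psd)]
```

Repeating this at several altitudes and reading off the dominant wavenumber/frequency pairs is how an experimental dispersion relation is assembled.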

  8. Statistical analysis of muscle contraction based on MR images

    International Nuclear Information System (INIS)

    Horio, Hideyuki; Kuroda, Yoshihiro; Imura, Masataka; Oshiro, Osamu

    2011-01-01

    The purpose of this study was to distinguish the changes in MR signals between relaxation and contraction of muscles. First, MR images were acquired in the relaxation and contraction states. The subject clasped his hands in the relaxation state and unclasped them in the contraction state. Next, the images were segmented using mixture Gaussian distributions and the expectation-maximization (EM) algorithm. Finally, we evaluated statistical values obtained from the mixture Gaussian distributions. As a result, the mixing coefficients differed between relaxation and contraction. The experimental results indicated that the proposed analysis has the potential to discriminate between the two states. (author)
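The segmentation step can be sketched with a two-component Gaussian mixture fitted by EM; the intensity values are synthetic, and scikit-learn stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical MR signal intensities drawn from two tissue classes
signal = np.concatenate([rng.normal(40.0, 5.0, 500), rng.normal(90.0, 8.0, 500)])
gmm = GaussianMixture(n_components=2, random_state=0).fit(signal.reshape(-1, 1))
weights = gmm.weights_                 # the mixing coefficients the study compares
means = np.sort(gmm.means_.ravel())    # recovered class means
```

Comparing `weights` fitted to the relaxation image against those fitted to the contraction image is the discrimination idea of the abstract.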

  9. Statistical Analysis of Designed Experiments Theory and Applications

    CERN Document Server

    Tamhane, Ajit C

    2012-01-01

    An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology, with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the

  10. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  11. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2014-01-01

    An Up-to-Date, All-in-One Resource for Using SAS and R to Perform Frequent Tasks. The first edition of this popular guide provided a path between SAS and R using an easy-to-understand, dictionary-like approach. Retaining the same accessible format, SAS and R: Data Management, Statistical Analysis, and Graphics, Second Edition explains how to easily perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferentia

  12. Using R for Data Management, Statistical Analysis, and Graphics

    CERN Document Server

    Horton, Nicholas J

    2010-01-01

    This title offers quick and easy access to key elements of documentation. It includes worked examples across a wide variety of applications, tasks, and graphics. "Using R for Data Management, Statistical Analysis, and Graphics" presents an easy way to learn how to perform an analytical task in R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation and the vast number of add-on packages. Organized by short, clear descriptive entries, the book covers many common tasks, such as data management, descriptive summaries, inferential proc

  13. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for
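The DOE-plus-regression workflow the tutorial describes can be sketched with a 2^3 factorial design: fit a linear model to the coded factors and read the main effects off the coefficients. The response function below is invented for illustration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(9)
# Hypothetical 2^3 factorial experiment on a simulation model's response:
# factor 1 has a strong effect, factor 3 a weaker negative one, factor 2 none.
design = np.array(list(product([-1.0, 1.0], repeat=3)))
response = 5.0 + 2.0 * design[:, 0] - 1.0 * design[:, 2] + rng.normal(0.0, 0.1, 8)
X = np.column_stack([np.ones(8), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)  # intercept + main effects
```

Because the design is orthogonal, each coefficient is an independent estimate of one factor's sensitivity, which is exactly the what-if screening the tutorial advocates.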

  14. Inappropriate Intensive Care Unit admissions: Nigerian doctors ...

    African Journals Online (AJOL)

    2015-12-04

    Dec 4, 2015 ... Conclusion: Inappropriate ICU admissions were perceived as a common event and were mainly attributed to pressure from seniors, referring clinicians, and hospital management. Further work is ...

  15. Bullying and Inappropriate Behaviour among Faculty Personnel

    Science.gov (United States)

    Meriläinen, Matti; Sinkkonen, Hanna-Maija; Puhakka, Helena; Käyhkö, Katinka

    2016-01-01

    This study focuses on the degree, nature and consequences of bullying or inappropriate behaviour among faculty personnel (n = 303) in a Finnish university. A total of 114 (38%) faculty members answered the email questionnaire. According to the results, 15% of the respondents had experienced bullying; in addition, 45% had experienced inappropriate…

  16. Prevalence of inappropriate prescribing in primary care

    DEFF Research Database (Denmark)

    Bregnhøj, Lisbeth; Thirstrup, Steffen; Kristensen, Mogens Brandt

    2007-01-01

    OBJECTIVE: To describe the prevalence of inappropriate prescribing in primary care in Copenhagen County, according to the Medication Appropriateness Index (MAI) and to identify the therapeutic areas most commonly involved. SETTING: A cross-sectional study was conducted among 212 elderly ( >65 years...

  17. Missed opportunities and inappropriately given vaccines reduce ...

    African Journals Online (AJOL)

    Coverage would have increased by 10% for diphtheria-pertussis-tetanus (DPT) doses DPT1 and DPT2, and 7% for DPT3. Measles immunisation coverage would have increased by 19% had missed immunisation opportunities and inappropriately administered vaccinations been avoided. The overall missed opportunities rate ...

  18. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and final survey. At each stage, collecting relevant data to be able to draw the conclusions needed is quite a big challenge. In particular two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorisation and optimisation of the materials to be removed and the final survey to demonstrate compliance with clearance levels. On the one hand the latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. On the other hand a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements, providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of a spatial continuity for radiological contamination. Thus geostatistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). This way, the radiological characterisation of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed.
Combined with
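The spatial-continuity assumption underpinning the geostatistical approach is usually checked with an experimental semivariogram. A minimal numpy sketch (not the authors' tooling) follows.

```python
import numpy as np

def semivariogram(coords, values, lags, tol=0.5):
    """Experimental semivariogram: gamma(h) is half the mean squared increment
    over all sample pairs separated by roughly lag h."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    return np.array([sq[(d > h - tol) & (d <= h + tol)].mean() for h in lags])
```

A gamma(h) that rises with h before leveling off indicates the spatial correlation that kriging-based activity estimation can exploit.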

  19. Using Statistical Analysis Software to Advance Nitro Plasticizer Wettability

    Energy Technology Data Exchange (ETDEWEB)

    Shear, Trevor Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-29

    Statistical analysis in science is an extremely powerful tool that is often underutilized. Additionally, it is frequently the case that data are misinterpreted or not used to their fullest extent. Utilizing the advanced software JMP®, many aspects of experimental design and data analysis can be evaluated and improved. This overview will detail the features of JMP® and how they were used to advance a project, resulting in time and cost savings, as well as the collection of scientifically sound data. The project analyzed in this report addresses the inability of a nitro plasticizer to coat a gold-coated quartz crystal sensor used in a quartz crystal microbalance. Through the use of the JMP® software, the wettability of the nitro plasticizer was increased by over 200% using an atmospheric plasma pen, ensuring good sample preparation and reliable results.

  20. Topics in statistical data analysis for high-energy physics

    International Nuclear Information System (INIS)

    Cowan, G.

    2011-01-01

    These lectures concern two topics that are becoming increasingly important in the analysis of high-energy physics data: Bayesian statistics and multivariate methods. In the Bayesian approach, we extend the interpretation of probability not only to cover the frequency of repeatable outcomes but also to include a degree of belief. In this way we are able to associate probability with a hypothesis and thus to answer directly questions that cannot be addressed easily with traditional frequentist methods. In multivariate analysis, we try to exploit as much information as possible from the characteristics that we measure for each event to distinguish between event types. In particular we will look at a method that has gained popularity in high-energy physics in recent years: the boosted decision tree. Finally, we give a brief sketch of how multivariate methods may be applied in a search for a new signal process. (author)
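A boosted decision tree for signal/background separation, as discussed in the lectures, can be sketched with scikit-learn on synthetic events; the feature shift between signal and background is invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Hypothetical events: 4 kinematic features, signal shifted from background
X = np.vstack([rng.normal(0.0, 1.0, (500, 4)), rng.normal(0.8, 1.0, (500, 4))])
y = np.repeat([0, 1], 500)           # 0 = background, 1 = signal
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
accuracy = bdt.score(X_test, y_test)
```

In a real search one would cut on `bdt.predict_proba` rather than on raw accuracy, choosing the working point that maximizes the expected signal significance.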

  1. Image analysis and statistical inference in neuroimaging with R.

    Science.gov (United States)

    Tabelow, K; Clayden, J D; de Micheaux, P Lafaye; Polzehl, J; Schmid, V J; Whitcher, B

    2011-04-15

    R is a language and environment for statistical computing and graphics. It can be considered an alternative implementation of the S language developed in the 1970s and 1980s for data analysis and graphics (Becker and Chambers, 1984; Becker et al., 1988). The R language is part of the GNU project and offers versions that compile and run on almost every major operating system currently available. We highlight several R packages built specifically for the analysis of neuroimaging data in the context of functional MRI, diffusion tensor imaging, and dynamic contrast-enhanced MRI. We review their methodology and give an overview of their capabilities for neuroimaging. In addition we summarize some of the current activities in the area of neuroimaging software development in R. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. First statistical analysis of Geant4 quality software metrics

    Science.gov (United States)

    Ronchieri, Elisabetta; Grazia Pia, Maria; Giacomini, Francesco

    2015-12-01

    Geant4 is a simulation system of particle transport through matter, widely used in several experimental areas from high energy physics and nuclear experiments to medical studies. Some of its applications may involve critical use cases; therefore they would benefit from an objective assessment of the software quality of Geant4. In this paper, we provide a first statistical evaluation of software metrics data related to a set of Geant4 physics packages. The analysis aims at identifying risks for Geant4 maintainability, which would benefit from being addressed at an early stage. The findings of this pilot study set the grounds for further extensions of the analysis to the whole of Geant4 and to other high energy physics software systems.

  3. Statistical learning analysis in neuroscience: aiming for transparency.

    Science.gov (United States)

    Hanke, Michael; Halchenko, Yaroslav O; Haxby, James V; Pollmann, Stefan

    2010-01-01

    Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires "neuroscience-aware" technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities.
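A cross-validated multivariate pattern analysis of the kind PyMVPA supports can be sketched with generic scikit-learn pieces standing in for that framework; the voxel data below are synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
# Hypothetical voxel patterns: 40 trials per condition, 100 voxels,
# with a small per-voxel activation shift between conditions
X = np.vstack([rng.normal(0.0, 1.0, (40, 100)), rng.normal(0.4, 1.0, (40, 100))])
y = np.repeat([0, 1], 40)
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=5)
```

Reporting the full cross-validation scheme, classifier, and preprocessing, as this pipeline makes explicit, is precisely the transparency the authors call for.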

  4. Statistical learning analysis in neuroscience: aiming for transparency

    Directory of Open Access Journals (Sweden)

    Michael Hanke

    2010-05-01

    Full Text Available Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires "neuroscience-aware" technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here we review its features and applicability to various neural data modalities.

  5. Pattern recognition in menstrual bleeding diaries by statistical cluster analysis

    Directory of Open Access Journals (Sweden)

    Wessel Jens

    2009-07-01

    Full Text Available Background: The aim of this paper is to empirically identify a treatment-independent statistical method to describe clinically relevant bleeding patterns by using bleeding diaries from clinical studies on various sex-hormone-containing drugs. Methods: We used the four cluster analysis methods single, average and complete linkage as well as the method of Ward for pattern recognition in menstrual bleeding diaries. The optimal number of clusters was determined using the semi-partial R2, the cubic cluster criterion, and the pseudo-F- and pseudo-t2-statistics. Finally, the interpretability of the results from a gynecological point of view was assessed. Results: The method of Ward yielded distinct clusters of the bleeding diaries. The other methods successively chained the observations into one cluster. The optimal number of distinctive bleeding patterns was six. We found two desirable and four undesirable bleeding patterns. Cyclic and non-cyclic bleeding patterns were well separated. Conclusion: Using this cluster analysis with the method of Ward, medications and devices having an impact on bleeding can be easily compared and categorized.
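Ward's method on diary vectors can be sketched with SciPy; the diaries below are synthetic cyclic/irregular patterns, not real trial data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
# Hypothetical 90-day bleeding diaries (1 = bleeding day) for 40 subjects:
# 20 with a regular 28-day cycle, 20 with irregular (random) bleeding.
cyclic = np.tile(([1.0] * 5 + [0.0] * 23) * 4, (20, 1))[:, :90]
irregular = rng.integers(0, 2, size=(20, 90)).astype(float)
X = np.vstack([cyclic, irregular])
Z = linkage(X, method="ward")                  # Ward's minimum-variance criterion
labels = fcluster(Z, t=2, criterion="maxclust")
```

Unlike single linkage, which tends to chain observations one by one (as the abstract reports), Ward's criterion penalizes merges that inflate within-cluster variance and so yields compact, separated groups.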

  6. Statistical mechanical analysis of LMFBR fuel cladding tubes

    International Nuclear Information System (INIS)

    Poncelet, J.-P.; Pay, A.

    1977-01-01

    The most important design requirement on fuel pin cladding for LMFBRs is its mechanical integrity. Disruptive factors include internal pressure from mixed-oxide fuel fission gas release, thermal stresses and high-temperature creep, neutron-induced differential void-swelling as a source of stress in the cladding, irradiation creep of the stainless steel material, and corrosion by fission products. Under irradiation these load-restraining mechanisms are accentuated by stainless steel embrittlement and strength alterations. To account for the numerous uncertainties involved in the analysis by theoretical models and computer codes, statistical tools are unavoidably required, i.e. Monte Carlo simulation methods. Thanks to these techniques, uncertainties in nominal characteristics, material properties and environmental conditions can be linked up in a correct way and used for a more accurate conceptual design. First, a thermal creep damage index is set up through a sufficiently sophisticated physical analysis of the cladding, including arbitrary time dependence of power and neutron flux as well as the effects of sodium temperature, burnup and steel mechanical behavior. Although this strain-limit approach implies a more general but time-consuming model, in return the net output is improved and, e.g., clad temperature, stress and strain maxima may be easily assessed. A full spectrum of variables is statistically treated to account for their probability distributions. A creep damage probability may be obtained and can contribute to a quantitative estimation of fuel failure probability.

  7. Statistical analysis of magnetically soft particles in magnetorheological elastomers

    Science.gov (United States)

    Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.

    2017-04-01

    The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli is crucially dependent on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain a better insight into the correlation between the macroscopic effects and microstructure and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations as a base for a statistical analysis of the particle configurations were carried out. Different MREs with quantities of 2-15 wt% (0.27-2.3 vol%) of iron powder and different allocations of the particles inside the matrix were prepared. The X-μCT results were edited by an image processing software regarding the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
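The pair correlation function used to characterize the particle configurations can be sketched for a 2-D periodic toy system; the real analysis operates on 3-D X-μCT particle positions.

```python
import numpy as np

def pair_correlation(points, box, r_edges):
    """Radial distribution function g(r) for points in a 2-D periodic box (sketch).

    g(r) ~ 1 for an uncorrelated (ideal-gas) arrangement; peaks indicate
    preferred interparticle spacings such as field-induced chain formation.
    """
    n = len(points)
    d = points[:, None, :] - points[None, :, :]
    d -= box * np.round(d / box)                       # minimum-image convention
    r = np.linalg.norm(d, axis=-1)[~np.eye(n, dtype=bool)]
    counts, _ = np.histogram(r, bins=r_edges)
    shell_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    density = n / box ** 2
    return counts / (n * density * shell_area)
```

Comparing g(r) with and without the external magnetic field reveals whether the particles rearrange into anisotropic structures.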

  8. Statistical analysis of non-homogeneous Poisson processes. Statistical processing of a particle multidetector

    International Nuclear Information System (INIS)

    Lacombe, J.P.

    1985-12-01

    Statistic study of Poisson non-homogeneous and spatial processes is the first part of this thesis. A Neyman-Pearson type test is defined concerning the intensity measurement of these processes. Conditions are given for which consistency of the test is assured, and others giving the asymptotic normality of the test statistics. Then some techniques of statistic processing of Poisson fields and their applications to a particle multidetector study are given. Quality tests of the device are proposed togetherwith signal extraction methods [fr

  9. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.
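The frequentist pooling can be sketched as weighted least squares on a contrast-level design matrix, with the indirect B-vs-C comparison entering through the consistency relation d_BC = d_AC - d_AB. The numbers are invented, and this fixed-effect sketch omits the random study effects and moderator variables the review discusses.

```python
import numpy as np

# Hypothetical contrast-based network on treatments A, B, C (A = reference).
# Each row: baseline, comparator, estimated treatment difference, its variance.
contrasts = [
    ("A", "B", 0.50, 0.04),
    ("A", "C", 0.80, 0.05),
    ("B", "C", 0.35, 0.04),
]
# Basic parameters are d_AB and d_AC; the B-vs-C row maps to d_AC - d_AB.
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0]])
y = np.array([c[2] for c in contrasts])
W = np.diag([1.0 / c[3] for c in contrasts])        # inverse-variance weights
d = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)       # pooled network estimates
```

Note how the direct A-B evidence is pulled slightly by the indirect path through C, which is the defining feature of a network (as opposed to pairwise) meta-analysis.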

  10. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. EUROSTAT has therefore set up a system of short-run indicators, recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out concerning: the dynamics of production; the evaluation of short-run investment volume; the development of the turnover; wage evolution; employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the balance of trade. The EUROSTAT system of conjuncture indicators is conceived as an open system, so that it can at any moment be extended or restricted, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of conjuncture indicators, which relies on the data sources offered by the World Bank, the World Resources Institute, or the statistics of other international organizations. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment and economic performance. At the end of the paper, there is a case study on the situation of Romania, for which all these indicators were used.

  11. Failed Attempts to Reduce Inappropriate Laboratory Utilization in an Emergency Department Setting in Cyprus: Lessons Learned.

    Science.gov (United States)

    Petrou, Panagiotis

    2016-03-01

    Laboratory test ordering is a significant part of the diagnosis definition and disease treatment monitoring process. Inappropriate laboratory test ordering wastes scarce resources, places an unnecessary burden on the health care delivery system, and exposes patients to unnecessary discomfort. Inappropriate ordering is caused by many factors, such as lack of guidelines, defensive medicine, thoughtless ordering, and lack of awareness of the costs incurred to the system. The purpose of this study is to assess two successive measures introduced in a Cyprus emergency department (ED) to synergistically reduce inappropriate laboratory ordering: the introduction of a copayment fee to reduce nonemergent visits, and the development of a Web-based protocol defining the tests emergency physicians could order. An autoregressive integrated moving average model for interrupted time series analysis was constructed. Data include the number and type of tests ordered, along with the number of visits, for a period of 4 years from an ED in Cyprus. The copayment fee and the introduction of a revised Web-based protocol for the test ordering form did not reduce the number of ordered tests in the ED unit. The copayment fee alone resulted in a statistically significant reduction in ED visits. The implementation of the two consecutive measures resulted in an increase of ordered tests per patient. Laboratory ordering is a multidimensional process that is primarily supplier induced; therefore, all possible underlying causes, including lack of guidelines, defensive medicine and thoughtless prescribing, must be scrutinized by health authorities. To attain significant gains, an integrated approach must be implemented. Copyright © 2016 Elsevier Inc. All rights reserved.
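The interrupted-time-series idea can be sketched with a plain segmented regression; this is a simplification of the paper's ARIMA model that ignores autocorrelated errors, and the visit counts are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical monthly ED visit counts; a copayment fee introduced at month 24
t = np.arange(48.0)
step = (t >= 24).astype(float)
visits = 500.0 + 2.0 * t - 60.0 * step + rng.normal(0.0, 10.0, 48)
X = np.column_stack([np.ones_like(t), t, step])   # level, trend, intervention step
beta, *_ = np.linalg.lstsq(X, visits, rcond=None)
level_change = beta[2]                            # estimated post-fee change in visits
```

An ARIMA with the same exogenous regressors would additionally model serial correlation in the residuals, which narrows or widens the intervention effect's standard error accordingly.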

  12. Polypharmacy and potentially inappropriate medication use as the precipitating factor in readmissions to the hospital

    Directory of Open Access Journals (Sweden)

    Vishal Sehgal

    2013-01-01

    Full Text Available Background and Aim: Readmission to the hospital within 30 days of discharge from the hospital is a common occurrence, and congestive heart failure is its most common cause. We hypothesized that, irrespective of the admission diagnosis, polypharmacy and potentially inappropriate use of medications (PIM) lead to readmissions within 30 days of discharge from the hospital. Materials and Methods: A retrospective study was carried out by reviewing the hospital records of 414 patients who were readmitted to the hospital within 30 days of discharge between January 2008 and December 2009. The data were stratified to see which patients were on polypharmacy and/or on PIM. Polypharmacy was defined as the use of more than 5 medications. PIM was defined as per the modified Beers criteria. Day 0 was defined as the day of discharge and day 1 was defined as the day after admission to the hospital. Statistical analysis was carried out using a two-way analysis of variance (ANOVA) on the data to see if polypharmacy and/or PIM was related to readmission within 30 days of discharge, irrespective of the admission diagnosis. Results: Polypharmacy was related to hospital readmission at day 1 and day 0; however, inappropriate drug use was found to be unrelated at any day. Polypharmacy and PIM combined had a positive correlation with readmission only on days 1 and 0, and it was statistically significant. The use of minimal and appropriate drugs was statistically significant compared to polypharmacy and PIM use. Conclusions: Polypharmacy and PIM are an under-recognized cause of readmissions to the hospital.
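The two-way ANOVA can be sketched by hand for a balanced synthetic 2x2 layout; main-effect F ratios only, with no interaction term, and the outcome scores are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical balanced 2x2 design: outcome score by polypharmacy and PIM status
poly = np.repeat([0, 1], 50)               # polypharmacy: no / yes
pim = np.tile(np.repeat([0, 1], 25), 2)    # potentially inappropriate meds: no / yes
y = 10.0 + 3.0 * poly + 1.5 * pim + rng.normal(0.0, 2.0, 100)

def between_ss(factor):
    """Between-group sum of squares for one main effect (balanced design)."""
    return sum(len(g) * (g.mean() - y.mean()) ** 2
               for g in (y[factor == 0], y[factor == 1]))

cells = [y[(poly == a) & (pim == b)] for a in (0, 1) for b in (0, 1)]
ss_error = sum(((g - g.mean()) ** 2).sum() for g in cells)
ms_error = ss_error / (len(y) - 4)          # 4 cells in the 2x2 layout
f_poly = between_ss(poly) / 1 / ms_error    # each main effect has 1 df
f_pim = between_ss(pim) / 1 / ms_error
```

Each F ratio is compared against an F(1, 96) reference distribution; in this synthetic setup the polypharmacy effect is built to dominate.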

  13. The system for statistical analysis of logistic information

    Directory of Open Access Journals (Sweden)

    Khayrullin Rustam Zinnatullovich

    2015-05-01

    Full Text Available The current problem for managers in logistics and trading companies is improving operational business performance and developing logistics support for sales. The development of logistics sales supposes the development and implementation of a set of works for the development of existing warehouse facilities, including both a detailed description of the work performed and the timing of its implementation. Logistics engineering of a warehouse complex includes such tasks as: determining the number and types of technological zones, calculating the required number of loading-unloading places, developing storage structures, developing pre-sales preparation zones, developing specifications of storage types, selecting loading-unloading equipment, detailed planning of the warehouse logistics system, creating architectural-planning decisions, selecting information-processing equipment, etc. The currently used ERP and WMS systems do not solve the full list of logistics engineering problems. In this regard, the development of specialized software products that take into account the specifics of warehouse logistics, and the subsequent integration of this software with ERP and WMS systems, is a pressing task. In this paper we suggest a system for the statistical analysis of logistics information, designed to meet the challenges of logistics engineering and planning. The proposed specialized software is designed to improve the efficiency of the operating business and the development of logistics support for sales. The system is based on the methods of statistical data processing, the methods of assessment and prediction of logistics performance, the methods for the determination and calculation of the data required for registration, storage and processing of metal products, as well as the methods for planning the reconstruction and development

  14. Statistical analysis and Kalman filtering applied to nuclear materials accountancy

    International Nuclear Information System (INIS)

    Annibal, P.S.

    1990-08-01

    Much theoretical research has been carried out on the development of statistical methods for nuclear material accountancy. In practice, physical, financial and time constraints mean that the techniques must be adapted to give an optimal performance in plant conditions. This thesis aims to bridge the gap between theory and practice, to show the benefits to be gained from a knowledge of the facility operation. Four different aspects are considered. Firstly, the use of redundant measurements to reduce the error on the estimate of the mass of heavy metal in an 'accountancy tank' is investigated. Secondly, an analysis of the calibration data for the same tank is presented, establishing bounds for the errors and suggesting a means of reducing them. Thirdly, a plant-specific method of producing an optimal statistic from the input, output and inventory data, to help decide between 'material loss' and 'no loss' hypotheses, is developed and compared with existing general techniques. Finally, an application of the Kalman Filter to materials accountancy is developed, to demonstrate the advantages of state-estimation techniques. The results of the analyses and comparisons illustrate the importance of taking into account a complete and accurate knowledge of the plant operation, measurement system, and calibration methods, to derive meaningful results from statistical tests on materials accountancy data, and to give a better understanding of critical random and systematic error sources. The analyses were carried out on the head-end of the Fast Reactor Reprocessing Plant, where fuel from the prototype fast reactor is cut up and dissolved. However, the techniques described are general in their application. (author)
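
    The state-estimation idea referred to above can be illustrated with a minimal scalar Kalman filter tracking a (here constant) inventory from noisy measurements. This is a generic sketch, not the thesis's formulation; all numbers are invented.

```python
import numpy as np

def kalman_1d(z, q, r, x0=0.0, p0=1e4):
    """Scalar Kalman filter for a random-walk state observed with noise.

    q: process-noise variance, r: measurement-noise variance."""
    x, p, out = x0, p0, []
    for zi in z:
        p = p + q                # predict: state uncertainty grows by q
        k = p / (p + r)          # Kalman gain
        x = x + k * (zi - x)     # update with the measurement residual
        p = (1 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(3)
z = 100.0 + rng.normal(0, 2.0, 50)   # noisy accountancy measurements of a 100 kg inventory
est = kalman_1d(z, q=0.01, r=4.0)
print(est[-1])                       # settles near the true mass of 100
```

    In accountancy applications the same recursion runs on material-balance sequences, so that a persistent loss shows up as a drift in the filtered state rather than a single-period alarm.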

  15. Using robust statistics to improve neutron activation analysis results

    International Nuclear Information System (INIS)

    Zahn, Guilherme S.; Genezini, Frederico A.; Ticianelli, Regina B.; Figueiredo, Ana Maria G.

    2011-01-01

    Neutron activation analysis (NAA) is an analytical technique where an unknown sample is submitted to a neutron flux in a nuclear reactor, and its elemental composition is calculated by measuring the induced activity produced. By using the relative NAA method, one or more well-characterized samples (usually certified reference materials - CRMs) are irradiated together with the unknown ones, and the concentration of each element is then calculated by comparing the areas of the gamma-ray peaks related to that element. When two or more CRMs are used as reference, the concentration of each element can be determined in several different ways, either using more than one gamma-ray peak for that element (when available), or using the results obtained in the comparison with each CRM. Therefore, determining the best estimate for the concentration of each element in the sample can be a delicate issue. In this work, samples from three CRMs were irradiated together and the elemental concentration in one of them was calculated using the other two as reference. Two sets of peaks were analyzed for each element: a smaller set containing only the literature-recommended gamma-ray peaks and a larger one containing all peaks related to that element that could be quantified in the gamma-ray spectra; the most recommended transition was also used as a benchmark. The resulting data for each element were then reduced using up to five different statistical approaches: the usual (and not robust) unweighted and weighted means, together with three robust means: the Limitation of Relative Statistical Weight, Normalized Residuals and Rajeval. The resulting concentration values were then compared to the certified value for each element, allowing for discussion on both the performance of each statistical tool and the best choice of peaks for each element. (author)
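
    The uncertainty-inflation idea behind robust means such as Normalized Residuals can be sketched as follows. This is a deliberately simplified version: the threshold, inflation rule, and data are illustrative, not the published Normalized Residuals or Rajeval algorithms.

```python
import numpy as np

def weighted_mean(x, s):
    """Inverse-variance weighted mean and its standard uncertainty."""
    w = 1.0 / np.asarray(s, float) ** 2
    return np.sum(w * x) / np.sum(w), np.sqrt(1.0 / np.sum(w))

def robust_mean(x, s, limit=2.0, rounds=5):
    """Simplified sketch: inflate the quoted uncertainty of discrepant values
    (|residual|/sigma > limit) and recombine, so outliers lose weight."""
    x = np.asarray(x, float)
    s = np.asarray(s, float).copy()
    for _ in range(rounds):
        m, _ = weighted_mean(x, s)
        r = np.abs(x - m) / s
        if np.all(r <= limit):
            break
        s[r > limit] *= r[r > limit] / limit   # inflate outlier uncertainties
    return weighted_mean(x, s)

# Made-up concentrations from several gamma-ray peaks, one discrepant value
peaks = [10.2, 10.4, 10.3, 14.0]
sigma = [0.2, 0.2, 0.2, 0.3]
m, sm = robust_mean(peaks, sigma)
print(m)   # pulled back toward the consistent cluster near 10.3
```

    The plain weighted mean of the same data is dragged upward by the outlier; the robust combination largely ignores it, which is the behavior these methods trade against the efficiency of the ordinary weighted mean.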

  16. Analysis of neutron flux measurement systems using statistical functions

    International Nuclear Information System (INIS)

    Pontes, Eduardo Winston

    1997-01-01

    This work develops an integrated analysis for neutron flux measurement systems using the concepts of cumulants and spectra. Its major contribution is the generalization of Campbell's theorem in the form of spectra in the frequency domain, and its application to the analysis of neutron flux measurement systems. Campbell's theorem, in its generalized form, constitutes an important tool, not only to find the nth-order frequency spectra of the radiation detector, but also in the system analysis. The radiation detector, an ionization chamber for neutrons, is modeled for cylindrical, plane and spherical geometries. The detector current pulses are characterized by a vector of random parameters, and the associated charges, statistical moments and frequency spectra of the resulting current are calculated. A computer program is developed for application of the proposed methodology. In order for the analysis to integrate the associated electronics, the signal processor is studied, considering analog and digital configurations. The analysis is unified by developing the concept of equivalent systems that can be used to describe the cumulants and spectra in analog or digital systems. The noise in the signal processor input stage is analysed in terms of second order spectrum. Mathematical expressions are presented for cumulants and spectra up to fourth order, for important cases of filter positioning relative to detector spectra. Unbiased conventional estimators for cumulants are used, and, to evaluate systems precision and response time, expressions are developed for their variances. Finally, some possibilities for obtaining neutron radiation flux as a function of cumulants are discussed. In summary, this work proposes some analysis tools which make possible important decisions in the design of better neutron flux measurement systems. (author)
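
    Unbiased cumulant estimation of the kind used above is available directly via k-statistics. The sketch below runs on simulated pulse amplitudes; the exponential pulse model is only an illustration, not the ionization-chamber model of the thesis.

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(2)
pulses = rng.exponential(1.0, 100_000)  # stand-in for detector pulse amplitudes

# k-statistics are unbiased estimators of the cumulants kappa_1..kappa_4;
# for an exponential(1) distribution the true cumulants are 1, 1, 2, 6.
k = [kstat(pulses, n) for n in (1, 2, 3, 4)]
print(np.round(k, 2))
```

    The higher-order estimates fluctuate much more than the mean and variance, which is why the work above also derives variance expressions for the cumulant estimators to judge precision and response time.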

  17. Analysis of Official Suicide Statistics in Spain (1910-2011)

    Directory of Open Access Journals (Sweden)

    2017-01-01

    Full Text Available In this article we examine the evolution of suicide rates in Spain from 1910 to 2011. As something new, we use standardised suicide rates, making them perfectly comparable geographically and in time, as they no longer reflect population structure. Using historical data from a series of socioeconomic variables for all Spain's provinces and applying new techniques for the statistical analysis of panel data, we are able to confirm many of the hypotheses established by Durkheim at the end of the 19th century, especially those related to fertility and marriage rates, age, sex and the aging index. Our findings, however, contradict Durkheim's approach regarding the impact of urbanisation processes and poverty on suicide.

  18. Detecting fire in video stream using statistical analysis

    Directory of Open Access Journals (Sweden)

    Koplík Karel

    2017-01-01

    Full Text Available Real-time fire detection in video streams is one of the most interesting problems in computer vision. In fact, in many cases it would be desirable to have a fire detection algorithm implemented in standard industrial cameras, and/or to be able to replace standard industrial cameras with ones implementing the fire detection algorithm. In this paper, we present a new algorithm for detecting fire in video. The algorithm is based on tracking suspicious regions in time with statistical analysis of their trajectories. False alarms are minimized by combining multiple detection criteria: pixel brightness, trajectories of suspicious regions for evaluating the characteristic fire flickering, and persistence of the alarm state in a sequence of frames. The resulting implementation is fast and can therefore run on a wide range of affordable hardware.

  19. Wavelet Statistical Analysis of Low-Latitude Geomagnetic Measurements

    Science.gov (United States)

    Papa, A. R.; Akel, A. F.

    2009-05-01

    Following previous works by our group (Papa et al., JASTP, 2006), where we analyzed a series of records acquired at the Vassouras National Geomagnetic Observatory in Brazil for the month of October 2000, we introduce a wavelet analysis for the same type of data and for other periods. It is well known that wavelets allow a more detailed study in several senses: the time window for analysis can be drastically reduced compared to other traditional methods (Fourier, for example), while at the same time allowing an almost continuous tracking of both the amplitude and the frequency of signals as time goes by. This advantage opens some possibilities for potentially useful forecasting methods of the type also advanced by our group in previous works (see, for example, Papa and Sosman, JASTP, 2008). However, the simultaneous statistical analysis of both time series (in our case amplitude and frequency) is a challenging matter, and it is in this sense that we have framed what we consider our main goal. Some possible directions for future work are outlined.

  20. On the analysis of line profile variations: A statistical approach

    International Nuclear Information System (INIS)

    McCandliss, S.R.

    1988-01-01

    This study is concerned with the empirical characterization of the line profile variations (LPV) which occur in many Of and Wolf-Rayet stars. The goal of the analysis is to gain insight into the physical mechanisms producing the variations. The analytic approach uses a statistical method to quantify the significance of the LPV and to identify those regions in the line profile which are undergoing statistically significant variations. Line positions and flux variations are then measured and subjected to temporal and correlative analysis. Previous studies of LPV have for the most part been restricted to observations of a single line. Important information concerning the range and amplitude of the physical mechanisms involved can be obtained by simultaneously observing spectral features formed over a range of depths in the extended mass-losing atmospheres of massive, luminous stars. Time series of a Wolf-Rayet star and two Of stars with nearly complete spectral coverage from 3940 angstrom to 6610 angstrom and with spectral resolution of R = 10,000 are analyzed here. These three stars exhibit a wide range of both spectral and temporal line profile variations. The HeII Pickering lines of HD 191765 show a monotonic increase in the peak rms variation amplitude with lines formed at progressively larger radii in the Wolf-Rayet star wind. Two time scales of variation have been identified in this star: a less than one day variation associated with small scale flickering in the peaks of the line profiles and a greater than one day variation associated with large scale asymmetric changes in the overall line profile shapes. However, no convincing periodic phenomena are evident at those periods which are well sampled in this time series

  1. A STATISTICAL ANALYSIS OF LARYNGEAL MALIGNANCIES AT OUR INSTITUTION

    Directory of Open Access Journals (Sweden)

    Bharathi Mohan Mathan

    2017-03-01

    Full Text Available BACKGROUND Malignancies of the larynx are an increasing global burden, comprising approximately 2-5% of all malignancies, with an incidence of 3.6/100,000 for men and 1.3/100,000 for women and a male-to-female ratio of 4:1. Smoking and alcohol are major established risk factors. More than 90-95% of all laryngeal malignancies are of squamous cell type. The three main subsites of laryngeal malignancies are the glottis, supraglottis and subglottis. Improved surgical techniques and advanced chemoradiotherapy have increased the overall 5-year survival rate. The above study is a statistical analysis of laryngeal malignancies at our institution for a period of one year, analysing the pattern of distribution, aetiology, sites and subsites, and causes of recurrence. MATERIALS AND METHODS Based on the statistical data available in the institution for the one-year period from January 2016 to December 2016, all laryngeal malignancies were analysed with respect to demographic pattern, age, gender, site, subsite, aetiology, staging, treatment received and probable cause of treatment failure. Patients were followed up for a 12-month period during the study. RESULTS The total number of cases studied is 27 (twenty-seven). There were 23 male and 4 female cases, a male-to-female ratio of 5.7:1; the most common age is above 60 years, the most common site is the supraglottis, the most common type is moderately-differentiated squamous cell carcinoma, and the most common cause of relapse or recurrence is advanced stage of disease and poor differentiation. CONCLUSION The commonest age of occurrence at the end of the study is above 60 years and the male-to-female ratio is 5.7:1, which is slightly above the international standards. The most common site is the supraglottis and not the glottis. Relapses and recurrences are higher compared to the international standards.

  2. Statistical Distribution Analysis of Lineated Bands on Europa

    Science.gov (United States)

    Chen, T.; Phillips, C. B.; Pappalardo, R. T.

    2016-12-01

    Europa's surface is covered with intriguing linear and disrupted features, including lineated bands that range in scale and size. Previous studies have shown the possibility of an icy shell at the surface that may be concealing a liquid ocean with the potential to harbor life (Pappalardo et al., 1999). Utilizing the high-resolution imaging data from the Galileo spacecraft, we examined bands through a morphometric and morphologic approach. Greeley et al. (2000) and Prockter et al. (2002) have defined bands as wide, hummocky to lineated features that have distinctive surface texture and albedo compared to the surrounding terrain. We took morphometric measurements of lineated bands to find correlations in properties such as size, location, and orientation, and to shed light on formation models. We will present our measurements of over 100 bands on Europa that were mapped on the USGS Europa Global Mosaic Base Map (2002). We also conducted a statistical analysis to understand the distribution of lineated bands globally, and whether the widths of the bands differ by location. Our preliminary analysis from our statistical distribution evaluation, combined with the morphometric measurements, supports a uniform ice shell thickness for Europa rather than one that varies geographically. References: Greeley, Ronald, et al. "Geologic mapping of Europa." Journal of Geophysical Research: Planets 105.E9 (2000): 22559-22578.; Pappalardo, R. T., et al. "Does Europa have a subsurface ocean? Evaluation of the geological evidence." Journal of Geophysical Research: Planets 104.E10 (1999): 24015-24055.; Prockter, Louise M., et al. "Morphology of Europan bands at high resolution: A mid-ocean ridge-type rift mechanism." Journal of Geophysical Research: Planets 107.E5 (2002).; U.S. Geological Survey, 2002, Controlled photomosaic map of Europa, Je 15M CMN: U.S. Geological Survey Geologic Investigations Series I-2757, available at http

  3. Spectral signature verification using statistical analysis and text mining

    Science.gov (United States)

    DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.

    2016-05-01

    In the spectral science community, numerous spectral signatures are stored in databases representing many sample materials, collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database (SigDB) are proposed. The numerical data comprising a sample material's spectrum are validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum are qualitatively analyzed using lexical analysis text mining. This technique analyzes the syntax of the meta-data to provide local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security (text encryption/decryption), biomedical, and marketing applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high- and low-quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need of an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is
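
    The spectral angle mapper comparison mentioned above is simply the angle between a test spectrum and a reference (here, the population mean). A minimal sketch with toy spectra, not SigDB data:

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper: angle (radians) between two spectra.

    Invariant to overall gain, so it compares spectral *shape* only."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

mean_spectrum = np.array([0.1, 0.4, 0.8, 0.3])   # population mean (toy values)
test_spectrum = 2.0 * mean_spectrum              # same shape, different gain
print(spectral_angle(mean_spectrum, test_spectrum))  # ~0: shapes are identical
```

    Because SAM ignores illumination-like gain differences, a small angle against the population mean is a shape-based quality score, which is what the ranking above exploits.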

  4. Classification of Malaysia aromatic rice using multivariate statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A. [School of Mechatronic Engineering, Universiti Malaysia Perlis, Kampus Pauh Putra, 02600 Arau, Perlis (Malaysia); Omar, O. [Malaysian Agriculture Research and Development Institute (MARDI), Persiaran MARDI-UPM, 43400 Serdang, Selangor (Malaysia)

    2015-05-15

    Aromatic rice (Oryza sativa L.) is considered the best-quality premium rice. Its varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice due to the special growth conditions it requires, for instance a specific climate and soil. Presently, aromatic rice quality is identified by using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography-Mass Spectrometry (GC-MS) or human sensory panels. However, the use of human sensory panels has significant drawbacks: lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. The GC-MS analysis techniques, on the other hand, require detailed procedures and lengthy analysis and are quite costly. This paper presents the application of an in-house developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The samples were taken from several rice varieties. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots of the rice proves that the instrument was able to separate the samples into different clusters accordingly. The results of LDA and KNN, with low misclassification error, support the above findings, and we may conclude that the e-nose is successfully applied to the classification of the aromatic rice varieties.
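
    The PCA-then-KNN pipeline with Leave-One-Out validation described above can be sketched with scikit-learn. The sensor readings below are simulated stand-ins for e-nose channels; class separation, sample counts, and component numbers are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Simulated e-nose data: 3 rice varieties x 20 samples x 8 sensor channels
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 8)) for i in range(3)])
y = np.repeat([0, 1, 2], 20)

# PCA for dimensionality reduction, then KNN; validated with Leave-One-Out,
# i.e. each sample is classified by a model trained on all the others.
pipe = make_pipeline(PCA(n_components=4), KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(pipe, X, y, cv=LeaveOneOut()).mean()
print(acc)  # near 1.0 for these well-separated simulated clusters
```

    LOO is attractive for small e-nose datasets because every sample serves as a test case exactly once, at the cost of one model fit per sample.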

  5. Inappropriate self-medication among adolescents and its association with lower medication literacy and substance use.

    Science.gov (United States)

    Lee, Chun-Hsien; Chang, Fong-Ching; Hsu, Sheng-Der; Chi, Hsueh-Yun; Huang, Li-Jung; Yeh, Ming-Kung

    2017-01-01

    While self-medication is common, inappropriate self-medication has potential risks. This study assesses inappropriate self-medication among adolescents and examines the relationships among medication literacy, substance use, and inappropriate self-medication. In 2016, a national representative sample of 6,226 students from 99 primary, middle, and high schools completed an online self-administered questionnaire. Multiple logistic regression analysis was used to examine factors related to inappropriate self-medication. The prevalence of self-medication in the past year among the adolescents surveyed was 45.8%, and the most frequently reported drugs for self-medication included nonsteroidal anti-inflammatory drugs or pain relievers (prevalence = 31.1%), cold or cough medicines (prevalence = 21.6%), analgesics (prevalence = 19.3%), and antacids (prevalence = 17.3%). Of the participants who practiced self-medication, the prevalence of inappropriate self-medication behaviors included not reading drug labels or instructions (10.1%), using excessive dosages (21.6%), and using prescription and nonprescription medicine simultaneously without advice from a health provider (polypharmacy) (30.3%). The results of multiple logistic regression analysis showed that after controlling for school level, gender, and chronic diseases, the participants with lower medication knowledge, lower self-efficacy, lower medication literacy, and who consumed tobacco or alcohol were more likely to engage in inappropriate self-medication. Lower medication literacy and substance use were associated with inappropriate self-medication among adolescents.
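
    A multiple logistic regression of the kind reported above can be sketched with statsmodels. The data are simulated; the predictor names, coding, and coefficients are hypothetical illustrations of the study's direction of effects, not its estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2000
literacy = rng.normal(0, 1, n)          # medication literacy score (standardized, simulated)
substance = rng.integers(0, 2, n)       # tobacco/alcohol use flag (simulated)

# Simulated outcome: inappropriate self-medication, less likely with higher
# literacy, more likely with substance use
logits = -1.0 - 0.8 * literacy + 0.9 * substance
p = 1.0 / (1.0 + np.exp(-logits))
df = pd.DataFrame({"y": rng.binomial(1, p),
                   "literacy": literacy, "substance": substance})

fit = smf.logit("y ~ literacy + substance", data=df).fit(disp=0)
print(fit.params)  # literacy coefficient negative, substance coefficient positive
```

    Exponentiating the fitted coefficients gives the adjusted odds ratios that such studies report after controlling for covariates.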

  6. Inappropriate self-medication among adolescents and its association with lower medication literacy and substance use.

    Directory of Open Access Journals (Sweden)

    Chun-Hsien Lee

    Full Text Available While self-medication is common, inappropriate self-medication has potential risks. This study assesses inappropriate self-medication among adolescents and examines the relationships among medication literacy, substance use, and inappropriate self-medication. In 2016, a national representative sample of 6,226 students from 99 primary, middle, and high schools completed an online self-administered questionnaire. Multiple logistic regression analysis was used to examine factors related to inappropriate self-medication. The prevalence of self-medication in the past year among the adolescents surveyed was 45.8%, and the most frequently reported drugs for self-medication included nonsteroidal anti-inflammatory drugs or pain relievers (prevalence = 31.1%), cold or cough medicines (prevalence = 21.6%), analgesics (prevalence = 19.3%), and antacids (prevalence = 17.3%). Of the participants who practiced self-medication, the prevalence of inappropriate self-medication behaviors included not reading drug labels or instructions (10.1%), using excessive dosages (21.6%), and using prescription and nonprescription medicine simultaneously without advice from a health provider (polypharmacy) (30.3%). The results of multiple logistic regression analysis showed that after controlling for school level, gender, and chronic diseases, the participants with lower medication knowledge, lower self-efficacy, lower medication literacy, and who consumed tobacco or alcohol were more likely to engage in inappropriate self-medication. Lower medication literacy and substance use were associated with inappropriate self-medication among adolescents.

  7. A Statistic Analysis Of Romanian Seaside Hydro Tourism

    OpenAIRE

    Secara Mirela

    2011-01-01

    Tourism represents one of the ways of spending spare time for rest, recreation, treatment and entertainment, and the specific aspect of Constanta County's economy is the touristic and spa capitalization of the Romanian seaside. In order to analyze hydro tourism on the Romanian seaside we have used statistical indicators of tourism as well as statistical methods such as chronological series, interdependent statistical series, regression and statistical correlation. The major objective of this research is to rai...

  8. Tucker tensor analysis of Matern functions in spatial statistics

    KAUST Repository

    Litvinenko, Alexander

    2018-04-20

    Low-rank Tucker tensor methods in spatial statistics:
    1. Motivation: improve statistical models
    2. Motivation: disadvantages of matrices
    3. Tools: Tucker tensor format
    4. Tensor approximation of Matern covariance function via FFT
    5. Typical statistical operations in Tucker tensor format
    6. Numerical experiments

  9. Higher order statistical frequency domain decomposition for operational modal analysis

    Science.gov (United States)

    Nita, G. M.; Mahgoub, M. A.; Sharyatpanahi, S. G.; Cretu, N. C.; El-Fouly, T. M.

    2017-02-01

    Experimental methods based on modal analysis under ambient vibrational excitation are often employed to detect structural damage in mechanical systems. Many such frequency domain methods, such as Basic Frequency Domain (BFD), Frequency Domain Decomposition (FDD), or Enhanced Frequency Domain Decomposition (EFDD), use as a first step a Fast Fourier Transform (FFT) estimate of the power spectral density (PSD) associated with the response of the system. In this study it is shown that higher order statistical estimators such as Spectral Kurtosis (SK) and Sample to Model Ratio (SMR) may be successfully employed not only to more reliably discriminate the response of the system against the ambient noise fluctuations, but also to better identify and separate contributions from closely spaced individual modes. It is shown that an SMR-based Maximum Likelihood curve-fitting algorithm may improve the accuracy of the spectral shape and location of the individual modes and, when combined with the SK analysis, provides efficient means to categorize such individual spectral components according to their temporal dynamics as coherent or incoherent system responses to unknown ambient excitations.
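
    A minimal spectral kurtosis estimator can be written directly from segment-averaged FFTs. The sketch below uses a common SK normalization in which stationary Gaussian noise gives values near 0 and a coherent spectral line is driven toward -1; the test signal is invented and this is not the paper's estimator.

```python
import numpy as np

def spectral_kurtosis(x, nperseg=256):
    """Per-bin SK estimate: <|X|^4> / <|X|^2>^2 - 2 across FFT segments."""
    nseg = len(x) // nperseg
    segs = x[: nseg * nperseg].reshape(nseg, nperseg)
    X = np.fft.rfft(segs, axis=1)
    s2 = np.mean(np.abs(X) ** 2, axis=0)
    s4 = np.mean(np.abs(X) ** 4, axis=0)
    return s4 / s2 ** 2 - 2.0

rng = np.random.default_rng(0)
t = np.arange(256 * 200)
noise = rng.normal(0, 1, t.size)                  # stationary Gaussian background
tone = 0.5 * np.sin(2 * np.pi * t * (32 / 256))   # coherent line at FFT bin 32
sk = spectral_kurtosis(noise + tone)
print(sk[32])   # the coherent bin is pulled well below the Gaussian level of 0
```

    This per-bin behavior is what lets SK separate coherent system responses from incoherent ambient noise before any mode fitting is attempted.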

  10. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates they can explain. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  11. Criminal victimization in Ukraine: analysis of statistical data

    Directory of Open Access Journals (Sweden)

    Serhiy Nezhurbida

    2007-12-01

    Full Text Available The article is based on the analysis of statistical data provided by law-enforcement, judicial and other bodies of Ukraine. The analysis gives an accurate quantitative picture of the current status of criminal victimization in Ukraine and characterizes its basic features (level, rate, structure, dynamics, etc.).

  12. A statistical method for draft tube pressure pulsation analysis

    International Nuclear Information System (INIS)

    Doerfler, P K; Ruchonnet, N

    2012-01-01

    Draft tube pressure pulsation (DTPP) in Francis turbines is composed of various components originating from different physical phenomena. These components may be separated because they differ in their spatial relationships and in their propagation mechanisms. The first step of such an analysis is to distinguish between so-called synchronous and asynchronous pulsations; only approximately periodic phenomena can be described in this manner. However, less regular pulsations are always present, and these become important when turbines have to operate in the far off-design range, in particular at very low load. The statistical method described here permits separation of the stochastic (random) component from the two traditional 'regular' components. It works in connection with the standard technique of model testing, with several pressure signals measured in the draft tube cone. The difference between the individual signals and the averaged pressure signal, together with the coherence between the individual pressure signals, is used for the analysis. An example reveals that a generalized, non-periodic version of the asynchronous pulsation is important at low load.
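
    The synchronous/asynchronous split rests on averaging several cone pressure signals: phase-aligned (synchronous) content survives the average, while rotating and random parts cancel across sensors. A toy sketch with four sensors at 90-degree spacing; all signal parameters are invented, and this is only the averaging step, not the paper's coherence-based method.

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
sync = 0.5 * np.sin(2 * np.pi * 4 * t)                 # synchronous (plunging) component
phases = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]        # rotating component seen 90 deg apart

# Four simulated cone pressure signals: common part + rotating part + sensor noise
signals = np.array([
    sync + 0.3 * np.sin(2 * np.pi * 1 * t + ph) + rng.normal(0, 0.05, t.size)
    for ph in phases
])

synchronous = signals.mean(axis=0)     # phase-aligned part survives averaging
asynchronous = signals - synchronous   # per-sensor remainder (rotating + random)
print(np.std(synchronous))             # close to the RMS of the plunging component
```

    The stochastic component discussed in the paper then has to be extracted from the remainder, which is where the coherence between individual sensor signals comes in.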

  13. Latest Results From the QuakeFinder Statistical Analysis Framework

    Science.gov (United States)

    Kappler, K. N.; MacLean, L. S.; Schneider, D.; Bleier, T.

    2017-12-01

    Since 2005 QuakeFinder (QF) has acquired a unique dataset with outstanding spatial and temporal sampling of the Earth's magnetic field along several active fault systems. The QF network consists of 124 stations in California and 45 stations along fault zones in Greece, Taiwan, Peru, Chile and Indonesia. Each station is equipped with three feedback induction magnetometers, two ion sensors, a 4 Hz geophone, a temperature sensor, and a humidity sensor. Data are continuously recorded at 50 Hz with GPS timing and transmitted daily to the QF data center in California for analysis. QF is attempting to detect and characterize anomalous EM activity occurring ahead of earthquakes. There have been many reports of anomalous variations in the Earth's magnetic field preceding earthquakes; specifically, several authors have drawn attention to apparent anomalous pulsations seen preceding earthquakes. Studies involving long-term monitoring of seismic activity are often limited by the availability of event data, and it is particularly difficult to acquire a large dataset for rigorous statistical analyses of the magnetic field near earthquake epicenters because large events are relatively rare. Since QF has recorded hundreds of earthquakes in more than 70 TB of data, we developed an automated approach, and an accompanying algorithmic framework, for assessing the statistical significance of precursory behavior. Previously QF reported on the development of this Algorithmic Framework for data processing and hypothesis testing. The particular algorithm instance we discuss identifies and counts magnetic variations in time series data and ranks each station-day according to the aggregate number of pulses in a time window preceding the day in question. If the hypothesis is true that magnetic field activity increases over some time interval preceding earthquakes, this should reveal itself by the station-days on which earthquakes occur receiving higher ranks than they would if the ranking scheme were random. 
This can
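The ranking logic described above can be sketched as a simple permutation test. This is not QF's actual algorithm, only an illustration of the rank idea with invented pulse counts:

```python
import numpy as np

def eq_day_rank_pvalue(pulse_counts, eq_days, n_perm=2000, seed=0):
    """Rank station-days by daily pulse count and test whether the
    earthquake days receive a higher mean rank than random days would."""
    rng = np.random.default_rng(seed)
    ranks = np.argsort(np.argsort(pulse_counts))   # 0 = fewest pulses
    observed = ranks[eq_days].mean()
    perm = np.array([rng.choice(ranks, size=len(eq_days), replace=False).mean()
                     for _ in range(n_perm)])
    p_value = (perm >= observed).mean()            # one-sided
    return observed, p_value

# invented pulse counts for 100 station-days; the three "earthquake days"
# happen to be the most pulse-active days, so the test should reject
counts = np.arange(100.0)
obs, p = eq_day_rank_pvalue(counts, [97, 98, 99])
```

Under the null hypothesis (no precursory activity) the earthquake days are an ordinary random subset of station-days, which is exactly what the permutation resampling mimics.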

  14. Statistical analysis of cone penetration resistance of railway ballast

    Science.gov (United States)

    Saussine, Gilles; Dhemaied, Amine; Delforge, Quentin; Benfeddoul, Selim

    2017-06-01

    Dynamic penetrometer tests are widely used in geotechnical studies for soil characterization, but their implementation tends to be difficult. The light penetrometer test can give information about a cone resistance that is useful in the field of geotechnics and was recently validated as a parameter for coarse granular materials. In order to characterize the railway ballast on track, and the sublayers of ballast, directly, a huge test campaign has been carried out over more than 5 years to build up a database of 19,000 penetration tests, including endoscopic video records, on the French railway network. The main objective of this work is to give a first statistical analysis of cone resistance in the coarse granular layer which represents a major component of railway track: the ballast. The results show that the cone resistance (qd) increases with depth and presents strong variations corresponding to layers of different natures identified using the endoscopic records. In the first zone, corresponding to the top 30 cm, (qd) increases linearly with a slope of around 1 MPa/cm for both fresh and fouled ballast. In the second zone, below 30 cm deep, (qd) increases more slowly, with a slope of around 0.3 MPa/cm, and decreases below 50 cm. These results show that there is no clear difference between fresh and fouled ballast. The variability of (qd) is considerable and increases with depth. The (qd) distribution for a set of tests does not follow a normal distribution. In the upper 30 cm layer of ballast, statistical treatment of the data shows that train load and speed do not have any significant impact on the (qd) distribution for clean ballast; for fouled ballast they increase the average value of (qd) by 50% and also increase the layer thickness. Below the upper 30 cm layer, train load and speed have a clear impact on the (qd) distribution.
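The layer-wise slope estimation can be sketched with a least-squares fit per depth window. The data below are synthetic, built from the slopes reported in the abstract; a field log would of course be noisy:

```python
import numpy as np

# synthetic penetration log: ~1 MPa/cm down to 30 cm, ~0.3 MPa/cm below
depth = np.arange(0.0, 60.0, 1.0)                            # cm
qd = np.where(depth <= 30, depth, 30 + 0.3 * (depth - 30))   # MPa

def layer_slope(depth, qd, zmin, zmax):
    """Least-squares slope of qd over a depth window, in MPa/cm."""
    m = (depth >= zmin) & (depth <= zmax)
    return np.polyfit(depth[m], qd[m], 1)[0]

top_slope = layer_slope(depth, qd, 0, 30)     # ~1.0 MPa/cm
low_slope = layer_slope(depth, qd, 31, 59)    # ~0.3 MPa/cm
```

On real logs the breakpoints between layers would themselves be estimated (here they are taken from the endoscopic layer identification described above).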

  15. STATISTICAL ANALYSIS OF RAW SUGAR MATERIAL FOR SUGAR PRODUCER COMPLEX

    Directory of Open Access Journals (Sweden)

    A. A. Gromkovskii

    2015-01-01

    Full Text Available Summary. The article examines statistical data on the development of the average weight and average sugar content of sugar beet roots. Successful forecasting of these raw-material indices is essential for the control of a sugar-producing complex. By calculating the autocorrelation function, the paper demonstrates that a trend component predominates in the growth of the raw-material characteristics. To construct the prediction model, first- and second-order autoregression is proposed. It is shown that, despite the small amount of experimental data provided by the laboratories of sugar-producing enterprises, the use of autoregression is justified. The proposed model correctly reproduces the dynamics of the raw-material indices over time, which is confirmed by the estimates. The article highlights that when trend components predominate in the dynamics of the studied sugar beet characteristics, the proposed prediction models provide better forecast quality. When the curve describing the raw-material indices contains oscillating portions, a larger number of measurements is required to construct a good forecast. The article also presents results of applying Brown's adaptive prediction model to forecasting the sugar beet raw-material indices. The statistical analysis supports the conclusion that the quality is sufficient to describe changes in the raw-material indices for forecasting purposes. Optimal data discount rates are identified, determined by the shape of the growth curves of the sugar content and mass of the beet root during maturation. Conclusions about forecast quality are formulated depending on these factors, which the expert forecaster determines. The article presents calculated expressions, derived from experimental data, that allow the raw-material characteristics of sugar beet to be computed during maturation.
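The second-order autoregression proposed above can be sketched with ordinary least squares. The short series below is invented for illustration (e.g. root mass in grams on successive observation dates); it is trend-dominated, the situation the abstract says favors this model:

```python
import numpy as np

def fit_ar2(x):
    """Fit x[t] = a1*x[t-1] + a2*x[t-2] + c by least squares and return
    the coefficients together with a one-step-ahead forecast."""
    X = np.column_stack([x[1:-1], x[:-2], np.ones(len(x) - 2)])
    y = x[2:]
    a1, a2, c = np.linalg.lstsq(X, y, rcond=None)[0]
    return (a1, a2, c), a1 * x[-1] + a2 * x[-2] + c

# invented trend-dominated series of raw-material measurements
x = np.array([210.0, 230.0, 252.0, 275.0, 299.0, 324.0, 350.0])
coef, forecast = fit_ar2(x)
```

With only a handful of laboratory measurements, as here, the least-squares fit is still well posed because the AR(2) model has just three parameters, which is the point made in the abstract.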

  16. Vector field statistical analysis of kinematic and force trajectories.

    Science.gov (United States)

    Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos

    2013-09-27

    When investigating the dynamics of three-dimensional multi-body biomechanical systems it is often difficult to derive spatiotemporally directed predictions regarding experimentally induced effects. A paradigm of 'non-directed' hypothesis testing has emerged in the literature as a result. Non-directed analyses typically consist of ad hoc scalar extraction, an approach which substantially simplifies the original, highly multivariate datasets (many time points, many vector components). This paper describes a commensurately multivariate method as an alternative to scalar extraction. The method, called 'statistical parametric mapping' (SPM), uses random field theory to objectively identify field regions which co-vary significantly with the experimental design. We compared SPM to scalar extraction by re-analyzing three publicly available datasets: 3D knee kinematics, a ten-muscle force system, and 3D ground reaction forces. Scalar extraction was found to bias the analyses of all three datasets by failing to consider sufficient portions of the dataset, and/or by failing to consider covariance amongst vector components. SPM overcame both problems by conducting hypothesis testing at the (massively multivariate) vector trajectory level, with random field corrections simultaneously accounting for temporal correlation and vector covariance. While SPM has been widely demonstrated to be effective for analyzing 3D scalar fields, the current results are the first to demonstrate its effectiveness for 1D vector field analysis. It was concluded that SPM offers a generalized, statistically comprehensive solution to scalar extraction's over-simplification of vector trajectories, thereby making it useful for objectively guiding analyses of complex biomechanical systems. © 2013 Published by Elsevier Ltd. All rights reserved.
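The vector-trajectory testing idea can be illustrated with a pointwise two-sample Hotelling's T² statistic. This sketch deliberately omits SPM's random field correction (the paper's key contribution) and uses invented trajectory data:

```python
import numpy as np

def hotelling_t2_trajectory(ya, yb):
    """Pointwise two-sample Hotelling's T^2 along vector trajectories.
    ya, yb: arrays of shape (n_subjects, n_time, n_components)."""
    na, nt, nc = ya.shape
    nb = yb.shape[0]
    t2 = np.empty(nt)
    for t in range(nt):
        a, b = ya[:, t, :], yb[:, t, :]
        d = a.mean(axis=0) - b.mean(axis=0)
        # pooled covariance of the vector components at this time point
        s = ((na - 1) * np.cov(a.T) + (nb - 1) * np.cov(b.T)) / (na + nb - 2)
        t2[t] = (na * nb) / (na + nb) * d @ np.linalg.solve(s, d)
    return t2

# invented trajectories: 20 subjects per group, 50 time points, 3 components,
# with a group difference injected in component 0 over the second half
rng = np.random.default_rng(0)
ya = rng.standard_normal((20, 50, 3))
yb = rng.standard_normal((20, 50, 3))
yb[:, 25:, 0] += 3.0
t2 = hotelling_t2_trajectory(ya, yb)
```

Because T² pools the component covariance, the statistic responds to covarying components that scalar extraction of one component at one time point would miss; thresholding this trajectory correctly is where random field theory enters in the actual method.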

  17. Statistical analysis of cone penetration resistance of railway ballast

    Directory of Open Access Journals (Sweden)

    Saussine Gilles

    2017-01-01

    Full Text Available Dynamic penetrometer tests are widely used in geotechnical studies for soil characterization, but their implementation tends to be difficult. The light penetrometer test can give information about a cone resistance that is useful in the field of geotechnics and was recently validated as a parameter for coarse granular materials. In order to characterize the railway ballast on track, and the sublayers of ballast, directly, a huge test campaign has been carried out over more than 5 years to build up a database of 19,000 penetration tests, including endoscopic video records, on the French railway network. The main objective of this work is to give a first statistical analysis of cone resistance in the coarse granular layer which represents a major component of railway track: the ballast. The results show that the cone resistance (qd) increases with depth and presents strong variations corresponding to layers of different natures identified using the endoscopic records. In the first zone, corresponding to the top 30 cm, (qd) increases linearly with a slope of around 1 MPa/cm for both fresh and fouled ballast. In the second zone, below 30 cm deep, (qd) increases more slowly, with a slope of around 0.3 MPa/cm, and decreases below 50 cm. These results show that there is no clear difference between fresh and fouled ballast. The variability of (qd) is considerable and increases with depth. The (qd) distribution for a set of tests does not follow a normal distribution. In the upper 30 cm layer of ballast, statistical treatment of the data shows that train load and speed do not have any significant impact on the (qd) distribution for clean ballast; for fouled ballast they increase the average value of (qd) by 50% and also increase the layer thickness. Below the upper 30 cm layer, train load and speed have a clear impact on the (qd) distribution.

  18. Tucker Tensor analysis of Matern functions in spatial statistics

    KAUST Repository

    Litvinenko, Alexander

    2018-03-09

    In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matern- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, determinant, trace, loglikelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations reduce the computing and storage costs substantially. For example, the storage cost is reduced from an exponential O(n^d) to a linear scaling O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for applicability of the proposed techniques are the assumptions that the data, locations, and measurements lie on a tensor (axis-parallel) grid and that the covariance function depends on a distance, ||x-y||.
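The storage scaling claimed above is easy to check arithmetically. A quick sketch comparing a full tensor on an n^d grid with rank-r canonical (CP) and Tucker formats:

```python
def full_storage(n, d):
    """Entries in a full d-dimensional tensor on an n^d grid."""
    return n ** d

def canonical_storage(n, d, r):
    """Rank-r canonical (CP) format: d factor matrices of size n x r."""
    return d * r * n

def tucker_storage(n, d, r):
    """Rank-r Tucker format: an r^d core plus d factor matrices."""
    return r ** d + d * r * n

n, d, r = 100, 3, 10
print(full_storage(n, d), canonical_storage(n, d, r), tucker_storage(n, d, r))
# 1000000 3000 4000
```

Even at this modest grid size the low-rank formats are hundreds of times cheaper, and the gap widens exponentially in d; the catch, as the abstract notes, is that the grid must be axis-parallel and the covariance a function of distance.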

  19. Development of computer-assisted instruction application for statistical data analysis android platform as learning resource

    Science.gov (United States)

    Hendikawati, P.; Arifudin, R.; Zahid, M. Z.

    2018-03-01

    This study aims to design an Android statistical data analysis application that can be accessed through mobile devices, making it easier for users to access. The application covers various basic statistics topics together with a parametric statistical data analysis component. The output of the system is parametric statistical data analysis that can be used by students, lecturers, and other users who need the results of statistical calculations quickly and in an easily understood form. The Android application is developed in the Java programming language; the server side uses PHP with the CodeIgniter framework, and the database uses MySQL. The system development methodology is the Waterfall model, with stages of analysis, design, coding, testing, implementation, and system maintenance. This statistical data analysis application is expected to support statistics lecturing activities and make it easier for students to perform statistical analysis on mobile devices.

  20. Statistical Analysis of Data with Non-Detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.

    2004-08-26

    Environmental exposure measurements are, in general, positive and may be subject to left censoring, i.e. the measured value is less than a ''limit of detection''. In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. A basic problem of interest in environmental risk assessment is to determine if the mean concentration of an analyte is less than a prescribed action level. Parametric methods, used to determine acceptable levels of exposure, are often based on a two parameter lognormal distribution. The mean exposure level and/or an upper percentile (e.g. the 95th percentile) are used to characterize exposure levels, and upper confidence limits are needed to describe the uncertainty in these estimates. In certain situations it is of interest to estimate the probability of observing a future (or ''missed'') value of a lognormal variable. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left censored lognormal data are described and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on the 95th percentile (i.e. the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known but computational complexity has limited their use in routine data analysis with left censored data. The recent development of the R environment for statistical
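The maximum likelihood idea for left-censored lognormal data can be sketched directly from its definition: detects contribute the normal density of log(x), while non-detects contribute the normal CDF at the log of the detection limit (parameter-free Jacobian terms are dropped, since they do not affect the argmax). The coarse grid search below is a stand-in for a proper optimizer, and all data are simulated:

```python
import math
import numpy as np

def censored_lognormal_nll(mu, sigma, logs_det, n_cen, log_lod):
    """Negative log-likelihood for left-censored lognormal data."""
    z = (log_lod - mu) / sigma
    # non-detects: n_cen * log Phi((log LOD - mu)/sigma)
    ll_cen = n_cen * math.log(0.5 * math.erfc(-z / math.sqrt(2.0)))
    # detects: normal log-density of log(x), constants dropped
    ll_det = (-logs_det.size * math.log(sigma)
              - ((logs_det - mu) ** 2).sum() / (2.0 * sigma ** 2))
    return -(ll_det + ll_cen)

# simulate exposures with true mu=0, sigma=1 and censor below the LOD
rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)
lod = math.exp(-1.0)
detected = x >= lod
logs_det = np.log(x[detected])
n_cen = int((~detected).sum())
grid = [(m, s) for m in np.linspace(-0.5, 0.5, 81)
               for s in np.linspace(0.5, 1.6, 81)]
mu_hat, sig_hat = min(grid, key=lambda p: censored_lognormal_nll(
    p[0], p[1], logs_det, n_cen, math.log(lod)))
```

The recovered (mu, sigma) then feed the quantities the report cares about, such as the mean exposure exp(mu + sigma²/2) and the 95th percentile exp(mu + 1.645·sigma).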

  1. Prevalence of inappropriate medication using Beers criteria in Japanese long-term care facilities

    Directory of Open Access Journals (Sweden)

    Yamada Yukari

    2006-01-01

    Full Text Available Abstract Background The prevalence and risk factors of potentially inappropriate medication use among elderly patients have been studied in various countries, but because of the difficulty of obtaining data on patient characteristics and medications they have not been studied in Japan. Methods We conducted a retrospective cross-sectional study in 17 Japanese long-term care (LTC) facilities by collecting data from the comprehensive MDS assessment forms for 1669 patients aged 65 years and over who were assessed between January and July of 2002. Potentially inappropriate medications were identified on the basis of the 2003 Beers criteria. Results The patients in the sample were similar in terms of demographic characteristics to those in the national survey. Our study revealed that 356 (21.1%) of the patients were treated with potentially inappropriate medication independent of disease or condition. The most commonly inappropriately prescribed medication was ticlopidine, which had been prescribed for 107 patients (6.3%). There were 300 (18.0%) patients treated with at least 1 inappropriate medication dependent on the disease or condition. The highest prevalence of inappropriate medication use dependent on the disease or condition was found in patients with chronic constipation. Multiple logistic regression analysis revealed psychotropic drug use (OR = 1.511), medication cost per day (OR = 1.173), number of medications (OR = 1.140), and age (OR = 0.981) as factors related to inappropriate medication use independent of disease or condition. Neither patient characteristics nor facility characteristics emerged as predictors of inappropriate prescription. Conclusion The prevalence and predictors of inappropriate medication use in Japanese LTC facilities were similar to those in other countries.

  2. Statistical analysis of CSP plants by simulating extensive meteorological series

    Science.gov (United States)

    Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana

    2017-06-01

    The feasibility analysis of any power plant project needs an estimate of the amount of energy the plant will be able to deliver to the grid during its lifetime. To achieve this, the feasibility study requires precise knowledge of the solar resource over a long-term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several probability-of-exceedance scenarios for the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by simulating the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results of this work show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
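The exceedance scenarios mentioned above can be computed directly from an annual series. A sketch with synthetic annual DNI values (the 34-year length mirrors the series used in the work; the numbers themselves are invented):

```python
import numpy as np

def exceedance(annual_values, p):
    """Value exceeded with probability p, e.g. P90 is exceeded in 90% of years."""
    return np.quantile(annual_values, 1.0 - p)

# synthetic 34-year annual DNI series (kWh/m2/year)
rng = np.random.default_rng(2)
annual_dni = rng.normal(2100.0, 100.0, size=34)
p50 = exceedance(annual_dni, 0.50)
p90 = exceedance(annual_dni, 0.90)
```

The paper's point is precisely that applying this one-dimensional calculation to annual DNI and mapping the result to energy yield ignores the intra-annual DNI distribution, which can shift the yield exceedance levels.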

  3. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts with the smallest NRMSE in all parameters. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
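A sketch of the NRMSE metric used above; note that normalization conventions vary (plant capacity, mean, or observed range), so the observed range used here is an assumption, not necessarily the paper's choice:

```python
import numpy as np

def nrmse(forecast, observed):
    """Root mean squared error normalized by the range of the observations."""
    forecast, observed = np.asarray(forecast), np.asarray(observed)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return rmse / (np.max(observed) - np.min(observed))

obs = np.array([0.0, 10.0, 20.0, 30.0])   # e.g. observed power output, kW
fc = obs + 1.0                             # forecast with a constant 1 kW bias
err = nrmse(fc, obs)                       # 1 kW RMSE over a 30 kW range
```

Whatever normalization is chosen, it must be held fixed when comparing models, as the ensemble-versus-NWP comparison above requires.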

  4. Plutonium metal exchange program : current status and statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Tandon, L. (Lav); Eglin, J. L. (Judith Lynn); Michalak, S. E. (Sarah E.); Picard, R. R.; Temer, D. J. (Donald J.)

    2004-01-01

    The Rocky Flats Plutonium (Pu) Metal Sample Exchange program was conducted to ensure the quality and intercomparability of measurements such as Pu assay, Pu isotopics, and impurity analyses. The Rocky Flats program was discontinued in 1989 after more than 30 years. In 2001, Los Alamos National Laboratory (LANL) reestablished the Pu Metal Exchange program. In addition to the Atomic Weapons Establishment (AWE) at Aldermaston, six Department of Energy (DOE) facilities (Argonne East, Argonne West, Livermore, Los Alamos, New Brunswick Laboratory, and Savannah River) are currently participating in the program. Plutonium metal samples are prepared and distributed to the sites for destructive measurements to determine elemental concentration, isotopic abundance, and both metallic and nonmetallic impurity levels. The program provides independent verification of analytical measurement capabilities for each participating facility and allows problems in analytical methods to be identified. The current status of the program will be discussed with emphasis on the unique statistical analysis and modeling of the data developed for the program. The discussion includes the definition of the consensus values for each analyte (in the presence and absence of anomalous values and/or censored values), and interesting features of the data and the results.

  5. Statistical analysis and optimization of igbt manufacturing flow

    Directory of Open Access Journals (Sweden)

    Baranov V. V.

    2015-02-01

    Full Text Available The use of computer simulation for the design and optimization of the technological processes used to form power electronic devices can significantly reduce development time, improve the accuracy of calculations, and allow the best implementation options to be chosen on the basis of rigorous mathematical analysis. One of the most common power electronic devices is the insulated gate bipolar transistor (IGBT), which combines the advantages of the MOSFET and the bipolar transistor. The demanding requirements for these devices can only be met by optimizing the device design and the manufacturing process parameters. A statistical analysis is therefore an important and necessary step in the modern IC design and manufacturing cycle. A procedure for optimizing the IGBT threshold voltage was implemented. Screening experiments based on the Plackett-Burman design detected the most important input parameters (factors), those with the greatest impact on the output characteristic. The coefficients of an approximation polynomial adequately describing the relationship between the input parameters and the investigated output characteristics were determined. Using the calculated approximation polynomial, a series of Monte Carlo calculations was carried out to determine the spread of threshold voltage values over selected ranges of input parameter deviations. Combinations of input process parameter values were drawn randomly from a normal distribution within a given range. The optimization of the IGBT process parameters amounts to the mathematical problem of determining the range of the significant structural and technological input parameters that keeps the IGBT threshold voltage within a given interval. The presented results demonstrate the effectiveness of the proposed optimization techniques.
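The Monte Carlo step described above can be sketched with an invented response polynomial; the real coefficients would come from the fitted Plackett-Burman / approximation-polynomial stage:

```python
import numpy as np

# hypothetical second-order response polynomial for threshold voltage (V)
# in two coded process parameters; coefficients are invented
def vth(x1, x2):
    return 4.0 + 0.30 * x1 - 0.20 * x2 + 0.05 * x1 * x2

rng = np.random.default_rng(3)
n = 100_000
# process parameters drawn from normal distributions within their tolerances
x1 = rng.normal(0.0, 0.5, n)
x2 = rng.normal(0.0, 0.5, n)
samples = vth(x1, x2)
spread = samples.std()
in_spec = ((samples > 3.5) & (samples < 4.5)).mean()   # fraction inside spec
```

Shrinking or shifting the parameter tolerances and re-running the loop is exactly the optimization described above: find the input ranges that keep the threshold voltage inside the target interval.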

  6. Statistical Analysis of Development Trends in Global Renewable Energy

    Directory of Open Access Journals (Sweden)

    Marina D. Simonova

    2016-01-01

    Full Text Available The article focuses on the economic and statistical analysis of industries associated with the use of renewable energy sources (hereinafter RES) in several countries. The dynamic development and implementation of technologies based on RES is the defining trend of world energy development. The uneven distribution of hydrocarbon reserves, the increasing demand of developing countries, and the environmental risks associated with the production and consumption of fossil resources have led many states to take a growing interest in this field. Creating low-carbon economies involves plans to increase the proportion of clean energy through renewable energy sources and energy efficiency and to reduce greenhouse gas emissions. The priority given to this sector is a characteristic feature of the modern development of both developed (USA, EU, Japan) and emerging economies (China, India, Brazil, etc.), as evidenced by the inclusion of this segment in state energy strategies and the revision of existing approaches to energy security. The analysis of the use of renewable energy and its contribution to the value added of producer countries is of particular interest. Over the last decade, the share of energy produced from renewable sources in the energy balances of the world's largest economies has increased significantly. The amount of power-generating capacity based on renewable energy grows every year; this trend is especially apparent in China, the USA, and the European Union countries. There has been a significant increase in direct investment in renewable energy: total investment over the past ten years has increased by a factor of 5.6. The most rapidly developing kinds are solar energy and wind power.

  7. The Statistics Concept Inventory: Development and analysis of a cognitive assessment instrument in statistics

    Science.gov (United States)

    Allen, Kirk

    The Statistics Concept Inventory (SCI) is a multiple choice test designed to assess students' conceptual understanding of topics typically encountered in an introductory statistics course. This dissertation documents the development of the SCI from Fall 2002 up to Spring 2006. The first phase of the project essentially sought to answer the question: "Can you write a test to assess topics typically encountered in introductory statistics?" Book One presents the results utilized in answering this question in the affirmative. The bulk of the results present the development and evolution of the items, primarily relying on objective metrics to gauge effectiveness but also incorporating student feedback. The second phase boils down to: "Now that you have the test, what else can you do with it?" This includes an exploration of Cronbach's alpha, the most commonly-used measure of test reliability in the literature. An online version of the SCI was designed, and its equivalency to the paper version is assessed. Adding an extra wrinkle to the online SCI, subjects rated their answer confidence. These results show a general positive trend between confidence and correct responses. However, some items buck this trend, revealing potential sources of misunderstandings, with comparisons offered to the extant statistics and probability educational research. The third phase is a re-assessment of the SCI: "Are you sure?" A factor analytic study favored a uni-dimensional structure for the SCI, although maintaining the likelihood of a deeper structure if more items can be written to tap similar topics. A shortened version of the instrument is proposed, demonstrated to be able to maintain a reliability nearly identical to that of the full instrument. Incorporating student feedback and a faculty topics survey, improvements to the items and recommendations for further research are proposed. 
The state of the concept inventory movement is assessed, to offer a comparison to the work presented
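Cronbach's alpha, explored in the second phase above, has a short closed form: the number of items k, the sum of the item variances, and the variance of the total score. A sketch (the perfectly consistent toy data are only there to make the expected value obvious):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# three identical items: every item measures the same thing, so alpha = 1
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
alpha = cronbach_alpha(np.column_stack([base, base, base]))
```

With uncorrelated items the total-score variance approaches the sum of the item variances and alpha approaches 0, which is why alpha is read as a measure of internal consistency, subject to the well-known caveats the dissertation discusses.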

  8. Potentially inappropriate medications among older adults in Pelotas, Southern Brazil.

    Science.gov (United States)

    Lutz, Bárbara Heather; Miranda, Vanessa Irribarem Avena; Bertoldi, Andréa Dâmaso

    2017-06-22

    To assess the use of potentially inappropriate medications among older adults. This is a population-based cross-sectional study with 1,451 older individuals aged 60 years or more in the city of Pelotas, State of Rio Grande do Sul, Brazil, in 2014. We have investigated the use of medications in the last 15 days. Using the Beers criteria (2012), we have verified the use of potentially inappropriate medications and their relationship with socioeconomic and demographic variables, polypharmacy, self-medication, and burden of disease. Among the 5,700 medications used, 5,651 could be assessed as to being inappropriate. Of these, 937 were potentially inappropriate for the older adults according to the 2012 Beers criteria (16.6%). Approximately 42.4% of the older adults studied used at least one medication considered potentially inappropriate. The group of medications for the nervous system accounted for 48.9% of the total of potentially inappropriate medications. In the adjusted analysis, the variables female sex, advanced age, white race, low educational level, polypharmacy, self-medication, and burden of disease were associated with the use of potentially inappropriate medications. It is important to know the possible consequences of the use of medication among older adults. Special attention should be given to older adults who use polypharmacy. Specific lists of medications more appropriate for the older population should be created in the National Essential Medicine List.

  9. Suprathreshold fiber cluster statistics: Leveraging white matter geometry to enhance tractography statistical analysis.

    Science.gov (United States)

    Zhang, Fan; Wu, Weining; Ning, Lipeng; McAnulty, Gloria; Waber, Deborah; Gagoski, Borjan; Sarill, Kiera; Hamoda, Hesham M; Song, Yang; Cai, Weidong; Rathi, Yogesh; O'Donnell, Lauren J

    2018-05-01

    This work presents a suprathreshold fiber cluster (STFC) method that leverages the whole brain fiber geometry to enhance statistical group difference analyses. The proposed method consists of 1) a well-established study-specific data-driven tractography parcellation to obtain white matter tract parcels and 2) a newly proposed nonparametric, permutation-test-based STFC method to identify significant differences between study populations. The basic idea of our method is that a white matter parcel's neighborhood (nearby parcels with similar white matter anatomy) can support the parcel's statistical significance when correcting for multiple comparisons. We propose an adaptive parcel neighborhood strategy to allow suprathreshold fiber cluster formation that is robust to anatomically varying inter-parcel distances. The method is demonstrated by application to a multi-shell diffusion MRI dataset from 59 individuals, including 30 attention deficit hyperactivity disorder patients and 29 healthy controls. Evaluations are conducted using both synthetic and in-vivo data. The results indicate that the STFC method gives greater sensitivity in finding group differences in white matter tract parcels compared to several traditional multiple comparison correction methods. Copyright © 2018 Elsevier Inc. All rights reserved.

  10. Development of new taxonomy of inappropriate communication and its application to operating teams in nuclear power plants

    International Nuclear Information System (INIS)

    Kim, Ar Ryum; Lee, Seung Woo; Jang, In Seok; Kang, Hyun Gook; Seong, Poong Hyun; Park, Jin Kyun

    2012-01-01

    Inappropriate communications can cause a lack of necessary information exchange between operators and lead to serious consequences in large process systems such as nuclear power plants (NPPs). Various taxonomies of inappropriate communication have accordingly been developed for prevention purposes. However, it is difficult to identify inappropriate communications in verbal protocol data exchanged between operators, because the existing taxonomies were developed for use in report analysis and therefore suffer from a problem of 'uncertainty'. Consequently, this paper proposes a new taxonomy of inappropriate communications and provides some insights into their prevention. To develop the taxonomy, existing taxonomies from four industries spanning 1980 to 2010 were collected, and a new taxonomy was developed based on a simplified one-way communication model. In addition, the ratio of inappropriate communications in 8 samples of audio-visual verbal protocol data, recorded during emergency training sessions of operating teams, was compared with performance scores calculated from task analysis. As a result, inappropriate communications could be easily identified from the verbal protocol data using the suggested taxonomy, and teams with a higher ratio of inappropriate communications tended to have lower performance scores.

  11. Parallelization of the Physical-Space Statistical Analysis System (PSAS)

    Science.gov (United States)

    Larson, J. W.; Guo, J.; Lyster, P. M.

    1999-01-01

    Atmospheric data assimilation is a method of combining observations with model forecasts to produce a more accurate description of the atmosphere than the observations or forecast alone can provide. Data assimilation plays an increasingly important role in the study of climate and atmospheric chemistry. The NASA Data Assimilation Office (DAO) has developed the Goddard Earth Observing System Data Assimilation System (GEOS DAS) to create assimilated datasets. The core computational components of the GEOS DAS include the GEOS General Circulation Model (GCM) and the Physical-space Statistical Analysis System (PSAS). The need for timely validation of scientific enhancements to the data assimilation system poses computational demands that are best met by distributed parallel software. PSAS is implemented in Fortran 90 using object-based design principles. The analysis portions of the code solve two equations. The first of these is the "innovation" equation, which is solved on the unstructured observation grid using a preconditioned conjugate gradient (CG) method. The "analysis" equation is a transformation from the observation grid back to a structured grid, and is solved by a direct matrix-vector multiplication. Use of a factored-operator formulation reduces the computational complexity of both the CG solver and the matrix-vector multiplication, rendering the matrix-vector multiplications as a successive product of operators on a vector. Sparsity is introduced to these operators by partitioning the observations using an icosahedral decomposition scheme. PSAS builds a large (approx. 128MB) run-time database of parameters used in the calculation of these operators. Implementing a message passing parallel computing paradigm into an existing yet developing computational system as complex as PSAS is nontrivial. One of the technical challenges is balancing the requirements for computational reproducibility with the need for high performance. The problem of computational
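    The innovation equation described above is solved with a preconditioned conjugate gradient method. The following is a minimal, self-contained sketch of a Jacobi-preconditioned CG solver for a symmetric positive definite system; the matrix and right-hand side are synthetic stand-ins, not PSAS data structures or its factored operators.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Jacobi-preconditioned conjugate gradient for SPD systems A x = b.

    M_inv_diag holds the inverse of the diagonal of A, the simplest
    preconditioner; real solvers use far more sophisticated ones.
    """
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p  # conjugate search direction
        rz = rz_new
    return x

# Synthetic SPD test system
rng = np.random.default_rng(0)
Q = rng.normal(size=(50, 50))
A = Q @ Q.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = pcg(A, b, 1.0 / np.diag(A))
```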

  12. Statistical analysis of compressive low rank tomography with random measurements

    Science.gov (United States)

    Acharya, Anirudh; Guţă, Mădălin

    2017-05-01

    We consider the statistical problem of ‘compressive’ estimation of low rank states (r ≪ d) with random basis measurements, where r and d are the rank and dimension of the state respectively. We investigate whether, for a fixed sample size N, the estimation error associated with a ‘compressive’ measurement setup is ‘close’ to that of the setting where a large number of bases are measured. We generalise and extend previous results, and show that the mean square error (MSE) associated with the Frobenius norm attains the optimal rate rd/N with only O(r log d) random basis measurements for all states. An important tool in the analysis is the concentration of the Fisher information matrix (FIM). We demonstrate that although a concentration of the MSE follows from a concentration of the FIM for most states, the FIM fails to concentrate for states with eigenvalues close to zero. We analyse this phenomenon in the case of a single qubit and demonstrate a concentration of the MSE about its optimal rate despite a lack of concentration of the FIM for states close to the boundary of the Bloch sphere. We also consider the estimation error in terms of a different metric: the quantum infidelity. We show that a concentration in the mean infidelity (MINF) does not exist uniformly over all states, highlighting the importance of loss function choice. Specifically, we show that for states that are nearly pure, the MINF scales as 1/√N but the constant converges to zero as the number of settings is increased. This demonstrates a lack of ‘compressive’ recovery for nearly pure states in this metric.

  13. SUBMILLIMETER NUMBER COUNTS FROM STATISTICAL ANALYSIS OF BLAST MAPS

    International Nuclear Information System (INIS)

    Patanchon, Guillaume; Ade, Peter A. R.; Griffin, Matthew; Hargrave, Peter C.; Mauskopf, Philip; Moncelsi, Lorenzo; Pascale, Enzo; Bock, James J.; Chapin, Edward L.; Halpern, Mark; Marsden, Gaelen; Scott, Douglas; Devlin, Mark J.; Dicker, Simon R.; Klein, Jeff; Rex, Marie; Gundersen, Joshua O.; Hughes, David H.; Netterfield, Calvin B.; Olmi, Luca

    2009-01-01

    We describe the application of a statistical method to estimate submillimeter galaxy number counts from confusion-limited observations by the Balloon-borne Large Aperture Submillimeter Telescope (BLAST). Our method is based on a maximum likelihood fit to the pixel histogram, sometimes called 'P(D)', an approach which has been used before to probe faint counts, the difference being that here we advocate its use even for sources with relatively high signal-to-noise ratios. This method has an advantage over standard techniques of source extraction in providing an unbiased estimate of the counts from the bright end down to flux densities well below the confusion limit. We specifically analyze BLAST observations of a roughly 10 deg² map centered on the Great Observatories Origins Deep Survey South field. We provide estimates of number counts at the three BLAST wavelengths 250, 350, and 500 μm; instead of counting sources in flux bins we estimate the counts at several flux density nodes connected with power laws. We observe a generally very steep slope for the counts of about -3.7 at 250 μm, and -4.5 at 350 and 500 μm, over the range ∼0.02-0.5 Jy, breaking to a shallower slope below about 0.015 Jy at all three wavelengths. We also describe how to estimate the uncertainties and correlations in this method so that the results can be used for model-fitting. This method should be well suited for analysis of data from the Herschel satellite.
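    As a much-simplified illustration of maximum likelihood estimation of a counts slope (not the full P(D) pixel-histogram fit), the power-law index of differential counts above a flux limit has a closed-form MLE. The flux values below are synthetic, drawn with a slope similar to the 250 μm estimate.

```python
import numpy as np

def powerlaw_slope_ml(flux, flux_min):
    """Maximum likelihood estimate of a power-law index.

    For differential counts dN/dS proportional to S^-alpha above
    flux_min, the MLE is alpha = 1 + n / sum(ln(S_i / flux_min)).
    """
    s = flux[flux >= flux_min]
    return 1.0 + len(s) / np.sum(np.log(s / flux_min))

# Draw synthetic fluxes from a power law via inverse-CDF sampling
rng = np.random.default_rng(0)
alpha_true = 3.7
u = rng.uniform(size=20000)
flux = 0.02 * (1 - u) ** (-1.0 / (alpha_true - 1))
alpha_hat = powerlaw_slope_ml(flux, 0.02)
```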

  14. A statistical framework for differential network analysis from microarray data

    Directory of Open Access Journals (Sweden)

    Datta Somnath

    2010-02-01

    Full Text Available Abstract Background It has long been well known that genes do not act alone; rather, groups of genes act in consort during a biological process. Consequently, the expression levels of genes are dependent on each other. Experimental techniques to detect such interacting pairs of genes have been in place for quite some time. With the advent of microarray technology, newer computational techniques to detect such interaction or association between gene expressions are being proposed, leading to an association network. While most microarray analyses look for genes that are differentially expressed, it is of potentially greater significance to identify how entire association network structures change between two or more biological settings, say normal versus diseased cell types. Results We provide a recipe for conducting a differential analysis of networks constructed from microarray data under two experimental settings. At the core of our approach lies a connectivity score that represents the strength of genetic association or interaction between two genes. We use this score to propose formal statistical tests for each of the following queries: (i) whether the overall modular structures of the two networks are different, (ii) whether the connectivity of a particular set of "interesting genes" has changed between the two networks, and (iii) whether the connectivity of a given single gene has changed between the two networks. A number of examples of this score are provided. We carried out our method on two types of simulated data: Gaussian networks and networks based on differential equations. We show that, for appropriate choices of the connectivity scores and tuning parameters, our method works well on simulated data. We also analyze a real data set involving normal versus heavy mice and identify an interesting set of genes that may play key roles in obesity. Conclusions Examining changes in network structure can provide valuable information about the
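    A minimal sketch of the idea, assuming a simple absolute-correlation connectivity score (the framework allows other scores): build a connectivity matrix per condition and summarize the overall change. The expression data and the rewired gene are synthetic.

```python
import numpy as np

def connectivity(expr):
    """Connectivity score: absolute Pearson correlation between genes.

    expr has shape (samples, genes); returns a genes x genes matrix.
    """
    return np.abs(np.corrcoef(expr, rowvar=False))

def network_difference(expr_a, expr_b):
    """Overall change in network structure: mean absolute difference
    in pairwise connectivity between the two conditions."""
    d = np.abs(connectivity(expr_a) - connectivity(expr_b))
    iu = np.triu_indices_from(d, k=1)  # each gene pair counted once
    return d[iu].mean()

rng = np.random.default_rng(0)
base = rng.normal(size=(40, 10))
normal = base + rng.normal(scale=0.1, size=(40, 10))
# In the "diseased" condition, gene 0 is rewired to track gene 1
diseased = rng.normal(size=(40, 10))
diseased[:, 0] = diseased[:, 1] + rng.normal(scale=0.1, size=40)
score = network_difference(normal, diseased)
```

A permutation test (relabeling samples across conditions) would turn this score into a formal test, as the abstract describes.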

  15. Olive mill wastewater characteristics: modelling and statistical analysis

    Directory of Open Access Journals (Sweden)

    Martins-Dias, Susete

    2004-09-01

    Full Text Available A synthesis of the work carried out on Olive Mill Wastewater (OMW) characterisation is given, covering articles published over the last 50 years. Data on OMW characterisation found in the literature are summarised, and correlations between them and with phenolic compounds content are sought. This permits the characteristics of an OMW to be estimated from one simple measurement: the phenolic compounds concentration. A model based on OMW characterisations covering six countries was developed, along with a model for Portuguese OMW. The statistical analysis of the correlations obtained indicates that the Chemical Oxygen Demand of a given OMW is a second-degree polynomial function of its phenolic compounds concentration. Tests to evaluate the significance of the regressions were carried out, based on multivariable ANOVA analysis and on the distribution of standardised residuals and their means at confidence levels of 95 and 99%, clearly validating these models. This modelling work will help in the future planning, operation and monitoring of an OMW treatment plant.
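    The reported relationship, COD as a second-degree polynomial in phenolic compounds concentration, can be reproduced in outline with a least-squares quadratic fit. The calibration points below are hypothetical, for illustration only; they are not the paper's data.

```python
import numpy as np

# Hypothetical calibration data: phenolic compounds (g/L) vs. COD (g O2/L)
phenols = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0, 9.0])
cod = np.array([20.0, 35.0, 60.0, 95.0, 130.0, 180.0, 235.0])

# Fit COD = a*P^2 + b*P + c, the second-degree polynomial form reported
coeffs = np.polyfit(phenols, cod, deg=2)
predict_cod = np.poly1d(coeffs)

# Coefficient of determination of the fit
residuals = cod - predict_cod(phenols)
ss_res = (residuals ** 2).sum()
ss_tot = ((cod - cod.mean()) ** 2).sum()
r_squared = 1 - ss_res / ss_tot
```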

  16. Developmental Coordination Disorder: Validation of a Qualitative Analysis Using Statistical Factor Analysis

    Directory of Open Access Journals (Sweden)

    Kathy Ahern

    2002-09-01

    Full Text Available This study investigates triangulation of the findings of a qualitative analysis by applying an exploratory factor analysis to themes identified in a phenomenological study. A questionnaire was developed from a phenomenological analysis of parents' experiences of parenting a child with Developmental Coordination Disorder (DCD). The questionnaire was administered to 114 parents of DCD children and the data were analyzed using an exploratory factor analysis. The extracted factors provided support for the validity of the original qualitative analysis, and a commentary on the validity of the process is provided. The emerging description is of the compromises that were necessary to translate qualitative themes into statistical factors, and of the ways in which the statistical analysis suggests further qualitative study.
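    The triangulation step, extracting factors from questionnaire responses, can be sketched with a minimal unrotated factor extraction via eigendecomposition of the item correlation matrix (the principal component method; real EFA software adds rotation and communality estimation). The 114 synthetic respondents and two underlying "themes" are assumptions for illustration.

```python
import numpy as np

def factor_loadings(data, n_factors):
    """Minimal unrotated factor extraction: loadings from the top
    eigenvectors of the item correlation matrix, scaled by the
    square roots of their eigenvalues."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    order = np.argsort(eigvals)[::-1][:n_factors]  # largest first
    return eigvecs[:, order] * np.sqrt(eigvals[order])

# Synthetic questionnaire: items 0-2 load on one theme, items 3-5 on another
rng = np.random.default_rng(0)
theme1 = rng.normal(size=(114, 1))
theme2 = rng.normal(size=(114, 1))
noise = rng.normal(scale=0.4, size=(114, 6))
items = np.hstack([theme1.repeat(3, axis=1),
                   theme2.repeat(3, axis=1)]) + noise
loadings = factor_loadings(items, n_factors=2)
```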

  17. Gregor Mendel's Genetic Experiments: A Statistical Analysis after 150 Years

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2016-01-01

    Roč. 12, č. 2 (2016), s. 20-26 ISSN 1801-5603 Institutional support: RVO:67985807 Keywords : genetics * history of science * biostatistics * design of experiments Subject RIV: BB - Applied Statistics, Operational Research

  18. Climate time series analysis classical statistical and bootstrap methods

    CERN Document Server

    Mudelsee, Manfred

    2010-01-01

    This book presents bootstrap resampling as a computationally intensive method able to meet the challenges posed by the complexities of analysing climate data. It shows how the bootstrap performs reliably in the most important statistical estimation techniques.
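    The bootstrap idea the book builds on can be shown in a few lines: resample the data with replacement and read a confidence interval off the quantiles of the resampled statistic. The "temperature anomaly" series below is synthetic.

```python
import numpy as np

def bootstrap_ci(data, stat_fn, n_boot=2000, alpha=0.05, rng=None):
    """Percentile bootstrap confidence interval for any statistic."""
    rng = np.random.default_rng(rng)
    n = len(data)
    # Each replicate resamples the data with replacement
    reps = np.array([stat_fn(data[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Synthetic annual temperature anomaly series (deg C)
anomalies = np.random.default_rng(0).normal(0.3, 0.2, size=100)
lo, hi = bootstrap_ci(anomalies, np.mean, rng=1)
```

Climate series are autocorrelated, which is why the book devotes attention to block-resampling variants rather than this i.i.d. form.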

  19. A Statistical Analysis of the Nuffield Physical Science Project Assessment

    Science.gov (United States)

    Hockey, S. W.

    1973-01-01

    Discusses measurement techniques developed in the Nuffield A level physical science assessment and statistical results obtained in 1968 and 1971. Concludes that individual projects are contributors of positive and valuable educational experiences to the course. (CC)

  20. Statistical analysis of wind speed for electrical power generation

    African Journals Online (AJOL)

    Department of Electrical and Electronics Engineering, University of Ilorin, Kwara State, Nigeria.

    Keywords: wind speed, probability density function, wind energy conversion system, statistical analysis.

  1. Statistical Methods for Analysis of Neurofibromatosis Clinical Data

    National Research Council Canada - National Science Library

    Joe, Harry

    2002-01-01

    ... to the burden of the disease. The goals of this project are to devise new statistical methods to find patterns and relationships within the phenotypes and genotypes of people with NF, and to effectively model tumor formation in these disorders...

  2. On Conceptual Analysis as the Primary Qualitative Approach to Statistics Education Research in Psychology

    Science.gov (United States)

    Petocz, Agnes; Newbery, Glenn

    2010-01-01

    Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…

  3. 75 FR 24718 - Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability

    Science.gov (United States)

    2010-05-05

    ...] Guidance for Industry on Documenting Statistical Analysis Programs and Data Files; Availability AGENCY... documenting statistical analyses and data files submitted to the Center for Veterinary Medicine (CVM) for the... on Documenting Statistical Analysis Programs and Data Files; Availability'' giving interested persons...

  4. Conjunction analysis and propositional logic in fMRI data analysis using Bayesian statistics.

    Science.gov (United States)

    Rudert, Thomas; Lohmann, Gabriele

    2008-12-01

    To evaluate logical expressions over different effects in data analyses using the general linear model (GLM) and to evaluate logical expressions over different posterior probability maps (PPMs). In functional magnetic resonance imaging (fMRI) data analysis, the GLM was applied to estimate unknown regression parameters. Based on the GLM, Bayesian statistics can be used to determine the probability of conjunction, disjunction, implication, or any other arbitrary logical expression over different effects or contrasts. For second-level inferences, PPMs from individual sessions or subjects are utilized. These PPMs can be combined into a logical expression and its probability can be computed. The methods proposed in this article are applied to data from a STROOP experiment and compared to conjunction analysis approaches based on test statistics. The combination of Bayesian statistics with propositional logic provides a new approach for data analyses in fMRI. Two different methods are introduced for propositional logic: the first for analyses using the GLM and the second for common inferences about different probability maps. The methods introduced extend the idea of conjunction analysis to a full propositional logic and adapt it from test statistics to Bayesian statistics. The new approaches allow inferences that are not possible with known standard methods in fMRI. (c) 2008 Wiley-Liss, Inc.
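    The combination of PPMs into logical expressions can be illustrated under a simplifying independence assumption (the paper's actual computation is more general): conjunction multiplies posterior probabilities across maps, and disjunction follows from De Morgan's law. The voxel values below are hypothetical.

```python
import numpy as np

def conjunction(ppms):
    """Probability the effect is present in ALL maps, treating the
    posterior probability maps as independent: product over maps."""
    return np.prod(ppms, axis=0)

def disjunction(ppms):
    """Probability the effect is present in AT LEAST ONE map,
    via De Morgan's law: 1 - product of (1 - p)."""
    return 1.0 - np.prod(1.0 - ppms, axis=0)

# Two hypothetical single-subject PPMs over five voxels
ppm_a = np.array([0.99, 0.90, 0.50, 0.10, 0.95])
ppm_b = np.array([0.98, 0.40, 0.60, 0.05, 0.97])
both = conjunction(np.stack([ppm_a, ppm_b]))
either = disjunction(np.stack([ppm_a, ppm_b]))
```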

  5. Statistics and data analysis for financial engineering with R examples

    CERN Document Server

    Ruppert, David

    2015-01-01

    The new edition of this influential textbook, geared towards graduate or advanced undergraduate students, teaches the statistics necessary for financial engineering. In doing so, it illustrates concepts using financial markets and economic data, R Labs with real-data exercises, and graphical and analytic methods for modeling and diagnosing modeling errors. Financial engineers now have access to enormous quantities of data. To make use of these data, the powerful methods in this book, particularly about volatility and risks, are essential. Strengths of this fully-revised edition include major additions to the R code and the advanced topics covered. Individual chapters cover, among other topics, multivariate distributions, copulas, Bayesian computations, risk management, multivariate volatility and cointegration. Suggested prerequisites are basic knowledge of statistics and probability, matrices and linear algebra, and calculus. There is an appendix on probability, statistics and linear algebra. Practicing fina...

  6. Statistical analysis of natural disasters and related losses

    CERN Document Server

    Pisarenko, VF

    2014-01-01

    The study of disaster statistics and disaster occurrence is a complicated interdisciplinary field involving the interplay of new theoretical findings from several scientific fields like mathematics, physics, and computer science. Statistical studies on the mode of occurrence of natural disasters largely rely on fundamental findings in the statistics of rare events, which were derived in the 20th century. With regard to natural disasters, it is not so much the fact that the importance of this problem for mankind was recognized during the last third of the 20th century - the myths one encounters in ancient civilizations show that the problem of disasters has always been recognized - rather, it is the fact that mankind now possesses the necessary theoretical and practical tools to effectively study natural disasters, which in turn supports effective, major practical measures to minimize their impact. All the above factors have resulted in considerable progress in natural disaster research. Substantial accrued ma...

  7. Categorical and nonparametric data analysis choosing the best statistical technique

    CERN Document Server

    Nussbaum, E Michael

    2014-01-01

    Featuring in-depth coverage of categorical and nonparametric statistics, this book provides a conceptual framework for choosing the most appropriate type of test in various research scenarios. Class tested at the University of Nevada, the book's clear explanations of the underlying assumptions, computer simulations, and Exploring the Concept boxes help reduce reader anxiety. Problems inspired by actual studies provide meaningful illustrations of the techniques. The underlying assumptions of each test and the factors that impact validity and statistical power are reviewed so readers can explain

  8. Multivariate Analysis and Statistics in Pharmaceutical Process Research and Development.

    Science.gov (United States)

    Tabora, José E; Domagalski, Nathan

    2017-06-07

    The application of statistics in pharmaceutical process research and development has evolved significantly over the past decades, motivated in part by the introduction of the Quality by Design paradigm, a landmark change in regulatory expectations for the level of scientific understanding associated with the manufacturing process. Today, statistical methods are increasingly applied to accelerate the characterization and optimization of new drugs created via numerous unit operations well known to the chemical engineering discipline. We offer here a review of the maturity in the implementation of design of experiment techniques, the increased incorporation of latent variable methods in process and material characterization, and the adoption of Bayesian methodology for process risk assessment.

  9. Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business

    Czech Academy of Sciences Publication Activity Database

    Novák, J.; Sůvová, H.; Vondráček, Jiří

    2002-01-01

    Roč. 48, č. 1 (2002), s. 9-12 ISSN 0139-570X Institutional research plan: AV0Z1030915 Keywords : financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis Subject RIV: BB - Applied Statistics, Operational Research

  10. Inappropriate colonoscopic surveillance of hyperplastic polyps.

    LENUS (Irish Health Repository)

    Keane, R A

    2011-11-15

    Colonoscopic surveillance of hyperplastic polyps alone is controversial and may be inappropriate. The colonoscopy surveillance register at a university teaching hospital was audited to determine the extent of such hyperplastic polyp surveillance. The surveillance endoscopy records were reviewed, patients with hyperplastic polyps were identified, their clinical records were examined, and contact was made with each patient. Of the 483 patients undergoing surveillance for colonic polyps, 113 (23%) had hyperplastic polyps alone on their last colonoscopy. After exclusion of those under appropriate surveillance, 104 patients remained, of whom 87 (84%) were successfully contacted. 37 patients (8%) were under appropriate colonoscopic surveillance for a significant family history of colorectal carcinoma. 50 patients (10%) with hyperplastic polyps alone and no other clinical indication for colonoscopic surveillance were booked for follow-up colonoscopy. This represents not only a budgetary but, more importantly, a clinical opportunity cost, the removal of which could liberate valuable colonoscopy time for more appropriate indications.

  11. Utilization of potentially inappropriate medications in elderly patients in a tertiary care teaching hospital in India

    Directory of Open Access Journals (Sweden)

    Binit N Jhaveri

    2014-01-01

    Full Text Available Aim: To evaluate the use of potentially inappropriate medicines in elderly inpatients in a tertiary care teaching hospital. Materials and Methods: Retrospective analysis was performed for elderly patients admitted between January 2010 and December 2010. Data on age, gender, diagnosis, duration of hospital stay, treatment, and outcome were collected. Prescriptions were assessed for the use of potentially inappropriate medications in geriatric patients using the American Geriatric Society Beers criteria (2012) and the PRISCUS list (2010). Results: A total of 676 geriatric patients (52.12% female) were admitted to the medicine ward. The average age of geriatric patients was 72.69 years. According to the Beers criteria, at least one inappropriate medicine was prescribed in 590 (87.3%) patients. Metoclopramide (54.3%), alprazolam (9%), diazepam (8%), digoxin > 0.125 mg/day (5%), and diclofenac (3.7%) were the commonly used inappropriate medications. Use of nonsteroidal anti-inflammatory drugs (NSAIDs) in heart and renal failure patients was the most commonly identified drug-disease interaction. According to the PRISCUS list, at least one inappropriate medication was prescribed in 210 (31.06%) patients. Conclusion: Use of inappropriate medicines is highly prevalent in elderly patients.

  12. Potentially inappropriate medication use: the Beers' Criteria used among older adults with depressive symptoms

    Directory of Open Access Journals (Sweden)

    Lee D

    2013-09-01

    Full Text Available INTRODUCTION: The ageing population means prescribing for chronic illnesses in older people is expected to rise. Comorbidities and compromised organ function may complicate prescribing and increase medication-related risks. Comorbid depression in older people is highly prevalent and complicates medication prescribing decisions. AIM: To determine the prevalence of potentially inappropriate medication use in a community-dwelling population of older adults with depressive symptoms. METHODS: The medications of 191 community-dwelling older people, selected because of depressive symptoms for a randomised trial, were reviewed and assessed using the modified version of the Beers criteria. The association between inappropriate medication use and various population characteristics was assessed using chi-square statistics and logistic regression analyses. RESULTS: The mean age was 81 (±4.3) years and 59% were women. The median number of medications used was 6 (range 1-21). The most commonly prescribed potentially inappropriate medications were amitriptyline, dextropropoxyphene, quinine and benzodiazepines. Almost half (49%) of the participants were prescribed at least one potentially inappropriate medication; 29% were considered to suffer significant depressive symptoms (Geriatric Depression Scale ≥5), and no differences were found in the number of inappropriate medications used between those with and without significant depressive symptoms (chi-square 0.005, p=0.54). DISCUSSION: Potentially inappropriate medication use, as per the modified Beers criteria, is very common among community-dwelling older people with depressive symptoms. However, the utility of the Beers criteria is lessened by a lack of clinical correlation. Ongoing research to examine outcomes related to apparent inappropriate medication use is needed.
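    The chi-square association test used above can be sketched for a 2x2 table of inappropriate-medication use versus significant depressive symptoms. The counts below are hypothetical (chosen to be consistent with the reported non-significant result), not the study data.

```python
import numpy as np

def chi_square_2x2(table):
    """Pearson chi-square statistic for a contingency table
    (no continuity correction)."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()  # expected counts under independence
    return ((table - expected) ** 2 / expected).sum()

# Hypothetical counts over the 191 participants:
#                 PIM yes  PIM no
table = np.array([[28,     27],     # GDS >= 5
                  [66,     70]])    # GDS < 5
stat = chi_square_2x2(table)
```

With 1 degree of freedom, the 5% critical value is 3.84, so a statistic this small indicates no association, mirroring the study's finding.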

  13. A Bayesian Statistical Analysis of the Enhanced Greenhouse Effect

    NARCIS (Netherlands)

    de Vos, A.F.; Tol, R.S.J.

    1998-01-01

    This paper demonstrates that there is a robust statistical relationship between the records of the global mean surface air temperature and the atmospheric concentration of carbon dioxide over the period 1870-1991. As such, the enhanced greenhouse effect is a plausible explanation for the observed

  14. Statistical analysis of the profile of consumer Internet services

    OpenAIRE

    Arzhenovskii Sergei Valentinovich; Sountoura Lansine

    2014-01-01

    This article is devoted to constructing a profile of the Russian Internet user. Statistical methods of summarisation, grouping, and graphical representation are used to describe Internet consumers by socio-demographic and settlement characteristics. The RLMS surveys for 2005-2012 form the information base.

  15. Statistical Lineament Analysis in South Greenland Based on Landsat Imagery

    DEFF Research Database (Denmark)

    Conradsen, Knut; Nilsson, Gert; Thyrsted, Tage

    1986-01-01

    Linear features, mapped visually from MSS channel-7 photoprints (1: 1 000 000) of Landsat images from South Greenland, were digitized and analyzed statistically. A sinusoidal curve was fitted to the frequency distribution which was then divided into ten significant classes of azimuthal trends. Maps...

  16. Statistical analysis of agarwood oil compounds in discriminating the ...

    African Journals Online (AJOL)

    Enhancing and improving the discrimination technique is the main aim in determining or grading the quality of agarwood oil. In this paper, all statistical work was performed via SPSS software. Two parameters are involved: the abundance of a compound (%) and the quality of the agarwood oil, either low or high. The result ...

  17. Statistical Analysis of Large-Scale Structure of Universe

    Science.gov (United States)

    Tugay, A. V.

    While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected at a definite level only in the newest works. For example, extragalactic filaments have been described by velocity fields and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations become available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, power spectrum, statistical moments and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases using Fourier harmonics of hydrodynamical parameters. As a result, we obtain a power-law relation for the matter power spectrum.
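    The description of fluctuations via Fourier harmonics amounts to estimating a power spectrum. A minimal one-dimensional sketch on a synthetic density contrast field with a single dominant mode:

```python
import numpy as np

def power_spectrum_1d(delta):
    """One-dimensional power spectrum estimate: squared modulus of the
    Fourier harmonics of the density contrast field, per mode."""
    dk = np.fft.rfft(delta)
    return np.abs(dk) ** 2 / len(delta)

# Density contrast with a dominant fluctuation at wavenumber k = 4
n = 256
x = np.arange(n)
delta = (np.sin(2 * np.pi * 4 * x / n)
         + 0.1 * np.random.default_rng(0).normal(size=n))
pk = power_spectrum_1d(delta)
```

The dominant mode appears as a sharp peak at k = 4; for real survey data the same harmonic decomposition is binned in |k| and compared to model spectra.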

  18. Herbal gardens of India: A statistical analysis report | Rao | African ...

    African Journals Online (AJOL)

    A knowledge system of the herbal garden in India was developed and these herbal gardens' information was statistically classified for efficient data processing, sharing and retrieving of information, which could act as a decision tool to the farmers, researchers, decision makers and policy makers in the field of medicinal ...

  19. On cumulative process model and its statistical analysis

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2000-01-01

    Roč. 36, č. 2 (2000), s. 165-176 ISSN 0023-5954 R&D Projects: GA ČR GA201/97/0354; GA ČR GA402/98/0742 Institutional research plan: AV0Z1075907 Subject RIV: BB - Applied Statistics, Operational Research

  20. Did Tanzania Achieve the Second Millennium Development Goal? Statistical Analysis

    Science.gov (United States)

    Magoti, Edwin

    2016-01-01

    Development Goal "Achieve universal primary education", the challenges faced, along with the way forward towards achieving the fourth Sustainable Development Goal "Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all". Statistics show that Tanzania has made very promising steps…

  1. Statistical methods for data analysis in particle physics

    CERN Document Server

    Lista, Luca

    2017-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools needed to perform statistical analyses of experimental data, in particular in the field of high-energy physics (HEP). First, the book provides an introduction to probability theory and basic statistics, mainly intended as a refresher from readers’ advanced undergraduate studies, but also to help them clearly distinguish between the Frequentist and Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on both discoveries and upper limits, as many applications in HEP concern hypothesis testing, where the main goal is often to provide better and better limits so as to eventually be able to distinguish between competing hypotheses, or to rule out some of them altogether. Many worked-out examples will help newcomers to the field and graduate students alike understand the pitfalls involved in applying theoretical co...

  2. STATISTICAL ANALYSIS OF THE SCFE OF A BRAZILIAN MINERAL COAL

    Directory of Open Access Journals (Sweden)

    DARIVA Cláudio

    1997-01-01

    The influence of some process variables on the productivity of the fractions (liquid yield times fraction percent) obtained from SCFE of a Brazilian mineral coal using isopropanol and ethanol as primary solvents is analyzed using statistical techniques. A full factorial 2³ experimental design was adopted to investigate the effects of the process variables (temperature, pressure, and cosolvent concentration) on the extraction products. The extracts were analyzed by the Preparative Liquid Chromatography-8 fractions method (PLC-8), a reliable, non-destructive solvent fractionation method developed especially for coal-derived liquids. Empirical statistical modeling was carried out in order to reproduce the experimental data. Correlations obtained were always greater than 0.98. Four specific process criteria were used to allow process optimization. Results obtained show that it is not possible to maximize both extract productivity and purity (through the minimization of heavy fraction content) simultaneously by manipulating the mentioned process variables.

  3. Integration of Advanced Statistical Analysis Tools and Geophysical Modeling

    Science.gov (United States)

    2012-08-01

    [Abstract not recoverable; the extracted text contains only tabular residue and reference fragments. Legible citations: Statistical classification of buried unexploded ordnance using nonparametric prior models, IEEE Trans. Geosci. Remote Sensing, 45:2794–2806, 2007; T. Bell and B. Barrow, Subsurface discrimination using electromagnetic induction sensors, IEEE Trans. Geosci. Remote Sensing, 39:1286–1293, 2001.]

  4. Advocacy, analysis and quality. The Bermuda triangle of Statistics

    OpenAIRE

    SAISANA Michaela

    2013-01-01

    One might muse that what official statistics are to the consolidation of the modern nation state, composite indicators are to the emergence of post-modernity – meaning by this the philosophical critique of the exact science and rational knowledge programme of Descartes and Galileo. Composite indicators give voice to a plurality of different actors and normative views of post-modernity. Not only has the use of composite indicators increased dramatically over the past ten to fifteen years, ...

  5. Detailed statistical analysis plan for the pulmonary protection trial

    DEFF Research Database (Denmark)

    Buggeskov, Katrine B; Jakobsen, Janus C; Secher, Niels H

    2014-01-01

    BACKGROUND: Pulmonary dysfunction complicates cardiac surgery that includes cardiopulmonary bypass. The pulmonary protection trial evaluates the effect of pulmonary perfusion on pulmonary function in patients suffering from chronic obstructive pulmonary disease. This paper presents the statistical plan … serious adverse events: pneumothorax or pleural effusion requiring drainage, major bleeding, reoperation, severe infection, cerebral event, hyperkaliemia, acute myocardial infarction, cardiac arrhythmia, renal replacement therapy, and readmission for a respiratory-related problem. CONCLUSIONS …

  6. Learning to Translate: A Statistical and Computational Analysis

    Directory of Open Access Journals (Sweden)

    Marco Turchi

    2012-01-01

    We present an extensive experimental study of phrase-based statistical machine translation from the point of view of its learning capabilities. Very accurate learning curves are obtained using high-performance computing, and extrapolations of the projected performance of the system under different conditions are provided. Our experiments confirm existing and mostly unpublished beliefs about the learning capabilities of statistical machine translation systems. We also provide insight into the way statistical machine translation learns from data, including the respective influence of translation and language models, the impact of phrase length on performance, and various unlearning and perturbation analyses. Our results support and illustrate the fact that performance improves by a constant amount for each doubling of the data, across different language pairs and different systems. This fundamental limitation seems to be a direct consequence of Zipf's law governing textual data. Although the rate of improvement may depend on both the data and the estimation method, it is unlikely that the general shape of the learning curve will change without major changes in the modeling and inference phases. Possible research directions that address this issue include the integration of linguistic rules or the development of active learning procedures.

  7. Performance Analysis of Statistical Time Division Multiplexing Systems

    Directory of Open Access Journals (Sweden)

    Johnson A. AJIBOYE

    2010-12-01

    Multiplexing is a way of accommodating many input sources of low capacity over a high-capacity outgoing channel. Statistical Time Division Multiplexing (STDM) is a technique that allows more users to be multiplexed over the channel than the channel can nominally afford. STDM exploits time slots left unused by non-active users and allocates those slots to the active users; STDM is therefore appropriate for bursty sources. In this way STDM normally utilizes channel bandwidth better than traditional Time Division Multiplexing (TDM). In this work, the statistical multiplexer is viewed as an M/M/1 queuing system and its performance is measured by comparing analytical results to simulation results in Matlab. The index used to determine the performance of the statistical multiplexer is the number of packets both in the system and in the queue. Analytical results were also compared between M/M/1 and M/M/2, and between M/M/1 and M/D/1 queue systems. At high utilizations, M/M/2 performs better than M/M/1. M/D/1 also outperforms M/M/1.
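    The queueing comparison described above rests on standard closed-form results. A minimal sketch, assuming the textbook mean number-in-system formulas as a function of utilization rho = lambda/mu (the specific utilization values below are illustrative, not taken from the paper):

```python
# Mean number of packets in the system for two classic queues,
# as a function of the utilization rho = lambda/mu (0 < rho < 1).

def mm1_mean_in_system(rho: float) -> float:
    """M/M/1: L = rho / (1 - rho)."""
    return rho / (1 - rho)

def md1_mean_in_system(rho: float) -> float:
    """M/D/1 (Pollaczek-Khinchine): L = rho + rho**2 / (2 * (1 - rho))."""
    return rho + rho**2 / (2 * (1 - rho))

for rho in (0.5, 0.8, 0.9):
    print(rho, mm1_mean_in_system(rho), md1_mean_in_system(rho))
```

    At every utilization, the deterministic-service M/D/1 queue holds fewer packets on average than M/M/1, consistent with the abstract's conclusion that M/D/1 outperforms M/M/1.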

  8. The Digital Divide in Romania – A Statistical Analysis

    Directory of Open Access Journals (Sweden)

    Daniela BORISOV

    2012-06-01

    The digital divide is a subject of major importance in the current economic circumstances, in which Information and Communication Technologies (ICT) are seen as a significant determinant of domestic competitiveness and a contributor to better quality of life. The latest international reports on various aspects of ICT usage in modern society reveal a decrease of overall digital disparity towards the average trends of the worldwide ICT sector; this relates to recent advances in mobile and computer penetration rates, both for personal use and for households and businesses. In Romania, the low starting point in the development of the economy and society in the ICT direction was, to some extent, compensated by the rapid annual growth of the last decade. Even with these dynamic developments, the statistical data still indicate poor positions in the European Union hierarchy; in this respect, a rapid recovery of the low performance of Romanian ICT endowment and usage continues to be regarded as a challenge for progress in economic and societal terms. The paper presents several methods for assessing the current state of ICT-related aspects in terms of Internet usage, based on the latest data provided by international databases. The current position of the Romanian economy is judged using statistical methods based on variability measurements: descriptive statistics indicators, static measures of disparities, and distance metrics.

  9. Appropriate management of common bile duct stones: a RAND Corporation/UCLA Appropriateness Method statistical analysis.

    Science.gov (United States)

    Parra-Membrives, Pablo; Díaz-Gómez, Daniel; Vilegas-Portero, Román; Molina-Linde, Máximo; Gómez-Bujedo, Lourdes; Lacalle-Remigio, Juan Ramón

    2010-05-01

    Bile duct stones affect 10% of patients who undergo a cholecystectomy and therefore represent a major health problem. Laparoscopic common bile duct exploration, endoscopic sphincterotomy, and open surgical choledocholithotomy are the three available methods for dealing with choledocholithiasis. Though many trials and reviews have compared all three strategies, a list of indications for defined patient profiles is lacking. We employed the RAND Corporation/UCLA Appropriateness Method (RAM) to evaluate the three procedures for bile duct stone clearance. An expert panel judged appropriateness after a comprehensive bibliography review, a first-round private rating of 108 different clinical situations, a consensus meeting, and a second round of definitive rating. A list of indications for each procedure was statistically calculated. A consensus was reached for 41 indications (38%). The endoscopic approach was always appropriate for preoperatively diagnosed bile duct stones and inappropriate for patients with single intraoperative detected stones causing cholangitis and bile duct dilatation. Laparoscopic bile duct exploration was appropriate for preoperatively diagnosed choledocholithiasis if patients had not undergone a previous cholecystectomy and no signs of cholangitis were detected. The laparoscopic approach was also appropriate for intraoperatively incidentally detected stones, except for septic patients with poor performance status and multiple calculi. Laparoscopic bile duct clearance was judged inappropriate for septic patients with poor performance status and absence of bile duct dilatation. Open surgery was appropriate in all patients with intraoperative diagnosis of choledocholithiasis and cholangitis and in septic patients with bile duct dilatation. There was no clinical situation in which open surgery was appropriate when bile duct stones were preoperatively diagnosed. 
There is still uncertainty with respect to the management of choledocholithiasis, showing …

  10. Post-processing for statistical image analysis in light microscopy.

    Science.gov (United States)

    Cardullo, Richard A; Hinchcliffe, Edward H

    2013-01-01

    Image processing serves a number of important functions, including noise reduction, contrast enhancement, and feature extraction. Whatever the final goal, an understanding of the nature of image acquisition and digitization, and of the subsequent mathematical manipulations of the digitized image, is essential. Here we discuss the basic mathematical and statistical processes routinely used by microscopists to produce high-quality digital images and to extract key features of interest using a variety of extraction and thresholding tools.

  11. Statistical analysis of DNT detection using chemically functionalized microcantilever arrays

    DEFF Research Database (Denmark)

    Bosco, Filippo; Bache, M.; Hwu, E.-T.

    2012-01-01

    … from 1 to 2 cantilevers have been reported, without any information on repeatability and reliability of the presented data. In explosive detection high reliability is needed, and thus a statistical measurement approach needs to be developed and implemented. We have developed a DVD-based read-out system … capable of generating large sets of cantilever data for vapor- and liquid-phase detection of 2,4-dinitrotoluene (DNT). Gold-coated cantilevers are initially functionalized with tetraTTF-calix[4]pyrrole molecules, specifically designed to bind nitro-aromatic compounds. The selective binding of DNT molecules …

  12. Symbolic Data Analysis Conceptual Statistics and Data Mining

    CERN Document Server

    Billard, Lynne

    2012-01-01

    With the advent of computers, very large datasets have become routine. Standard statistical methods don't have the power or flexibility to analyse these efficiently, and extract the required knowledge. An alternative approach is to summarize a large dataset in such a way that the resulting summary dataset is of a manageable size and yet retains as much of the knowledge in the original dataset as possible. One consequence of this is that the data may no longer be formatted as single values, but be represented by lists, intervals, distributions, etc. The summarized data have their own internal s…

  13. An invariant approach to statistical analysis of shapes

    CERN Document Server

    Lele, Subhash R

    2001-01-01

    INTRODUCTION: A Brief History of Morphometrics; Foundations for the Study of Biological Forms; Description of the Data Sets. MORPHOMETRIC DATA: Types of Morphometric Data; Landmark Homology and Correspondence; Collection of Landmark Coordinates; Reliability of Landmark Coordinate Data; Summary. STATISTICAL MODELS FOR LANDMARK COORDINATE DATA: Statistical Models in General; Models for Intra-Group Variability; Effect of Nuisance Parameters; Invariance and Elimination of Nuisance Parameters; A Definition of Form; Coordinate System Free Representation of Form; Est…

  14. JAWS data collection, analysis highlights, and microburst statistics

    Science.gov (United States)

    Mccarthy, J.; Roberts, R.; Schreiber, W.

    1983-01-01

    Organization, equipment, and the current status of the Joint Airport Weather Studies project, initiated in relation to the microburst phenomenon, are summarized. Some data collection techniques and preliminary statistics on microburst events recorded by Doppler radar are discussed as well. Radar studies show that microbursts occur much more often than expected, with the majority of events being potentially dangerous to landing or departing aircraft. Seventy events were registered, with differential velocities ranging from 10 to 48 m/s; headwind/tailwind velocity differentials over 20 m/s are considered seriously hazardous. It is noted that a correlation is yet to be established between the velocity differential and incoherent radar reflectivity.

  15. Bayesian statistical analysis of censored data in geotechnical engineering

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Tarp-Johansen, Niels Jacob; Denver, Hans

    2000-01-01

    The geotechnical engineer is often faced with the problem of how to assess the statistical properties of a soil parameter on the basis of a sample measured in situ or in the laboratory, with the defect that some values have been replaced by interval bounds because the corresponding soil parameter values … is available about the soil parameter distribution. The present paper shows how a characteristic value can be assessed systematically by computer calculations from the actual sample of censored data, supplemented with prior information from a soil parameter database.

  16. Statistical analysis of phase formation in 2D colloidal systems.

    Science.gov (United States)

    Carstensen, Hauke; Kapaklis, Vassilios; Wolff, Max

    2018-01-23

    Colloidal systems offer unique opportunities for the study of phase formation and structure since their characteristic length scales are accessible to visible light. As a model system the two-dimensional assembly of colloidal magnetic and non-magnetic particles dispersed in a ferrofluid (FF) matrix is studied by transmission optical microscopy. We present a method to statistically evaluate images with thousands of particles and map phases by extraction of local variables. Different lattice structures and long-range connected branching chains are observed, when tuning the effective magnetic interaction and varying particle ratios.

  17. Introduction to statistical data analysis for the life sciences

    CERN Document Server

    Ekstrom, Claus Thorn

    2014-01-01

    This text provides a computational toolbox that enables students to analyze real datasets and gain the confidence and skills to undertake more sophisticated analyses. Although accessible with any statistical software, the text encourages a reliance on R. For those new to R, an introduction to the software is available in an appendix. The book also includes end-of-chapter exercises as well as an entire chapter of case exercises that help students apply their knowledge to larger datasets and learn more about approaches specific to the life sciences.

  18. Statistical analysis of s-wave neutron reduced widths

    International Nuclear Information System (INIS)

    Pandita Anita; Agrawal, H.M.

    1992-01-01

    The fluctuations of the s-wave neutron reduced widths for many nuclei have been analyzed, with emphasis on recent measurements, by a statistical procedure based on the method of maximum likelihood. It is shown that the s-wave neutron reduced widths of nuclei follow the single-channel Porter-Thomas distribution (χ²-distribution with ν = 1 degree of freedom) in most cases. However, there are apparent deviations from ν = 1, and a possible explanation and the significance of this deviation are given. These considerations are likely to modify the evaluation of neutron cross sections. (author)
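    The Porter-Thomas hypothesis tested above (reduced widths following a χ²-distribution with ν = 1) is straightforward to illustrate numerically. A minimal sketch, assuming unit mean width and a simple moment-based estimator of ν in place of the paper's full maximum-likelihood procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Porter-Thomas law: reduced widths distributed as (mean/nu) * chi2(nu), nu = 1.
mean_width, nu, n = 1.0, 1, 100_000
widths = (mean_width / nu) * rng.chisquare(df=nu, size=n)

# For a scaled chi2(nu) variable, var / mean**2 = 2 / nu, which gives the
# moment estimator below; for Porter-Thomas samples it comes out close to 1.
nu_hat = 2 * widths.mean() ** 2 / widths.var()
print(nu_hat)
```

    Deviations of the estimated ν from 1 in real resonance data are exactly the kind of effect the abstract discusses.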

  19. Data analysis of asymmetric structures advanced approaches in computational statistics

    CERN Document Server

    Saito, Takayuki

    2004-01-01

    Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and  considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.

  20. Potentially inappropriate prescribing in community-dwelling older people across Europe: a systematic literature review.

    Science.gov (United States)

    Tommelein, Eline; Mehuys, Els; Petrovic, Mirko; Somers, Annemie; Colin, Pieter; Boussery, Koen

    2015-12-01

    Potentially inappropriate prescribing (PIP) is one of the main risk factors for adverse drug events (ADEs) in older people. This systematic literature review aims to determine the prevalence and type of PIP in community-dwelling older people across Europe, and to identify risk factors for PIP. The PubMed and Web of Science databases were searched systematically for relevant manuscripts (January 1, 2000-December 31, 2014). Manuscripts were included if the study design was observational, the study participants were community-dwelling older patients in Europe, and a published screening method for PIP was used. Studies that focused on specific pathologies or on merely one inappropriate prescribing issue were excluded. Data analysis was performed using R. Fifty-two manuscripts were included, describing 82 different sample screenings with an estimated overall PIP prevalence of 22.6 % (CI 19.2-26.7 %; range 0.0-98.0 %). Ten of the sample screenings were based on the Beers 1997 criteria, 19 on the Beers 2003 criteria, 14 on the STOPP criteria (2008 version), 8 on the START criteria (2008 version), and 7 on the PRISCUS list. The 24 remaining sample screenings were carried out using compilations of screening methods or country-specific lists such as the Laroche criteria. Only PIP prevalence calculated from insurance data differed significantly from that of the other data collection method categories. Furthermore, the risk factors most often positively associated with PIP prevalence were polypharmacy, poor functional status, and depression. The drug groups most often involved in PIP were anxiolytics (ATC code N05B), antidepressants (N06A), and nonsteroidal anti-inflammatory and anti-rheumatic products (M01A). PIP prevalence in European community-dwelling older adults is high and depends partially on the data collection method used. Polypharmacy, poor functional status, and depression were identified as the most common risk factors for PIP.

  1. Radar Derived Spatial Statistics of Summer Rain. Volume 2; Data Reduction and Analysis

    Science.gov (United States)

    Konrad, T. G.; Kropfli, R. A.

    1975-01-01

    Data reduction and analysis procedures are discussed along with the physical and statistical descriptors used. The statistical modeling techniques are outlined and examples of the derived statistical characterization of rain cells in terms of the several physical descriptors are presented. Recommendations concerning analyses which can be pursued using the data base collected during the experiment are included.

  2. EFFICIENCY OF KNOWLEDGE TRANSFER THROUGH KNOWLEDGE TEXTS: STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    RAUCHOVÁ, Tereza

    2013-03-01

    Texts are an important way to share and transfer knowledge. In this paper we analyse the impact of a specific form of text, so-called "knowledge texts", on the efficiency of knowledge transfer. The objective is to verify or reject several hypotheses on the relationships among the style of educational texts (standard or knowledge style), learning outcomes (performance of the students after learning), and subjective evaluation of the conformity of working with the individual styles of text. For this purpose, we carried out an experiment with a homogeneous group of students (n = 41) divided into an experimental group and a control group. We used statistical methods to process the results of the experiment: the ability of the students to solve specific tasks, and their opinions on the readability and understandability of the texts, subject to the time spent on learning. Even though we found statistically significant relationships between the style of the texts and the accuracy of problem solving in the experimental group only, the results allow us to improve the experiment and to apply the methodology developed in a less structured branch than Operational Research (Graph Theory) is. The methodology is another benefit of the paper, because it can be applied independently of a particular domain.

  3. The R software fundamentals of programming and statistical analysis

    CERN Document Server

    Lafaye de Micheaux, Pierre; Liquet, Benoit

    2013-01-01

    The contents of The R Software are presented so as to be both comprehensive and easy for the reader to use. Besides its application as a self-learning text, this book can support lectures on R at any level from beginner to advanced. This book can serve as a textbook on R for beginners as well as more advanced users, working on Windows, macOS or Linux OSes. The first part of the book deals with the heart of the R language and its fundamental concepts, including data organization, import and export, various manipulations, documentation, plots, programming and maintenance. The last chapter in this part deals with object-oriented programming as well as interfacing R with C/C++ or Fortran, and contains a section on debugging techniques. This is followed by the second part of the book, which provides detailed explanations on how to perform many standard statistical analyses, mainly in the Biostatistics field. Topics from mathematical and statistical settings that are included are matrix operations, integration, o...

  4. Sealed-bid auction of Netherlands mussels: statistical analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; van Schaik, F.D.J.

    2011-01-01

    This article presents an econometric analysis of the many data on the sealed-bid auction that sells mussels in Yerseke town, the Netherlands. The goals of this analysis are obtaining insight into the important factors that determine the price of these mussels, and quantifying the performance of an …

  5. [Patients with hyperlipidemia: inappropriate nutritional intake].

    Science.gov (United States)

    Lecerf, Jean-Michel; Hottin, Delphine Mastin

    2004-10-23

    The aim was to gather knowledge on nutritional supplementation in patients with hyperlipidemia. In an observational study of patients with hyperlipidemia, nutritional intake was assessed using a 7-day dietary questionnaire provided at the first visit to a lipid clinic. 291 patients (201 men and 90 women) were studied. Calorie intake and the proportion of energetic nutrients revealed low carbohydrate intake, low intake of dietary fibres, and excessive lipid and saturated fatty acid intakes. Patients with isolated hypercholesterolemia had nutritional intake very similar to the daily allowances recommended in France. Men with type III hyperlipidemia had the highest calorie intake and those with type IV dyslipidemia had the highest alcohol intake. Triglycerides increased with total energy intake and with fat intake (%). Body mass index was inversely correlated with carbohydrate intake. The duration of dyslipidemia was related to low vitamin C and B9 intake. The existence of risk factors (type 2 diabetes, hypertension, smoking or inactivity) was associated with a less well-balanced diet and low protective micronutrient status. In the case of atherosclerosis, vitamin B9, C, E and beta-carotene intake was insufficient. Interactions existed between nutrient intakes, with correlations among fibres, vitamin B9, C and beta-carotene, suggesting that nutritional education should favour foodstuffs that provide them simultaneously. Nutritional intake in patients with hyperlipidemia is often far from that recommended and does not greatly differ from that in large non-selected populations. It can be considered inappropriate because of the metabolic and cardiovascular risks in these patients. Adapted nutritional management is crucial.

  6. Accommodating Presuppositions Is Inappropriate in Implausible Contexts.

    Science.gov (United States)

    Singh, Raj; Fedorenko, Evelina; Mahowald, Kyle; Gibson, Edward

    2016-04-01

    According to one view of linguistic information (Karttunen, 1974; Stalnaker, 1974), a speaker can convey contextually new information in one of two ways: (a) by asserting the content as new information; or (b) by presupposing the content as given information, which would then have to be accommodated. This distinction predicts that it is conversationally more appropriate to assert implausible information rather than presuppose it (e.g., von Fintel, 2008; Heim, 1992; Stalnaker, 2002). A second view rejects the assumption that presuppositions are accommodated; instead, presuppositions are assimilated into asserted content and both are correspondingly open to challenge (e.g., Gazdar, 1979; van der Sandt, 1992). Under this view, we should not expect to find a difference in conversational appropriateness between asserting implausible information and presupposing it. To distinguish between these two views of linguistic information, we performed two self-paced reading experiments with an on-line stops-making-sense judgment. The results of the two experiments, using the presupposition triggers "the" and "too", show that accommodation is inappropriate (makes less sense) relative to non-presuppositional controls when the presupposed information is implausible but not when it is plausible. These results provide support for the first view of linguistic information: the contrast in implausible contexts can only be explained if there is a presupposition-assertion distinction and accommodation is a mechanism dedicated to reasoning about presuppositions.

  7. Statistical analysis of questionnaires a unified approach based on R and Stata

    CERN Document Server

    Bartolucci, Francesco; Gnaldi, Michela

    2015-01-01

    Statistical Analysis of Questionnaires: A Unified Approach Based on R and Stata presents special statistical methods for analyzing data collected by questionnaires. The book takes an applied approach to testing and measurement tasks, mirroring the growing use of statistical methods and software in education, psychology, sociology, and other fields. It is suitable for graduate students in applied statistics and psychometrics and practitioners in education, health, and marketing. The book covers the foundations of classical test theory (CTT), test reliability, va…

  8. Statistical analysis of Nomao customer votes for spots of France

    Science.gov (United States)

    Pálovics, Róbert; Daróczy, Bálint; Benczúr, András; Pap, Julia; Ermann, Leonardo; Phan, Samuel; Chepelianskii, Alexei D.; Shepelyansky, Dima L.

    2015-08-01

    We investigate the statistical properties of votes of customers for spots of France collected by the startup company Nomao. The frequencies of votes per spot and per customer are characterized by a power-law distribution which remains stable on a time scale of a decade when the number of votes is varied by almost two orders of magnitude. Using computer science methods we explore the spectrum and the eigenvalues of a matrix containing user ratings of geolocalized items. Eigenvalues map nicely to large towns and regions but show a certain level of instability as we modify the interpretation of the underlying matrix. We evaluate imputation strategies that provide improved prediction performance by reaching geographically smooth eigenvectors. We point out possible links between the distribution of votes and the phenomenon of self-organized criticality.
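    A power-law frequency distribution of the kind reported here is usually checked by estimating its exponent directly from the data. A minimal sketch using synthetic Pareto-distributed vote counts and the standard continuous maximum-likelihood estimator (the exponent 2.5 and the cutoff xmin are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "vote counts": continuous power law p(x) ~ x**(-alpha)
# for x >= xmin, drawn by inverse-CDF sampling.
alpha, xmin, n = 2.5, 1.0, 50_000
u = rng.uniform(size=n)
counts = xmin * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# Maximum-likelihood (Hill) estimator of the exponent;
# recovers a value close to the true alpha.
alpha_hat = 1.0 + n / np.log(counts / xmin).sum()
print(alpha_hat)
```

    The same estimator applied to the empirical vote-frequency data would quantify the stability of the power law that the abstract describes.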

  9. Statistical analysis of complex systems with nonclassical invariant measures

    KAUST Repository

    Fratalocchi, Andrea

    2011-02-28

    I investigate the problem of finding a statistical description of a complex many-body system whose invariant measure cannot be constructed stemming from classical thermodynamics ensembles. By taking solitons as a reference system and by employing a general formalism based on the Ablowitz-Kaup-Newell-Segur scheme, I demonstrate how to build an invariant measure and, within a one-dimensional phase space, how to develop a suitable thermodynamics. A detailed example is provided with a universal model of wave propagation, with reference to a transparent potential sustaining gray solitons. The system shows a rich thermodynamic scenario, with a free-energy landscape supporting phase transitions and controllable emergent properties. I finally discuss the origin of such behavior, trying to identify common denominators in the area of complex dynamics.

  10. Statistical Analysis of Conductor Motion in LHC Superconducting Dipole Magnets

    CERN Document Server

    Calvi, M; Pugnat, P; Siemko, A

    2004-01-01

    Premature training quenches are usually caused by transient energy release within the magnet coil as it is energised. The dominant disturbances originate in cable motion and produce observable rapid variations in voltage signals called spikes. The experimental set-up and the raw data treatment used to detect these phenomena are briefly recalled. The statistical properties of different features of spikes are presented, such as the maximal amplitude, the energy, the duration, and the time correlation between events. The parameterisation of the mechanical activity of magnets is addressed. The mechanical activity of full-scale prototype and first pre-series LHC dipole magnets is analysed, and correlations with magnet manufacturing procedures and quench performance are established. The predictability of quench occurrence is discussed and examples are presented.

  11. Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities

    Science.gov (United States)

    Zayed, Nourhan; Elnemr, Heba A.

    2015-01-01

    The Haralick texture features are a well-known mathematical method to detect lung abnormalities, giving the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, statistical evaluation of the different features represents the reported performance of the proposed method. Thirty-seven patients' CT datasets, with either lung tumor or pulmonary edema, were included in this study. The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation techniques to segment the lungs, and finally Haralick texture features to detect the type of abnormality within the lungs. In spite of the presence of low contrast and high noise in the images, the proposed algorithms produce promising results in detecting lung abnormality in most of the patients in comparison with normals, and suggest that some of the features can be recommended significantly more than others. PMID:26557845

  12. Statistical Analysis of Haralick Texture Features to Discriminate Lung Abnormalities

    Directory of Open Access Journals (Sweden)

    Nourhan Zayed

    2015-01-01

    Full Text Available The Haralick texture features are a well-known mathematical method for detecting lung abnormalities, giving the physician the opportunity to localize the abnormal tissue type, either lung tumor or pulmonary edema. In this paper, statistical evaluation of the different features represents the reported performance of the proposed method. CT datasets from thirty-seven patients with either lung tumor or pulmonary edema were included in this study. The CT images are first preprocessed for noise reduction and image enhancement, followed by segmentation techniques to segment the lungs, and finally Haralick texture features to detect the type of abnormality within the lungs. Despite the low contrast and high noise in the images, the proposed algorithms yield promising results in detecting lung abnormalities in most patients in comparison with normal cases, and suggest that some features are more strongly recommended than others.

  13. Statistical analysis of intramembranous particles using freeze fracture specimens.

    Science.gov (United States)

    Schladitz, Katja; Särkkä, Aila; Pavenstädt, Iris; Haferkamp, Otto; Mattfeldt, Torsten

    2003-08-01

    We studied the point processes of intramembranous particles of mitochondrial membranes from HeLa cells using the freeze-fracture technique. Three groups were compared: normal conditions, exposure to rotenone, and exposure to sodium azide. First, we used several summary statistics to study the two-dimensional point patterns of intramembranous particles within each group. Then, we compared the patterns of the different groups by bootstrap tests using the K-function and the nearest-neighbour distance function G(r). Estimation of the G-function provided significant results, but no significant differences between groups were found using the classical K-function; estimation of G(r) should therefore not be omitted when studying observed planar point patterns.
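
    A minimal empirical estimate of the nearest-neighbour distance function G(r) used above can be sketched as follows (without the edge corrections a full analysis would apply):

    ```python
    import numpy as np

    def g_function(points, r_values):
        """Empirical nearest-neighbour distance function G(r) of a 2-D point pattern."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)          # exclude each point's distance to itself
        nn = d.min(axis=1)                   # nearest-neighbour distance of every point
        return np.array([(nn <= r).mean() for r in r_values])

    # on a regular unit grid every nearest neighbour sits at distance exactly 1
    grid = np.array([(x, y) for x in range(5) for y in range(5)], float)
    g = g_function(grid, [0.5, 1.0, 2.0])
    ```

    For a homogeneous Poisson process G(r) = 1 − exp(−λπr²), so departures of the empirical curve from that form indicate clustering or regularity.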

  14. Statistical analysis of P-wave neutron reduced widths

    International Nuclear Information System (INIS)

    Joshi, G.C.; Agrawal, H.M.

    2000-01-01

    The fluctuations of the p-wave neutron reduced widths for fifty-one nuclei have been analyzed, with emphasis on recent measurements, by a statistical procedure based on the method of maximum likelihood. It is shown that the p-wave neutron reduced widths of even-even nuclei follow the single-channel Porter-Thomas distribution (a χ²-distribution with ν = 1 degree of freedom) in most cases where there is no intermediate structure. It is emphasized that the distribution in nuclei other than even-even may differ from a χ²-distribution with one degree of freedom. A possible explanation and the significance of this deviation from ν = 1 are given. (author)
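
    The Porter-Thomas case can be checked numerically: widths proportional to squares of Gaussian amplitudes should give ν ≈ 1. The moment estimator below (ν̂ = 2/Var(x) for widths normalised to unit mean) is a simple stand-in for the maximum-likelihood procedure used in the paper.

    ```python
    import numpy as np

    def estimate_nu(widths):
        """Moment estimate of chi-square degrees of freedom:
        for x ~ chi2(nu)/nu (unit mean), Var(x) = 2/nu."""
        x = widths / widths.mean()
        return 2.0 / x.var()

    rng = np.random.default_rng(0)
    widths = rng.standard_normal(100_000) ** 2   # Porter-Thomas: single channel, nu = 1
    nu_hat = estimate_nu(widths)
    ```

    A fit giving ν̂ well above 1 would signal a deviation from the single-channel picture, as discussed for nuclei other than even-even.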

  15. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  16. Consolidity analysis for fully fuzzy functions, matrices, probability and statistics

    OpenAIRE

    Walaa Ibrahim Gabr

    2015-01-01

    The paper presents a comprehensive review of the know-how for developing the systems consolidity theory for modeling, analysis, optimization and design in fully fuzzy environment. The solving of systems consolidity theory included its development for handling new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fraction of fuzzy polynomials. On the other hand, ...

  17. [Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].

    Science.gov (United States)

    Golder, W

    1999-09-01

    To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basal and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basal, and 0.5% advanced statistics. The corresponding figures for 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and in articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors make increasing use of statistical experts' advice and statistical software. During the last decade, the use of statistical methods in German radiological journals has fundamentally improved, both quantitatively and qualitatively. At present, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.
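
    Since ROC analysis is named as the single most common advanced technique, a compact way to compute the area under the ROC curve is the rank-sum (Mann-Whitney) identity. This is a generic sketch, not tied to any study in the survey:

    ```python
    import numpy as np

    def roc_auc(scores, labels):
        """AUC via the Mann-Whitney rank-sum identity (assumes no tied scores)."""
        scores = np.asarray(scores, float)
        labels = np.asarray(labels, bool)
        ranks = np.empty(len(scores))
        ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
        n_pos, n_neg = labels.sum(), (~labels).sum()
        # sum of positive ranks minus its minimum possible value, scaled to [0, 1]
        return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    ```

    The identity makes clear that AUC is the probability a randomly chosen positive case outscores a randomly chosen negative one.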

  18. Potentially inappropriate prescriptions for older patients in long-term care

    Directory of Open Access Journals (Sweden)

    Laurin Danielle

    2004-10-01

    Full Text Available Abstract Background Inappropriate medication use is a major healthcare issue for the elderly population. This study explored the prevalence of potentially inappropriate prescriptions (PIPs) in long-term care in metropolitan Quebec. Methods A cross-sectional chart review of 2,633 older long-term care patients of the Quebec City area was performed. An explicit criteria list for PIPs was developed based on the literature and validated by a modified Delphi method. Medication orders were reviewed to describe prescribing patterns and to determine the prevalence of PIPs. A multivariate analysis was performed to identify predictors of PIPs. Results Almost all residents (94.0%) were receiving one or more prescribed medications; on average, patients had 4.8 prescribed medications. A majority (54.7%) of treated patients had a potentially inappropriate prescription (PIP). The most common PIPs were drug interactions (33.9% of treated patients), followed by potentially inappropriate duration (23.6%), potentially inappropriate medication (14.7%) and potentially inappropriate dosage (9.6%). PIPs were most frequent for medications of the central nervous system (10.8% of prescribed medications). The likelihood of PIP increased significantly as the number of drugs prescribed increased (odds ratio [OR]: 1.38, 95% confidence interval [CI]: 1.33-1.43) and with the length of stay (OR: 1.78, CI: 1.43-2.20). On the other hand, the risk of receiving a PIP decreased with age. Conclusion Potentially inappropriate prescribing is a serious problem in the highly medicated long-term care population in metropolitan Quebec. Use of explicit criteria lists may help identify the most critical issues and prioritize interventions to improve quality of care and patient safety.
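
    The odds ratios above come from a multivariate model; as a simpler illustration of how an odds ratio and its Wald 95% confidence interval are computed, here is a sketch for a single 2x2 table. The counts are hypothetical, not the study's data:

    ```python
    import numpy as np

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio with Wald 95% CI from a 2x2 table:
        a = exposed with outcome, b = exposed without,
        c = unexposed with outcome, d = unexposed without."""
        or_ = (a * d) / (b * c)
        se = np.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
        lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
        return or_, lo, hi

    # hypothetical counts: polypharmacy exposure vs. PIP outcome
    or_, lo, hi = odds_ratio_ci(120, 80, 60, 140)
    ```

    An interval that excludes 1 (as for both ORs reported above) indicates a statistically significant association.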

  19. New Statistical Approach to the Analysis of Hierarchical Data

    Science.gov (United States)

    Neuman, S. P.; Guadagnini, A.; Riva, M.

    2014-12-01

    Many variables possess a hierarchical structure reflected in how their increments vary in space and/or time. Quite commonly the increments (a) fluctuate in a highly irregular manner; (b) possess symmetric, non-Gaussian frequency distributions characterized by heavy tails that often decay with separation distance or lag; (c) exhibit nonlinear power-law scaling of sample structure functions in a midrange of lags, with breakdown in such scaling at small and large lags; (d) show extended power-law scaling (ESS) at all lags; and (e) display nonlinear scaling of power-law exponent with order of sample structure function. Some interpret this to imply that the variables are multifractal, which explains neither breakdowns in power-law scaling nor ESS. We offer an alternative interpretation consistent with all above phenomena. It views data as samples from stationary, anisotropic sub-Gaussian random fields subordinated to truncated fractional Brownian motion (tfBm) or truncated fractional Gaussian noise (tfGn). The fields are scaled Gaussian mixtures with random variances. Truncation of fBm and fGn entails filtering out components below data measurement or resolution scale and above domain scale. Our novel interpretation of the data allows us to obtain maximum likelihood estimates of all parameters characterizing the underlying truncated sub-Gaussian fields. These parameters in turn make it possible to downscale or upscale all statistical moments to situations entailing smaller or larger measurement or resolution and sampling scales, respectively. They also allow one to perform conditional or unconditional Monte Carlo simulations of random field realizations corresponding to these scales. Aspects of our approach are illustrated on field and laboratory measured porous and fractured rock permeabilities, as well as soil texture characteristics and neural network estimates of unsaturated hydraulic parameters in a deep vadose zone near Phoenix, Arizona. 
We also use our approach
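
    The sample structure functions whose power-law scaling is discussed in items (c)-(e) can be estimated directly from increments. The sketch below uses ordinary Brownian motion (a cumulative sum of Gaussian noise, H = 0.5) as a test signal, for which the second-order structure function S₂(r) scales linearly with lag r; it is an illustration, not the authors' sub-Gaussian estimation procedure.

    ```python
    import numpy as np

    def structure_function(series, lags, q):
        """Sample structure function S_q(r) = <|y(x+r) - y(x)|^q> over the given lags."""
        return np.array([np.mean(np.abs(series[l:] - series[:-l]) ** q) for l in lags])

    rng = np.random.default_rng(1)
    y = np.cumsum(rng.standard_normal(200_000))      # Brownian-motion-like test signal
    lags = np.array([1, 2, 4, 8, 16, 32])
    s2 = structure_function(y, lags, 2)
    slope = np.polyfit(np.log(lags), np.log(s2), 1)[0]   # expect ~2H = 1 for Brownian motion
    ```

    Breakdown of such log-log linearity at small and large lags is exactly the phenomenon the truncated-field interpretation above is designed to explain.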

  20. Testing normality using the summary statistics with application to meta-analysis

    OpenAIRE

    Luo, Dehui; Wan, Xiang; Liu, Jiming; Tong, Tiejun

    2018-01-01

    As the most important tool to provide high-level evidence-based medicine, researchers can statistically summarize and combine data from multiple studies by conducting meta-analysis. In meta-analysis, mean differences are frequently used effect size measurements to deal with continuous data, such as the Cohen's d statistic and Hedges' g statistic values. To calculate the mean difference based effect sizes, the sample mean and standard deviation are two essential summary measures. However, many...

  1. The inappropriate use of lumbar magnetic resonance imaging in a health service area

    International Nuclear Information System (INIS)

    Rodriguez Recio, F. J.; Sanz, J. C.; Vera, S.; Peiro, S.

    1999-01-01

    To identify the percentage of inappropriate lumbar spine magnetic resonance imaging in the Soria Health Service, to quantify the costs, and to examine possible associations between inappropriate use, patient characteristics and the requesting services. A descriptive study of the inappropriate use of MRI of the lumbar spine, based on the retrospective review, carried out by a radiologist, of the 233 MRIs requested between 1995 and 1998. For the assessment, the criteria of the American College of Radiology (ACR) and the Basque Agency for the Evaluation of Technologies (OSTEBA) were used. All the MRIs were carried out at an approved centre; the costs were calculated from the expenses paid by the Insalud, including transport costs, at prices applicable for the year in question. 11.7% of the studies were rated as inappropriate, 2.1% as debatable and the remainder as adequate according to the ACR criteria; the inappropriateness was concentrated in studies for lumbago, which accounted for 80% of the inappropriate requests. The ACR and OSTEBA criteria coincided to a high degree (kappa statistic: 0.87). The expense related to the unnecessary studies was a little over a million pesetas. No differences were found in the proportion of inappropriate studies according to patient characteristics or requesting service, apart from the one already mentioned for the suspected diagnosis. Although the results of the study cannot be generalised to other settings, they suggest a significant proportion of inappropriate use of lumbar spine MRI that could have an important repercussion on health care expenses. (Author) 11 refs

  2. Statistical analysis about corrosion in nuclear power plants

    International Nuclear Information System (INIS)

    Naquid G, C.; Medina F, A.; Zamora R, L.

    1999-01-01

    Investigations have been carried out into the degradation mechanisms of structures, systems and components in nuclear power plants, since many of the processes involved determine the reliability of these plants, the integrity of their components, and their safety. This work presents statistics from studies of materials corrosion in its wide variety of specific mechanisms, as observed worldwide in PWR, BWR and WWER reactors, analysing the AIRS (Advanced Incident Reporting System) records for the period 1993-1998 for the first two reactor types and 1982-1995 for the WWER. The identification of factors allows them to be characterized as applicable, i.e. those that occurred through the presence of some corrosion mechanism, or as not applicable, i.e. those due to incidental natural factors, mechanical failures and human errors. Finally, the total number of cases analysed corresponds to the sum of the applicable and non-applicable cases. (Author)

  3. Statistical analysis of the main diseases among atomic bomb survivors

    International Nuclear Information System (INIS)

    Hamada, Tadao; Kuramoto, Kiyoshi; Nambu, Shigeru

    1988-01-01

    Diseases found in 2,104 consecutive inpatients between April 1981 and March 1986 were statistically analyzed. The incidence of disease ranked in the following order: diabetes mellitus > heart disease > cerebrovascular disorder > malignancy > hypertensive disease > arteriosclerosis > osteoarthritis. Malignancy was the most common cause of death, with the highest mortality rate, followed by heart disease, cerebrovascular disorder, and liver cirrhosis. By number of autopsies, the order of diseases was: malignancy, cardiovascular disease, gastrointestinal disease, respiratory tract disease, endocrine disease, and hematopoietic disease; by autopsy incidence, the order was: liver cirrhosis, diabetes mellitus, cerebrovascular disorder, malignancy, and heart disease. Malignancy accounted for 23% of the inpatients. The incidence of malignancy by organ ranked in the following order: stomach > liver > colon > lung > breast > biliary tract > esophagus. The incidence of leukemia was low. There was no definitive correlation between the incidence of malignancy and exposure distance, although the incidence of breast cancer tended to be high in the group exposed at ≤2,000 m from the hypocenter. By age class, gastric cancer was frequent in patients under 40 years and over 60 years. Liver cancer was most common in men in their sixth decade of life. The incidence of lung cancer increased with advancing age; the incidence of breast cancer was higher in younger patients. (Namekawa, K.)

  4. Statistical language analysis for automatic exfiltration event detection.

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, David Gerald

    2010-04-01

    This paper discusses the recent development of a statistical approach for the automatic identification of anomalous network activity that is characteristic of exfiltration events. This approach is based on the language processing method referred to as latent Dirichlet allocation (LDA). Cyber security experts currently depend heavily on a rule-based framework for initial detection of suspect network events. The application of the rule set typically results in an extensive list of suspect network events that are then further explored manually for suspicious activity. The ability to identify anomalous network events is heavily dependent on the experience of the security personnel wading through the network log. Limitations of this approach are clear: rule-based systems only apply to exfiltration behavior that has previously been observed, and experienced cyber security personnel are rare commodities. Since the new methodology is not a discrete rule-based approach, it is more difficult for an insider to disguise the exfiltration events. A further benefit is that the methodology provides a risk-based approach that can be implemented in a continuous, dynamic or evolutionary fashion. This permits suspect network activity to be identified early, with a quantifiable risk associated with decision making when responding to suspicious activity.
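
    The LDA step can be sketched with scikit-learn's generic `LatentDirichletAllocation` on synthetic token counts; the two "token-usage profiles" below are hypothetical stand-ins for normal versus exfiltration-like traffic, not the paper's data or model.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    # synthetic "network event logs": counts of 20 token types over 100 sessions,
    # drawn from two distinct token-usage profiles
    profile_a = rng.dirichlet(np.ones(20))
    profile_b = rng.dirichlet(np.ones(20))
    counts = np.vstack([rng.multinomial(200, profile_a, size=50),
                        rng.multinomial(200, profile_b, size=50)])

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
    theta = lda.transform(counts)   # per-session topic mixture (rows sum to 1)
    # sessions whose dominant topic is rare across the corpus would be flagged for review
    ```

    The risk-based flavour of the approach comes from scoring sessions by how improbable their topic mixture is, rather than by matching discrete rules.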

  5. AN ANALYSIS OF SOME RECENT STATISTICS OF THE ROMANIAN TOURISM

    Directory of Open Access Journals (Sweden)

    Iuliana BUCURESCU

    2011-06-01

    Full Text Available One studies the evolution over time of some indicators representative of touristic activity in Romania during 2000-2009, as well as the correlations between them: the number of arrivals and of overnight stays in tourism structures with accommodation functions, and the number of tourism structures and their accommodation capacity, separately for foreign and Romanian visitors and for different tourism destinations. All these indicators were extracted from the database of the National Institute of Statistics. Generally, an increase over time in the number of tourists is found, but also a certain decrease during the last two to three years, except for some groups of destinations which show a rather peculiar and interesting dynamics. Thus, tourism in the resorts of the seaside area has registered an accentuated decrease during the last four years, especially among foreign tourists, which reflects a change in their options. On the other hand, tourism for the destination category "other localities and touristic routes" (which excludes the resorts of the spa, seaside, and mountain areas, as well as the city of Bucharest and all the county capital cities) has shown a remarkable growth during the whole considered time interval, indicating an increase in the interest of tourists (both Romanian and foreign) in cultural and rural tourism.

  6. Prevalence and Risk of Inappropriate Sexual Behavior of Patients Toward Physical Therapist Clinicians and Students in the United States.

    Science.gov (United States)

    Boissonnault, Jill S; Cambier, Ziádee; Hetzel, Scott J; Plack, Margaret M

    2017-11-01

    For health care providers in the United States, the risk for nonfatal violence in the workplace is 16 times greater than that for other workers. Inappropriate patient sexual behavior (IPSB) is directed at clinicians, staff, or other patients and may include leering, sexual remarks, deliberate touching, indecent exposure, and sexual assault. Inappropriate patient sexual behavior may adversely affect clinicians, the organization, or patients themselves. Few IPSB risk factors for physical therapists have been confirmed. The US prevalence was last assessed in the 1990s. The objectives of this study were to determine career and 12-month exposure to IPSB among US physical therapists, physical therapist assistants, physical therapist students, and physical therapist assistant students and to identify IPSB risk factors. This was a retrospective and observational study. An electronic survey was developed; content validity and test-retest reliability were established. Participants were recruited through physical therapist and physical therapist assistant academic programs and sections of the American Physical Therapy Association. Inappropriate patient sexual behavior risk models were constructed individually for any, mild, moderate, and severe IPSB events reported over the past 12 months. Open-ended comments were analyzed using qualitative methods. Eight hundred ninety-two physical therapist professionals and students completed the survey. The career prevalence among respondents was 84%, and the 12-month prevalence was 47%. Statistical risk modeling for any IPSB over the past 12 months indicated the following risks: having fewer years of direct patient care, routinely working with patients with cognitive impairments, being a female practitioner, and treating male patients. Qualitative analysis of 187 open-ended comments revealed patient-related characteristics, provider-related characteristics, and abusive actions. 
Self-report, clinician memory, and convenience sampling are

  7. Statistical analysis of long term spatial and temporal trends of ...

    Indian Academy of Sciences (India)

    The annual and seasonal trend analysis of different surface temperature parameters (average, maximum, minimum and diurnal temperature range) has been done for historical (1971–2005) and future periods (2011–2099) in the middle catchment of Sutlej river basin, India. The future time series of temperature data has ...

  8. Statistical analysis plan for the EuroHYP-1 trial

    DEFF Research Database (Denmark)

    Winkel, Per; Bath, Philip M; Gluud, Christian

    2017-01-01

    Score; (4) brain infarct size at 48 +/-24 hours; (5) EQ-5D-5 L score, and (6) WHODAS 2.0 score. Other outcomes are: the primary safety outcome serious adverse events; and the incremental cost-effectiveness, and cost utility ratios. The analysis sets include (1) the intention-to-treat population, and (2...

  9. Spatial statistical analysis of dissatisfaction with the performance of ...

    African Journals Online (AJOL)

    The analysis reveals spatial clustering in the level of dissatisfaction with the performance of local government. It also reveals percentage of respondents dissatisfied with dwelling, mean sense of safety index, and percentage agree the country is going in the wrong direction, as significant predictors of the level of local ...

  10. Open Access Publishing Trend Analysis: Statistics beyond the Perception

    Science.gov (United States)

    Poltronieri, Elisabetta; Bravo, Elena; Curti, Moreno; Maurizio Ferri,; Mancini, Cristina

    2016-01-01

    Introduction: The purpose of this analysis was twofold: to track the number of open access journals acquiring impact factor, and to investigate the distribution of subject categories pertaining to these journals. As a case study, journals in which the researchers of the National Institute of Health (Istituto Superiore di Sanità) in Italy have…

  11. Statistical analysis of the organizational factors influence on the ...

    African Journals Online (AJOL)

    At the same time, the study of working hours by means of working-day photographs, the momentary-observation method and time studies is important. A correlation dependence of workers' labour productivity on work-organization factors is revealed. The analysis of the indicators directed to organizational ...

  12. Sealed-Bid Auction of Dutch Mussels : Statistical Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; van Schaik, F.D.J.

    2007-01-01

    This article presents an econometric analysis of the many data on the sealed-bid auction that sells mussels in Yerseke town, the Netherlands. The goals of this analysis are obtaining insight into the important factors that determine the price of these mussels, and quantifying the performance of an

  13. Solar spectra analysis based on the statistical moment method

    Czech Academy of Sciences Publication Activity Database

    Druckmüller, M.; Klvaňa, Miroslav; Druckmüllerová, Z.

    2007-01-01

    Roč. 31, č. 1 (2007), s. 297-307 ISSN 1845-8319. [Dynamical processes in the solar atmosphere. Hvar, 24.09.2006-29.09.2006] R&D Projects: GA ČR GA205/04/2129 Institutional research plan: CEZ:AV0Z10030501 Keywords : spectral analysis * method Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics

  14. Statistics Education Research in Malaysia and the Philippines: A Comparative Analysis

    Science.gov (United States)

    Reston, Enriqueta; Krishnan, Saras; Idris, Noraini

    2014-01-01

    This paper presents a comparative analysis of statistics education research in Malaysia and the Philippines by modes of dissemination, research areas, and trends. An electronic search for published research papers in the area of statistics education from 2000-2012 yielded 20 for Malaysia and 19 for the Philippines. Analysis of these papers showed…

  15. Inappropriate treatments for patients with cognitive decline.

    Science.gov (United States)

    Robles Bayón, A; Gude Sampedro, F

    2014-01-01

    Some treatments are inappropriate for patients with cognitive decline. We analyse their use in 500 patients and present a literature review. Benzodiazepines produce dependence and reduce attention, memory, and motor ability. They can cause disinhibition or aggressive behaviour, facilitate the appearance of delirium, and increase accident and mortality rates in people older than 60. In subjects over 65, low systolic blood pressure is associated with cognitive decline. Maintaining this figure between 130 and 140 mm Hg (145 in patients older than 80) is recommended. Hypocholesterolaemia <160 mg/dl is associated with increased morbidity and mortality, aggressiveness, and suicide; HDL-cholesterol <40 mg/dl is associated with memory loss and increased vascular and mortality risks. Old age is a predisposing factor for developing cognitive disorders or delirium when taking opioids. The risks of prescribing anticholinesterases and memantine to patients with non-Alzheimer dementia that is not associated with Parkinson disease, mild cognitive impairment, or psychiatric disorders probably outweigh the benefits. Anticholinergic drugs acting preferentially on the peripheral system can also induce cognitive side effects. Practitioners should be aware of steroid-induced dementia and steroid-induced psychosis, and know that the risk of delirium increases with polypharmacy. Of 500 patients with cognitive impairment, 70.4% were on multiple medications and 42% were taking benzodiazepines. Both conditions were present in 74.3% of all suspected iatrogenic cases. Polypharmacy should be avoided if it is not essential, especially in elderly patients and those with cognitive impairment. Benzodiazepines, opioids and anticholinergics often elicit cognitive and behavioural disorders. Moreover, systolic blood pressure must be kept above 130 mm Hg, total cholesterol levels over 160 mg/dl, and HDL-cholesterol over 40 mg/dl in this population. Copyright © 2012 Sociedad Española de Neurología.

  16. ON THE STATISTICAL ANALYSIS OF X-RAY POLARIZATION MEASUREMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Strohmayer, T. E.; Kallman, T. R. [X-ray Astrophysics Lab, Astrophysics Science Division, NASA' s Goddard Space Flight Center, Greenbelt, MD 20771 (United States)

    2013-08-20

    In many polarimetry applications, including observations in the X-ray band, the measurement of a polarization signal can be reduced to the detection and quantification of a deviation from uniformity of a distribution of measured angles of the form A + B·cos²(φ − φ₀) (0 < φ < π). We explore the statistics of such polarization measurements using Monte Carlo simulations and χ² fitting methods. We compare our results to those derived using the traditional probability density used to characterize polarization measurements and quantify how they deviate as the intrinsic modulation amplitude grows. We derive relations for the number of counts required to reach a given detection level (parameterized by β, the "number of σ's" of the measurement) appropriate for measuring the modulation amplitude a by itself (single interesting parameter case) or jointly with the position angle φ (two interesting parameters case). We show that for the former case, when the intrinsic amplitude is equal to the well-known minimum detectable polarization (MDP), it is, on average, detected at the 3σ level. For the latter case, when one requires a joint measurement at the same confidence level, more counts are needed than were required to achieve the MDP level. This additional factor is amplitude-dependent, but is ≈2.2 for intrinsic amplitudes less than about 20%. It decreases slowly with amplitude and is ≈1.8 when the amplitude is 50%. We find that the position angle uncertainty at 1σ confidence is well described by the relation σ_φ = 28.°5/β.
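
    The modulation-curve statistics can be explored with a small Monte Carlo in the spirit of the paper: draw photon angles from p(φ) ∝ 1 + a·cos 2(φ − φ₀) and recover a and φ₀ with a Stokes-style estimator. This is an illustrative sketch, not the authors' fitting code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_angles(n, a, phi0):
        """Draw angles from p(phi) ∝ 1 + a*cos(2*(phi - phi0)) on (0, pi) by rejection."""
        out = np.empty(0)
        while out.size < n:
            phi = rng.uniform(0, np.pi, 2 * n)
            keep = rng.uniform(0, 1 + a, 2 * n) < 1 + a * np.cos(2 * (phi - phi0))
            out = np.concatenate([out, phi[keep]])
        return out[:n]

    def modulation_estimate(phi):
        """Stokes-style estimator of modulation amplitude a and position angle phi0:
        E[cos 2phi] = (a/2) cos 2phi0 and E[sin 2phi] = (a/2) sin 2phi0."""
        q, u = np.mean(np.cos(2 * phi)), np.mean(np.sin(2 * phi))
        return 2 * np.hypot(q, u), 0.5 * np.arctan2(u, q)

    a_hat, phi0_hat = modulation_estimate(sample_angles(200_000, 0.5, 0.3))
    ```

    Repeating this at varying count numbers is how detection-level relations such as the MDP scaling are mapped out numerically.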

  17. Distribution-level electricity reliability: Temporal trends using statistical analysis

    International Nuclear Information System (INIS)

    Eto, Joseph H.; LaCommare, Kristina H.; Larsen, Peter; Todd, Annika; Fisher, Emily

    2012-01-01

    This paper helps to address the lack of comprehensive, national-scale information on the reliability of the U.S. electric power system by assessing trends in U.S. electricity reliability based on the information reported by the electric utilities on power interruptions experienced by their customers. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. We find that reported annual average duration and annual average frequency of power interruptions have been increasing over time at a rate of approximately 2% annually. We find that, independent of this trend, installation or upgrade of an automated outage management system is correlated with an increase in the reported annual average duration of power interruptions. We also find that reliance on IEEE Standard 1366-2003 is correlated with higher reported reliability compared to reported reliability not using the IEEE standard. However, we caution that we cannot attribute reliance on the IEEE standard as having caused or led to higher reported reliability because we could not separate the effect of reliance on the IEEE standard from other utility-specific factors that may be correlated with reliance on the IEEE standard. - Highlights: ► We assess trends in electricity reliability based on the information reported by the electric utilities. ► We use rigorous statistical techniques to account for utility-specific differences. ► We find modest declines in reliability analyzing interruption duration and frequency experienced by utility customers. ► Installation or upgrade of an OMS is correlated to an increase in reported duration of power interruptions. ► We find reliance in IEEE Standard 1366 is correlated with higher reported reliability.
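
    The roughly 2%-per-year trend in interruption duration can be recovered with a standard log-linear fit. The sketch below uses synthetic annual SAIDI-style values (hypothetical numbers, not the study's utility data):

    ```python
    import numpy as np

    # hypothetical annual interruption-duration values (minutes per customer per year)
    years = np.arange(2000, 2010)
    saidi = 120 * 1.02 ** (years - 2000)      # built to grow 2% per year

    slope = np.polyfit(years - 2000, np.log(saidi), 1)[0]
    annual_growth = np.expm1(slope)           # back-transform log-linear slope to a growth rate
    ```

    The study's utility-specific effects (OMS installation, IEEE 1366 reporting) would enter such a regression as additional covariates.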

  18. A statistical analysis based recommender model for heart disease patients.

    Science.gov (United States)

    Mustaqeem, Anam; Anwar, Syed Muhammad; Khan, Abdul Rashid; Majid, Muhammad

    2017-12-01

    An intelligent information-technology-based system could have a positive impact on the lifestyle of patients suffering from chronic diseases by providing useful health recommendations. In this paper, we have proposed a hybrid model that provides disease prediction and medical recommendations to cardiac patients. The first part aims at implementing a prediction model that can identify the disease of a patient and classify it into one of four output classes, i.e., non-cardiac chest pain, silent ischemia, angina, and myocardial infarction. Following the disease prediction, the second part of the model provides general medical recommendations to patients. The recommendations are generated by assessing the severity of clinical features of patients, estimating the risk associated with clinical features and disease, and calculating the probability of occurrence of disease. The purpose of this model is to build an intelligent and adaptive recommender system for heart disease patients. The experiments for the proposed recommender system are conducted on a clinical data set collected and labelled in consultation with medical experts from a known hospital. The performance of the proposed prediction model is evaluated using accuracy and kappa statistics as evaluation measures. The medical recommendations are generated based on information collected from a knowledge base created with the help of physicians. The results of the recommendation model are evaluated using a confusion matrix and give an accuracy of 97.8%. The proposed system exhibits good prediction and recommendation accuracies and promises to be a useful contribution in the field of e-health and medical informatics. Copyright © 2017 Elsevier B.V. All rights reserved.
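
    Accuracy and Cohen's kappa, the two evaluation measures named above, can both be read off a confusion matrix; a minimal sketch with hypothetical counts (not the study's results):

    ```python
    import numpy as np

    def cohens_kappa(cm):
        """Cohen's kappa from a square confusion matrix (rows: true, cols: predicted)."""
        cm = np.asarray(cm, float)
        n = cm.sum()
        po = np.trace(cm) / n                          # observed agreement (accuracy)
        pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2    # agreement expected by chance
        return (po - pe) / (1 - pe)

    k = cohens_kappa([[40, 10],
                      [20, 30]])   # hypothetical 2-class confusion matrix
    ```

    Kappa discounts the agreement a classifier would achieve by chance alone, which is why it is reported alongside raw accuracy.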

  19. Characterization of Nuclear Fuel using Multivariate Statistical Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Robel, M; Robel, M; Robel, M; Kristo, M J; Kristo, M J

    2007-11-27

    Various combinations of reactor type and fuel composition have been characterized using principal components analysis (PCA) of the concentrations of 9 U and Pu isotopes in the fuel as a function of burnup. The use of PCA allows the reduction of the 9-dimensional data (isotopic concentrations) into a 3-dimensional approximation, giving a visual representation of the changes in nuclear fuel composition with burnup. Real-world variation in the concentrations of {sup 234}U and {sup 236}U in the fresh (unirradiated) fuel was accounted for. The effects of reprocessing were also simulated. The results suggest that, even after reprocessing, Pu isotopes can be used to determine both the type of reactor and the initial fuel composition with good discrimination. Finally, partial least squares discriminant analysis (PLSDA) was investigated as a substitute for PCA. Our results suggest that PLSDA is a better tool for this application where separation between known classes is most important.
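The dimension reduction described, from 9 isotopic concentrations to a 3-dimensional approximation, corresponds to keeping the first three principal components. A generic NumPy sketch, with random synthetic data standing in for the isotopic measurements:

```python
import numpy as np

# Hypothetical 9-dimensional concentration vectors (rows = fuel samples);
# real data would be U/Pu isotope fractions as a function of burnup.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 9))

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores3 = Xc @ Vt[:3].T            # 3-D approximation for visualisation
explained = (s ** 2) / (s ** 2).sum()  # fraction of variance per component
```

Plotting the three columns of `scores3` gives the kind of visual representation of composition changes with burnup that the abstract describes.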

  20. Statistical Analysis of the Grid Connected Photovoltaic System Performance Ratio

    Directory of Open Access Journals (Sweden)

    Javier Vilariño-García

    2017-05-01

    Full Text Available A methodology based on analysis of variance and Tukey's method is applied to a data set of solar radiation in the plane of the photovoltaic modules and the corresponding values of power delivered to the grid, recorded at 10-minute intervals from sunrise to sunset during the 52 weeks of the year 2013. These data were obtained through a monitoring system located in a photovoltaic plant of 10 MW rated power in Cordoba, consisting of 16 transformers and 98 inverters. The mean performance ratios of the processing centers are compared: an analysis of variance detects whether at least one center differs significantly from the rest at a 5% significance level, and Tukey's test then identifies which center or centers fall below average because of a fault to be detected and corrected.
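The first step of such a methodology, testing whether at least one center's mean performance ratio differs from the rest, is a one-way ANOVA; Tukey's test then localizes the difference. A pure-Python sketch of the F statistic, with hypothetical performance-ratio data (not the plant's measurements):

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA across k groups of observations
    (e.g. performance ratios recorded at k processing centers)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Hypothetical performance ratios for three centers; the third runs low (faulty)
groups = [[0.80, 0.82, 0.81], [0.79, 0.81, 0.80], [0.70, 0.72, 0.71]]
F, df_b, df_w = one_way_anova_F(groups)
# F far exceeds the 5% critical value F(2, 6) ≈ 5.14, so at least one mean
# differs; Tukey's HSD would then single out the underperforming center.
```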

  1. Practical guidance for statistical analysis of operational event data

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1995-10-01

    This report presents ways to avoid mistakes that are sometimes made in analysis of operational event data. It then gives guidance on what to do when a model is rejected, a list of standard types of models to consider, and principles for choosing one model over another. For estimating reliability, it gives advice on which failure modes to model, and moment formulas for combinations of failure modes. The issues are illustrated with many examples and case studies

  2. Statistical analysis of lead isotope data in provenance studies

    International Nuclear Information System (INIS)

    Reedy, C.L.

    1991-01-01

    This paper reports on tracing artifacts to ore sources, which is different from assigning ore samples to time epochs. Until now, archaeometrists working with lead isotopes have used the ratio methods developed by geochronologists. For provenance studies, however, the use of composition data (the fraction of each of the four isotopes) leads to fewer arbitrary choices, two standard types of plots (labelled ternary and canonical variable), and a consistent method of discriminant analysis for separating groups of samples from different sources.

  3. Practical guidance for statistical analysis of operational event data

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.

    1995-10-01

    This report presents ways to avoid mistakes that are sometimes made in analysis of operational event data. It then gives guidance on what to do when a model is rejected, a list of standard types of models to consider, and principles for choosing one model over another. For estimating reliability, it gives advice on which failure modes to model, and moment formulas for combinations of failure modes. The issues are illustrated with many examples and case studies.

  4. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said...... to results for chlorine in freshwater from BCR certification analyses by highly competent analytical laboratories in the EC. Titration showed systematic errors of several percent, while radiochemical neutron activation analysis produced results without detectable bias....

  5. Prescribing Patterns and Inappropriate Use of Medications in Elderly ...

    African Journals Online (AJOL)

    Prescribing Patterns and Inappropriate Use of Medications in Elderly Outpatients in a Tertiary Hospital in Nigeria. ... Tropical Journal of Pharmaceutical Research ... Purpose: To determine the prescribing patterns and occurrence of potentially inappropriate medications (PIM) among elderly outpatients visiting a tertiary ...

  6. Potentially inappropriate prescribing in elderly population: A study in medicine out-patient department

    Directory of Open Access Journals (Sweden)

    Ajit Kumar Sah

    2017-03-01

    Full Text Available Background & Objectives: Older individuals often suffer from multiple systemic diseases and are particularly vulnerable to potentially inappropriate medicine prescribing. Inappropriate medication can cause serious medical problems for the elderly. The study was conducted with the objective of determining the prevalence of potentially inappropriate medicine (PIM) prescribing in older Nepalese patients in a medicine outpatient department. Materials & Methods: A prospective observational analysis of drugs prescribed in the medicine out-patient department (OPD) of a tertiary hospital of central Nepal was conducted during November 2012 to October 2013 among 869 older adults aged 65 years and above. The use of potentially inappropriate medications (PIM) in elderly patients was analysed using the Beers criteria updated to 2013. Results: In the 869 patients included, the average number of drugs prescribed per prescription was 5.56. The most commonly used drugs were atenolol (24.3%), amlodipine (23.16%), paracetamol (17.6%), salbutamol (15.72%) and vitamin B complex (13.26%). The total number of medications prescribed was 4833. At least one instance of PIM was experienced by approximately 26.3% of patients when evaluated using the Beers criteria. Conclusion: Potentially inappropriate medications are highly prevalent among older patients attending the medical OPD and are associated with the number of medications prescribed. Further research is warranted to study the impact of PIMs on health-related outcomes in these elderly patients.
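The abstract's summary figures can be cross-checked with simple arithmetic:

```python
n_patients = 869
n_drugs_total = 4833

# Average number of drugs prescribed per prescription, as reported
drugs_per_rx = n_drugs_total / n_patients   # close to the reported 5.56

# Approximate head-count behind the "26.3% with at least one PIM" figure
n_with_pim = round(0.263 * n_patients)      # roughly 229 patients
```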

  7. WebBUGS: Conducting Bayesian Statistical Analysis Online

    Directory of Open Access Journals (Sweden)

    Zhiyong Zhang

    2014-11-01

    Full Text Available A web interface, named WebBUGS, is developed to conduct Bayesian analysis online over the Internet through OpenBUGS and R. WebBUGS can be used with the minimum requirement of a web browser both remotely and locally. WebBUGS has many collaborative features such as email notification and sharing. WebBUGS also eases the use of OpenBUGS by providing built-in model templates, data management module, and other useful modules. In this paper, the use of WebBUGS is illustrated and discussed.

  8. Statistical methods for the forensic analysis of striated tool marks

    Energy Technology Data Exchange (ETDEWEB)

    Hoeksema, Amy Beth [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    In forensics, fingerprints can be used to uniquely identify suspects in a crime. Similarly, a tool mark left at a crime scene can be used to identify the tool that was used. However, the current practice of identifying matching tool marks involves visual inspection of marks by forensic experts which can be a very subjective process. As a result, declared matches are often successfully challenged in court, so law enforcement agencies are particularly interested in encouraging research in more objective approaches. Our analysis is based on comparisons of profilometry data, essentially depth contours of a tool mark surface taken along a linear path. In current practice, for stronger support of a match or non-match, multiple marks are made in the lab under the same conditions by the suspect tool. We propose the use of a likelihood ratio test to analyze the difference between a sample of comparisons of lab tool marks to a field tool mark, against a sample of comparisons of two lab tool marks. Chumbley et al. (2010) point out that the angle of incidence between the tool and the marked surface can have a substantial impact on the tool mark and on the effectiveness of both manual and algorithmic matching procedures. To better address this problem, we describe how the analysis can be enhanced to model the effect of tool angle and allow for angle estimation for a tool mark left at a crime scene. With sufficient development, such methods may lead to more defensible forensic analyses.

  9. Some statistical design and analysis aspects for NAEG studies

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Eberhardt, L.L.

    1975-01-01

    Some of the design and analysis aspects of the NAEG studies at safety-shot sites are reviewed in conjunction with discussions of possible new approaches. The use of double sampling to estimate inventories is suggested as a means of obtaining data for estimating the geographical distribution of plutonium using computer contouring programs. The lack of estimates of error for plutonium contours is noted and a regression approach discussed for obtaining such estimates. The kinds of new data that are now available for analysis from A site of Area 11 and the four Tonopah Test Range (TTR) sites are outlined, and the need for a closer look at methods for analyzing ratio-type data is pointed out. The necessity for thorough planning of environmental sampling programs is emphasized in order to obtain the maximum amount of information for fixed cost. Some general planning aspects of new studies at nuclear sites and experimental clean-up plots are discussed, as is the planning of interlaboratory comparisons. (U.S.)

  10. Statistical analysis of the direct count method for enumerating bacteria.

    Science.gov (United States)

    Kirchman, D; Sigda, J; Kapuscinski, R; Mitchell, R

    1982-08-01

    The direct count method for enumerating bacteria in natural environments is widely used. This paper analyzes the sources of variation contributed by the various levels of the method: subsamples, filters, and microscope fields. Based on a nested analysis of variance, we show that most of the variance (about 80%) is caused by the fields and that the filters contributed nearly all of the remaining variance. The replication at each of the levels determines the total cost and error of a measurement. We compared several sampling schemes, including an optimal strategy which gives the lowest possible variance for a given cost. We recommend that preparing one filter from one subsample is adequate only if the samples are closely spaced in time or distance; otherwise, one filter should be prepared from two or preferably three subsamples. This sampling scheme emphasizes the importance of the highest level of replication. Our analysis shows that the accuracy of the direct count method can be substantially improved (by 20 to 50%) without a large increase in cost when the proper degree of replication at each level is performed.
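The trade-off the authors describe, replication at each nested level versus total error, follows from the usual variance decomposition for a nested design: the variance of the grand mean is each level's variance component divided by the number of replicates beneath that level. A sketch with hypothetical variance components (chosen so that fields dominate, in the spirit of the paper, but not the paper's estimates):

```python
def var_of_mean(n_sub, n_filt, n_field, v_sub, v_filt, v_field):
    """Variance of the grand mean count under a nested
    subsample/filter/field sampling scheme; v_* are the variance
    components at each level."""
    return (v_sub / n_sub
            + v_filt / (n_sub * n_filt)
            + v_field / (n_sub * n_filt * n_field))

# Hypothetical components with most of the variance coming from fields
v_sub, v_filt, v_field = 0.02, 0.18, 0.80

# One filter from one subsample vs. one filter from each of three subsamples
one_sub = var_of_mean(1, 1, 20, v_sub, v_filt, v_field)
three_sub = var_of_mean(3, 1, 20, v_sub, v_filt, v_field)
```

Replicating at the highest level (subsamples) divides every component at once, which is why the recommendation emphasizes it.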

  11. Analysis of compressive fracture in rock using statistical techniques

    Energy Technology Data Exchange (ETDEWEB)

    Blair, S.C.

    1994-12-01

    Fracture of rock in compression is analyzed using a field-theory model, and the processes of crack coalescence and fracture formation and the effect of grain-scale heterogeneities on macroscopic behavior of rock are studied. The model is based on observations of fracture in laboratory compression tests, and incorporates assumptions developed using fracture mechanics analysis of rock fracture. The model represents grains as discrete sites, and uses superposition of continuum and crack-interaction stresses to create cracks at these sites. The sites are also used to introduce local heterogeneity. Clusters of cracked sites can be analyzed using percolation theory. Stress-strain curves for simulated uniaxial tests were analyzed by studying the location of cracked sites, and partitioning of strain energy for selected intervals. Results show that the model implicitly predicts both development of shear-type fracture surfaces and a strength-vs-size relation that are similar to those observed for real rocks. Results of a parameter-sensitivity analysis indicate that heterogeneity in the local stresses, attributed to the shape and loading of individual grains, has a first-order effect on strength, and that increasing local stress heterogeneity lowers compressive strength following an inverse power law. Peak strength decreased with increasing lattice size and decreasing mean site strength, and was independent of site-strength distribution. A model for rock fracture based on a nearest-neighbor algorithm for stress redistribution is also presented and used to simulate laboratory compression tests, with promising results.

  12. Shape Analysis of HII Regions - I. Statistical Clustering

    Science.gov (United States)

    Campbell-White, Justyn; Froebrich, Dirk; Kume, Alfred

    2018-04-01

    We present here our shape analysis method for a sample of 76 Galactic HII regions from MAGPIS 1.4 GHz data. The main goal is to determine whether the physical properties and initial conditions of massive star cluster formation are linked to the shape of the regions. We outline a systematic procedure for extracting region shapes and perform hierarchical clustering on the shape data. We identified six groups that categorise HII regions by common morphologies. We confirmed the validity of these groupings by bootstrap re-sampling and the ordination technique of multidimensional scaling. We then investigated associations between physical parameters and the assigned groups. Location is mostly independent of group, with a small preference for regions of similar longitudes to share common morphologies. The shapes are homogeneously distributed across Galactocentric distance and latitude. One group contains regions that are all younger than 0.5 Myr and ionised by low- to intermediate-mass sources. Those in another group are all driven by intermediate- to high-mass sources. One group was distinctly separated from the other five and contained regions at the surface brightness detection limit for the survey. We find that our hierarchical procedure is most sensitive to the spatial sampling resolution used, which is determined for each region from its distance. We discuss how these errors can be further quantified and reduced in future work by utilising synthetic observations from numerical simulations of HII regions. We also outline how this shape analysis has further applications to other diffuse astronomical objects.
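Hierarchical (agglomerative) clustering of shape descriptors, of the general kind used to form the six morphology groups, can be sketched in a few lines. The single-linkage rule and the 2-D descriptor points below are illustrative stand-ins, not the paper's actual procedure or data:

```python
def single_linkage(points, n_clusters):
    """Agglomerative clustering with single linkage: repeatedly merge the
    two clusters whose closest members are nearest to each other."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]   # merge the closest pair of clusters
        del clusters[j]
    return clusters

# Two well-separated hypothetical groups of 2-D shape descriptors
pts = [(0, 0), (0.1, 0.2), (0.2, 0.1), (5, 5), (5.1, 5.2)]
groups = single_linkage(pts, 2)
```

Cutting the merge tree at different heights yields different numbers of groups, which is where bootstrap re-sampling helps validate a chosen cut.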

  13. Limitations of Using Microsoft Excel Version 2016 (MS Excel 2016) for Statistical Analysis for Medical Research.

    Science.gov (United States)

    Tanavalee, Chotetawan; Luksanapruksa, Panya; Singhatanadgige, Weerasak

    2016-06-01

    Microsoft Excel (MS Excel) is a commonly used program for data collection and statistical analysis in biomedical research. However, this program has many limitations, including fewer functions that can be used for analysis and a limited number of total cells compared with dedicated statistical programs. MS Excel cannot complete analyses with blank cells, and cells must be selected manually for analysis. In addition, it requires multiple steps of data transformation and formulas to plot survival analysis graphs, among others. The Megastat add-in program, which will soon be supported by MS Excel 2016, would eliminate some limitations of using statistical formulas within MS Excel.

  14. The Effect of Prior Experience with Computers, Statistical Self-Efficacy, and Computer Anxiety on Students' Achievement in an Introductory Statistics Course: A Partial Least Squares Path Analysis

    Science.gov (United States)

    Abd-El-Fattah, Sabry M.

    2005-01-01

    A Partial Least Squares Path Analysis technique was used to test the effect of students' prior experience with computers, statistical self-efficacy, and computer anxiety on their achievement in an introductory statistics course. Computer Anxiety Rating Scale and Current Statistics Self-Efficacy Scale were administered to a sample of 64 first-year…

  15. A force profile analysis comparison between functional data analysis, statistical parametric mapping and statistical non-parametric mapping in on-water single sculling.

    Science.gov (United States)

    Warmenhoven, John; Harrison, Andrew; Robinson, Mark A; Vanrenterghem, Jos; Bargary, Norma; Smith, Richard; Cobley, Stephen; Draper, Conny; Donnelly, Cyril; Pataky, Todd

    2018-03-21

    To examine whether the Functional Data Analysis (FDA), Statistical Parametric Mapping (SPM) and Statistical non-Parametric Mapping (SnPM) hypothesis testing techniques differ in their ability to draw inferences in the context of a single, simple experimental design. The sample data used are cross-sectional (a two-sample gender comparison) and evaluation of differences between statistical techniques used a combination of descriptive and qualitative assessments. FDA, SPM and SnPM t-tests were applied to sample data of twenty highly skilled male and female rowers, rowing at 32 strokes per minute in a single scull boat. Statistical differences for gender were assessed by applying two t-tests (one for each side of the boat). The t-statistic values were identical for all three methods (with the FDA t-statistic presented as an absolute measure). The critical t-statistics (t_crit) were very similar between the techniques, with SPM providing a marginally higher t_crit than the FDA and SnPM values (which were identical). All techniques were successful in identifying consistent sections of the force waveform where male and female rowers were shown to differ significantly (p < 0.05). The choice among the techniques may depend on the parametric assumption of SPM, as well as contextual factors related to the type of waveform data to be analysed and the experimental research question of interest. Copyright © 2018. Published by Elsevier Ltd.
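All three techniques build on a pointwise two-sample t statistic computed at every node of the waveform (with FDA working with its absolute value). A minimal sketch of that shared quantity, using tiny hypothetical force curves (rows are rowers, columns are time points), not the study's data:

```python
from statistics import mean, variance

def pointwise_t(group_a, group_b):
    """Two-sample pooled-variance t statistic at every time node of a
    waveform: the pointwise quantity that SPM/SnPM threshold."""
    t_curve = []
    for samples_a, samples_b in zip(zip(*group_a), zip(*group_b)):
        na, nb = len(samples_a), len(samples_b)
        pooled_var = ((na - 1) * variance(samples_a)
                      + (nb - 1) * variance(samples_b)) / (na + nb - 2)
        se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
        t_curve.append((mean(samples_a) - mean(samples_b)) / se)
    return t_curve

# Hypothetical force curves: 3 rowers per group, 3 time points each;
# the groups coincide at the first node and diverge afterwards
group_a = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [0.9, 1.9, 2.9]]
group_b = [[1.0, 1.0, 1.0], [1.1, 1.1, 1.1], [0.9, 0.9, 0.9]]
t_vals = pointwise_t(group_a, group_b)
```

The techniques differ mainly in how they set the critical threshold for this curve (random field theory for SPM, permutation for SnPM, functional methods for FDA), not in the curve itself.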

  16. Statistical analysis of failure time in stress corrosion cracking of fuel tube in light water reactor

    International Nuclear Information System (INIS)

    Hirao, Keiichi; Yamane, Toshimi; Minamino, Yoritoshi

    1991-01-01

    This report shows how the stress corrosion cracking life of fuel cladding tubes is evaluated by applying statistical techniques to results obtained by several testing methods. The statistical distribution of the limiting values of constant-load stress corrosion cracking life, the statistical analysis based on a probabilistic interpretation of constant-load stress corrosion cracking life, and the statistical analysis of stress corrosion cracking life obtained by the slow strain rate test (SSRT) method are described. (K.I.)

  17. Statistical Power Analysis with Missing Data A Structural Equation Modeling Approach

    CERN Document Server

    Davey, Adam

    2009-01-01

    Statistical power analysis has revolutionized the ways in which we conduct and evaluate research. Similar developments in the statistical analysis of incomplete (missing) data are gaining more widespread applications. This volume brings statistical power and incomplete data together under a common framework, in a way that is readily accessible to those with only an introductory familiarity with structural equation modeling. It answers many practical questions, such as: how missing data affects the statistical power in a study; how much power is likely with different amounts and types

  18. The art of data analysis how to answer almost any question using basic statistics

    CERN Document Server

    Jarman, Kristin H

    2013-01-01

    A friendly and accessible approach to applying statistics in the real world. With an emphasis on critical thinking, The Art of Data Analysis: How to Answer Almost Any Question Using Basic Statistics presents fun and unique examples, guides readers through the entire data collection and analysis process, and introduces basic statistical concepts along the way. Leaving proofs and complicated mathematics behind, the author portrays the more engaging side of statistics and emphasizes its role as a problem-solving tool. In addition, light-hearted case studies

  19. Statistical Analysis Methods for the fMRI Data

    Directory of Open Access Journals (Sweden)

    Huseyin Boyaci

    2011-08-01

    Full Text Available Functional magnetic resonance imaging (fMRI) is a safe and non-invasive way to assess brain functions by using signal changes associated with brain activity. The technique has become a ubiquitous tool in basic, clinical and cognitive neuroscience. This method can measure the small metabolic changes that occur in active parts of the brain. We process fMRI data to find the parts of the brain that are involved in a mechanism, or to determine the changes in brain activity caused by a brain lesion. In this study we give an overview of the methods that are used for the analysis of fMRI data.

  20. Statistical Analysis of Temple Orientation in Ancient India

    Science.gov (United States)

    Aller, Alba; Belmonte, Juan Antonio

    2015-05-01

    The great diversity of religions that have been followed in India for over 3000 years is the reason why there are hundreds of temples built to worship dozens of different divinities. In this work, more than one hundred temples geographically distributed over the whole Indian land have been analyzed, obtaining remarkable results. For this purpose, a deep analysis of the main deities who are worshipped in each of them, as well as of the different dynasties (or cultures) who built them has also been conducted. As a result, we have found that the main axes of the temples dedicated to Shiva seem to be oriented to the east cardinal point while those temples dedicated to Vishnu would be oriented to both the east and west cardinal points. To explain these cardinal directions we propose to look back to the origins of Hinduism. Besides these cardinal orientations, clear solar orientations have also been found, especially at the equinoctial declination.

  1. Statistical Analysis of Magnetic Abrasive Finishing (MAF) On Surface Roughness

    Science.gov (United States)

    Givi, Mehrdad; Tehrani, Alireza Fadaei; Mohammadi, Aminollah

    2010-06-01

    Magnetic assisted finishing is one of the nontraditional polishing methods that has recently attracted researchers. This paper investigates the effects of parameters such as the rotational speed of the permanent magnetic pole, the working gap between the permanent pole and the workpiece, the number of cycles, and the weight of the abrasive particles on the finishing of an aluminum surface plate. A three-level full factorial design of experiments (DOE) was used to study the selected factors. Analysis of Variance (ANOVA) was used to determine significant factors and to obtain an equation based on data regression. Experimental results indicate that for a change in surface roughness ΔRa, the number of cycles and the working gap are the most significant parameters, followed by rotational speed and then the weight of powder.

  2. STATISTICAL ANALYSIS OF ACOUSTIC WAVE PARAMETERS NEAR SOLAR ACTIVE REGIONS

    International Nuclear Information System (INIS)

    Rabello-Soares, M. Cristina; Bogart, Richard S.; Scherrer, Philip H.

    2016-01-01

    In order to quantify the influence of magnetic fields on acoustic mode parameters and flows in and around active regions, we analyze the differences in the parameters in magnetically quiet regions near an active region (which we call “nearby regions”), compared with those of quiet regions at the same disk locations for which there are no neighboring active regions. We also compare the mode parameters in active regions with those in comparably located quiet regions. Our analysis is based on ring-diagram analysis of all active regions observed by the Helioseismic and Magnetic Imager (HMI) during almost five years. We find that the frequency at which the mode amplitude changes from attenuation to amplification in the quiet nearby regions is around 4.2 mHz, in contrast to the active regions, for which it is about 5.1 mHz. This amplitude enhancement (the “acoustic halo effect”) is as large as that observed in the active regions, and has a very weak dependence on the wave propagation direction. The mode energy difference in nearby regions also changes from a deficit to an excess at around 4.2 mHz, but averages to zero over all modes. The frequency difference in nearby regions increases with increasing frequency until a point at which the frequency shifts turn over sharply, as in active regions. However, this turnover occurs around 4.9 mHz, which is significantly below the acoustic cutoff frequency. Inverting the horizontal flow parameters in the direction of the neighboring active regions, we find flows that are consistent with a model of the thermal energy flow being blocked directly below the active region.

  3. The Effect of ICD Programming on Inappropriate and Appropriate ICD Therapies in Ischemic and Nonischemic Cardiomyopathy

    DEFF Research Database (Denmark)

    Sedláček, Kamil; Ruwald, Anne-Christine; Kutyifa, Valentina

    2015-01-01

    INTRODUCTION: The MADIT-RIT trial demonstrated reduction of inappropriate and appropriate ICD therapies and mortality by high-rate cut-off and 60-second-delayed VT therapy ICD programming in patients with a primary prophylactic ICD indication. The aim of this analysis was to study effects of MADIT-RIT ICD programming in patients with ischemic and nonischemic cardiomyopathy. METHODS AND RESULTS: First and total occurrences of both inappropriate and appropriate ICD therapies were analyzed by multivariate Cox models in 791 (53%) patients with ischemic and 707 (47%) patients with nonischemic cardiomyopathy. High-rate cut-off (arm B) and delayed VT therapy ICD programming (arm C) compared with conventional (arm A) ICD programming were associated with a significant risk reduction of first inappropriate and appropriate ICD therapy in patients with ischemic and nonischemic cardiomyopathy (HR range 0.11-0.34, P

  4. Inappropriate shock for myopotential over-sensing in a patient with subcutaneous ICD

    Directory of Open Access Journals (Sweden)

    Alessandro Corzani

    2015-01-01

    Full Text Available Inappropriate ICD shocks are common adverse events; they are mainly due to supraventricular arrhythmias and secondarily related to noise, undersensing, oversensing, or device malfunction. We present a case of inappropriate device therapy due to myopotential oversensing in a patient with a subcutaneous ICD (s-ICD). A 58-year-old male with an s-ICD showed, during device interrogation, a previous episode of suspected sustained ventricular tachycardia at 210 bpm, which was effectively treated with an ICD shock. The patient experienced the electrical shock while holding a big gas cylinder in his arms. The EGM analysis revealed many irregular ventricular signals of low amplitude lasting for 24 s and interrupted by the shock. The device showed no malfunctions. This is the first case report of an inappropriate s-ICD shock related to myopotential oversensing. By recording the intracardiac EGM, we demonstrated that the noise was created by the activity of the pectoral muscles.

  5. Prevalence of inappropriate medication using Beers criteria in Japanese long-term care facilities

    DEFF Research Database (Denmark)

    Niwata, Satoko; Yamada, Yukari; Ikegami, Naoki

    2006-01-01

    Background The prevalence and risk factors of potentially inappropriate medication use among the elderly patients have been studied in various countries, but because of the difficulty of obtaining data on patient characteristics and medications they have not been studied in Japan. Methods We...... dependent on the disease or condition was found in patients with chronic constipation. Multiple logistic regression analysis revealed psychotropic drug use (OR = 1.511), medication cost of per day (OR = 1.173), number of medications (OR = 1.140), and age (OR = 0.981) as factors related to inappropriate...

  6. Application of a statistical thermal design procedure to evaluate the PWR DNBR safety analysis limits

    International Nuclear Information System (INIS)

    Robeyns, J.; Parmentier, F.; Peeters, G.

    2001-01-01

    In the framework of safety analysis for the Belgian nuclear power plants and for the reload compatibility studies, Tractebel Energy Engineering (TEE) has developed, to define a 95/95 DNBR criterion, a statistical thermal design method based on the analytical full statistical approach: the Statistical Thermal Design Procedure (STDP). In that methodology, each DNBR value in the core assemblies is calculated with an adapted CHF (Critical Heat Flux) correlation implemented in the sub-channel code Cobra for core thermal hydraulic analysis. The uncertainties of the correlation are represented by the statistical parameters calculated from an experimental database. The main objective of a sub-channel analysis is to prove that in all class 1 and class 2 situations, the minimum DNBR (Departure from Nucleate Boiling Ratio) remains higher than the Safety Analysis Limit (SAL). The SAL value is calculated from the Statistical Design Limit (SDL) value adjusted with some penalties and deterministic factors. The search of a realistic value for the SDL is the objective of the statistical thermal design methods. In this report, we apply a full statistical approach to define the DNBR criterion or SDL (Statistical Design Limit) with the strict observance of the design criteria defined in the Standard Review Plan. The same statistical approach is used to define the expected number of rods experiencing DNB. (author)

  7. Plasma Heating in Solar Microflares: Statistics and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kirichenko, A. S.; Bogachev, S. A. [Lebedev Physical Institute of the Russian Academy of Sciences, Moscow, 119991 (Russian Federation)

    2017-05-01

    In this paper we present the results of an analysis of 481 weak solar flares, from A0.01 class flares to the B GOES class, that were observed during the period of extremely low solar activity from 2009 April to July. For all flares we measured the temperature of the plasma in the isothermal and two-temperature approximations and tried to fit its relationship with the X-ray class using exponential and power-law functions. We found that the whole temperature distribution in the range from A0.01 to X-class cannot be fit by one exponential function. The fitting for weak flares below A1.0 is significantly steeper than that for medium and large flares. The power-law approximation seems to be more reliable: the corresponding functions were found to be in good agreement with experimental data both for microflares and for normal flares. Our study predicts that evidence of plasma heating can be found in flares starting from the A0.0002 X-ray class. Weaker events presumably cannot heat the surrounding plasma. We also estimated emission measures for all flares studied and the thermal energy for 113 events.
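A power-law relation between flare temperature and X-ray class, of the form the authors found more reliable than an exponential, is commonly fitted by linear regression in log-log space. A generic sketch (the data below are synthetic, not the flare measurements):

```python
from math import exp, log

def fit_power_law(x, y):
    """Least-squares fit of y = a * x**b via linear regression of
    log(y) on log(x)."""
    lx = [log(v) for v in x]
    ly = [log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = exp(my - b * mx)
    return a, b

# Synthetic data lying exactly on y = 3 * x**0.5; the fit recovers a and b
x = [1.0, 2.0, 4.0, 8.0]
y = [3.0 * v ** 0.5 for v in x]
a, b = fit_power_law(x, y)
```

The same log-log fit applied to two sub-ranges would reveal the kind of slope change the authors report between weak and medium/large flares.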

  8. Statistical analysis for validating ACO-KNN algorithm as feature selection in sentiment analysis

    Science.gov (United States)

    Ahmad, Siti Rohaidah; Yusop, Nurhafizah Moziyana Mohd; Bakar, Azuraliza Abu; Yaakub, Mohd Ridzwan

    2017-10-01

    This research paper proposes a hybrid of the ant colony optimization (ACO) and k-nearest neighbor (KNN) algorithms as a feature selection method for selecting relevant features from customer review datasets. Information gain (IG), the genetic algorithm (GA), and rough set attribute reduction (RSAR) were used as baseline algorithms in a performance comparison with the proposed algorithm. This paper also discusses the significance test, which was used to evaluate the performance differences between the ACO-KNN, IG-GA, and IG-RSAR algorithms. This study evaluated the performance of the ACO-KNN algorithm using precision, recall, and F-score, which were validated using parametric statistical significance tests. The evaluation process has statistically shown that the ACO-KNN algorithm performs significantly better than the baseline algorithms. In addition, the experimental results have shown that ACO-KNN can be used as a feature selection technique in sentiment analysis to obtain a high-quality, optimal feature subset that can represent the actual data in customer review data.
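
A parametric significance test of the kind described is often a paired t-test over cross-validation folds, since both methods are evaluated on the same folds. The per-fold F-scores below are invented for illustration and are not the paper's results:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical per-fold F-scores for two feature-selection methods
# (illustrative numbers only).
f_aco_knn = np.array([0.86, 0.88, 0.85, 0.89, 0.87, 0.90, 0.88, 0.86, 0.89, 0.87])
f_ig_ga   = np.array([0.81, 0.84, 0.79, 0.85, 0.83, 0.84, 0.82, 0.80, 0.85, 0.83])

# Paired (dependent-samples) t-test: the two methods are scored on the
# same folds, so the per-fold differences are the quantity of interest.
t_stat, p_value = ttest_rel(f_aco_knn, f_ig_ga)
```

A small p-value here supports the claim that the difference in F-score is statistically significant rather than fold-to-fold noise.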

  9. Introduction to applied statistical signal analysis guide to biomedical and electrical engineering applications

    CERN Document Server

    Shiavi, Richard

    2007-01-01

    Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this background, the reader will move quickly through the practical introduction and on to signal analysis techniques commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical

  10. Application of Multivariable Statistical Techniques in Plant-wide WWTP Control Strategies Analysis

    DEFF Research Database (Denmark)

    Flores Alsina, Xavier; Comas, J.; Rodríguez-Roda, I.

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant...

  11. Modeling gallic acid production rate by empirical and statistical analysis

    Directory of Open Access Journals (Sweden)

    Bratati Kar

    2000-01-01

    Full Text Available For predicting the rate of an enzymatic reaction, empirical correlations based on experimental results obtained under various operating conditions have been developed. The models represent both the activation and the deactivation conditions of enzymatic hydrolysis, and the results have been analyzed by analysis of variance (ANOVA). Tannase activity was found to be maximal at an incubation time of 5 min, reaction temperature of 40°C, pH 4.0, initial enzyme concentration of 0.12 v/v, initial substrate concentration of 0.42 mg/ml, and ionic strength of 0.2 M; under these optimal conditions, the maximum rate of gallic acid production was 33.49 µmoles/ml/min.
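
The ANOVA step mentioned above can be illustrated with a one-way analysis of variance; the replicate rates and temperature levels below are invented for illustration, with 40°C treated as the optimum in the spirit of the abstract:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)

# Invented replicate production rates (umol/ml/min) at three reaction
# temperatures, five replicates each.
rate_30C = 25 + rng.normal(0, 1.0, 5)
rate_40C = 33 + rng.normal(0, 1.0, 5)   # assumed optimum
rate_50C = 28 + rng.normal(0, 1.0, 5)

# One-way ANOVA: do mean production rates differ across temperatures?
f_stat, p_value = f_oneway(rate_30C, rate_40C, rate_50C)
```

A large F statistic (between-group variance dominating within-group variance) indicates that temperature has a real effect on the reaction rate.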

  12. Processing and statistical analysis of soil-root images

    Science.gov (United States)

    Razavi, Bahar S.; Hoang, Duyen; Kuzyakov, Yakov

    2016-04-01

    The importance of hotspots such as the rhizosphere, the small soil volume that surrounds and is influenced by plant roots, calls for spatially explicit methods to visualize the distribution of microbial activities in this active site (Kuzyakov and Blagodatskaya, 2015). The zymography technique has previously been adapted to visualize the spatial dynamics of enzyme activities in the rhizosphere (Spohn and Kuzyakov, 2014). Following further development of soil zymography, to obtain a higher resolution of enzyme activities, we aimed to 1) quantify the images and 2) determine whether the pattern (e.g. the distribution of hotspots in space) is clumped (aggregated) or regular (dispersed). To this end, we incubated soil-filled rhizoboxes with maize (Zea mays L.) and without maize (control box) for two weeks. In situ soil zymography was applied to visualize the enzymatic activity of β-glucosidase and phosphatase at the soil-root interface. The spatial resolution of the fluorescent images was improved by direct application of a substrate-saturated membrane to the soil-root system. Spatial point pattern analysis was then applied to classify the observed patterns. Our results demonstrated that the distribution of hotspots in the rhizosphere is clumped (aggregated), whereas the control box without a plant showed a regular (dispersed) pattern. These patterns were similar in all three replicates and for both enzymes. We conclude that improved zymography is a promising in situ technique to identify, analyze, visualize and quantify the spatial distribution of enzyme activities in the rhizosphere. Moreover, such different patterns should be considered in assessments and modeling of rhizosphere extension and the corresponding effects on soil properties and functions. Key words: rhizosphere, spatial point pattern, enzyme activity, zymography, maize.
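
A simple spatial point pattern statistic for distinguishing clumped from regular patterns is the Clark-Evans nearest-neighbour index; the abstract does not specify which test was used, so this is only an illustrative sketch (edge effects ignored):

```python
import numpy as np

def clark_evans_index(points, area):
    """Clark-Evans aggregation index R for a 2D point pattern.
    R < 1 indicates a clumped pattern, R ~ 1 random, R > 1 dispersed.
    (Edge corrections are omitted in this simple sketch.)"""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()        # mean nearest-neighbour distance
    expected = 0.5 / np.sqrt(n / area)     # expectation under randomness
    return observed / expected

rng = np.random.default_rng(2)
# Clumped pattern: points concentrated in tight clusters ("hotspots")
centers = rng.uniform(0, 10, (5, 2))
clumped = np.vstack([c + rng.normal(0, 0.1, (20, 2)) for c in centers])
# Dispersed pattern: points on a regular grid
xs, ys = np.meshgrid(np.arange(0.5, 10, 1.0), np.arange(0.5, 10, 1.0))
regular = np.column_stack([xs.ravel(), ys.ravel()])
```

Applied to hotspot coordinates extracted from zymograms, an index well below 1 would support the "clumped" classification reported for the rhizosphere boxes.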

  13. Statistical analysis of rockfall volume distributions: Implications for rockfall dynamics

    Science.gov (United States)

    Dussauge, Carine; Grasso, Jean-Robert; Helmstetter, Agnès

    2003-06-01

    We analyze the volume distribution of natural rockfalls in different geological settings (i.e., calcareous cliffs in the French Alps, Grenoble area, and granite Yosemite cliffs, California Sierra) and different volume ranges (i.e., regional and worldwide catalogs). Contrary to previous studies that included several types of landslides, we restrict our analysis to rockfall sources which originated on subvertical cliffs. For the three data sets, we find that the rockfall volumes follow a power law distribution with a similar exponent value, within error bars. This power law distribution was also proposed for rockfall volumes that occurred along road cuts. All these results argue for a recurrent power law distribution of rockfall volumes on subvertical cliffs, over a large range of rockfall sizes (10² to 10¹⁰ m³), regardless of the geological settings and of the preexisting geometry of fracture patterns, which are drastically different in the three studied areas. The power law distribution of rockfall volumes could emerge from two types of processes. First, the observed power law distribution of rockfall volumes is similar to the one reported for both fragmentation experiments and fragmentation models. This argues for the geometry of rock mass fragment sizes possibly controlling the rockfall volumes; in this case neither cascade nor avalanche processes would influence the rockfall volume distribution. Second, without any requirement of scale-invariant quenched heterogeneity patterns, the rock mass dynamics can arise from avalanche processes driven by fluctuations of the rock mass properties, e.g., cohesion or friction angle. This model may also explain the power law distribution reported for landslides involving unconsolidated materials. We find that the exponent value of rockfall volumes on subvertical cliffs, 0.5 ± 0.2, is significantly smaller than the 1.2 ± 0.3 value reported for mixed landslide types. This change of exponents can be driven by the material strength, which
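
The exponent of a power-law volume distribution such as the one above is commonly estimated by maximum likelihood rather than by regression on binned counts. A sketch on synthetic Pareto-distributed volumes with a known exponent, in the spirit of the 0.5 value reported for subvertical cliffs:

```python
import numpy as np

def powerlaw_exponent_mle(volumes, v_min):
    """Maximum-likelihood (Hill) estimate of the exponent b of a
    cumulative power-law volume distribution N(>v) ~ v**(-b), v >= v_min."""
    v = np.asarray(volumes, dtype=float)
    v = v[v >= v_min]
    return len(v) / np.log(v / v_min).sum()

# Synthetic check: Pareto-distributed volumes with known b = 0.5.
rng = np.random.default_rng(3)
b_true, v_min = 0.5, 100.0
u = rng.uniform(size=20000)
volumes = v_min * u ** (-1.0 / b_true)   # inverse-CDF sampling
b_hat = powerlaw_exponent_mle(volumes, v_min)
```

The MLE avoids the binning and fitting-range choices that can bias regression-based exponent estimates, which matters when comparing exponents (0.5 vs 1.2) between catalogs.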

  14. Quantitative analysis and IBM SPSS statistics a guide for business and finance

    CERN Document Server

    Aljandali, Abdulkader

    2016-01-01

    This guide is for practicing statisticians and data scientists who use IBM SPSS for statistical analysis of big data in business and finance. This is the first of a two-part guide to SPSS for Windows, introducing data entry into SPSS, along with elementary statistical and graphical methods for summarizing and presenting data. Part I also covers the rudiments of hypothesis testing and business forecasting, while Part II will present multivariate statistical methods, more advanced forecasting methods, and multivariate methods. IBM SPSS Statistics offers a powerful set of statistical and information analysis systems that run on a wide variety of personal computers. The software is built around routines that have been developed, tested, and widely used for more than 20 years. As such, IBM SPSS Statistics is extensively used in industry, commerce, banking, local and national governments, and education. Just a small subset of users of the package includes the major clearing banks, the BBC, British Gas, British Airway...

  15. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation-(a) benchmark value, (b) benchmark estimate, and (c) benchmark effect-are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
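
The statistical mediation analysis evaluated in the article is commonly estimated with the product-of-coefficients method: one regression for the X-to-mediator path and one for the mediator-to-outcome path controlling for X. The sketch below uses synthetic data with a known mediated effect, not the imagery/memory data:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000

# Synthetic data with a known mediated effect: X -> M with a = 0.5,
# M -> Y with b = 0.7 (plus a small direct effect of X on Y).
x = rng.normal(size=n)                        # e.g. imagery instruction
m = 0.5 * x + rng.normal(size=n)              # mediator (e.g. imagery use)
y = 0.7 * m + 0.1 * x + rng.normal(size=n)    # outcome (e.g. recall)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(x, m)[1]                              # X -> M path
b = ols(np.column_stack([m, x]), y)[1]        # M -> Y path, controlling for X
mediated_effect = a * b                       # product-of-coefficients estimate
```

Benchmark effect validation asks whether such an estimate, computed on data with an established substantive effect, reproduces the known direction and rough magnitude of that effect.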

  16. College Student Perceptions of the (In)Appropriateness and Functions of Teacher Disclosure

    Science.gov (United States)

    Hosek, Angela M.; Presley, Rachel

    2018-01-01

    This study investigated college student perceptions of the (in)appropriateness of instructor disclosures and perceived functions of instructor disclosures. An interpretive analysis of 35 college students identified that family relationships, life experiences and background, and everyday talk and activities were forms of appropriate disclosures;…

  17. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  18. Inappropriate urinary catheter reinsertion in hospitalized older patients.

    Science.gov (United States)

    Hu, Fang-Wen; Tsai, Chuan-Hsiu; Lin, Huey-Shyan; Chen, Ching-Huey; Chang, Chia-Ming

    2017-01-01

    We investigated the incidence and rationale for inappropriate reinsertion of urinary catheters and elucidated whether reinsertion is an independent predictor of adverse outcomes. A longitudinal study was adopted. Patients aged ≥65 years with urinary catheters placed within 24 hours of hospitalization were enrolled. Data collection, including demographic variables and health conditions, was conducted within 48 hours after admission. Patients with catheters in place were followed-up every day. If the patient had catheter reinsertion, the reinsertion information was reviewed from medical records. Adverse outcomes were collected at discharge. A total of 321 patients were enrolled. Urinary catheters were reinserted in 66 patients (20.6%), with 95 reinsertions; 49.5% of catheter reinsertions were found to be inappropriate. "No evident reason for urinary catheter use" was the most common rationale for inappropriate reinsertion. Inappropriate reinsertion was found to be a significant predictor for prolonged length of hospital stay, development of catheter-associated urinary tract infections and catheter-related complications, and decline in activities of daily living. This study indicates a considerable percentage of inappropriate urinary catheter reinsertions in hospitalized older patients. Inappropriate reinsertion was significantly associated with worsening outcomes. Efforts to improve appropriateness of reinsertion and setting clinical policies for catheterization are necessary to reduce the high rate of inappropriate reinsertion. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.

  19. Compliance strategy for statistically based neutron overpower protection safety analysis methodology

    International Nuclear Information System (INIS)

    Holliday, E.; Phan, B.; Nainer, O.

    2009-01-01

    The methodology employed in the safety analysis of the slow Loss of Regulation (LOR) event in the OPG and Bruce Power CANDU reactors, referred to as Neutron Overpower Protection (NOP) analysis, is a statistically based methodology. Further enhancement to this methodology includes the use of Extreme Value Statistics (EVS) for the explicit treatment of aleatory and epistemic uncertainties, and probabilistic weighting of the initial core states. A key aspect of this enhanced NOP methodology is to demonstrate adherence, or compliance, with the analysis basis. This paper outlines a compliance strategy capable of accounting for the statistical nature of the enhanced NOP methodology. (author)
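
The Extreme Value Statistics mentioned above typically involve fitting a generalized extreme value (GEV) distribution to block maxima. As a minimal illustrative sketch (not the NOP methodology itself), scipy can fit a GEV to synthetic Gumbel-distributed maxima and return a high quantile:

```python
import numpy as np
from scipy.stats import genextreme, gumbel_r

rng = np.random.default_rng(5)

# Synthetic block maxima: per-period maxima of a simulated quantity.
# Block maxima converge to a GEV; here a Gumbel (GEV shape ~ 0).
block_maxima = gumbel_r.rvs(loc=10.0, scale=2.0, size=2000, random_state=rng)

# Maximum-likelihood GEV fit (scipy's shape convention: c = 0 is Gumbel).
shape, loc, scale = genextreme.fit(block_maxima)

# A 95th-percentile "extreme" level from the fitted model:
level_95 = genextreme.ppf(0.95, shape, loc=loc, scale=scale)
```

Fitting the tail explicitly in this way is what allows aleatory extremes to be separated from epistemic uncertainty in an enhanced statistical methodology.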

  20. Analysis of Statistical Distributions Used for Modeling Reliability and Failure Rate of Temperature Alarm Circuit

    International Nuclear Information System (INIS)

    El-Shanshoury, G.I.

    2011-01-01

    Several statistical distributions are used to model various reliability and maintainability parameters. The applied distribution depends on the nature of the data being analyzed. The present paper deals with the analysis of some statistical distributions used in reliability in order to identify the best-fitting distribution. The calculations rely on circuit quantity parameters obtained by using the Relex 2009 computer program. The statistical analysis of ten different distributions indicated that the Weibull distribution gives the best fit for modeling the reliability of the data set of the Temperature Alarm Circuit (TAC). However, the Exponential distribution is found to be the best fit for modeling the failure rate.
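
A best-fit comparison of candidate reliability distributions like the one described can be made with maximum-likelihood fits and an information criterion. The sketch below compares Weibull and exponential fits on synthetic failure times (illustrative only; the study used the Relex 2009 program):

```python
import numpy as np
from scipy.stats import weibull_min, expon

rng = np.random.default_rng(6)

# Illustrative times-to-failure drawn from a Weibull distribution
# (shape k = 2, i.e. an increasing failure rate).
times = weibull_min.rvs(2.0, scale=1000.0, size=500, random_state=rng)

def aic(dist, data):
    """Akaike information criterion for an ML fit with location fixed at 0."""
    params = dist.fit(data, floc=0.0)
    loglik = dist.logpdf(data, *params).sum()
    k = len(params) - 1            # location was fixed, not estimated
    return 2 * k - 2 * loglik

aic_weibull = aic(weibull_min, times)
aic_expon = aic(expon, times)
best = "weibull" if aic_weibull < aic_expon else "exponential"
```

The lower AIC identifies the better-fitting model while penalizing the Weibull's extra shape parameter, mirroring the kind of ranking the paper performs over ten candidate distributions.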

  1. Image analysis and circular statistics for shape-fabric analysis: applications to lithified ignimbrites

    Science.gov (United States)

    Capaccioni, Bruno; Valentini, Laura; Rocchi, Marco B. L.; Nappi, Giovanni; Sarocchi, Damiano

    Computer-assisted image analysis can be successfully used to derive quantitative textural data on pyroclastic rock samples. This method provides a large number of different measurements such as grain size, particle shape and 2D orientation of particle main axes (directional- or shape-fabric) automatically and in a relatively short time. Orientation data reduction requires specific statistical tests, mainly devoted to defining the kind of particle distribution pattern, the possible occurrence of preferred particle orientation, the confidence interval of the mean direction and the degree of randomness with respect to pre-assigned theoretical frequency distributions. Data obtained from image analysis of seven lithified ignimbrite samples from the Vulsini Volcanic District (Central Italy) are used to test different statistics and to provide insight about directional fabrics. First, the possible occurrence of a significant deviation from a theoretical circular uniform distribution was evaluated by using the Rayleigh and Tukey χ2 tests. Then, the Kuiper test was performed to evaluate whether or not the observation fits with a unimodal, Von Mises-like theoretical frequency distribution. Finally, the confidence interval of mean direction was calculated. With the exception of one sample (FPD10), which showed a well-developed bimodality, all the analysed samples display significant anisotropic and unimodal distributions. The minimum number of measurements necessary to obtain reasonable variabilities of the calculated statistics and mean directions was evaluated by repeating random collections of the measured particles at increments of 100 particles for each sample. Although the observed variabilities depend largely on the pattern of distribution and an absolute minimum number cannot be stated, approximately 1500-2000 measurements are required in order to get meaningful mean directions for the analysed samples.
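
The Rayleigh test mentioned above can be sketched directly from the mean resultant length of the measured directions; for axial (orientation) data such as particle long axes, angles are conventionally doubled before applying the test. This is a generic sketch, not the authors' implementation:

```python
import numpy as np

def rayleigh_test(angles_deg):
    """Rayleigh test for uniformity of circular data. Returns the mean
    resultant length R and an approximate p-value; a small p rejects
    uniformity in favour of a unimodal preferred direction.
    (For axial data, double the angles before calling.)"""
    theta = np.radians(angles_deg)
    n = len(theta)
    r = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    z = n * r ** 2
    # Improved small-sample approximation of the p-value:
    p = np.exp(-z) * (1 + (2 * z - z ** 2) / (4 * n))
    return r, min(max(p, 0.0), 1.0)

rng = np.random.default_rng(7)
# Near-uniform orientations vs. a sample with a preferred direction ("fabric")
r_uni, p_uni = rayleigh_test(rng.uniform(0, 360, 500))
r_dir, p_dir = rayleigh_test(rng.normal(45, 20, 500) % 360)
```

Repeating such a test on random subsamples of increasing size is one way to probe how many particle measurements are needed before the estimated mean direction stabilizes.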

  2. Standardization of data processing and statistical analysis in comparative plant proteomics experiment.

    Science.gov (United States)

    Valledor, Luis; Romero-Rodríguez, M Cristina; Jorrin-Novo, Jesus V

    2014-01-01

    Two-dimensional gel electrophoresis remains the most widely used technique for protein separation in plant proteomics experiments. Despite the continuous technical advances and improvements in current 2-DE protocols, an adequate and correct experimental design and statistical analysis of the data tend to be ignored or not properly documented in the current literature. Both a proper experimental design and an appropriate statistical analysis are required in order to confidently discuss results and draw conclusions from experimental data. In this chapter, we describe a model procedure for a correct experimental design and a complete statistical analysis of a proteomics dataset. Our model procedure covers all of the steps in data mining and processing, starting with data preprocessing (transformation, missing value imputation, definition of outliers) and univariate statistics (parametric and nonparametric tests), and finishing with multivariate statistics (clustering, heat-mapping, PCA, ICA, PLS-DA).
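
The preprocessing-to-multivariate pipeline described above can be sketched in a few lines; the spot-volume matrix here is mock data with hypothetical dimensions, and column-mean imputation stands in for the more careful imputation schemes a real analysis would consider:

```python
import numpy as np

rng = np.random.default_rng(8)

# Mock spot-volume matrix: 12 gels (rows) x 50 protein spots (columns),
# log-normal intensities with ~5% missing values (purely illustrative).
data = rng.lognormal(mean=5, sigma=1, size=(12, 50))
data[rng.uniform(size=data.shape) < 0.05] = np.nan

# 1) log-transform to stabilise variance
logged = np.log2(data)
# 2) impute missing values with the per-spot (column) mean
col_mean = np.nanmean(logged, axis=0)
logged = np.where(np.isnan(logged), col_mean, logged)
# 3) centre and run PCA via the singular value decomposition
centred = logged - logged.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
scores = u * s                          # sample scores on the PCs
explained = s ** 2 / (s ** 2).sum()     # variance explained per PC
```

Documenting each of these choices (transformation, imputation rule, centring) is exactly the kind of reporting the chapter argues is missing from much of the 2-DE literature.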

  3. Integrated Data Collection Analysis (IDCA) Program - Statistical Analysis of RDX Standard Data Sets

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Phillips, Jason J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-10-30

    The Integrated Data Collection Analysis (IDCA) program is conducting a Proficiency Test for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are statistical analyses of the results for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis of the RDX Type II Class 5 standard. The material was tested as a well-characterized standard several times during the proficiency study to assess differences among participants and the range of results that may arise for well-behaved explosive materials. The analyses show that there are detectable differences among the results from IDCA participants. While these differences are statistically significant, most of them can be disregarded for comparison purposes to assess potential variability when laboratories attempt to measure identical samples using methods assumed to be nominally the same. The results presented in this report include the average sensitivity results for the IDCA participants and the ranges of values obtained. The ranges represent variation about the mean values of the tests of between 26% and 42%. The magnitude of this variation is attributed to differences in operator, method, and environment as well as the use of different instruments that are also of varying age. The results appear to be a good representation of the broader safety testing community based on the range of methods, instruments, and environments included in the IDCA Proficiency Test.

  4. A new statistic for the analysis of circular data in gamma-ray astronomy

    Science.gov (United States)

    Protheroe, R. J.

    1985-01-01

    A new statistic is proposed for the analysis of circular data. The statistic is designed specifically for situations where a test of uniformity is required which is powerful against alternatives in which a small fraction of the observations is grouped in a small range of directions, or phases.

  5. Measuring the Success of an Academic Development Programme: A Statistical Analysis

    Science.gov (United States)

    Smith, L. C.

    2009-01-01

    This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…

  6. Methodology of comparative statistical analysis of Russian industry based on cluster analysis

    Directory of Open Access Journals (Sweden)

    Sergey S. Shishulin

    2017-01-01

    Full Text Available The article is devoted to exploring the possibilities of applying multidimensional statistical analysis to the study of industrial production, on the basis of comparing its growth rates and structure with those of other developed and developing countries of the world. The purpose of this article is to determine the optimal set of statistical methods and the results of their application to industrial production data. The data include such indicators as output, gross value added, the number of employed, and other indicators of the system of national accounts and operational business statistics. The objects of observation are the industries of the countries of the Customs Union, the United States, Japan and Europe in 2005-2015. The research tools range from simple methods of transformation and graphical and tabular visualization of data to methods of multivariate statistical analysis. In particular, based on a specialized software package (SPSS), the principal components method, discriminant analysis, hierarchical cluster analysis (Ward's method) and k-means were applied. The application of the principal components method to the initial data makes it possible to substantially and effectively reduce the initial space of industrial production data. Thus, for example, in analyzing the structure of industrial production, the reduction was from fifteen industries to three basic, well-interpreted factors: relatively extractive industries (with a low degree of processing), high-tech industries, and consumer goods (medium-technology sectors). At the same time, a comparison of the results of applying cluster analysis to the initial data and to the data obtained with the principal components method established that clustering industrial production data on the basis of the new factors significantly improves the results of clustering. As a result of analyzing the parameters of
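
The standardize-then-cluster step described above can be sketched with Ward's method on mock industry-structure profiles; the economies, sector shares and group structure below are invented for illustration:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(9)

# Mock industry-structure data: shares of extractive, high-tech and
# consumer-goods output for 12 hypothetical economies in two groups.
resource_led = rng.normal([0.6, 0.1, 0.3], 0.05, size=(6, 3))
tech_led = rng.normal([0.2, 0.5, 0.3], 0.05, size=(6, 3))
profiles = np.vstack([resource_led, tech_led])

# Standardise each variable, then cluster with Ward's method.
z = (profiles - profiles.mean(axis=0)) / profiles.std(axis=0)
tree = linkage(z, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
```

In the article's setting the input columns would be the three principal-component factors rather than raw sector shares, which is precisely the substitution that was found to improve the clustering.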

  7. Potential errors and misuse of statistics in studies on leakage in endodontics.

    Science.gov (United States)

    Lucena, C; Lopez, J M; Pulgar, R; Abalos, C; Valderrama, M J

    2013-04-01

    To assess the quality of the statistical methodology used in studies of leakage in Endodontics, and to compare the results found using appropriate versus inappropriate inferential statistical methods. The search strategy used the descriptors 'root filling' 'microleakage', 'dye penetration', 'dye leakage', 'polymicrobial leakage' and 'fluid filtration' for the time interval 2001-2010 in journals within the categories 'Dentistry, Oral Surgery and Medicine' and 'Materials Science, Biomaterials' of the Journal Citation Report. All retrieved articles were reviewed to find potential pitfalls in statistical methodology that may be encountered during study design, data management or data analysis. The database included 209 papers. In all the studies reviewed, the statistical methods used were appropriate for the category attributed to the outcome variable, but in 41% of the cases, the chi-square test or parametric methods were inappropriately selected subsequently. In 2% of the papers, no statistical test was used. In 99% of cases, a statistically 'significant' or 'not significant' effect was reported as a main finding, whilst only 1% also presented an estimation of the magnitude of the effect. When the appropriate statistical methods were applied in the studies with originally inappropriate data analysis, the conclusions changed in 19% of the cases. Statistical deficiencies in leakage studies may affect their results and interpretation and might be one of the reasons for the poor agreement amongst the reported findings. Therefore, more effort should be made to standardize statistical methodology. © 2012 International Endodontic Journal.
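
The inappropriate chi-square use flagged above typically arises with small expected cell counts, where Fisher's exact test is the recommended alternative. A sketch on a hypothetical 2x2 leakage table (counts invented, not from the reviewed studies):

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: sealed vs leaking roots for two filling
# materials, 10 teeth per group.
table = np.array([[8, 2],
                  [3, 7]])

# The chi-square test relies on large-sample theory; when any expected
# count falls below 5, Fisher's exact test is the appropriate choice.
chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
```

Checking the `expected` matrix before choosing the test is the simple safeguard whose omission accounts for much of the 41% misuse rate reported above.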

  8. Development of statistical analysis code for meteorological data (W-View)

    International Nuclear Information System (INIS)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori

    2003-03-01

    A computer code (W-View: Weather View) was developed to analyze meteorological data statistically based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactor' (Nuclear Safety Commission, January 28, 1982; revised March 29, 2001). The code provides the statistical meteorological data needed to assess the public dose in cases of normal operation and severe accident, as required for a nuclear reactor operating license. This code was revised from the original version, which ran on a large office computer, to enable a personal computer user to analyze the meteorological data simply and conveniently and to produce statistical tables and figures of meteorology. (author)

  9. PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data

    International Nuclear Information System (INIS)

    Beedgen, R.; Bicking, U.

    1987-04-01

    The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide on the basis of statistical considerations if, in a given sequence of materials balance periods, a loss of material might have occurred or not. The evaluation of the material balance data is based on statistical test procedures. In PROSA three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM-PC with DOS 3.1. (orig.) [de
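
Sequential tests on a series of material balances are related in spirit to CUSUM-type procedures. As an illustrative sketch only (PROSA's actual truncated sequential tests are not specified here), a one-sided Page/CUSUM test on standardized material-unaccounted-for (MUF) values flags the period at which a cumulative loss signal crosses a threshold:

```python
import numpy as np

def page_cusum(muf, sigma, k=0.5, h=5.0):
    """One-sided Page (CUSUM) test on standardised MUF values.
    Returns the statistic path and the first period exceeding the
    decision threshold h (or None if no alarm)."""
    s, alarm, path = 0.0, None, []
    for i, x in enumerate(muf):
        s = max(0.0, s + x / sigma - k)   # reference value k discounts noise
        path.append(s)
        if alarm is None and s > h:
            alarm = i
    return np.array(path), alarm

rng = np.random.default_rng(10)
# 20 in-control balance periods followed by a sustained 2-sigma loss.
sigma = 1.0
muf = np.concatenate([rng.normal(0, sigma, 20), rng.normal(2.0, sigma, 20)])
path, alarm_period = page_cusum(muf, sigma)
```

The appeal of sequential schemes in safeguards is exactly this: a persistent small loss accumulates across balance periods and eventually triggers an alarm that no single-period test would raise.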

  10. Elimination of statistical fluctuations in higher order moments from event-by-event analysis

    International Nuclear Information System (INIS)

    Li Bo; Zhu Hongli; Liu Lianshou

    2004-01-01

    In the present investigation of high energy multiparticle production, the method of event-by-event analysis has received wide interest. Owing to the limited number of particles in a single event, an important problem is that the elimination of statistical fluctuations has to be worked out first of all. In the current literature, the elimination of statistical fluctuations has only been considered for lower order moments (not above the third order). In the present paper, the elimination of statistical fluctuations is studied for higher order moments, and a general expression for the elimination of statistical fluctuations in the moments of arbitrary order is given. (author)
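
A classic device for removing statistical (Poisson) fluctuations from multiplicity moments is the normalized factorial moment, which equals 1 at every order for pure Poisson noise, so deviations from 1 isolate dynamical fluctuations. This sketch illustrates the principle, not the paper's general expression:

```python
import numpy as np

def normalized_factorial_moment(counts, q):
    """F_q = <n(n-1)...(n-q+1)> / <n>**q. For pure Poisson statistical
    fluctuations F_q = 1, so deviations measure dynamical fluctuations."""
    n = np.asarray(counts, dtype=float)
    prod = np.ones_like(n)
    for i in range(q):
        prod *= (n - i)
    return prod.mean() / n.mean() ** q

rng = np.random.default_rng(11)
# Pure statistical fluctuations: Poisson multiplicities.
poisson_counts = rng.poisson(5.0, size=200000)
f2 = normalized_factorial_moment(poisson_counts, 2)
f3 = normalized_factorial_moment(poisson_counts, 3)

# Dynamical fluctuations: Poisson counts with an event-wise fluctuating
# mean (a negative-binomial-like mixture), which drives F_q above 1.
fluctuating_mean = rng.gamma(shape=5.0, scale=1.0, size=200000)
mixed_counts = rng.poisson(fluctuating_mean)
f2_dyn = normalized_factorial_moment(mixed_counts, 2)
```

Because the factorial-moment construction cancels the Poisson sampling noise exactly, any excess of F_q over 1 can be attributed to genuine event-to-event dynamics.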

  11. Statistical analysis and optimization in the process/device/circuit/system microelectronics design

    OpenAIRE

    Kuleshov, A.; Nelayev, V.; Stempitsky, V.

    2010-01-01

    Methodology and results of statistical analysis and optimization in the joint process/device/circuit/system microelectronics design are presented. A simple example of the cell inverter design illustrates the efficiency of the methodology.

  12. Sequential Structural and Fluid Dynamics Analysis of Balloon-Expandable Coronary Stents: A Multivariable Statistical Analysis.

    Science.gov (United States)

    Martin, David; Boyle, Fergal

    2015-09-01

    Several clinical studies have identified a strong correlation between neointimal hyperplasia following coronary stent deployment and both stent-induced arterial injury and altered vessel hemodynamics. As such, the sequential structural and fluid dynamics analysis of balloon-expandable stent deployment should provide a comprehensive indication of stent performance. Despite this observation, very few numerical studies of balloon-expandable coronary stents have considered both the mechanical and hemodynamic impact of stent deployment. Furthermore, in the few studies that have considered both phenomena, only a small number of stents have been considered. In this study, a sequential structural and fluid dynamics analysis methodology was employed to compare both the mechanical and hemodynamic impact of six balloon-expandable coronary stents. To investigate the relationship between stent design and performance, several common stent design properties were then identified and the dependence between these properties and both the mechanical and hemodynamic variables of interest was evaluated using statistical measures of correlation. Following the completion of the numerical analyses, stent strut thickness was identified as the only common design property that demonstrated a strong dependence with either the mean equivalent stress predicted in the artery wall or the mean relative residence time predicted on the luminal surface of the artery. These results corroborate the findings of the large-scale ISAR-STEREO clinical studies and highlight the crucial role of strut thickness in coronary stent design. The sequential structural and fluid dynamics analysis methodology and the multivariable statistical treatment of the results described in this study should prove useful in the design of future balloon-expandable coronary stents.
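
The statistical measures of correlation used in such a study can be sketched as follows; the strut-thickness and stress values below are invented for illustration and do not reproduce the paper's results:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(12)

# Hypothetical data for six stent designs: strut thickness (mm) vs the
# mean equivalent stress predicted in the artery wall (kPa).
thickness = np.array([0.05, 0.08, 0.10, 0.13, 0.14, 0.15])
stress = 200 + 900 * thickness + rng.normal(0, 5, 6)

# Pearson captures linear dependence; Spearman captures any monotonic
# dependence, which is safer with only six designs.
r_pearson, p_pearson = pearsonr(thickness, stress)
rho_spearman, p_spearman = spearmanr(thickness, stress)
```

With so few designs, reporting both coefficients (and their p-values) guards against a single outlier driving the apparent dependence on a design property.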

  13. On the importance of statistics in breath analysis - Hope or curse?

    OpenAIRE

    Eckel, Sandrah P.; Baumbach, Jan; Hauschild, Anne-Christin

    2014-01-01

    As we saw at the 2013 Breath Analysis Summit, breath analysis is a rapidly evolving field. Increasingly sophisticated technology is producing huge amounts of complex data. A major barrier now faced by the breath research community is the analysis of these data. Emerging breath data require sophisticated, modern statistical methods to allow for a careful and robust deduction of real-world conclusions.

  14. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Yuxia [Department of Physics, Wuhan University, Wuhan 430072 (China); Sang Jianping [Department of Physics, Wuhan University, Wuhan 430072 (China); Department of Physics, Jianghan University, Wuhan 430056 (China); Zou Xianwu [Department of Physics, Wuhan University, Wuhan 430072 (China)]. E-mail: xwzou@whu.edu.cn; Jin Zhunzhi [Department of Physics, Wuhan University, Wuhan 430072 (China)

    2005-09-26

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained the expression of the mean square displacement and the effective exponent as functions of time and percolation probability by statistical analysis and made a comparison with simulations. We also obtained a reduced time that scales the motion of walkers in growing self-avoiding walks on regular and percolation lattices.
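
    The model lends itself to a compact simulation. The Python toy version below (function names and parameters are ours, not taken from the paper) grows self-avoiding walkers on a site-percolation lattice and estimates the mean square displacement as a function of time:

```python
import random

def percolation_lattice(size, p, rng):
    """Site-percolation lattice: each site is open with probability p."""
    return {(x, y): rng.random() < p
            for x in range(size) for y in range(size)}

def grow_walk(lattice, start, max_steps, rng):
    """Growing self-avoiding walk: at each step move to a uniformly chosen
    open, unvisited nearest neighbour; the walk dies when trapped."""
    path = [start]
    visited = {start}
    for _ in range(max_steps):
        x, y = path[-1]
        choices = [(x + dx, y + dy)
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                   if lattice.get((x + dx, y + dy), False)
                   and (x + dx, y + dy) not in visited]
        if not choices:
            break  # walker is trapped
        nxt = rng.choice(choices)
        path.append(nxt)
        visited.add(nxt)
    return path

def mean_square_displacement(p=0.8, size=61, walks=200, max_steps=50, seed=1):
    """Average squared displacement from the origin at each time step,
    over many walkers on freshly drawn percolation lattices."""
    rng = random.Random(seed)
    start = (size // 2, size // 2)
    msd = [0.0] * (max_steps + 1)
    counts = [0] * (max_steps + 1)
    for _ in range(walks):
        lattice = percolation_lattice(size, p, rng)
        lattice[start] = True  # seed the walker on an open site
        for t, (x, y) in enumerate(grow_walk(lattice, start, max_steps, rng)):
            msd[t] += (x - start[0]) ** 2 + (y - start[1]) ** 2
            counts[t] += 1
    return [m / c for m, c in zip(msd, counts) if c > 0]
```

    At p = 1 the lattice is regular; lowering p toward the percolation threshold slows the walkers, which is the regime the reduced-time scaling in the abstract addresses.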

  15. Comparative analysis of statistical software products for the qualifying examination of plant varieties suitable for dissemination

    Directory of Open Access Journals (Sweden)

    Н. В. Лещук

    2017-12-01

    Purpose. To define statistical methods and tools (application packages) for creating a decision support system (DSS) for the qualifying examination of plant varieties suitable for dissemination (VSD) in the context of data processing tasks, and to substantiate the selection of software for processing statistical data from the field and laboratory investigations included in the qualifying examination for VSD. Methods. An analytical method based on the comparison of descriptive and multivariate statistics and of tools for the intellectual analysis of data obtained during the qualifying examination for VSD; a comparative analysis of software tools for processing statistical data in order to prepare proposals for the final decision on a plant variety application; decomposition of the tasks included in the decision support system for the qualifying examination of candidate varieties for VSD. Results. The statistical package SPSS, the analysis package included in MS Excel and the programming language R were compared against the following criteria: interface usability, functionality, quality of presentation of calculation results, clarity of graphical information, and software cost. All three products are widely used worldwide for statistical data processing and offer similar functions for computing statistics. Conclusion. The VSD tasks were identified and matched to the investigated tools, with the programming language R recommended as the tool of choice. The main advantage of R over the package IBM SPSS Statistics is that R is open source software.

  16. TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.

    Science.gov (United States)

    Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han

    2017-03-01

    High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline, and are difficult to appropriately integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, the filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.

  17. Statistical trend analysis methodology for rare failures in changing technical systems

    International Nuclear Information System (INIS)

    Ott, K.O.; Hoffmann, H.J.

    1983-07-01

    A methodology for a statistical trend analysis (STA) in failure rates is presented. It applies primarily to relatively rare events in changing technologies or components. The formulation is more general and the assumptions are less restrictive than in a previously published version. Relations between the statistical analysis and probabilistic risk assessment (PRA) are discussed in terms of the categorization of decisions for action following particular failure events. The significance of tentatively identified trends is explored. In addition to statistical tests for trend significance, a combination of STA and PRA results quantifying the trend complement is proposed. The STA approach is compared with other concepts for trend characterization. (orig.)
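
    The abstract does not reproduce the STA formulation, but a classic test for the same setting, detecting a trend in rare failure events, is the Laplace test. The Python sketch below is that generic test, not the authors' method:

```python
import math

def laplace_trend_statistic(event_times, horizon):
    """Laplace trend test for failure times observed on (0, horizon].
    Under a homogeneous Poisson process (no trend), u is approximately
    standard normal; u > 0 suggests a worsening failure rate, u < 0 an
    improving one."""
    n = len(event_times)
    return ((sum(event_times) / n - horizon / 2)
            / (horizon * math.sqrt(1.0 / (12 * n))))
```

    For five failures clustered early in a 100-unit observation window, say at times 1, 2, 3, 5 and 8, the statistic is about -3.6, i.e. significant evidence of a decreasing failure rate.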

  18. R: A Software Environment for Comprehensive Statistical Analysis of Astronomical Data

    Science.gov (United States)

    Feigelson, E. D.

    2012-09-01

    R is the largest public domain software language for statistical analysis of data. Together with CRAN, its rapidly growing collection of >3000 add-on specialized packages, it implements around 60,000 statistical functionalities in a cohesive software environment. Extensive graphical capabilities and interfaces with other programming languages are also available. The scope and language of R/CRAN are briefly described, along with efforts to promulgate its use in astronomy. R can become an important tool for advanced statistical analysis of astronomical data.

  19. What type of statistical model to choose for the analysis of radioimmunoassays

    International Nuclear Information System (INIS)

    Huet, S.

    1984-01-01

    The current techniques used for statistical analysis of radioimmunoassays are not very satisfactory for either the statistician or the biologist. They are based on an attempt to make the response curve linear to avoid complicated computations. The present article shows that this practice has considerable effects (often neglected) on the statistical assumptions which must be formulated. A more strict analysis is proposed by applying the four-parameter logistic model. The advantages of this method are: the statistical assumptions formulated are based on observed data, and the model can be applied to almost all radioimmunoassays [fr
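
    The four-parameter logistic model advocated here is straightforward to fit with modern tools. A minimal Python sketch using SciPy (the concentrations and counts below are simulated for illustration, not data from the article):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: response runs from a (at zero dose) to d
    (at infinite dose), with inflection at concentration c and slope b."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Simulated standard curve: hypothetical concentrations and counts
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
true_counts = four_pl(conc, a=200.0, b=1.3, c=5.0, d=3000.0)
rng = np.random.default_rng(0)
counts = true_counts * (1 + rng.normal(scale=0.02, size=conc.size))  # ~2% CV

params, _ = curve_fit(four_pl, conc, counts,
                      p0=[counts.min(), 1.0, 5.0, counts.max()])
a_hat, b_hat, c_hat, d_hat = params
```

    curve_fit also accepts a sigma argument, which lets the fit reflect a response-dependent error variance instead of the homoscedasticity tacitly assumed when the response curve is linearized.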

  20. Mathematical statistics

    CERN Document Server

    Pestman, Wiebe R

    2009-01-01

    This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.

  1. The Statistical Analysis and Assessment of the Solvency of Forest Enterprises

    Directory of Open Access Journals (Sweden)

    Vyniatynska Liudmila V.

    2016-05-01

    The aim of the article is to conduct a statistical analysis of the solvency of forest enterprises through a system of statistical indicators using the sampling method (the sampling is based on the criterion of forest cover percentage of the regions of Ukraine). The financial statements of forest enterprises, which form the system of information and analytical support for the statistical analysis of solvency in Ukrainian forestry, were analyzed and evaluated for 2009-2015. With the help of the developed recommended values, the results of the statistical analysis of the forest enterprises' solvency under conditions of self-financing and commercial operation were summarized and systematized. Using a methodology for the statistical analysis of forest enterprises' solvency built on a relevant conceptual framework that meets current needs, a system of statistical indicators was calculated that makes it possible to assess the level of solvency of forest enterprises and to identify the reasons for a low level.

  2. Discontinuing Inappropriate Medication Use in Nursing Home Residents : A Cluster Randomized Controlled Trial

    NARCIS (Netherlands)

    Wouters, Hans; Scheper, Jessica; Koning, Hedi; Brouwer, Chris; Twisk, Jos W.; van der Meer, Helene; Boersma, Froukje; Zuidema, Sytse U.; Taxis, Katja

    2017-01-01

    Background: Inappropriate prescribing is a well-known clinical problem in nursing home residents, but few interventions have focused on reducing inappropriate medication use. Objective: To examine successful discontinuation of inappropriate medication use and to improve prescribing in nursing home

  3. Discontinuing Inappropriate Medication in Nursing Home Residents (DIM-NHR study): A cluster randomized controlled trial

    NARCIS (Netherlands)

    Wouters, H.; Scheper, J.; Koning, H.; Brouwer, C.; Twisk, J.; Van Der Meer, H.; Boersma, F.; Zuidema, S.; Taxis, K.

    2017-01-01

    Introduction: Inappropriate prescribing is a prevalent problem in nursing home residents that is associated with cognitive and physical impairment. Few interventions have been shown to reduce inappropriate prescribing. The aim was therefore to examine successful discontinuation of inappropriate

  4. Unveiling common responses of Medicago truncatula to appropriate and inappropriate rust species

    Directory of Open Access Journals (Sweden)

    Maria Carlota Vaz Patto

    2014-11-01

    Little is known about the nature of effective defense mechanisms in legumes against pathogens of remotely related plant species. Some rust species are among the pathogens with a broad host range causing dramatic losses in various crop plants. To understand and compare the different host and nonhost resistance responses of legume species against rusts, we characterized the reaction of the model legume Medicago truncatula to one appropriate (Uromyces striatus) and two inappropriate (U. viciae-fabae and U. lupinicolus) rusts. We found that similar pre- and post-haustorial mechanisms of resistance appear to be operative in M. truncatula against appropriate and inappropriate rust fungi. The appropriate U. striatus germinated better on M. truncatula accessions than the inappropriate U. viciae-fabae and U. lupinicolus, but once germinated, germ tubes of the three rusts had a similar level of success in finding stomata and forming an appressorium over a stoma. However, responses to the different inappropriate rust species also showed some specificity, suggesting a combination of non-specific and specific responses underlying this legume's nonhost resistance to rust fungi. Further genetic and expression analysis studies will contribute to the development of the molecular tools needed to use the present information on host and nonhost resistance mechanisms to breed for broad-spectrum rust resistance in legume species.

  5. [Reasons for inappropriate prescribing of antibiotics in a high-complexity pediatric hospital].

    Science.gov (United States)

    Ruvinsky, Silvina; Mónaco, Andrea; Pérez, Guadalupe; Taicz, Moira; Inda, Laura; Kijko, Ivana; Constanzo, Patricia; Bologna, Rosa

    2011-12-01

    Determine the reasons for inappropriate prescription of antibiotics and identify opportunities to improve prescription of these drugs in pediatric patients hospitalized in intermediate and intensive care units. A prospective, descriptive longitudinal study was conducted of pediatric patients in intermediate and intensive care units who received parenteral administration of antibiotics, with the exception of newborns, burn unit patients, and surgical prophylaxis patients. A univariate analysis and multiple logistic regression were performed. A total of 376 patients with a median age of 50 months were studied (interquartile range [IQR] 14.5-127 months). Out of the total patients studied, 75% had one or more underlying conditions. A total of 40.6% of these patients had an oncologic pathology and 33.5% had neurological conditions. The remaining 25.9% had other underlying conditions. Antibiotic treatment was inappropriate in 35.6% of the patients studied (N = 134). In 73 (54.4%) of the 134 cases, inappropriate use was due to the type of antibiotic prescribed, the dose administered, or the treatment period. The 61 (45.5%) remaining cases did not require antibiotic treatment. In the multivariate analysis, the risk factors for inappropriate use of antibiotics were: administration of ceftriaxone OR 2 (95% CI, 1.3-3.7; P = 0.02); acute lower respiratory tract infection OR 1.8 (95% CI, 1.1-3.3; P < 0.04); onset of fever of unknown origin in hospital inpatients OR 5.55 (95% CI, 2.5-12; P < 0.0001); and febrile neutropenia OR 0.3 (95% CI, 0.1-0.7; P = 0.009). Inappropriate use of antibiotics was less common in the clinical conditions that were well-characterized. Prescribing practices that could be improved were identified through the preparation and circulation of guidelines for antibiotic use in hospital inpatients.

  6. An inappropriate tool: criminal law and HIV in Asia.

    Science.gov (United States)

    Csete, Joanne; Dube, Siddharth

    2010-09-01

    Asian countries have applied criminal sanctions widely in areas directly relevant to national HIV programmes and policies, including criminalization of HIV transmission, sex work, homosexuality and drug injection. This criminalization may impede universal access to HIV prevention and treatment services in Asia and undermine vulnerable people's ability to be part of the HIV response. To review the status of application of criminal law in key HIV-related areas in Asia and analyze its impact. Review of literature and application of human rights norms to analysis of criminal law measures. Criminal laws in the areas considered here and their enforcement, while intended to reduce HIV transmission, are inappropriate and counterproductive with respect to health and human rights. Governments should remove punitive laws that impede the HIV response and should ensure meaningful participation of people living with HIV, people who use illicit drugs, sex workers and men who have sex with men in combating stigma and discrimination and developing rights-centered approaches to HIV.

  7. Statistical power analysis a simple and general model for traditional and modern hypothesis tests

    CERN Document Server

    Murphy, Kevin R; Wolach, Allen

    2014-01-01

    Noted for its accessible approach, this text applies the latest approaches of power analysis to both null hypothesis and minimum-effect testing using the same basic unified model. Through the use of a few simple procedures and examples, the authors show readers with little expertise in statistical analysis how to obtain the values needed to carry out the power analysis for their research. Illustrations of how these analyses work and how they can be used to choose the appropriate criterion for defining statistically significant outcomes are sprinkled throughout. The book presents a simple and general model for traditional and modern hypothesis tests.

  8. Primary Sjogren's syndrome associated with inappropriate ...

    African Journals Online (AJOL)

    1990-03-30

    Mar 30, 1990 ... ethanol, paracetamol, barbiturates and benzodiazepines were negative. An ECG, chest radiography, blood gas analysis, thyroid function tests and the cortisol level were normal, and screening for porphyrin in urine and stool was negative. Computed tomography of the brain and cerebrospinal fluid.

  9. A critical discussion of null hypothesis significance testing and statistical power analysis within psychological research

    DEFF Research Database (Denmark)

    Jones, Allan; Sommerlund, Bo

    2007-01-01

    The uses of null hypothesis significance testing (NHST) and statistical power analysis within psychological research are critically discussed. The article looks at the problems of relying solely on NHST when dealing with small and large sample sizes. The use of power-analysis in estimating...

  10. Analysis of Variance with Summary Statistics in Microsoft® Excel®

    Science.gov (United States)

    Larson, David A.; Hsu, Ko-Cheng

    2010-01-01

    Students regularly are asked to solve Single Factor Analysis of Variance problems given only the sample summary statistics (number of observations per category, category means, and corresponding category standard deviations). Most undergraduate students today use Excel for data analysis of this type. However, Excel, like all other statistical…
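
    The computation those students are asked to perform is compact enough to show in full. The sketch below (any spreadsheet or language works; Python is used here) reconstructs the one-way ANOVA F statistic from the summary statistics alone:

```python
def anova_from_summary(ns, means, sds):
    """One-way ANOVA from per-group summary statistics only:
    ns = sample sizes, means = group means, sds = group standard
    deviations. Returns (F, df_between, df_within)."""
    k = len(ns)
    N = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / N
    ss_between = sum(n * (m - grand_mean) ** 2 for n, m in zip(ns, means))
    ss_within = sum((n - 1) * s ** 2 for n, s in zip(ns, sds))
    df_between, df_within = k - 1, N - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```

    For the groups {1, 2, 3} and {2, 4, 6}, summarized as (n=3, mean=2, sd=1) and (n=3, mean=4, sd=2), this returns F = 2.4 on (1, 4) degrees of freedom, the same value ANOVA on the raw data gives.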

  11. STATISTICAL ANALYSIS OF DIESEL CAR REPAIRS ON THE EXAMPLE OF DIESEL SERVICE ADAMCZYK COMPANIES

    Directory of Open Access Journals (Sweden)

    Łukasz KONIECZNY

    2014-12-01

    The article presents a statistical analysis of car repair data gathered by the examined company over a five-year time interval. It is based on an SQL database which contains information about all completed orders. The analysis determines the structure of the set of repaired car makes and, additionally, identifies the most frequent vehicle defects.

  12. On the blind use of statistical tools in the analysis of globular cluster stars

    Science.gov (United States)

    D'Antona, Francesca; Caloi, Vittoria; Tailo, Marco

    2018-04-01

    As with most data analysis methods, the Bayesian method must be handled with care. We show that its application to determine stellar evolution parameters within globular clusters can lead to paradoxical results if used without the necessary precautions. This is a cautionary tale on the use of statistical tools for big data analysis.

  13. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.

  14. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients

    DEFF Research Database (Denmark)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte

    2017-01-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods other than classical statistics, which are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal ... and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients.
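
    The transformation step recommended in the conclusion is typically a log-ratio transform. A minimal sketch of the centred log-ratio (CLR), the most common choice (the function name is ours, not from the paper):

```python
import math

def clr(composition):
    """Centred log-ratio transform: maps a composition (positive parts
    summing to any constant) into unconstrained real space, where
    classical statistics (mean, SD, correlation) are meaningful."""
    logs = [math.log(x) for x in composition]
    g = sum(logs) / len(logs)  # log of the geometric mean
    return [v - g for v in logs]
```

    Because clr([60, 30, 10]) and clr([6, 3, 1]) coincide, statistics computed on CLR coordinates depend only on the relative information in the composition, which is exactly what the sum constraint preserves.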

  15. Introduction to statistics and data analysis with exercises, solutions and applications in R

    CERN Document Server

    Heumann, Christian; Shalabh

    2016-01-01

    This introductory statistics textbook conveys the essential concepts and tools needed to develop and nurture statistical thinking. It presents descriptive, inductive and explorative statistical methods and guides the reader through the process of quantitative data analysis. In the experimental sciences and interdisciplinary research, data analysis has become an integral part of any scientific study. Issues such as judging the credibility of data, analyzing the data, evaluating the reliability of the obtained results and finally drawing the correct and appropriate conclusions from the results are vital. The text is primarily intended for undergraduate students in disciplines like business administration, the social sciences, medicine, politics, macroeconomics, etc. It features a wealth of examples, exercises and solutions with computer code in the statistical programming language R as well as supplementary material that will enable the reader to quickly adapt all methods to their own applications.

  16. Exploratory Visual Analysis of Statistical Results from Microarray Experiments Comparing High and Low Grade Glioma

    Directory of Open Access Journals (Sweden)

    Jason H. Moore

    2007-01-01

    The biological interpretation of gene expression microarray results is a daunting challenge. For complex diseases such as cancer, wherein the body of published research is extensive, the incorporation of expert knowledge provides a useful analytical framework. We have previously developed the Exploratory Visual Analysis (EVA) software for exploring data analysis results in the context of annotation information about each gene, as well as biologically relevant groups of genes. We present EVA as a flexible combination of statistics and biological annotation that provides a straightforward visual interface for the interpretation of microarray analyses of gene expression in the most commonly occurring class of brain tumors, glioma. We demonstrate the utility of EVA for the biological interpretation of statistical results by analyzing publicly available gene expression profiles of two important glial tumors. The results of a statistical comparison between 21 malignant, high-grade glioblastoma multiforme (GBM) tumors and 19 indolent, low-grade pilocytic astrocytomas were analyzed using EVA. By using EVA to examine the results of a relatively simple statistical analysis, we were able to identify tumor class-specific gene expression patterns having both statistical and biological significance. Our interactive analysis highlighted the potential importance of genes involved in cell cycle progression, proliferation, signaling, adhesion, migration, motility, and structure, as well as candidate gene loci on a region of Chromosome 7 that has been implicated in glioma. Because EVA does not require statistical or computational expertise and has the flexibility to accommodate any type of statistical analysis, we anticipate EVA will prove a useful addition to the repertoire of computational methods used for microarray data analysis. EVA is available at no charge to academic users and can be found at http://www.epistasis.org.

  17. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    Science.gov (United States)

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
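
    A much-simplified, single-detection-limit version of ROS can be sketched in plain Python. Helsel's full method, as implemented in the library described here, handles multiple detection limits via censored probability plotting; this sketch is only illustrative:

```python
import math
from statistics import NormalDist

def simple_ros(detects, n_censored):
    """Simplified regression on order statistics for one detection limit:
    fit log(concentration) against normal quantiles of the detected
    values' plotting positions, then impute the censored observations
    (which occupy the lowest ranks) from the fitted line."""
    n = len(detects) + n_censored
    detects = sorted(detects)
    # Blom plotting positions; censored values take the lowest ranks
    pp = [(n_censored + i + 1 - 0.375) / (n + 0.25) for i in range(len(detects))]
    z = [NormalDist().inv_cdf(p) for p in pp]
    y = [math.log(d) for d in detects]
    # Least-squares line y = a + b*z fitted on the detected portion only
    zbar, ybar = sum(z) / len(z), sum(y) / len(y)
    b = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
         / sum((zi - zbar) ** 2 for zi in z))
    a = ybar - b * zbar
    # Impute the censored values at the low-rank plotting positions
    imputed = [math.exp(a + b * NormalDist().inv_cdf((i + 1 - 0.375) / (n + 0.25)))
               for i in range(n_censored)]
    return imputed + detects
```

    Summary statistics such as the mean, standard deviation and percentiles are then computed on the combined imputed-plus-detected values.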

  18. Analysis of health in health centers area in Depok using correspondence analysis and scan statistic

    Science.gov (United States)

    Basir, C.; Widyaningsih, Y.; Lestari, D.

    2017-07-01

    Hotspots are areas with a higher case intensity than others. In area health problems, for example, the number of illnesses in a region can serve as a parameter, and the condition of the area determines its severity; if this condition is known early, it can be addressed preventively. Many factors affect the severity level of an area. The health factors considered in this study are the numbers of infants with low birth weight, malnourished children under five years old, deaths of children under five, maternal deaths, births without the assistance of health personnel, infants who received no health care, and infants without basic immunization. The number of cases is recorded for every public health center area in Depok. Correspondence analysis provides graphical information about the relationship between two nominal variables: it creates a plot based on row and column scores and shows strongly related categories at a close distance. The scan statistic method is used to detect hotspots based on selected variables occurring in the study area, and correspondence analysis is used to picture the association between the regions and the variables. Using the SaTScan software, the Sukatani health center was identified as a point hotspot, and correspondence analysis showed that the health centers and the seven variables have a very significant relationship, with the majority of health centers close to all variables except Cipayung, which is distantly related to the number of maternal deaths. These results can serve as input for government agencies to upgrade the health level in the area.
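
    The scan statistic behind SaTScan scores candidate zones with a Poisson likelihood ratio. A much-simplified sketch (single-region zones and hypothetical counts only; SaTScan scans circular windows over many regions and assesses significance by Monte Carlo replication):

```python
import math

def zone_llr(c, e, C):
    """Kulldorff-style Poisson log-likelihood ratio for one candidate zone
    with c observed and e expected cases out of C total cases."""
    if c <= e:  # only excess-risk (hotspot) zones are scored
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

def find_hotspot(cases, populations):
    """Scan every single-region zone and return the index and score of
    the zone with the highest likelihood ratio."""
    C = sum(cases)
    P = sum(populations)
    expected = [C * p / P for p in populations]  # null: uniform risk
    scores = [zone_llr(c, e, C) for c, e in zip(cases, expected)]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

    With equal populations and case counts [5, 5, 30, 5, 5], the third region stands out as the hotspot, mirroring how a single health-center area such as Sukatani can dominate the scan.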

  19. On Statistical Analysis of Competing Risks with Application to the Time of First Goal

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2016-01-01

    Vol. 2, No. 10 (2016), pp. 606-623, Article No. 2. ISSN 2411-2518 R&D Projects: GA ČR GA13-14445S Institutional support: RVO:67985556 Keywords: survival analysis * competing risks * sports statistics Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2016/SI/volf-0466157.pdf

  20. An Analysis of the International Proposals for Harmonization Accounts Statement and Government Finance Statistics

    OpenAIRE

    Andrei Razvan Crisan; Melinda Timea Fulop

    2014-01-01

    Modern market requirements and those of government over the last decade call for two information systems: accounting and statistics. In accordance with these requirements, we believe it is very important to analyze the harmonization of the two systems: Government Finance Statistics (GFS), used in support of macroeconomic analysis, and General Purpose Financial Reports (GPFR) prepared in accordance with International Public Sector Accounting Standards, used for making decisions and accounta...

  1. Using R and RStudio for data management, statistical analysis and graphics

    CERN Document Server

    Horton, Nicholas J

    2015-01-01

    This is the second edition of the popular book on using R for statistical analysis and graphics. The authors, who run a popular blog supplementing their books, have focused on adding many new examples to this new edition. These examples are presented primarily in new chapters based on the following themes: simulation, probability, statistics, mathematics/computing, and graphics. The authors have also added many other updates, including a discussion of RStudio, a very popular development environment for R.

  2. Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia

    Energy Technology Data Exchange (ETDEWEB)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge [Hospital Clinico San Carlos, Department of Neurology, Madrid (Spain); Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis [San Carlos Health Research Institute (IdISSC) Complutense University of Madrid, Department of Nuclear Medicine, Hospital Clinico San Carlos, Madrid (Spain)

    2015-05-01

    Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. There were 10 raters who analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created using Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA was high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. In cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
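
    The inter-rater agreement figures quoted above can be reproduced with a short routine. The sketch below is a generic implementation of Fleiss' kappa, not the authors' code, on the usual scale where 0.41-0.60 is read as moderate and 0.61-0.80 as substantial agreement:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for m raters assigning n subjects to k categories.
    `ratings` is an n x k table: ratings[i][j] = number of raters who put
    subject i in category j (row sums must all equal m)."""
    n = len(ratings)
    m = sum(ratings[0])
    k = len(ratings[0])
    # Marginal proportion of assignments falling in each category
    p = [sum(row[j] for row in ratings) / (n * m) for j in range(k)]
    # Per-subject observed agreement among the m raters
    P = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings]
    p_bar = sum(P) / n          # mean observed agreement
    p_e = sum(pj * pj for pj in p)  # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)
```

    Perfect agreement across subjects gives kappa = 1, while systematic disagreement drives it negative, so values such as 0.568 (visual reads) versus 0.756-0.881 (statistical maps) quantify the concordance gap the study reports.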

  3. Visual and statistical analysis of 18F-FDG PET in primary progressive aphasia

    International Nuclear Information System (INIS)

    Matias-Guiu, Jordi A.; Moreno-Ramos, Teresa; Garcia-Ramos, Rocio; Fernandez-Matarrubia, Marta; Oreja-Guevara, Celia; Matias-Guiu, Jorge; Cabrera-Martin, Maria Nieves; Perez-Castejon, Maria Jesus; Rodriguez-Rey, Cristina; Ortega-Candil, Aida; Carreras, Jose Luis

    2015-01-01

    Diagnosing primary progressive aphasia (PPA) and its variants is of great clinical importance, and fluorodeoxyglucose (FDG) positron emission tomography (PET) may be a useful diagnostic technique. The purpose of this study was to evaluate interobserver variability in the interpretation of FDG PET images in PPA, as well as the diagnostic sensitivity and specificity of the technique. We also aimed to compare visual and statistical analyses of these images. Ten raters analysed 44 FDG PET scans from 33 PPA patients and 11 controls. Five raters analysed the images visually, while the other five used maps created with Statistical Parametric Mapping software. Two spatial normalization procedures were performed: global mean normalization and cerebellar normalization. Clinical diagnosis was considered the gold standard. Inter-rater concordance was moderate for visual analysis (Fleiss' kappa 0.568) and substantial for statistical analysis (kappa 0.756-0.881). Agreement was good for all three variants of PPA except for the nonfluent/agrammatic variant studied with visual analysis. The sensitivity and specificity of each rater's diagnosis of PPA were high, averaging 87.8 and 89.9 % for visual analysis and 96.9 and 90.9 % for statistical analysis using global mean normalization, respectively. With cerebellar normalization, sensitivity was 88.9 % and specificity 100 %. FDG PET demonstrated high diagnostic accuracy for the diagnosis of PPA and its variants. Inter-rater concordance was higher for statistical analysis, especially for the nonfluent/agrammatic variant. These data support the use of FDG PET to evaluate patients with PPA and show that statistical analysis methods are particularly useful for identifying the nonfluent/agrammatic variant of PPA. (orig.)
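The inter-rater concordance figures in the record above are Fleiss' kappa values. As a minimal sketch of how that statistic is computed (a plain NumPy implementation; the example ratings matrix is hypothetical, not the study's data):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (subjects x categories) matrix, where entry
    [i, j] counts how many raters assigned subject i to category j.
    Assumes the same number of raters n for every subject."""
    counts = np.asarray(counts, dtype=float)
    N = counts.shape[0]                  # number of subjects
    n = counts.sum(axis=1)[0]            # raters per subject
    p_j = counts.sum(axis=0) / (N * n)   # overall category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()                   # mean observed agreement
    P_e = np.square(p_j).sum()           # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Perfect agreement among 5 raters on 3 subjects yields kappa = 1.
print(fleiss_kappa([[5, 0], [0, 5], [5, 0]]))
```

Kappa near 0.6 is conventionally read as "moderate" and above about 0.6-0.8 as "substantial", matching the interpretation in the abstract.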

  4. Prevalence and correlates of inappropriate use of benzodiazepines in Kosovo.

    Science.gov (United States)

    Tahiri, Zejdush; Kellici, Suela; Mone, Iris; Shabani, Driton; Qazimi, Musa; Burazeri, Genc

    2017-08-01

    In post-war Kosovo, the magnitude of inappropriate use of benzodiazepines is unknown to date. The aim of this study was to assess the prevalence and correlates of continuation of intake of benzodiazepines beyond prescription (referred to as "inappropriate use") in the adult population of Gjilan region in Kosovo. A cross-sectional study was conducted in Gjilan region in 2015 including a representative sample of 780 individuals attending different pharmacies and reporting use of benzodiazepines (385 men and 395 women; age range 18-87 years; response rate: 90%). A structured questionnaire was administered to all participants inquiring about the use of benzodiazepines and socio-demographic characteristics. Overall, the prevalence of inappropriate use of benzodiazepines was 58%. In multivariable-adjusted models, inappropriate use of benzodiazepines was significantly associated with older age (OR 1.7, 95% CI 1.1-2.7), middle education (OR 1.8, 95% CI 1.2-2.7), daily use (OR 1.4, 95% CI 1.1-2.0) and addiction awareness (OR 2.7, 95% CI 2.0-3.8). Furthermore, there was evidence of a borderline relationship with rural residence (OR 1.2, 95% CI 0.9-1.7). Our study provides novel evidence about the prevalence and selected correlates of inappropriate use of benzodiazepines in Gjilan region of Kosovo. Health professionals and policymakers in Kosovo should be aware of the magnitude and determinants of drug misuse in this transitional society.
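The odds ratios with 95% confidence intervals reported above come from multivariable-adjusted models. As a simplified, unadjusted sketch of how an odds ratio and its Wald interval are obtained from a 2x2 table (illustrative counts, not the study's data):

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = exposed with/without the outcome,
    c/d = unexposed with/without the outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 10/90 inappropriate users among older users,
# 5/95 among younger users.
print(odds_ratio_wald_ci(10, 90, 5, 95))
```

A CI whose lower bound dips just below 1 corresponds to the "borderline relationship" wording used for rural residence in the abstract.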

  5. [Develop a statistics analysis software in population genetics using VBA language].

    Science.gov (United States)

    Cai, Ying; Zhou, Ni; Xu, Ye-li; Xiang, Da-peng; Su, Jiang-hui; Zhang, Lin-tian

    2006-12-01

    The aim was to develop statistical analysis software for STR population genetics, in order to promote and accelerate basic research in the field. Microsoft VBA for Excel, which is simple and easy to use, was selected as the programming language, and its macro facility was used to build the statistical analysis software. The resulting VBA-based software, "Easy STR Genetics", performs population genetic analysis of STR data. Owing to its full set of functions, its compatibility with different input data formats, and its clear, easy-to-understand output of statistics and calculation results, "Easy STR Genetics" can be disseminated within the STR population genetics research community both domestically and internationally.
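The record's tool is written in VBA for Excel; as a language-neutral sketch of the kind of per-locus statistic such software computes (allele frequencies and expected heterozygosity; the genotype list is hypothetical):

```python
from collections import Counter

def allele_stats(genotypes):
    """Allele frequencies and expected heterozygosity (Nei's gene
    diversity) for one STR locus, from diploid genotypes given as
    (allele1, allele2) tuples."""
    alleles = [a for pair in genotypes for a in pair]
    n = len(alleles)
    freqs = {a: c / n for a, c in Counter(alleles).items()}
    het_exp = 1.0 - sum(p * p for p in freqs.values())
    return freqs, het_exp

# Three hypothetical genotypes at one STR locus (repeat numbers 8 and 9).
print(allele_stats([(8, 9), (8, 8), (9, 9)]))
```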

  6. Statistical analysis of extreme values from insurance, finance, hydrology and other fields

    CERN Document Server

    Reiss, Rolf-Dieter

    1997-01-01

    The statistical analysis of extreme data is important for various disciplines, including hydrology, insurance, finance, engineering and environmental sciences. This book provides a self-contained introduction to the parametric modeling, exploratory analysis and statistical inference for extreme values. The entire text of this third edition has been thoroughly updated and rearranged to meet the new requirements. Additional sections and chapters, elaborated on more than 100 pages, are particularly concerned with topics like dependencies, the conditional analysis and the multivariate modeling of extreme data. Parts I–III about the basic extreme value methodology remain largely unchanged, yet notable are, e.g., the new sections about "An Overview of Reduced-Bias Estimation" (co-authored by M.I. Gomes), "The Spectral Decomposition Methodology", and "About Tail Independence" (co-authored by M. Frick), and the new chapter about "Extreme Value Statistics of Dependent Random Variables" (co-authored ...
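A workhorse of the tail analysis covered by books like this one is the Hill estimator of the extreme value index. A minimal sketch, assuming a Pareto-type tail (this is a generic textbook estimator, not a method specific to this volume):

```python
import numpy as np

def hill_estimator(data, k):
    """Hill estimator of the extreme value index gamma = 1/alpha,
    computed from the k largest order statistics of the sample."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]
    return float(np.mean(np.log(x[:k]) - np.log(x[k])))

# Simulated Pareto sample with tail index alpha = 2, so gamma = 0.5.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, 100000) + 1.0
print(hill_estimator(sample, 2000))
```

In practice the estimate is plotted against k (a "Hill plot") and read off over a stable range, since the choice of k trades bias against variance.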

  7. Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.

    Science.gov (United States)

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CP), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were written to produce the graphics automatically. The results indicate that the proposed method is advantageous because it offers a more complete analysis of the positional accuracy, covering the angular error components, the uniformity of the vector distribution and error isotropy, in addition to the modular error component analyzed by linear statistics.
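The decomposition described above (one module and two angles per error vector) can be sketched as follows. This is a generic illustration in Python rather than the authors' R packages; the function name and the summary statistic chosen (the mean resultant length, a standard spherical-statistics measure of directional concentration) are this sketch's assumptions:

```python
import numpy as np

def spherical_error_stats(errors):
    """Decompose 3D error vectors into a module and two angles, and
    compute the mean resultant length R of the unit directions
    (R near 1: directions concentrated; R near 0: close to uniform)."""
    e = np.asarray(errors, dtype=float)
    modules = np.linalg.norm(e, axis=1)
    colat = np.degrees(np.arccos(e[:, 2] / modules))   # angle from +Z
    azim = np.degrees(np.arctan2(e[:, 1], e[:, 0]))    # angle in XY plane
    R = np.linalg.norm((e / modules[:, None]).mean(axis=0))
    return modules, colat, azim, R

# Five identical error vectors point straight up: R should be 1.
print(spherical_error_stats([[0.0, 0.0, 1.0]] * 5)[3])
```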

  8. JULIDE: a software tool for 3D reconstruction and statistical analysis of autoradiographic mouse brain sections.

    Directory of Open Access Journals (Sweden)

    Delphine Ribes

    In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool.

  9. The null hypothesis of GSEA, and a novel statistical model for competitive gene set analysis

    DEFF Research Database (Denmark)

    Debrabant, Birgit

    2017-01-01

    MOTIVATION: Competitive gene set analysis intends to assess whether a specific set of genes is more associated with a trait than the remaining genes. However, the statistical models assumed to date to underlie these methods do not enable a clear-cut formulation of the competitive null hypothesis. This is a major handicap to the interpretation of results obtained from a gene set analysis. RESULTS: This work presents a hierarchical statistical model based on the notion of dependence measures, which overcomes this problem. The two levels of the model naturally reflect the modular structure of many gene set ...
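For orientation, the standard permutation formulation of a competitive test — the baseline this record's hierarchical model is arguing against — can be sketched as follows (a generic illustration, not the paper's model; scores and set membership are hypothetical):

```python
import numpy as np

def competitive_p_value(scores, set_size, n_perm=5000, seed=1):
    """Permutation p-value for a competitive gene set test: are the mean
    association scores of the first `set_size` genes larger than those of
    the remaining genes, relative to random gene sets of the same size?"""
    scores = np.asarray(scores, dtype=float)
    obs = scores[:set_size].mean() - scores[set_size:].mean()
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(scores)
        if perm[:set_size].mean() - perm[set_size:].mean() >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid p = 0

# A 3-gene set with clearly elevated scores against 7 background genes.
print(competitive_p_value([5., 5., 5., 0., 0., 0., 0., 0., 0., 0.], 3))
```

Note that permuting gene labels like this ignores inter-gene correlation, which is precisely the kind of issue a more careful null model must address.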

  10. A method for statistical steady state thermal analysis of reactor cores

    International Nuclear Information System (INIS)

    Whetton, P.A.

    1981-01-01

    In a previous publication the author presented a method for undertaking statistical steady-state thermal analyses of reactor cores. The present paper extends the technique to an assessment of confidence limits for the resulting probability functions, which define the probability that a given thermal response value will be exceeded in a reactor core. Establishing such confidence limits is considered an integral part of any statistical thermal analysis and essential if such analyses are to be considered in any regulatory process. In certain applications the use of a best-estimate probability function may be justifiable, but it is recognised that a demonstrably conservative probability function is required for any regulatory considerations. (orig.)
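To illustrate the general idea of attaching confidence limits to an exceedance probability (this is a simple Monte Carlo sketch with a normal-approximation interval, not the paper's method; the sample values are hypothetical):

```python
import math

def exceedance_with_ci(samples, threshold, z=1.96):
    """Monte Carlo estimate of P(response > threshold) together with a
    normal-approximation 95% confidence interval on the estimate."""
    n = len(samples)
    p = sum(1 for t in samples if t > threshold) / n
    se = math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# 100 hypothetical thermal-response samples; estimate P(T > 89).
print(exceedance_with_ci(list(range(100)), 89))
```

For regulatory use one would quote a conservative (upper) bound rather than the best estimate, which is the distinction the abstract draws.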

  11. Statistical analysis of 2D patterns and its application to astrometry

    Science.gov (United States)

    Zavada, Petr; Píška, Karel

    2017-12-01

    A general statistical procedure for the analysis of finite 2D patterns, inspired by the analysis of heavy-ion data, is developed. The method is verified in a study of publicly available data obtained by the Gaia-ESA mission. We show that the procedure is sensitive to the limits of measurement accuracy, but that it can also clearly identify real physical effects against a large background of random distributions. As an example, the method confirms the presence of binary and ternary star systems in the studied data. The possibility of statistical detection of the gravitational microlensing effect is also discussed.
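One simple way to spot physical pairs (such as binaries) in a finite 2D pattern is to compare nearest-neighbour distances against those of a random pattern; an excess of very small separations signals real clustering. A minimal sketch of the distance computation (a generic illustration, not the authors' procedure; the coordinates are hypothetical):

```python
import numpy as np

def nearest_neighbour_distances(points):
    """Nearest-neighbour distance of every point in a finite 2D pattern."""
    p = np.asarray(points, dtype=float)
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # exclude each point's distance to itself
    return d.min(axis=1)

# A close pair plus one isolated point: the pair has small NN distances.
print(nearest_neighbour_distances([[0, 0], [0, 1], [5, 5]]))
```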

  12. Parametric analysis of the statistical model of the stick-slip process

    Science.gov (United States)

    Lima, Roberta; Sampaio, Rubens

    2017-06-01

    This paper performs a parametric analysis of the statistical model of the response of a dry-friction oscillator. The oscillator is a spring-mass system which moves over a base with a rough surface. Due to this roughness, the mass is subject to a dry-friction force modeled as Coulomb friction. The system is stochastically excited by an imposed bang-bang base motion. The base velocity is modeled by a Poisson process for which a probabilistic model is fully specified. The excitation induces stochastic stick-slip oscillations in the system. The system response is composed of a random sequence alternating stick and slip modes. From realizations of the system, a statistical model is constructed for this sequence. In this statistical model, the variables of interest of the sequence are modeled as random variables: for example, the number of time intervals in which stick or slip occurs, the instants at which they begin, and their durations. Samples of the system response are computed by integration of the dynamic equation of the system using independent samples of the base motion. Statistics and histograms of the random variables which characterize the stick-slip process are estimated from the generated samples. The objective of the paper is to analyze how these estimated statistics and histograms vary with the system parameters, i.e., to perform a parametric analysis of the statistical model of the stick-slip process.
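The excitation described above — a bang-bang base velocity whose sign flips at the jumps of a Poisson process — can be sampled as follows (a minimal sketch; the function name, rate and horizon are illustrative assumptions, not the paper's parameter values):

```python
import numpy as np

def bang_bang_switch_times(rate, t_end, seed=0):
    """Sample the switching instants of a bang-bang base velocity whose
    holding times between sign flips are exponential, i.e. the switches
    form a Poisson process with the given rate."""
    rng = np.random.default_rng(seed)
    times = [0.0]
    while times[-1] < t_end:
        times.append(times[-1] + rng.exponential(1.0 / rate))
    return np.array(times[:-1])  # drop the sample past t_end

# Mean holding time should approach 1/rate = 0.2 over a long horizon.
t = bang_bang_switch_times(5.0, 2000.0)
print(np.diff(t).mean())
```

Each sampled switching sequence yields one realization of the base motion; integrating the oscillator over many such realizations gives the ensemble from which the stick-slip statistics are estimated.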

  13. Statistical Learning in Specific Language Impairment and Autism Spectrum Disorder: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Rita Obeid

    2016-08-01

    Impairments in statistical learning might be a common deficit among individuals with Specific Language Impairment (SLI) and Autism Spectrum Disorder (ASD). Using meta-analysis, we examined statistical learning in SLI (14 studies, 15 comparisons) and ASD (13 studies, 20 comparisons) to evaluate this hypothesis. Effect sizes were examined as a function of diagnosis across multiple statistical learning tasks (Serial Reaction Time, Contextual Cueing, Artificial Grammar Learning, Speech Stream, Observational Learning, Probabilistic Classification). Individuals with SLI showed deficits in statistical learning relative to age-matched controls, g = .47, 95% CI [.28, .66], p < .001. In contrast, statistical learning was intact in individuals with ASD relative to controls, g = -.13, 95% CI [-.34, .08], p = .22. Effect sizes did not vary as a function of task modality or participant age. Our findings inform debates about overlapping social-communicative difficulties in children with SLI and ASD by suggesting distinct underlying mechanisms. In line with the procedural deficit hypothesis (Ullman & Pierpont, 2005), impaired statistical learning may account for phonological and syntactic difficulties associated with SLI. In contrast, impaired statistical learning fails to account for the social-pragmatic difficulties associated with ASD.
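Pooled effect sizes with confidence intervals like those above are produced by inverse-variance weighting of per-study effects. A minimal sketch of the simplest (fixed-effect) case — meta-analyses of heterogeneous studies typically use a random-effects extension, and the inputs below are illustrative, not this meta-analysis' data:

```python
import math

def fixed_effect_pool(effects, variances, z=1.96):
    """Inverse-variance (fixed-effect) pooled effect size with 95% CI.
    Each study contributes weight 1/variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - z * se, pooled + z * se

# Three hypothetical studies with equal precision pool to their mean.
print(fixed_effect_pool([0.5, 0.4, 0.6], [0.01, 0.01, 0.01]))
```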

  14. Application of Statistical Tools for Data Analysis and Interpretation in Rice Plant Pathology

    Directory of Open Access Journals (Sweden)

    Parsuram Nayak

    2018-01-01

    There has been a significant advancement in the application of statistical tools in plant pathology during the past four decades. These tools include multivariate analysis of disease dynamics involving principal component analysis, cluster analysis, factor analysis, pattern analysis, discriminant analysis, multivariate analysis of variance, correspondence analysis, canonical correlation analysis, redundancy analysis, genetic diversity analysis, and stability analysis, which involves joint regression, additive main effects and multiplicative interactions, and genotype-by-environment interaction biplot analysis. Advanced statistical tools, such as non-parametric analysis of disease association, meta-analysis, Bayesian analysis, and decision theory, occupy an important place in the analysis of disease dynamics. Disease forecasting by simulation models of plant diseases has great potential for practical disease control strategies. Common mathematical tools such as monomolecular, exponential, logistic, Gompertz and linked differential equations are central to growth curve analysis of disease epidemics. The highly informative means of displaying a range of numerical data through the construction of box-and-whisker plots is also suggested. Probable applications of recently advanced linear and non-linear mixed models, namely the linear mixed model, the generalized linear model, and generalized linear mixed models, are presented. The most recent technologies, such as microarray analysis, though cost-effective, provide estimates of gene expression for thousands of genes simultaneously and deserve attention from molecular biologists. Some of these advanced tools can be applied in different branches of rice research, including crop improvement, crop production, crop protection, the social sciences, and agricultural engineering.
The rice research scientists should take advantage of these new opportunities adequately in
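Two of the growth-curve models named in the record above, the logistic and Gompertz disease-progress curves, have simple closed forms (standard epidemiological models; the parameter values in the example are illustrative):

```python
import math

def logistic_curve(t, y0, r):
    """Logistic disease-progress curve y(t), solving dy/dt = r*y*(1 - y)
    with y(0) = y0, where y is the diseased proportion (0 < y < 1)."""
    return 1.0 / (1.0 + (1.0 / y0 - 1.0) * math.exp(-r * t))

def gompertz_curve(t, y0, r):
    """Gompertz disease-progress curve y(t), solving dy/dt = -r*y*ln(y)
    with y(0) = y0; it rises earlier and more asymmetrically."""
    return math.exp(math.log(y0) * math.exp(-r * t))

# Both curves start at y0 = 1% disease and saturate towards 100%.
print(logistic_curve(30, 0.01, 0.2), gompertz_curve(30, 0.01, 0.2))
```

Fitting both forms to observed disease proportions over time and comparing the fits is a common way to characterize an epidemic's dynamics.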

  15. Inadequate drug prescribing: comparison of inappropriate drug rates at the end of a geriatric short-stay service with three prescribing tools.

    Science.gov (United States)

    Fanon, Jean-Luc; Dechavigny, Sandra; Dramé, Moustapha; Godaert, Lidvine

    2017-12-01

    To compare the proportion of prescriptions containing at least one inappropriate drug, as identified using three tools for optimizing drug prescriptions in the elderly. Cross-sectional, observational study based on the analysis of prescriptions of patients discharged between 1 September and 31 October 2014 from a short-stay geriatrics unit at the Louis Domergue de Trinité Hospital in Martinique (France). Each prescription was analysed using three tools, namely one for general medicine (the Vidal© drug dictionary) and two designed specifically for geriatrics (the Laroche list of potentially inappropriate medications, and the STOPP-START toolkit). The number of prescriptions containing at least one inappropriate medication was recorded for each tool. These prescriptions were then compared to investigate whether the two geriatric tools identified the same prescriptions as inappropriate. In total, 53 prescriptions were analysed. The male-female sex ratio was 0.70. The average age of the patients was 84.5±6.2 years. Analysis according to the Vidal© drug dictionary identified the greatest number of inappropriate prescriptions (28.3% of all prescriptions). The proportion of prescriptions containing at least one inappropriate drug was lower with the two geriatric-specific tools (11% for the Laroche list and 7.5% for the STOPP-START method). The general-medicine Vidal© drug dictionary identified more inappropriate prescriptions than the tools specifically designed for geriatrics, and the different prescribing tools identified different drugs as inappropriate.
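The screening step underlying such a comparison — checking each prescription against a list of potentially inappropriate medications (PIMs) and counting prescriptions with at least one hit — can be sketched as follows. The drug names and list contents below are entirely hypothetical, not entries from the Laroche list or STOPP-START:

```python
def flag_pim(prescription, pim_list):
    """Return the drugs in one prescription that appear on a
    potentially-inappropriate-medication (PIM) list."""
    pim = {d.strip().lower() for d in pim_list}
    return [drug for drug in prescription if drug.strip().lower() in pim]

def share_with_pim(prescriptions, pim_list):
    """Proportion of prescriptions containing at least one PIM."""
    return sum(1 for p in prescriptions if flag_pim(p, pim_list)) / len(prescriptions)

# Hypothetical example: one of two prescriptions contains a listed drug.
print(share_with_pim([["Amlodipine"], ["Diazepam"]], ["diazepam"]))
```

Running the same prescriptions against different lists, as the study does with its three tools, will generally flag different subsets, which is the study's central observation.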

  16. General specifications for the development of a USL NASA PC R and D statistical analysis support package

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Bassari, Jinous; Triantafyllopoulos, Spiros

    1984-01-01

    The University of Southwestern Louisiana (USL) NASA PC R and D statistical analysis support package is designed as a three-level package to allow statistical analysis for a variety of applications within the USL Data Base Management System (DBMS) contract work. The design addresses usage of the statistical facilities as a library package, as an interactive statistical analysis system, and as a batch processing package.

  17. Development of statistical analysis code for meteorological data (W-View)

    Energy Technology Data Exchange (ETDEWEB)

    Tachibana, Haruo; Sekita, Tsutomu; Yamaguchi, Takenori [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2003-03-01

    A computer code (W-View: Weather View) was developed to statistically analyze meteorological data based on 'the guideline of meteorological statistics for the safety analysis of nuclear power reactors' (Nuclear Safety Commission, January 28, 1982; revised March 29, 2001). The code produces the statistical meteorological data needed to assess the public dose under normal operation and in severe accidents, as required for a nuclear reactor operating license. The code was reworked from the original version, which ran on a large office computer, so that a personal computer user can analyze meteorological data simply and conveniently and produce statistical tables and figures of meteorology. (author)

  18. Orthopedic research: an overview of data entry, database management, and statistical analysis.

    Science.gov (United States)

    Kassing, D R; Ritter, M A; Faris, P M; Keating, E M; Nyhuis, A W

    1989-12-01

    An orthopedic practitioner can facilitate clinical research and analyze quality assurance data with a minor investment in a personal computer, an optical scanner, and two software packages, namely a database manager and a statistics program. Among the most time-consuming stages of the research process are entering patient chart data, editing and manipulating the data (database management), and analyzing the data (statistical analysis); these can be automated to a large extent with the above-mentioned equipment. This article focuses on the steps involved in organizing an orthopedic office for research: choosing a method of data entry, choosing and implementing a database package, and choosing and implementing a statistics package. This discussion is followed by a practical review of basic statistics applicable to orthopedic research. Several simple and advanced tests are described, and examples are given for each.

  19. Statistical strategies to reveal potential vibrational markers for in vivo analysis by confocal Raman spectroscopy

    Science.gov (United States)

    Oliveira Mendes, Thiago de; Pinto, Liliane Pereira; Santos, Laurita dos; Tippavajhala, Vamshi Krishna; Téllez Soto, Claudio Alberto; Martin, Airton Abrahão

    2016-07-01

    The analysis of biological systems by spectroscopic techniques involves the evaluation of hundreds to thousands of variables. Hence, different statistical approaches are used to elucidate regions that discriminate classes of samples and to propose new vibrational markers for explaining various phenomena, such as disease monitoring and the mechanisms of action of drugs or foods. However, these statistical techniques are not always discussed in detail in the applied sciences. In this context, this work presents a detailed discussion of the steps necessary for proper statistical analysis, including univariate parametric and nonparametric tests as well as multivariate unsupervised and supervised approaches. The main objective of this study is to promote proper understanding of the application of various statistical tools in spectroscopic methods used for the analysis of biological samples. The discussion of these methods is based on a set of in vivo confocal Raman spectra of human skin, analysed with the aim of identifying skin aging markers. In the Appendix, a complete data analysis routine is executed in free software that can be used by the scientific community involved in these studies.
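The multivariate unsupervised step mentioned above is typically principal component analysis of the spectra matrix. A minimal sketch via SVD of the mean-centred data (a generic illustration, not this paper's routine; the random matrix stands in for real spectra):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Scores on the leading principal components of a spectra matrix
    (rows: samples, columns: wavenumbers), via SVD of centred data."""
    Xc = X - X.mean(axis=0)          # centre each wavenumber channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # project onto top components

# 20 hypothetical "spectra" with 50 channels each, projected to 2 PCs.
rng = np.random.default_rng(0)
print(pca_scores(rng.normal(size=(20, 50)), 2).shape)
```

Plotting the first two score columns against each other is the usual first look at whether sample classes (e.g., age groups) separate.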

  20. Practical recommendations for statistical analysis and data presentation in Biochemia Medica journal.

    Science.gov (United States)

    Simundic, Ana-Maria

    2012-01-01

    The aim of this article is to highlight practical recommendations based on our experience as reviewers and journal editors, and to address some of the most common mistakes in manuscripts submitted to Biochemia Medica. One of the most important parts of the article is the Abstract. Authors quite often forget that the Abstract is sometimes the first (and only) part of the article read by the readers. The Abstract must therefore be comprehensive and provide the key results of the work. Another problematic part of the article, also often neglected by authors, is the Statistical analysis subheading within Materials and methods, where authors must explain which statistical tests were used in their data analysis and the rationale for using those tests. They also need to make sure that all tests used are listed under the Statistical analysis section, and that all tests listed are indeed used in the study. When writing the Results section there are several key points to keep in mind: are results presented with adequate precision and accuracy; is the descriptive analysis appropriate; is a measure of confidence provided for all estimates; where applicable, are correct statistical tests used for the analysis; is the P value provided for all tests; and so on. It is especially important not to draw conclusions about causal relationships unless the study is an experiment or clinical trial. We believe that the use of the proposed checklist might increase the quality of submitted work and speed up the peer-review and publication process.