WorldWideScience

Sample records for detailed statistical analysis

  1. Detailed Analysis of the Interoccurrence Time Statistics in Seismic Activity

    Science.gov (United States)

    Tanaka, Hiroki; Aizawa, Yoji

    2017-02-01

    The interoccurrence time statistics of seismicity is studied theoretically as well as numerically by taking into account the conditional probability and the correlations among many earthquakes in different magnitude levels. It is known so far that the interoccurrence time statistics is well approximated by the Weibull distribution, but more detailed information about the interoccurrence times can be obtained from the analysis of the conditional probability. Firstly, we propose the Embedding Equation Theory (EET), where the conditional probability is described by two kinds of correlation coefficients: one is the magnitude correlation and the other is the inter-event time correlation. Furthermore, the scaling law of each correlation coefficient is clearly determined from the numerical data analysis carried out with the Preliminary Determination of Epicenter (PDE) Catalog and the Japan Meteorological Agency (JMA) Catalog. Secondly, the EET is examined to derive the magnitude dependence of the interoccurrence time statistics, and the multi-fractal relation is successfully formulated. Theoretically we cannot prove the universality of the multi-fractal relation in seismic activity; nevertheless, the theoretical results reproduce all numerical data in our analysis well, and several common features or invariant aspects are clearly observed. In particular, for stationary ensembles the multi-fractal relation seems to obey an invariant curve, and for non-stationary (moving-time) ensembles in the aftershock regime it seems to satisfy a certain invariant curve at any moving time. It is emphasized that the multi-fractal relation plays an important role in unifying the statistical laws of seismicity: the Gutenberg-Richter law and the Weibull distribution are in fact unified in the multi-fractal relation. Some universality conjectures regarding seismicity are briefly discussed.
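
    As the abstract notes, interoccurrence times are often well approximated by a Weibull distribution. A minimal Python sketch of that baseline fit, using synthetic data in place of a real catalog (the shape and scale values below are illustrative assumptions, not results from the paper):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic interoccurrence times (days); a real analysis would take
        # successive differences of event times from the PDE or JMA catalog.
        times = stats.weibull_min.rvs(c=0.9, scale=20.0, size=5000, random_state=rng)

        # Fit a two-parameter Weibull (location fixed at 0, as usual for waiting times).
        shape, loc, scale = stats.weibull_min.fit(times, floc=0)
        print(f"shape k = {shape:.3f}, scale = {scale:.3f}")

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted distribution.
        ks = stats.kstest(times, "weibull_min", args=(shape, loc, scale))
        print(f"KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")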

  2. Detailed statistical analysis plan for the pulmonary protection trial

    DEFF Research Database (Denmark)

    Buggeskov, Katrine B; Jakobsen, Janus C; Secher, Niels H

    2014-01-01

    BACKGROUND: Pulmonary dysfunction complicates cardiac surgery that includes cardiopulmonary bypass. The pulmonary protection trial evaluates the effect of pulmonary perfusion on pulmonary function in patients suffering from chronic obstructive pulmonary disease. This paper presents the statistical plan...

  3. Detailed statistical analysis plan for the difficult airway management (DIFFICAIR) trial

    DEFF Research Database (Denmark)

    Nørskov, Anders Kehlet; Lundstrøm, Lars Hyldborg; Rosenstock, Charlotte Vallentin

    2014-01-01

    ... on the frequency of unanticipated difficult airway management. To prevent outcome bias and selective reporting, we hereby present a detailed statistical analysis plan as an amendment (update) to the previously published protocol for the DIFFICAIR trial. METHOD/DESIGN: The DIFFICAIR trial is a stratified, parallel ... trial by an a priori publication of a statistical analysis plan. TRIAL REGISTRATION: ClinicalTrials.gov: NCT01718561.

  4. Statistical Analysis of Detailed 3-D CFD LES Simulations with Regard to CCV Modeling

    Directory of Open Access Journals (Sweden)

    Vítek Oldřich

    2016-06-01

    The paper deals with the statistical analysis of a large amount of detailed 3-D CFD data in terms of cycle-to-cycle variations (CCVs). These data were obtained by means of LES calculations of many consecutive cycles. Due to the non-linear nature of the Navier-Stokes equation set, there is relatively significant CCV. Hence, every cycle is slightly different; this leads to the requirement to perform statistical analysis based on an ensemble-averaging procedure, which enables a better understanding of CCV in internal combustion engines (ICE), including its quantification. The data obtained from the averaging procedure provide results on different space resolution levels. The procedure is applied locally, i.e., in every cell of the mesh, so detailed CCV information is available on the local level; such information can be compared with RANS simulations. Next, volume/mass averaging provides information at specific locations, e.g., the gap between the electrodes of a spark plug. Finally, volume/mass averaging of the whole combustion chamber leads to global information which can be compared with experimental data or results of system simulation tools (which are based on a 0-D/1-D approach).
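
    A minimal sketch of the per-cell ensemble-averaging step described above, assuming the LES results have been exported as one array per cycle (the array shapes and variable names are illustrative):

        import numpy as np

        # Hypothetical stack of LES snapshots: n_cycles consecutive engine cycles,
        # each resolved on the same mesh of n_cells cells (here: a scalar field,
        # e.g. velocity magnitude at a fixed crank angle).
        n_cycles, n_cells = 40, 100_000
        rng = np.random.default_rng(1)
        cycles = rng.normal(loc=10.0, scale=2.0, size=(n_cycles, n_cells))

        # Ensemble statistics, computed cell by cell across cycles.
        ens_mean = cycles.mean(axis=0)            # comparable to a RANS field
        ens_std = cycles.std(axis=0, ddof=1)      # local measure of CCV
        ccv = ens_std / ens_mean                  # coefficient of variation per cell

        # Global (volume-averaged) quantities; uniform cell volumes assumed here.
        print(f"global mean = {ens_mean.mean():.3f}, mean local CCV = {ccv.mean():.3%}")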

  5. Detailed statistical analysis plan for the target temperature management after out-of-hospital cardiac arrest trial

    DEFF Research Database (Denmark)

    Nielsen, Niklas; Winkel, Per; Cronberg, Tobias

    2013-01-01

    ... and did not treat hyperthermia in the control groups. The optimal target temperature management (TTM) strategy is not known. To prevent outcome reporting bias, selective reporting and data-driven results, we present the a priori defined detailed statistical analysis plan as an update to the previously...

  6. The Active for Life Year 5 (AFLY5) school-based cluster randomised controlled trial protocol: detailed statistical analysis plan.

    Science.gov (United States)

    Lawlor, Debbie A; Peters, Tim J; Howe, Laura D; Noble, Sian M; Kipping, Ruth R; Jago, Russell

    2013-07-24

    The Active For Life Year 5 (AFLY5) randomised controlled trial protocol was published in this journal in 2011 with a summary analysis plan. This publication is an update of that protocol and provides a detailed plan for analysing the effectiveness and cost-effectiveness of the AFLY5 intervention. The plan includes details of how variables will be quality-control checked and the criteria used to define derived variables. Details of four key analyses are provided: (a) effectiveness analysis 1 (the effect of the AFLY5 intervention on primary and secondary outcomes at the end of the school year in which the intervention is delivered); (b) mediation analyses (secondary analyses examining the extent to which any effects of the intervention are mediated via self-efficacy, parental support and knowledge, through which the intervention is theoretically believed to act); (c) effectiveness analysis 2 (the effect of the AFLY5 intervention on primary and secondary outcomes 12 months after the end of the intervention); and (d) cost-effectiveness analysis (the cost-effectiveness of the AFLY5 intervention). The details include how the intention-to-treat and per-protocol analyses were defined, and the planned sensitivity analyses for dealing with missing data. A set of dummy tables is provided in Additional file 1. This detailed analysis plan was written before any analyst had access to any data and was approved by the AFLY5 Trial Steering Committee. Its publication will ensure that analyses are in accordance with an a priori plan related to the trial objectives and not driven by knowledge of the data. ISRCTN50133740.

  7. The Penicillin for the Emergency Department Outpatient treatment of CELLulitis (PEDOCELL) trial: update to the study protocol and detailed statistical analysis plan (SAP).

    Science.gov (United States)

    Boland, Fiona; Quirke, Michael; Gannon, Brenda; Plunkett, Sinead; Hayden, John; McCourt, John; O'Sullivan, Ronan; Eustace, Joseph; Deasy, Conor; Wakai, Abel

    2017-08-24

    Cellulitis is a painful, potentially serious, infectious process of the dermal and subdermal tissues and represents a significant disease burden. The statistical analysis plan (SAP) for the Penicillin for the Emergency Department Outpatient treatment of CELLulitis (PEDOCELL) trial is described here. The PEDOCELL trial is a multicentre, randomised, parallel-arm, double-blinded, non-inferiority clinical trial comparing the efficacy of flucloxacillin (monotherapy) with combination flucloxacillin/phenoxymethylpenicillin (dual therapy) for the outpatient treatment of cellulitis in the emergency department (ED) setting. To prevent outcome reporting bias, selective reporting and data-driven results, the a priori-defined, detailed SAP is presented here. Patients will be randomised to either orally administered flucloxacillin 500 mg four times daily and placebo or orally administered 500 mg of flucloxacillin four times daily and phenoxymethylpenicillin 500 mg four times daily. The trial consists of a 7-day intervention period and a 2-week follow-up period. Study measurements will be taken at four specific time points: at patient enrolment, day 2-3 after enrolment and commencing treatment (early clinical response (ECR) visit), day 8-10 after enrolment (end-of-treatment (EOT) visit) and day 14-21 after enrolment (test-of-cure (TOC) visit). The primary outcome measure is investigator-determined clinical response measured at the TOC visit. The secondary outcomes are as follows: lesion size at ECR, clinical treatment failure at each follow-up visit, adherence and persistence of trial patients with orally administered antibiotic therapy at EOT, health-related quality of life (HRQoL) and pharmacoeconomic assessments. The plan for the presentation and comparison of baseline characteristics and outcomes is described in this paper. This trial aims to establish the non-inferiority of orally administered flucloxacillin monotherapy with orally administered flucloxacillin...

  8. Mathematical and statistical analysis

    Science.gov (United States)

    Houston, A. Glen

    1988-01-01

    The goal of the mathematical and statistical analysis component of RICIS is to research, develop, and evaluate mathematical and statistical techniques for aerospace technology applications. Specific research areas of interest include modeling, simulation, experiment design, reliability assessment, and numerical analysis.

  9. Deconstructing Statistical Analysis

    Science.gov (United States)

    Snell, Joel

    2014-01-01

    Using a very complex statistical analysis and research method for the sake of enhancing the prestige of an article, or of making a new product or service look legitimate, needs to be monitored and questioned for accuracy. (1) The more complicated the statistical analysis and research, the fewer learned readers can understand it. This adds a...

  10. Detailed Analysis of Motor Unit Activity

    DEFF Research Database (Denmark)

    Nikolic, Mile; Sørensen, John Aasted; Dahl, Kristian

    1997-01-01

    A system for decomposition of EMG signals into their constituent motor unit potentials and their firing patterns. The aim of the system is detailed analysis of motor unit variability.

  12. Detailed noise statistics for an optically preamplified direct detection receiver

    DEFF Research Database (Denmark)

    Danielsen, Søren Lykke; Mikkelsen, Benny; Durhuus, Terji

    1995-01-01

    We describe the exact statistics of an optically preamplified direct detection receiver by means of the moment generating function. The theory allows an arbitrarily shaped electrical filter in the receiver circuit. The moment generating function (MGF) allows for a precise calculation of the error...

  13. Beginning statistics with data analysis

    CERN Document Server

    Mosteller, Frederick; Rourke, Robert EK

    2013-01-01

    This introduction to the world of statistics covers exploratory data analysis, methods for collecting data, formal statistical inference, and techniques of regression and analysis of variance. 1983 edition.

  14. Associative Analysis in Statistics

    Directory of Open Access Journals (Sweden)

    Mihaela Muntean

    2015-03-01

    In recent years, interest in technologies such as in-memory analytics and associative search has increased. This paper explores how in-memory analytics and an associative model can be used in statistics. The word “associative” puts the emphasis on understanding how datasets relate to one another. The paper presents the main characteristics of the associative data model, shows how to design an associative model for labor market indicator analysis, with the EU Labour Force Survey as the source, and demonstrates how to carry out associative analysis.

  15. Per Object statistical analysis

    DEFF Research Database (Denmark)

    2008-01-01

    This RS code is to do Object-by-Object analysis of each Object's sub-objects, e.g. statistical analysis of an object's individual image data pixels. Statistics, such as percentiles (so-called "quartiles"), are derived by the process, but the return of that can only be a Scene Variable, not an Object Variable. This procedure was developed in order to be able to export objects as ESRI shape data with the 90-percentile of the Hue of each object's pixels as an item in the shape attribute table. It uses a sub-level single-pixel chessboard segmentation, loops over each of the objects of a specific class in turn, and uses a pair of PPO stages to derive the statistics and then assign them to the objects' Object Variables. It may be that this could all be done in some other, simpler way, but several other ways that were tried did not succeed. The procedure output has been tested against...
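
    A minimal NumPy sketch of the same per-object statistic (90th percentile of hue per labelled object), independent of the RS/eCognition workflow the record describes; the label image and hue band here are synthetic stand-ins:

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic segmentation: label image with object ids 1..5 (0 = background),
        # plus a hue band of the same shape.
        labels = rng.integers(0, 6, size=(200, 200))
        hue = rng.uniform(0.0, 360.0, size=labels.shape)

        # 90th percentile of hue within each object, ready to be written to an
        # attribute table (e.g. one row per object id in an ESRI shapefile).
        for obj_id in np.unique(labels[labels > 0]):
            p90 = np.percentile(hue[labels == obj_id], 90)
            print(f"object {obj_id}: hue p90 = {p90:.1f}")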

  16. Multistructure Statistical Model Applied To Factor Analysis

    Science.gov (United States)

    Bentler, Peter M.

    1976-01-01

    A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)

  17. Applied multivariate statistical analysis

    CERN Document Server

    Härdle, Wolfgang Karl

    2015-01-01

    Focusing on high-dimensional applications, this 4th edition presents the tools and concepts used in multivariate data analysis in a style that is also accessible for non-mathematicians and practitioners.  It surveys the basic principles and emphasizes both exploratory and inferential statistics; a new chapter on Variable Selection (Lasso, SCAD and Elastic Net) has also been added.  All chapters include practical exercises that highlight applications in different multivariate data analysis fields: in quantitative financial studies, where the joint dynamics of assets are observed; in medicine, where recorded observations of subjects in different locations form the basis for reliable diagnoses and medication; and in quantitative marketing, where consumers’ preferences are collected in order to construct models of consumer behavior.  All of these examples involve high to ultra-high dimensions and represent a number of major fields in big data analysis. The fourth edition of this book on Applied Multivariate ...

  18. Material analysis on engineering statistics

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Hun

    2008-03-15

    This book is about material analysis in engineering statistics using Minitab. It covers technical statistics and the seven QC tools, probability distributions, estimation and hypothesis testing, regression analysis, time series analysis, control charts, process capability analysis, measurement system analysis, sampling inspection, design of experiments, response surface analysis, compound experiments, the Taguchi method, and non-parametric statistics. It is suitable for universities and companies because it presents the theory first, followed by Minitab analysis, for Six Sigma BB and MBB.

  19. Detailed statistical contact angle analyses; "slow moving" drops on inclining silicon-oxide surfaces.

    Science.gov (United States)

    Schmitt, M; Groß, K; Grub, J; Heib, F

    2015-06-01

    Contact angle determination by the sessile drop technique is essential to characterise surface properties in science and in industry. Different specific angles can be observed on every solid, correlated with the advancing or the receding of the triple line. Different procedures and definitions for the determination of specific angles exist, which are often not comprehensible or reproducible. Therefore, one of the most important tasks in this area is to establish standard, reproducible and valid methods for determining advancing/receding contact angles. This contribution introduces novel techniques to analyse dynamic contact angle measurements (sessile drop) in detail which are applicable to axisymmetric and non-axisymmetric drops. Not only the recently presented fit solution by sigmoid function and the independent analysis of the different parameters (inclination, contact angle, velocity of the triple point), but also the dependent analysis is explained in detail for the first time. These approaches lead to contact angle data, and to different ways of accessing specific contact angles, that are independent of the skills and subjectivity of the operator. As an example, the motion behaviour of droplets on flat silicon-oxide surfaces after different surface treatments is measured dynamically by the sessile drop technique while inclining the sample plate. The triple points, the inclination angles, the downhill (advancing motion) and the uphill angles (receding motion) obtained by high-precision drop shape analysis are statistically analysed, both independently and dependently. Due to the small distance covered, the dependent analysis connects the static to the "slow moving" dynamic contact angle determination; these angles are characterised by small deviations of the computed values. In addition to the detailed introduction of these novel analytical approaches and the fit solution, special motion relations for drops on inclined surfaces, and detailed relations concerning the reactivity of the freshly cleaned silicon wafer, are...
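
    A minimal sketch of the kind of sigmoid fit the abstract refers to, here fitted to synthetic contact-angle-versus-inclination data with SciPy (the logistic form and all parameter values are illustrative assumptions, not the authors' exact fit function):

        import numpy as np
        from scipy.optimize import curve_fit

        def sigmoid(x, theta0, dtheta, x0, k):
            """Logistic transition from a static angle theta0 to theta0 + dtheta."""
            return theta0 + dtheta / (1.0 + np.exp(-k * (x - x0)))

        rng = np.random.default_rng(3)
        incl = np.linspace(0.0, 30.0, 120)              # plate inclination, degrees
        true = sigmoid(incl, 60.0, 15.0, 12.0, 0.6)     # "downhill" contact angle
        theta = true + rng.normal(0.0, 0.4, incl.size)  # measurement noise

        popt, pcov = curve_fit(sigmoid, incl, theta, p0=(55.0, 10.0, 10.0, 1.0))
        perr = np.sqrt(np.diag(pcov))
        print("theta0, dtheta, x0, k =", np.round(popt, 2))
        print("1-sigma errors        =", np.round(perr, 3))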

  20. Parametric statistical change point analysis

    CERN Document Server

    Chen, Jie

    2000-01-01

    This work is an in-depth study of the change point problem from a general point of view and a further examination of change point analysis of the most commonly used statistical models. Change point problems are encountered in such disciplines as economics, finance, medicine, psychology, signal processing, and geology, to mention only several. The exposition is clear and systematic, with a great deal of introductory material included. Different models are presented in each chapter, including gamma and exponential models, rarely examined thus far in the literature. Other models covered in detail are the multivariate normal, univariate normal, regression, and discrete models. Extensive examples throughout the text emphasize key concepts, and different methodologies are used, namely the likelihood ratio criterion and the Bayesian and information criterion approaches. A comprehensive bibliography and two indices complete the study.

  1. Statistical Analysis and validation

    NARCIS (Netherlands)

    Hoefsloot, H.C.J.; Horvatovich, P.; Bischoff, R.

    2013-01-01

    In this chapter guidelines are given for the selection of a few biomarker candidates from a large number of compounds with a relatively low number of samples. The main concepts concerning the statistical validation of the search for biomarkers are discussed. These complicated methods and concepts are...

  2. Statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hahn, A.A.

    1994-11-01

    The complexity of instrumentation sometimes requires data analysis to be done before the result is presented to the control room. This tutorial reviews some of the theoretical assumptions underlying the more popular forms of data analysis and presents simple examples to illuminate the advantages and hazards of different techniques.

  3. Foundation of statistical energy analysis in vibroacoustics

    CERN Document Server

    Le Bot, A

    2015-01-01

    This title deals with the statistical theory of sound and vibration. The foundation of statistical energy analysis is presented in great detail. In the modal approach, an introduction to random vibration with application to complex systems having a large number of modes is provided. For the wave approach, the phenomena of propagation, group speed, and energy transport are extensively discussed. Particular emphasis is given to the emergence of diffuse field, the central concept of the theory.

  4. Statistical Analysis Plan

    DEFF Research Database (Denmark)

    Ris Hansen, Inge; Søgaard, Karen; Gram, Bibi

    2015-01-01

    This is the analysis plan for the multicentre randomised control study looking at the effect of training and exercises in chronic neck pain patients that is being conducted in Jutland and Funen, Denmark. This plan will be used as a work description for the analyses of the data collected....

  5. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  6. Research design and statistical analysis

    CERN Document Server

    Myers, Jerome L; Lorch Jr, Robert F

    2013-01-01

    Research Design and Statistical Analysis provides comprehensive coverage of the design principles and statistical concepts necessary to make sense of real data.  The book's goal is to provide a strong conceptual foundation to enable readers to generalize concepts to new research situations.  Emphasis is placed on the underlying logic and assumptions of the analysis and what it tells the researcher, the limitations of the analysis, and the consequences of violating assumptions.  Sampling, design efficiency, and statistical models are emphasized throughout. As per APA recommendations...

  7. Statistical Analysis of Tsunami Variability

    Science.gov (United States)

    Zolezzi, Francesca; Del Giudice, Tania; Traverso, Chiara; Valfrè, Giulio; Poggi, Pamela; Parker, Eric J.

    2010-05-01

    The purpose of this paper was to investigate statistical variability of seismically generated tsunami impact. The specific goal of the work was to evaluate the variability in tsunami wave run-up due to uncertainty in fault rupture parameters (source effects) and to the effects of local bathymetry at an individual location (site effects). This knowledge is critical to development of methodologies for probabilistic tsunami hazard assessment. Two types of variability were considered: inter-event and intra-event. Generally, inter-event variability refers to the differences of tsunami run-up at a given location for a number of different earthquake events. The focus of the current study was to evaluate the variability of tsunami run-up at a given point for a given magnitude earthquake. In this case, the variability is expected to arise from lack of knowledge regarding the specific details of the fault rupture "source" parameters. As sufficient field observations are not available to resolve this question, numerical modelling was used to generate run-up data. A scenario magnitude 8 earthquake in the Hellenic Arc was modelled. This is similar to the event thought to have caused the infamous 1303 tsunami. The tsunami wave run-up was computed at 4020 locations along the Egyptian coast between longitudes 28.7° E and 33.8° E. Specific source parameters (e.g. fault rupture length and displacement) were varied, and the effects on wave height were determined. A Monte Carlo approach considering the statistical distribution of the underlying parameters was used to evaluate the variability in wave height at locations along the coast. The results were evaluated in terms of the coefficient of variation of the simulated wave run-up (standard deviation divided by mean value) for each location. The coefficient of variation along the coast was between 0.14 and 3.11, with an average value of 0.67. The variation was higher in areas of irregular coast. This level of variability is...
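
    A minimal Monte Carlo sketch of the variability measure used above (coefficient of variation of run-up under uncertain source parameters); the scaling of run-up with slip and rupture length below is a toy stand-in for the paper's numerical tsunami model:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000

        # Uncertain fault-rupture ("source") parameters for a fixed-magnitude event.
        slip = rng.lognormal(mean=np.log(4.0), sigma=0.3, size=n)      # m
        length = rng.lognormal(mean=np.log(150.0), sigma=0.2, size=n)  # km

        # Toy run-up model standing in for the numerical simulation: run-up grows
        # with slip and weakly with rupture length (purely illustrative).
        runup = 0.5 * slip * (length / 150.0) ** 0.3

        cov = runup.std(ddof=1) / runup.mean()   # coefficient of variation
        print(f"mean run-up = {runup.mean():.2f} m, CoV = {cov:.2f}")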

  8. OPIC Greenhouse Gas Emissions Analysis Details

    Data.gov (United States)

    Overseas Private Investment Corporation — Summary project inventory with independent analysis to quantify the greenhouse gas ("GHG") emissions directly attributable to projects to which the Overseas Private...

  9. Statistical Analysis with Missing Data

    CERN Document Server

    Little, Roderick J A

    2002-01-01

    Praise for the First Edition of Statistical Analysis with Missing Data: "An important contribution to the applied statistics literature.... I give the book high marks for unifying and making accessible much of the past and current work in this important area." —William E. Strawderman, Rutgers University. "This book...provide[s] interesting real-life examples, stimulating end-of-chapter exercises, and up-to-date references. It should be on every applied statistician's bookshelf." —The Statistician. "The book should be studied in the statistical methods d...

  10. Bayesian Methods for Statistical Analysis

    OpenAIRE

    Puza, Borek

    2015-01-01

    Bayesian methods for statistical analysis is a book on statistical methods for analysing a wide variety of data. The book consists of 12 chapters, starting with basic concepts and covering numerous topics, including Bayesian estimation, decision theory, prediction, hypothesis testing, hierarchical models, Markov chain Monte Carlo methods, finite population inference, biased sampling and nonignorable nonresponse. The book contains many exercises, all with worked solutions, including complete c...

  11. The PEC reactor. Safety analysis: Detailed reports

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-01

    In the safety analysis of the PEC Brasimone reactor (Italy), attention was focused on the role of plant-incident analysis during the design stage and the conclusions reached. The analysis covered the following: thermohydraulic incidents at full power; incidents with the reactor shut down; reactivity incidents; core local faults; analysis of fuel-handling incidents; engineered safeguards and passive safety features; coolant leakage and sodium fires; research and development studies on the seismic behaviour of the PEC fast reactor; generalized sodium fire; severe accidents; accident sequences with shutdown; and the reference accident. Both the theoretical and experimental analyses demonstrated the adequacy of the design of the PEC fast reactor, aimed at minimizing the consequences of a hypothetical disruptive core accident with mechanical energy release. It was shown that the containment barriers were sized correctly and that the residual heat from a disassembled core would be removed. The re-evaluation of the source term emphasized the conservative nature of the hypotheses assumed in the preliminary safety analysis for calculating the risk to the public.

  12. Common pitfalls in statistical analysis: Clinical versus statistical significance

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In clinical research, study results that are statistically significant are often interpreted as being clinically important. While statistical significance indicates the reliability of the study results, clinical significance reflects their impact on clinical practice. The third article in this series exploring pitfalls in statistical analysis clarifies the importance of differentiating between statistical significance and clinical significance. PMID:26229754
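
    A small illustration of the distinction, a sketch assuming nothing beyond standard SciPy: with a large enough sample, a clinically trivial difference still comes out statistically significant.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Two "treatment arms": true means differ by 0.5 mmHg, which would rarely
        # matter clinically, with n = 20,000 patients per arm.
        a = rng.normal(120.0, 15.0, 20_000)
        b = rng.normal(120.5, 15.0, 20_000)

        t, p = stats.ttest_ind(a, b)
        cohens_d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        print(f"p = {p:.2e} (statistically significant)")
        print(f"Cohen's d = {cohens_d:.3f} (clinically negligible effect size)")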

  13. Statistical methods for bioimpedance analysis

    Directory of Open Access Journals (Sweden)

    Christian Tronstad

    2014-04-01

    This paper gives a basic overview of relevant statistical methods for the analysis of bioimpedance measurements, with an aim to answer questions such as: How do I begin with planning an experiment? How many measurements do I need to take? How do I deal with large amounts of frequency-sweep data? Which statistical test should I use, and how do I validate my results? Beginning with the hypothesis and the research design, the methodological framework for making inferences based on measurements and statistical analysis is explained. This is followed by a brief discussion of correlated measurements and data reduction before an overview is given of statistical methods for comparison of groups, factor analysis, association, regression and prediction, explained in the context of bioimpedance research. The last chapter is dedicated to the validation of a new method by different measures of performance. A flowchart is presented for selection of statistical method, and a table is given for an overview of the most important terms of performance when evaluating new measurement technology.

  14. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    Science.gov (United States)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of the Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis quantifies the error in the electrical resistivity measurement arising from sample geometry tolerance, probe geometry tolerance, statistical error, and multimeter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multimeter uncertainty, and, most importantly, the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes; the effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of +9/−14% at high temperature and ±9% near room temperature.
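
    A minimal sketch of propagating geometry tolerances into resistivity uncertainty (rho = R·A/L for a bar sample), by Monte Carlo; the tolerance values are illustrative assumptions, not the ZEM-3 numbers:

        import numpy as np

        rng = np.random.default_rng(6)
        n = 100_000

        # Measured resistance and bar-sample geometry, each with an assumed
        # (illustrative) 1-sigma tolerance.
        R = rng.normal(1.0e-3, 1.0e-5, n)      # ohm
        w = rng.normal(2.0e-3, 2.0e-5, n)      # m, sample width
        t = rng.normal(2.0e-3, 2.0e-5, n)      # m, sample thickness
        L = rng.normal(6.0e-3, 1.0e-4, n)      # m, probe separation

        rho = R * (w * t) / L                  # resistivity, ohm*m
        rel_u = rho.std(ddof=1) / rho.mean()
        print(f"rho = {rho.mean():.3e} ohm*m, relative uncertainty = {rel_u:.2%}")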

  15. A consistent approach for mixed detailed and statistical calculation of opacities in hot plasmas

    CERN Document Server

    Porcherot, Quentin; Gilleron, Franck; Blenski, Thomas

    2011-01-01

    Absorption and emission spectra of plasmas with multicharged ions contain transition arrays with a huge number of coalescent electric-dipole (E1) lines, which are well suited for treatment by the unresolved-transition-array and derivative methods. However, some transition arrays show detailed features whose description requires diagonalization of the Hamiltonian matrix. We developed a hybrid opacity code, called SCORCG, which consistently combines statistical approaches with fine-structure calculations. Data required for the computation of detailed transition arrays (atomic configurations and atomic radial integrals) are calculated by the super-configuration code SCO (Super-Configuration Opacity), which provides an accurate description of plasma screening effects on the wave-functions. Level energies as well as positions and strengths of spectral lines are computed by an adapted RCG routine of R. D. Cowan. The resulting code provides opacities for hot plasmas and can handle mid-Z elements. The code is also a po...

  16. The Hybrid Detailed / Statistical Opacity Code SCO-RCG: New Developments and Applications

    CERN Document Server

    Pain, Jean-Christophe; Porcherot, Quentin; Blenski, Thomas

    2013-01-01

    We present the hybrid opacity code SCO-RCG which combines statistical approaches with fine-structure calculations. Radial integrals needed for the computation of detailed transition arrays are calculated by the code SCO (Super-configuration Code for Opacity), which calculates atomic structure at finite temperature and density, taking into account plasma effects on the wave-functions. Levels and spectral lines are then computed by an adapted RCG routine of R. D. Cowan. SCO-RCG now includes the Partially Resolved Transition Array model, which allows one to replace a complex transition array by a small-scale detailed calculation preserving energy and variance of the genuine transition array and yielding improved high-order moments. An approximate method for studying the impact of strong magnetic field on opacity and emissivity was also recently implemented.

  17. Regularized Statistical Analysis of Anatomy

    DEFF Research Database (Denmark)

    Sjöstrand, Karl

    2007-01-01

    This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus callosum...

  18. Statistical Analysis for Performance Comparison

    Directory of Open Access Journals (Sweden)

    Priyanka Dutta

    2013-07-01

    Performance responsiveness and scalability is a make-or-break quality for software. Nearly everyone runs into performance problems at one time or another. This paper discusses performance issues faced during the Pre Examination Process Automation System (PEPAS), implemented in Java technology, the challenges faced during the life cycle of the project, and the mitigation actions performed. It compares three Java technologies and shows how improvements are made through statistical analysis in the response time of the application. The paper concludes with result analysis.

  19. Bayesian Inference in Statistical Analysis

    CERN Document Server

    Box, George E P

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Rob...

  20. Tools for Basic Statistical Analysis

    Science.gov (United States)

    Luz, Paul L.

    2005-01-01

    Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curvefit data to the linear equation y=f(x) and will do an ANOVA to check its significance.

  1. Multivariate analysis: A statistical approach for computations

    Science.gov (United States)

    Michu, Sachin; Kaushik, Vandana

    2014-10-01

    Multivariate analysis is a statistical approach commonly used in automotive diagnosis, in education, in evaluating clusters in finance, and, more recently, in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and of correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks such as DDoS attacks and network scanning.
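
    A minimal sketch of anomaly detection via a correlation coefficient matrix, in the spirit of the method named above (the traffic features and the distance measure are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(7)

        def corr_matrix(window):
            """Correlation coefficient matrix of a (n_samples, n_features) window."""
            return np.corrcoef(window, rowvar=False)

        # Baseline traffic: three correlated features (e.g. packets/s, bytes/s, flows/s).
        mean = [100, 80, 50]
        cov = [[25, 20, 5], [20, 25, 5], [5, 5, 9]]
        baseline = corr_matrix(rng.multivariate_normal(mean, cov, size=1000))

        # A fresh normal window, and one in which the usual packets/bytes correlation
        # breaks down, as it might during a DDoS flood or a scan.
        normal_w = rng.multivariate_normal(mean, cov, size=500)
        attack_w = normal_w.copy()
        attack_w[:, 1] = rng.normal(80, 5, 500)

        for name, window in [("normal", normal_w), ("suspect", attack_w)]:
            dist = np.linalg.norm(corr_matrix(window) - baseline)  # Frobenius distance
            print(f"{name}: distance from baseline = {dist:.3f}")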

  2. Statistical Analysis of Protein Ensembles

    Science.gov (United States)

    Máté, Gabriell; Heermann, Dieter

    2014-04-01

    As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially as the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.

  3. Statistical Analysis of Protein Ensembles

    Directory of Open Access Journals (Sweden)

    Gabriell Máté

    2014-04-01

    As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially as the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.

  4. Investigation on improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering

    Science.gov (United States)

    Zeng, Bangze; Zhu, Youpan; Li, Zemin; Hu, Dechao; Luo, Lin; Zhao, Deli; Huang, Juan

    2014-11-01

    Because infrared images have low contrast, high noise and an unclear visual effect, targets are very difficult to observe and identify. This paper presents an improved infrared image detail enhancement algorithm based on adaptive histogram statistical stretching and gradient filtering (AHSS-GF). Based on the fact that the human eye is very sensitive to edges and lines, the authors propose to extract the details and textures using gradient filtering. A new histogram can be acquired by summing the original histogram over a fixed window. With the minimum value as the cut-off point, histogram statistical stretching is carried out. After proper weights are given to the details and the background, the detail-enhanced results can be acquired. The results indicate that image contrast can be improved and that details and textures can be enhanced effectively as well.
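
    A rough NumPy sketch of the two ingredients named by the acronym, applied to a synthetic image; the window size, cut-off rule and weights are illustrative guesses at the paper's scheme, not its exact algorithm:

        import numpy as np

        rng = np.random.default_rng(8)
        img = np.clip(rng.normal(90, 10, (256, 256)), 0, 255)   # low-contrast "IR" frame

        # 1) Detail layer via gradient filtering.
        gy, gx = np.gradient(img)
        detail = np.hypot(gx, gy)

        # 2) Adaptive histogram statistical stretching of the background.
        hist, edges = np.histogram(img, bins=256, range=(0, 255))
        smoothed = np.convolve(hist, np.ones(5) / 5, mode="same")  # fixed-window sum
        nonzero = np.nonzero(smoothed)[0]
        lo, hi = edges[nonzero[0]], edges[nonzero[-1] + 1]         # cut-off points
        stretched = np.clip((img - lo) / max(hi - lo, 1e-6), 0, 1) * 255

        # 3) Weighted recombination of background and detail.
        enhanced = np.clip(0.8 * stretched + 0.2 * detail * 8.0, 0, 255)
        print(f"contrast (std): before = {img.std():.1f}, after = {enhanced.std():.1f}")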

  5. Statistical analysis of management data

    CERN Document Server

    Gatignon, Hubert

    2013-01-01

    This book offers a comprehensive approach to multivariate statistical analyses. It provides theoretical knowledge of the concepts underlying the most important multivariate techniques and an overview of actual applications.

  6. Spectangular - Spectral Disentangling For Detailed Chemical Analysis Of Binaries

    Science.gov (United States)

    Sablowski, Daniel

    2016-08-01

    Disentangling of spectra helps to improve the orbit parameters and allows detailed chemical analysis. Spectangular is a GUI program written in C++ for spectral disentangling of spectra of SB1 and SB2 systems. It is based on singular value decomposition in the wavelength space and is coupled to an orbital solution. The results are the component spectra and the orbital parameters.

  7. Statistical Analysis by Statistical Physics Model for the STOCK Markets

    Science.gov (United States)

    Wang, Tiansong; Wang, Jun; Fan, Bingli

    A new stochastic stock price model of stock markets based on the contact process of statistical physics systems is presented in this paper. The contact model is a continuous-time Markov process; one interpretation of this model is as a model for the spread of an infection. Through this model, the statistical properties of the Shanghai Stock Exchange (SSE) and the Shenzhen Stock Exchange (SZSE) are studied. In the present paper, the data of the SSE Composite Index and the SZSE Component Index are analyzed, and the corresponding simulation is made by computer computation. Further, we investigate the statistical properties, fat-tail phenomena, power-law distributions, and long memory of returns for these indices. The techniques of skewness-kurtosis tests, the Kolmogorov-Smirnov test, and R/S analysis are applied to study the fluctuation characteristics of the stock price returns.

  8. Vapor Pressure Data Analysis and Statistics

    Science.gov (United States)

    2016-12-01

    ECBC-TR-1422, Ann Brozena, Research and Technology Directorate, December 2016 (final report covering November 2015 to April 2016). Knowledge of the vapor pressure of materials as a function of temperature is...

  9. A Statistical Analysis of Cryptocurrencies

    Directory of Open Access Journals (Sweden)

    Stephen Chan

    2017-05-01

    We analyze statistical properties of the largest cryptocurrencies (determined by market capitalization), of which Bitcoin is the most prominent example. We characterize their exchange rates versus the U.S. Dollar by fitting parametric distributions to them. It is shown that returns are clearly non-normal; however, no single distribution fits well jointly to all the cryptocurrencies analysed. We find that for the most popular currencies, such as Bitcoin and Litecoin, the generalized hyperbolic distribution gives the best fit, while for the smaller cryptocurrencies the normal inverse Gaussian distribution, generalized t distribution, and Laplace distribution give good fits. The results are important for investment and risk management purposes.
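
    A minimal sketch of that kind of comparison with SciPy, which ships the normal-inverse-Gaussian distribution (and, in recent versions, a generalized hyperbolic distribution); the returns here are simulated, not real exchange-rate data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(9)
        # Simulated daily log-returns with heavier tails than a normal distribution.
        returns = stats.t.rvs(df=3, scale=0.02, size=2000, random_state=rng)

        candidates = {
            "normal": stats.norm,
            "Laplace": stats.laplace,
            "normal inverse Gaussian": stats.norminvgauss,
            "Student t": stats.t,
        }

        for name, dist in candidates.items():
            params = dist.fit(returns)
            loglik = np.sum(dist.logpdf(returns, *params))
            aic = 2 * len(params) - 2 * loglik          # lower is better
            print(f"{name:>24}: AIC = {aic:.1f}")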

  10. Asymptotic modal analysis and statistical energy analysis

    Science.gov (United States)

    Dowell, Earl H.

    1988-07-01

    Statistical Energy Analysis (SEA) is defined by considering the asymptotic limit of Classical Modal Analysis, an approach called Asymptotic Modal Analysis (AMA). The general approach is described for both structural and acoustical systems. The theoretical foundation is presented for structural systems, and experimental verification is presented for a structural plate responding to a random force. Work accomplished subsequent to the grant initiation focuses on the acoustic response of an interior cavity (i.e., an aircraft or spacecraft fuselage) with a portion of the wall vibrating in a large number of structural modes. First results were presented at the ASME Winter Annual Meeting in December 1987, and accepted for publication in the Journal of Vibration, Acoustics, Stress and Reliability in Design. It is shown that, asymptotically, as the number of acoustic modes excited becomes large, the pressure level in the cavity becomes uniform except at the cavity boundaries. However, the mean square pressure at the cavity corner, edge and wall is, respectively, 8, 4, and 2 times the value in the cavity interior. It is also shown that when the portion of the wall which is vibrating is near a cavity corner or edge, the response is significantly higher.

  11. Common pitfalls in statistical analysis: Logistic regression.

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C S; Aggarwal, Rakesh

    2017-01-01

    Logistic regression analysis is a statistical technique to evaluate the relationship between various predictor variables (either categorical or continuous) and an outcome which is binary (dichotomous). In this article, we discuss logistic regression analysis and the limitations of this technique.
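
    A minimal sketch of a logistic regression with one continuous and one binary predictor, using statsmodels on synthetic data (all variable names and coefficient values are illustrative):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(10)
        n = 500
        age = rng.normal(60, 10, n)                   # continuous predictor
        smoker = rng.integers(0, 2, n)                # categorical (binary) predictor
        logit = -10 + 0.15 * age + 0.8 * smoker       # true linear predictor
        outcome = rng.random(n) < 1 / (1 + np.exp(-logit))  # binary outcome

        X = sm.add_constant(np.column_stack([age, smoker]))
        model = sm.Logit(outcome.astype(float), X).fit(disp=0)
        print(model.summary(xname=["intercept", "age", "smoker"]))
        # Odds ratios are the exponentiated coefficients.
        print("odds ratios:", np.exp(model.params))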

  12. Towards a Judgement-Based Statistical Analysis

    Science.gov (United States)

    Gorard, Stephen

    2006-01-01

    There is a misconception among social scientists that statistical analysis is somehow a technical, essentially objective, process of decision-making, whereas other forms of data analysis are judgement-based, subjective and far from technical. This paper focuses on the former part of the misconception, showing, rather, that statistical analysis…

  13. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  15. Statistical methods for astronomical data analysis

    CERN Document Server

    Chattopadhyay, Asis Kumar

    2014-01-01

    This book introduces “Astrostatistics” as a subject in its own right with rewarding examples, including work by the authors with galaxy and Gamma Ray Burst data to engage the reader. This includes a comprehensive blending of Astrophysics and Statistics. The first chapter’s coverage of preliminary concepts and terminologies for astronomical phenomenon will appeal to both Statistics and Astrophysics readers as helpful context. Statistics concepts covered in the book provide a methodological framework. A unique feature is the inclusion of different possible sources of astronomical data, as well as software packages for converting the raw data into appropriate forms for data analysis. Readers can then use the appropriate statistical packages for their particular data analysis needs. The ideas of statistical inference discussed in the book help readers determine how to apply statistical tests. The authors cover different applications of statistical techniques already developed or specifically introduced for ...

  16. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro...

  17. Thermal and mechanical analysis for the detailed model using submodel

    Energy Technology Data Exchange (ETDEWEB)

    Kuh, Jung Eui; Kang, Chul Hyung; Park, Jeong Hwa [Korea Atomic Energy Research Institute, Taejon (Korea)

    1999-11-01

    A very large model is required for the TM analysis of an HLRW repository, yet a very small mesh size is needed to simulate the main parts of the analysis precisely, e.g., the canister, buffer, etc. This is practically impossible in a single model due to the memory size and computing time required. In this report, the submodel concept in ABAQUS is used to handle this difficulty. In the submodel concept, detailed modelling is performed only on the part of interest, with the results of the full-scale model used as its boundary conditions. Following this kind of computation procedure, the temperature distribution in the buffer and canister could be computed precisely. This approach can be applied to the TM analysis of the buffer and canister, or of a finite-size repository. 12 refs., 28 figs., 9 tabs. (Author)

  18. Detailed analysis of Balmer lines in cool dwarf stars

    CERN Document Server

    Barklem, P S; Allende-Prieto, C; Kochukhov, O P; Piskunov, N; O'Mara, B J

    2002-01-01

    An analysis of H alpha and H beta spectra in a sample of 30 cool dwarf and subgiant stars is presented, using MARCS model atmospheres based on the most recent calculations of the line opacities. A detailed quantitative comparison of the solar flux spectra with model spectra shows that Balmer line profile shapes, and therefore the temperature structure in the line formation region, are best represented under the mixing length theory by any combination of a low mixing-length parameter alpha and a low convective structure parameter y. A slightly lower effective temperature is obtained for the Sun than the accepted value, which we attribute to errors in models and line opacities. The programme stars span temperatures from 4800 to 7100 K and include a small number of population II stars. Effective temperatures have been derived using a quantitative fitting method with a detailed error analysis. Our temperatures find good agreement with those from the Infrared Flux Method (IRFM) near solar metallicity but show diffe...

  19. A statistical trend analysis of ozonesonde data

    Science.gov (United States)

    Tiao, G. C.; Pedrick, J. H.; Allenby, G. M.; Reinsel, G. C.; Mateer, C. L.

    1986-01-01

    A detailed statistical analysis of monthly averages of ozonesonde readings is performed to assess trends in ozone in the troposphere and the lower to middle stratosphere. Regression time series models, which include seasonal and trend factors, are estimated for 13 stations located mainly in the midlatitudes of the Northern Hemisphere. At each station, trend estimates are calculated for 14 'fractional' Umkehr layers covering the altitude range from 0 to 33 km. For the 1970-1982 period, the main findings indicate an overall negative trend in ozonesonde data in the lower stratosphere (15-21 km) of about -0.5 percent per year, and some evidence of a positive trend in the troposphere (0-5 km) of about 0.8 percent per year. An in-depth sensitivity study of the trend estimates is performed with respect to various correction procedures used to normalize ozonesonde readings to Dobson total ozone measurements. The main results indicate that the negative trend findings in the 15- to 21-km altitude region are robust to the normalization procedures considered.
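
    A minimal sketch of a regression time series model with seasonal and trend factors of the kind described (harmonic seasonal terms plus a linear trend, fitted by ordinary least squares to synthetic monthly data):

        import numpy as np

        rng = np.random.default_rng(11)
        months = np.arange(156)                       # 13 years of monthly averages
        t = months / 12.0

        # Synthetic ozone series: annual cycle + slow decline + noise.
        y = 300 + 20 * np.sin(2 * np.pi * t) - 1.5 * t + rng.normal(0, 5, months.size)

        # Design matrix: intercept, linear trend, first harmonic of the annual cycle.
        X = np.column_stack([
            np.ones_like(t),
            t,
            np.sin(2 * np.pi * t),
            np.cos(2 * np.pi * t),
        ])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        trend_pct = 100 * beta[1] / beta[0]           # trend as percent per year
        print(f"estimated trend = {beta[1]:.2f} units/yr ({trend_pct:.2f}% per year)")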

  20. Explorations in Statistics: The Analysis of Change

    Science.gov (United States)

    Curran-Everett, Douglas; Williams, Calvin L.

    2015-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This tenth installment of "Explorations in Statistics" explores the analysis of a potential change in some physiological response. As researchers, we often express absolute change as percent change so we can…

  1. Analysis of Preference Data Using Intermediate Test Statistic

    African Journals Online (AJOL)

    PROF. O. E. OSUAGWU

    2013-06-01

    The intermediate statistic is a link between the Friedman test statistic and the multinomial statistic. The statistic is ... The null hypothesis Ho ... [7] Taplin, R.H., The Statistical Analysis of Preference Data, Applied Statistics, No. 4, pp. ...
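
    For context, a minimal example of the Friedman test that the intermediate statistic links to, using SciPy on synthetic preference data (the scores are illustrative):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(12)
        # Preference scores from 12 judges rating the same 3 items (blocked design).
        judges = rng.normal(0, 1, (12, 3)) + np.array([0.0, 0.3, 0.8])

        # Friedman test: are the three items ranked consistently differently?
        result = stats.friedmanchisquare(judges[:, 0], judges[:, 1], judges[:, 2])
        print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")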

  2. Hypothesis testing and statistical analysis of microbiome

    Directory of Open Access Journals (Sweden)

    Yinglin Xia

    2017-09-01

    After the initiation of the Human Microbiome Project in 2008, various biostatistical and bioinformatic tools for data analysis and computational methods have been developed and applied to microbiome studies. In this review and perspective, we discuss the research and statistical hypotheses in gut microbiome studies, focusing on mechanistic concepts that underlie the complex relationships among host, microbiome, and environment. We review the currently available statistical tools and highlight recent progress in newly developed statistical methods and models. Given the current challenges and limitations of biostatistical approaches and tools, we discuss future directions in developing statistical methods and models for microbiome studies.

  3. Statistical Analysis of English Noun Suffixes

    Institute of Scientific and Technical Information of China (English)

    王惠灵

    2014-01-01

    This study discusses the origin of English noun suffixes, including those from Old English, Latin, French and Greek. A statistical analysis of some typical noun suffixes in two corpora follows, focusing on their frequency and distribution.

  4. Statistical Analysis Of Reconnaissance Geochemical Data From ...

    African Journals Online (AJOL)

    Statistical analysis of reconnaissance geochemical data from Orle District ... The univariate methods used include frequency distribution and cumulative ... The possible mineral potential of the area includes base metals (Pb, Zn, Cu, Mo, etc.) ...

  5. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly-regarded `Statistical Shape Analysis’ by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  6. Reproducible statistical analysis with multiple languages

    DEFF Research Database (Denmark)

    Lenth, Russell; Højsgaard, Søren

    2011-01-01

    This paper describes a system for making reproducible statistical analyses. It differs from other systems for reproducible analysis in several ways. The two main differences are: (1) several statistics programs can be used in the same document; (2) documents can be prepared using OpenOffice or LaTeX. The main part of this paper is an example showing how to use these tools together in an OpenOffice text document. The paper also contains some practical considerations on the use of literate programming in statistics.

  7. Advances in statistical models for data analysis

    CERN Document Server

    Minerva, Tommaso; Vichi, Maurizio

    2015-01-01

    This edited volume focuses on recent research results in classification, multivariate statistics and machine learning and highlights advances in statistical models for data analysis. The volume provides both methodological developments and contributions to a wide range of application areas such as economics, marketing, education, social sciences and environment. The papers in this volume were first presented at the 9th biennial meeting of the Classification and Data Analysis Group (CLADAG) of the Italian Statistical Society, held in September 2013 at the University of Modena and Reggio Emilia, Italy.

  8. Analysis of Fatigue Crack Growth in Ship Structural Details

    Directory of Open Access Journals (Sweden)

    Leheta Heba W.

    2016-04-01

    Full Text Available Fatigue failure avoidance is a goal that can be achieved only if fatigue design is an integral part of the original design program. The purpose of fatigue design is to ensure that the structure has adequate fatigue life. Calculated fatigue life can form the basis for meaningful and efficient inspection programs during fabrication and throughout the life of the ship. The main objective of this paper is to develop an add-on program for the analysis of fatigue crack growth in ship structural details; the developed program is an add-on script to a pre-existing package. Crack propagation in a tanker side connection is analyzed using the developed program, based on linear elastic fracture mechanics (LEFM) and the finite element method (FEM). The basic idea of the developed application is that a finite element model of this side connection is first analyzed using ABAQUS, and the results of this analysis reveal the location of the highest stresses. At this location, an initial crack is introduced into the finite element model, and the results of the new crack model give the direction of crack propagation and the values of the stress intensity factors. Using the calculated direction of propagation, a new segment is added to the crack and the model is analyzed again. This step is repeated until the calculated stress intensity factors reach the critical value.
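
    To make the iteration concrete, here is a minimal sketch of the propagation loop the abstract describes. All helper functions (solve_fem, extend_crack, kink_angle) are trivial stand-ins for the ABAQUS analysis and model-update steps, and every number is a placeholder rather than a value from the paper.

        # Hypothetical sketch of the crack-propagation loop described above.
        import math

        def solve_fem(model):                 # stand-in FE solution of the cracked model
            return {"K_I": 10.0 + 2.0 * model["a"], "K_II": 1.0}

        def kink_angle(k1, k2):               # stand-in direction criterion
            return math.atan2(k2, k1)

        def extend_crack(model, theta, da):   # stand-in geometry update
            model["a"] += da
            return model

        def propagate_crack(model, da=0.5, k_crit=50.0, max_steps=200):
            """Grow the crack segment by segment until K_I reaches k_crit."""
            for step in range(max_steps):
                res = solve_fem(model)                       # re-analyse the model
                if res["K_I"] >= k_crit:                     # critical value reached
                    return model["a"], step
                theta = kink_angle(res["K_I"], res["K_II"])  # propagation direction
                model = extend_crack(model, theta, da)       # add a new segment
            return model["a"], max_steps

        print(propagate_crack({"a": 1.0}))    # (crack length, steps) at instability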

  9. Detailed 3-D nuclear analysis of ITER outboard blanket modules

    Energy Technology Data Exchange (ETDEWEB)

    Bohm, Tim, E-mail: tdbohm@wisc.edu [Fusion Technology Institute, University of Wisconsin-Madison, Madison, WI (United States); Davis, Andrew; Sawan, Mohamed; Marriott, Edward; Wilson, Paul [Fusion Technology Institute, University of Wisconsin-Madison, Madison, WI (United States); Ulrickson, Michael; Bullock, James [Formerly, Fusion Technology, Sandia National Laboratories, Albuquerque, NM (United States)

    2015-10-15

    Highlights: • Nuclear analysis was performed on detailed CAD models placed in a 40 degree model of ITER. • The regions examined include BM09, the upper ELM coil region (BM11–13), the neutral beam (NB) region (BM13–16), and BM18. • The results show that VV nuclear heating exceeds limits in the NB and upper ELM coil regions. • The results also show that the level of He production in parts of BM18 exceeds limits. • These calculations are being used to modify the design of the ITER blanket modules. - Abstract: In the ITER design, the blanket modules (BM) provide thermal and nuclear shielding for the vacuum vessel (VV), magnets, and other components. We used the CAD based DAG-MCNP5 transport code to analyze detailed models inserted into a 40 degree partially homogenized ITER global model. The regions analyzed include BM09, BM16 near the heating neutral beam injection (HNB) region, BM11–13 near the upper ELM coil region, and BM18. For the BM16 HNB region, the VV nuclear heating behind the NB region exceeds the design limit by up to 80%. For the BM11–13 region, the nuclear heating of the VV exceeds the design limit by up to 45%. For BM18, the results show that He production does not meet the limit necessary for re-welding. The results presented in this work are being used by the ITER Organization Blanket and Tokamak Integration groups to modify the BM design in the cases where limits are exceeded.

  10. Schmidt decomposition and multivariate statistical analysis

    Science.gov (United States)

    Bogdanov, Yu. I.; Bogdanova, N. A.; Fastovets, D. V.; Luckichev, V. F.

    2016-12-01

    A new method of multivariate data analysis based on the complement of a classical probability distribution to a quantum state and the Schmidt decomposition is presented. We consider the application of the Schmidt formalism to problems of statistical correlation analysis. Correlation of photons in the beam splitter output channels, when the input photon statistics are given by a compound Poisson distribution, is examined. The developed formalism allows us to analyze multidimensional systems, and we have obtained analytical formulas for the Schmidt decomposition of multivariate Gaussian states. It is shown that the mathematical tools of quantum mechanics can significantly improve classical statistical analysis. The presented formalism is a natural approach for the analysis of both classical and quantum multivariate systems and can be applied in various tasks associated with the study of dependences.
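
    Since the Schmidt decomposition of a bipartite amplitude is just the singular value decomposition of its coefficient matrix, the core idea can be illustrated in a few lines of NumPy; the joint distribution below is an invented example, not data from the paper.

        # Embed a classical joint distribution into an amplitude and read
        # correlations off its Schmidt (singular value) spectrum.
        import numpy as np

        P = np.array([[0.30, 0.10],
                      [0.05, 0.55]])          # joint distribution of (X, Y), sums to 1
        psi = np.sqrt(P)                      # probability amplitude matrix
        s = np.linalg.svd(psi, compute_uv=False)
        schmidt = s**2                        # Schmidt weights, sum to 1
        print("Schmidt weights:", schmidt)
        print("Schmidt number :", 1.0 / np.sum(schmidt**2))
        # the Schmidt number is 1 for independent variables and grows with correlation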

  11. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2013-01-01

    Statistics and Analysis of Scientific Data covers the foundations of probability theory and statistics, and a number of numerical and analytical methods that are essential for the present-day analyst of scientific data. Topics covered include probability theory, distribution functions of statistics, fits to two-dimensional datasets and parameter estimation, Monte Carlo methods and Markov chains. Equal attention is paid to the theory and its practical application, and results from classic experiments in various fields are used to illustrate the importance of statistics in the analysis of scientific data. The main pedagogical method is a theory-then-application approach, where emphasis is placed first on a sound understanding of the underlying theory of a topic, which becomes the basis for an efficient and proactive use of the material for practical applications. The level is appropriate for undergraduates and beginning graduate students, and as a reference for the experienced researcher. Basic calculus is us...

  12. Instant Replay: Investigating statistical Analysis in Sports

    CERN Document Server

    Sidhu, Gagan

    2011-01-01

    Technology has had an unquestionable impact on the way people watch sports. As technology has evolved, so too has the knowledge of a casual sports fan. A direct result of this evolution is the amount of statistical analysis in sport. The goal of statistical analysis in sports is a simple one: to eliminate subjective analysis. Over the past four decades, statistics have slowly pervaded the viewing experience of sports. In this paper, we analyze previous work that proposed metrics and models seeking to evaluate various aspects of sports. The unifying goal of these works is an accurate representation of either the player or the sport. We also look at work that investigates certain situations and their impact on the outcome of a game. We conclude this paper with a discussion of potential future work in certain areas of sport.

  13. Detailed 3-D nuclear analysis of ITER blanket modules

    Energy Technology Data Exchange (ETDEWEB)

    Bohm, T.D., E-mail: tdbohm@wisc.edu [University of Wisconsin-Madison, Madison, WI (United States); Sawan, M.E.; Marriott, E.P.; Wilson, P.P.H. [University of Wisconsin-Madison, Madison, WI (United States); Ulrickson, M.; Bullock, J. [Sandia National Laboratories, Albuquerque, NM (United States)

    2014-10-15

    In ITER, the blanket modules (BM) are arranged around the plasma to provide thermal and nuclear shielding for the vacuum vessel (VV), magnets, and other components. As a part of the BM design process, nuclear analysis is required to determine the level of nuclear heating, helium production, and radiation damage in the BM. Additionally, nuclear heating in the VV is also important for assessing the BM design. We used the CAD based DAG-MCNP5 transport code to analyze detailed models inserted into a 40-degree partially homogenized ITER global model. The regions analyzed include BM01, the neutral beam injection (NB) region, and the upper port region. For BM01, the results show that He production meets the limit necessary for re-welding, and the VV heating behind BM01 is acceptable. For the NBI region, the VV nuclear heating behind the NB region exceeds the design limit by a factor of two. For the upper port region, the nuclear heating of the VV exceeds the design limit by up to 20%. The results presented in this work are being used to modify the BM design in the cases where limits are exceeded.

  14. Detailed analysis of structure and particle trajectories in sheared suspensions

    Science.gov (United States)

    Morris, Jeffrey; Katyal, Bhavana

    1999-11-01

    The structure and particle dynamics of sheared suspensions of hard spheres over a range of ratios of shear strength to Brownian motion (Péclet number, Pe) have been studied by detailed analysis of extended sampling of Stokesian Dynamics simulations of simple shear. The emphasis is upon large Pe. The structure has been analyzed by decomposition of the pair distribution function, g(r), into spherical harmonics; the harmonics are a complete set for the decomposition. The results indicate a profound and very marked change in structure due to shearing. It is shown that as Pe increases, the structure is increasingly distorted from the equilibrium spherical symmetry and the number of harmonics required to recompose the original data to within an arbitrary accuracy increases; this variation depends upon particle fraction. We present information on the content of the dominant harmonics as a function of radial distance for a pair, and interpret the results in terms of preferred directions in the material. Dynamic particle trajectories at time scales long relative to that used for the Brownian step are analyzed in a novel fashion by simple differential geometric measures, such as root mean square path curvature and torsion. Preliminary results illustrate that the path variation from mean flow correlates with the particle stress.
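
    A minimal sketch of the decomposition idea, assuming a synthetic cloud of pair separation vectors (mildly compressed along z to mimic shear-induced anisotropy) rather than simulation output: the angular part of g(r) is projected onto spherical harmonics and a non-zero higher-order coefficient signals distortion away from spherical symmetry.

        # Estimate m = 0 spherical-harmonic coefficients of an angular distribution.
        import numpy as np
        from scipy.special import sph_harm

        rng = np.random.default_rng(6)
        v = rng.normal(size=(5000, 3)) * np.array([1.0, 1.0, 0.7])  # anisotropic cloud
        v /= np.linalg.norm(v, axis=1, keepdims=True)

        theta = np.arctan2(v[:, 1], v[:, 0]) % (2 * np.pi)  # azimuth
        phi = np.arccos(v[:, 2])                            # polar angle

        for n in (0, 2, 4):
            c = np.mean(np.conj(sph_harm(0, n, theta, phi)))  # Monte Carlo projection
            print(f"c_{n}0 = {c.real:.4f}")
        # c_00 is ~1/sqrt(4*pi) by normalization; a non-zero c_20 marks anisotropy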

  15. Statistical analysis of network data with R

    CERN Document Server

    Kolaczyk, Eric D

    2014-01-01

    Networks have permeated everyday life through everyday realities like the Internet, social networks, and viral marketing. As such, network analysis is an important growth area in the quantitative sciences, with roots in social network analysis going back to the 1930s and graph theory going back centuries. Measurement and analysis are integral components of network research. As a result, statistical methods play a critical role in network analysis. This book is the first of its kind in network research. It can be used as a stand-alone resource in which multiple R packages are used to illustrate how to conduct a wide range of network analyses, from basic manipulation and visualization, to summary and characterization, to modeling of network data. The central package is igraph, which provides extensive capabilities for studying network graphs in R. This text builds on Eric D. Kolaczyk’s book Statistical Analysis of Network Data (Springer, 2009).

  16. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    Statistical analyses are performed for material strength parameters from a large number of specimens of structural timber. Non-parametric statistical analysis and fits have been investigated for the following distribution types: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data and the lower tail of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. The results show that the 2-parameter Weibull distribution gives the best fits to the data available, especially if tail fits are used, whereas the Lognormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used. The implications on the reliability level of typical structural elements and on partial safety factors
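
    As a small sketch of the kind of fit compared in the paper, a 2-parameter Weibull (location fixed at zero) can be fitted by maximum likelihood with scipy; the strength values below are illustrative, not the paper's specimen data.

        # MLE fit of a 2-parameter Weibull to a (made-up) sample of bending strengths.
        import numpy as np
        from scipy import stats

        strength = np.array([28.1, 33.4, 35.0, 38.2, 40.5, 41.9,
                             44.3, 46.8, 49.0, 53.7])            # MPa, illustrative
        c, loc, scale = stats.weibull_min.fit(strength, floc=0)  # floc=0: 2-parameter
        print(f"shape k = {c:.2f}, scale = {scale:.1f} MPa")

        # the 5% fractile (characteristic value) is what drives design strength
        print("5% fractile:", stats.weibull_min.ppf(0.05, c, loc=0, scale=scale))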

  17. Statistics and analysis of scientific data

    CERN Document Server

    Bonamente, Massimiliano

    2017-01-01

    The revised second edition of this textbook provides the reader with a solid foundation in probability theory and statistics as applied to the physical sciences, engineering and related fields. It covers a broad range of numerical and analytical methods that are essential for the correct analysis of scientific data, including probability theory, distribution functions of statistics, fits to two-dimensional data and parameter estimation, Monte Carlo methods and Markov chains. Features new to this edition include: • a discussion of statistical techniques employed in business science, such as multiple regression analysis of multivariate datasets. • a new chapter on the various measures of the mean including logarithmic averages. • new chapters on systematic errors and intrinsic scatter, and on the fitting of data with bivariate errors. • a new case study and additional worked examples. • mathematical derivations and theoretical background material have been appropriately marked, to improve the readabili...

  18. About Statistical Analysis of Qualitative Survey Data

    Directory of Open Access Journals (Sweden)

    Stefan Loehnert

    2010-01-01

    Full Text Available Gathered data is frequently not in a numerical form that allows immediate application of quantitative mathematical-statistical methods. This paper examines some basic aspects of how quantitative statistical methodology can be utilized in the analysis of qualitative data sets. The transformation of qualitative data into numeric values is considered the entrance point to quantitative analysis. Related publications and the impacts of scale transformations are discussed. Subsequently, it is shown how correlation coefficients can be used in conjunction with data aggregation constraints to construct relationship modelling matrices. For illustration, a case study is referenced in which ordinal-type ordered qualitative survey answers are allocated to process-defining procedures as aggregation levels. Finally, options for measuring the adherence of the gathered empirical data to such derived aggregation models are introduced, and a statistically based reliability check approach to evaluate the reliability of the chosen model specification is outlined.
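
    A minimal sketch of the entrance point the paper describes: ordinal answers are mapped to numeric codes and a rank-based correlation is computed. The coding scale and survey answers below are invented for illustration.

        # Ordinal coding of qualitative answers followed by a rank correlation.
        from scipy import stats

        scale = {"never": 1, "rarely": 2, "sometimes": 3, "often": 4, "always": 5}
        q1 = [scale[a] for a in ["rarely", "often", "always", "sometimes", "often"]]
        q2 = [scale[a] for a in ["never", "sometimes", "often", "rarely", "always"]]

        rho, p = stats.spearmanr(q1, q2)   # rank-based, suited to ordinal codings
        print(f"Spearman rho = {rho:.2f} (p = {p:.2f})")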

  19. Energy Issues In Mobile Telecom Network: A Detailed Analysis

    Directory of Open Access Journals (Sweden)

    P. Balagangadhar Rao

    2011-12-01

    Full Text Available Diesel and conventional energy costs are increasing at twice the growth rate of revenues of the mobile telecom network infrastructure industry, so there is an urgent need to reduce operating expenditure (OPEX) on this front. While bridging the rural and urban divide, telecom operators should also adopt stronger regulations for climate control by reducing greenhouse gases like CO2. This strengthens the business case for renewable energy technology. Solutions like solar, fuel cells, wind, biomass and geothermal can be explored and implemented in the energy-starved telecom sector. Such sources provide clean and green energy; they are free and infinitely available. These technologies, which use natural resources, are not only suitable for stand-alone applications but also have a long life span, and their maintenance cost is minimal. The most important advantages of using these natural resources are a low carbon footprint and silent operation. Of these, solar-based solutions are available as ground- or tower-mounted variants. Hybrid technology solutions like solar-solar, solar-DCDG (Direct Current Diesel Generator) or solar-battery bank can be put into use to cut down OPEX. Further, a single multi-fuel cell can also be used, which can run on ethanol, biofuel, compressed natural gas (CNG), liquefied petroleum gas (LPG) or pyrolysis oil. Storage solutions like lithium-ion batteries also reduce diesel generator run hours, offering about fifty percent savings on the operating expenditure front. A detailed analysis is made in this paper of the energy requirements of a mobile telecom network; of minimising operating costs by using technologies that harvest natural resources; of sharing infrastructure among different operators; and of bringing energy efficiency through the latest storage backup technologies.

  20. Human Factors Considerations in New Nuclear Power Plants: Detailed Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    O'Hara, J.; Higgins, J.; Brown, W.; Fink, R.

    2008-02-14

    This Nuclear Regulatory Commission (NRC) sponsored study has identified human-performance issues in new and advanced nuclear power plants. To identify the issues, current industry developments and trends were evaluated in the areas of reactor technology, instrumentation and control technology, human-system integration technology, and human factors engineering (HFE) methods and tools. The issues were organized into seven high-level HFE topic areas: Role of Personnel and Automation, Staffing and Training, Normal Operations Management, Disturbance and Emergency Management, Maintenance and Change Management, Plant Design and Construction, and HFE Methods and Tools. The issues were then prioritized into four categories using a 'Phenomena Identification and Ranking Table' methodology based on evaluations provided by 14 independent subject matter experts. The subject matter experts were knowledgeable in a variety of disciplines. Vendors, utilities, research organizations and regulators all participated. Twenty issues were categorized into the top priority category. This Brookhaven National Laboratory (BNL) technical report provides the detailed methodology, issue analysis, and results. A summary of the results of this study can be found in NUREG/CR-6947. The research performed for this project has identified a large number of human-performance issues for new control stations and new nuclear power plant designs. The information gathered in this project can serve as input to the development of a long-term strategy and plan for addressing human performance in these areas through regulatory research. Addressing human-performance issues will provide the technical basis from which regulatory review guidance can be developed to meet these challenges. The availability of this review guidance will help set clear expectations for how the NRC staff will evaluate new designs, reduce regulatory uncertainty, and provide a well-defined path to new nuclear power plant

  1. Analysis of Common Fatigue Details in Steel Truss Structures

    Institute of Scientific and Technical Information of China (English)

    张玉玲; 潘际炎; 潘际銮

    2004-01-01

    Generally, the number of fatigue cycles, the range of the repeated stresses, and the type of structural detail are the key factors affecting fatigue in large-scale welded structures. Seven types of structural details were tested using a 2000-kN hydraulic-pressure-servo fatigue machine to imitate fatigue behavior in modern steel truss structures fabricated using thicker welded steel plates and integral joint technology. The details included longitudinal edge welds, welded attachment details, integral joints, and weld repairs on plate edges. The fatigue damage locations show that the stress (normal or shear), the shape, and the location of the weld start and end points are three major factors reducing the fatigue strength. The test results can be used for similar large structures.

  2. The fuzzy approach to statistical analysis

    NARCIS (Netherlands)

    Coppi, Renato; Gil, Maria A.; Kiers, Henk A. L.

    2006-01-01

    For the last decades, research studies have been developed in which a coalition of Fuzzy Sets Theory and Statistics has been established for different purposes, namely: (i) to introduce new data analysis problems in which the objective involves either fuzzy relationships or fuzzy terms;

  3. Statistical Analysis of Random Simulations : Bootstrap Tutorial

    NARCIS (Netherlands)

    Deflandre, D.; Kleijnen, J.P.C.

    2002-01-01

    The bootstrap is a simple but versatile technique for the statistical analysis of random simulations. This tutorial explains the basics of that technique and applies it to the well-known M/M/1 queuing simulation. In that numerical example, different responses are studied. For some responses, bootstrap
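
    A basic bootstrap in the spirit of the tutorial might look as follows: simulate M/M/1 waiting times via the Lindley recursion, then resample them to get a confidence interval for the mean wait. The rates and sample sizes are arbitrary choices, and the plain bootstrap shown here ignores the serial correlation in queue data, which a careful analysis must address.

        # Bootstrap confidence interval for the mean waiting time of an M/M/1 queue.
        import numpy as np

        rng = np.random.default_rng(1)
        lam, mu, n = 0.8, 1.0, 500

        inter = rng.exponential(1 / lam, n)   # interarrival times
        serve = rng.exponential(1 / mu, n)    # service times
        w = np.zeros(n)
        for i in range(1, n):                 # Lindley recursion for waiting times
            w[i] = max(0.0, w[i - 1] + serve[i - 1] - inter[i])

        boot = [rng.choice(w, size=n, replace=True).mean() for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"mean wait {w.mean():.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")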

  4. Bayesian Statistics for Biological Data: Pedigree Analysis

    Science.gov (United States)

    Stanfield, William D.; Carlton, Matthew A.

    2004-01-01

    Bayes' formula is applied to the biological problem of pedigree analysis to show that Bayes' formula and non-Bayesian or "classical" methods of probability calculation give different answers. First-year college students of biology can thus be introduced to Bayesian statistics.
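
    A worked example of the kind of pedigree computation the article refers to, using the classic X-linked recessive carrier problem (the numbers are the textbook example, not necessarily the article's): a woman has a prior probability 1/2 of being a carrier and three unaffected sons.

        # Bayes' formula on a pedigree: posterior carrier probability.
        from fractions import Fraction

        prior_carrier = Fraction(1, 2)
        p_data_if_carrier = Fraction(1, 2) ** 3   # each son unaffected with prob 1/2
        p_data_if_not = Fraction(1)               # sons always unaffected otherwise

        posterior = (prior_carrier * p_data_if_carrier) / (
            prior_carrier * p_data_if_carrier
            + (1 - prior_carrier) * p_data_if_not)
        print(posterior)   # 1/9, versus the naive "classical" answer of 1/2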

  5. Selected papers on analysis, probability, and statistics

    CERN Document Server

    Nomizu, Katsumi

    1994-01-01

    This book presents papers that originally appeared in the Japanese journal Sugaku. The papers fall into the general area of mathematical analysis as it pertains to probability and statistics, dynamical systems, differential equations and analytic function theory. Among the topics discussed are: stochastic differential equations, spectra of the Laplacian and Schrödinger operators, nonlinear partial differential equations which generate dissipative dynamical systems, fractal analysis on self-similar sets and the global structure of analytic functions.

  6. Statistical Analysis of Thermal Analysis Margin

    Science.gov (United States)

    Garrison, Matthew B.

    2011-01-01

    NASA Goddard Space Flight Center requires that each project demonstrate a minimum 5 °C margin between temperature predictions and hot and cold flight operational limits. The bounding temperature predictions include worst-case environment and thermal optical properties. The purpose of this work is to assess how current missions are performing against their pre-launch bounding temperature predictions and to suggest possible changes to the thermal analysis margin rules.

  7. The Statistical Analysis of Time Series

    CERN Document Server

    Anderson, T W

    2011-01-01

    The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences George

  8. Statistical analysis of next generation sequencing data

    CERN Document Server

    Nettleton, Dan

    2014-01-01

    Next Generation Sequencing (NGS) is the latest high throughput technology to revolutionize genomic research. NGS generates massive genomic datasets that play a key role in the big data phenomenon that surrounds us today. To extract signals from high-dimensional NGS data and make valid statistical inferences and predictions, novel data analytic and statistical techniques are needed. This book contains 20 chapters written by prominent statisticians working with NGS data. The topics range from basic preprocessing and analysis with NGS data to more complex genomic applications such as copy number variation and isoform expression detection. Research statisticians who want to learn about this growing and exciting area will find this book useful. In addition, many chapters from this book could be included in graduate-level classes in statistical bioinformatics for training future biostatisticians who will be expected to deal with genomic data in basic biomedical research, genomic clinical trials and personalized med...

  9. Statistical Tools for Forensic Analysis of Toolmarks

    Energy Technology Data Exchange (ETDEWEB)

    David Baldwin; Max Morris; Stan Bajic; Zhigang Zhou; James Kreiser

    2004-04-22

    Recovery and comparison of toolmarks, footprint impressions, and fractured surfaces connected to a crime scene are of great importance in forensic science. The purpose of this project is to provide statistical tools for the validation of the proposition that particular manufacturing processes produce marks on the work-product (or tool) that are substantially different from tool to tool. The approach to validation involves the collection of digital images of toolmarks produced by various tool manufacturing methods on produced work-products and the development of statistical methods for data reduction and analysis of the images. The developed statistical methods provide a means to objectively calculate a "degree of association" between matches of similarly produced toolmarks. The basis for statistical method development relies on "discriminating criteria" that examiners use to identify features and spatial relationships in their analysis of forensic samples. The developed data reduction algorithms utilize the same rules used by examiners for classification and association of toolmarks.

  10. Statistical Analysis of Iberian Peninsula Megaliths Orientations

    Science.gov (United States)

    González-García, A. C.

    2009-08-01

    Megalithic monuments have been intensively surveyed and studied from the archaeoastronomical point of view in the past decades. We have orientation measurements for over one thousand megalithic burial monuments in the Iberian Peninsula, from several different periods. These data, however, still lack a sound interpretation. A way to classify and start to understand such orientations is by means of statistical analysis of the data. A first attempt is made with simple statistical variables and a mere comparison between the different areas. In order to minimise the subjectivity in the process, a further, more elaborate analysis is performed. Some interesting results linking the orientation and the geographical location will be presented. Finally, I will present some models comparing the orientation of the megaliths in the Iberian Peninsula with the rising of the sun and the moon at several times of the year.
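
    One simple, generic tool for the non-uniformity question such surveys raise is the Rayleigh test on orientation azimuths (not necessarily the analysis used in this talk). The azimuths below are invented, and the p-value uses the standard first-order approximation.

        # Circular mean and Rayleigh test for a sample of monument azimuths.
        import numpy as np

        az = np.deg2rad([88, 95, 102, 110, 93, 99, 107, 90, 118, 104])  # deg E of N
        n = len(az)
        C, S = np.sum(np.cos(az)), np.sum(np.sin(az))
        R = np.hypot(C, S) / n                      # mean resultant length
        z = n * R**2
        p = np.exp(-z) * (1 + (2 * z - z**2) / (4 * n))  # first-order approximation
        mean_dir = np.rad2deg(np.arctan2(S, C))
        print(f"mean azimuth {mean_dir:.1f} deg, R = {R:.2f}, p = {p:.4f}")
        # a small p rejects uniform orientations in favour of a preferred direction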

  11. Statistical quality control through overall vibration analysis

    Science.gov (United States)

    Carnero, M. a. Carmen; González-Palma, Rafael; Almorza, David; Mayorga, Pedro; López-Escobar, Carlos

    2010-05-01

    The present study introduces the concept of statistical quality control in automotive wheel bearing manufacturing processes. Defects in the products under analysis can have a direct influence on passengers' safety and comfort. At present, the use of vibration analysis on machine tools for quality control purposes is not very extensive in manufacturing facilities. Noise and vibration are common quality problems in bearings. These failure modes likely occur under certain operating conditions and do not require high vibration amplitudes but relate to certain vibration frequencies. The vibration frequencies are affected by the type of surface problems (chattering) of ball races that are generated through grinding processes. The purpose of this paper is to identify grinding process variables that affect the quality of bearings by using statistical principles in the field of machine tools. In addition, an evaluation of the quality results of the finished parts under different combinations of process variables is assessed. This paper intends to establish the foundations to predict the quality of the products through the analysis of self-induced vibrations during the contact between the grinding wheel and the parts. To achieve this goal, the overall self-induced vibration readings under different combinations of process variables are analysed using statistical tools. The analysis of data and design of experiments follows a classical approach, considering all potential interactions between variables. The analysis of data is conducted through analysis of variance (ANOVA) for data sets that meet normality and homoscedasticity criteria. This paper utilizes different statistical tools to support the conclusions, such as the chi-squared, Shapiro-Wilk, symmetry, kurtosis, Cochran, Bartlett, Hartley and Kruskal-Wallis tests. The analysis presented is the starting point to extend the use of predictive techniques (vibration analysis) for quality control. This paper demonstrates the existence
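
    A minimal sketch of the decision path the paper describes: check normality and equal variances first, then run ANOVA or fall back to a non-parametric test. The three vibration samples are synthetic placeholders, not measurements from the study.

        # Normality/homoscedasticity checks, then ANOVA or Kruskal-Wallis.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        groups = [rng.normal(m, 0.3, 25) for m in (2.0, 2.2, 2.1)]  # mm/s RMS, synthetic

        normal = all(stats.shapiro(g)[1] > 0.05 for g in groups)    # Shapiro-Wilk
        equal_var = stats.bartlett(*groups)[1] > 0.05               # Bartlett

        if normal and equal_var:
            stat, p = stats.f_oneway(*groups)    # classical one-way ANOVA
        else:
            stat, p = stats.kruskal(*groups)     # non-parametric fallback
        print(f"normal={normal}, equal_var={equal_var}, p={p:.3f}")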

  12. Statistical mechanics analysis of sparse data.

    Science.gov (United States)

    Habeck, Michael

    2011-03-01

    Inferential structure determination uses Bayesian theory to combine experimental data with prior structural knowledge into a posterior probability distribution over protein conformational space. The posterior distribution encodes everything one can say objectively about the native structure in the light of the available data and additional prior assumptions and can be searched for structural representatives. Here an analogy is drawn between the posterior distribution and the canonical ensemble of statistical physics. A statistical mechanics analysis assesses the complexity of a structure calculation globally in terms of ensemble properties. Analogs of the free energy and density of states are introduced; partition functions evaluate the consistency of prior assumptions with data. Critical behavior is observed with dwindling restraint density, which impairs structure determination with too sparse data. However, prior distributions with improved realism ameliorate the situation by lowering the critical number of observations. An in-depth analysis of various experimentally accessible structural parameters and force field terms will facilitate a statistical approach to protein structure determination with sparse data that avoids bias as much as possible.

  13. Sensitivity analysis and related analysis : A survey of statistical techniques

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This paper reviews the state of the art in five related types of analysis, namely (i) sensitivity or what-if analysis, (ii) uncertainty or risk analysis, (iii) screening, (iv) validation, and (v) optimization. The main question is: when should which type of analysis be applied; which statistical

  14. Statistical analysis of sleep spindle occurrences.

    Science.gov (United States)

    Panas, Dagmara; Malinowska, Urszula; Piotrowski, Tadeusz; Żygierewicz, Jarosław; Suffczyński, Piotr

    2013-01-01

    Spindles - a hallmark of stage II sleep - are a transient oscillatory phenomenon in the EEG believed to reflect thalamocortical activity contributing to unresponsiveness during sleep. Currently spindles are often classified into two classes: fast spindles, with a frequency of around 14 Hz, occurring in the centro-parietal region; and slow spindles, with a frequency of around 12 Hz, prevalent in the frontal region. Here we aim to establish whether the spindle generation process also exhibits spatial heterogeneity. Electroencephalographic recordings from 20 subjects were automatically scanned to detect spindles and the time occurrences of spindles were used for statistical analysis. Gamma distribution parameters were fit to each inter-spindle interval distribution, and a modified Wald-Wolfowitz lag-1 correlation test was applied. Results indicate that not all spindles are generated by the same statistical process, but this dissociation is not spindle-type specific. Although this dissociation is not topographically specific, a single generator for all spindle types appears unlikely.
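
    A small sketch of the interval analysis the abstract outlines, using simulated inter-spindle intervals in place of EEG-derived ones: fit a gamma distribution and inspect the lag-1 serial correlation (the paper itself uses a modified Wald-Wolfowitz test for the latter).

        # Gamma fit and lag-1 correlation of (simulated) inter-spindle intervals.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        intervals = rng.gamma(shape=1.8, scale=6.0, size=300)   # seconds, synthetic

        a, loc, scale = stats.gamma.fit(intervals, floc=0)      # MLE, location at 0
        print(f"gamma shape = {a:.2f}, scale = {scale:.1f} s")
        # shape near 1 would be consistent with a memoryless (Poisson-like) generator

        r1 = np.corrcoef(intervals[:-1], intervals[1:])[0, 1]   # serial dependence
        print(f"lag-1 correlation = {r1:.3f}")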

  15. Statistical error analysis of reactivity measurement

    Energy Technology Data Exchange (ETDEWEB)

    Thammaluckan, Sithisak; Hah, Chang Joo [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    After statistical analysis, it was confirmed that each group was sampled from the same population. It is observed in Table 7 that the mean error decreases as core size increases; application of the bias factor obtained from this research reduces the mean error further. The point kinetics model had been used to measure control rod worth without 3D spatial information of the neutron flux or power distribution, which causes inaccurate results. Dynamic Control rod Reactivity Measurement (DCRM) was employed to take the 3D spatial information of the flux into account in the point kinetics model. The measured bank worth probably contains some uncertainty, such as methodology uncertainty and measurement uncertainty, and those uncertainties may vary with the size of the core and the magnitude of the reactivity. The goal of this research is to investigate the effect of core size and magnitude of control rod worth on the error of reactivity measurement using statistics.

  16. Statistical Analysis of Ship Collisions with Bridges in China Waterway

    Institute of Scientific and Technical Information of China (English)

    DAI Tong-yu; NIE Wu; LIU Ying-jie; WANG Li-ping

    2002-01-01

    Based on investigations of ship collision accidents with bridges in Chinese waterways, a database of ship collisions with bridges (SCB) is developed in this paper. It includes detailed information about more than 200 accidents near ships' waterways in the last four decades in which ships collided with bridges, and a tentative statistical analysis is presented. The frequency of ship collisions with bridges is increasing, and barge systems are involved in more accidents than single ships. The main cause of ship-bridge collisions is human error, which accounts for 70% of the accidents. The number of accidents during the flood season is over 3-6 times that of the March-June period; according to the statistical analysis, the probability follows a normal distribution. Visibility and the span between piers also have an effect on the frequency of the accidents.

  17. Detailed analysis of observed antiprotons in cosmic rays

    Directory of Open Access Journals (Sweden)

    P Davoudifar

    2009-12-01

    Full Text Available In the present work, the origin of antiprotons observed in cosmic rays (above the atmosphere) is analyzed in detail. We have assumed that the primaries, whose interactions with the interstellar medium are one of the most important sources of antiprotons, originate in type II supernovae, and we have used a diffusion model for their propagation. We have used the latest parameterization of the antiproton production cross section in pp collisions (instead of the well-known parameterization introduced by Tan et al.) as well as our calculated residence time for the primaries. The resulting intensity shows that the secondary antiprotons produced in pp collisions in the Galaxy have a high enough population that no excess of extragalactic antiprotons need be invoked. Also, there is a high degree of uncertainty in the different parameters.

  18. On quantum statistics in data analysis

    CERN Document Server

    Pavlovic, Dusko

    2008-01-01

    Originally, quantum probability theory was developed to analyze statistical phenomena in quantum systems, where classical probability theory does not apply, because the lattice of measurable sets is not necessarily distributive. On the other hand, it is well known that the lattices of concepts, that arise in data analysis, are in general also non-distributive, albeit for completely different reasons. In his recent book, van Rijsbergen argues that many of the logical tools developed for quantum systems are also suitable for applications in information retrieval. I explore the mathematical support for this idea on an abstract vector space model, covering several forms of data analysis (information retrieval, data mining, collaborative filtering, formal concept analysis...), and roughly based on an idea from categorical quantum mechanics. It turns out that quantum (i.e., noncommutative) probability distributions arise already in this rudimentary mathematical framework. Moreover, a Bell-type inequality is formula...

  19. Multiscale statistical analysis of coronal solar activity

    CERN Document Server

    Gamborino, Diana; Martinell, Julio J

    2016-01-01

    Multi-filter images from the solar corona are used to obtain temperature maps which are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multiscale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport is also extracted from the analysis.
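
    The core POD operation is an SVD of a mean-subtracted snapshot matrix, which can be sketched in a few lines; the "temperature maps" below are random fields standing in for real coronal data.

        # Proper orthogonal decomposition of a snapshot matrix via SVD.
        import numpy as np

        rng = np.random.default_rng(5)
        snapshots = rng.normal(size=(64 * 64, 30))        # 30 flattened maps
        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)

        energy = s**2 / np.sum(s**2)                      # modal energy fractions
        print("energy captured by first 3 modes:", energy[:3].sum())
        # U[:, k] is the k-th spatial mode; Vt[k] gives its time coefficients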

  20. Statistical Hot Channel Analysis for the NBSR

    Energy Technology Data Exchange (ETDEWEB)

    Cuadra A.; Baek J.

    2014-05-27

    A statistical analysis of thermal limits has been carried out for the research reactor (NBSR) at the National Institute of Standards and Technology (NIST). The objective of this analysis was to update the uncertainties of the hot channel factors with respect to previous analysis for both high-enriched uranium (HEU) and low-enriched uranium (LEU) fuels. Although uncertainties in key parameters which enter into the analysis are not yet known for the LEU core, the current analysis uses reasonable approximations instead of conservative estimates based on HEU values. Cumulative distribution functions (CDFs) were obtained for critical heat flux ratio (CHFR), and onset of flow instability ratio (OFIR). As was done previously, the Sudo-Kaminaga correlation was used for CHF and the Saha-Zuber correlation was used for OFI. Results were obtained for probability levels of 90%, 95%, and 99.9%. As an example of the analysis, the results for both the existing reactor with HEU fuel and the LEU core show that CHFR would have to be above 1.39 to assure with 95% probability that there is no CHF. For the OFIR, the results show that the ratio should be above 1.40 to assure with a 95% probability that OFI is not reached.
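
    A generic illustration of the statistical approach described above: propagate uncertain hot-channel factors through a ratio-type limit by Monte Carlo and read off the value exceeded with 95% probability. All distributions and numbers are invented stand-ins, not NBSR data or the report's actual factors.

        # Monte Carlo CDF for a CHFR-like ratio under uncertain multipliers.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 100_000
        nominal_chfr = 2.0
        f_power = rng.normal(1.00, 0.05, n)   # power measurement factor (assumed)
        f_flow = rng.normal(1.00, 0.03, n)    # flow uncertainty factor (assumed)
        f_corr = rng.normal(1.00, 0.08, n)    # CHF correlation uncertainty (assumed)

        chfr = nominal_chfr * f_flow * f_corr / f_power
        print("5th percentile CHFR:", np.percentile(chfr, 5))
        # the design passes at the 95% level if this percentile stays above 1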

  1. Cutting costs through detailed probabilistic fire risk analysis

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luiz; Huser, Asmund; Vianna, Savio [Det Norske Veritas PRINCIPIA, Rio de Janeiro, RJ (Brazil)

    2004-07-01

    A new procedure for calculation of fire risks to offshore installations has been developed. The purposes of the procedure are to calculate the escalation and impairment frequencies to be applied in quantitative risk analyses, to optimize Passive Fire Protection (PFP) arrangement, and to optimize other fire mitigation means. The novelties of the procedure are that it uses state of the art Computational Fluid Dynamics (CFD) models to simulate fires and radiation, as well as the use of a probabilistic approach to decide the dimensioning fire loads. A CFD model of an actual platform was used to investigate the dynamic properties of a large set of jet fires, resulting in detailed knowledge of the important parameters that decide the severity of offshore fires. These results are applied to design the procedure. Potential increase in safety is further obtained for those conditions where simplified tools may have failed to predict abnormal heat loads due to geometrical effects. Using a field example it is indicated that the probabilistic approach can give significant reductions in PFP coverage with corresponding cost savings, still keeping the risk at acceptable level. (author)

  2. Detailed analysis of turbulent flows in air curtains

    NARCIS (Netherlands)

    Jaramillo, Julian E.; Perez-Segarra, Carlos D.; Lehmkuhl, Oriol; Castro, Jesus

    2011-01-01

    In order to prevent entrainment, an air curtain should provide a jet with low turbulence level, and enough momentum to counteract pressure differences across the opening. Consequently, the analysis of the discharge plenum should be taken into consideration. Hence, the main object of this paper is to

  3. Lightning climatology in the Congo Basin: detailed analysis

    Science.gov (United States)

    Soula, Serge; Kigotsi, Jean; Georgis, Jean-François; Barthe, Christelle

    2016-04-01

    The lightning climatology of the Congo Basin, including several countries of Central Africa, is analyzed in detail for the first time. It is based on World Wide Lightning Location Network (WWLLN) data for the period from 2005 to 2013. A comparison of these data with the Lightning Imaging Sensor (LIS) data for the same period shows that the WWLLN detection efficiency (DE) in the region increases from about 1.70 % at the beginning of the period to 5.90 % in 2013, relative to LIS data, but not uniformly over the whole 2750 km × 2750 km area. Both the annual flash density and the number of stormy days show sharp maximum values localized in the east of the Democratic Republic of Congo (DRC), west of Lake Kivu, regardless of the reference year and the period of the year. These maxima reach 12.86 fl km-2 and 189 days, respectively, in 2013, and correspond to a very active region located at the rear of the Virunga mountain range, characterised by summits that can reach 3000 m. The presence of this range plays a role in thunderstorm development throughout the year. Estimating this local maximum of the lightning density by taking the DE into account leads to a value consistent with the global climatology by Christian et al. (2003) and other authors: a mean maximum value of about 157 fl km-2 y-1 is found for the annual lightning density. The zonal distribution of the lightning flashes exhibits a maximum between 1°S and 2°S, and about 56 % of the flashes are located south of the equator in the 10°S - 10°N interval. The diurnal evolution of the flash rate has a maximum between 1400 and 1700 UTC, depending on the reference year, in agreement with previous work in other regions of the world.

  4. Statistical energy analysis of similarly coupled systems

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian

    2002-01-01

    Based on the principle of Statistical Energy Analysis (SEA) for non-conservatively coupled dynamical systems under non-correlative or correlative excitations, an energy relationship between two similar SEA systems is established in this paper. The energy relationship is verified theoretically and experimentally on two similar SEA systems, i.e., a coupled panel-beam structure and a coupled panel-sideframe structure, in the cases of conservative coupling and non-conservative coupling respectively. As an application of the method, the relationship between the noise power radiated from two similar cutting systems is studied. Results show good agreement between theory and experiment, and the method is valuable for analyzing dynamical problems of a complicated system starting from those of a simple one.

  5. STATISTICAL ANALYSIS OF PUBLIC ADMINISTRATION PAY

    Directory of Open Access Journals (Sweden)

    Elena I. Dobrolyubova

    2014-01-01

    Full Text Available This article reviews the progress achieved in improving the pay system in public administration and outlines the key issues to be resolved. The cross-country comparisons presented in the article suggest high differentiation in pay levels depending on position held; in fact, this differentiation in Russia exceeds that in the OECD almost twofold. The analysis of the internal pay structure demonstrates that the low share of base pay leads to the perverse nature of 'stimulation elements' of the pay system, which in fact appear to be used mostly for compensation purposes. The analysis of regional statistical data demonstrates that despite high differentiation among regions in terms of their revenue potential, average public official pay is strongly correlated with the average regional pay.

  6. Wavelet and statistical analysis for melanoma classification

    Science.gov (United States)

    Nimunkar, Amit; Dhawan, Atam P.; Relue, Patricia A.; Patwardhan, Sachin V.

    2002-05-01

    The present work focuses on spatial/frequency analysis of epiluminescence images of dysplastic nevi and melanomas. A three-level wavelet decomposition was performed on skin-lesion images to obtain coefficients in the wavelet domain. A total of 34 features were obtained by computing ratios of the mean, variance, energy and entropy of the wavelet coefficients along with the mean and standard deviation of image intensity. An unpaired t-test for normally distributed features and the Wilcoxon rank-sum test for non-normally distributed features were performed to select statistically correlated features. For our data set, the statistical analysis reduced the feature set from 34 to 5 features. For classification, the discriminant functions were computed in the feature space using the Mahalanobis distance. ROC curves were generated and evaluated for false positive fractions from 0.1 to 0.4. Most of the discriminant functions provided a true positive rate for melanoma of 93% with a false positive rate up to 21%.
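
    A sketch of the classification step in the reduced feature space: score a new lesion by its Mahalanobis distance to each class centroid. The feature matrices are random placeholders for the wavelet-derived ratios, not data from the study.

        # Mahalanobis-distance discriminant between two lesion classes.
        import numpy as np

        rng = np.random.default_rng(4)
        nevus = rng.normal(0.0, 1.0, (40, 5))      # 5 selected features per lesion
        melanoma = rng.normal(0.8, 1.0, (40, 5))

        def mahalanobis_sq(x, data):
            """Squared Mahalanobis distance from x to the centroid of data."""
            mu = data.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
            d = x - mu
            return float(d @ cov_inv @ d)

        x_new = rng.normal(0.7, 1.0, 5)            # unseen lesion
        label = ("melanoma" if mahalanobis_sq(x_new, melanoma)
                 < mahalanobis_sq(x_new, nevus) else "nevus")
        print(label)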

  7. A Detailed Spectroscopic and Photometric Analysis of DQ White Dwarfs

    CERN Document Server

    Dufour, P; Fontaine, G

    2005-01-01

    We present an analysis of spectroscopic and photometric observations of cool DQ white dwarfs based on improved model atmosphere calculations. In particular, we revise the atmospheric parameters of the trigonometric parallax sample of Bergeron, Leggett, & Ruiz, and discuss the astrophysical implications on the temperature scale and mean mass, as well as the chemical evolution of these stars. We also analyze 40 new DQ stars discovered in the first data release of the Sloan Digital Sky Survey. Our analysis confirms that effective temperatures derived from model atmospheres including carbon are significantly lower than the temperatures obtained from pure helium models. Similarly, the mean mass of the trigonometric parallax sample, ⟨M⟩ = 0.62 Mo, is significantly lower than that obtained from pure helium models, ⟨M⟩ = 0.73 Mo, and more consistent with the spectroscopic mean mass of DB stars, ⟨M⟩ = 0.59 Mo, the most likely progenitors of DQ white dwarfs. We find that DQ stars form a remarkably well defined sequence in a ...

  8. Resolution requirements for monitor viewing of digital flat-panel detector radiographs: a contrast detail analysis

    Energy Technology Data Exchange (ETDEWEB)

    Peer, Siegfried; Giacomuzzi, Salvatore M.; Peer, Regina; Gassner, Eva; Steingruber, Iris; Jaschke, Werner [Department of Radiology, University Hospital, Anichstrasse 35, 6020 Innsbruck (Austria)

    2003-02-01

    With the introduction of digital flat-panel detector systems into clinical practice, the still unresolved question of resolution requirements for picture archiving and communication system (PACS) workstation monitors has gained new momentum. This contrast detail analysis was thus performed to define the differences in observer performance in the detection of small low-contrast objects on clinical 1K and 2K monitor workstations. Images of the CDRAD 2.0 phantom were acquired at varying exposures on an indirect-type digital flat-panel detector. Three observers each evaluated a total of 15 images with respect to the threshold contrast for each detail size. The numbers of correctly identified objects were determined for all image subsets. No significant difference in the correct detection ratio was detected among the observers; however, the difference between the two types of workstations (1K vs 2K monitors), although less than 3%, was significant at the 95% confidence level. Slight but statistically significant differences exist in the detection of low-contrast nodular details visualized on 1K- and 2K-monitor workstations. Further work is needed to see if this result also holds for comparison of clinical flat-panel detector images and may, for example, influence the diagnostic accuracy of chest X-ray readings. (orig.)

  9. Statistical analysis of tourism destination competitiveness

    Directory of Open Access Journals (Sweden)

    Attilio Gardini

    2013-05-01

    Full Text Available The growing relevance of the tourism industry for modern advanced economies has increased the interest among researchers and policy makers in the statistical analysis of destination competitiveness. In this paper we outline a new model of destination competitiveness based on sound theoretical grounds, and we develop a statistical test of the model on sample data concerning Italian tourists' destination decisions and choices. Our model focuses on the tourism decision process, which starts from the demand schedule for holidays and ends with the choice of a specific holiday destination. The demand schedule is a function of individual preferences and of destination positioning, while the final decision is a function of the initial demand schedule and the information concerning services for accommodation and recreation in the selected destinations. Moreover, we extend previous studies that focused on image or attributes (such as climate and scenery) by paying more attention to the services for accommodation and recreation in the holiday destinations. We test the proposed model using empirical data collected from a sample of 1,200 Italian tourists interviewed in 2007 (October - December). Data analysis shows that the selection probability for a destination included in the consideration set is not proportional to its share of inclusion, because the share of inclusion is determined by the brand image while the selection of the actual holiday destination is influenced by the real supply conditions. The analysis of Italian tourists' preferences underlines the existence of a latent demand for foreign holidays, which points to a risk of market-share reduction for the Italian tourism system in the global market. We also find a snowball effect which helps the most popular destinations, mainly in the northern Italian regions.

  10. Multivariate statistical analysis of wildfires in Portugal

    Science.gov (United States)

    Costa, Ricardo; Caramelo, Liliana; Pereira, Mário

    2013-04-01

    Several studies demonstrate that wildfires in Portugal present high temporal and spatial variability as well as cluster behavior (Pereira et al., 2005, 2011). This study aims to contribute to the characterization of the fire regime in Portugal through multivariate statistical analysis of the time series of the number of fires and area burned in Portugal during the 1980-2009 period. The data used in the analysis is an extended version of the Portuguese Rural Fire Database (PRFD) (Pereira et al., 2011), provided by the National Forest Authority (Autoridade Florestal Nacional, AFN), the Portuguese Forest Service, which includes information for more than 500,000 fire records. There are many advanced techniques for examining the relationships among multiple time series at the same time (e.g., canonical correlation analysis, principal components analysis, factor analysis, path analysis, multiple analyses of variance, clustering systems). This study compares and discusses the results obtained with these different techniques. Pereira, M.G., Trigo, R.M., DaCamara, C.C., Pereira, J.M.C., Leite, S.M., 2005: "Synoptic patterns associated with large summer forest fires in Portugal". Agricultural and Forest Meteorology, 129, 11-25. Pereira, M.G., Malamud, B.D., Trigo, R.M., and Alves, P.I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).

  11. Statistical analysis of sleep spindle occurrences.

    Directory of Open Access Journals (Sweden)

    Dagmara Panas

    Full Text Available Spindles - a hallmark of stage II sleep - are a transient oscillatory phenomenon in the EEG believed to reflect thalamocortical activity contributing to unresponsiveness during sleep. Currently spindles are often classified into two classes: fast spindles, with a frequency of around 14 Hz, occurring in the centro-parietal region; and slow spindles, with a frequency of around 12 Hz, prevalent in the frontal region. Here we aim to establish whether the spindle generation process also exhibits spatial heterogeneity. Electroencephalographic recordings from 20 subjects were automatically scanned to detect spindles and the time occurrences of spindles were used for statistical analysis. Gamma distribution parameters were fit to each inter-spindle interval distribution, and a modified Wald-Wolfowitz lag-1 correlation test was applied. Results indicate that not all spindles are generated by the same statistical process, but this dissociation is not spindle-type specific. Although this dissociation is not topographically specific, a single generator for all spindle types appears unlikely.

  12. Managing Performance Analysis with Dynamic Statistical Projection Pursuit

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, J.S.; Reed, D.A.

    2000-05-22

    Computer systems and applications are growing more complex. Consequently, performance analysis has become more difficult due to the complex, transient interrelationships among runtime components. To diagnose these types of performance issues, developers must use detailed instrumentation to capture a large number of performance metrics. Unfortunately, this instrumentation may actually influence the performance analysis, leading the developer to an ambiguous conclusion. In this paper, we introduce a technique for focusing a performance analysis on interesting performance metrics. This technique, called dynamic statistical projection pursuit, identifies interesting performance metrics that the monitoring system should capture across some number of processors. By reducing the number of performance metrics, projection pursuit can limit the impact of instrumentation on the performance of the target system and can reduce the volume of performance data.

  14. Statistical Analysis of Bus Networks in India

    CERN Document Server

    Chatterjee, Atanu; Ramadurai, Gitakrishnan

    2015-01-01

    Through the past decade the field of network science has established itself as a common ground for the cross-fertilization of exciting inter-disciplinary studies which has motivated researchers to model almost every physical system as an interacting network consisting of nodes and links. Although public transport networks such as airline and railway networks have been extensively studied, the status of bus networks still remains in obscurity. In developing countries like India, where bus networks play an important role in day-to-day commutation, it is of significant interest to analyze their topological structure and answer some of the basic questions on their evolution, growth, robustness and resiliency. In this paper, we model the bus networks of major Indian cities as graphs in L-space, and evaluate their various statistical properties using concepts from network science. Our analysis reveals a wide spectrum of network topology with the common underlying feature of the small-world property. We also observe that these networks, although robust and resilient to random attacks, are particularly degree-sensitive.

  15. Strategic cost-benefit analysis of energy policies: detailed projections

    Energy Technology Data Exchange (ETDEWEB)

    Davitian, H.; Groncki, P.J.; Kleeman, P.; Lukachinski, J.

    1979-10-01

    Current US energy policy includes many programs directed toward restructuring the energy system in order to decrease US dependence on foreign supplies and to increase our reliance on plentiful and environmentally benign energy forms. However, recent events have led to renewed concern over the direction of current energy policy. This study describes three possible energy strategies and analyzes each in terms of its economic, environmental, and national security benefits and costs. Each strategy is represented by a specific policy. In the first, no additional programs or policies are initiated beyond those currently in effect or announced. The second is directed toward reducing the growth in energy demand, i.e., energy conservation. The third promotes increased domestic supply through accelerated development of synthetic and unconventional fuels. The analysis focuses on the evaluation and comparison of these strategy alternatives with respect to their energy, economic, and environmental consequences. Results indicate that conservation can substantially reduce import dependence and slow the growth of energy demand, with only a small macroeconomic cost and with substantial environmental benefits; the synfuels policy reduces imports by a smaller amount, does not reduce the growth in energy demand, involves substantial environmental costs and slows the rate of economic growth. These relationships could be different if the energy savings per unit cost for conservation are less than anticipated, or if the costs of synthetic fuels can be significantly lowered. Given these uncertainties, both conservation and RD&D support for synfuels should be included in future energy policy. However, between these policy alternatives, conservation appears to be the preferred strategy. The results of this study are presented in three reports (see also BNL--51105 and BNL--51128). 11 references, 3 figures, 61 tables.

  16. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    Science.gov (United States)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. The daily rainfall data for a period of 30 years are used to characterize normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall of the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return period of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The results of the analysis help determine the proper onset and withdrawal of the monsoon, information which is used for land preparation and sowing.
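
    The return-period calculation from plotting positions is simple enough to sketch. A hedged Python example using the Weibull plotting position T = (n + 1)/m on invented annual totals (the study does not state which of the available formulae it prefers):

        import numpy as np

        # Hypothetical annual rainfall totals (mm) for a 30-year record.
        annual_rainfall = np.array([
            812, 945, 1020, 640, 1150, 890, 760, 980, 1110, 700,
            850, 930, 1005, 675, 1200, 905, 780, 960, 1085, 720,
            840, 915, 990, 660, 1175, 880, 770, 950, 1095, 710,
        ])

        n = len(annual_rainfall)
        ranked = np.sort(annual_rainfall)[::-1]   # descending: rank 1 = largest
        ranks = np.arange(1, n + 1)

        # Weibull plotting position: exceedance probability P = m / (n + 1),
        # return period T = 1 / P = (n + 1) / m.  Other formulae (Hazen,
        # Gringorten, ...) differ only in the constants.
        return_period = (n + 1) / ranks

        for x, t in zip(ranked[:5], return_period[:5]):
            print(f"rainfall {x:6.0f} mm  ->  return period {t:5.1f} years")

        # Variability measures used in the record.
        mean, std = annual_rainfall.mean(), annual_rainfall.std(ddof=1)
        print(f"mean={mean:.0f} mm, sd={std:.0f} mm, CV={100 * std / mean:.1f}%")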

  17. On two methods of statistical image analysis

    NARCIS (Netherlands)

    Missimer, J; Knorr, U; Maguire, RP; Herzog, H; Seitz, RJ; Tellman, L; Leenders, KL

    1999-01-01

    The computerized brain atlas (CBA) and statistical parametric mapping (SPM) are two procedures for voxel-based statistical evaluation of PET activation studies. Each includes spatial standardization of image volumes, computation of a statistic, and evaluation of its significance. In addition, smooth

  18. Statistics Analysis Measures Painting of Cooling Tower

    Directory of Open Access Journals (Sweden)

    A. Zacharopoulou

    2013-01-01

    Full Text Available This study refers to the cooling tower of Megalopolis (constructed in 1975) and its protection from a corrosive environment. The maintenance of the cooling tower took place in 2008. The cooling tower was badly damaged by corrosion of the reinforcement. The parabolic cooling towers (of the electrical power plant) are a typical example of a construction exposed to a particularly aggressive environment. The protection of cooling towers is usually achieved through organic coatings. Because of the different environmental impacts on the internal and external sides of the cooling tower, a different paint application system is required for each. The present study refers to the damage caused by the corrosion process, the corrosive environments, the application of the painting, the quality control process, the measurements and statistical analysis, and the results. In the process of quality control the following measurements were taken into consideration: (1) examination of the adhesion with the cross-cut test, (2) examination of the film thickness, and (3) control of the pull-off resistance for concrete substrates and paintings. Finally, this study refers to the correlations of the measurements, the analysis of failures in relation to the quality of repair, and the rehabilitation of the cooling tower. This study also made a first attempt to apply specific corrosion inhibitors in such a large structure.

  19. An R package for statistical provenance analysis

    Science.gov (United States)

    Vermeesch, Pieter; Resentini, Alberto; Garzanti, Eduardo

    2016-05-01

    This paper introduces provenance, a software package within the statistical programming environment R, which aims to facilitate the visualisation and interpretation of large amounts of sedimentary provenance data, including mineralogical, petrographic, chemical and isotopic provenance proxies, or any combination of these. provenance comprises functions to: (a) calculate the sample size required to achieve a given detection limit; (b) plot distributional data such as detrital zircon U-Pb age spectra as Cumulative Age Distributions (CADs) or adaptive Kernel Density Estimates (KDEs); (c) plot compositional data as pie charts or ternary diagrams; (d) correct the effects of hydraulic sorting on sandstone petrography and heavy mineral composition; (e) assess the settling equivalence of detrital minerals and grain-size dependence of sediment composition; (f) quantify the dissimilarity between distributional data using the Kolmogorov-Smirnov and Sircombe-Hazelton distances, or between compositional data using the Aitchison and Bray-Curtis distances; (g) interpret multi-sample datasets by means of (classical and nonmetric) Multidimensional Scaling (MDS) and Principal Component Analysis (PCA); and (h) simplify the interpretation of multi-method datasets by means of Generalised Procrustes Analysis (GPA) and 3-way MDS. All these tools can be accessed through an intuitive query-based user interface, which does not require knowledge of the R programming language. provenance is free software released under the GPL-2 licence and will be further expanded based on user feedback.
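
    provenance itself is an R package; purely to illustrate the workflow behind items (f) and (g) - pairwise Kolmogorov-Smirnov distances between age distributions followed by multidimensional scaling - here is a hedged Python sketch with synthetic age spectra (scipy and scikit-learn assumed):

        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.manifold import MDS

        # Synthetic detrital zircon U-Pb age spectra (Ma) for four samples.
        rng = np.random.default_rng(1)
        samples = {
            "A": rng.normal(1100, 60, 80),
            "B": rng.normal(1120, 70, 90),
            "C": np.concatenate([rng.normal(600, 40, 50), rng.normal(1100, 60, 40)]),
            "D": rng.normal(600, 50, 70),
        }
        names = list(samples)

        # Pairwise Kolmogorov-Smirnov distances between the age distributions.
        n = len(names)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = ks_2samp(samples[names[i]], samples[names[j]]).statistic
                D[i, j] = D[j, i] = d

        # Metric MDS on the precomputed dissimilarity matrix: similar samples
        # plot close together, as in the package's MDS maps.
        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(D)
        for name, (x, y) in zip(names, coords):
            print(f"{name}: ({x:+.3f}, {y:+.3f})")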

  20. STATISTICAL ANALYSIS OF SOME EXPERIMENTAL FATIGUE TESTS RESULTS

    OpenAIRE

    Adrian Stere PARIS; Gheorghe AMZA; Claudiu BABIŞ; Dan Niţoi

    2012-01-01

    The paper details the results of processing fatigue experiment data to find the regression function. Statistical processing methods such as ANOVA and regression calculations are applied using appropriate software, with emphasis on popular packages like MS Excel and CurveExpert.

  1. Using Statistical Analysis Software to Advance Nitro Plasticizer Wettability

    Energy Technology Data Exchange (ETDEWEB)

    Shear, Trevor Allan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-29

    Statistical analysis in science is an extremely powerful tool that is often underutilized. Additionally, it is frequently the case that data is misinterpreted or not used to its fullest extent. Utilizing the advanced software JMP®, many aspects of experimental design and data analysis can be evaluated and improved. This overview will detail the features of JMP® and how they were used to advance a project, resulting in time and cost savings, as well as the collection of scientifically sound data. The project analyzed in this report addresses the inability of a nitro plasticizer to coat a gold coated quartz crystal sensor used in a quartz crystal microbalance. Through the use of the JMP® software, the wettability of the nitro plasticizer was increased by over 200% using an atmospheric plasma pen, ensuring good sample preparation and reliable results.

  2. Introduction to applied statistical signal analysis guide to biomedical and electrical engineering applications

    CERN Document Server

    Shiavi, Richard

    2007-01-01

    Introduction to Applied Statistical Signal Analysis is designed for the experienced individual with a basic background in mathematics, science, and computing. With this prior knowledge, the reader will coast through the practical introduction and move on to signal analysis techniques, commonly used in a broad range of engineering areas such as biomedical engineering, communications, geophysics, and speech. Introduction to Applied Statistical Signal Analysis intertwines theory and implementation with practical examples and exercises. Topics presented in detail include: mathematical

  3. Analysis of Variance: What Is Your Statistical Software Actually Doing?

    Science.gov (United States)

    Li, Jian; Lomax, Richard G.

    2011-01-01

    Users assume statistical software packages produce accurate results. In this article, the authors systematically examined Statistical Package for the Social Sciences (SPSS) and Statistical Analysis System (SAS) for 3 analysis of variance (ANOVA) designs: mixed-effects ANOVA, fixed-effects analysis of covariance (ANCOVA), and nested ANOVA. For each…

  4. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
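
    The record does not spell out its model, but the uncertainty-driven sampling idea can be sketched with a Gaussian-process classifier as a stand-in and a toy "simulator" whose unknown safety boundary is a circle of radius 0.6 (numpy and scikit-learn assumed):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessClassifier
        from sklearn.gaussian_process.kernels import RBF

        def simulate(x):
            """Toy simulator: returns 1 (safe) inside a circle of radius 0.6."""
            return int(np.linalg.norm(x) < 0.6)

        rng = np.random.default_rng(0)
        # A few initial simulation runs; the origin guarantees one safe sample.
        X = np.vstack([[0.0, 0.0], rng.uniform(-1, 1, size=(9, 2))])
        y = np.array([simulate(x) for x in X])

        pool = rng.uniform(-1, 1, size=(2000, 2))    # candidate inputs

        for _ in range(15):                          # active-learning iterations
            gpc = GaussianProcessClassifier(kernel=1.0 * RBF(0.3)).fit(X, y)
            p = gpc.predict_proba(pool)[:, 1]
            k = np.argmin(np.abs(p - 0.5))           # most uncertain candidate
            X = np.vstack([X, pool[k]])
            y = np.append(y, simulate(pool[k]))

        # Points with p near 0.5 trace the estimated boundary; a parameterized
        # shape (here a circle radius) could then be fitted to them.
        gpc = GaussianProcessClassifier(kernel=1.0 * RBF(0.3)).fit(X, y)
        p = gpc.predict_proba(pool)[:, 1]
        boundary = pool[np.abs(p - 0.5) < 0.1]
        print(f"estimated radius ~ {np.linalg.norm(boundary, axis=1).mean():.2f}")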

  5. Statistical Analysis of Bus Networks in India

    Science.gov (United States)

    2016-01-01

    In this paper, we model the bus networks of six major Indian cities as graphs in L-space and evaluate their various statistical properties. While airline and railway networks have been extensively studied, a comprehensive study of the structure and growth of bus networks is lacking. In India, where bus transport plays an important role in day-to-day commutation, it is of significant interest to analyze its topological structure and answer basic questions about its evolution, growth, robustness and resiliency. Although the common feature of the small-world property is observed, our analysis reveals a wide spectrum of network topologies arising from significant variation in the degree-distribution patterns of the networks. We also observe that these networks, although robust and resilient to random attacks, are particularly degree-sensitive. Unlike networks such as the Internet, the WWW and airline networks, which are virtual, bus networks are physically constrained. Our findings therefore throw light on the evolution of such geographically constrained networks and will help us design more efficient bus networks in the future. PMID:27992590
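
    The L-space construction used in both bus-network records above is easy to demonstrate with networkx; the routes below are invented, not real city data:

        import networkx as nx

        # In L-space each bus stop is a node and consecutive stops on any
        # route are linked.  Toy route set standing in for a bus network.
        routes = [
            ["A", "B", "C", "D"],
            ["C", "E", "F"],
            ["B", "E", "G", "H"],
        ]

        G = nx.Graph()
        for route in routes:
            nx.add_path(G, route)

        # Statistical properties of the kind examined in the study.
        print("degrees:", sorted((d for _, d in G.degree()), reverse=True))
        print("average clustering:", nx.average_clustering(G))
        print("average shortest path:", nx.average_shortest_path_length(G))

        # Degree-sensitivity: removing the highest-degree stop fragments the
        # network far more than a typical random removal would.
        hub = max(G.degree(), key=lambda kv: kv[1])[0]
        G.remove_node(hub)
        print(f"components after removing hub {hub}:",
              nx.number_connected_components(G))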

  6. 78 FR 66929 - Intent To Conduct a Detailed Economic Impact Analysis

    Science.gov (United States)

    2013-11-07

    ... Conduct a Detailed Economic Impact Analysis AGENCY: Policy and Planning Division, Export-Import Bank of... public of its intent to conduct a detailed economic impact analysis regarding a loan guarantee to support the export of U.S.-manufactured Boeing 787 wide-body passenger aircraft to an airline in China. Export...

  7. Web-Based Statistical Sampling and Analysis

    Science.gov (United States)

    Quinn, Anne; Larson, Karen

    2016-01-01

    Consistent with the Common Core State Standards for Mathematics (CCSSI 2010), the authors write that they have asked students to do statistics projects with real data. To obtain real data, their students use the free Web-based app, Census at School, created by the American Statistical Association (ASA) to help promote civic awareness among school…

  8. Developments in statistical analysis in quantitative genetics

    DEFF Research Database (Denmark)

    Sorensen, Daniel

    2009-01-01

    A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and ...

  9. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2003-01-01

    The statistical fits have generally been made using all data and the lower tail of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. The results show that the 2-parameter Weibull distribution gives the best...

  10. Statistical network analysis for analyzing policy networks

    DEFF Research Database (Denmark)

    Robins, Garry; Lewis, Jenny; Wang, Peng

    2012-01-01

    and policy network methodology is the development of statistical modeling approaches that can accommodate such dependent data. In this article, we review three network statistical methods commonly used in the current literature: quadratic assignment procedures, exponential random graph models (ERGMs...... has much to offer in analyzing the policy process....

  11. Time Series Analysis Based on Running Mann Whitney Z Statistics

    Science.gov (United States)

    A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-Carlo methods.
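
    A sketch of the running-window idea in Python; here the U statistics are normalized with the standard large-sample normal approximation rather than the Monte-Carlo values the record refers to (scipy assumed, synthetic data with a level shift):

        import numpy as np
        from scipy.stats import mannwhitneyu

        def running_mw_z(series, window):
            """Z statistic comparing each window with the preceding one."""
            m = n = window
            mu = m * n / 2.0                               # E[U] under H0
            sigma = np.sqrt(m * n * (m + n + 1) / 12.0)    # sd of U under H0
            z = []
            for t in range(window, len(series) - window + 1):
                before = series[t - window:t]
                after = series[t:t + window]
                u = mannwhitneyu(after, before, alternative="two-sided").statistic
                z.append((u - mu) / sigma)
            return np.array(z)

        rng = np.random.default_rng(0)
        x = np.concatenate([rng.normal(0, 1, 100),
                            rng.normal(1, 1, 100)])        # shift at t = 100
        z = running_mw_z(x, window=20)
        peak = int(np.abs(z).argmax())
        print(f"max |Z| at window offset {peak}: {z[peak]:.2f}")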

  12. Federal Funds for Research and Development: Fiscal Years 1980, 1981, and 1982. Volume XXX. Detailed Statistical Tables. Surveys of Science Resources Series.

    Science.gov (United States)

    National Science Foundation, Washington, DC.

    During the March through July 1981 period a total of 36 Federal agencies and their subdivisions (95 individual respondents) submitted data in response to the Annual Survey of Federal Funds for Research and Development, Volume XXX, conducted by the National Science Foundation. The detailed statistical tables presented in this report were derived…

  13. The system for statistical analysis of logistic information

    Directory of Open Access Journals (Sweden)

    Khayrullin Rustam Zinnatullovich

    2015-05-01

    Full Text Available The current problem for managers in logistics and trading companies is the task of improving operational business performance and developing the logistics support of sales. The development of logistics support of sales presupposes the development and implementation of a set of works for the development of the existing warehouse facilities, including both a detailed description of the work performed and the timing of its implementation. Logistics engineering of a warehouse complex includes such tasks as: determining the number and types of technological zones, calculation of the required number of loading-unloading places, development of storage structures, development of pre-sales preparation zones, development of specifications of storage types, selection of loading-unloading equipment, detailed planning of the warehouse logistics system, creation of architectural-planning decisions, selection of information-processing equipment, etc. The currently used ERP and WMS systems do not allow solving the full list of logistics engineering problems. In this regard, the development of specialized software products, taking into account the specifics of warehouse logistics, and the subsequent integration of this software with ERP and WMS systems is a pressing task. In this paper we suggest a system for statistical analysis of logistics information, designed to meet the challenges of logistics engineering and planning. The proposed specialized software is designed to improve the efficiency of the operating business and the development of logistics support of sales. The system is based on the methods of statistical data processing, the methods of assessment and prediction of logistics performance, the methods for the determination and calculation of the data required for registration, storage and processing of metal products, as well as the methods for planning the reconstruction and development

  14. Statistical Analysis of Data for Timber Strengths

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hoffmeyer, P.

    Statistical analyses are performed for material strength parameters from approximately 6700 specimens of structural timber. Non-parametric statistical analyses and fits to the following distribution types have been investigated: Normal, Lognormal, 2-parameter Weibull and 3-parameter Weibull. The statistical fits have generally been made using all data (100%) and the lower tail (30%) of the data. The Maximum Likelihood Method and the Least Square Technique have been used to estimate the statistical parameters in the selected distributions. 8 different databases are analysed. The results show that 2-parameter Weibull (and Normal) distributions give the best fits to the data available, especially if tail fits are used, whereas the LogNormal distribution generally gives a poor fit and larger coefficients of variation, especially if tail fits are used.
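
    The two estimation methods named above can be sketched in a few lines; the strengths below are synthetic, the maximum likelihood fit uses scipy, and the least-squares fit regresses on the linearized Weibull CDF with median-rank plotting positions (an assumption, since the study's exact procedure is not given here):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic bending strengths (MPa) standing in for the timber data.
        x = np.sort(stats.weibull_min.rvs(c=3.5, scale=45.0, size=800,
                                          random_state=rng))

        # Maximum Likelihood fit of a 2-parameter Weibull (location = 0).
        c_ml, _, scale_ml = stats.weibull_min.fit(x, floc=0)

        # Least Squares on the linearized CDF,
        #   ln(-ln(1 - F)) = c * ln(x) - c * ln(scale),
        # with median-rank plotting positions for the empirical CDF F.
        n = len(x)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
        slope, intercept = np.polyfit(np.log(x), np.log(-np.log(1 - F)), 1)
        c_ls, scale_ls = slope, np.exp(-intercept / slope)

        print(f"ML : shape={c_ml:.2f}, scale={scale_ml:.1f}")
        print(f"LS : shape={c_ls:.2f}, scale={scale_ls:.1f}")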

  15. The features of Drosophila core promoters revealed by statistical analysis

    Directory of Open Access Journals (Sweden)

    Trifonov Edward N

    2006-06-01

    Full Text Available Abstract Background Experimental investigation of transcription is still a very labor- and time-consuming process. Only a few transcription initiation scenarios have been studied in detail. The mechanism of interaction between basal machinery and promoter, in particular core promoter elements, is not known for the majority of identified promoters. In this study, we reveal various transcription initiation mechanisms by statistical analysis of 3393 nonredundant Drosophila promoters. Results Using Drosophila-specific position-weight matrices, we identified promoters containing TATA box, Initiator, Downstream Promoter Element (DPE), and Motif Ten Element (MTE), as well as core elements discovered in Human (TFIIB Recognition Element (BRE) and Downstream Core Element (DCE)). Promoters utilizing known synergetic combinations of two core elements (TATA_Inr, Inr_MTE, Inr_DPE, and DPE_MTE) were identified. We also establish the existence of promoters with potentially novel synergetic combinations: TATA_DPE and TATA_MTE. Our analysis revealed several motifs with the features of promoter elements, including possible novel core promoter element(s). Comparison of Human and Drosophila showed consistent percentages of promoters with TATA, Inr, DPE, and synergetic combinations thereof, as well as most of the same functional and mutual positions of the core elements. No statistical evidence of MTE utilization in Human was found. Distinct nucleosome positioning in particular promoter classes was revealed. Conclusion We present lists of promoters that potentially utilize the aforementioned elements/combinations. The number of these promoters is two orders of magnitude larger than the number of promoters in which transcription initiation was experimentally studied. The sequences are ready to be experimentally tested or used for further statistical analysis. The developed approach may be utilized for other species.

  16. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  17. Statistical Analysis Of Data Sets Legislative Type

    Directory of Open Access Journals (Sweden)

    Gheorghe Săvoiu

    2013-06-01

    Full Text Available This paper identifies some characteristic statistical aspects of the dynamics and structure of annual legislation in the socio-economic system that has defined Romania over the last two decades. After a brief introduction devoted to the concepts of the social and economic system (SES) and societal computerized management (SCM) in Romania, the first section describes the indicators, the specific database and the investigative method, and a second section presents some descriptive statistics on the suggestive abnormality of the data series on the legislation of the last 20 years. A final remark underlines the difficult context of Romania's legislative adjustment to EU requirements.

  18. Statistical analysis of protein kinase specificity determinants

    DEFF Research Database (Denmark)

    Kreegipuu, Andres; Blom, Nikolaj; Brunak, Søren

    1998-01-01

    The site and sequence specificity of protein kinases, as well as the role of the secondary structure and surface accessibility of the phosphorylation sites on substrate proteins, was statistically analyzed. The experimental data were collected from the literature and are available on the World Wide Web.

  19. Statistical Analysis in Dental Research Papers.

    Science.gov (United States)

    1983-08-08

    Clinical Trials of Agents used in the Prevention and Treatment of Periodontal Diseases (5). Standards: summary data must be included; statistical methods...situations under clinical conditions. Research: qualitative research, as in joint tomography, where the results and conclusions are not amenable to

  20. Statistical analysis of medical data using SAS

    CERN Document Server

    Der, Geoff

    2005-01-01

    An Introduction to SAS; Describing and Summarizing Data; Basic Inference; Scatterplots, Correlation, Simple Regression and Smoothing; Analysis of Variance and Covariance; Multiple Regression; Logistic Regression; The Generalized Linear Model; Generalized Additive Models; Nonlinear Regression Models; The Analysis of Longitudinal Data I; The Analysis of Longitudinal Data II: Models for Normal Response Variables; The Analysis of Longitudinal Data III: Non-Normal Response; Survival Analysis; Analysis of Multivariate Data: Principal Components and Cluster Analysis; References.

  1. Common pitfalls in statistical analysis: "P" values, statistical significance and confidence intervals

    Directory of Open Access Journals (Sweden)

    Priya Ranganathan

    2015-01-01

    Full Text Available In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the 'P' value, explain the importance of 'confidence intervals' and clarify the importance of including both values in a paper.

  2. Common pitfalls in statistical analysis: “P” values, statistical significance and confidence intervals

    Science.gov (United States)

    Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc

    2015-01-01

    In the second part of a series on pitfalls in statistical analysis, we look at various ways in which a statistically significant study result can be expressed. We debunk some of the myths regarding the ‘P’ value, explain the importance of ‘confidence intervals’ and clarify the importance of including both values in a paper PMID:25878958

  3. Notes on numerical reliability of several statistical analysis programs

    Science.gov (United States)

    Landwehr, J.M.; Tasker, Gary D.

    1999-01-01

    This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.

  4. Fundamentals of statistical experimental design and analysis

    CERN Document Server

    Easterling, Robert G

    2015-01-01

    Professionals in all areas - business; government; the physical, life, and social sciences; engineering; medicine, etc. - benefit from using statistical experimental design to better understand their worlds and then use that understanding to improve the products, processes, and programs they are responsible for. This book aims to provide the practitioners of tomorrow with a memorable, easy to read, engaging guide to statistics and experimental design. This book uses examples, drawn from a variety of established texts, and embeds them in a business or scientific context, seasoned with a dash of humor, to emphasize the issues and ideas that led to the experiment and the what-do-we-do-next? steps after the experiment. Graphical data displays are emphasized as means of discovery and communication and formulas are minimized, with a focus on interpreting the results that software produce. The role of subject-matter knowledge, and passion, is also illustrated. The examples do not require specialized knowledge, and t...

  5. Common misconceptions about data analysis and statistics.

    Science.gov (United States)

    Motulsky, Harvey J

    2015-02-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word "significant". (4) Overreliance on standard errors, which are often misunderstood.

  6. Critical analysis of adsorption data statistically

    Science.gov (United States)

    Kaushal, Achla; Singh, S. K.

    2016-09-01

    Experimental data can be presented, computed, and critically analysed in different ways using statistics. A variety of statistical tests are used to make decisions about the significance and validity of experimental data. In the present study, adsorption was carried out to remove zinc ions from contaminated aqueous solution using mango leaf powder. The experimental data were analysed statistically by hypothesis testing, applying the t test, paired t test and chi-square test to (a) test the optimum value of the process pH, (b) verify the success of the experiment and (c) study the effect of adsorbent dose on zinc ion removal from aqueous solutions. Comparison of calculated and tabulated values of t and χ2 showed the results to be in favour of the data collected from the experiment, and this has been shown on probability charts. The K value obtained for the Langmuir isotherm was 0.8582 and the m value for the Freundlich adsorption isotherm was 0.725. Pearson's correlation coefficient values for the Langmuir and Freundlich adsorption isotherms were obtained as 0.99 and 0.95 respectively, which show a high degree of correlation between the variables. This validates the data obtained for adsorption of zinc ions from the contaminated aqueous solution with the help of mango leaf powder.
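
    The hypothesis tests named in this record map directly onto scipy; a short sketch with invented removal-efficiency numbers (not the study's data):

        import numpy as np
        from scipy import stats

        # Hypothetical zinc removal efficiencies (%) at two adsorbent doses.
        dose_1g = np.array([78.2, 80.1, 79.5, 81.0, 77.8, 80.4])
        dose_2g = np.array([85.6, 87.2, 86.1, 88.0, 85.9, 86.7])

        # One-sample t test: is mean removal at dose 1 g different from a
        # claimed optimum of 80%?
        t1, p1 = stats.ttest_1samp(dose_1g, popmean=80.0)
        print(f"one-sample t: t={t1:.2f}, p={p1:.3f}")

        # Paired t test: does increasing the dose change removal on the
        # same experimental runs?
        t2, p2 = stats.ttest_rel(dose_1g, dose_2g)
        print(f"paired t:     t={t2:.2f}, p={p2:.4f}")

        # Chi-square goodness of fit on binned observed vs expected counts.
        observed = np.array([8, 12, 10])
        expected = np.array([10, 10, 10])
        chi2, p3 = stats.chisquare(observed, expected)
        print(f"chi-square:   chi2={chi2:.2f}, p={p3:.3f}")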

  7. Statistics

    CERN Document Server

    Hayslett, H T

    1991-01-01

    Statistics covers the basic principles of Statistics. The book starts by tackling the importance and the two kinds of statistics; the presentation of sample data; the definition, illustration and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution and the normal approximation to the binomial. Testing of statistical hypotheses and tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population are explained. The text the

  8. Statistical analysis of life history calendar data.

    Science.gov (United States)

    Eerola, Mervi; Helske, Satu

    2016-04-01

    The life history calendar is a data-collection tool for obtaining reliable retrospective data about life events. To illustrate the analysis of such data, we compare the model-based probabilistic event history analysis and the model-free data mining method, sequence analysis. In event history analysis, we estimate, instead of transition hazards, the cumulative prediction probabilities of life events over the entire trajectory. In sequence analysis, we compare several dissimilarity metrics and contrast data-driven and user-defined substitution costs. As an example, we study young adults' transition to adulthood as a sequence of events in three life domains. The events define the multistate event history model and the parallel life domains in multidimensional sequence analysis. The relationship between life trajectories and excess depressive symptoms in middle age is further studied by their joint prediction in the multistate model and by regressing the symptom scores on individual-specific cluster indices. The two approaches complement each other in life course analysis; sequence analysis can effectively find typical and atypical life patterns while event history analysis is needed for causal inquiries.

  9. Statistical analysis of Contact Angle Hysteresis

    Science.gov (United States)

    Janardan, Nachiketa; Panchagnula, Mahesh

    2015-11-01

    We present the results of a new statistical approach to determining Contact Angle Hysteresis (CAH) by studying the nature of the triple line. A statistical distribution of local contact angles on a random three-dimensional drop is used as the basis for this approach. Drops with randomly shaped triple lines but of fixed volumes were deposited on a substrate and their triple line shapes were extracted by imaging. Using a solution developed by Prabhala et al. (Langmuir, 2010), the complete three dimensional shape of the sessile drop was generated. A distribution of the local contact angles for several such drops but of the same liquid-substrate pairs is generated. This distribution is a result of several microscopic advancing and receding processes along the triple line. This distribution is used to yield an approximation of the CAH associated with the substrate. This is then compared with measurements of CAH by means of a liquid infusion-withdrawal experiment. Static measurements are shown to be sufficient to measure quasistatic contact angle hysteresis of a substrate. The approach also points towards the relationship between microscopic triple line contortions and CAH.

  10. Book review: Statistical Analysis and Modelling of Spatial Point Patterns

    DEFF Research Database (Denmark)

    Møller, Jesper

    2009-01-01

    Statistical Analysis and Modelling of Spatial Point Patterns by J. Illian, A. Penttinen, H. Stoyan and D. Stoyan. Wiley (2008), ISBN 9780470014912.

  11. Statistical Modelling of Wind Profiles - Data Analysis and Modelling

    DEFF Research Database (Denmark)

    Jónsson, Tryggvi; Pinson, Pierre

    The aim of the analysis presented in this document is to investigate whether statistical models can be used to make very short-term predictions of wind profiles.

  12. Methods for dependency estimation and system unavailability evaluation based on failure data statistics. Volume 2, Detailed description and applications

    Energy Technology Data Exchange (ETDEWEB)

    Azarm, M.A.; Hsu, F.; Martinez-Guridi, G. [Brookhaven National Lab., Upton, NY (United States); Vesely, W.E. [Science Applications International Corp., Dublin, OH (United States)

    1993-07-01

    This report introduces a new perspective on the basic concept of dependent failures where the definition of dependency is based on clustering in the failure times of similar components. This perspective has two significant implications: firstly, it relaxes the conventional assumption that dependent failures must be simultaneous and result from a severe shock; secondly, it allows the analyst to use all the failures in a time continuum to estimate the potential for multiple failures in a window of time (e.g., a test interval), therefore arriving at a more accurate value for system unavailability. In addition, the models developed here provide a method for plant-specific analysis of dependency, reflecting the plant-specific maintenance practices that reduce or increase the contribution of dependent failures to system unavailability. The proposed methodology can be used for screening analysis of failure data to estimate the fraction of dependent failures among the failures. In addition, the proposed method can evaluate the impact of the observed dependency on system unavailability and plant risk. The formulations derived in this report have undergone various levels of validation through computer simulation studies and pilot applications. The pilot applications of these methodologies showed that the contribution of dependent failures of diesel generators in one plant was negligible, while in another plant it was quite significant. It also showed that in the plant with a significant contribution of dependency to Emergency Power System (EPS) unavailability, the contribution changed with time. Similar findings were reported for the Containment Fan Cooler breakers. Drawing such conclusions about system performance would not have been possible with any other reported dependency methodologies.

  13. Statistical methods for categorical data analysis

    CERN Document Server

    Powers, Daniel

    2008-01-01

    This book provides a comprehensive introduction to methods and models for categorical data analysis and their applications in social science research. Companion website also available, at https://webspace.utexas.edu/dpowers/www/

  14. Statistical analysis: the need, the concept, and the usage

    Directory of Open Access Journals (Sweden)

    Naduvilath Thomas

    1998-01-01

    Full Text Available In general, a better understanding of the need for and usage of statistics would benefit the medical community in India. This paper explains why statistical analysis is needed and what its conceptual basis is. Ophthalmic data are used as examples. The concept of sampling variation is explained to further corroborate the need for statistical analysis in medical research. Statistical estimation and testing of hypotheses, which form the major components of statistical inference, are described. Commonly reported univariate and multivariate statistical tests are explained in order to equip the ophthalmologist with basic knowledge of statistics for better understanding of research data. It is felt that this understanding would facilitate well-designed investigations, ultimately leading to higher quality practice of ophthalmology in our country.

  15. STATISTICAL ANALYSIS OF SOME EXPERIMENTAL FATIGUE TESTS RESULTS

    Directory of Open Access Journals (Sweden)

    Adrian Stere PARIS

    2012-05-01

    Full Text Available The paper details the results of processing fatigue experiment data to find the regression function. Statistical processing methods such as ANOVA and regression calculations are applied using appropriate software, with emphasis on popular packages like MS Excel and CurveExpert.

  16. Classification of Malaysia aromatic rice using multivariate statistical analysis

    Science.gov (United States)

    Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A.; Omar, O.

    2015-05-01

    Aromatic rice (Oryza sativa L.) is considered the best quality premium rice. The varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice due to the special growth conditions it requires, for instance specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, the use of human sensory panels has significant drawbacks, such as lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. The GC-MS analysis techniques, on the other hand, require detailed procedures and lengthy analysis and are quite costly. This paper presents the application of an in-house developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The samples were taken from several rice varieties. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots of the rice proves that the instrument was able to separate the samples into different clusters accordingly. The results of LDA and KNN with low misclassification error support the above findings, and we may conclude that the e-nose is successfully applied to the classification of the aromatic rice varieties.

  17. Classification of Malaysia aromatic rice using multivariate statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abdullah, A. H.; Adom, A. H.; Shakaff, A. Y. Md; Masnan, M. J.; Zakaria, A.; Rahim, N. A. [School of Mechatronic Engineering, Universiti Malaysia Perlis, Kampus Pauh Putra, 02600 Arau, Perlis (Malaysia); Omar, O. [Malaysian Agriculture Research and Development Institute (MARDI), Persiaran MARDI-UPM, 43400 Serdang, Selangor (Malaysia)

    2015-05-15

    Aromatic rice (Oryza sativa L.) is considered the best quality premium rice. The varieties are preferred by consumers because of criteria such as shape, colour, distinctive aroma and flavour. The price of aromatic rice is higher than that of ordinary rice due to the special growth conditions it requires, for instance specific climate and soil. Presently, aromatic rice quality is identified using its key elements and isotopic variables. The rice can also be classified via Gas Chromatography Mass Spectrometry (GC-MS) or human sensory panels. However, the use of human sensory panels has significant drawbacks, such as lengthy training time, proneness to fatigue as the number of samples increases, and inconsistency. The GC-MS analysis techniques, on the other hand, require detailed procedures and lengthy analysis and are quite costly. This paper presents the application of an in-house developed Electronic Nose (e-nose) to classify new aromatic rice varieties. The e-nose is used to classify the variety of aromatic rice based on the samples' odour. The samples were taken from several rice varieties. The instrument utilizes multivariate statistical data analysis, including Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and K-Nearest Neighbours (KNN), to classify the unknown rice samples. The Leave-One-Out (LOO) validation approach is applied to evaluate the ability of KNN to perform recognition and classification of the unspecified samples. Visual observation of the PCA and LDA plots of the rice proves that the instrument was able to separate the samples into different clusters accordingly. The results of LDA and KNN with low misclassification error support the above findings, and we may conclude that the e-nose is successfully applied to the classification of the aromatic rice varieties.
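
    The PCA + KNN with Leave-One-Out validation pipeline used in these two records is a few lines in scikit-learn; the sensor responses below are synthetic stand-ins for e-nose data:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # Synthetic e-nose responses: 3 varieties x 20 samples x 8 sensors.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(loc=mu, scale=0.3, size=(20, 8))
                       for mu in (0.0, 0.8, 1.6)])
        y = np.repeat([0, 1, 2], 20)

        # PCA scores for visual cluster separation (the study also uses LDA).
        pca = PCA(n_components=2).fit(X)
        scores = pca.transform(X)
        print("variance explained:", pca.explained_variance_ratio_.round(2))
        for label in (0, 1, 2):
            print(f"class {label} centroid:",
                  scores[y == label].mean(axis=0).round(2))

        # KNN classification with Leave-One-Out validation, as in the record.
        knn = KNeighborsClassifier(n_neighbors=3)
        acc = cross_val_score(knn, X, y, cv=LeaveOneOut()).mean()
        print(f"LOO accuracy: {acc:.3f} (misclassification: {1 - acc:.3f})")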

  18. A case study in behavioral analysis, synthesis and attention to detail: social learning of food preferences.

    Science.gov (United States)

    Galef, Bennett G

    2012-06-01

    Philip Teitelbaum's focus on detailed description of behavior, the interplay of analysis and synthesis in experimental investigations and the importance of converging lines of evidence in testing hypotheses has proven useful in fields distant from the physiological psychology that he studied throughout his career. Here we consider the social biasing of food choice in Norway rats as an instance of the application of Teitelbaum's principles of behavioral analysis and synthesis and the usefulness of convergent evidence as well as the contributions of detailed behavioral analysis of social influences on food choice to present understanding of both sensory processes and memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. A detailed 3D finite element analysis of the peeling behaviour of a gecko spatula

    NARCIS (Netherlands)

    Sauer, R.A.; Holl, M.

    2013-01-01

    This paper presents a detailed finite element analysis of the adhesion of a gecko spatula. The gecko spatulae form the tips of the gecko foot hairs that transfer the adhesional and frictional forces between substrate and foot. The analysis is based on a parameterised description of the 3D geometry o

  20. Statistical analysis of concrete quality testing results

    Directory of Open Access Journals (Sweden)

    Jevtić Dragica

    2014-01-01

    Full Text Available This paper statistically investigates the testing results of compressive strength and density of control concrete specimens tested in the Laboratory for Materials, Faculty of Civil Engineering, University of Belgrade, during 2012. A total of 4420 concrete specimens were tested, sampled at different locations - either at the concrete production site (concrete plant) or at the concrete placement location (construction site). To be exact, these samples were made of concrete which was produced at 15 concrete plants and placed in 50 different reinforced concrete structures, built during 2012 by 22 different contractors. It is a known fact that the achieved values of concrete compressive strength are very important, both for quality and durability assessment of concrete inside the structural elements, as well as for calculation of their load-bearing capacity limit. Together with the compressive strength testing results, the data concerning the requested (designed) concrete class, the matching between the designed and the achieved concrete quality, the concrete density values and the frequency of execution of concrete works during 2012 were analyzed.

  1. Statistics

    Science.gov (United States)

    Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.

  2. Statistical Smoothing Methods and Image Analysis

    Science.gov (United States)

    1988-12-01

  3. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    2002-01-01

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the

  4. Statistical inference of Minimum Rank Factor Analysis

    NARCIS (Netherlands)

    Shapiro, A; Ten Berge, JMF

    For any given number of factors, Minimum Rank Factor Analysis yields optimal communalities for an observed covariance matrix in the sense that the unexplained common variance with that number of factors is minimized, subject to the constraint that both the diagonal matrix of unique variances and the

  5. The Statistical Analysis of Failure Time Data

    CERN Document Server

    Kalbfleisch, John D

    2011-01-01

    Contains additional discussion and examples on left truncation as well as material on more general censoring and truncation patterns. Introduces the martingale and counting process formulations in a new chapter. Develops multivariate failure time data in a separate chapter and extends the material on Markov and semi-Markov formulations. Presents new examples and applications of data analysis.

  6. The Ontology of Biological and Clinical Statistics (OBCS) for standardized and reproducible statistical analysis.

    Science.gov (United States)

    Zheng, Jie; Harris, Marcelline R; Masci, Anna Maria; Lin, Yu; Hero, Alfred; Smith, Barry; He, Yongqun

    2016-09-14

    Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. The terms in OBCS, including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS-specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high-throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs . The Ontology

  7. Statistic analysis of millions of digital photos

    Science.gov (United States)

    Wueller, Dietmar; Fageth, Reiner

    2008-02-01

    The analysis of images has always been an important aspect in the quality enhancement of photographs and photographic equipment. Due to the lack of meta data it was mostly limited to images taken by experts under predefined conditions and the analysis was also done by experts or required psychophysical tests. With digital photography and the EXIF meta data stored in the images, a lot of information can be gained from a semiautomatic or automatic image analysis if one has access to a large number of images. Although home printing is becoming more and more popular, the European market still has a few photofinishing companies who have access to a large number of images. All printed images are stored for a certain period of time, adding up to several million images on servers every day. We have utilized the images to answer numerous questions and think that these answers are useful for increasing image quality by optimizing the image processing algorithms. Test methods can be modified to fit typical user conditions and future developments can be pointed towards ideal directions.

  8. EXTREME PROGRAMMING PROJECT PERFORMANCE MANAGEMENT BY STATISTICAL EARNED VALUE ANALYSIS

    OpenAIRE

    Wei Lu; Li Lu

    2013-01-01

    As an important project type in Agile Software Development, performance evaluation and prediction for eXtreme Programming projects has significant value. Targeting the short release life cycle and concurrent multitask features, a statistical earned value analysis model is proposed. Based on the traditional concept of earned value analysis, the statistical earned value analysis model introduces an Elastic Net regression function and a Laplacian hierarchical model to construct a Bayesian Elastic Net model.

  9. An analysis of radio pulsar nulling statistics

    Science.gov (United States)

    Biggs, James D.

    1992-01-01

    Survival analysis methods are used to seek correlations between the fraction of null pulsars and other pulsar characteristics for an ensemble of 72 radio pulsars. The strongest correlation is found between the null fraction and the pulse period, suggesting that nulling is a manifestation of a faltering emission mechanism. Correlations are also found between the fraction of null pulses and other parameters that have a strong dependence on the pulse period. The results presented here suggest that nulling is broad-band and may ultimately be explained in terms of polar cap models of pulsar emission.

  10. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools make it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  11. Improved statistics for genome-wide interaction analysis.

    Science.gov (United States)

    Ueki, Masao; Cordell, Heather J

    2012-01-01

    Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new "joint effects" statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al

  12. A statistical analysis of UK financial networks

    Science.gov (United States)

    Chu, J.; Nadarajah, S.

    2017-04-01

    In recent years, with a growing interest in big or large datasets, there has been a rise in the application of large graphs and networks to financial big data. Much of this research has focused on the construction and analysis of the network structure of stock markets, based on the relationships between stock prices. Motivated by Boginski et al. (2005), who studied the characteristics of a network structure of the US stock market, we construct network graphs of the UK stock market using the same method. We fit four distributions to the degree density of the vertices from these graphs, the Pareto I, Fréchet, lognormal, and generalised Pareto distributions, and assess the goodness of fit. Our results show that the degree density of the complements of the market graphs, constructed using a negative threshold value close to zero, can be fitted well with the Fréchet and lognormal distributions.
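
    The fit-and-assess step this abstract describes can be sketched in a few lines. The snippet below is illustrative only: it fits candidate distributions to synthetic vertex degrees (a stand-in for the UK market-graph data, which is not reproduced here) and compares Kolmogorov-Smirnov statistics; scipy's invweibull is the Fréchet distribution.

    ```python
    # Illustrative fit of candidate distributions to vertex degrees, with a
    # Kolmogorov-Smirnov goodness-of-fit comparison. Synthetic degrees stand
    # in for the UK market-graph data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    degrees = rng.lognormal(mean=2.0, sigma=0.8, size=500)  # hypothetical degrees

    candidates = {
        "lognormal": stats.lognorm,
        "Frechet": stats.invweibull,
        "generalised Pareto": stats.genpareto,
    }
    for name, dist in candidates.items():
        params = dist.fit(degrees)                 # maximum-likelihood fit
        ks, p = stats.kstest(degrees, dist.cdf, args=params)
        print(f"{name:20s} KS={ks:.3f} p={p:.3f}")
    ```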

  13. STATISTICAL BAYESIAN ANALYSIS OF EXPERIMENTAL DATA.

    Directory of Open Access Journals (Sweden)

    AHLAM LABDAOUI

    2012-12-01

    Full Text Available The Bayesian researcher should know the basic ideas underlying Bayesian methodology and the computational tools used in modern Bayesian econometrics. Some of the most important methods of posterior simulation are Monte Carlo integration, importance sampling, Gibbs sampling and the Metropolis-Hastings algorithm. The Bayesian should also be able to put the theory and computational tools together in the context of substantive empirical problems. We focus primarily on recent developments in Bayesian computation. Then we focus on particular models. Inevitably, we combine theory and computation in the context of particular models. Although we have tried to be reasonably complete in terms of covering the basic ideas of Bayesian theory and the computational tools most commonly used by the Bayesian, there is no way we can cover all the classes of models used in econometrics. We propose applications to the analysis of variance and the linear regression model.
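
    As a minimal illustration of one of the posterior-simulation methods listed above, the sketch below implements a random-walk Metropolis-Hastings sampler; the standard-normal log-posterior is a placeholder, not a model from the text.

    ```python
    # Minimal random-walk Metropolis-Hastings sampler, one of the
    # posterior-simulation methods listed above. The standard-normal
    # log-posterior is an assumed placeholder for a real model.
    import numpy as np

    def log_post(theta):
        return -0.5 * theta ** 2        # unnormalised log-posterior (assumed)

    rng = np.random.default_rng(0)
    theta, draws = 0.0, []
    for _ in range(10_000):
        proposal = theta + rng.normal(scale=1.0)    # symmetric proposal
        if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
            theta = proposal                        # accept
        draws.append(theta)

    print("posterior mean ~", np.mean(draws[1000:]))  # after burn-in
    ```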

  14. Methods for statistical data analysis of multivariate observations

    CERN Document Server

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of interest.

  15. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...

  16. Online Statistical Modeling (Regression Analysis) for Independent Responses

    Science.gov (United States)

    Made Tirta, I.; Anggraeni, Dian; Pandutama, Martinus

    2017-06-01

    Regression analysis (statistical modelling) is among the statistical methods most frequently needed in analyzing quantitative data, especially to model the relationship between response and explanatory variables. Nowadays, statistical models have been developed in various directions to model various types of complex relationships in data. A rich variety of advanced and recent statistical models is available in open source software (one of them being R). However, these advanced statistical models are not very friendly to novice R users, since they are based on programming scripts or a command line interface. Our research aims to develop a web interface (based on R and Shiny), so that the most recent and advanced statistical models are readily available, accessible and applicable on the web. We previously made interfaces in the form of e-tutorials for several modern and advanced statistical models in R, especially for independent responses (including linear models/LM, generalized linear models/GLM, generalized additive models/GAM and generalized additive models for location scale and shape/GAMLSS). In this research we unified them in the form of data analysis, including models using computer-intensive statistics (bootstrap and Markov chain Monte Carlo/MCMC). All are readily accessible on our online Virtual Statistics Laboratory. The web interface makes statistical modeling easier to apply and makes it easier to compare models in order to find the most appropriate model for the data.

  17. QRS DETECTION OF ECG - A STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    I.S. Siva Rao

    2015-03-01

    Full Text Available Electrocardiogram (ECG) is a graphical representation of the electrical activity generated by the heart muscle. ECG plays an important role in the diagnosis and monitoring of the heart's condition. A real-time analyzer based on filtering, beat recognition, clustering and classification of the signal, with a delay of at most a few seconds, can recognize life-threatening arrhythmias. The ECG signal permits the study of anatomic and physiologic facets of the entire cardiac muscle. The first task for proficient analysis is the removal of noise, which is attained by wavelet transform analysis. Wavelets yield temporal and spectral information concurrently and offer flexibility through a choice of wavelet functions with different properties. This paper is concerned with the extraction of QRS complexes of ECG signals using Discrete Wavelet Transform based algorithms aided by MATLAB. Denoising is done by removing inconsistent wavelet transform coefficients. QRS complexes are then identified, and each peak can be used to locate the peaks of separate waves like P and T and their derivatives. We put forth a new combined algorithm built on the Pan-Tompkins method and the multi-wavelet transform.
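
    The denoise-then-detect pipeline described above can be sketched as follows. This is a Python stand-in (the paper works in MATLAB) on a synthetic spike train, with an assumed sampling rate and illustrative thresholds; it is not the authors' algorithm.

    ```python
    # Wavelet denoising followed by R-peak picking, sketched on a synthetic
    # spike train (assumed 250 Hz sampling; thresholds are illustrative).
    import numpy as np
    import pywt                              # PyWavelets
    from scipy.signal import find_peaks

    fs = 250
    t = np.arange(0, 10, 1 / fs)
    impulses = np.zeros_like(t)
    impulses[::fs] = 1.0                     # one "R wave" per second
    beat = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
    ecg = np.convolve(impulses, beat, mode="same")
    ecg += 0.1 * np.random.default_rng(1).normal(size=t.size)

    # Denoise: soft-threshold the detail coefficients of a db4 decomposition.
    coeffs = pywt.wavedec(ecg, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2 * np.log(ecg.size))           # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[: ecg.size]

    # Detect R peaks, enforcing a ~0.3 s refractory period.
    peaks, _ = find_peaks(denoised, height=0.5, distance=int(0.3 * fs))
    print("R peaks at t =", t[peaks])
    ```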

  18. Guidelines for Statistical Analysis of Percentage of Syllables Stuttered Data

    Science.gov (United States)

    Jones, Mark; Onslow, Mark; Packman, Ann; Gebski, Val

    2006-01-01

    Purpose: The purpose of this study was to develop guidelines for the statistical analysis of percentage of syllables stuttered (%SS) data in stuttering research. Method: Data on %SS from various independent sources were used to develop a statistical model to describe this type of data. On the basis of this model, %SS data were simulated with…

  19. Attitudes and Achievement in Statistics: A Meta-Analysis Study

    Science.gov (United States)

    Emmioglu, Esma; Capa-Aydin, Yesim

    2012-01-01

    This study examined the relationships among statistics achievement and four components of attitudes toward statistics (Cognitive Competence, Affect, Value, and Difficulty) as assessed by the SATS. Meta-analysis results revealed that the size of relationships differed by the geographical region in which the studies were conducted as well as by the…

  20. Explorations in Statistics: The Analysis of Ratios and Normalized Data

    Science.gov (United States)

    Curran-Everett, Douglas

    2013-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This ninth installment of "Explorations in Statistics" explores the analysis of ratios and normalized--or standardized--data. As researchers, we compute a ratio--a numerator divided by a denominator--to compute a…

  1. Attitudes and Achievement in Statistics: A Meta-Analysis Study

    Science.gov (United States)

    Emmioglu, Esma; Capa-Aydin, Yesim

    2012-01-01

    This study examined the relationships among statistics achievement and four components of attitudes toward statistics (Cognitive Competence, Affect, Value, and Difficulty) as assessed by the SATS. Meta-analysis results revealed that the size of relationships differed by the geographical region in which the studies were conducted as well as by the…

  2. The Importance of Statistical Modeling in Data Analysis and Inference

    Science.gov (United States)

    Rollins, Derrick, Sr.

    2017-01-01

    Statistical inference simply means to draw a conclusion based on information that comes from data. Error bars are the most commonly used tool for data analysis and inference in chemical engineering data studies. This work demonstrates, using common types of data collection studies, the importance of specifying the statistical model for sound…

  3. Analysis of Dynamic Interactions between Different Drivetrain Components with a Detailed Wind Turbine Model

    Science.gov (United States)

    Bartschat, A.; Morisse, M.; Mertens, A.; Wenske, J.

    2016-09-01

    The presented work describes a detailed analysis of the dynamic interactions among mechanical and electrical drivetrain components of a modern wind turbine under the influence of parameter variations, different control mechanisms and transient excitations. For this study, a detailed model of a 2MW wind turbine with a gearbox, a permanent magnet synchronous generator and a full power converter has been developed which considers all relevant characteristics of the mechanical and electrical subsystems. This model includes an accurate representation of the aerodynamics and the mechanical properties of the rotor and the complete mechanical drivetrain. Furthermore, a detailed electrical modelling of the generator, the full scale power converter with discrete switching devices, its filters, the transformer and the grid as well as the control structure is considered. The analysis shows that, considering control measures based on active torsional damping, interactions between mechanical and electrical subsystems can significantly affect the loads and thus the individual lifetime of the components.

  4. Development of statistical models for data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Downham, D.Y.

    2000-07-01

    Incidents that cause, or could cause, injury to personnel, and that satisfy specific criteria, are reported to the Offshore Safety Division (OSD) of the Health and Safety Executive (HSE). The underlying purpose of this report is to improve ways of quantifying risk, a recommendation in Lord Cullen's report into the Piper Alpha disaster. Records of injuries and hydrocarbon releases from 1 January 1991 to 31 March 1996 are analysed, because the reporting of incidents was standardised after 1990. Models are identified for risk assessment and some are applied. The appropriate analyses of one or two factors (or variables) are tests of uniformity or of independence. Radar graphs are used to represent some temporal variables. Cusums are applied for the analysis of incident frequencies over time, and could be applied for regular monitoring. Log-linear models for Poisson-distributed data are identified as being suitable for identifying 'non-random' combinations of more than two factors. Some questions cannot be addressed with the available data: for example, more data are needed to assess the risk of injury per employee in a time interval. If the questions are considered sufficiently important, resources could be assigned to obtain the data. Some of the main results from the analyses are as follows: the cusum analyses identified a change-point at the end of July 1993, when the reported number of injuries reduced by 40%. Injuries were more likely to occur between 8 am and 12 noon or between 2 pm and 5 pm than at other times: between 2 pm and 3 pm the number of injuries was almost twice the average and more than threefold the smallest. No seasonal effects in the numbers of injuries were identified. Three-day injuries occurred more frequently on the 5th, 6th and 7th days into a tour of duty than on other days. Three-day injuries occurred less frequently on the 13th and 14th days of a tour of duty. An injury classified as 'lifting or craning' was
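
    The cusum idea used above for change-point detection is simple to sketch: accumulate deviations of the counts from their overall mean, and read the change-point off the extremum of the cumulative sum. The monthly counts below are hypothetical, not the OSD data.

    ```python
    # Classic cusum of deviations from the mean; for a single shift in level,
    # the extremum of the cusum estimates the change-point. Counts are
    # hypothetical monthly incident counts, not the OSD data.
    import numpy as np

    counts = np.array([5, 4, 6, 5, 7, 5, 3, 2, 3, 2, 3, 2])
    S = np.cumsum(counts - counts.mean())
    cp = int(np.argmax(np.abs(S))) + 1      # change after this period
    print("cusum:", np.round(S, 1))
    print("estimated change-point after period", cp)
    ```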

  5. Detailed statistical analysis plan for the Danish Palliative Care Trial (DanPaCT)

    DEFF Research Database (Denmark)

    Johnsen, Anna Thit; Petersen, Morten Aagaard; Gluud, Christian

    2014-01-01

    BACKGROUND: Advanced cancer patients experience considerable symptoms, problems, and needs. Early referral of these patients to specialized palliative care (SPC) could offer improvements. The Danish Palliative Care Trial (DanPaCT) investigates whether patients with metastatic cancer will benefit...... from being referred to 'early SPC'. DanPaCT is a multicenter, parallel-group, superiority clinical trial with 1:1 randomization. The planned sample size was 300 patients. The primary data collection for DanPaCT is finished. To prevent outcome reporting bias, selective reporting, and data-driven results......-individualised outcome representing the score of the symptom or problem that had the highest intensity out of seven at baseline assessed with the European Organisation for Research and Treatment of Cancer Quality of Life Questionnaire (EORTC QLQ-C30). Secondary outcomes are the seven scales that are represented...

  6. Detailed statistical analysis plan for the Danish Palliative Care Trial (DanPaCT)

    DEFF Research Database (Denmark)

    Johnsen, Anna Thit; Petersen, Morten Aagaard; Gluud, Christian

    2014-01-01

    BACKGROUND: Advanced cancer patients experience considerable symptoms, problems, and needs. Early referral of these patients to specialized palliative care (SPC) could offer improvements. The Danish Palliative Care Trial (DanPaCT) investigates whether patients with metastatic cancer will benefit...... from being referred to 'early SPC'. DanPaCT is a multicenter, parallel-group, superiority clinical trial with 1:1 randomization. The planned sample size was 300 patients. The primary data collection for DanPaCT is finished. To prevent outcome reporting bias, selective reporting, and data-driven results...... data, multiplicity and the risk of bias. CONCLUSIONS: Only few trials have investigated the effects of SPC. To our knowledge DanPaCT is the first trial to investigate screening based 'early SPC' for patients with metastatic cancer from a broad spectrum of cancer diagnosis. TRIAL REGISTRATION...

  7. Tri-Service Champus Statistical Database Project (TCSDP): Champus Ambulatory Data Analysis. Detail Report

    Science.gov (United States)

    1993-03-30


  8. Practical application and statistical analysis of titrimetric monitoring ...

    African Journals Online (AJOL)

    Practical application and statistical analysis of titrimetric monitoring of water and ... The resulting raw data were further processed with an Excel-based program. ... As such the type of component and the concentration can be determined.

  9. Statistical Analysis of the Exchange Rate of Bitcoin: e0133678

    National Research Council Canada - National Science Library

    Jeffrey Chu; Saralees Nadarajah; Stephen Chan

    2015-01-01

      Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar...
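
    The basic quantity analysed, the log-return, is computed as the first difference of log prices. A minimal sketch with made-up BTC/USD values:

    ```python
    # Log-returns are first differences of log prices; values are made up.
    import numpy as np

    prices = np.array([457.3, 461.1, 455.8, 470.2, 468.9])  # hypothetical BTC/USD
    r = np.diff(np.log(prices))
    print("log-returns:", np.round(r, 4))
    print("mean:", r.mean(), "std:", r.std(ddof=1))
    ```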

  10. Detailed Behavior Analysis for High Voltage Bidirectional Flyback Converter Driving DEAP Actuator

    DEFF Research Database (Denmark)

    Huang, Lina; Zhang, Zhe; Andersen, Michael A. E.

    2013-01-01

    flyback based converter has been implemented. The parasitic elements have a serious influence on the operation of the converter, especially in the high output voltage condition. A detailed behavior analysis has been performed considering the impact of the critical parasitic parameters. The converter has...

  11. 78 FR 47317 - Intent To Conduct a Detailed Economic Impact Analysis

    Science.gov (United States)

    2013-08-05

    ... UNITED STATES Intent To Conduct a Detailed Economic Impact Analysis This notice is to inform the public... support the export of U.S.-manufactured Boeing 787 wide-body passenger aircraft to an airline in China... U.S. airline industry. The aircraft in this transaction could enable passenger route service within...

  12. 78 FR 69669 - Intent To Conduct a Detailed Economic Impact Analysis

    Science.gov (United States)

    2013-11-20

    ... Conduct a Detailed Economic Impact Analysis This notice is to inform the public that the Export-Import....-manufactured Boeing 777 wide-body passenger aircraft that will be operated by an airline in Russia, which will... 1% or more of comparable wide-body seat capacity within the U.S. airline industry. The aircraft in...

  13. Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers

    Science.gov (United States)

    Keiffer, Greggory L.; Lane, Forrest C.

    2016-01-01

    Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
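
    A minimal sketch of propensity score matching of the kind the abstract introduces: fit a logistic model of treatment on covariates, then greedily pair each treated unit with the nearest-propensity unused control within a caliper. The data, caliper, and matching rule are illustrative assumptions, not the paper's design.

    ```python
    # Propensity-score matching sketch: logistic propensity model, then greedy
    # nearest-neighbour matching within an assumed caliper of 0.05.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 2))                        # covariates
    treated = (X[:, 0] + rng.normal(size=200) > 0).astype(int)

    ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

    used, pairs = set(), []
    for i in np.where(treated == 1)[0]:
        free = [j for j in np.where(treated == 0)[0] if j not in used]
        if not free:
            break
        j = min(free, key=lambda j: abs(ps[i] - ps[j]))  # nearest propensity
        if abs(ps[i] - ps[j]) < 0.05:                    # caliper
            pairs.append((i, j))
            used.add(j)
    print(f"matched {len(pairs)} treated/control pairs")
    ```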

  14. Meta analysis a guide to calibrating and combining statistical evidence

    CERN Document Server

    Kulinskaya, Elena; Staudte, Robert G

    2008-01-01

    Meta Analysis: A Guide to Calibrating and Combining Statistical Evidence acts as a source of basic methods for scientists wanting to combine evidence from different experiments. The authors aim to promote a deeper understanding of the notion of statistical evidence. The book comprises two parts - The Handbook, and The Theory. The Handbook is a guide for combining and interpreting experimental evidence to solve standard statistical problems. This section allows someone with a rudimentary knowledge in general statistics to apply the methods. The Theory provides the motivation, theory and results of simulation experiments to justify the methodology. This is a coherent introduction to the statistical concepts required to understand the authors' thesis that evidence in a test statistic can often be calibrated when transformed to the right scale.

  15. Advanced data analysis in neuroscience integrating statistical and computational models

    CERN Document Server

    Durstewitz, Daniel

    2017-01-01

    This book is intended for use in advanced graduate courses in statistics / machine learning, as well as for all experimental neuroscientists seeking to understand statistical methods at a deeper level, and theoretical neuroscientists with a limited background in statistics. It reviews almost all areas of applied statistics, from basic statistical estimation and test theory, linear and nonlinear approaches for regression and classification, to model selection and methods for dimensionality reduction, density estimation and unsupervised clustering.  Its focus, however, is linear and nonlinear time series analysis from a dynamical systems perspective, based on which it aims to convey an understanding also of the dynamical mechanisms that could have generated observed time series. Further, it integrates computational modeling of behavioral and neural dynamics with statistical estimation and hypothesis testing. This way computational models in neuroscience are not only explanatory frameworks, but become powerful...

  16. The Economic Contribution of Canada's Colleges and Institutes. An Analysis of Investment Effectiveness and Economic Growth. Volume 2: Detailed Results by Gender and Entry Level of Education

    Science.gov (United States)

    Robison, M. Henry; Christophersen, Kjell A.

    2008-01-01

    The purpose of this volume is to present the results of the economic impact analysis in detail by gender and entry level of education. On the data entry side, gender and entry level of education are important variables that help characterize the student body profile. This profile data links to national statistical databases which are already…

  17. Statistical multiresolution analysis in amplitude-frequency domain

    Institute of Scientific and Technical Information of China (English)

    SUN Hong; GUAN Bao; Henri Maitre

    2004-01-01

    A concept of statistical multiresolution analysis in the amplitude-frequency domain is proposed, which employs the wavelet transform on the statistical character of a signal in the amplitude domain. In terms of the theorem of generalized ergodicity, an algorithm to estimate the transform coefficients based on the amplitude statistical multiresolution analysis (AMA) is presented. The principle of applying the AMA to Synthetic Aperture Radar (SAR) image processing is described, and the good experimental results imply that the AMA is an efficient tool for processing speckled signals modeled by multiplicative noise.

  18. Basic statistical tools in research and data analysis

    Science.gov (United States)

    Ali, Zulfiqar; Bhaskar, S Bala

    2016-01-01

    Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting of the research findings. The statistical analysis gives meaning to the meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  19. Basic statistical tools in research and data analysis.

    Science.gov (United States)

    Ali, Zulfiqar; Bhaskar, S Bala

    2016-09-01

    Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting of the research findings. The statistical analysis gives meaning to the meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  20. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Full Text Available Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations and reporting of the research findings. The statistical analysis gives meaning to the meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article will try to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables and the measures of central tendency. An idea of the sample size estimation, power analysis and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.

  1. Numeric computation and statistical data analysis on the Java platform

    CERN Document Server

    Chekanov, Sergei V

    2016-01-01

    Numerical computation, knowledge discovery and statistical data analysis integrated with powerful 2D and 3D graphics for visualization are the key topics of this book. The Python code examples powered by the Java platform can easily be transformed to other programming languages, such as Java, Groovy, Ruby and BeanShell. This book equips the reader with a computational platform which, unlike other statistical programs, is not limited by a single programming language. The author focuses on practical programming aspects and covers a broad range of topics, from basic introduction to the Python language on the Java platform (Jython), to descriptive statistics, symbolic calculations, neural networks, non-linear regression analysis and many other data-mining topics. He discusses how to find regularities in real-world data, how to classify data, and how to process data for knowledge discoveries. The code snippets are so short that they easily fit into single pages. Numeric Computation and Statistical Data Analysis ...

  2. A testing method for the machine details state by means of the speckle image parameters analysis

    Science.gov (United States)

    Malov, A. N.; Pavlov, P. V.; Neupokoeva, A. V.

    2016-08-01

    A non-destructive testing method is discussed that allows the residual life of load-bearing machine parts to be determined from the analysis of registered speckle-image parameters. The "chessboard" algorithm, based on calculating the correlation between a given speckle image and a chessboard image, is considered. Experimental results for the proposed non-destructive testing method are presented. It is established that, as the number of test cycles of a load-bearing part increases, the roughness parameters increase, which reduces the correlation coefficient between the reference image and the image obtained at the given stage of testing. Knowing the dynamics of the correlation coefficient, it is possible to estimate the residual life of load-bearing parts in service.
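
    The correlation coefficient at the heart of the "chessboard" algorithm can be sketched directly; the synthetic speckle images and the 8-pixel cell size below are assumptions for illustration.

    ```python
    # Pearson correlation between a speckle image and a chessboard reference,
    # used here as a wear indicator. Images and cell size are synthetic.
    import numpy as np

    def corr_coeff(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

    n = 64
    i, j = np.indices((n, n))
    board = ((i // 8 + j // 8) % 2).astype(float)       # 8-px chessboard cells
    rng = np.random.default_rng(7)
    lightly_worn = board + 0.3 * rng.normal(size=(n, n))
    heavily_worn = board + 1.5 * rng.normal(size=(n, n))

    print("lightly worn:", round(corr_coeff(lightly_worn, board), 3))  # near 1
    print("heavily worn:", round(corr_coeff(heavily_worn, board), 3))  # lower
    ```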

  3. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. The goal of this article is to change these traditional, naïve methods of design and analysis, because statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis.

  4. Simulation Experiments in Practice : Statistical Design and Regression Analysis

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    2007-01-01

    In practice, simulation analysts often change only one factor at a time, and use graphical analysis of the resulting Input/Output (I/O) data. Statistical theory proves that more information is obtained when applying Design Of Experiments (DOE) and linear regression analysis. Unfortunately, classic t

  5. Statistical Analysis of Processes of Bankruptcy is in Ukraine

    OpenAIRE

    Berest Marina Nikolaevna

    2012-01-01

    A statistical analysis of bankruptcy processes in Ukraine is conducted. Quantitative and qualitative indicators characterizing the efficiency of functioning of the institution of enterprise bankruptcy are analyzed, and an analysis of processes related to the bankruptcy of enterprises under state administration is conducted.

  6. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses.

  7. A Divergence Statistics Extension to VTK for Performance Analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2015-02-01

    This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
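
    A divergence statistic in the sense described above can be sketched with the Kullback-Leibler divergence between an empirical histogram and a theoretical model. The snippet below uses synthetic "timing" data and scipy; it is not the VTK engine's C++ API.

    ```python
    # Kullback-Leibler divergence between an empirical histogram and a
    # theoretical "ideal" model, as one example of a divergence statistic.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(11)
    observed = rng.gamma(shape=3.0, scale=1.0, size=2000)   # hypothetical timings

    edges = np.linspace(0.0, observed.max(), 31)
    emp, _ = np.histogram(observed, bins=edges)
    emp = emp / emp.sum()

    theo = np.diff(stats.expon(scale=observed.mean()).cdf(edges))  # "ideal" model
    theo = theo / theo.sum()

    kl = stats.entropy(emp + 1e-12, theo + 1e-12)   # KL(empirical || model)
    print(f"KL divergence: {kl:.3f} nats")
    ```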

  8. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.

  9. Probability and Statistics Questions and Tests : a critical analysis

    Directory of Open Access Journals (Sweden)

    Fabrizio Maturo

    2015-06-01

    Full Text Available In probability and statistics courses, a popular method for evaluating students is to assess them using multiple choice tests. The use of these tests allows the evaluation of certain types of skills such as fast response, short-term memory, mental clarity and the ability to compete. In our opinion, verification through testing can certainly be useful for the analysis of certain aspects and to speed up the process of assessment, but we should be aware of the limitations of such a standardized procedure, and therefore exclude that the assessment of pupils, classes and schools can be reduced to the processing of test results. To support this thesis, this article discusses the main limitations of tests in detail, presents some recent models which have been proposed in the literature and suggests some alternative assessment methods.

  10. Detailed description and user's manual of high burnup fuel analysis code EXBURN-I

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Saitou, Hiroaki

    1997-11-01

    EXBURN-I has been developed for the analysis of LWR high burnup fuel behavior in normal operation and power transient conditions. In the high burnup region, phenomena occur which are different in quality from those expected for the extension of behaviors in the mid-burnup region. To analyze these phenomena, EXBURN-I has been formed by the incorporation of such new models as pellet thermal conductivity change, burnup-dependent FP gas release rate, and cladding oxide layer growth to the basic structure of low- and mid-burnup fuel analysis code FEMAXI-IV. The present report describes in detail the whole structure of the code, models, and materials properties. Also, it includes a detailed input manual and sample output, etc. (author). 55 refs.

  11. Longitudinal data analysis a handbook of modern statistical methods

    CERN Document Server

    Fitzmaurice, Garrett; Verbeke, Geert; Molenberghs, Geert

    2008-01-01

    Although many books currently available describe statistical models and methods for analyzing longitudinal data, they do not highlight connections between various research threads in the statistical literature. Responding to this void, Longitudinal Data Analysis provides a clear, comprehensive, and unified overview of state-of-the-art theory and applications. It also focuses on the assorted challenges that arise in analyzing longitudinal data. After discussing historical aspects, leading researchers explore four broad themes: parametric modeling, nonparametric and semiparametric methods, joint

  12. Detailed atmospheric abundance analysis of the optical counterpart of the IR source IRAS 16559-2957

    CERN Document Server

    Molina, R E

    2013-01-01

    We have undertaken a detailed abundance analysis of the optical counterpart of the IR source IRAS 16559-2957 with the aim of confirming its possible post-AGB nature. The star shows solar metallicity, and our investigation of a large number of elements, including CNO and 12C/13C, suggests that this object has experienced the first dredge-up and is likely still at the RGB stage.

  13. A novel statistic for genome-wide interaction analysis.

    Science.gov (United States)

    Wu, Xuesen; Dong, Hua; Luo, Li; Zhu, Yun; Peng, Gang; Reveille, John D; Xiong, Momiao

    2010-09-23

    Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR < 0.001 and 0.001 ≤ FDR, respectively. Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by the current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.
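
    The classical comparator mentioned above, a likelihood-ratio test for interaction in logistic regression, can be sketched as follows; this is the baseline method, not Wu et al.'s statistic, and the genotype data are simulated.

    ```python
    # Likelihood-ratio test for SNP-SNP interaction in logistic regression --
    # the classical baseline, not Wu et al.'s statistic. Data are simulated.
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import chi2

    rng = np.random.default_rng(5)
    n = 2000
    g1, g2 = rng.integers(0, 3, n), rng.integers(0, 3, n)   # 0/1/2 genotypes
    logit = -1.0 + 0.2 * g1 + 0.2 * g2 + 0.3 * g1 * g2      # true interaction
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

    X0 = sm.add_constant(np.column_stack([g1, g2]))          # main effects
    X1 = sm.add_constant(np.column_stack([g1, g2, g1 * g2])) # + interaction

    lr = 2 * (sm.Logit(y, X1).fit(disp=0).llf - sm.Logit(y, X0).fit(disp=0).llf)
    print("LR =", round(lr, 2), " p =", chi2.sf(lr, df=1))
    ```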

  14. Complexity of software trustworthiness and its dynamical statistical analysis methods

    Institute of Scientific and Technical Information of China (English)

    ZHENG ZhiMing; MA ShiLong; LI Wei; JIANG Xin; WEI Wei; MA LiLi; TANG ShaoTing

    2009-01-01

    Developing trusted software has become an important trend and a natural choice in the development of software technology and applications. At present, methods for the measurement and assessment of software trustworthiness cannot guarantee safe and reliable operation of software systems completely and effectively. Based on the study of dynamical systems, this paper interprets the characteristics of the behaviors of software systems and the basic scientific problems of software trustworthiness complexity, analyzes the characteristics of the complexity of software trustworthiness, and proposes to study software trustworthiness measurement in terms of the complexity of software trustworthiness. Using dynamical statistical analysis methods, the paper advances an invariant-measure based assessment method of software trustworthiness by statistical indices, and hereby provides a dynamical criterion for the untrustworthiness of software systems. By an example, the feasibility of the proposed dynamical statistical analysis method in software trustworthiness measurement is demonstrated using numerical simulations and theoretical analysis.

  15. Towards proper sampling and statistical analysis of defects

    Directory of Open Access Journals (Sweden)

    Cetin Ali

    2014-06-01

    Full Text Available Advancements in applied statistics with great relevance to defect sampling and analysis are presented. Three main issues are considered: (i) proper handling of multiple defect types, (ii) relating sample data originating from polished inspection surfaces (2D) to finite material volumes (3D), and (iii) application of advanced extreme value theory in the statistical analysis of block maximum data. Original and rigorous, but practical, mathematical solutions are presented. Finally, these methods are applied to make predictions regarding defect sizes in a steel alloy containing multiple defect types.
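
    Point (iii) above, block-maximum analysis, can be sketched by fitting a generalised extreme value (GEV) distribution to per-block maximum defect sizes and reading off a return level. The defect sizes, block layout, and return period below are illustrative assumptions.

    ```python
    # Block-maximum (GEV) analysis of defect sizes with an illustrative
    # 100-block return level. Sizes and block layout are simulated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(21)
    # 50 inspection blocks of 200 hypothetical defect sizes (microns) each.
    block_max = rng.lognormal(3.0, 0.5, size=(50, 200)).max(axis=1)

    shape, loc, scale = stats.genextreme.fit(block_max)
    ret = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
    print(f"GEV shape={shape:.3f}; 100-block return level={ret:.1f} um")
    ```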

  16. Quantum chemical characterization of N-(2-hydroxybenzylidene)acetohydrazide (HBAH): a detailed vibrational and NLO analysis.

    Science.gov (United States)

    Tamer, Ömer; Avcı, Davut; Atalay, Yusuf

    2014-01-03

    The molecular modeling of N-(2-hydroxybenzylidene)acetohydrazide (HBAH) was carried out using B3LYP, CAMB3LYP and PBE1PBE levels of density functional theory (DFT). The molecular structure of HBAH was solved by means of IR, NMR and UV-vis spectroscopies. In order to find the stable conformers, conformational analysis was performed based on B3LYP level. A detailed vibrational analysis was made on the basis of potential energy distribution (PED). HOMO and LUMO energies were calculated, and the obtained energies displayed that charge transfer occurs in HBAH. NLO analysis indicated that HBAH can be used as an effective NLO material. NBO analysis also proved that charge transfer, conjugative interactions and intramolecular hydrogen bonding interactions occur through HBAH. Additionally, major contributions from molecular orbitals to the electronic transitions were investigated theoretically.

  17. Statistics and Analysis on Papers Published on Journal 'Advanced Technology of Electrical Engineering and Energy'

    Institute of Scientific and Technical Information of China (English)

    QIN Jie; LIN Liangzhen; QI Zhiping; MA Yuhuan; SHEN Guoliao; JING Bohong

    2009-01-01

    The paper presents statistics and analysis of papers published in the journal 'Advanced Technology of Electrical Engineering and Energy' from 1996 to 2008: the paper acceptance rate, the paper categories, the first authors' affiliations, the top 7 first authors, the top 10 coauthors, and the journal evaluation indexes. It offers details of the journal to anyone interested, especially to our editorial board and our broad readership.

  18. Adaptive strategy for the statistical analysis of connectomes.

    Directory of Open Access Journals (Sweden)

    Djalel Eddine Meskaldji

    Full Text Available We study an adaptive statistical approach to analyze brain networks represented by brain connection matrices of interregional connectivity (connectomes). Our approach is at a middle level between a global analysis and a single-connection analysis, considering subnetworks of the global brain network. These subnetworks represent either the inter-connectivity between two brain anatomical regions or the intra-connectivity within the same brain anatomical region. An appropriate summary statistic, characterizing a meaningful feature of the subnetwork, is evaluated. Based on this summary statistic, a statistical test is performed to derive the corresponding p-value. Reformulating the problem in this way reduces the number of statistical tests in an orderly fashion based on our understanding of the problem. Considering the global testing problem, the p-values are corrected to control the rate of false discoveries. Finally, the procedure is followed by a local investigation within the significant subnetworks. We contrast this strategy with one based on individual measures in terms of power. We show that this strategy has great potential, in particular in cases where the subnetworks are well defined and the summary statistics are properly chosen. As an application example, we compare structural brain connection matrices of two groups of subjects with a 22q11.2 deletion syndrome, distinguished by their IQ scores.
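
    The false-discovery correction step of this strategy is commonly done with the Benjamini-Hochberg procedure, sketched below on made-up subnetwork p-values; the abstract does not specify which FDR procedure was used, so this is one standard choice.

    ```python
    # Benjamini-Hochberg FDR control over per-subnetwork p-values -- one
    # standard choice for the correction step; p-values are made up.
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Boolean mask of discoveries at FDR level q."""
        p = np.asarray(pvals)
        order = np.argsort(p)
        m = p.size
        passed = p[order] <= q * np.arange(1, m + 1) / m
        k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
        mask = np.zeros(m, dtype=bool)
        mask[order[:k]] = True
        return mask

    pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60, 0.74, 0.90]
    print(benjamini_hochberg(pvals, q=0.05))
    ```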

  19. Acquisition and statistical analysis of reliability data for I and C parts in plant protection system

    Energy Technology Data Exchange (ETDEWEB)

    Lim, T. J.; Byun, S. S.; Han, S. H.; Lee, H. J.; Lim, J. S.; Oh, S. J.; Park, K. Y.; Song, H. S. [Soongsil Univ., Seoul (Korea)

    2001-04-01

    This project has been performed in order to construct I and C part reliability databases for detailed analysis of the plant protection system and to develop a methodology for analysing trip set point drifts. A reliability database for the I and C parts of the plant protection system is required to perform the detailed analysis. First, we have developed an electronic part reliability prediction code based on MIL-HDBK-217F. Then we have collected generic reliability data for the I and C parts in the plant protection system. A statistical analysis procedure has been developed to process the data. Then the generic reliability database has been constructed. We have also collected plant specific reliability data for the I and C parts in the plant protection system for YGN 3,4 and UCN 3,4 units. The plant specific reliability database for I and C parts has been developed by the Bayesian procedure. We have also developed a statistical analysis procedure for set point drift, and performed analysis of drift effects for the trip set point. The basis for the detailed analysis can be provided from the reliability database for the PPS I and C parts. The safety of the KSNP and succeeding NPPs can be proved by reducing the uncertainty of PSA. Economic and efficient operation of NPPs can be made possible by optimizing the test period to reduce the utility's burden. 14 refs., 215 figs., 137 tabs. (Author)

  20. Statistical Error analysis of Nucleon-Nucleon phenomenological potentials

    CERN Document Server

    Perez, R Navarro; Arriola, E Ruiz

    2014-01-01

    Nucleon-Nucleon potentials are commonplace in nuclear physics and are determined from a finite number of experimental data with limited precision sampling the scattering process. We study the statistical assumptions implicit in the standard least squares fitting procedure and apply, along with more conventional tests, a tail-sensitive quantile-quantile test as a simple and confident tool to verify the normality of residuals. We show that the fulfilment of normality tests is linked to a judicious and consistent selection of a nucleon-nucleon database. These considerations prove crucial to a proper statistical error analysis and uncertainty propagation. We illustrate these issues by analyzing about 8000 published proton-proton and neutron-proton scattering data. This enables the construction of potentials meeting all statistical requirements necessary for statistical uncertainty estimates in nuclear structure calculations.
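
    The residual-normality check that is central here can be sketched with a normal quantile-quantile comparison plus a formal test. Shapiro-Wilk stands in for the tail-sensitive quantile-quantile test, which has no scipy implementation; the heavy-tailed residuals are simulated.

    ```python
    # Normality check of fit residuals: normal Q-Q correlation plus a formal
    # test. Shapiro-Wilk stands in for the tail-sensitive Q-Q test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(8)
    residuals = rng.standard_t(df=5, size=300)     # heavy tails: should fail

    (osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")
    w, p = stats.shapiro(residuals)
    print(f"Q-Q correlation r={r:.4f}, Shapiro-Wilk p={p:.4g}")
    ```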

  1. Data analysis using the Gnu R system for statistical computation

    Energy Technology Data Exchange (ETDEWEB)

    Simone, James; /Fermilab

    2011-07-01

    R is a language system for statistical computation. It is widely used in statistics, bioinformatics, machine learning, data mining, quantitative finance, and the analysis of clinical drug trials. Among the advantages of R are: it has become the standard language for developing statistical techniques, it is being actively developed by a large and growing global user community, it is open source software, it is highly portable (Linux, OS-X and Windows), it has a built-in documentation system, it produces high quality graphics and it is easily extensible with over four thousand extension library packages available covering statistics and applications. This report gives a very brief introduction to R with some examples using lattice QCD simulation results. It then discusses the development of R packages designed for chi-square minimization fits for lattice n-pt correlation functions.

  2. Data analysis of asymmetric structures advanced approaches in computational statistics

    CERN Document Server

    Saito, Takayuki

    2004-01-01

    Data Analysis of Asymmetric Structures provides a comprehensive presentation of a variety of models and theories for the analysis of asymmetry and its applications and provides a wealth of new approaches in every section. It meets both the practical and theoretical needs of research professionals across a wide range of disciplines and  considers data analysis in fields such as psychology, sociology, social science, ecology, and marketing. In seven comprehensive chapters this guide details theories, methods, and models for the analysis of asymmetric structures in a variety of disciplines and presents future opportunities and challenges affecting research developments and business applications.

  3. Relative urban ecosystem health assessment: a method integrating comprehensive evaluation and detailed analysis.

    Science.gov (United States)

    Su, Meirong; Yang, Zhifeng; Chen, Bin

    2010-12-01

    Regarding the basic roles of urban ecosystem health assessment (i.e., discovering the comprehensive health status and diagnosing the limiting factors of urban ecosystems), a general framework integrating comprehensive evaluation and detailed analysis is established, from both bottom-up and top-down directions. Emergy-based health indicators are established to reflect urban ecosystem health status from a biophysical viewpoint. Considering the intrinsic uncertainty and relativity of urban ecosystem health, set pair analysis is combined with the emergy-based indicators to fill the general framework and evaluate the relative health level of urban ecosystems. These techniques are favorable for understanding the overall urban ecosystem health status and confirming the limiting factors of the urban ecosystems concerned from a biophysical perspective. Moreover, clustering analysis is applied by combining the health status with spatial geographical conditions. Choosing 26 typical Chinese cities in 2005, relative comprehensive urban ecosystem health levels were evaluated. The higher health levels of Xiamen, Qingdao, Shenzhen, and Zhuhai are in particular contrast to those of Wuhan, Beijing, Yinchuan, and Harbin, which are relatively poor. In addition, the conditions of each factor and related indicators are investigated through set pair analysis, from which the critical limiting factors for Beijing are confirmed. According to the clustering analysis results, the urban ecosystems studied are divided into four groups. It is concluded that the proposed framework of urban ecosystem health assessment, which integrates comprehensive evaluation and detailed analysis and is fulfilled by emergy synthesis and set pair analysis, can serve as a useful tool for diagnosing urban ecosystem health.

  4. A novel statistic for genome-wide interaction analysis.

    Directory of Open Access Journals (Sweden)

    Xuesen Wu

    2010-09-01

    Full Text Available Although great progress in genome-wide association studies (GWAS) has been made, the significant SNP associations identified by GWAS account for only a few percent of the genetic variance, leading many to question where and how we can find the missing heritability. There is increasing interest in genome-wide interaction analysis as a possible source of finding heritability unexplained by current GWAS. However, the existing statistics for testing interaction have low power for genome-wide interaction analysis. To meet the challenges raised by genome-wide interaction analysis, we have developed a novel statistic for testing interaction between two loci (either linked or unlinked). The null distribution and the type I error rates of the new statistic for testing interaction are validated using simulations. Extensive power studies show that the developed statistic has much higher power to detect interaction than classical logistic regression. The results identified 44 and 211 pairs of SNPs showing significant evidence of interactions with FDR < 0.001 and 0.001 ≤ FDR, respectively. Genome-wide interaction analysis is a valuable tool for finding the remaining missing heritability unexplained by the current GWAS, and the developed novel statistic is able to search for significant interactions between SNPs across the genome. Real data analysis showed that the results of genome-wide interaction analysis can be replicated in two independent studies.

  5. A Statistical Analysis of Cointegration for I(2) Variables

    DEFF Research Database (Denmark)

    Johansen, Søren

    1995-01-01

    be conducted using the χ² distribution. It is shown to what extent inference on the cointegration ranks can be conducted using the tables already prepared for the analysis of cointegration of I(1) variables. New tables are needed for the test statistics to control the size of the tests. This paper

  6. Advanced Statistical and Data Analysis Tools for Astrophysics

    Science.gov (United States)

    Kashyap, V.; Scargle, Jeffrey D. (Technical Monitor)

    2001-01-01

    The goal of the project is to obtain, derive, and develop statistical and data analysis tools that would be of use in the analyses of high-resolution, high-sensitivity data that are becoming available with new instruments. This is envisioned as a cross-disciplinary effort with a number of collaborators.

  7. A New Statistic for Variable Selection in Questionnaire Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jun-hua; FANG Wei-wu

    2001-01-01

    In this paper, a new statistic is proposed for variable selection, which is one of the important problems in the analysis of questionnaire data. In contrast to other methods, the approach introduced here can be used not only for two groups of samples but can also be easily generalized to the multi-group case.

  8. Measures of radioactivity: a tool for understanding statistical data analysis

    CERN Document Server

    Montalbano, Vera

    2012-01-01

    A learning path on radioactivity in the last class of high school is presented. An introduction to radioactivity and nuclear phenomenology is followed by measurements of natural radioactivity. Background and weak sources are monitored for days or weeks. The data are analyzed in order to understand the importance of statistical analysis in modern physics.
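
    The statistical point of such a learning path can be shown in miniature: repeated counts from a steady background source are Poisson distributed, so their standard deviation should approach the square root of their mean. The counts below are simulated, not classroom data.

    ```python
    # Repeated counts from a steady source are Poisson, so the sample
    # standard deviation should approach sqrt(mean). Simulated counts.
    import numpy as np

    rng = np.random.default_rng(13)
    counts = rng.poisson(lam=12.0, size=200)   # 200 one-minute background counts

    m = counts.mean()
    print(f"mean={m:.2f}  std={counts.std(ddof=1):.2f}  sqrt(mean)={np.sqrt(m):.2f}")
    ```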

  9. Statistical Analysis of Hypercalcaemia Data related to Transferability

    DEFF Research Database (Denmark)

    Frølich, Anne; Nielsen, Bo Friis

    2005-01-01

    In this report we describe statistical analysis related to a study of hypercalcaemia carried out in the Copenhagen area in the ten year period from 1984 to 1994. Results from the study have previously been published in a number of papers [3, 4, 5, 6, 7, 8, 9] and in various abstracts and posters...

  10. AstroStat - A VO Tool for Statistical Analysis

    CERN Document Server

    Kembhavi, Ajit K; Kale, Tejas; Jagade, Santosh; Vibhute, Ajay; Garg, Prerak; Vaghmare, Kaustubh; Navelkar, Sharmad; Agrawal, Tushar; Nandrekar, Deoyani; Shaikh, Mohasin

    2015-01-01

    AstroStat is an easy-to-use tool for performing statistical analysis on data. It has been designed to be compatible with Virtual Observatory (VO) standards thus enabling it to become an integral part of the currently available collection of VO tools. A user can load data in a variety of formats into AstroStat and perform various statistical tests using a menu driven interface. Behind the scenes, all analysis is done using the public domain statistical software - R and the output returned is presented in a neatly formatted form to the user. The analyses performable include exploratory tests, visualizations, distribution fitting, correlation & causation, hypothesis testing, multivariate analysis and clustering. The tool is available in two versions with identical interface and features - as a web service that can be run using any standard browser and as an offline application. AstroStat will provide an easy-to-use interface which can allow for both fetching data and performing powerful statistical analysis on ...

  11. Introduction to Statistics and Data Analysis With Computer Applications I.

    Science.gov (United States)

    Morris, Carl; Rolph, John

    This document consists of unrevised lecture notes for the first half of a 20-week in-house graduate course at Rand Corporation. The chapter headings are: (1) Histograms and descriptive statistics; (2) Measures of dispersion, distance and goodness of fit; (3) Using JOSS for data analysis; (4) Binomial distribution and normal approximation; (5)…

  12. Investigation of Weibull statistics in fracture analysis of cast aluminum

    Science.gov (United States)

    Holland, F. A., Jr.; Zaretsky, E. V.

    1989-01-01

    The fracture strengths of two large batches of A357-T6 cast aluminum coupon specimens were compared by using two-parameter Weibull analysis. The minimum number of these specimens necessary to find the fracture strength of the material was determined. The applicability of three-parameter Weibull analysis was also investigated. A design methodology based on the combination of elementary stress analysis and Weibull statistical analysis is advanced and applied to the design of a spherical pressure vessel shell. The results from this design methodology are compared with results from the applicable ASME pressure vessel code.
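
    As a rough illustration of the two-parameter Weibull analysis described in this record, the following sketch fits a Weibull distribution to a batch of fracture strengths using SciPy. The strength values and design stress are hypothetical, not data from the study.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical coupon fracture strengths in MPa (not the study's data)
    strengths = np.array([268.0, 275.0, 281.0, 290.0, 294.0, 301.0, 308.0, 315.0])

    # Two-parameter fit: fix the location at zero so only shape and scale are free
    shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)
    print(f"Weibull modulus m = {shape:.1f}, characteristic strength = {scale:.1f} MPa")

    # Survival probability at a candidate design stress, as used in shell design
    design_stress = 250.0
    print(f"P(survival) = {stats.weibull_min.sf(design_stress, shape, loc, scale):.4f}")
    ```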

  14. Multivariate statistical analysis of precipitation chemistry in Northwestern Spain

    Energy Technology Data Exchange (ETDEWEB)

    Prada-Sanchez, J.M.; Garcia-Jurado, I.; Gonzalez-Manteiga, W.; Fiestras-Janeiro, M.G.; Espada-Rios, M.I.; Lucas-Dominguez, T. (University of Santiago, Santiago (Spain). Faculty of Mathematics, Dept. of Statistics and Operations Research)

    1993-07-01

    149 samples of rainwater were collected in the proximity of a power station in northwestern Spain at three rainwater monitoring stations. The resulting data are analyzed using multivariate statistical techniques. First, Principal Component Analysis shows that there are three main sources of pollution in the area (a marine source, a rural source and an acid source). The impact of pollution from these sources on the immediate environment of the stations is then studied using Factorial Discriminant Analysis. 8 refs., 7 figs., 11 tabs.

  15. HistFitter software framework for statistical data analysis

    Science.gov (United States)

    Baak, M.; Besjes, G. J.; Côté, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-04-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface.

  16. Common pitfalls in statistical analysis: Linear regression analysis.

    Science.gov (United States)

    Aggarwal, Rakesh; Ranganathan, Priya

    2017-01-01

    In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis.

  17. Common pitfalls in statistical analysis: Linear regression analysis

    Directory of Open Access Journals (Sweden)

    Rakesh Aggarwal

    2017-01-01

    Full Text Available In a previous article in this series, we explained correlation analysis which describes the strength of relationship between two continuous variables. In this article, we deal with linear regression analysis which predicts the value of one continuous variable from another. We also discuss the assumptions and pitfalls associated with this analysis.

  18. Statistical analysis of absorptive laser damage in dielectric thin films

    Energy Technology Data Exchange (ETDEWEB)

    Budgor, A.B.; Luria-Budgor, K.F.

    1978-09-11

    The Weibull distribution arises as an example of the theory of extreme events. It is commonly used to fit statistical data arising in the failure analysis of electrical components and in DC breakdown of materials. This distribution is employed to analyze time-to-damage and intensity-to-damage statistics obtained when irradiating thin-film-coated samples of SiO2, ZrO2, and Al2O3 with tightly focused laser beams. The data used are furnished by Milam. The fit to the data is excellent, and least-squares correlation coefficients greater than 0.9 are often obtained.

  19. Comparison of Statistical Models for Regional Crop Trial Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qun-yuan; KONG Fan-ling

    2002-01-01

    Based on a review and comparison of the main statistical analysis models for estimating variety-environment cell means in regional crop trials, a new statistical model, the LR-PCA composite model, was proposed, and the predictive precision of these models was compared by cross-validation on an example data set. Results showed that the order of model precision was LR-PCA model > AMMI model > PCA model > Treatment Means (TM) model > Linear Regression (LR) model > Additive Main Effects ANOVA model. The precision gain factor of the LR-PCA model was 1.55, an increase of 8.4% over the AMMI model.

  20. Network similarity and statistical analysis of earthquake seismic data

    CERN Document Server

    Deyasi, Krishanu; Banerjee, Anirban

    2016-01-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We calculate the conditional probability of the forthcoming occurrences of earthquakes in each region. The conditional probability of each event has been compared with their stationary distribution.
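
    A minimal sketch of the spectral comparison step, assuming networkx and scipy are available; the two random graphs stand in for earthquake networks (whose construction from catalogs is not shown), and the helper name is mine, for illustration.

    ```python
    import numpy as np
    import networkx as nx
    from scipy.spatial.distance import jensenshannon

    def spectral_distribution(g, bins):
        """Histogram of Laplacian eigenvalues, used as the graph's spectral signature."""
        lam = np.linalg.eigvalsh(nx.laplacian_matrix(g).toarray().astype(float))
        p, _ = np.histogram(lam, bins=bins)
        return p / p.sum()

    # Random graphs as placeholders for two regional earthquake networks
    g1 = nx.gnp_random_graph(60, 0.10, seed=1)
    g2 = nx.gnp_random_graph(60, 0.12, seed=2)

    bins = np.linspace(0.0, 20.0, 41)
    # scipy returns the JS *distance* (square root of the divergence), hence the square
    jsd = jensenshannon(spectral_distribution(g1, bins), spectral_distribution(g2, bins)) ** 2
    print(f"Jensen-Shannon divergence: {jsd:.4f}")
    ```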

  1. [Some basic aspects in statistical analysis of visual acuity data].

    Science.gov (United States)

    Ren, Ze-Qin

    2007-06-01

    All visual acuity charts currently in use have their own shortcomings. Therefore, it is difficult for ophthalmologists to evaluate visual acuity data. Many problems arise in the use of statistical methods for handling visual acuity data in clinical research. The quantitative relationship between visual acuity and visual angle varies among visual acuity charts, and visual acuity and visual angle are different types of quantities. Therefore, different statistical methods should be used for different data sources. A correct understanding and analysis of visual acuity data can be obtained only after the elucidation of these aspects.

  2. Network similarity and statistical analysis of earthquake seismic data

    Science.gov (United States)

    Deyasi, Krishanu; Chakraborty, Abhijit; Banerjee, Anirban

    2017-09-01

    We study the structural similarity of earthquake networks constructed from seismic catalogs of different geographical regions. A hierarchical clustering of underlying undirected earthquake networks is shown using Jensen-Shannon divergence in graph spectra. The directed nature of links indicates that each earthquake network is strongly connected, which motivates us to study the directed version statistically. Our statistical analysis of each earthquake region identifies the hub regions. We calculate the conditional probability of the forthcoming occurrences of earthquakes in each region. The conditional probability of each event has been compared with their stationary distribution.

  3. Error Analysis of Terrestrial Laser Scanning Data by Means of Spherical Statistics and 3D Graphs

    Directory of Open Access Journals (Sweden)

    Pedro Arias

    2010-11-01

    Full Text Available This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A study case is presented and discussed in detail. Errors were calculated using 53 check points (CP), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis by 3D graphics and numerical spherical statistics). Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy, covering the angular error component, the uniformity of the vector distribution, and error isotropy, in addition to the modular error component given by linear statistics.
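
    The decomposition of error vectors into one module and two angles can be sketched as follows; the check-point coordinates here are simulated, since the study's data are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    reference = rng.uniform(0.0, 10.0, size=(53, 3))         # digitizer coordinates (simulated)
    measured = reference + rng.normal(0.0, 0.002, (53, 3))   # TLS coordinates (simulated)

    e = measured - reference                        # 3D error vectors at the check points
    module = np.linalg.norm(e, axis=1)              # modular component (linear statistics)
    theta = np.arccos(e[:, 2] / module)             # inclination from the vertical axis
    phi = np.arctan2(e[:, 1], e[:, 0])              # azimuth in the horizontal plane

    # Spherical summary: the mean resultant length indicates (an)isotropy of directions
    unit = e / module[:, None]
    R_bar = np.linalg.norm(unit.mean(axis=0))       # near 0: uniform (isotropic) errors
    print(f"mean modular error = {module.mean() * 1000:.2f} mm, R_bar = {R_bar:.3f}")
    ```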

  4. Statistical analysis and interpolation of compositional data in materials science.

    Science.gov (United States)

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-01

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
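
    As a minimal sketch of the specialized tools the authors refer to, the centered log-ratio (clr) transform below maps compositions off the simplex so that ordinary statistics apply. The atomic fractions and helper names are invented for illustration.

    ```python
    import numpy as np

    def clr(x):
        """Centered log-ratio transform: rows are compositions (positive, constant sum)."""
        logx = np.log(x)
        return logx - logx.mean(axis=1, keepdims=True)

    def clr_inverse(z):
        """Map clr coordinates back to the simplex (closure via a softmax-like step)."""
        ez = np.exp(z)
        return ez / ez.sum(axis=1, keepdims=True)

    # Invented atomic concentrations of a three-component material
    fractions = np.array([[0.20, 0.30, 0.50],
                          [0.10, 0.60, 0.30]])

    z = clr(fractions)                   # Euclidean coordinates: means, correlations,
    z_mean = z.mean(axis=0)              # and interpolation are now well defined
    print(clr_inverse(z_mean[None, :]))  # a compositionally valid "average"
    ```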

  5. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  6. Detailed abundance analysis of a metal-poor giant in the Galactic Center

    CERN Document Server

    Ryde, N; Rich, R M; Thorsbro, B; Schultheis, M; Origlia, L; Chatzopoulos, S

    2016-01-01

    We report the first results from our program to examine the metallicity distribution of the Milky Way nuclear star cluster connected to SgrA*, with the goal of inferring the star formation and enrichment history of this system, as well as its connection and relationship with the central 100 pc of the bulge/bar system. We present the first high resolution (R~24,000), detailed abundance analysis of a K=10.2 metal-poor, alpha-enhanced red giant projected at 1.5 pc from the Galactic Center, using NIRSPEC on Keck II. A careful analysis of the dynamics and color of the star locates it at about 26 pc line-of-sight distance in front of the nuclear cluster. It probably belongs to one of the nuclear components (cluster or disk), not to the bar-bulge or classical disk. A detailed spectroscopic synthesis, using a new linelist in the K band, finds [Fe/H]~-1.0 and [alpha/Fe]~+0.4, consistent with stars of similar metallicity in the bulge. As known giants with comparable [Fe/H] and alpha enhancement are old, we conclude tha...

  7. Thermohydraulic incidents at full power (safety analysis detailed report no. 1)

    Energy Technology Data Exchange (ETDEWEB)

    1988-01-15

    In this paper, attention is focused on the role of plant-incident analysis during the design stage and the conclusions reached regarding safety. This class of incidents includes sequences arising from the breakdown or anomalous behaviour of components or from errors in plant operation that have repercussions on the process of the system involved and the related systems. The sequences of possible relevance to safety are those which stress the active and passive protection (containment barriers). As these stresses are below the design-basis limits, they have no consequences in terms of radioactivity release. This report illustrates in greater detail the analysis that led to this conclusion, with particular reference to reactor events that have significant consequences on the first barrier (fuel cladding). Thermohydraulic incidents at full power are examined here.

  8. Exergy analysis of an industrial-scale ultrafiltrated (UF) cheese production plant: a detailed survey

    Science.gov (United States)

    Nasiri, Farshid; Aghbashlo, Mortaza; Rafiee, Shahin

    2017-02-01

    In this study, a detailed exergy analysis of an industrial-scale ultrafiltrated (UF) cheese production plant was conducted based on actual operational data in order to provide more comprehensive insights into the performance of the whole plant and its main subcomponents. The plant included four main subsystems, i.e., steam generator (I), above-zero refrigeration system (II), Bactocatch-assisted pasteurization line (III), and UF cheese production line (IV). In addition, this analysis was aimed at quantifying the exergy destroyed in processing a known quantity of the UF cheese using the mass allocation method. The specific exergy destruction of the UF cheese production was determined at 2330.42 kJ/kg. The contributions of the subsystems I, II, III, and IV to the specific exergy destruction of the UF cheese production were computed as 1337.67, 386.18, 283.05, and 323.51 kJ/kg, respectively. Additionally, it was observed through the analysis that the steam generation system had the largest contribution to the thermodynamic inefficiency of the UF cheese production, accounting for 57.40 % of the specific exergy destruction. Generally, the outcomes of this survey further manifested the benefits of applying exergy analysis for design, analysis, and optimization of industrial-scale dairy processing plants to achieve the most cost-effective and environmentally-benign production strategies.

  10. Statistical Analysis of SAR Sea Clutter for Classification Purposes

    Directory of Open Access Journals (Sweden)

    Jaime Martín-de-Nicolás

    2014-09-01

    Full Text Available Statistical analysis of radar clutter has been one of the topics attracting the most effort over the last few decades. These studies usually focused on finding the statistical models that best fitted the clutter distribution; however, the goal of this work is not the modeling of the clutter, but the study of the suitability of the statistical parameters to carry out a sea state classification. In order to achieve this objective and provide some relevance to this study, an important set of maritime and coastal Synthetic Aperture Radar data is considered. Due to the nature of the acquisition of data by SAR sensors, speckle noise is inherent to these data, and a specific study of how this noise affects the clutter distribution is also performed in this work. In pursuit of a sense of wholeness, a thorough study of the most suitable statistical parameters, as well as the most adequate classifier, is carried out, achieving excellent results in terms of classification success rates. These concluding results confirm that a sea state classification is not only viable, but also successful using statistical parameters different from those of the best modeling distribution and applying a speckle filter, which allows a better characterization of the parameters used to distinguish between different sea states.

  11. Wavelet analysis in ecology and epidemiology: impact of statistical tests.

    Science.gov (United States)

    Cazelles, Bernard; Cazelles, Kévin; Chavez, Mario

    2014-02-06

    Wavelet analysis is now frequently used to extract information from ecological and epidemiological time series. Statistical hypothesis tests are conducted on associated wavelet quantities to assess the likelihood that they are due to a random process. Such random processes represent null models and are generally based on synthetic data that share some statistical characteristics with the original time series. This allows the comparison of null statistics with those obtained from original time series. When creating synthetic datasets, different techniques of resampling result in different characteristics shared by the synthetic time series. Therefore, it becomes crucial to consider the impact of the resampling method on the results. We have addressed this point by comparing seven different statistical testing methods applied with different real and simulated data. Our results show that statistical assessment of periodic patterns is strongly affected by the choice of the resampling method, so two different resampling techniques could lead to two different conclusions about the same time series. Moreover, our results clearly show the inadequacy of resampling series generated by white noise and red noise that are nevertheless the methods currently used in the wide majority of wavelets applications. Our results highlight that the characteristics of a time series, namely its Fourier spectrum and autocorrelation, are important to consider when choosing the resampling technique. Results suggest that data-driven resampling methods should be used such as the hidden Markov model algorithm and the 'beta-surrogate' method.
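
    A sketch of one of the null models compared in the study, red-noise (AR(1)) resampling, with the AR parameters estimated from the series itself; the series is synthetic and the helper name is mine, for illustration.

    ```python
    import numpy as np

    def ar1_surrogates(x, n_surrogates=1000, seed=0):
        """Red-noise surrogates matching the series' variance and lag-1 autocorrelation."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float) - np.mean(x)
        rho = np.corrcoef(x[:-1], x[1:])[0, 1]         # lag-1 autocorrelation
        sigma = np.std(x) * np.sqrt(1.0 - rho**2)      # innovation standard deviation
        surr = np.empty((n_surrogates, x.size))
        surr[:, 0] = rng.normal(0.0, np.std(x), n_surrogates)
        for t in range(1, x.size):
            surr[:, t] = rho * surr[:, t - 1] + rng.normal(0.0, sigma, n_surrogates)
        return surr

    series = np.sin(np.linspace(0, 20 * np.pi, 512)) \
             + np.random.default_rng(1).normal(0, 1, 512)
    null_ensemble = ar1_surrogates(series)
    # Wavelet statistics of the data are then ranked against those of this ensemble.
    ```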

  12. Statistical analysis of the precision of the Match method

    Directory of Open Access Journals (Sweden)

    R. Lehmann

    2005-05-01

    Full Text Available The Match method quantifies chemical ozone loss in the polar stratosphere. The basic idea consists in calculating the forward trajectory of an air parcel that has been probed by an ozone measurement (e.g., by an ozone sonde or satellite) and finding a second ozone measurement close to this trajectory. Such an event is called a "match". A rate of chemical ozone destruction can be obtained by a statistical analysis of several tens of such match events. Information on the uncertainty of the calculated rate can be inferred from the scatter of the ozone mixing ratio difference (second measurement minus first measurement) associated with individual matches. A standard analysis would assume that the errors of these differences are statistically independent. However, this assumption may be violated because different matches can share a common ozone measurement, so that the errors associated with these match events become statistically dependent. Taking this effect into account, we present an analysis of the uncertainty of the final Match result. It has been applied to Match data from the Arctic winters 1995, 1996, 2000, and 2003. For these ozone-sonde Match studies the effect of the error correlation on the uncertainty estimates is rather small: compared to a standard error analysis, the uncertainty estimates increase by 15% on average. However, the effect is more pronounced for typical satellite Match analyses: for an Antarctic satellite Match study (2003), the uncertainty estimates increase by 60% on average.

  13. Second-order hydrodynamics for fermionic cold atoms: Detailed analysis of transport coefficients and relaxation times

    CERN Document Server

    Kikuchi, Yuta; Kunihiro, Teiji

    2016-01-01

    We give a detailed derivation of the second-order (local) hydrodynamics for the Boltzmann equation with an external force by using the renormalization group method. In this method, we solve the Boltzmann equation faithfully to extract the hydrodynamics without recourse to any ansatz. Our method leads to microscopic expressions of not only all the transport coefficients that are of the same form as those in Chapman-Enskog method but also those of the viscous relaxation times $\tau_i$ that admit physically natural interpretations. As an example, we apply our microscopic expressions to calculate the transport coefficients and the relaxation times of the cold fermionic atoms in a quantitative way, where the transition probability in the collision term is given explicitly in terms of the $s$-wave scattering length $a_s$. We thereby discuss the quantum statistical effects, temperature dependence, and scattering-length dependence of the first-order transport coefficients and the viscous relaxation times: It is shown tha...

  14. Developments in remote sensing technology enable more detailed urban flood risk analysis.

    Science.gov (United States)

    Denniss, A.; Tewkesbury, A.

    2009-04-01

    digital airborne sensors, both optical and lidar, to produce the input layer for surface water flood modelling. A national flood map product has been created. The new product utilises sophisticated modelling techniques, perfected over many years, which harness graphical processing power. This product will prove particularly valuable for risk assessment decision support within insurance/reinsurance, property/environmental, utilities, risk management and government agencies. However, it is not just the ground elevation that determines the behaviour of surface water. By combining height information (surface and terrain) with high resolution aerial photography and colour infrared imagery, a high definition land cover mapping dataset (LandBase) is being produced, which provides a precise measure of sealed versus non-sealed surface. This will allow even more sophisticated modelling of flood scenarios. Thus, the value of airborne survey data can be demonstrated by flood risk analysis down to individual addresses in urban areas. However, for some risks, an even more detailed survey may be justified. In order to achieve this, Infoterra is testing new 360˚ mobile lidar technology. Collecting lidar data from a moving vehicle allows each street to be mapped in very high detail, allowing precise information about the location, size and shape of features such as kerbstones, gullies, road camber and building threshold level to be captured quickly and accurately. These data can then be used to model the problem of overland flood risk at the scale of individual properties. Whilst at present it might be impractical to undertake such detailed modelling for all properties, these techniques can certainly be used to improve the flood risk analysis of key locations. This paper will demonstrate how these new high resolution remote sensing techniques can be combined to provide a new resolution of detail to aid urban flood modelling.

  15. Statistical strategies to reveal potential vibrational markers for in vivo analysis by confocal Raman spectroscopy

    Science.gov (United States)

    Oliveira Mendes, Thiago de; Pinto, Liliane Pereira; Santos, Laurita dos; Tippavajhala, Vamshi Krishna; Téllez Soto, Claudio Alberto; Martin, Airton Abrahão

    2016-07-01

    The analysis of biological systems by spectroscopic techniques involves the evaluation of hundreds to thousands of variables. Hence, different statistical approaches are used to elucidate regions that discriminate classes of samples and to propose new vibrational markers for explaining various phenomena like disease monitoring, mechanisms of action of drugs, food, and so on. However, these statistical techniques are not always widely discussed in the applied sciences. In this context, this work presents a detailed discussion including the various steps necessary for proper statistical analysis. It includes univariate parametric and nonparametric tests, as well as multivariate unsupervised and supervised approaches. The main objective of this study is to promote proper understanding of the application of various statistical tools in these spectroscopic methods used for the analysis of biological samples. The discussion of these methods is based on a set of in vivo confocal Raman spectra from an analysis of human skin that aims to identify skin aging markers. In the Appendix, a complete data analysis routine is executed in free software that can be used by the scientific community involved in these studies.

  16. Comparison of different computed radiography systems: Physical characterization and contrast detail analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rivetti, Stefano; Lanconelli, Nico; Bertolini, Marco; Nitrosi, Andrea; Burani, Aldo; Acchiappati, Domenico [Servizio Fisica Sanitaria, Azienda USL di Modena, 41100 Modena (Italy); Department of Physics, Alma Mater Studiorum, University of Bologna, Viale Berti Pichat 6/2, 40127 Bologna (Italy); Arcispedale Santa Maria Nuova, 42123 Reggio Emilia (Italy); Azienda USL di Modena, Ospedale di Sassuolo, 41049 Sassuolo (Italy); Servizio Fisica Sanitaria, Azienda USL di Modena, 41100 Modena (Italy)]

    2010-02-15

    Purpose: In this study, five different units based on three different technologies are compared in terms of physical characterization and contrast detail analysis: traditional computed radiography (CR) units with granular phosphor and single-side reading, units with granular phosphor and dual-side reading, and units with columnar phosphor and line-scanning reading. Methods: The physical characterization of the five systems was obtained with the standard beam condition RQA5. Three of the units have been developed by FUJIFILM (FCR ST-VI, FCR ST-BD, and FCR Velocity U), one by Kodak (Direct View CR 975), and one by Agfa (DX-S). The quantitative comparison is based on the calculation of the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Noise investigation was also achieved by using a relative standard deviation analysis. Psychophysical characterization is assessed by performing a contrast detail analysis with an automatic reading of CDRAD images. Results: The most advanced units based on columnar phosphors provide MTF values in line with or better than those from conventional CR systems. The greater thickness of the columnar phosphor improves the efficiency, allowing for enhanced noise properties. In fact, NPS values for standard CR systems are remarkably higher for all the investigated exposures and especially for frequencies up to 3.5 lp/mm. As a consequence, DQE values for the three units based on columnar phosphors and line-scanning reading, or granular phosphor and dual-side reading, are clearly better than those from conventional CR systems. Actually, DQE values of about 40% are easily achievable for all the investigated exposures. Conclusions: This study suggests that systems based on the dual-side reading or line-scanning reading with columnar phosphors provide a remarkable improvement when compared to conventional CR units and yield results in line with those obtained from most digital detectors for radiography.

  17. Towards Advanced Data Analysis by Combining Soft Computing and Statistics

    CERN Document Server

    Gil, María; Sousa, João; Verleysen, Michel

    2013-01-01

    Soft computing, as an engineering science, and statistics, as a classical branch of mathematics, emphasize different aspects of data analysis. Soft computing focuses on obtaining working solutions quickly, accepting approximations and unconventional approaches. Its strength lies in its flexibility to create models that suit the needs arising in applications. In addition, it emphasizes the need for intuitive and interpretable models, which are tolerant to imprecision and uncertainty. Statistics is more rigorous and focuses on establishing objective conclusions based on experimental data by analyzing the possible situations and their (relative) likelihood. It emphasizes the need for mathematical methods and tools to assess solutions and guarantee performance. Combining the two fields enhances the robustness and generalizability of data analysis methods, while preserving the flexibility to solve real-world problems efficiently and intuitively.

  18. Collagen morphology and texture analysis: from statistics to classification

    Science.gov (United States)

    Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.

    2013-07-01

    In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as gray level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerosis arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and muscular-skeletal diseases affecting ligaments and cartilage.
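
    A minimal sketch of FOS and GLCM feature extraction using scikit-image (recent versions; older releases spell these functions greycomatrix/greycoprops). The random image stands in for an SHG collagen image.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)  # stand-in for an SHG image

    # First-order statistics from the gray-level histogram
    fos = {"mean": float(img.mean()), "std": float(img.std())}

    # Gray level co-occurrence matrix at distance 1, two orientations
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = {prop: float(graycoprops(glcm, prop).mean())
               for prop in ("contrast", "homogeneity", "energy", "correlation")}

    features = {**fos, **texture}   # feature vector that would feed the classifier
    print(features)
    ```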

  19. GNSS Spoofing Detection Based on Signal Power Measurements: Statistical Analysis

    Directory of Open Access Journals (Sweden)

    V. Dehghanian

    2012-01-01

    Full Text Available A threat to GNSS receivers is posed by a spoofing transmitter that emulates authentic signals but with randomized code phase and Doppler values over a small range. Such spoofing signals can result in large navigational solution errors that are passed onto the unsuspecting user with potentially dire consequences. An effective spoofing detection technique is developed in this paper, based on signal power measurements and that can be readily applied to present consumer grade GNSS receivers with minimal firmware changes. An extensive statistical analysis is carried out based on formulating a multihypothesis detection problem. Expressions are developed to devise a set of thresholds required for signal detection and identification. The detection processing methods developed are further manipulated to exploit incidental antenna motion arising from user interaction with a GNSS handheld receiver to further enhance the detection performance of the proposed algorithm. The statistical analysis supports the effectiveness of the proposed spoofing detection technique under various multipath conditions.

  20. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    Science.gov (United States)

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, the focus is on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels as well as methods for data preprocessing are covered.

  1. Business Management Simulations – a detailed industry analysis as well as recommendations for the future

    Directory of Open Access Journals (Sweden)

    Michael Batko

    2016-06-01

    Full Text Available Exposure to serious games showed that some simulations vary widely in quality and learning outcome. In order to get to the bottom of best practices, a detailed review of business management simulation literature was conducted. Additionally, an industry analysis was performed by interviewing 17 simulation companies, testing a range of full and demo games, and conducting secondary research. The findings from both research efforts were then collated and cross-referenced against each other in order to determine three things: firstly, the practices and features used by simulation companies that have not yet been the subject of academic research; secondly, the most effective features, elements and inclusions within simulations that best assist in the achievement of learning outcomes and the enhancement of the user experience; and finally, 'best practices' in teaching a business management course in a university or company with the assistance of a simulation. Identified gaps in the current research were found to include the effectiveness of avatars, transparent pricing and the benefits of playing the simulation against other teams as opposed to against the computer. In relation to the second and third objectives of the research, the findings were used to compile a business plan, with detailed recommendations for companies looking to develop a new simulation, and for instructors implementing and coordinating the use of a simulation in a business management context.

  2. Statistical Analysis of the Exchange Rate of Bitcoin.

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.
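
    A rough sketch of the fitting-and-ranking procedure with a few of the candidate families, using synthetic heavy-tailed returns in place of the BTC/USD series. The paper's best-fitting generalized hyperbolic distribution is available in recent SciPy versions as stats.genhyperbolic and could be added to the candidate list.

    ```python
    import numpy as np
    from scipy import stats

    # Synthetic heavy-tailed log-returns standing in for the BTC/USD series
    log_returns = stats.t.rvs(df=3, scale=0.02, size=1500, random_state=1)

    candidates = {"normal": stats.norm, "Student t": stats.t, "Laplace": stats.laplace}
    for name, dist in candidates.items():
        params = dist.fit(log_returns)                      # maximum-likelihood fit
        loglik = np.sum(dist.logpdf(log_returns, *params))
        aic = 2 * len(params) - 2 * loglik                  # lower AIC: better fit
        print(f"{name:10s} AIC = {aic:8.1f}")
    ```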

  3. Lifetime statistics of quantum chaos studied by a multiscale analysis

    KAUST Repository

    Di Falco, A.

    2012-04-30

    In a series of pump and probe experiments, we study the lifetime statistics of a quantum chaotic resonator when the number of open channels is greater than one. Our design embeds a stadium billiard into a two dimensional photonic crystal realized on a silicon-on-insulator substrate. We calculate resonances through a multiscale procedure that combines energy landscape analysis and wavelet transforms. Experimental data is found to follow the universal predictions arising from random matrix theory with an excellent level of agreement.

  4. Statistical and machine learning approaches for network analysis

    CERN Document Server

    Dehmer, Matthias

    2012-01-01

    Explore the multidisciplinary nature of complex networks through machine learning techniques. Statistical and Machine Learning Approaches for Network Analysis provides an accessible framework for structurally analyzing graphs by bringing together known and novel approaches on graph classes and graph measures for classification. By providing different approaches based on experimental data, the book uniquely sets itself apart from the current literature by exploring the application of machine learning techniques to various types of complex networks. Comprised of chapters written by internation

  5. Statistical analysis on reliability and serviceability of caterpillar tractor

    Institute of Scientific and Technical Information of China (English)

    WANG Jinwu; LIU Jiafu; XU Zhongxiang

    2007-01-01

    To further the understanding of tractor reliability and serviceability and to furnish scientific and technical bases for their promotion and application, experiments and statistical analyses on the reliability (reliability and MTBF) and serviceability (service and MTTR) of the Dongfanghong-1002 and Dongfanghong-802 were conducted. The results showed that the mean intervals between failures of these two tractors were 182.62 h and 160.2 h, respectively, and that the weakest assembly of both was the engine.

  6. Common pitfalls in statistical analysis: Odds versus risk

    Science.gov (United States)

    Ranganathan, Priya; Aggarwal, Rakesh; Pramesh, C. S.

    2015-01-01

    In biomedical research, we are often interested in quantifying the relationship between an exposure and an outcome. “Odds” and “Risk” are the most common terms which are used as measures of association between variables. In this article, which is the fourth in the series of common pitfalls in statistical analysis, we explain the meaning of risk and odds and the difference between the two. PMID:26623395

  7. Statistical Analysis of the Exchange Rate of Bitcoin

    Science.gov (United States)

    Chu, Jeffrey; Nadarajah, Saralees; Chan, Stephen

    2015-01-01

    Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate. PMID:26222702

  8. Statistical Analysis of the Exchange Rate of Bitcoin.

    Directory of Open Access Journals (Sweden)

    Jeffrey Chu

    Full Text Available Bitcoin, the first electronic payment system, is becoming a popular currency. We provide a statistical analysis of the log-returns of the exchange rate of Bitcoin versus the United States Dollar. Fifteen of the most popular parametric distributions in finance are fitted to the log-returns. The generalized hyperbolic distribution is shown to give the best fit. Predictions are given for future values of the exchange rate.

  9. Statistical analysis of complex systems with nonclassical invariant measures

    KAUST Repository

    Fratalocchi, Andrea

    2011-02-28

    I investigate the problem of finding a statistical description of a complex many-body system whose invariant measure cannot be constructed stemming from classical thermodynamics ensembles. By taking solitons as a reference system and by employing a general formalism based on the Ablowitz-Kaup-Newell-Segur scheme, I demonstrate how to build an invariant measure and, within a one-dimensional phase space, how to develop a suitable thermodynamics. A detailed example is provided with a universal model of wave propagation, with reference to a transparent potential sustaining gray solitons. The system shows a rich thermodynamic scenario, with a free-energy landscape supporting phase transitions and controllable emergent properties. I finally discuss the origin of such behavior, trying to identify common denominators in the area of complex dynamics.

  10. Gyrokinetic turbulence: between idealized estimates and a detailed analysis of nonlinear energy transfers

    CERN Document Server

    Teaca, Bogdan; Told, Daniel

    2016-01-01

    Using large resolution numerical simulations of GK turbulence, spanning an interval ranging from the end of the fluid scales to the electron gyroradius, we study the energy transfers in the perpendicular direction for a proton-electron plasma in a slab magnetic geometry. In addition, to aid our understanding of the nonlinear cascade, we use an idealized test representation for the energy transfers between two scales, mimicking the dynamics of turbulence in an infinite inertial range. For GK turbulence, a detailed analysis of nonlinear energy transfers that account for the separation of energy exchanging scales is performed. We show that locality functions associated with the energy cascade across dyadic (i.e. multiple of two) separated scales achieve an asymptotic state, recovering clear values for the locality exponents. We relate these exponents to the energy exchange between two scales, diagnostics that are less computationally intensive than the locality functions. It is the first time asymptotic locality...

  11. Propagating Disturbances in Coronal Loops: A Detailed Analysis of Propagation Speeds

    CERN Document Server

    Kiddie, G; Del Zanna, G; McIntosh, S W; Whittaker, I

    2012-01-01

    Quasi-periodic disturbances have been observed in the outer solar atmosphere for many years now. Although first interpreted as upflows (Schrijver et al. (1999)), they have been widely regarded as slow magnetoacoustic waves, due to observed velocities and periods. However, recent observations have questioned this interpretation, as periodic disturbances in Doppler velocity, line width and profile asymmetry were found to be in phase with the intensity oscillations (De Pontieu et al. (2010), Tian et al. (2011)), suggesting the disturbances could be quasi-periodic upflows. Here we conduct a detailed analysis of the velocities of these disturbances across several wavelengths using the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory (SDO). We analysed 41 examples, including both sunspot and non-sunspot regions of the Sun. We found that the velocities of propagating disturbances (PDs) located at sunspots are more likely to be temperature dependent, whereas the velocities of PDs at non sun...

  12. Detailed string stability analysis for bi-directional optimal velocity model

    Institute of Scientific and Technical Information of China (English)

    郑亮

    2015-01-01

    The class of bi-directional optimal velocity models can describe the bi-directional looking effect that usually exists in reality and is even enhanced with the development of connected vehicle technologies. Its combined string stability condition can be obtained through the method of ring-road based string stability analysis. However, the partial string stability of traffic fluctuations propagated backward or forward was neglected; it is analyzed in detail in this work by the method of the transfer function and its H∞ norm from the viewpoint of control theory. Then, through comparing the conditions of combined and partial string stabilities, their relationships divide traffic flow into three distinguishable regions, displaying various combined and partial string stability performance. Finally, the numerical experiments verify the theoretical results and find that the final string stability or instability performance results from the accumulated and offset effects of traffic fluctuations propagated from different directions.
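
    The H∞-norm criterion can be sketched as below, assuming SciPy is available; the second-order transfer function is a placeholder, not the paper's bi-directional car-following model.

    ```python
    import numpy as np
    from scipy import signal

    # Placeholder perturbation transfer function G(s) = (0.5 s + 1) / (s^2 + 1.2 s + 1)
    G = signal.TransferFunction([0.5, 1.0], [1.0, 1.2, 1.0])

    w = np.logspace(-3, 2, 2000)        # frequency grid (rad/s)
    _, H = signal.freqresp(G, w)        # complex frequency response G(jw)
    h_inf = np.abs(H).max()             # numerical estimate of the H-infinity norm

    # String stability requires |G(jw)| <= 1 at all frequencies, i.e. ||G||_inf <= 1
    verdict = "string stable" if h_inf <= 1.0 else "string unstable"
    print(f"||G||_inf = {h_inf:.3f} -> {verdict}")
    ```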

  13. Detailed analysis of distraction induced by in-vehicle verbal interactions on visual search performance

    Directory of Open Access Journals (Sweden)

    Kazumitsu Shinohara

    2010-07-01

    Full Text Available We examined the negative effect of in-vehicle verbal interaction on visual search performance. Twenty participants performed a primary visual search task and a secondary verbal interaction task concurrently. We found that visual search performance deteriorated when the secondary task involving memory retrieval and speech production was performed concurrently. Moreover, a detailed analysis of the reaction time as a function of set size revealed that the increased reaction time was attributed not to the slowing of inspecting each item but to the increased processing time other than the inspection of each visual item, possibly due to task switching between the primary visual search task and the secondary verbal task. These findings have implications for providing information from in-vehicle information devices while reducing the risk of driver distraction.

  14. Statistical methods of SNP data analysis with applications

    CERN Document Server

    Bulinski, Alexander; Shashkin, Alexey; Yaskov, Pavel

    2011-01-01

    Various statistical methods important for genetic analysis are considered and developed. Namely, we concentrate on the multifactor dimensionality reduction, logic regression, random forests and stochastic gradient boosting. These methods and their new modifications, e.g., the MDR method with "independent rule", are used to study the risk of complex diseases such as cardiovascular ones. The roles of certain combinations of single nucleotide polymorphisms and external risk factors are examined. To perform the data analysis concerning the ischemic heart disease and myocardial infarction the supercomputer SKIF "Chebyshev" of the Lomonosov Moscow State University was employed.

  15. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2009-01-01

    An All-in-One Resource for Using SAS and R to Carry out Common Tasks. Provides a path between languages that is easier than reading complete documentation. SAS and R: Data Management, Statistical Analysis, and Graphics presents an easy way to learn how to perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferential procedures, regression analysis, and the creation of graphics, along with more complex applications.

  16. The R software fundamentals of programming and statistical analysis

    CERN Document Server

    Lafaye de Micheaux, Pierre; Liquet, Benoit

    2013-01-01

    The contents of The R Software are presented so as to be both comprehensive and easy for the reader to use. Besides its application as a self-learning text, this book can support lectures on R at any level from beginner to advanced. This book can serve as a textbook on R for beginners as well as more advanced users, working on Windows, macOS or Linux OSes. The first part of the book deals with the heart of the R language and its fundamental concepts, including data organization, import and export, various manipulations, documentation, plots, programming and maintenance. The last chapter in this part deals with object-oriented programming as well as interfacing R with C/C++ or Fortran, and contains a section on debugging techniques. This is followed by the second part of the book, which provides detailed explanations on how to perform many standard statistical analyses, mainly in the Biostatistics field. Topics from mathematical and statistical settings that are included are matrix operations, integration, o...

  17. Statistical Analysis of the Indus Script Using n-Grams

    Science.gov (United States)

    Yadav, Nisha; Joglekar, Hrishikesh; Rao, Rajesh P. N.; Vahia, Mayank N.; Adhikari, Ronojoy; Mahadevan, Iravatham

    2010-01-01

    The Indus script is one of the major undeciphered scripts of the ancient world. The small size of the corpus, the absence of bilingual texts, and the lack of definite knowledge of the underlying language has frustrated efforts at decipherment since the discovery of the remains of the Indus civilization. Building on previous statistical approaches, we apply the tools of statistical language processing, specifically n-gram Markov chains, to analyze the syntax of the Indus script. We find that unigrams follow a Zipf-Mandelbrot distribution. Text beginner and ender distributions are unequal, providing internal evidence for syntax. We see clear evidence of strong bigram correlations and extract significant pairs and triplets using a log-likelihood measure of association. Highly frequent pairs and triplets are not always highly significant. The model performance is evaluated using information-theoretic measures and cross-validation. The model can restore doubtfully read texts with an accuracy of about 75%. We find that a quadrigram Markov chain saturates information theoretic measures against a held-out corpus. Our work forms the basis for the development of a stochastic grammar which may be used to explore the syntax of the Indus script in greater detail. PMID:20333254
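
    The log-likelihood measure of association for a sign pair can be sketched with Dunning's G^2 statistic over a 2x2 contingency table; the counts below are invented, not corpus statistics, and the function name is mine, for illustration.

    ```python
    import math

    def log_likelihood_g2(k11, k12, k21, k22):
        """Dunning's G^2 for a bigram: k11 = n(a b), k12 = n(a, not b), etc."""
        n = k11 + k12 + k21 + k22
        rows, cols = (k11 + k12, k21 + k22), (k11 + k21, k12 + k22)
        observed = (k11, k12, k21, k22)
        expected = (rows[0] * cols[0] / n, rows[0] * cols[1] / n,
                    rows[1] * cols[0] / n, rows[1] * cols[1] / n)
        return 2.0 * sum(k * math.log(k / e) for k, e in zip(observed, expected) if k > 0)

    # Invented counts: the pair occurs 30 times; its signs occur 100 and 150 times
    # overall in a corpus of 10,000 bigram positions
    print(f"G^2 = {log_likelihood_g2(30, 70, 120, 9780):.1f}")  # large => significant pair
    ```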

  18. Statistical analysis of the Indus script using n-grams.

    Directory of Open Access Journals (Sweden)

    Nisha Yadav

    Full Text Available The Indus script is one of the major undeciphered scripts of the ancient world. The small size of the corpus, the absence of bilingual texts, and the lack of definite knowledge of the underlying language has frustrated efforts at decipherment since the discovery of the remains of the Indus civilization. Building on previous statistical approaches, we apply the tools of statistical language processing, specifically n-gram Markov chains, to analyze the syntax of the Indus script. We find that unigrams follow a Zipf-Mandelbrot distribution. Text beginner and ender distributions are unequal, providing internal evidence for syntax. We see clear evidence of strong bigram correlations and extract significant pairs and triplets using a log-likelihood measure of association. Highly frequent pairs and triplets are not always highly significant. The model performance is evaluated using information-theoretic measures and cross-validation. The model can restore doubtfully read texts with an accuracy of about 75%. We find that a quadrigram Markov chain saturates information theoretic measures against a held-out corpus. Our work forms the basis for the development of a stochastic grammar which may be used to explore the syntax of the Indus script in greater detail.

  19. HistFitter: a flexible framework for statistical data analysis

    CERN Document Server

    Besjes, G J; Côté, D; Koutsman, A; Lorenz, J M; Short, D

    2015-01-01

    HistFitter is a software framework for statistical data analysis that has been used extensively in the ATLAS Collaboration to analyze data of proton-proton collisions produced by the Large Hadron Collider at CERN. Most notably, HistFitter has become a de-facto standard in searches for supersymmetric particles since 2012, with some usage for Exotic and Higgs boson physics. HistFitter coherently combines several statistics tools in a programmable and flexible framework that is capable of bookkeeping hundreds of data models under study using thousands of generated input histograms. HistFitter interfaces with the statistics tools HistFactory and RooStats to construct parametric models and to perform statistical tests of the data, and extends these tools in four key areas. The key innovations are to weave the concepts of control, validation and signal regions into the very fabric of HistFitter, and to treat these with rigorous methods. Multiple tools to visualize and interpret the results through a simple configura...

  20. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. Highlights: The paper discusses the validation of creep rupture models derived from statistical analysis. It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).

  1. Detailed behavioral analysis as a window into cross-situational word learning.

    Science.gov (United States)

    Suanda, Sumarga H; Namy, Laura L

    2012-04-01

    Recent research has demonstrated that word learners can determine word-referent mappings by tracking co-occurrences across multiple ambiguous naming events. The current study addresses the mechanisms underlying this capacity to learn words cross-situationally. This replication and extension of Yu and Smith (2007) investigates the factors influencing both successful cross-situational word learning and mis-mappings. Item analysis and error patterns revealed that the co-occurrence structure of the learning environment as well as the context of the testing environment jointly affected learning across observations. Learners also adopted an exclusion strategy, which contributed conjointly with statistical tracking to performance. Implications for our understanding of the processes underlying cross-situational word learning are discussed. Copyright © 2012 Cognitive Science Society, Inc.

  2. The Effects of Statistical Analysis Software and Calculators on Statistics Achievement

    Science.gov (United States)

    Christmann, Edwin P.

    2009-01-01

    This study compared the effects of microcomputer-based statistical software and hand-held calculators on the statistics achievement of university males and females. The subjects, 73 graduate students enrolled in univariate statistics classes at a public comprehensive university, were randomly assigned to groups that used either microcomputer-based…

  3. Technique for direct measurement of thermal conductivity of elastomers and a detailed uncertainty analysis

    Science.gov (United States)

    Ralphs, Matthew I.; Smith, Barton L.; Roberts, Nicholas A.

    2016-11-01

    High thermal conductivity thermal interface materials (TIMs) are needed to extend the life and performance of electronic circuits. A stepped bar apparatus system has been shown to work well for thermal resistance measurements with rigid materials, but most TIMs are elastic. This work studies the uncertainty of using a stepped bar apparatus to measure the thermal resistance and a tensile/compression testing machine to estimate the compressed thickness of polydimethylsiloxane for a measurement of the effective thermal conductivity, k_eff. An a priori, zeroth order analysis is used to estimate the random uncertainty from the instrumentation; a first order analysis is used to estimate the statistical variation in samples; and an a posteriori, Nth order analysis is used to provide an overall uncertainty on k_eff for this measurement method. Bias uncertainty in the thermocouples is found to be the largest single source of uncertainty. The a posteriori uncertainty of the proposed method is 6.5% relative uncertainty (68% confidence), but could be reduced through calibration and correlated biases in the temperature measurements.

  4. Detailed analysis of two particle correlations in central Pb-Au collisions at 158 GeV per nucleon

    Energy Technology Data Exchange (ETDEWEB)

    Antonczyk, D.

    2006-07-01

    This thesis presents a two-particle correlation analysis of the fully calibrated high statistics CERES Pb+Au collision data at the top SPS energy, with the emphasis on the pion-proton correlations and the event-plane dependence of the correlation radii. CERES is a dilepton spectrometer at CERN SPS. After the upgrade, which improved the momentum resolution and extended the detector capabilities to hadrons, CERES collected 30 million Pb+Au events at 158 AGeV in the year 2000. A previous Hanbury-Brown-Twiss (HBT) analysis of pion pairs in a subset of these data, together with the results obtained at other beam energies, led to a new freeze-out criterion [AAA+03]. In this work, the detailed transverse momentum and event-plane dependence of the pion correlation radii, as well as the pion-proton correlations, are discussed in the framework of the blast wave model of the expanding fireball. Furthermore, development of an electron drift velocity gas monitor for the ALICE TPC sub-detector is presented. The new method of gas composition monitoring is based on the simultaneous measurement of the electron drift velocity and the gas gain and is sensitive to even small variations of the gas mixture composition. Several modifications of the apparatus were performed, resulting in a final drift velocity resolution of 0.3 permille. (orig.)

  5. Multivariate statistical analysis a high-dimensional approach

    CERN Document Server

    Serdobolskii, V

    2000-01-01

    In the last few decades the accumulation of large amounts of information in numerous applications has stimulated an increased interest in multivariate analysis. Computer technologies allow one to use multi-dimensional and multi-parametric models successfully. At the same time, an interest arose in statistical analysis with a deficiency of sample data. Nevertheless, it is difficult to describe the recent state of affairs in applied multivariate methods as satisfactory. Unimprovable (dominating) statistical procedures are still unknown except for a few specific cases. The simplest problem of estimating the mean vector with minimum quadratic risk is unsolved, even for normal distributions. Commonly used standard linear multivariate procedures based on the inversion of sample covariance matrices can lead to unstable results or provide no solution, depending on the data. Programs included in standard statistical packages cannot process 'multi-collinear data' and there are no theoretical recommen...

  6. Self-Contained Statistical Analysis of Gene Sets

    Science.gov (United States)

    Cannon, Judy L.; Ricoy, Ulises M.; Johnson, Christopher

    2016-01-01

    Microarrays are a powerful tool for studying differential gene expression. However, lists of many differentially expressed genes are often generated, and unraveling meaningful biological processes from the lists can be challenging. For this reason, investigators have sought to quantify the statistical probability of compiled gene sets rather than individual genes. The gene sets typically are organized around a biological theme or pathway. We compute correlations between different gene set tests and elect to use Fisher’s self-contained method for gene set analysis. We improve Fisher’s differential expression analysis of a gene set by limiting the p-value of an individual gene within the gene set to prevent a small percentage of genes from determining the statistical significance of the entire set. In addition, we also compute dependencies among genes within the set to determine which genes are statistically linked. The method is applied to T-ALL (T-lineage Acute Lymphoblastic Leukemia) to identify differentially expressed gene sets between T-ALL and normal patients and T-ALL and AML (Acute Myeloid Leukemia) patients. PMID:27711232
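
    As a sketch of the capping idea described above (assumptions mine: the floor `p_min` and the p-values are hypothetical), Fisher's self-contained statistic combines per-gene p-values after flooring each one, so that a handful of extreme genes cannot by themselves decide the significance of the whole set.

        import numpy as np
        from scipy import stats

        def fisher_gene_set(pvals, p_min=1e-4):
            """Fisher's combined test for one gene set, with capped p-values."""
            p = np.clip(np.asarray(pvals, dtype=float), p_min, 1.0)
            x2 = -2.0 * np.sum(np.log(p))             # Fisher's statistic
            return stats.chi2.sf(x2, df=2 * p.size)   # chi-square, 2k d.o.f.

        # Without the cap, the single 1e-9 gene would dominate the set score.
        print(fisher_gene_set([0.03, 0.20, 1e-9, 0.45]))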

  7. Agriculture, population growth, and statistical analysis of the radiocarbon record.

    Science.gov (United States)

    Zahid, H Jabran; Robinson, Erick; Kelly, Robert L

    2016-01-26

    The human population has grown significantly since the onset of the Holocene about 12,000 y ago. Despite decades of research, the factors determining prehistoric population growth remain uncertain. Here, we examine measurements of the rate of growth of the prehistoric human population based on statistical analysis of the radiocarbon record. We find that, during most of the Holocene, human populations worldwide grew at a long-term annual rate of 0.04%. Statistical analysis of the radiocarbon record shows that transitioning farming societies experienced the same rate of growth as contemporaneous foraging societies. The same rate of growth measured for populations dwelling in a range of environments and practicing a variety of subsistence strategies suggests that the global climate and/or endogenous biological factors, not adaptability to local environment or subsistence practices, regulated the long-term growth of the human population during most of the Holocene. Our results demonstrate that statistical analyses of large ensembles of radiocarbon dates are robust and valuable for quantitatively investigating the demography of prehistoric human populations worldwide.

  8. Statistical wind analysis for near-space applications

    Science.gov (United States)

    Roney, Jason A.

    2007-09-01

    Statistical wind models were developed based on the existing observational wind data for near-space altitudes between 60 000 and 100 000 ft (18-30 km) above ground level (AGL) at two locations, Akron, OH, USA, and White Sands, NM, USA. These two sites are envisioned as playing a crucial role in the first flights of high-altitude airships. The analysis shown in this paper has not been previously applied to this region of the stratosphere for such an application. Standard statistics were compiled for these data such as mean, median, maximum wind speed, and standard deviation, and the data were modeled with Weibull distributions. These statistics indicated, on a yearly average, there is a lull or a “knee” in the wind between 65 000 and 72 000 ft AGL (20-22 km). From the standard statistics, trends at both locations indicated substantial seasonal variation in the mean wind speed at these heights. The yearly and monthly statistical modeling indicated that Weibull distributions were a reasonable model for the data. Forecasts and hindcasts were done by using a Weibull model based on 2004 data and comparing the model with the 2003 and 2005 data. The 2004 distribution was also a reasonable model for these years. Lastly, the Weibull distribution and cumulative function were used to predict the 50%, 95%, and 99% winds, which are directly related to the expected power requirements of a near-space station-keeping airship. These values indicated that using only the standard deviation of the mean may underestimate the operational conditions.
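
    A hedged illustration of the workflow described (synthetic stand-in data, not the study's observations): fit a Weibull distribution to wind-speed samples and read off the 50%, 95%, and 99% winds that drive station-keeping power estimates.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic hourly wind speeds (m/s) standing in for sounding data.
        speeds = stats.weibull_min.rvs(c=2.0, scale=15.0, size=5000,
                                       random_state=rng)

        c, loc, scale = stats.weibull_min.fit(speeds, floc=0)  # 2-parameter fit
        for q in (0.50, 0.95, 0.99):
            v = stats.weibull_min.ppf(q, c, loc, scale)
            print(f"{q:.0%} wind: {v:.1f} m/s")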

  9. Statistical methods for the detection and analysis of radioactive sources

    Science.gov (United States)

    Klumpp, John

    We consider four topics in the statistical analysis of radioactive sources in the present study: Bayesian methods for the analysis of count rate data, analysis of energy data, a model for non-constant background count rate distributions, and a zero-inflated model of the sample count rate. The study begins with a review of Bayesian statistics and techniques for analyzing count rate data. Next, we consider a novel system for incorporating energy information into count rate measurements which searches for elevated count rates in multiple energy regions simultaneously. The system analyzes time-interval data in real time to sequentially update a probability distribution for the sample count rate. We then consider a "moving target" model of background radiation in which the instantaneous background count rate is a function of time, rather than being fixed. Unlike the sequential update system, this model assumes a large body of pre-existing data which can be analyzed retrospectively. Finally, we propose a novel Bayesian technique which allows for simultaneous source detection and count rate analysis. This technique is fully compatible with, but independent of, the sequential update system and moving target model.
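
    A sketch of the sequential Bayesian count-rate idea (my own minimal version, not the thesis code): with a conjugate Gamma prior on the source rate and Poisson counting, the posterior after observing k counts in live time t is again a Gamma, so the distribution can be updated as each measurement interval arrives.

        from scipy import stats

        alpha, beta = 1.0, 1.0   # Gamma prior on the rate (weakly informative)
        for k, t in [(3, 10.0), (1, 10.0), (7, 10.0)]:   # (counts, seconds)
            alpha += k           # Gamma-Poisson conjugate update
            beta += t
            post = stats.gamma(a=alpha, scale=1.0 / beta)
            print(f"rate = {post.mean():.3f} +/- {post.std():.3f} /s, "
                  f"P(rate > 0.5/s) = {post.sf(0.5):.3f}")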

  10. A Statistical Analysis of Lunisolar-Earthquake Connections

    Science.gov (United States)

    Rüegg, Christian Michael-André

    2012-11-01

    Despite over a century of study, the relationship between lunar cycles and earthquakes remains controversial and difficult to quantitatively investigate. Perhaps as a consequence, major earthquakes around the globe are frequently followed by "prediction claims" based on lunar cycles that generate media furore and pressure scientists to provide resolute answers. The 2010-2011 Canterbury earthquakes in New Zealand were no exception; significant media attention was given to lunar-derived earthquake predictions by non-scientists, even though the predictions were merely "opinions" and were not based on any statistically robust temporal or causal relationships. This thesis provides a framework for studying lunisolar earthquake temporal relationships by developing replicable statistical methodology based on peer reviewed literature. Notable in the methodology is a high accuracy ephemeris, called ECLPSE, designed specifically by the author for use on earthquake catalogs, and a model for performing phase angle analysis.
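
    One classical tool for this kind of lunar phase-angle analysis is Schuster's test; the sketch below (my own illustration, not code from the thesis) estimates the probability that the observed clustering of N event phase angles would arise if events were spread uniformly over the lunar cycle.

        import numpy as np

        def schuster_p(theta):
            """Schuster's test: small p suggests genuine phase clustering."""
            theta = np.asarray(theta)
            r2 = np.sum(np.cos(theta))**2 + np.sum(np.sin(theta))**2
            return np.exp(-r2 / theta.size)   # large-N approximation

        rng = np.random.default_rng(1)
        # Uniform phases (no lunar effect) should give an unremarkable p.
        print(schuster_p(rng.uniform(0.0, 2.0 * np.pi, size=500)))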

  11. Statistical analysis of subjective preferences for video enhancement

    Science.gov (United States)

    Woods, Russell L.; Satgunam, PremNandhini; Bronstad, P. Matthew; Peli, Eli

    2010-02-01

    Measuring preferences for moving video quality is harder than for static images due to the fleeting and variable nature of moving video. Subjective preferences for image quality can be tested by observers indicating their preference for one image over another. Such pairwise comparisons can be analyzed using Thurstone scaling (Farrell, 1999). Thurstone (1927) scaling is widely used in applied psychology, marketing, food tasting and advertising research. Thurstone analysis constructs an arbitrary perceptual scale for the items that are compared (e.g. enhancement levels). However, Thurstone scaling does not determine the statistical significance of the differences between items on that perceptual scale. Recent papers have provided inferential statistical methods that produce an outcome similar to Thurstone scaling (Lipovetsky and Conklin, 2004). Here, we demonstrate that binary logistic regression can analyze preferences for enhanced video.

  12. ATP binding to a multisubunit enzyme: statistical thermodynamics analysis

    CERN Document Server

    Zhang, Yunxin

    2012-01-01

    Due to inter-subunit communication, multisubunit enzymes usually hydrolyze ATP in a concerted fashion. However, so far the principle of this process remains poorly understood. In this study, from the viewpoint of statistical thermodynamics, a simple model is presented. In this model, we assume that the binding of ATP will change the potential of the corresponding enzyme subunit, and the degree of this change depends on the state of its adjacent subunits. The probability of the enzyme being in a given state satisfies the Boltzmann distribution. Although it looks quite simple, this model can fit the recent experimental data of chaperonin TRiC/CCT well. From this model, the dominant state of TRiC/CCT can be obtained. This study provides a new way to understand biophysical processes by statistical thermodynamics analysis.
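
    A toy rendering of the model's core assumption (parameter names and values are mine, not the paper's): each pattern of ATP-occupied subunits on a ring is assigned an energy that depends on its occupied neighbours, and the state probabilities follow the Boltzmann distribution.

        import itertools, math

        def boltzmann_states(n=4, e_bind=-1.0, e_couple=-0.5, kT=1.0):
            """Probability of each occupancy state of an n-subunit ring."""
            states = list(itertools.product([0, 1], repeat=n))
            def energy(s):
                bind = e_bind * sum(s)
                couple = e_couple * sum(s[i] * s[(i + 1) % n] for i in range(n))
                return bind + couple
            w = [math.exp(-energy(s) / kT) for s in states]
            z = sum(w)                        # partition function
            return {s: wi / z for s, wi in zip(states, w)}

        probs = boltzmann_states()
        print(max(probs, key=probs.get))      # dominant occupancy state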

  13. Statistical analysis of effective singular values in matrix rank determination

    Science.gov (United States)

    Konstantinides, Konstantinos; Yao, Kung

    1988-01-01

    A major problem in using SVD (singular-value decomposition) as a tool in determining the effective rank of a perturbed matrix is that of distinguishing between significantly small and significantly large singular values. To this end, confidence regions are derived for the perturbed singular values of matrices with noisy observation data. The analysis is based on the theories of perturbations of singular values and statistical significance testing. Threshold bounds for perturbation due to finite-precision and i.i.d. random models are evaluated. In random models, the threshold bounds depend on the dimension of the matrix, the noise variance, and a predefined statistical level of significance. Results applied to the problem of determining the effective order of a linear autoregressive system from the approximate rank of a sample autocorrelation matrix are considered. Various numerical examples illustrating the usefulness of these bounds and comparisons to other previously known approaches are given.
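
    A minimal sketch of threshold-based effective-rank determination in this spirit (the threshold used here is a generic random-matrix bound with an arbitrary safety margin, standing in for the paper's confidence regions):

        import numpy as np

        def effective_rank(A, sigma_noise):
            """Count singular values above a noise-level threshold."""
            m, n = A.shape
            s = np.linalg.svd(A, compute_uv=False)
            # Generic bound on the spectral norm of m x n Gaussian noise,
            # padded by 10%; an illustrative stand-in, not the paper's bound.
            tau = 1.1 * sigma_noise * (np.sqrt(m) + np.sqrt(n))
            return int(np.sum(s > tau))

        rng = np.random.default_rng(2)
        signal = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 40))  # rank 3
        noisy = signal + 0.05 * rng.normal(size=(50, 40))
        print(effective_rank(noisy, sigma_noise=0.05))  # typically prints 3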

  14. Statistical methods for data analysis in particle physics

    CERN Document Server

    Lista, Luca

    2015-01-01

    This concise set of course-based notes provides the reader with the main concepts and tools to perform statistical analysis of experimental data, in particular in the field of high-energy physics (HEP). First, an introduction to probability theory and basic statistics is given, mainly as a reminder from advanced undergraduate studies, yet also with a view to clearly distinguishing the frequentist versus Bayesian approaches and interpretations in subsequent applications. More advanced concepts and applications are gradually introduced, culminating in the chapter on upper limits, as many applications in HEP concern hypothesis testing, where often the main goal is to provide better and better limits so as to be able to distinguish eventually between competing hypotheses or to rule out some of them altogether. Many worked examples will help newcomers to the field and graduate students to understand the pitfalls in applying theoretical concepts to actual data

  15. [Statistical analysis of DNA sequences nearby splicing sites].

    Science.gov (United States)

    Korzinov, O M; Astakhova, T V; Vlasov, P K; Roĭtberg, M A

    2008-01-01

    Recognition of coding regions within eukaryotic genomes is one of the oldest but still unsolved problems of bioinformatics. New high-accuracy methods of splicing site recognition are needed to solve this problem. A question of current interest is to identify specific features of nucleotide sequences near splicing sites and to recognize sites in sequence context. We performed a statistical analysis of a database of human gene fragments and revealed some characteristics of nucleotide sequences in the neighborhood of splicing sites. Frequencies of all nucleotides and dinucleotides in the splicing site environment were computed, and nucleotides and dinucleotides with extremely high/low occurrences were identified. The statistical information obtained in this work can be used in the further development of methods for splicing site annotation and exon-intron structure recognition.
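
    The frequency counting itself is straightforward; an illustrative sketch (window length, alignment and sequences are hypothetical, not the paper's database):

        from collections import Counter
        from itertools import product

        def dinucleotide_freqs(windows):
            """Relative frequency of every dinucleotide in a set of windows."""
            counts, total = Counter(), 0
            for w in windows:
                for i in range(len(w) - 1):
                    counts[w[i:i + 2]] += 1
                    total += 1
            return {"".join(d): counts["".join(d)] / total
                    for d in product("ACGT", repeat=2)}

        # Hypothetical 10-nt windows centred on donor sites (GT at 5-6).
        windows = ["AAGGTAAGTC", "CAGGTGAGTA", "AAGGTAAGAA"]
        freqs = dinucleotide_freqs(windows)
        print(sorted(freqs.items(), key=lambda kv: -kv[1])[:4])

    Comparing such frequencies between site-proximal windows and background sequence is what flags the over- and under-represented patterns the abstract refers to.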

  16. The NIRS Analysis Package: noise reduction and statistical inference.

    Science.gov (United States)

    Fekete, Tomer; Rubin, Denis; Carlson, Joshua M; Mujica-Parodi, Lilianne R

    2011-01-01

    Near infrared spectroscopy (NIRS) is a non-invasive optical imaging technique that can be used to measure cortical hemodynamic responses to specific stimuli or tasks. While analyses of NIRS data are normally adapted from established fMRI techniques, there are nevertheless substantial differences between the two modalities. Here, we investigate the impact of NIRS-specific noise; e.g., systemic (physiological), motion-related artifacts, and serial autocorrelations, upon the validity of statistical inference within the framework of the general linear model. We present a comprehensive framework for noise reduction and statistical inference, which is custom-tailored to the noise characteristics of NIRS. These methods have been implemented in a public domain Matlab toolbox, the NIRS Analysis Package (NAP). Finally, we validate NAP using both simulated and actual data, showing marked improvement in the detection power and reliability of NIRS.

  17. On Statistical Analysis of Neuroimages with Imperfect Registration

    Science.gov (United States)

    Kim, Won Hwa; Ravi, Sathya N.; Johnson, Sterling C.; Okonkwo, Ozioma C.; Singh, Vikas

    2016-01-01

    A variety of studies in neuroscience/neuroimaging seek to perform statistical inference on the acquired brain image scans for diagnosis as well as understanding the pathological manifestation of diseases. To do so, an important first step is to register (or co-register) all of the image data into a common coordinate system. This permits meaningful comparison of the intensities at each voxel across groups (e.g., diseased versus healthy) to evaluate the effects of the disease and/or use machine learning algorithms in a subsequent step. But errors in the underlying registration make this problematic: they either decrease the statistical power or make the follow-up inference tasks less effective/accurate. In this paper, we derive a novel algorithm which offers immunity to local errors in the underlying deformation field obtained from registration procedures. By deriving a deformation invariant representation of the image, the downstream analysis can be made more robust as if one had access to a (hypothetical) far superior registration procedure. Our algorithm is based on recent work on the scattering transform. Using this as a starting point, we show how results from harmonic analysis (especially, non-Euclidean wavelets) yield strategies for designing deformation and additive noise invariant representations of large 3-D brain image volumes. We present a set of results on synthetic and real brain images where we achieve robust statistical analysis even in the presence of substantial deformation errors; here, standard analysis procedures significantly under-perform and fail to identify the true signal. PMID:27042168

  18. Detailed Performance Analysis of the 10-Kw CNRS-Promes Dish/Stirling System

    Energy Technology Data Exchange (ETDEWEB)

    Reinalter, W.; Ulmer, S.; Heller, P.; Rauch, T.; Gineste, J. M.; Ferriere, A.; Nepveu, F.

    2006-07-01

    The CNRS-Promes dish/Stirling system was erected in June 2004 as the last of three country reference units built in the Envirodish project, partly financed by the German ministry of environment. It represents the latest development step of the EuroDish system with many improved components. With a measured peak of 11 kW electrical output power it is also the best performing system so far. The measurement campaign to determine the optical and thermodynamic efficiency of the system is presented. The optical quality of the concentrator and the energy input to the power conversion unit was measured with a classical flux-mapping system using a Lambertian target and a CCD camera system. For the thermodynamic analysis all the data necessary for a complete energy balance around the Stirling engine, i.e. efficiency of the Stirling motor, the cavity and the receiver as well as the parasitic losses were measured or approximated by calculations. Such a detailed performance analysis helps to quantify all significant losses of the system and to identify the most rewarding future improvements. (Author)

  19. STATISTICAL ANALYSIS OF TANK 19F FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Harris, S.

    2010-09-02

    Representative sampling has been completed for characterization of the residual material on the floor of Tank 19F as per the statistical sampling plan developed by Harris and Shine. Samples from eight locations have been obtained from the tank floor and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, have been submitted to and analyzed by SRNL. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible. Therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current scrape sample results was not found. Constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor. The current scrape sample results from all six samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data were pooled from all six current scrape samples. As such, an adequate sample size was provided for quantification of the residual material on the floor of Tank 19F. The uncertainty is quantified in this report by an UCL95% on each analyte concentration. The uncertainty in analyte concentration was calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
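
    For approximately normal analytical results, the UCL95% described is the classic one-sided Student-t limit on the mean; a sketch with made-up numbers (not Tank 19F data):

        import numpy as np
        from scipy import stats

        results = np.array([12.1, 10.8, 13.5, 11.9, 12.7, 11.2])  # hypothetical
        n = results.size
        ucl95 = (results.mean()
                 + stats.t.ppf(0.95, df=n - 1) * results.std(ddof=1) / np.sqrt(n))
        print(f"mean = {results.mean():.2f}, UCL95% = {ucl95:.2f}")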

  20. Detailed exploration of the endothelium: parameterization of flow-mediated dilation through principal component analysis.

    Science.gov (United States)

    Laclaustra, Martin; Frangi, Alejandro F; Garcia, Daniel; Boisrobert, Loïc; Frangi, Andres G; Pascual, Isaac

    2007-03-01

    Endothelial dysfunction is associated with cardiovascular diseases and their risk factors (CVRF), and flow-mediated dilation (FMD) is increasingly used to explore it. In this test, artery diameter changes after post-ischaemic hyperaemia are classically quantified using maximum peak vasodilation (FMDc). To obtain more detailed descriptors of FMD, we applied principal component analysis (PCA) to diameter-time curves (absolute), vasodilation-time curves (relative) and blood-velocity-time curves. Furthermore, combined PCA of vessel size and blood-velocity curves allowed exploring links between flow and dilation. Vessel diameter data for PCA (post-ischaemic: 140 s) were acquired from brachial ultrasound image sequences of 173 healthy male subjects using a computerized technique previously reported by our team based on image registration (Frangi et al 2003 IEEE Trans. Med. Imaging 22 1458). PCA provides a set of axes (called eigenmodes) that captures the underlying variation present in a database of waveforms so that the first few eigenmodes retain most of the variation. These eigenmodes can be used to synthesize each waveform analysed by means of only a few parameters, as well as potentially any signal of the same type derived from tests of new patients. The eigenmodes obtained seemed related to visual features of the waveform of the FMD process. Subsequently, we used eigenmodes to parameterize our data. Most of the main parameters (13 out of 15) correlated with FMDc. Furthermore, not all parameters correlated with the same CVRF tested; for serum lipids, for example, high LDL-c was associated with a slow vessel return to baseline, while low HDL-c was associated with lower vasodilation in response to a similar velocity stimulus. This suggests that the parameterization allows a more detailed and factored description of the process than FMDc.
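
    A schematic reconstruction of the parameterization step (synthetic curves; not the authors' pipeline or data): stack each subject's post-ischaemic diameter-time curve as a row, run PCA, and use the leading component scores as compact FMD descriptors alongside classical FMDc.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 140.0, 141)          # post-ischaemic seconds
        # Synthetic diameter curves (mm): subject-varying peak and decay.
        amp = rng.uniform(0.1, 0.4, size=60)
        tau = rng.uniform(30.0, 80.0, size=60)
        curves = 4.0 + amp[:, None] * (t / 20.0) * np.exp(-t[None, :] / tau[:, None])

        pca = PCA(n_components=3)
        scores = pca.fit_transform(curves)        # per-subject FMD parameters
        print(pca.explained_variance_ratio_)      # cf. 78.6% in two components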

  1. Forensic discrimination of dyed hair color: II. Multivariate statistical analysis.

    Science.gov (United States)

    Barrett, Julie A; Siegel, Jay A; Goodpaster, John V

    2011-01-01

    This research is intended to assess the ability of UV-visible microspectrophotometry to successfully discriminate the color of dyed hair. Fifty-five red hair dyes were analyzed and evaluated using multivariate statistical techniques including agglomerative hierarchical clustering (AHC), principal component analysis (PCA), and discriminant analysis (DA). The spectra were grouped into three classes, which were visually consistent with different shades of red. A two-dimensional PCA observations plot was constructed, describing 78.6% of the overall variance. The wavelength regions associated with the absorbance of hair and dye were highly correlated. Principal components were selected to represent 95% of the overall variance for analysis with DA. A classification accuracy of 89% was observed for the comprehensive dye set, while external validation using 20 of the dyes resulted in a prediction accuracy of 75%. Significant color loss from successive washing of hair samples was estimated to occur within 3 weeks of dye application.

  2. Detailed analysis of complex single molecule FRET data with the software MASH

    Science.gov (United States)

    Hadzic, Mélodie C. A. S.; Kowerko, Danny; Börner, Richard; Zelger-Paulus, Susann; Sigel, Roland K. O.

    2016-04-01

    The processing and analysis of surface-immobilized single molecule FRET (Förster resonance energy transfer) data follows systematic steps (e.g. single molecule localization, clearance of different sources of noise, selection of the conformational and kinetic model, etc.) that require solid knowledge of optics, photophysics, signal processing and statistics. The present proceeding aims at standardizing and facilitating procedures for single molecule detection by guiding the reader through an optimization protocol for a particular experimental data set. Relevant features were determined from synthetically recreated single molecule movies (SMM) of Cy3- and Cy5-labeled Sc.ai5γ group II intron molecules, used to test the performance of four different detection algorithms. Up to 120 different parameterizations per method were routinely evaluated to finally establish an optimum detection procedure. The present protocol is adaptable to any movie displaying surface-immobilized molecules, and can be easily reproduced with our home-written software MASH (multifunctional analysis software for heterogeneous data) and script routines (both available in the download section of www.chem.uzh.ch/rna).

  3. A detailed analysis of the productivity of solar home system in an Amazonian environment

    Energy Technology Data Exchange (ETDEWEB)

    Linguet, L. [Research Group on Renewable Energies (GRER), University of the French Antilles and French Guiana' s, Campus Saint-Denis, Avenue d' Estrees, 97337 Cayenne Cedex (France); Hidair, I. [University of the French Antilles and French Guiana' s, Campus Saint-Denis, Avenue d' Estrees, 97337 Cayenne Cedex (France)

    2010-02-15

    This paper discusses and analyses the productivity of solar home systems in isolated areas in French Guiana, a region characterized by specific human and environmental conditions. Its aim is a better understanding of the attitudes, expectations, and relationship of the users towards the solar home system. The data collected made it possible to make suggestions for adapting the photovoltaic systems to their environment by taking into account social, cultural, and geoclimatic specificities. Analysis of on-site productivity provides valuable information on energy profiles and types of use. Field surveys made it possible to associate users' perception of the energy production equipment and their degree of satisfaction with operating efficiency and on-site maintenance. This aspect is essential for analyzing the actual rate of use of the energy that is theoretically available. Parallel to these surveys, the results of the study carried out on the performance of the solar home systems made it possible to learn the quantitative aspects of the energy produced and consumed as well as the qualitative aspects of the parameters that condition the performance of the photovoltaic systems. After data entry, the subjective, qualitative as well as the quantitative variables were processed using a statistical analysis program in order to determine the correlations between them and to prepare the final conclusions. (author)

  4. Sensitivity analysis and optimization of system dynamics models : Regression analysis and statistical design of experiments

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1995-01-01

    This tutorial discusses what-if analysis and optimization of System Dynamics models. These problems are solved, using the statistical techniques of regression analysis and design of experiments (DOE). These issues are illustrated by applying the statistical techniques to a System Dynamics model for

  5. Consolidity analysis for fully fuzzy functions, matrices, probability and statistics

    Directory of Open Access Journals (Sweden)

    Walaa Ibrahim Gabr

    2015-03-01

    Full Text Available The paper presents a comprehensive review of the know-how for developing systems consolidity theory for modeling, analysis, optimization and design in a fully fuzzy environment. The development of systems consolidity theory included its extension to handle new functions of different dimensionalities, fuzzy analytic geometry, fuzzy vector analysis, functions of fuzzy complex variables, ordinary differentiation of fuzzy functions and partial fractions of fuzzy polynomials. On the other hand, the handling of fuzzy matrices covered determinants of fuzzy matrices, the eigenvalues of fuzzy matrices, and solving least-squares fuzzy linear equations. The approach was also shown to be applicable, in a systematic way, to new fuzzy probabilistic and statistical problems. This included extending conventional probabilistic and statistical analysis to handle fuzzy random data. Applications also covered the consolidity of fuzzy optimization problems. The various numerical examples solved demonstrate that the new consolidity concept is highly effective in capturing, in compact form, the propagation of fuzziness in linear, nonlinear, multivariable and dynamic problems with different types of complexities. Finally, it is demonstrated that the suggested fuzzy mathematics can easily be embedded within normal mathematics by building a special fuzzy functions library inside the computational Matlab Toolbox or using other similar software languages.

  6. GIS-BASED SPATIAL STATISTICAL ANALYSIS OF COLLEGE GRADUATES EMPLOYMENT

    Directory of Open Access Journals (Sweden)

    R. Tang

    2012-07-01

    Full Text Available It is urgently necessary to be aware of the distribution and employment status of college graduates for the proper allocation of human resources and the overall arrangement of strategic industry. This study provides empirical evidence regarding the use of geocoding and spatial analysis in the distribution and employment status of college graduates based on data from the 2004–2008 Wuhan Municipal Human Resources and Social Security Bureau, China. The spatio-temporal distribution of employment units was analyzed with geocoding using ArcGIS software, and the stepwise multiple linear regression method via SPSS software was used to predict employment and to identify spatially associated enterprise and professional demand in the future. The results show that the enterprises in the Wuhan east lake high and new technology development zone increased dramatically from 2004 to 2008, and tended to be distributed southeastward. Furthermore, the models built by statistical analysis suggest that the specialty graduates major in has an important impact on employment numbers and on the number of graduates engaging in pillar industries. In conclusion, the combination of GIS and statistical analysis, which helps to simulate the spatial distribution of employment status, is a potential tool for human resource development research.

  8. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    Science.gov (United States)

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs.

  9. Statistical design and analysis of RNA sequencing data.

    Science.gov (United States)

    Auer, Paul L; Doerge, R W

    2010-06-01

    Next-generation sequencing technologies are quickly becoming the preferred approach for characterizing and quantifying entire genomes. Even though data produced from these technologies are proving to be the most informative of any thus far, very little attention has been paid to fundamental design aspects of data collection and analysis, namely sampling, randomization, replication, and blocking. We discuss these concepts in an RNA sequencing framework. Using simulations we demonstrate the benefits of collecting replicated RNA sequencing data according to well known statistical designs that partition the sources of biological and technical variation. Examples of these designs and their corresponding models are presented with the goal of testing differential expression.

  10. Statistical Analysis of Designed Experiments Theory and Applications

    CERN Document Server

    Tamhane, Ajit C

    2012-01-01

    An indispensable guide to understanding and designing modern experiments. The tools and techniques of Design of Experiments (DOE) allow researchers to successfully collect, analyze, and interpret data across a wide array of disciplines. Statistical Analysis of Designed Experiments provides a modern and balanced treatment of DOE methodology with thorough coverage of the underlying theory and standard designs of experiments, guiding the reader through applications to research in various fields such as engineering, medicine, business, and the social sciences. The book supplies a foundation for the

  11. Statistical energy analysis of complex structures, phase 2

    Science.gov (United States)

    Trudell, R. W.; Yano, L. I.

    1980-01-01

    A method for estimating the structural vibration properties of complex systems in high frequency environments was investigated. The structure analyzed was the Materials Experiment Assembly (MEA), which is a portion of the OST-2A payload for the space transportation system. Statistical energy analysis (SEA) techniques were used to model the structure and predict the structural element response to acoustic excitation. A comparison of the initial response predictions and measured acoustic test data is presented. The conclusions indicate that: the SEA predicted the response of primary structure to acoustic excitation over a wide range of frequencies; and the contribution of mechanically induced random vibration to the total MEA response is not significant.

  12. SAS and R data management, statistical analysis, and graphics

    CERN Document Server

    Kleinman, Ken

    2014-01-01

    An Up-to-Date, All-in-One Resource for Using SAS and R to Perform Frequent Tasks. The first edition of this popular guide provided a path between SAS and R using an easy-to-understand, dictionary-like approach. Retaining the same accessible format, SAS and R: Data Management, Statistical Analysis, and Graphics, Second Edition explains how to easily perform an analytical task in both SAS and R, without having to navigate through the extensive, idiosyncratic, and sometimes unwieldy software documentation. The book covers many common tasks, such as data management, descriptive summaries, inferentia

  13. Multi-scale statistical analysis of coronal solar activity

    Science.gov (United States)

    Gamborino, Diana; del-Castillo-Negrete, Diego; Martinell, Julio J.

    2016-07-01

    Multi-filter images from the solar corona are used to obtain temperature maps that are analyzed using techniques based on proper orthogonal decomposition (POD) in order to extract dynamical and structural information at various scales. Exploring active regions before and after a solar flare and comparing them with quiet regions, we show that the multi-scale behavior presents distinct statistical properties for each case that can be used to characterize the level of activity in a region. Information about the nature of heat transport can also be extracted from the analysis.

  14. Spatial Analysis Along Networks Statistical and Computational Methods

    CERN Document Server

    Okabe, Atsuyuki

    2012-01-01

    In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process

  15. Feature statistic analysis of ultrasound images of liver cancer

    Science.gov (United States)

    Huang, Shuqin; Ding, Mingyue; Zhang, Songgeng

    2007-12-01

    In this paper, a feature analysis of liver ultrasound images, covering normal liver, liver cancer (especially hepatocellular carcinoma, HCC) and other hepatopathies, is discussed. According to the classification of hepatocellular carcinoma (HCC), primary carcinoma is divided into four types. Fifteen features from first-order gray-level statistics, the gray-level co-occurrence matrix (GLCM), and the gray-level run-length matrix (GLRLM) are extracted. Experiments for the discrimination of each type of HCC, normal liver, fatty liver, angioma and hepatic abscess have been conducted. Features that can potentially discriminate between them are identified.
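
    A hedged sketch of two of the feature families mentioned, first-order gray-level statistics and GLCM texture features, using scikit-image (whose functions are named graycomatrix/graycoprops in recent releases); the region of interest is random stand-in data, and run-length (GLRLM) features would need a separate implementation.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(4)
        roi = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)  # stand-in ROI

        first_order = {"mean": roi.mean(), "std": roi.std(),
                       "skewness": ((roi - roi.mean())**3).mean() / roi.std()**3}

        glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                            levels=64, symmetric=True, normed=True)
        texture = {p: graycoprops(glcm, p).mean()
                   for p in ("contrast", "homogeneity", "energy", "correlation")}
        print(first_order, texture)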

  16. STATISTIC ANALYSIS OF INTERNATIONAL TOURISM ON ROMANIAN SEASIDE

    Directory of Open Access Journals (Sweden)

    MIRELA SECARĂ

    2010-01-01

    Full Text Available In order to meet European and international standards of touristic competition, the modernization, re-establishment and development of Romanian tourism are necessary, as well as the creation of modern touristic products that are competitive on this market. The use of modern methods of statistical analysis in the field of tourism facilitates the creation of information systems that serve as instruments for: the evaluation of touristic demand and supply, the follow-up of touristic services for each form of tourism, the follow-up of transportation services, leisure activities and hotel accommodation, touristic market studies, and a complex, flexible system of management and accountancy.

  17. Statistics for proteomics: experimental design and 2-DE differential analysis.

    Science.gov (United States)

    Chich, Jean-François; David, Olivier; Villers, Fanny; Schaeffer, Brigitte; Lutomski, Didier; Huet, Sylvie

    2007-04-15

    Proteomics relies on the separation of complex protein mixtures using two-dimensional electrophoresis. This approach is largely used to detect expression variations of proteins prepared from two or more samples. Recently, attention was drawn to the reliability of the results published in the literature. Among the critical points identified were experimental design, differential analysis and the problem of missing data, all problems where statistics can be of help. Using examples and terms understandable by biologists, we describe how a collaboration between biologists and statisticians can improve the reliability of results and confidence in conclusions.

  18. A Probabilistic Rain Diagnostic Model Based on Cyclone Statistical Analysis

    OpenAIRE

    Iordanidou, V.; A. G. Koutroulis; I. K. Tsanis

    2014-01-01

    Data from a dense network of 69 daily precipitation gauges over the island of Crete and a cyclone climatological analysis over the middle-eastern Mediterranean are combined in a statistical approach to develop a rain diagnostic model. Regarding the dataset, 0.5° × 0.5°, 33-year (1979–2011) European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis (ERA-Interim) is used. The cyclone tracks and their characteristics are identified with the aid of the Melbourne University algorithm (MS scheme). T...

  19. Research and Development on Food Nutrition Statistical Analysis Software System

    Directory of Open Access Journals (Sweden)

    Du Li

    2013-12-01

    Full Text Available Designing and developing food nutrition statistical analysis software can automate nutrition calculations, improve nutrition professionals' working efficiency and support the informatization of nutrition publicity and education. In the software development process, software engineering methods and database technology are used to calculate the human daily nutritional intake, and an intelligent system is used to evaluate the user's health condition. Experiments show that the system can correctly evaluate the human health condition and offer reasonable suggestions, thus exploring a new way to solve complex nutrition computation problems with information engineering.

  20. Statistical analysis of bound companions in the Coma cluster

    Science.gov (United States)

    Mendelin, Martin; Binggeli, Bruno

    2017-08-01

    Aims: The rich and nearby Coma cluster of galaxies is known to have substructure. We aim to create a more detailed picture of this substructure by searching directly for bound companions around individual giant members. Methods: We have used two catalogs of Coma galaxies, one covering the cluster core for a detailed morphological analysis, another covering the outskirts. The separation limit between possible companions (secondaries) and giants (primaries) is chosen as MB = -19 and MR = -20, respectively for the two catalogs. We have created pseudo-clusters by shuffling positions or velocities of the primaries and searched for significant over-densities of possible companions around giants by comparison with the data. This method was developed and applied first to the Virgo cluster. In a second approach we introduced a modified nearest neighbor analysis using several interaction parameters for all galaxies. Results: We find evidence for some excesses due to possible companions for both catalogs. Satellites are typically found among the faintest dwarfs and around late-type giants (spirals) in the outskirts, which is expected in an infall scenario of cluster evolution. A rough estimate for an upper limit of bound galaxies within Coma is 2-4%, to be compared with 7% for Virgo. Conclusions: The results agree well with the expected low frequency of bound companions in a regular cluster such as Coma. To exploit the data more fully and reach more detailed insights into the physics of cluster evolution we suggest applying the method also to model clusters created by N-body simulations for comparison.

  1. A probabilistic analysis of wind gusts using extreme value statistics

    Energy Technology Data Exchange (ETDEWEB)

    Friederichs, Petra; Bentzien, Sabrina; Lenz, Anne; Krampitz, Rebekka [Meteorological Inst., Univ. of Bonn (Germany); Goeber, Martin [Deutscher Wetterdienst, Offenbach (Germany)

    2009-12-15

    The spatial variability of wind gusts is probably as large as that of precipitation, but the observational weather station network is much less dense. The lack of an area-wide observational analysis hampers the forecast verification of wind gust warnings. This article develops and compares several approaches to derive a probabilistic analysis of wind gusts for Germany. Such an analysis provides the probability that a wind gust exceeds a certain warning level. To that end we have 5 years of observations of hourly wind maxima at about 140 weather stations of the German weather service at our disposal. The approaches are based on linear statistical modeling using generalized linear models, extreme value theory and quantile regression. Warning level exceedance probabilities are estimated in response to predictor variables such as the observed mean wind or the operational analysis of the wind velocity at a height of 10 m above ground provided by the European Centre for Medium Range Weather Forecasts (ECMWF). The study shows that approaches applied to the differences between the recorded wind gust and the mean wind perform better in terms of the Brier skill score (which measures the quality of a probability forecast) than those using the gust factor or the wind gusts only. The study points to the benefit of using extreme value theory as the most appropriate and theoretically consistent statistical model. The most informative predictors are the observed mean wind, but also the gust velocities recorded at the neighboring stations. Of the predictors taken from the ECMWF analysis, the wind velocity at 10 m above ground is the most informative, whereas the wind shear and the vertical velocity provide no additional skill. For illustration, the results for January 2007 and during the winter storm Kyrill are shown. (orig.)
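
    A simplified stand-in for the extreme-value component of such an analysis (synthetic data, arbitrary threshold): fit a generalized extreme value distribution to hourly gust maxima at one station and estimate the warning-level exceedance probability.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        # Synthetic hourly gust maxima (m/s) standing in for station records.
        gusts = stats.genextreme.rvs(c=-0.1, loc=12.0, scale=3.0, size=2000,
                                     random_state=rng)

        shape, loc, scale = stats.genextreme.fit(gusts)
        level = 25.0                          # hypothetical warning level (m/s)
        p_exceed = stats.genextreme.sf(level, shape, loc, scale)
        print(f"P(gust > {level} m/s) = {p_exceed:.4f}")

    In the article's setting this exceedance probability would additionally be conditioned on predictors such as the observed mean wind; the unconditional fit above only illustrates the distributional machinery.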

  2. Segmentation, statistical analysis, and modelling of the wall system in ceramic foams

    Energy Technology Data Exchange (ETDEWEB)

    Kampf, Jürgen, E-mail: juergen.kampf@uni-ulm.de [University of Ulm, Mathematics Department, 89069 Ulm (Germany); Schlachter, Anna-Lena [University of Kaiserslautern, Mathematics Department, 67653 Kaiserslautern (Germany); Redenbach, Claudia, E-mail: redenbach@mathematik.uni-kl.de [University of Kaiserslautern, Mathematics Department, 67653 Kaiserslautern (Germany); Liebscher, André, E-mail: liebscher@mathematik.uni-kl.de [University of Kaiserslautern, Mathematics Department, 67653 Kaiserslautern (Germany)

    2015-01-15

    Closed walls in otherwise open foam structures may have a great impact on macroscopic properties of the materials. In this paper, we present two algorithms for the segmentation of such closed walls from micro-computed tomography images of the foam structure. The techniques are compared on simulated data and applied to tomographic images of ceramic filters. This allows for a detailed statistical analysis of the normal directions and sizes of the walls. Finally, we explain how the information derived from the segmented wall system can be included in a stochastic microstructure model for the foam.

  3. Statistical analysis of personal radiofrequency electromagnetic field measurements with nondetects.

    Science.gov (United States)

    Röösli, Martin; Frei, Patrizia; Mohler, Evelyn; Braun-Fahrländer, Charlotte; Bürgi, Alfred; Fröhlich, Jürg; Neubauer, Georg; Theis, Gaston; Egger, Matthias

    2008-09-01

    Exposimeters are increasingly applied in bioelectromagnetic research to determine personal radiofrequency electromagnetic field (RF-EMF) exposure. The main advantages of exposimeter measurements are their convenient handling for study participants and the large amount of personal exposure data, which can be obtained for several RF-EMF sources. However, the large proportion of measurements below the detection limit is a challenge for data analysis. With the robust ROS (regression on order statistics) method, summary statistics can be calculated by fitting an assumed distribution to the observed data. We used a preliminary sample of 109 weekly exposimeter measurements from the QUALIFEX study to compare summary statistics computed by robust ROS with a naïve approach, where values below the detection limit were replaced by the value of the detection limit. For the total RF-EMF exposure, differences between the naïve approach and the robust ROS were moderate for the 90th percentile and the arithmetic mean. However, exposure contributions from minor RF-EMF sources were considerably overestimated with the naïve approach. This results in an underestimation of the exposure range in the population, which may bias the evaluation of potential exposure-response associations. We conclude from our analyses that summary statistics of exposimeter data calculated by robust ROS are more reliable and more informative than estimates based on a naïve approach. Nevertheless, estimates of source-specific medians or even lower percentiles depend on the assumed data distribution and should be considered with caution. Copyright 2008 Wiley-Liss, Inc.
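
    A much-simplified sketch of the ROS idea for a single detection limit (Helsel-style robust ROS handles multiple limits and refined plotting positions; the numbers below are invented): regress the log of the detected values on normal quantiles, use the fit to impute the nondetects, then compute summary statistics on the combined data.

        import numpy as np
        from scipy import stats

        detects = np.array([0.8, 1.3, 0.5, 2.1, 0.9, 3.4])  # e.g. V/m
        n_nd, dl = 6, 0.4           # six nondetects below a 0.4 detection limit

        n = detects.size + n_nd
        pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)  # Blom plotting positions
        q = stats.norm.ppf(pp)                           # normal quantiles

        # Assume lognormal data; nondetects occupy the lowest n_nd ranks here.
        log_sorted = np.sort(np.log(detects))
        slope, intercept, *_ = stats.linregress(q[n_nd:], log_sorted)
        imputed = np.exp(intercept + slope * q[:n_nd])   # modeled nondetects
        full = np.concatenate([imputed, np.exp(log_sorted)])
        print(full.mean(), np.percentile(full, 90))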

  4. CFD Analysis and Design of Detailed Target Configurations for an Accelerator-Driven Subcritical System

    Energy Technology Data Exchange (ETDEWEB)

    Kraus, Adam; Merzari, Elia; Sofu, Tanju; Zhong, Zhaopeng; Gohar, Yousry

    2016-08-01

    High-fidelity analysis has been utilized in the design of beam target options for an accelerator driven subcritical system. Designs featuring stacks of plates with square cross section have been investigated for both tungsten and uranium target materials. The presented work includes the first thermal-hydraulic simulations of the full, detailed target geometry. The innovative target cooling manifold design features many regions with complex flow features, including 90° bends and merging jets, which necessitate three-dimensional fluid simulations. These were performed using the commercial computational fluid dynamics code STAR-CCM+. Conjugate heat transfer was modeled between the plates, cladding, manifold structure, and fluid. Steady-state simulations were performed but lacked good residual convergence. Unsteady simulations were then performed, which converged well and demonstrated that flow instability existed in the lower portion of the manifold. It was established that the flow instability had little effect on the peak plate temperatures, which were well below the melting point. The estimated plate surface temperatures and target region pressure were shown to provide sufficient margin to subcooled boiling for standard operating conditions. This demonstrated the safety of both potential target configurations during normal operation.

  5. A Detailed Model Atmosphere Analysis of Cool White Dwarfs in the Sloan Digital Sky Survey

    CERN Document Server

    Kilic, Mukremin; Tremblay, P -E; von Hippel, Ted; Bergeron, P; Harris, Hugh C; Munn, Jeffrey A; Williams, Kurtis A; Gates, Evalyn; Farihi, J

    2010-01-01

    We present optical spectroscopy and near-infrared photometry of 126 cool white dwarfs in the Sloan Digital Sky Survey (SDSS). Our sample includes high proper motion targets selected using the SDSS and USNO-B astrometry and a dozen previously known ultracool white dwarf candidates. Our optical spectroscopic observations demonstrate that a clean selection of large samples of cool white dwarfs in the SDSS (and the SkyMapper, Pan-STARRS, and the Large Synoptic Survey Telescope datasets) is possible using a reduced proper motion diagram and a tangential velocity cut-off (depending on the proper motion accuracy) of 30 km/s. Our near-infrared observations reveal eight new stars with significant absorption. We use the optical and near-infrared photometry to perform a detailed model atmosphere analysis. More than 80% of the stars in our sample are consistent with either pure hydrogen or pure helium atmospheres. However, the eight stars with significant infrared absorption and the majority of the previously known ultra...

  6. Cardiometabolic risk in Canada: a detailed analysis and position paper by the cardiometabolic risk working group.

    Science.gov (United States)

    Leiter, Lawrence A; Fitchett, David H; Gilbert, Richard E; Gupta, Milan; Mancini, G B John; McFarlane, Philip A; Ross, Robert; Teoh, Hwee; Verma, Subodh; Anand, Sonia; Camelon, Kathryn; Chow, Chi-Ming; Cox, Jafna L; Després, Jean-Pierre; Genest, Jacques; Harris, Stewart B; Lau, David C W; Lewanczuk, Richard; Liu, Peter P; Lonn, Eva M; McPherson, Ruth; Poirier, Paul; Qaadri, Shafiq; Rabasa-Lhoret, Rémi; Rabkin, Simon W; Sharma, Arya M; Steele, Andrew W; Stone, James A; Tardif, Jean-Claude; Tobe, Sheldon; Ur, Ehud

    2011-01-01

    The concepts of "cardiometabolic risk," "metabolic syndrome," and "risk stratification" overlap and relate to the atherogenic process and development of type 2 diabetes. There is confusion about what these terms mean and how they can best be used to improve our understanding of cardiovascular disease treatment and prevention. With the objectives of clarifying these concepts and presenting practical strategies to identify and reduce cardiovascular risk in multiethnic patient populations, the Cardiometabolic Working Group reviewed the evidence related to emerging cardiovascular risk factors and Canadian guideline recommendations in order to present a detailed analysis and consolidated approach to the identification and management of cardiometabolic risk. The concepts related to cardiometabolic risk, pathophysiology, and strategies for identification and management (including health behaviours, pharmacotherapy, and surgery) in the multiethnic Canadian population are presented. "Global cardiometabolic risk" is proposed as an umbrella term for a comprehensive list of existing and emerging factors that predict cardiovascular disease and/or type 2 diabetes. Health behaviour interventions (weight loss, physical activity, diet, smoking cessation) in people identified at high cardiometabolic risk are of critical importance given the emerging crisis of obesity and the consequent epidemic of type 2 diabetes. Vascular protective measures (health behaviours for all patients and pharmacotherapy in appropriate patients) are essential to reduce cardiometabolic risk, and there is growing consensus that a multidisciplinary approach is needed to adequately address cardiometabolic risk factors. Health care professionals must also consider risk factors related to ethnicity in order to appropriately evaluate everyone in their diverse patient populations.

  7. Detailed specificity analysis of antibodies binding to modified histone tails with peptide arrays.

    Science.gov (United States)

    Bock, Ina; Dhayalan, Arunkumar; Kudithipudi, Srikanth; Brandt, Ole; Rathert, Philipp; Jeltsch, Albert

    2011-02-01

    Chromatin structure is greatly influenced by histone tail post-translational modifications (PTM), which also play a central role in epigenetic processes. Antibodies against modified histone tails are central research reagents in chromatin biology and molecular epigenetics. We applied Celluspots peptide arrays for the specificity analysis of 36 commercial antibodies from different suppliers which are directed towards modified histone tails. The arrays contained 384 peptides from 8 different regions of the N-terminal tails of histones, viz. H3 1-19, 7-26, 16-35 and 26-45, H4 1-19 and 11-30, H2A 1-19 and H2B 1-19, featuring 59 post-translational modifications in many different combinations. Using various controls we document the reliability of the method. Our analysis revealed previously undocumented details in the specificity profile. Most of the antibodies bound well to the PTM they have been raised for, but some failed. In addition some antibodies showed high cross-reactivity and most antibodies were inhibited by specific additional PTMs close to the primary one. Furthermore, specificity profiles for antibodies directed towards the same modification sometimes were very different. The specificity of antibodies used in epigenetic research is an important issue. We provide a catalog of antibody specificity profiles for 36 widely used commercial histone tail PTM antibodies. Better knowledge about the specificity profiles of antibodies will enable researchers to implement necessary control experiments in biological studies and allow more reliable interpretation of biological experiments using these antibodies.

  8. Analysis of Detailed Energy Audits and Energy Use Measures of University Buildings

    Directory of Open Access Journals (Sweden)

    Kęstutis Valančius

    2011-12-01

    Full Text Available The paper explains the results of a detailed energy audit of the buildings of Vilnius Gediminas Technical University. Energy audits were performed with reference to the international scientific project. The article presents the methodology and results of detailed measurements of energy balance characteristics. Article in Lithuanian

  9. STATISTICAL ANALYSIS OF THE TM- MODEL VIA BAYESIAN APPROACH

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam

    2012-11-01

    Full Text Available The method of paired comparisons calls for the comparison of treatments presented in pairs to judges who prefer the better one based on their sensory evaluations. Thurstone (1927) and Mosteller (1951) employ the method of maximum likelihood to estimate the parameters of the Thurstone-Mosteller model for paired comparisons. A Bayesian analysis of the said model using the non-informative reference (Jeffreys) prior is presented in this study. The posterior estimates (means and joint modes) of the parameters and the posterior probabilities comparing the two parameters are obtained for the analysis. The predictive probabilities that one treatment (Ti) is preferred to any other treatment (Tj) in a future single comparison are also computed. In addition, the graphs of the marginal posterior distributions of the individual parameters are drawn. The appropriateness of the model is also tested using the chi-square test statistic.
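
    The maximum-likelihood baseline mentioned in the abstract can be sketched as follows (hypothetical win counts; the Bayesian analysis with a Jeffreys prior would replace the optimizer with posterior computation). Under the Thurstone-Mosteller model, the probability that Ti is preferred to Tj is Φ(μi − μj):

```python
import numpy as np
from scipy import optimize, stats

# Hypothetical paired-comparison data: wins[i, j] = times T_i preferred over T_j
wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])

def neg_log_lik(mu_free):
    # Fix the last treatment's worth at 0 for identifiability
    mu = np.append(mu_free, 0.0)
    p = stats.norm.cdf(mu[:, None] - mu[None, :])   # P(T_i preferred to T_j)
    mask = ~np.eye(len(mu), dtype=bool)
    return -np.sum(wins[mask] * np.log(p[mask]))

res = optimize.minimize(neg_log_lik, np.zeros(len(wins) - 1))
print("ML worth parameters:", np.append(res.x, 0.0).round(3))
```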

  10. On Understanding Statistical Data Analysis in Higher Education

    CERN Document Server

    Montalbano, Vera

    2012-01-01

    Data analysis is a powerful tool in all experimental sciences. Statistical methods, such as sampling theory, computer technologies necessary for handling large amounts of data, and skill in analysing information contained in different types of graphs are all competences necessary for achieving an in-depth data analysis. In higher education, these topics are usually fragmented across different courses, interdisciplinary integration can be lacking, and some necessary caution in the use of these topics can be missing or misunderstood. Students are often obliged to acquire these skills by themselves during the preparation of the final experimental thesis. A proposal for a learning path on nuclear phenomena is presented in order to develop these scientific competences in physics courses. An introduction to radioactivity and nuclear phenomenology is followed by measurements of natural radioactivity. Background and weak sources can be monitored for a long time in a physics laboratory. The data are collected and analyzed in a computer lab i...

  11. Statistical analysis of cascading failures in power grids

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Pfitzner, Rene [Los Alamos National Laboratory; Turitsyn, Konstantin [Los Alamos National Laboratory

    2010-12-01

    We introduce a new microscopic model of cascading failures in transmission power grids. This model accounts for automatic response of the grid to load fluctuations that take place on the scale of minutes, when optimum power flow adjustments and load shedding controls are unavailable. We describe extreme events, caused by load fluctuations, which cause cascading failures of loads, generators and lines. Our model is quasi-static in the causal, discrete time and sequential resolution of individual failures. The model, in its simplest realization based on the Direct Current (DC) description of the power flow problem, is tested on three standard IEEE systems consisting of 30, 39 and 118 buses. Our statistical analysis suggests a straightforward classification of cascading and islanding phases in terms of the ratios between average number of removed loads, generators and links. The analysis also demonstrates sensitivity to variations in line capacities. Future research challenges in modeling and control of cascading outages over real-world power networks are discussed.
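
    A minimal DC power-flow sketch (a hypothetical 3-bus network, not the IEEE test systems): bus voltage angles follow from the reduced susceptance matrix, and line flows from angle differences, which is the quantity a cascade model compares against line capacities.

```python
import numpy as np

lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]  # (from bus, to bus, susceptance, p.u.)
P = np.array([0.6, -0.4])                         # net injections at buses 1 and 2 (p.u.)

# Assemble the bus susceptance matrix; bus 0 is the slack bus
n = 3
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P)         # angles (rad), slack removed

for i, j, b in lines:
    print(f"line {i}-{j}: flow = {b * (theta[i] - theta[j]):+.3f} p.u.")
```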

  12. Topics in statistical data analysis for high-energy physics

    CERN Document Server

    Cowan, G

    2013-01-01

    These lectures concern two topics that are becoming increasingly important in the analysis of High Energy Physics (HEP) data: Bayesian statistics and multivariate methods. In the Bayesian approach we extend the interpretation of probability to cover not only the frequency of repeatable outcomes but also to include a degree of belief. In this way we are able to associate probability with a hypothesis and thus to answer directly questions that cannot be addressed easily with traditional frequentist methods. In multivariate analysis we try to exploit as much information as possible from the characteristics that we measure for each event to distinguish between event types. In particular we will look at a method that has gained popularity in HEP in recent years: the boosted decision tree (BDT).
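
    A minimal boosted-decision-tree sketch on synthetic two-class "event" data, using scikit-learn's gradient boosting as a stand-in for the HEP-specific toolkits (all numbers below are hypothetical):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic events: two measured characteristics, signal shifted from background
rng = np.random.default_rng(1)
bkg = rng.normal(0.0, 1.0, size=(5000, 2))
sig = rng.normal(1.0, 1.0, size=(5000, 2))
X = np.vstack([bkg, sig])
y = np.r_[np.zeros(5000), np.ones(5000)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(X_tr, y_tr)
print("test accuracy:", bdt.score(X_te, y_te))
```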

  13. Processes and subdivisions in diogenites, a multivariate statistical analysis

    Science.gov (United States)

    Harriott, T. A.; Hewins, R. H.

    1984-01-01

    Multivariate statistical techniques used on diogenite orthopyroxene analyses show the relationships that occur within diogenites and the two orthopyroxenite components (class I and II) in the polymict diogenite Garland. Cluster analysis shows that only Peckelsheim is similar to Garland class I (Fe-rich) and the other diogenites resemble Garland class II. The unique diogenite Y 75032 may be related to type I by fractionation. Factor analysis confirms the subdivision and shows that Fe does not correlate with the weakly incompatible elements across the entire pyroxene composition range, indicating that igneous fractionation is not the process controlling total diogenite composition variation. The occurrence of two groups of diogenites is interpreted as the result of sampling or mixing of two main sequences of orthopyroxene cumulates with slightly different compositions.

  14. Statistical learning analysis in neuroscience: aiming for transparency

    Directory of Open Access Journals (Sweden)

    Michael Hanke

    2010-05-01

    Full Text Available Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires "neuroscience-aware" technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here we review its features and applicability to various neural data modalities.

  15. Statistical learning analysis in neuroscience: aiming for transparency.

    Science.gov (United States)

    Hanke, Michael; Halchenko, Yaroslav O; Haxby, James V; Pollmann, Stefan

    2010-01-01

    Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires "neuroscience-aware" technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities.

  16. Multivariate Statistical Analysis Applied in Wine Quality Evaluation

    Directory of Open Access Journals (Sweden)

    Jieling Zou

    2015-08-01

    Full Text Available This study applies multivariate statistical approaches to wine quality evaluation. With 27 red wine samples, four factors were identified out of 12 parameters by principal component analysis, explaining 89.06% of the total variance of data. As iterative weights calculated by the BP neural network revealed little difference from weights determined by information entropy method, the latter was chosen to measure the importance of indicators. Weighted cluster analysis performs well in classifying the sample group further into two sub-clusters. The second cluster of red wine samples, compared with its first, was lighter in color, tasted thinner and had fainter bouquet. Weighted TOPSIS method was used to evaluate the quality of wine in each sub-cluster. With scores obtained, each sub-cluster was divided into three grades. On the whole, the quality of lighter red wine was slightly better than the darker category. This study shows the necessity and usefulness of multivariate statistical techniques in both wine quality evaluation and parameter selection.
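
    The entropy-weight and weighted-TOPSIS steps can be sketched compactly (hypothetical decision matrix with benefit-type indicators only; the paper's 12 parameters and PCA step are omitted for brevity):

```python
import numpy as np

# Rows = wine samples, columns = quality indicators (all benefit-type here)
X = np.array([[7.2, 0.35, 3.1],
              [6.8, 0.50, 2.9],
              [7.9, 0.30, 3.4],
              [6.5, 0.45, 2.7]])

# Information-entropy weights: low-entropy (discriminating) columns weigh more
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
w = (1 - E) / (1 - E).sum()

# Weighted TOPSIS: closeness to ideal vs. anti-ideal solution
V = w * X / np.linalg.norm(X, axis=0)
d_best = np.linalg.norm(V - V.max(axis=0), axis=1)
d_worst = np.linalg.norm(V - V.min(axis=0), axis=1)
print("weights:", w.round(3))
print("TOPSIS scores:", (d_worst / (d_best + d_worst)).round(3))
```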

  17. Statistical analysis of magnetically soft particles in magnetorheological elastomers

    Science.gov (United States)

    Gundermann, T.; Cremer, P.; Löwen, H.; Menzel, A. M.; Odenbach, S.

    2017-04-01

    The physical properties of magnetorheological elastomers (MRE) are a complex issue and can be influenced and controlled in many ways, e.g. by applying a magnetic field, by external mechanical stimuli, or by an electric potential. In general, the response of MRE materials to these stimuli is crucially dependent on the distribution of the magnetic particles inside the elastomer. Specific knowledge of the interactions between particles or particle clusters is of high relevance for understanding the macroscopic rheological properties and provides an important input for theoretical calculations. In order to gain a better insight into the correlation between the macroscopic effects and microstructure and to generate a database for theoretical analysis, x-ray micro-computed tomography (X-μCT) investigations as a base for a statistical analysis of the particle configurations were carried out. Different MREs with quantities of 2–15 wt% (0.27–2.3 vol%) of iron powder and different allocations of the particles inside the matrix were prepared. The X-μCT results were edited by an image processing software regarding the geometrical properties of the particles with and without the influence of an external magnetic field. Pair correlation functions for the positions of the particles inside the elastomer were calculated to statistically characterize the distributions of the particles in the samples.
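
    The pair correlation computation can be sketched on hypothetical particle centers (edge corrections, which a production estimator for X-μCT data would need, are ignored here):

```python
import numpy as np

def pair_correlation(points, box, dr, r_max):
    """Radial pair correlation function g(r), ignoring boundary effects."""
    n = len(points)
    rho = n / np.prod(box)                         # number density
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # unique pair distances
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(d, bins=edges)
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    g = counts / (0.5 * n * rho * shell)           # normalize by ideal-gas pairs
    return 0.5 * (edges[1:] + edges[:-1]), g

# Hypothetical particle centers (mm), e.g. segmented from tomography data
rng = np.random.default_rng(2)
pts = rng.uniform(0.0, 1.0, size=(500, 3))
r, g = pair_correlation(pts, box=(1.0, 1.0, 1.0), dr=0.02, r_max=0.3)
print(g.round(2))
```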

  18. Visualization methods for statistical analysis of microarray clusters

    Directory of Open Access Journals (Sweden)

    Li Kai

    2005-05-01

    Full Text Available Abstract Background The most common method of identifying groups of functionally related genes in microarray data is to apply a clustering algorithm. However, it is impossible to determine which clustering algorithm is most appropriate to apply, and it is difficult to verify the results of any algorithm due to the lack of a gold-standard. Appropriate data visualization tools can aid this analysis process, but existing visualization methods do not specifically address this issue. Results We present several visualization techniques that incorporate meaningful statistics that are noise-robust for the purpose of analyzing the results of clustering algorithms on microarray data. This includes a rank-based visualization method that is more robust to noise, a difference display method to aid assessments of cluster quality and detection of outliers, and a projection of high dimensional data into a three dimensional space in order to examine relationships between clusters. Our methods are interactive and are dynamically linked together for comprehensive analysis. Further, our approach applies to both protein and gene expression microarrays, and our architecture is scalable for use on both desktop/laptop screens and large-scale display devices. This methodology is implemented in GeneVAnD (Genomic Visual ANalysis of Datasets) and is available at http://function.princeton.edu/GeneVAnD. Conclusion Incorporating relevant statistical information into data visualizations is key for analysis of large biological datasets, particularly because of high levels of noise and the lack of a gold-standard for comparisons. We developed several new visualization techniques and demonstrated their effectiveness for evaluating cluster quality and relationships between clusters.

  19. A detailed study of α-relaxation in epoxy/carbon nanoparticles composites using computational analysis

    Directory of Open Access Journals (Sweden)

    C. A. Stergiou

    2012-02-01

    Full Text Available Nanocomposites were fabricated based on diglycidyl ether of bisphenol A (DGEBA), cured with triethylenetetramine (TETA) and filled with: (a) high conductivity carbon black (CB) and (b) amino-functionalized multiwalled carbon nanotubes (MWCNTs). The full dynamic mechanical analysis (DMA) spectra, obtained for the thermomechanical characterization of the partially cured DGEBA/TETA/CB and water-saturated DGEBA/TETA/MWCNT composites, reveal a complex behaviour, as the α-relaxation appears to consist of more than one individual peak. By employing some basic calculations along with an optimization procedure, which utilizes the pseudo-Voigt profile function, the experimental data have been successfully analyzed. In fact, additional values of sub-glass transition temperatures (Ti) corresponding to subrelaxation mechanisms were introduced besides the dominant process. Thus, the physical sense of multiple networks in the composites is investigated and the glass transition temperature Tg is more precisely determined, as the DMA α-relaxation peaks can be reconstructed by the accumulation of individual peaks. Additionally, a novel term, the index of network homogeneity (IH), is proposed to effectively characterize the degree of statistical perfection of the network.
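
    The peak-decomposition idea can be sketched with a two-peak pseudo-Voigt fit (synthetic data; the paper's optimization procedure and number of sub-peaks may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def pseudo_voigt(T, A, T0, w, eta):
    """Pseudo-Voigt: eta * Lorentzian + (1 - eta) * Gaussian, same width w."""
    lor = 1.0 / (1.0 + ((T - T0) / w) ** 2)
    gau = np.exp(-np.log(2.0) * ((T - T0) / w) ** 2)
    return A * (eta * lor + (1.0 - eta) * gau)

def two_peaks(T, A1, T1, w1, e1, A2, T2, w2, e2):
    # Accumulation of two sub-relaxation peaks; extendable to more
    return pseudo_voigt(T, A1, T1, w1, e1) + pseudo_voigt(T, A2, T2, w2, e2)

# Hypothetical loss-factor trace around the alpha-relaxation (temperature in K)
T = np.linspace(340.0, 420.0, 200)
rng = np.random.default_rng(3)
y = two_peaks(T, 0.8, 370, 8, 0.5, 0.4, 392, 10, 0.3) + rng.normal(0, 0.01, T.size)

p0 = [1.0, 368, 10, 0.5, 0.5, 395, 10, 0.5]
lo_b = [0, 340, 0.1, 0, 0, 340, 0.1, 0]
hi_b = [np.inf, 420, 50, 1, np.inf, 420, 50, 1]
popt, _ = curve_fit(two_peaks, T, y, p0=p0, bounds=(lo_b, hi_b))
print("fitted peak temperatures:", popt[1].round(1), popt[5].round(1))
```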

  20. SEDA: A software package for the Statistical Earthquake Data Analysis.

    Science.gov (United States)

    Lombardi, A M

    2017-03-14

    In this paper, the first version of the software SEDA (SEDAv1.0), designed to help seismologists statistically analyze earthquake data, is presented. The package consists of a user-friendly Matlab-based interface, which allows the user to easily interact with the application, and a computational core of Fortran codes, to guarantee maximum speed. The primary factor driving the development of SEDA is to guarantee research reproducibility, which is a growing movement among scientists and is highly recommended by the most important scientific journals. SEDAv1.0 is mainly devoted to producing accurate and fast outputs. Less care has been taken over the graphic appeal, which will be improved in the future. The main part of SEDAv1.0 is devoted to ETAS modeling. SEDAv1.0 contains a set of consistent tools on ETAS, allowing the estimation of parameters, the testing of the model on data, the simulation of catalogs, the identification of sequences, and forecast calculation. The peculiarities of the routines inside SEDAv1.0 are discussed in this paper. More specific details on the software are presented in the manual accompanying the program package.

  3. Statistical analysis of vibration-induced bone and joint damages.

    Science.gov (United States)

    Schenk, T

    1995-01-01

    Vibration-induced damage to bones and joints is still an occupational disease with insufficient knowledge about causal and moderating factors and the resulting damage. Retrospective analyses of already acknowledged occupational diseases may also be used for a better understanding of these relationships. Detailed records of 203 occupational diseases acknowledged between 1970 and 1979 in the building industry and the building-materials industry of the GDR are the basis for the investigations described here. The data were gathered from the original documents of the occupational diseases and scaled in cooperation between an industrial engineer and an industrial physician. For the purposes of this investigation, the data distinguish between data describing the conditions of the workplace (e.g., material, tools, and posture), the exposure parameters (e.g., beginning of exposure and latency period), and the disease (e.g., anamnestic and radiological data). These data were prepared for use with sophisticated computerized statistical methods. The following analyses were carried out: investigation of the connections between the several characteristics describing the occupational disease (health damage), including a comparison of the severity of the damage at the individual joints; investigation of the side dependence of the damage; investigation of the influence of the age at the beginning of exposure and the age at the acknowledgement of the occupational disease, and herewith of the exposure duration; and investigation of the effect of different occupational and exposure conditions.

  4. Detailed investigation of Long-Period activity at Campi Flegrei by Convolutive Independent Component Analysis

    Science.gov (United States)

    Capuano, P.; De Lauro, E.; De Martino, S.; Falanga, M.

    2016-04-01

    This work is devoted to the analysis of seismic signals continuously recorded at Campi Flegrei Caldera (Italy) during the entire year 2006. The radiation pattern associated with the Long-Period energy release is investigated. We adopt an innovative Independent Component Analysis algorithm for convolutive seismic series, adapted and improved to give automatic procedures for detecting seismic events often buried in high-level ambient noise. The extracted waveforms, characterized by an improved signal-to-noise ratio, allow the recognition of Long-Period precursors, evidencing that the seismic activity accompanying the mini-uplift crisis (in 2006), which climaxed in the three days from 26-28 October, had already started at the beginning of the month of October and lasted until the middle of November. Hence, a more complete seismic catalog is provided which can be used to properly quantify the seismic energy release. To better ground our results, we first check the robustness of the method by comparing it with other blind source separation methods based on higher order statistics; secondly, we reconstruct the radiation patterns of the extracted Long-Period events in order to link the individuated signals directly to the sources. We take advantage of Convolutive Independent Component Analysis, which provides basic signals along the three directions of motion so that a direct polarization analysis can be performed with no other filtering procedures. We show that the extracted signals are mainly composed of P waves with radial polarization pointing to the seismic source of the main LP swarm, i.e. a small area in the Solfatara, also in the case of the small events that both precede and follow the main activity. From a dynamical point of view, they can be described by two degrees of freedom, indicating a low level of complexity associated with the vibrations from a superficial hydrothermal system. Our results allow us to move towards a full description of the complexity of
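
    For illustration, an instantaneous (non-convolutive) ICA decomposition on synthetic two-channel data; the convolutive variant used in the paper additionally handles time-lagged mixing between channels:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two hypothetical sources: a long-period-like transient and impulsive noise
rng = np.random.default_rng(4)
t = np.linspace(0.0, 60.0, 6000)
s1 = np.sin(2 * np.pi * 0.5 * t) * np.exp(-((t - 30.0) ** 2) / 50.0)
s2 = rng.laplace(0.0, 1.0, t.size)
S = np.c_[s1, s2]

A = np.array([[1.0, 0.6], [0.4, 1.0]])   # instantaneous mixing (station responses)
X = S @ A.T

S_est = FastICA(n_components=2, random_state=0).fit_transform(X)
print("recovered components:", S_est.shape)
```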

  5. Detailed analysis of latencies in image-based dynamic MLC tracking

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, Per Rugaard; Cho, Byungchul; Sawant, Amit; Ruan, Dan; Keall, Paul J. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Oncology and Department of Medical Physics, Aarhus University Hospital, 8000 Aarhus (Denmark); Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Radiation Oncology, Asan Medical Center, Seoul 138-736 (Korea, Republic of); Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States)

    2010-09-15

    Purpose: Previous measurements of the accuracy of image-based real-time dynamic multileaf collimator (DMLC) tracking show that the major contributor to errors is latency, i.e., the delay between target motion and MLC response. Therefore the purpose of this work was to develop a method for detailed analysis of latency contributions during image-based DMLC tracking. Methods: A prototype DMLC tracking system integrated with a linear accelerator was used for tracking a phantom with an embedded fiducial marker during treatment delivery. The phantom performed a sinusoidal motion. Real-time target localization was based on x-ray images acquired either with a portal imager or a kV imager mounted orthogonal to the treatment beam. Each image was stored in a file on the imaging workstation. A marker segmentation program opened the image file, determined the marker position in the image, and transferred it to the DMLC tracking program. This program estimated the three-dimensional target position by a single-imager method and adjusted the MLC aperture to the target position. Imaging intervals {Delta}T{sub image} from 150 to 1000 ms were investigated for both kV and MV imaging. After the experiments, the recorded images were synchronized with MLC log files generated by the MLC controller and tracking log files generated by the tracking program. This synchronization allowed temporal analysis of the information flow for each individual image from acquisition to completed MLC adjustment. The synchronization also allowed investigation of the MLC adjustment dynamics on a considerably finer time scale than the 50 ms time resolution of the MLC log files. Results: For {Delta}T{sub image}=150 ms, the total time from image acquisition to completed MLC adjustment was 380{+-}9 ms for MV and 420{+-}12 ms for kV images. The main part of this time was from image acquisition to completed image file writing (272 ms for MV and 309 ms for kV). Image file opening (38 ms), marker segmentation (4 ms

  6. Statistical analysis for precise estimation of structural properties of NGC 1960

    CERN Document Server

    Joshi, Gireesh C

    2016-01-01

    The statistical analysis of gathered data on astronomical objects (such as open clusters) is an excellent tool for computing their parameters and constraining theories of their evolution. Here, we present a detailed structural and membership analysis of the open cluster NGC 1960 through various statistical formulas and approaches. The King empirical method provides information about the cluster extent, 5.2 +/- 0.4 pc. Exact identification of the members of the cluster is needed for precise estimation of its age, reddening, metallicity, etc.; therefore, to identify the most probable members (MPMs), we adopted a combined approach of various statistical methods (kinematic, photometric and statistical), and 712 members satisfied the criteria required of MPMs. The basic physical parameters of the cluster, such as E(B-V)=0.23+/-0.02 mag, E(V-K)=1.05+/-0.03 mag, log(Age)=7.35+/-0.05, and (m-M)=11.35+/-0.10 mag, are obtained using the color-color and color-magnitude diagrams. NGC 1960 is found to be located at a distance of 1.34 +/- 0....
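
    Fitting the King empirical profile to a radial star-count profile can be sketched as follows (synthetic data; parameter values are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def king(r, f0, rc, rt, bg):
    """King (1962) empirical surface-density profile plus field background."""
    term = 1.0 / np.sqrt(1.0 + (r / rc) ** 2) - 1.0 / np.sqrt(1.0 + (rt / rc) ** 2)
    return f0 * np.clip(term, 0.0, None) ** 2 + bg

# Hypothetical radial star-count density (stars / arcmin^2)
r = np.linspace(0.5, 20.0, 30)
rng = np.random.default_rng(5)
density = king(r, 40.0, 2.5, 15.0, 3.0) + rng.normal(0.0, 1.0, r.size)

popt, _ = curve_fit(king, r, density, p0=[30.0, 2.0, 12.0, 2.0])
print(f"core radius rc = {popt[1]:.2f}, tidal radius rt = {popt[2]:.2f}")
```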

  7. Statistical Models and Methods for Network Meta-Analysis.

    Science.gov (United States)

    Madden, L V; Piepho, H-P; Paul, P A

    2016-08-01

    Meta-analysis, the methodology for analyzing the results from multiple independent studies, has grown tremendously in popularity over the last four decades. Although most meta-analyses involve a single effect size (summary result, such as a treatment difference) from each study, there are often multiple treatments of interest across the network of studies in the analysis. Multi-treatment (or network) meta-analysis can be used for simultaneously analyzing the results from all the treatments. However, the methodology is considerably more complicated than for the analysis of a single effect size, and there have not been adequate explanations of the approach for agricultural investigations. We review the methods and models for conducting a network meta-analysis based on frequentist statistical principles, and demonstrate the procedures using a published multi-treatment plant pathology data set. A major advantage of network meta-analysis is that correlations of estimated treatment effects are automatically taken into account when an appropriate model is used. Moreover, treatment comparisons may be possible in a network meta-analysis that are not possible in a single study because all treatments of interest may not be included in any given study. We review several models that consider the study effect as either fixed or random, and show how to interpret model-fitting output. We further show how to model the effect of moderator variables (study-level characteristics) on treatment effects, and present one approach to test for the consistency of treatment effects across the network. Online supplemental files give explanations on fitting the network meta-analytical models using SAS.

  8. 中国古代的统计分析%Statistical analysis in ancient China

    Institute of Scientific and Technical Information of China (English)

    莫曰达

    2003-01-01

    Analyzing social and economic problems through statistics is an important aspect of statistical thought in ancient China. This paper demonstrates some instances of statistical analysis in ancient China.

  9. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Anghelache

    2006-01-01

    Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect and is recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources, or other international organizations' statistics. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment, and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  10. Short-run and Current Analysis Model in Statistics

    Directory of Open Access Journals (Sweden)

    Constantin Mitrut

    2006-03-01

    Full Text Available Using short-run statistical indicators is a compulsory requirement of current analysis. Therefore, a system of EUROSTAT short-run indicators has been set up in this respect and is recommended for use by the member countries. On the basis of these indicators, regular, usually monthly, analyses are carried out in respect of: the determination of production dynamics; the evaluation of the short-run investment volume; the development of the turnover; the wage evolution; the employment; the price indexes and the consumer price index (inflation); and the volume of exports and imports, the extent to which imports are covered by exports, and the trade balance. The EUROSTAT system of indicators of conjuncture is conceived as an open system, so that it can be extended or restricted at any moment, allowing indicators to be amended or even removed, depending on the domestic users' requirements as well as on the specific requirements of harmonization and integration. For short-run analysis there is also the World Bank system of indicators of conjuncture, which relies on the data sources offered by the World Bank, the World Institute for Resources, or other international organizations' statistics. The system comprises indicators of social and economic development and focuses on indicators for the following three fields: human resources, environment, and economic performances. At the end of the paper, there is a case study on the situation of Romania, for which we used all these indicators.

  11. Detailed analysis of an Eigen quasispecies model in a periodically moving sharp-peak landscape

    Science.gov (United States)

    Neves, Armando G. M.

    2010-09-01

    The Eigen quasispecies model in a periodically moving sharp-peak landscape considered in previous seminal works [M. Nilsson and N. Snoad, Phys. Rev. Lett. 84, 191 (2000), doi:10.1103/PhysRevLett.84.191] and [C. Ronnewinkel, in Theoretical Aspects of Evolutionary Computing, edited by L. Kallel, B. Naudts, and A. Rogers (Springer-Verlag, Heidelberg, 2001)] is analyzed in greater detail. We show here, through a more rigorous analysis, that the results in those papers are qualitatively correct. In particular, we obtain a phase diagram for the existence of a quasispecies with the same shape as in the above-cited paper by C. Ronnewinkel, with upper and lower thresholds for the mutation rate between which a quasispecies may survive. A difference is that the upper value is larger and the lower value is smaller than the previously reported ones, so that the range for quasispecies existence is always larger than previously thought. The quantitative information provided might also be important in understanding genetic variability in virus populations and has possible applications in antiviral therapies. The results in the quoted papers were obtained by studying the populations only at a few genomes. As we will show, this amounts to diagonalizing a 3×3 matrix. Our work is based instead on a different division of the population, allowing a finer control of the populations at various relevant genetic sequences. The existence of a quasispecies will be related to Perron-Frobenius eigenvalues. Although huge matrices of size 2^ℓ, where ℓ is the genome length, may seem necessary at first look, we show that such large sizes are not necessary, and we easily obtain numerical and analytical results for their eigenvalues.
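
    For a stationary sharp-peak landscape (a simplification of the moving-peak case analyzed in the paper), the quasispecies is the Perron-Frobenius eigenpair of the mutation-selection matrix; a brute-force sketch for a short genome, where full enumeration is still feasible:

```python
import numpy as np

L = 8                                       # genome length (2**L sequences)
mu = 0.05                                   # per-site mutation rate
f = np.ones(2 ** L); f[0] = 10.0            # sharp peak at sequence 0

# Mutation matrix from pairwise Hamming distances (popcount of XOR)
seqs = np.arange(2 ** L)
ham = np.array([[bin(i ^ j).count("1") for j in seqs] for i in seqs])
Q = (mu ** ham) * ((1.0 - mu) ** (L - ham))
W = Q * f[None, :]                          # W[i, j] = f_j * Q[i, j]

eigvals, eigvecs = np.linalg.eig(W)
k = np.argmax(eigvals.real)                 # Perron-Frobenius eigenvalue
x = np.abs(eigvecs[:, k].real)
x /= x.sum()                                # stationary quasispecies distribution
print(f"lambda = {eigvals.real[k]:.3f}, master-sequence share = {x[0]:.3f}")
```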

  12. Morphological features and associated anomalies of schizencephaly in the clinical population: detailed analysis of MR images

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, N. [Department of Radiology, Faculty of Medicine, University of Tokyo (Japan); Department of Radiology, Section of Neuroradiology, University of California San Francisco, CA (United States); Tsutsumi, Y. [Department of Radiology, National Okura Hospital, Tokyo (Japan); Barkovich, A.J. [Department of Radiology, Section of Neuroradiology, University of California San Francisco, CA (United States)

    2002-05-01

    Although they are well documented in autopsy series, the macroscopic features and associated anomalies of schizencephalies have not been described in detail in a large clinical population. To assess the macroscopic findings of schizencephaly and the prevalence of associated findings, we conducted a retrospective MR analysis of a group of patients with schizencephaly. The MR studies of 35 patients with schizencephaly were retrospectively reviewed. The images were examined for the location and size of the schizencephalic cleft, the presence and location of associated polymicrogyria, and the presence, location, and severity of other brain anomalies. A total of 54 schizencephalic clefts were seen in the 35 patients. These clefts were unilateral in 18 (51%) patients and bilateral in 17 (49%) patients; three clefts were identified in two patients. Nine clefts (17%) had fused lips and 45 had separated-lip clefts (83%). Polymicrogyria was present inside 23 clefts (43%), while subependymal heterotopias were present at the cleft orifice in 27 clefts (50%). Polymicrogyria was identified outside the cleft, both adjacent to and remote from the cleft, in 23 patients (66%). Abnormal cerebral white-matter signal intensity was present in seven patients (20%), while white-matter volume diminution was noted in all patients. Ventricular diverticula with mass effect, roofing membranes, remnant floors, and cord-like remnants were present in 12, 1, 11, and 3 patients, respectively. Our results show that the spectrum of macroscopic findings in schizencephaly includes fused-lip and separated-lip clefts, polymicrogyric and non-polymicrogyric cleft linings, cyst-like diverticula and membranous structures, and subependymal heterotopia at the cleft. Concomitant anomalies are polymicrogyria outside the cleft, white-matter diminution, septal and optic pathway anomalies, callosal anomalies and hippocampal anomalies. Unilateral and bilateral clefts occur in a nearly equal frequency in the clinical

  13. Landscape evolution reconstructions on Mars: a detailed analysis of lacustrine and fluvial terraces

    Science.gov (United States)

    Rossato, Sandro; Pajola, Maurizio; Baratti, Emanuele; Mangili, Clara; Coradini, Marcello

    2015-04-01

    Liquid water was flowing on the surface of Mars in the past, leaving behind a wide range of geomorphic features. The ancient major Martian water fluxes vanished about 3.5 Ga. Meteoritic impacts, wind-erosion, gravity-related phenomena, tectonic deformations and volcanic activities deeply altered the landforms during the ages. Hence, the reconstruction of water-shaped landscapes is often complicated. Fluvial and lacustrine terraces analysis and correlation is a useful approach to understand and reconstruct the past changes in Martian landscape evolution. These features are commonly used as reference for the top of water bodies on Earth, since they are void of the uncertainties or errors deriving from erosional or slumping processes that could have acted on the valley flanks or in the plateau, where the hydrological network was carved in. The study area is located in the western hemisphere of Mars, in the Memnonia quadrangle, between latitude 9° 10'-9° 50'South and longitude 167° 0'-167° 30' West and it constitutes a transition region between the southern highlands of Terra Sirenum and the northern lowlands of Lucus Planum. Many water-shaped features have already been described near the study area, the most prominent of them being the Ma'adim Vallis and the Mangala Valles system. Our results derive from the observations and the analysis of HRSC images (12.5 m spatial resolution) and Digital Elevation Models (DEMs) derived from the MEX-HRSC (75 m resolution), that allow the identification of elevation differences up to the tens of meter scale. We were able to reconstruct six main evolutionary stages of a complex hydrologic systems consisting of two main palaeorivers (up to 5 km wide) connected one another by a palaeolake that formed within a meteor crater (~20 km diameter). On the basis of Earth analogs, these stages/terraces should have evolved during a long period of time, at least thousands years long. Furthermore, crater counting date back the deactivation of

  14. Statistical analysis of the breaking processes of Ni nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Mochales, P [Departamento de Fisica de la Materia Condensada, Facultad de Ciencias, Universidad Autonoma de Madrid, c/ Francisco Tomas y Valiente 7, Campus de Cantoblanco, E-28049-Madrid (Spain); Paredes, R [Centro de Fisica, Instituto Venezolano de Investigaciones CientIficas, Apartado 20632, Caracas 1020A (Venezuela); Pelaez, S; Serena, P A [Instituto de Ciencia de Materiales de Madrid, Consejo Superior de Investigaciones CientIficas, c/ Sor Juana Ines de la Cruz 3, Campus de Cantoblanco, E-28049-Madrid (Spain)], E-mail: pedro.garciamochales@uam.es

    2008-06-04

    We have performed a massive statistical analysis on the breaking behaviour of Ni nanowires using molecular dynamic simulations. Three stretching directions, five initial nanowire sizes and two temperatures have been studied. We have constructed minimum cross-section histograms and analysed for the first time the role played by monomers and dimers. The shape of such histograms and the absolute number of monomers and dimers strongly depend on the stretching direction and the initial size of the nanowire. In particular, the statistical behaviour of the breakage final stages of narrow nanowires strongly differs from the behaviour obtained for large nanowires. We have analysed the structure around monomers and dimers. Their most probable local configurations differ from those usually appearing in static electron transport calculations. Their non-local environments show disordered regions along the nanowire if the stretching direction is [100] or [110]. Additionally, we have found that, at room temperature, [100] and [110] stretching directions favour the appearance of non-crystalline staggered pentagonal structures. These pentagonal Ni nanowires are reported in this work for the first time. This set of results suggests that experimental Ni conducting histograms could show a strong dependence on the orientation and temperature.

  15. Statistical Scalability Analysis of Communication Operations in Distributed Applications

    Energy Technology Data Exchange (ETDEWEB)

    Vetter, J S; McCracken, M O

    2001-02-27

    Current trends in high performance computing suggest that users will soon have widespread access to clusters of multiprocessors with hundreds, if not thousands, of processors. This unprecedented degree of parallelism will undoubtedly expose scalability limitations in existing applications, where scalability is the ability of a parallel algorithm on a parallel architecture to effectively utilize an increasing number of processors. Users will need precise and automated techniques for detecting the cause of limited scalability. This paper addresses this dilemma. First, we argue that users face numerous challenges in understanding application scalability: managing substantial amounts of experiment data, extracting useful trends from this data, and reconciling performance information with their application's design. Second, we propose a solution to automate this data analysis problem by applying fundamental statistical techniques to scalability experiment data. Finally, we evaluate our operational prototype on several applications, and show that statistical techniques offer an effective strategy for assessing application scalability. In particular, we find that non-parametric correlation of the number of tasks to the ratio of the time for individual communication operations to overall communication time provides a reliable measure for identifying communication operations that scale poorly.
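
    The non-parametric correlation described here can be sketched with Spearman's rank correlation on hypothetical scaling data:

```python
import numpy as np
from scipy.stats import spearmanr

# Task counts vs. one operation's share of total communication time
tasks = np.array([16, 32, 64, 128, 256, 512])
op_share = np.array([0.08, 0.11, 0.15, 0.22, 0.31, 0.45])

rho, p = spearmanr(tasks, op_share)
# rho near +1 flags an operation whose share grows with scale,
# i.e., a likely scalability bottleneck
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```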

  16. Blog Content and User Engagement - An Insight Using Statistical Analysis.

    Directory of Open Access Journals (Sweden)

    Apoorva Vikrant Kulkarni

    2013-06-01

    Full Text Available Over the past few years, organizations have increasingly realized the value of social media in positioning, propagating and marketing the product/service and the organization itself. Today every organization, be it small or big, has realized the essence of creating a space on the World Wide Web. Social media, through its multifaceted platforms, has enabled organizations to propagate their brands. There are a number of social media networks which are helpful in spreading the message to customers. Many organizations have full-time web analytics teams that regularly try to ensure that prospective customers are visiting their organization through various forms of social media. Web analytics is seen as a tool for business intelligence by organizations, and there are a large number of analytics tools available for monitoring the visibility of a particular brand on the web. For example, Google has its own analytics tool that is very widely used. There are a number of free as well as paid analytical tools available on the internet. The objective of this paper is to study what content in a blog present on social media creates a greater impact on user engagement. The study statistically analyzes the relation between the content of the blog and user engagement. The statistical analysis was carried out on a blog of a reputed management institute in Pune to arrive at conclusions.

  17. Detailed analysis of the clinical effects of cell therapy for thoracolumbar spinal cord injury: an original study

    Directory of Open Access Journals (Sweden)

    Sharma A

    2013-07-01

    Full Text Available Alok Sharma,1 Nandini Gokulchandran,1 Hemangi Sane,2 Prerna Badhe,1 Pooja Kulkarni,2 Mamta Lohia,3 Anjana Nagrajan,3 Nancy Thomas3 1Department of Medical Services and Clinical Research, 2Department of Research and Development, 3Department of Neurorehabilitation, NeuroGen Brain and Spine Institute, Surana Sethia Hospital and Research Centre, Chembur, Mumbai, India Background: Cell therapy is amongst the most promising treatment strategies in spinal cord injury (SCI) because it focuses on repair. There are many published animal studies and a few human trials showing remarkable results with various cell types. The level of SCI determines whether paraplegia or quadriplegia is present, and greatly influences recovery. The purpose of this study was to determine the significance of the clinical effects and long-term safety of intrathecal administration of autologous bone marrow-derived mononuclear cells, along with changes in functional independence and quality of life in patients with thoracolumbar SCI. Methods: We undertook a retrospective analysis of a clinical study in which a nonrandomized sample of 110 patients with thoracolumbar SCI underwent autologous bone marrow-derived mononuclear cell transplantation intrathecally and subsequent neurorehabilitation, with a mean follow-up of 2 years ± 1 month. Changes in any parameters were recorded at follow-up. The data were analyzed using Wilcoxon's signed-rank test and McNemar's test. Functional Independence Measure and American Spinal Injury Association (ASIA) scores were recorded, and a detailed neurological assessment was performed. Results: Overall improvement was seen in 91% of patients, including reduction in spasticity, partial sensory recovery, and improvement in trunk control, postural hypotension, bladder management, mobility, activities of daily living, and functional independence. A significant association of these symptomatic improvements with the cell therapy intervention was established

  18. Statistical models of video structure for content analysis and characterization.

    Science.gov (United States)

    Vasconcelos, N; Lippman, A

    2000-01-01

    Content structure plays an important role in the understanding of video. In this paper, we argue that knowledge about structure can be used both as a means to improve the performance of content analysis and to extract features that convey semantic information about the content. We introduce statistical models for two important components of this structure, shot duration and activity, and demonstrate the usefulness of these models with two practical applications. First, we develop a Bayesian formulation for the shot segmentation problem that is shown to extend the standard thresholding model in an adaptive and intuitive way, leading to improved segmentation accuracy. Second, by applying the transformation into the shot duration/activity feature space to a database of movie clips, we also illustrate how the Bayesian model captures semantic properties of the content. We suggest ways in which these properties can be used as a basis for intuitive content-based access to movie libraries.

  19. Frequency of PSV inspection optimization using statistical data analysis

    Directory of Open Access Journals (Sweden)

    Alexandre Guimarães Botelho

    2015-12-01

    Full Text Available The present paper shows how qualitative analytical methodologies can be enhanced by statistical failure-data analysis of process equipment in order to select an appropriate and cost-effective maintenance policy and reduce equipment life-cycle cost. As such, a case study was carried out with failure and maintenance data from a sample of pressure safety valves (PSVs) of a PETROBRAS oil and gas production unit. Data were classified according to a failure mode and effects analysis (FMEA) and fitted using a Weibull distribution. The results show the possibility of reducing maintenance frequency, representing a 29% reduction in events without increasing risk, as well as evidencing the potential failures which must be blocked by the inspection plan.
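
    The Weibull adjustment step can be sketched on hypothetical failure data (the actual PSV data and FMEA failure-mode classes are not reproduced here):

```python
import numpy as np
from scipy import stats

# Hypothetical times-to-failure (months) for one FMEA failure mode
ttf = np.array([14, 22, 27, 31, 36, 41, 48, 55, 60, 72], dtype=float)

# Two-parameter Weibull fit (location fixed at zero)
shape, loc, scale = stats.weibull_min.fit(ttf, floc=0)
print(f"beta (shape) = {shape:.2f}, eta (scale) = {scale:.1f} months")

# Reliability at a candidate inspection interval t
t = 24.0
R = np.exp(-((t / scale) ** shape))
print(f"probability of surviving {t:.0f} months without failure: {R:.2f}")
```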

  20. Supermarket Analysis Based On Product Discount and Statistics

    Directory of Open Access Journals (Sweden)

    Komal Kumawat

    2014-03-01

    Full Text Available E-commerce has been growing rapidly. Its domain can provide all the right ingredients for successful data mining, and it is a significant domain of data mining. E-commerce refers to the buying and selling of products or services over electronic systems such as the internet. Various e-commerce systems give discounts on products and allow users to buy products online. The basic idea used here is to predict product sales based on the discount applied to the product. Our analysis concentrates on how customers behave when a discount is offered to them. We have developed a model which finds customer behaviour when a discount is applied to a product. This paper elaborates upon how techniques such as sessions and click streams are used to collect user data online based on the discount applied to the product, and how statistics are applied to the data set to see the variation in the data.

  1. Higher order statistical moment application for solar PV potential analysis

    Science.gov (United States)

    Basri, Mohd Juhari Mat; Abdullah, Samizee; Azrulhisham, Engku Ahmad; Harun, Khairulezuan

    2016-10-01

    Solar photovoltaic energy could serve as an alternative to fossil fuels, which are depleting and pose a global warming problem. However, this renewable energy source is variable and intermittent, and cannot be relied on directly. Knowledge of the energy potential of a site is therefore very important before building a solar photovoltaic power generation system. Here, the application of a higher-order statistical moment model is analyzed using data collected from a 5 MW grid-connected photovoltaic system. Due to the dynamic changes in the skewness and kurtosis of the AC power and solar irradiance distributions of the solar farm, the Pearson system, in which a probability distribution is selected by matching its theoretical moments to the empirical moments of the data, could be suitable for this purpose. Taking advantage of the Pearson system in MATLAB, software has been developed to help with data processing, distribution fitting, and potential analysis for future projection of the amount of AC power and solar irradiance availability.
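
    The moment-matching idea can be sketched by computing β1 and β2 from the data and evaluating Pearson's κ criterion, which selects the family type (synthetic data; the MATLAB toolchain described in the paper is not reproduced):

```python
import numpy as np
from scipy import stats

def pearson_kappa(x):
    """Pearson's type-selection criterion from sample skewness and kurtosis."""
    b1 = stats.skew(x) ** 2                     # beta_1 = skewness^2
    b2 = stats.kurtosis(x, fisher=False)        # beta_2 = non-excess kurtosis
    return b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))

# Hypothetical hourly AC power output of a PV plant (kW), right-skewed
rng = np.random.default_rng(6)
power = rng.lognormal(mean=6.0, sigma=0.5, size=2000)

k = pearson_kappa(power)
# kappa < 0 -> Type I; 0 < kappa < 1 -> Type IV; kappa > 1 -> Type VI;
# the criterion is unstable near the Type III/V boundary lines
print(f"kappa = {k:.2f}")
```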

  2. Statistical analysis of $k$-nearest neighbor collaborative recommendation

    CERN Document Server

    Biau, Gérard; Rouvière, Laurent; 10.1214/09-AOS759

    2010-01-01

    Collaborative recommendation is an information-filtering technique that attempts to present information items that are likely of interest to an Internet user. Traditionally, collaborative systems deal with situations with two types of variables, users and items. In its most common form, the problem is framed as trying to estimate ratings for items that have not yet been consumed by a user. Despite wide-ranging literature, little is known about the statistical properties of recommendation systems. In fact, no clear probabilistic model even exists which would allow us to precisely describe the mathematical forces driving collaborative filtering. To provide an initial contribution to this, we propose to set out a general sequential stochastic model for collaborative recommendation. We offer an in-depth analysis of the so-called cosine-type nearest neighbor collaborative method, which is one of the most widely used algorithms in collaborative filtering, and analyze its asymptotic performance as the number of user...
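
    A minimal sketch of cosine-type nearest-neighbor rating prediction (a toy rating matrix; practical systems add rating centering, shrinkage, and efficient neighbor search):

```python
import numpy as np

# Toy user-item rating matrix (0 = not yet rated)
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def predict(R, user, item, k=2):
    """Predict R[user, item] from the k most similar users who rated the item."""
    rated = R[:, item] > 0
    rated[user] = False
    norms = np.linalg.norm(R, axis=1) * np.linalg.norm(R[user])
    sims = R @ R[user] / np.where(norms > 0, norms, 1.0)   # cosine similarities
    neighbors = np.argsort(-sims * rated)[:k]              # top-k raters
    w = sims[neighbors]
    return float(w @ R[neighbors, item] / w.sum())

print(predict(R, user=3, item=0))
```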

  3. Statistical uncertainty analysis of radon transport in nonisothermal, unsaturated soils

    Energy Technology Data Exchange (ETDEWEB)

    Holford, D.J.; Owczarski, P.C.; Gee, G.W.; Freeman, H.D.

    1990-10-01

    To accurately predict radon fluxes from soils to the atmosphere, we must know more than the radium content of the soil. Radon flux from soil is affected not only by soil properties, but also by meteorological factors such as air pressure and temperature changes at the soil surface, as well as the infiltration of rainwater. Natural variations in meteorological factors and soil properties contribute to uncertainty in subsurface model predictions of radon flux, which, when coupled with a building transport model, will also add uncertainty to predictions of radon concentrations in homes. A statistical uncertainty analysis using our Rn3D finite-element numerical model was conducted to assess the relative importance of these meteorological factors and the soil properties affecting radon transport. 10 refs., 10 figs., 3 tabs.

  4. A Statistical Analysis of Cointegration for I(2) Variables

    DEFF Research Database (Denmark)

    Johansen, Søren

    1995-01-01

    This paper discusses inference for I(2) variables in a VAR model. The estimation procedure suggested consists of two reduced rank regressions. The asymptotic distribution of the proposed estimators of the cointegrating coefficients is mixed Gaussian, which implies that asymptotic inference can be conducted using the χ² distribution. It is shown to what extent inference on the cointegration ranks can be conducted using the tables already prepared for the analysis of cointegration of I(1) variables. New tables are needed for the test statistics to control the size of the tests. This paper contains a multivariate test for the existence of I(2) variables. This test is illustrated using a data set consisting of U.K. and foreign prices and interest rates, as well as the exchange rate.

  5. Identification of Chemical Attribution Signatures of Fentanyl Syntheses Using Multivariate Statistical Analysis of Orthogonal Analytical Data

    Energy Technology Data Exchange (ETDEWEB)

    Mayer, B. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mew, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); DeHope, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Spackman, P. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Williams, A. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-24

    Attribution of the origin of an illicit drug relies on identification of compounds indicative of its clandestine production and is a key component of many modern forensic investigations. The results of these studies can yield detailed information on the method of manufacture, starting material source, and final product - all critical forensic evidence. In the present work, chemical attribution signatures (CAS) associated with the synthesis of the analgesic fentanyl, N-(1-(2-phenylethyl)piperidin-4-yl)-N-phenylpropanamide, were investigated. Six synthesis methods, all previously published fentanyl synthetic routes or hybrid versions thereof, were studied in an effort to identify and classify route-specific signatures. 160 distinct compounds and inorganic species were identified using gas and liquid chromatographies combined with mass spectrometric methods (GC-MS and LC-MS/MS-TOF) in conjunction with inductively coupled plasma mass spectrometry (ICP-MS). The complexity of the resultant data matrix urged the use of multivariate statistical analysis. Using partial least squares discriminant analysis (PLS-DA), 87 route-specific CAS were classified and a statistical model capable of predicting the method of fentanyl synthesis was validated and tested against CAS profiles from crude fentanyl products deposited on and later extracted from two operationally relevant surfaces: stainless steel and vinyl tile. This work provides the most detailed fentanyl CAS investigation to date by using orthogonal mass spectral data to identify CAS of forensic significance for illicit drug detection, profiling, and attribution.
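
    For readers unfamiliar with PLS-DA, the classification step can be schematized as below, assuming scikit-learn (the synthetic 60 x 87 matrix merely mimics the shape of a CAS data set; none of the LLNL data or preprocessing is reproduced):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.preprocessing import LabelBinarizer

    # Synthetic stand-in: 60 crude products x 87 signature intensities, 3 routes.
    rng = np.random.default_rng(1)
    routes = np.repeat([0, 1, 2], 20)
    X = rng.normal(size=(60, 87)) + routes[:, None] * rng.normal(size=87)

    # PLS-DA = PLS regression onto one-hot class labels; a sample is assigned
    # to the class with the largest predicted response.
    Y = LabelBinarizer().fit_transform(routes)
    pls = PLSRegression(n_components=2).fit(X, Y)
    pred = pls.predict(X).argmax(axis=1)
    print("training accuracy:", (pred == routes).mean())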

  6. Spectral signature verification using statistical analysis and text mining

    Science.gov (United States)

    DeCoster, Mallory E.; Firpi, Alexe H.; Jacobs, Samantha K.; Cone, Shelli R.; Tzeng, Nigel H.; Rodriguez, Benjamin M.

    2016-05-01

    In the spectral science community, numerous spectral signatures are stored in databases representative of many sample materials collected from a variety of spectrometers and spectroscopists. Due to the variety and variability of the spectra that comprise many spectral databases, it is necessary to establish a metric for validating the quality of spectral signatures. This has been an area of great discussion and debate in the spectral science community. This paper discusses a method that independently validates two different aspects of a spectral signature to arrive at a final qualitative assessment: the textual meta-data and the numerical spectral data. Results associated with the spectral data stored in the Signature Database1 (SigDB) are proposed. The numerical data comprising a sample material's spectrum are validated based on statistical properties derived from an ideal population set. The quality of the test spectrum is ranked based on a spectral angle mapper (SAM) comparison to the mean spectrum derived from the population set. Additionally, the contextual data of a test spectrum are qualitatively analyzed using lexical analysis text mining. This technique analyzes the syntax of the meta-data to reveal local learning patterns and trends within the spectral data, indicative of the test spectrum's quality. Text mining applications have successfully been implemented for security2 (text encryption/decryption), biomedical3, and marketing4 applications. The text mining lexical analysis algorithm is trained on the meta-data patterns of a subset of high and low quality spectra, in order to have a model to apply to the entire SigDB data set. The statistical and textual methods combine to assess the quality of a test spectrum existing in a database without the need of an expert user. This method has been compared to other validation methods accepted by the spectral science community, and has provided promising results when a baseline spectral signature is
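
    The spectral angle mapper comparison mentioned above reduces to the angle between two spectra viewed as vectors; a small sketch (the Gaussian "spectra" are invented placeholders):

    import numpy as np

    def spectral_angle(test, reference):
        """Spectral angle mapper (SAM): the angle, in radians, between two
        spectra; smaller angles mean closer agreement in spectral shape."""
        cos = test @ reference / (np.linalg.norm(test) * np.linalg.norm(reference))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    wavelengths = np.linspace(400, 2500, 500)                   # nm
    mean_spectrum = np.exp(-((wavelengths - 1200) / 300) ** 2)  # population mean
    test_spectrum = mean_spectrum + 0.02 * np.random.default_rng(0).normal(size=500)
    print(f"SAM angle: {spectral_angle(test_spectrum, mean_spectrum):.4f} rad")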

  7. Feasibility study for biomass power plants in Thailand. Volume 2. appendix: Detailed financial analysis results. Export trade information

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-06-01

    This study, conducted by Black & Veatch, was funded by the U.S. Trade and Development Agency. The report presents a technical and commercial analysis for the development of three nearly identical electricity generating facilities (biomass steam power plants) in the towns of Chachgoengsao, Suphan Buri, and Pichit in Thailand. Volume 2 of the study contains the following appendix: Detailed Financial Analysis Results.

  8. Detailed analysis of contrast-enhanced MRI of hands and wrists in patients with psoriatic arthritis

    Energy Technology Data Exchange (ETDEWEB)

    Tehranzadeh, Jamshid [University of California, Department of Radiological Sciences, Irvine (United States); University of California Medical Center, Department of Radiological Sciences R-140, Orange, CA (United States); Ashikyan, Oganes; Anavim, Arash; Shin, John [University of California, Department of Radiological Sciences, Irvine (United States)

    2008-05-15

    The objective was to perform a detailed analysis of the involved soft tissues, tendons, joints, and bones in the hands and wrists of patients with psoriatic arthritis (PsA). We reviewed 23 contrast-enhanced MR imaging studies (13 hands and 10 wrists) in 10 patients with the clinical diagnosis of PsA. We obtained clinical information from medical records and evaluated images for the presence of erosions, bone marrow edema, joint synovitis, tenosynovitis, carpal tunnel, and soft tissue involvement. Two board-certified musculoskeletal radiologists reviewed all images independently. Differences were resolved during a subsequent joint session. The average duration of disease was 71.3 months, ranging from 1 month to 25 years. Eight of the 10 wrists (80%) and 6 of the 13 hands demonstrated bone erosions. Bone marrow abnormalities were shown in 5 of the 10 wrists (50%) and 4 of the 13 hands (31%). Triangular fibrocartilage tears were seen in 6 of the 10 wrists (60%). Wrist and hand joint synovitis were present in all studies (67 wrist joints and 101 hand joints). Wrist soft tissue involvement was detected in 9 of the 10 wrists (90%) and hand soft tissue involvement was present in 12 of the 13 hands (92%). Findings adjacent to the region of soft tissue involvement included synovitis (4 wrists) and tenosynovitis (3 wrists). Bone marrow edema adjacent to the region of soft tissue involvement was seen in one wrist. Bulging of the flexor retinaculum was seen in 4 of the 10 wrists (40%) and median nerve enhancement was seen in 8 of the 10 wrists (80%). Tenosynovitis was seen in all studies (all 13 hands and all 10 wrists). The 'rheumatoid' type of distribution of bony lesions was common in our study. Interobserver agreement for the various findings ranged from 83% to 100%. Contrast-enhanced MRI unequivocally demonstrated bone marrow edema, erosions, tendon and soft-tissue disease, and median nerve involvement, with good interobserver reliability, in patients with PsA.

  9. Detailed budget analysis of HONO in central London reveals a missing daytime source

    Directory of Open Access Journals (Sweden)

    J. D. Lee

    2015-08-01

    Full Text Available Measurements of HONO were carried out at an urban background site near central London as part of the Clean air for London (ClearfLo) project in summer 2012. Data were collected from 22 July to 18 August 2012, with peak values of up to 1.8 ppbV at night and non-zero values of between 0.2 and 0.6 ppbV seen during the day. A wide range of other gas phase, aerosol, radiation and meteorological measurements were made concurrently at the same site, allowing a detailed analysis of the chemistry to be carried out. The peak HONO/NOx ratio of 0.04 is seen at ~ 02:00 UTC, with the presence of a second, daytime peak in HONO/NOx of similar magnitude to the night-time peak suggesting a significant secondary daytime HONO source. A photostationary state calculation of HONO involving formation from the reaction of OH and NO and loss from photolysis, reaction with OH and dry deposition shows a significant underestimation during the day, with calculated values being close to zero, compared to the measurement average of 0.4 ppbV at midday. The addition of further HONO sources, including postulated formation from the reaction of HO2 with NO2 and photolysis of HNO3, increases the daytime modelled HONO to 0.1 ppbV, still leaving a significant extra daytime source. The missing HONO is plotted against a series of parameters, including NO2 and OH reactivity, with little correlation seen. Much better correlation is observed with the product of these species with j(NO2), in particular NO2 and the product of NO2 with OH reactivity. This suggests the missing HONO source is in some way related to NO2 and also requires sunlight. The contribution of the missing HONO to OH radical production is also investigated, and it is shown that the model needs to be constrained to measured HONO in order to accurately reproduce the OH radical measurements.
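
    For reference, the photostationary-state calculation described above balances the OH + NO source against photolysis, reaction with OH, and dry deposition; in the usual formulation (standard symbols, not values taken from the record):

    [\mathrm{HONO}]_{\mathrm{pss}} =
      \frac{k_{\mathrm{OH+NO}}\,[\mathrm{OH}]\,[\mathrm{NO}]}
           {j(\mathrm{HONO}) + k_{\mathrm{OH+HONO}}\,[\mathrm{OH}] + v_d/h}

    where j(HONO) is the HONO photolysis frequency, v_d the dry deposition velocity, and h the mixing height; the measured daytime excess over this value is what the record attributes to a missing source.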

  10. Microcomputers: Statistical Analysis Software. Evaluation Guide Number 5.

    Science.gov (United States)

    Gray, Peter J.

    This guide discusses six sets of features to examine when purchasing a microcomputer-based statistics program: hardware requirements; data management; data processing; statistical procedures; printing; and documentation. While the current statistical packages have several negative features, they are cost-saving and convenient for small to moderate…

  11. Application of Integration of Spatial Statistical Analysis with GIS to Regional Economic Analysis

    Institute of Scientific and Technical Information of China (English)

    CHEN Fei; DU Daosheng

    2004-01-01

    This paper summarizes a few spatial statistical analysis methods for measuring spatial autocorrelation and spatial association, and discusses the criteria for the identification of spatial association through the use of the global Moran coefficient, local Moran, and local Geary statistics. Furthermore, a user-friendly statistical module, combining spatial statistical analysis methods with GIS visualization techniques, is developed in Arcview using Avenue. An example is also given to show the usefulness of this module in identifying and quantifying the underlying spatial association patterns between economic units.
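
    For orientation, the global Moran coefficient named above can be computed directly from a variable and a spatial-weights matrix; a small Python sketch (the four-region example and rook-contiguity weights are invented):

    import numpy as np

    def morans_i(values, W):
        """Global Moran coefficient I for values on n spatial units, with
        W[i, j] > 0 when units i and j are neighbours; I > 0 indicates
        positive spatial autocorrelation."""
        n = values.size
        z = values - values.mean()
        return float(n * (z @ W @ z) / (W.sum() * (z @ z)))

    gdp = np.array([1.0, 1.2, 3.1, 3.0])        # four regions along a line
    W = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)   # rook contiguity (assumed)
    print(f"Moran's I = {morans_i(gdp, W):.3f}")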

  12. Statistical Analysis of Tank 5 Floor Sample Results

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.

    2013-01-31

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements

  13. STATISTICAL ANALYSIS OF TANK 5 FLOOR SAMPLE RESULTS

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E.

    2012-03-14

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, radionuclide, inorganic, and anion concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements above their

  14. Statistical Analysis Of Tank 5 Floor Sample Results

    Energy Technology Data Exchange (ETDEWEB)

    Shine, E. P.

    2012-08-01

    Sampling has been completed for the characterization of the residual material on the floor of Tank 5 in the F-Area Tank Farm at the Savannah River Site (SRS), near Aiken, SC. The sampling was performed by Savannah River Remediation (SRR) LLC using a stratified random sampling plan with volume-proportional compositing. The plan consisted of partitioning the residual material on the floor of Tank 5 into three non-overlapping strata: two strata enclosed accumulations, and a third stratum consisted of a thin layer of material outside the regions of the two accumulations. Each of three composite samples was constructed from five primary sample locations of residual material on the floor of Tank 5. Three of the primary samples were obtained from the stratum containing the thin layer of material, and one primary sample was obtained from each of the two strata containing an accumulation. This report documents the statistical analyses of the analytical results for the composite samples. The objective of the analysis is to determine the mean concentrations and upper 95% confidence (UCL95) bounds for the mean concentrations for a set of analytes in the tank residuals. The statistical procedures employed in the analyses were consistent with the Environmental Protection Agency (EPA) technical guidance by Singh and others [2010]. Savannah River National Laboratory (SRNL) measured the sample bulk density, nonvolatile beta, gross alpha, and the radionuclide, elemental, and chemical concentrations three times for each of the composite samples. The analyte concentration data were partitioned into three separate groups for further analysis: analytes with every measurement above their minimum detectable concentrations (MDCs), analytes with no measurements above their MDCs, and analytes with a mixture of some measurement results above and below their MDCs. The means, standard deviations, and UCL95s were computed for the analytes in the two groups that had at least some measurements
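
    For orientation, a one-sided, t-based UCL95 on a mean, one of the simpler limits in the family of methods covered by the EPA guidance these reports cite, can be sketched as follows (the nine concentration values are invented, not Tank 5 results, and the t-UCL assumes approximate normality):

    import numpy as np
    from scipy import stats

    # Triplicate measurements of one analyte in three composite samples.
    conc = np.array([0.82, 0.88, 0.79, 0.91, 0.85, 0.87, 0.80, 0.84, 0.86])

    n, mean, sd = conc.size, conc.mean(), conc.std(ddof=1)
    ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * sd / np.sqrt(n)
    print(f"mean = {mean:.3f}, UCL95 = {ucl95:.3f}")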

  15. RFI detection by automated feature extraction and statistical analysis

    Science.gov (United States)

    Winkel, B.; Kerp, J.; Stanko, S.

    2007-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength, RFI signals can be identified down to the 4σ_rms level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) the mean signal strength is comparable to the astronomical line emission of the Milky Way, (2) interferences are polarised, (3) electronic devices in the neighbourhood of the telescope contribute significantly to the RFI radiation. We also show that the radiometer equation is no longer fulfilled in the presence of RFI signals.
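
    The flagging idea, baseline removal in the time-frequency plane followed by a sigma threshold, can be caricatured as below (the toy dynamic spectrum and the crude per-axis median "fit" are stand-ins; the record's software performs a genuine two-dimensional baseline fit):

    import numpy as np

    # Toy dynamic spectrum: (time, frequency) power with a smooth baseline,
    # Gaussian noise, and one injected RFI spike.
    rng = np.random.default_rng(7)
    t = np.arange(200)[:, None]
    f = np.arange(512)[None, :]
    spec = 10 + 0.01 * t + 0.002 * f + rng.normal(scale=0.5, size=(200, 512))
    spec[120, 300] += 10.0                       # RFI event

    # Crude separable baseline estimate from per-axis medians.
    col = np.median(spec, axis=0, keepdims=True)
    row = np.median(spec - col, axis=1, keepdims=True)
    resid = spec - (col + row)

    # Flag samples above 4 sigma_rms; a handful of noise-only false positives
    # is expected at this threshold over ~10^5 samples.
    mask = resid > 4 * resid.std()
    print("flagged (time, channel) samples:\n", np.argwhere(mask))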

  16. RFI detection by automated feature extraction and statistical analysis

    CERN Document Server

    Winkel, B; Stanko, S; Winkel, Benjamin; Kerp, Juergen; Stanko, Stephan

    2006-01-01

    In this paper we present an interference detection toolbox consisting of a high dynamic range Digital Fast-Fourier-Transform spectrometer (DFFT, based on FPGA technology) and data analysis software for automated radio frequency interference (RFI) detection. The DFFT spectrometer allows high speed data storage of spectra on time scales of less than a second. The high dynamic range of the device assures constant calibration even during extremely powerful RFI events. The software uses an algorithm which performs a two-dimensional baseline fit in the time-frequency domain, searching automatically for RFI signals superposed on the spectral data. We demonstrate that the software operates successfully on computer-generated RFI data as well as on real DFFT data recorded at the Effelsberg 100-m telescope. At 21-cm wavelength, RFI signals can be identified down to the 4-sigma level. A statistical analysis of all RFI events detected in our observational data revealed that: (1) mean signal strength is comparable to the a...

  17. Statistical analysis of plasma thermograms measured by differential scanning calorimetry.

    Science.gov (United States)

    Fish, Daniel J; Brewood, Greg P; Kim, Jong Sung; Garbett, Nichola C; Chaires, Jonathan B; Benight, Albert S

    2010-11-01

    Melting curves of human plasma measured by differential scanning calorimetry (DSC), known as thermograms, have the potential to markedly impact the diagnosis of human diseases. A general statistical methodology is developed to analyze and classify DSC thermograms. Analysis of an acquired thermogram involves comparison with a database of empirical reference thermograms from clinically characterized diseases. Two parameters, a distance metric, P, and a correlation coefficient, r, are combined to produce a 'similarity metric,' ρ, which can be used to classify unknown thermograms into pre-characterized categories. Simulated thermograms known to lie within or fall outside of the 90% quantile range around a median reference are also analyzed. Results verify the utility of the methods and establish the apparent dynamic range of the metric ρ. The methods are then applied to data obtained from a collection of plasma samples from patients clinically diagnosed with SLE (lupus). High correspondence is found between curve shapes and values of the metric ρ. In a final application, an elementary classification rule is implemented to successfully analyze and classify unlabeled thermograms. These methods constitute a set of powerful yet easy-to-implement tools for quantitative classification, analysis, and interpretation of DSC plasma melting curves.
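
    One plausible way to combine a distance metric P and a correlation coefficient r into a single similarity score, in the spirit of ρ above (the exact combination and the synthetic two-peak thermograms below are assumptions of the sketch, not the authors' definition):

    import numpy as np

    def rho(test, reference):
        """Blend a distance-based closeness term with the Pearson correlation
        of the two curves; values near 1 indicate a close match."""
        P = np.linalg.norm(test - reference)        # distance metric
        r = np.corrcoef(test, reference)[0, 1]      # shape correlation
        return 0.5 * (1.0 / (1.0 + P) + r)

    temp = np.linspace(45, 90, 200)                 # temperature axis (deg C)
    reference = np.exp(-((temp - 63) / 4) ** 2) + 0.8 * np.exp(-((temp - 70) / 3) ** 2)
    test = reference + 0.05 * np.random.default_rng(2).normal(size=temp.size)
    print(f"rho = {rho(test, reference):.3f}")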

  18. A statistical design for testing apomictic diversification through linkage analysis.

    Science.gov (United States)

    Zeng, Yanru; Hou, Wei; Song, Shuang; Feng, Sisi; Shen, Lin; Xia, Guohua; Wu, Rongling

    2014-03-01

    The capacity of apomixis to generate maternal clones through seed reproduction has made it a useful characteristic for the fixation of heterosis in plant breeding. It has been observed that apomixis displays pronounced intra- and interspecific diversification, but the genetic mechanisms underlying this diversification remain elusive, obstructing the exploitation of this phenomenon in practical breeding programs. By capitalizing on molecular information in mapping populations, we describe and assess a statistical design that deploys linkage analysis to estimate and test the pattern and extent of apomictic differences at various levels from genotypes to species. The design is based on two reciprocal crosses between two individuals each chosen from a hermaphrodite or monoecious species. A multinomial distribution likelihood is constructed by combining marker information from the two crosses. The EM algorithm is implemented to estimate the rate of apomixis and test its difference between the two plant populations or species as the parents. The design is validated by computer simulation. A real data analysis of two reciprocal crosses between hickory (Carya cathayensis) and pecan (C. illinoensis) demonstrates the utilization and usefulness of the design in practice. The design provides a tool to address fundamental and applied questions related to the evolution and breeding of apomixis.

  19. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  20. A Morphological and Statistical Analysis of Ansae in Barred Galaxies

    CERN Document Server

    Martinez-Valpuesta, I; Buta, R

    2007-01-01

    Many barred galaxies show a set of symmetric enhancements at the ends of the stellar bar, called ansae, or the "handles" of the bar. Ansa bars have been in the literature for some decades, but their origin has still not been specifically addressed, although they could be related to the growth process of bars. Even though ansae have been known for a long time, no statistical analysis of their relative frequency of occurrence has been performed yet. Similarly, there has been no study of the varieties in morphology of ansae, even though significant morphological variations are known to characterise the features. In this paper, we make a quantitative analysis of the occurrence of ansae in barred galaxies, making use of The de Vaucouleurs Atlas of Galaxies by Buta and coworkers. We find that ~40% of SB0's show ansae in their bars, thus confirming that ansae are common features in barred lenticulars. The ansa frequency decreases dramatically with later types, and hardly any ansae are found...

  1. Statistical analysis of the operating parameters which affect cupola emissions

    Energy Technology Data Exchange (ETDEWEB)

    Davis, J.W.; Draper, A.B.

    1977-12-01

    A sampling program was undertaken to determine the operating parameters which affect air pollution emissions from gray iron foundry cupolas. The experimental design utilized the analysis of variance routine. Four independent variables were selected for examination on the basis of previous work reported in the literature. These were: (1) blast rate; (2) iron-coke ratio; (3) blast temperature; and (4) cupola size. The last variable was chosen since it most directly affects melt rate. The cupola emissions for which concern has been expressed are particulate matter and carbon monoxide. The dependent variables were, therefore, particulate loading, particle size distribution, and carbon monoxide concentration. Seven production foundries were visited and samples taken under conditions prescribed by the experimental plan. The data obtained from these tests were analyzed using the analysis of variance and other statistical techniques where applicable. The results indicated that blast rate, blast temperature, and cupola size affected particulate emissions and the latter two also affected the particle size distribution. The particle size information was also unique in that it showed a consistent particle size distribution at all seven foundries, with a sizable fraction of the particles less than 1.0 micrometers in diameter.

  2. Criminal victimization in Ukraine: analysis of statistical data

    Directory of Open Access Journals (Sweden)

    Serhiy Nezhurbida

    2007-12-01

    Full Text Available The article is based on the analysis of statistical data provided by law-enforcement, judicial, and other bodies of Ukraine. The analysis gives an accurate quantitative picture of the current status of criminal victimization in Ukraine and characterizes its basic features (level, rate, structure, dynamics, etc.).

  3. Higher order statistical frequency domain decomposition for operational modal analysis

    Science.gov (United States)

    Nita, G. M.; Mahgoub, M. A.; Sharyatpanahi, S. G.; Cretu, N. C.; El-Fouly, T. M.

    2017-02-01

    Experimental methods based on modal analysis under ambient vibrational excitation are often employed to detect structural damage in mechanical systems. Many such frequency domain methods, such as Basic Frequency Domain (BFD), Frequency Domain Decomposition (FDD), or Enhanced Frequency Domain Decomposition (EFDD), use as a first step a Fast Fourier Transform (FFT) estimate of the power spectral density (PSD) associated with the response of the system. In this study it is shown that higher order statistical estimators such as Spectral Kurtosis (SK) and Sample to Model Ratio (SMR) may be successfully employed not only to more reliably discriminate the response of the system against the ambient noise fluctuations, but also to better identify and separate contributions from closely spaced individual modes. It is shown that an SMR-based Maximum Likelihood curve-fitting algorithm may improve the accuracy of the spectral shape and location of the individual modes and, when combined with the SK analysis, provides efficient means to categorize such individual spectral components according to their temporal dynamics as coherent or incoherent system responses to unknown ambient excitations.
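
    A compact illustration of one common spectral kurtosis estimator (the 1 kHz sampling rate, block length, and injected "mode" are invented for the demo):

    import numpy as np

    def spectral_kurtosis(x, nfft=256):
        """SK(f) = <|X|^4> / <|X|^2>^2 - 2 over many short FFT blocks; ~0 in
        bins dominated by stationary Gaussian noise, deviating from 0 where
        coherent components (e.g. structural modes) or transients sit."""
        nblocks = x.size // nfft
        X = np.fft.rfft(x[: nblocks * nfft].reshape(nblocks, nfft), axis=1)
        p2 = np.mean(np.abs(X) ** 2, axis=0)
        p4 = np.mean(np.abs(X) ** 4, axis=0)
        return p4 / p2**2 - 2.0

    rng = np.random.default_rng(3)
    t = np.arange(200_000) / 1000.0                     # 1 kHz sampling
    x = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 120 * t)
    sk = spectral_kurtosis(x)
    peak = np.argmax(np.abs(sk[1:-1])) + 1              # skip DC/Nyquist bins
    print(f"strongest non-Gaussian bin: {peak} (~{peak * 1000 / 256:.0f} Hz)")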

  4. Statistical analysis of emotions and opinions at Digg website

    CERN Document Server

    Pohorecki, Piotr; Mitrovic, Marija; Paltoglou, Georgios; Holyst, Janusz A

    2012-01-01

    We performed a statistical analysis on data from the Digg.com website, which enables its users to express their opinion on news stories by taking part in forum-like discussions as well as directly evaluate previous posts and stories by assigning so-called "diggs". Owing to the fact that the content of each post has been annotated with its emotional value, apart from the strictly structural properties, the study also includes an analysis of the average emotional response of the posts commenting on the main story. While analysing correlations at the story level, an interesting relationship between the number of diggs and the number of comments received by a story was found. The correlation between the two quantities is high for data where small threads dominate and consistently decreases for longer threads. However, while the correlation of the number of diggs and the average emotional response tends to grow for longer threads, correlations between the numbers of comments and the average emotional response are almost zero. ...

  5. Statistical Power Flow Analysis of an Imperfect Ribbed Cylinder

    Science.gov (United States)

    Blakemore, M.; Woodhouse, J.; Hardie, D. J. W.

    1999-05-01

    Prediction of the noise transmitted from machinery and flow sources on a submarine to the sonar arrays poses a complex problem. Vibrations in the pressure hull provide the main transmission mechanism. The pressure hull is characterised by a very large number of modes over the frequency range of interest (at least 100,000) and by high modal overlap, both of which place its analysis beyond the scope of finite element or boundary element methods. A method for calculating the transmission is presented, which is broadly based on Statistical Energy Analysis, but extended in two important ways: (1) a novel subsystem breakdown which exploits the particular geometry of a submarine pressure hull; (2) explicit modelling of energy density variation within a subsystem due to damping. The method takes account of fluid-structure interaction, the underlying pass/stop band characteristics resulting from the near-periodicity of the pressure hull construction, the effect of vibration isolators such as bulkheads, and the cumulative effect of irregularities (e.g., attachments and penetrations).

  6. Data Analysis Details (DS): SE56_DS01 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE56_DS01 Profiling by METALIGN and MS2T-based peak annotation The data matrix was ...in-house software written in Perl/Tk (‘N toolbox', Appendix S3). Detailed methods for processing and interpr

  7. Detailed Analysis of the Genetic and Epigenetic Signatures of iPSC-Derived Mesodiencephalic Dopaminergic Neurons

    NARCIS (Netherlands)

    Roessler, Reinhard; Smallwood, Sebastien A.; Veenvliet, Jesse V.; Pechlivanoglou, Petros; Peng, Su-Ping; Chakrabarty, Koushik; Groot-Koerkamp, Marian J. A.; Pasterkamp, R. Jeroen; Wesseling, Evelyn; Kelsey, Gavin; Boddeke, Erik; Smidt, Marten P.; Copray, Sjef

    2014-01-01

    Induced pluripotent stem cells (iPSCs) hold great promise for in vitro generation of disease-relevant cell types, such as mesodiencephalic dopaminergic (mdDA) neurons involved in Parkinson's disease. Although iPSC-derived midbrain DA neurons have been generated, detailed genetic and epigenetic

  8. A detailed cost analysis of in vitro fertilization and intracytoplasmic sperm injection treatment.

    NARCIS (Netherlands)

    Bouwmans, C.A.; Lintsen, B.M.; Eijkemans, M.J.; Habbema, J.D.; Braat, D.D.M.; Hakkaart, L.

    2008-01-01

    OBJECTIVE: To provide detailed information about the costs of in vitro fertilization (IVF) and intracytoplasmic sperm injection (ICSI) treatment stages and to estimate the cost per IVF and ICSI treatment cycle and ongoing pregnancy. DESIGN: Descriptive micro-costing study. SETTING: Four Dutch IVF centers

  10. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    OpenAIRE

    Marc Lebrun; Arthur Leclaire

    2012-01-01

    K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.

  11. Comparability of mixed IC₅₀ data - a statistical analysis.

    Science.gov (United States)

    Kalliokoski, Tuomo; Kramer, Christian; Vulpetti, Anna; Gedeck, Peter

    2013-01-01

    The biochemical half maximal inhibitory concentration (IC50) is the most commonly used metric for on-target activity in lead optimization. It is used to guide lead optimization and to build large-scale chemogenomics analyses and off-target activity and toxicity models based on public data. However, the use of public biochemical IC50 data is problematic, because they are assay specific and comparable only under certain conditions. For large-scale analysis it is not feasible to check each data entry manually, and it is very tempting to mix all available IC50 values from public databases even if assay information is not reported. As previously reported for Ki database analysis, we first analyzed the types of errors, the redundancy, and the variability that can be found in the ChEMBL IC50 database. To assess the variability of IC50 data independently measured in two different labs, ChEMBL was searched for protein-ligand systems with at least ten IC50 measurements against the same target. As an insufficient number of cases of this type is available, the variability of IC50 data was instead assessed by comparing all pairs of independent IC50 measurements on identical protein-ligand systems. The standard deviation of IC50 data is only 25% larger than the standard deviation of Ki data, suggesting that mixing IC50 data from different assays, even without knowing the details of the assay conditions, only adds a moderate amount of noise to the overall data. The standard deviation of public ChEMBL IC50 data was, as expected, greater than the standard deviation of in-house intra-laboratory/inter-day IC50 data. Augmenting mixed public IC50 data with public Ki data does not deteriorate the quality of the mixed IC50 data if the Ki is corrected by an offset. For a broad dataset such as the ChEMBL database, a Ki-IC50 conversion factor of 2 was found to be the most reasonable.

  12. Statistical analysis of cone penetration resistance of railway ballast

    Science.gov (United States)

    Saussine, Gilles; Dhemaied, Amine; Delforge, Quentin; Benfeddoul, Selim

    2017-06-01

    Dynamic penetrometer tests are widely used in geotechnical studies for soil characterization, but their implementation tends to be difficult. The light penetrometer test is able to give information about a cone resistance useful in the field of geotechnics and recently validated as a parameter for the case of coarse granular materials. In order to characterize directly the railway ballast on track and the sublayers of ballast, a huge test campaign has been carried out over more than 5 years to build up a database composed of 19,000 penetration tests, including endoscopic video records, on the French railway network. The main objective of this work is to give a first statistical analysis of cone resistance in the coarse granular layer which represents a major component of the railway track: the ballast. The results show that the cone resistance (qd) increases with depth and presents strong variations corresponding to layers of different natures, identified using the endoscopic records. In the first zone, corresponding to the top 30 cm, (qd) increases linearly with a slope of around 1 MPa/cm for fresh ballast and fouled ballast. In the second zone, below 30 cm depth, (qd) increases more slowly, with a slope of around 0.3 MPa/cm, and decreases below 50 cm. These results show that there is no clear difference between fresh and fouled ballast. The spread of (qd) is, however, large and increases with depth. The (qd) distribution for a set of tests does not follow a normal distribution. In the upper 30 cm layer of the ballast of the track, statistical treatment of the data shows that train load and speed do not have any significant impact on the (qd) distribution for clean ballast; they increase by 50% the average value of (qd) for fouled ballast and increase its thickness as well. Below the 30 cm upper layer, train load and speed have a clear impact on the (qd) distribution.

  13. RNA STRAND: The RNA Secondary Structure and Statistical Analysis Database

    Directory of Open Access Journals (Sweden)

    Andronescu Mirela

    2008-08-01

    Full Text Available Abstract Background The ability to access, search and analyse secondary structures of a large set of known RNA molecules is very important for deriving improved RNA energy models, for evaluating computational predictions of RNA secondary structures and for a better understanding of RNA folding. Currently there is no database that can easily provide these capabilities for almost all RNA molecules with known secondary structures. Results In this paper we describe RNA STRAND – the RNA secondary STRucture and statistical ANalysis Database, a curated database containing known secondary structures of any type and organism. Our new database provides a wide collection of known RNA secondary structures drawn from public databases, searchable and downloadable in a common format. Comprehensive statistical information on the secondary structures in our database is provided using the RNA Secondary Structure Analyser, a new tool we have developed to analyse RNA secondary structures. The information thus obtained is valuable for understanding to which extent and with which probability certain structural motifs can appear. We outline several ways in which the data provided in RNA STRAND can facilitate research on RNA structure, including the improvement of RNA energy models and evaluation of secondary structure prediction programs. In order to keep up-to-date with new RNA secondary structure experiments, we offer the necessary tools to add solved RNA secondary structures to our database and invite researchers to contribute to RNA STRAND. Conclusion RNA STRAND is a carefully assembled database of trusted RNA secondary structures, with easy on-line tools for searching, analyzing and downloading user selected entries, and is publicly available at http://www.rnasoft.ca/strand.

  14. Analysis of Statistical Distributions of Energization Overvoltages of EHV Cables

    DEFF Research Database (Denmark)

    Ohno, Teruo; Ametani, Akihiro; Bak, Claus Leth

    Insulation levels of EHV systems have been determined based on the statistical distribution of switching overvoltages since the 1970s, when the statistical distribution was found for overhead lines. Responding to an increase in planned and installed EHV cables, the authors have derived the statistical distribution of energization overvoltages for EHV cables and have clarified their characteristics compared with those of overhead lines. This paper identifies the causes and physical meanings of these characteristics so that it becomes possible to use the obtained statistical distribution for the determination of insulation levels of cable systems.

  18. Statistical Analysis of Galaxy Surveys - I. Robust error estimation for 2-point clustering statistics

    CERN Document Server

    Norberg, Peder; Gaztanaga, Enrique; Croton, Darren J

    2008-01-01

    We present a test of different error estimators for 2-point clustering statistics, appropriate for present and future large galaxy redshift surveys. Using an ensemble of very large dark matter LambdaCDM N-body simulations, we compare internal error estimators (jackknife and bootstrap) to external ones (Monte-Carlo realizations). For 3-dimensional clustering statistics, we find that none of the internal error methods investigated is able to reproduce, either accurately or robustly, the errors of external estimators on 1 to 25 Mpc/h scales. The standard bootstrap overestimates the variance of xi(s) by ~40% on all scales probed, but recovers, in a robust fashion, the principal eigenvectors of the underlying covariance matrix. The jackknife returns the correct variance on large scales, but significantly overestimates it on smaller scales. This scale dependence in the jackknife affects the recovered eigenvectors, which tend to disagree on small scales with the external estimates. Our results have important implic...
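
    For concreteness, the delete-one jackknife variance used for such 2-point statistics is simple to state in code (the 20 leave-one-region-out estimates below are synthetic, not survey measurements):

    import numpy as np

    def jackknife_variance(estimates):
        """Delete-one jackknife variance of a statistic, given the N
        'leave one sub-volume out' estimates, applied per separation bin."""
        n = estimates.shape[0]
        mean = estimates.mean(axis=0)
        return (n - 1) / n * ((estimates - mean) ** 2).sum(axis=0)

    rng = np.random.default_rng(4)
    xi_true = 1.0 / np.linspace(1, 25, 10)          # made-up clustering signal
    xi_jk = xi_true + 0.05 * rng.normal(size=(20, 10))
    sigma = np.sqrt(jackknife_variance(xi_jk))
    print(np.round(sigma, 3))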

  19. Statistical Analysis of Data with Non-Detectable Values

    Energy Technology Data Exchange (ETDEWEB)

    Frome, E.L.

    2004-08-26

    Environmental exposure measurements are, in general, positive and may be subject to left censoring, i.e. the measured value is less than a "limit of detection". In occupational monitoring, strategies for assessing workplace exposures typically focus on the mean exposure level or the probability that any measurement exceeds a limit. A basic problem of interest in environmental risk assessment is to determine if the mean concentration of an analyte is less than a prescribed action level. Parametric methods, used to determine acceptable levels of exposure, are often based on a two parameter lognormal distribution. The mean exposure level and/or an upper percentile (e.g. the 95th percentile) are used to characterize exposure levels, and upper confidence limits are needed to describe the uncertainty in these estimates. In certain situations it is of interest to estimate the probability of observing a future (or "missed") value of a lognormal variable. Statistical methods for random samples (without non-detects) from the lognormal distribution are well known for each of these situations. In this report, methods for estimating these quantities based on the maximum likelihood method for randomly left censored lognormal data are described and graphical methods are used to evaluate the lognormal assumption. If the lognormal model is in doubt and an alternative distribution for the exposure profile of a similar exposure group is not available, then nonparametric methods for left censored data are used. The mean exposure level, along with the upper confidence limit, is obtained using the product limit estimate, and the upper confidence limit on the 95th percentile (i.e. the upper tolerance limit) is obtained using a nonparametric approach. All of these methods are well known but computational complexity has limited their use in routine data analysis with left censored data. The recent development of the R environment for statistical
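
    A minimal sketch of the maximum likelihood fit for left-censored lognormal data described above (the eight detected values, detection limit, and censored count are invented):

    import numpy as np
    from scipy import optimize, stats

    obs = np.array([0.9, 1.7, 2.4, 0.6, 3.8, 1.1, 5.2, 0.7])   # detected values
    dl, n_cens = 0.5, 4                                        # limit, '<dl' count

    def neg_loglik(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        z = (np.log(obs) - mu) / sigma
        # Detected values contribute the lognormal density; censored results
        # contribute the probability mass below the detection limit.
        ll_det = stats.norm.logpdf(z).sum() - obs.size * np.log(sigma) - np.log(obs).sum()
        ll_cens = n_cens * stats.norm.logcdf((np.log(dl) - mu) / sigma)
        return -(ll_det + ll_cens)

    mu, sigma = optimize.minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead").x
    print(f"MLE: mu={mu:.3f}, sigma={sigma:.3f}, mean={np.exp(mu + sigma**2 / 2):.3f}")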

  20. TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION

    Directory of Open Access Journals (Sweden)

    А. А. Vershinina

    2014-01-01

    Full Text Available The article presents a technique for the statistical analysis of the investment appeal of a region with respect to foreign direct investment. The technique is defined, the stages of the analysis are described, and the mathematical and statistical tools are considered.

  1. An Implementation and Detailed Analysis of the K-SVD Image Denoising Algorithm

    Directory of Open Access Journals (Sweden)

    Marc Lebrun

    2012-05-01

    Full Text Available K-SVD is a signal representation method which, from a set of signals, can derive a dictionary able to approximate each signal with a sparse combination of the atoms. This paper focuses on the K-SVD-based image denoising algorithm. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.

  2. Detailed Per-residue Energetic Analysis Explains the Driving Force for Microtubule Disassembly

    OpenAIRE

    Ayoub, Ahmed T.; Mariusz Klobukowski; Tuszynski, Jack A

    2015-01-01

    Microtubules are long filamentous hollow cylinders whose surfaces form lattice structures of αβ-tubulin heterodimers. They perform multiple physiological roles in eukaryotic cells and are targets for therapeutic interventions. In our study, we carried out all-atom molecular dynamics simulations for arbitrarily long microtubules that have either GDP or GTP molecules in the E-site of β-tubulin. A detailed energy balance of the MM/GBSA inter-dimer interaction energy per residue contributing to t...

  3. Analysis of Statistical Methods Currently used in Toxicology Journals

    OpenAIRE

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-01-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted based on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described methodologies used to provide descriptive and in...

  4. Analysis of Statistical Distributions of Energization Overvoltages of EHV Cables

    DEFF Research Database (Denmark)

    Ohno, Teruo; Ametani, Akihiro; Bak, Claus Leth;

    Insulation levels of EHV systems have been determined based on the statistical distribution of switching overvoltages since the 1970s, when the statistical distribution was found for overhead lines. Responding to an increase in planned and installed EHV cables, the authors have derived the statistical distribution of energization overvoltages for EHV cables and have clarified their characteristics compared with those of overhead lines. This paper identifies the causes and physical meanings of these characteristics so that it becomes possible to use the obtained statistical distribution for the determination of insulation levels of cable systems.

  5. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

    Full Text Available Statistical models for integrated circuits (ICs) allow us to estimate the percentage of acceptable devices in a batch before fabrication. Currently, Pelgrom's is the statistical model most accepted in the industry; however, it was derived from a micrometer technology, which does not guarantee reliability in nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each of the stages and design purposes.
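
    For context, the Pelgrom model referred to above expresses the mismatch variance of a parameter P between two identically designed devices in its standard form (the record itself does not reproduce the formula):

    \sigma^2(\Delta P) = \frac{A_P^2}{W L} + S_P^2 D^2

    where W and L are the device dimensions, D is the distance between the two devices, and A_P and S_P are process-dependent constants extracted from measurements.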

  6. Significance analysis and statistical mechanics: an application to clustering.

    Science.gov (United States)

    Łuksza, Marta; Lässig, Michael; Berg, Johannes

    2010-11-26

    This Letter addresses the statistical significance of structures in random data: given a set of vectors and a measure of mutual similarity, how likely is it that a subset of these vectors forms a cluster with enhanced similarity among its elements? The computation of this cluster p value for randomly distributed vectors is mapped onto a well-defined problem of statistical mechanics. We solve this problem analytically, establishing a connection between the physics of quenched disorder and multiple-testing statistics in clustering and related problems. In an application to gene expression data, we find a remarkable link between the statistical significance of a cluster and the functional relationships between its genes.

  7. A Statistical Aggregation Engine for Climatology and Trend Analysis

    Science.gov (United States)

    Chapman, D. R.; Simon, T. A.; Halem, M.

    2014-12-01

    Fundamental climate data records (FCDRs) from satellite instruments often span tens to hundreds of terabytes or even petabytes in scale. These large volumes make it difficult to aggregate or summarize their climatology and climate trends. It is especially cumbersome to supply the full derivation (provenance) of these aggregate calculations. We present a lightweight and resilient software platform, Gridderama, that simplifies the calculation of climatology by exploiting the "Data-Cube" topology often present in earth observing satellite records. By using the large array storage (LAS) paradigm, Gridderama allows the analyst to more easily produce a series of aggregate climate data products at progressively coarser spatial and temporal resolutions. Furthermore, provenance tracking and extensive visualization capabilities allow the analyst to track down and correct for data problems such as missing data and outliers that may impact the scientific results. We have developed and applied Gridderama to calculate a trend analysis of 55 terabytes of AIRS Level 1b infrared radiances, and show statistically significant trends in the greenhouse gas absorption bands as observed by AIRS over the 2003-2012 decade. We will extend this calculation to show regional changes in CO2 concentration from AIRS over the 2003-2012 decade by using a neural network retrieval algorithm.

  8. Statistical Analysis of Loss of Offsite Power Events

    Directory of Open Access Journals (Sweden)

    Andrija Volkanovski

    2016-01-01

    Full Text Available This paper presents the results of a statistical analysis of the loss of offsite power (LOOP) events registered in four reviewed databases. The reviewed databases include the IRSN (Institut de Radioprotection et de Sûreté Nucléaire) SAPIDE database and the GRS (Gesellschaft für Anlagen- und Reaktorsicherheit mbH) VERA database, reviewed over the period from 1992 to 2011. The US NRC (Nuclear Regulatory Commission) Licensee Event Reports (LERs) database and the IAEA International Reporting System (IRS) database were screened for relevant events registered over the period from 1990 to 2013. The number of LOOP events in each year of the analysed period and the mode of operation were assessed during the screening. The LOOP frequencies obtained for the French and German nuclear power plants (NPPs) during critical operation are of the same order of magnitude, with plant-related events as the dominant contributor. A frequency of one LOOP event per shutdown year is obtained for German NPPs in the shutdown mode of operation. For the US NPPs, the obtained LOOP frequency for critical and shutdown modes is comparable to the one assessed in NUREG/CR-6890. A decreasing trend is obtained for the LOOP events registered in three of the databases (IRSN, GRS, and NRC).
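
    As an aside, a LOOP frequency of the kind reported here is a Poisson rate, and an exact confidence interval for it is easy to compute (the event count and exposure below are hypothetical, not taken from any of the four databases):

    from scipy import stats

    events, exposure = 24, 400.0    # hypothetical: 24 LOOPs in 400 reactor-years
    rate = events / exposure

    # Exact two-sided 95% CI for a Poisson rate (chi-square / Garwood method).
    lo = stats.chi2.ppf(0.025, 2 * events) / (2 * exposure)
    hi = stats.chi2.ppf(0.975, 2 * (events + 1)) / (2 * exposure)
    print(f"LOOP frequency = {rate:.3f}/yr (95% CI {lo:.3f}-{hi:.3f})")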

  9. Statistical Analysis of Resistivity Anomalies Caused by Underground Caves

    Science.gov (United States)

    Frid, V.; Averbach, A.; Frid, M.; Dudkinski, D.; Liskevich, G.

    2017-03-01

    Geophysical prospecting for underground caves on a construction site is often still a challenging procedure. Estimating the likelihood level of a located anomaly is frequently a mandatory requirement of a project principal, owing to the necessity of risk/safety assessment; however, a methodology for such estimation has hitherto not been developed. Aiming to put forward such a methodology, the present study (performed as part of an underground cave mapping prior to the land development of the site area) applied electrical resistivity tomography (ERT) together with statistical analysis for the likelihood assessment of the underground anomalies located. The methodology was first verified via a synthetic modeling technique, applied to the ERT data collected in situ, and then cross-referenced with intrusive investigations (excavation and drilling) for data verification. The drilling/excavation results showed that underground caves can be properly discovered if the anomaly probability level is not lower than 90%. Such a probability value was shown to be consistent with the modeling results. More than 30 underground cavities were discovered on the site utilizing this methodology.

  10. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts with the smallest NRMSE in all parameters. The NRMSE of solar irradiance forecasts of the ensemble model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
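
    A minimal sketch of the NRMSE metric named above; normalizing by the plant's rated capacity is an assumption here (NRMSE is also commonly normalized by the mean or range of the observations), and the sample values are invented.

```python
import numpy as np

# NRMSE as percent of rated plant capacity (normalization choice assumed).
def nrmse(forecast, observed, capacity_kw=51.0):
    forecast, observed = np.asarray(forecast), np.asarray(observed)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return 100.0 * rmse / capacity_kw

obs = np.array([0.0, 5.2, 18.7, 33.1, 40.4, 35.0, 21.3, 6.8])   # kW, hourly
fcst = np.array([0.0, 7.0, 16.0, 36.0, 43.5, 31.0, 19.0, 9.0])  # day-ahead
print(f"NRMSE = {nrmse(fcst, obs):.2f}% of capacity")
```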

  11. Utility green pricing programs: A statistical analysis of program effectiveness

    Energy Technology Data Exchange (ETDEWEB)

    Wiser, Ryan; Olson, Scott; Bird, Lori; Swezey, Blair

    2004-02-01

    Utility green pricing programs allow customers to support the development of renewable energy. Such programs have grown in number in recent years. The design features and effectiveness of these programs vary considerably, however, leading a variety of stakeholders to suggest specific marketing and program design features that might improve customer response and renewable energy sales. This report analyzes actual utility green pricing program data to provide further insight into which program features might help maximize both customer participation in green pricing programs and the amount of renewable energy purchased by customers in those programs. Statistical analysis is performed on both the residential and non-residential customer segments. The data come from a questionnaire completed for 66 utility green pricing programs in early 2003. The questionnaire specifically gathered data on residential and non-residential participation, the amount of renewable energy sold, program length, the type of renewable supply used, program price/cost premiums, types of consumer research and program evaluation performed, different sign-up options available, program marketing efforts, and ancillary benefits offered to participants.

  12. Statistical analysis of CSP plants by simulating extensive meteorological series

    Science.gov (United States)

    Pavón, Manuel; Fernández, Carlos M.; Silva, Manuel; Moreno, Sara; Guisado, María V.; Bernardos, Ana

    2017-06-01

    The feasibility analysis of any power plant project needs an estimate of the amount of energy the plant will be able to deliver to the grid during its lifetime. To achieve this, the feasibility study requires precise knowledge of the solar resource over a long-term period. In Concentrating Solar Power (CSP) projects, financing institutions typically require several probability-of-exceedance scenarios of the expected electric energy output. Currently, the industry assumes a correlation between probabilities of exceedance of annual Direct Normal Irradiance (DNI) and energy yield. In this work, this assumption is tested by simulating the energy yield of CSP plants using as input a 34-year series of measured meteorological parameters and solar irradiance. The results show that, even if some correspondence between the probabilities of exceedance of annual DNI values and energy yields is found, the intra-annual distribution of DNI may significantly affect this correlation. This result highlights the need for standardized procedures for the elaboration of DNI time series representative of a given probability of exceedance of annual DNI.
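
    The probability-of-exceedance scenarios mentioned above reduce to quantiles of the annual series: P90, for instance, is the annual DNI level exceeded in 90% of years. A small sketch with synthetic data:

```python
import numpy as np

# Synthetic 34-year series of annual DNI totals (values are invented).
rng = np.random.default_rng(1)
annual_dni = rng.normal(2100.0, 120.0, size=34)   # kWh/m2/yr

def p_exceed(values, p):
    """Value exceeded with probability p (e.g. p=0.90 gives P90)."""
    return np.quantile(values, 1.0 - p)

for p in (0.50, 0.75, 0.90):
    print(f"P{int(p * 100)}: {p_exceed(annual_dni, p):.0f} kWh/m2/yr")
```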

  13. Metrology Optical Power Budgeting in SIM Using Statistical Analysis Techniques

    Science.gov (United States)

    Kuan, Gary M

    2008-01-01

    The Space Interferometry Mission (SIM) is a space-based stellar interferometry instrument, consisting of up to three interferometers, which will be capable of micro-arcsecond resolution. Alignment knowledge of the three interferometer baselines requires a three-dimensional, 14-leg truss with each leg monitored by an external metrology gauge. In addition, each of the three interferometers requires an internal metrology gauge to monitor the optical path length differences between the two sides. Both external and internal metrology gauges are interferometry based, operating at a wavelength of 1319 nanometers. Each gauge has fiber inputs delivering measurement and local oscillator (LO) power, split into probe-LO and reference-LO beam pairs. These beams experience power loss due to a variety of mechanisms including, but not restricted to, design efficiency, material attenuation, element misalignment, diffraction, and coupling efficiency. Since the attenuation due to these sources may degrade over time, an accounting of the range of expected attenuation is needed so that an optical power margin can be budgeted. A method of statistical optical power analysis and budgeting, based on a technique developed for deep space RF telecommunications, is described in this paper and provides a numerical confidence level for having sufficient optical power relative to mission metrology performance requirements.
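
    The record does not give the budgeting algorithm, but a plausible Monte Carlo reading of it is to assign each loss mechanism a distribution in dB, sum the losses per trial, and report the probability of meeting a required power level. All figures below are invented for illustration:

```python
import numpy as np

# Hypothetical statistical power budget: per-trial losses in dB are summed
# and compared with a required threshold to yield a confidence level.
rng = np.random.default_rng(2)
N = 100_000
losses_db = (
    rng.normal(3.0, 0.3, N)      # design/splitting efficiency
    + rng.normal(1.0, 0.2, N)    # material attenuation
    + rng.uniform(0.0, 1.5, N)   # misalignment (may degrade over time)
    + rng.normal(2.0, 0.4, N)    # coupling efficiency
)
source_dbm, required_dbm = 0.0, -8.0          # assumed power levels
delivered = source_dbm - losses_db
confidence = np.mean(delivered >= required_dbm)
print(f"P(sufficient optical power) = {confidence:.3f}")
```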

  14. Statistical analysis of the ambiguities in the asteroid period determinations

    Science.gov (United States)

    Butkiewicz-Bąk, M.; Kwiatkowski, T.; Bartczak, P.; Dudziński, G.; Marciniak, A.

    2017-09-01

    Among asteroids there exist ambiguities in their rotation period determinations. They are due to incomplete coverage of the rotation, noise, and/or aliases resulting from gaps between separate lightcurves. To help remove such uncertainties, the basic characteristics of the lightcurves resulting from constraints imposed by asteroid shapes and observation geometries should be identified. We simulated light variations of asteroids whose shapes were modelled as Gaussian random spheres, with random orientations of spin vectors and phase angles changed every 5° from 0° to 65°. This produced 1.4 million lightcurves. For each simulated lightcurve, Fourier analysis was performed and the harmonic of the highest amplitude was recorded. From the statistical point of view, lightcurves with amplitudes A > 0.2 mag observed at small phase angles are bimodal. The second most frequently dominating harmonic is the first, with the 3rd harmonic following right after. For 1 per cent of lightcurves with amplitudes A < 0.1 mag and phase angles α < 40°, the 4th harmonic dominates.

  15. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-10-02

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts with the smallest NRMSE in all parameters. The NRMSE of solar irradiance forecasts of the ensemble model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.

  16. Fluorescence correlation spectroscopy: Statistical analysis and biological applications

    Science.gov (United States)

    Saffarian, Saveez

    2002-01-01

    The experimental design and realization of an apparatus that can be used both for single molecule fluorescence detection and for fluorescence correlation and cross-correlation spectroscopy is presented. A thorough statistical analysis of the fluorescence correlation functions, including the analysis of bias and errors based on analytical derivations, has been carried out. Using the methods developed here, the mechanism of binding and cleavage site recognition of matrix metalloproteinases (MMPs) for their substrates has been studied. We demonstrate that two of the MMP family members, collagenase (MMP-1) and gelatinase A (MMP-2), exhibit diffusion along their substrates; the importance of this diffusion process and its biological implications are discussed. We show through truncation mutants that the hemopexin domain of MMP-2 plays an important role in the substrate diffusion of this enzyme. Single molecule diffusion of the collagenase MMP-1 has been observed on collagen fibrils and shown to be biased. The discovered biased diffusion would make the MMP-1 molecule an active motor, thus making it the first active motor that is not coupled to ATP hydrolysis. The possible sources of energy for this enzyme and their implications are discussed; we propose that a possible source of energy can be the rearrangement of the structure of collagen fibrils. In a separate application, using the methods developed here, we have observed an intermediate in the intestinal fatty acid binding protein (IFABP) folding process through changes in its hydrodynamic radius; the fluctuations in the structure of IFABP in solution were also measured using FCS.

  17. Statistical analysis and optimization of igbt manufacturing flow

    Directory of Open Access Journals (Sweden)

    Baranov V. V.

    2015-02-01

    Full Text Available The use of computer simulation, design, and optimization of the technological processes used to form power electronic devices can significantly reduce development time, improve the accuracy of calculations, and allow the best implementation options to be chosen on the basis of rigorous mathematical analysis. One of the most common power electronic devices is the insulated gate bipolar transistor (IGBT), which combines the advantages of the MOSFET and the bipolar transistor. The demanding requirements on these devices can only be met by optimizing the device design and manufacturing process parameters. An important and necessary step in the modern cycle of IC design and manufacturing is therefore statistical analysis. A procedure for optimizing the IGBT threshold voltage was realized. Through screening experiments according to the Plackett-Burman design, the most important input parameters (factors with the greatest impact on the output characteristic) were detected. The coefficients of an approximation polynomial adequately describing the relationship between the input parameters and the investigated output characteristics were determined. Using the calculated approximation polynomial, a series of Monte Carlo calculations was carried out to determine the spread of threshold voltage values over selected ranges of input parameter deviation. Combinations of input process parameter values were drawn randomly from a normal distribution within a given range. The IGBT process parameter optimization procedure amounts to the mathematical problem of determining the range of values of the significant structural and technological input parameters that keeps the IGBT threshold voltage within a given interval. The presented results demonstrate the effectiveness of the proposed optimization techniques.
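
    A hedged sketch of the final Monte Carlo step: sample the significant factors from normal distributions and push them through the fitted response-surface polynomial to get the threshold-voltage spread. The factor names, coefficients, and tolerances below are invented, standing in for the values the study derives from Plackett-Burman screening:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50_000

# Two hypothetical significant factors, e.g. gate-oxide thickness and
# normalized channel doping, sampled within assumed tolerance windows.
t_ox = rng.normal(100.0, 3.0, N)   # nm
n_ch = rng.normal(1.0, 0.05, N)    # normalized doping

# Invented second-order response surface for the threshold voltage.
v_th = 2.0 + 0.015 * (t_ox - 100.0) + 1.2 * (n_ch - 1.0) \
       + 0.4 * (n_ch - 1.0) ** 2

lo, hi = 1.8, 2.2                  # target window, V
yield_frac = np.mean((v_th >= lo) & (v_th <= hi))
print(f"V_th spread: {v_th.std():.3f} V, in-spec fraction: {yield_frac:.3f}")
```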

  18. The Statistics Concept Inventory: Development and analysis of a cognitive assessment instrument in statistics

    Science.gov (United States)

    Allen, Kirk

    The Statistics Concept Inventory (SCI) is a multiple choice test designed to assess students' conceptual understanding of topics typically encountered in an introductory statistics course. This dissertation documents the development of the SCI from Fall 2002 up to Spring 2006. The first phase of the project essentially sought to answer the question: "Can you write a test to assess topics typically encountered in introductory statistics?" Book One presents the results utilized in answering this question in the affirmative. The bulk of the results present the development and evolution of the items, primarily relying on objective metrics to gauge effectiveness but also incorporating student feedback. The second phase boils down to: "Now that you have the test, what else can you do with it?" This includes an exploration of Cronbach's alpha, the most commonly-used measure of test reliability in the literature. An online version of the SCI was designed, and its equivalency to the paper version is assessed. Adding an extra wrinkle to the online SCI, subjects rated their answer confidence. These results show a general positive trend between confidence and correct responses. However, some items buck this trend, revealing potential sources of misunderstandings, with comparisons offered to the extant statistics and probability educational research. The third phase is a re-assessment of the SCI: "Are you sure?" A factor analytic study favored a uni-dimensional structure for the SCI, although maintaining the likelihood of a deeper structure if more items can be written to tap similar topics. A shortened version of the instrument is proposed, demonstrated to be able to maintain a reliability nearly identical to that of the full instrument. Incorporating student feedback and a faculty topics survey, improvements to the items and recommendations for further research are proposed. The state of the concept inventory movement is assessed, to offer a comparison to the work presented
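
    Cronbach's alpha, the reliability measure explored in the second phase, is straightforward to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch on synthetic item-response data:

```python
import numpy as np

# Synthetic binary score matrix: rows = students, columns = test items.
rng = np.random.default_rng(4)
ability = rng.normal(0.0, 1.0, (200, 1))
items = (ability + rng.normal(0.0, 1.0, (200, 30)) > 0).astype(float)

def cronbach_alpha(x):
    """Internal-consistency reliability of a (subjects x items) matrix."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(items):.3f}")
```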

  19. Statistics and Analysis of CIAE’s Meteorological Observed Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG; Liang; CHENG; Wei-ya

    2015-01-01

    The work analyzes recent years' meteorological observation data for the CIAE site. A suitable statistical method is selected for evaluating the environmental conditions. 1 Statistical method: The data types are stability, wind direction, wind frequency, wind speed, temperature, and

  20. Research Update: Spatially resolved mapping of electronic structure on atomic level by multivariate statistical analysis

    Energy Technology Data Exchange (ETDEWEB)

    Belianinov, Alex, E-mail: belianinova@ornl.gov; Ganesh, Panchapakesan; Lin, Wenzhi; Jesse, Stephen; Pan, Minghu; Kalinin, Sergei V. [Oak Ridge National Laboratory, Institute for Functional Imaging of Materials, Center for Nanophase Material Science, Oak Ridge, Tennessee 37922 (United States); Sales, Brian C.; Sefat, Athena S. [Oak Ridge National Laboratory, Materials Science and Technology Division, Oak Ridge, Tennessee 37922 (United States)

    2014-12-01

    Atomic level spatial variability of electronic structure in the Fe-based superconductor FeTe0.55Se0.45 (Tc = 15 K) is explored using current-imaging tunneling-spectroscopy. Multivariate statistical analysis of the data differentiates regions of dissimilar electronic behavior that can be identified with the segregation of chalcogen atoms, as well as boundaries between terminations and near-neighbor interactions. Subsequent clustering analysis allows identification of the spatial localization of these dissimilar regions. Similar statistical analysis of the calculated density of states of chemically inhomogeneous FeTe1-xSex model structures further confirms that the two types of chalcogens, i.e., Te and Se, can be identified by their electronic signature and differentiated by their local chemical environment. This approach allows detailed chemical discrimination of the scanning tunneling microscopy data, including separation of atomic identities, proximity, and local configuration effects, and can be universally applied to chemically and electronically inhomogeneous surfaces.
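
    As an illustration of the general workflow (not the authors' specific pipeline), per-pixel spectra can be reduced by PCA and then clustered to separate regions of dissimilar electronic behavior. The sketch below does this on synthetic two-species spectra:

```python
import numpy as np

# Synthetic spectroscopy map: each pixel carries a spectrum drawn from one
# of two "species" (e.g. Te-like vs Se-like environments) plus noise.
rng = np.random.default_rng(5)
n_pix, n_energy = 64 * 64, 128
e = np.linspace(-1, 1, n_energy)
species = np.vstack([np.exp(-(e - 0.2) ** 2 / 0.02),
                     np.exp(-(e + 0.3) ** 2 / 0.05)])
labels_true = rng.integers(0, 2, n_pix)
spectra = species[labels_true] + rng.normal(0, 0.1, (n_pix, n_energy))

# PCA via SVD: project every spectrum onto the first 3 components.
x = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(x, full_matrices=False)
scores = x @ vt[:3].T

# Plain k-means (k=2) on the PCA scores.
centers = scores[rng.choice(n_pix, 2, replace=False)]
for _ in range(20):
    d = np.linalg.norm(scores[:, None] - centers[None], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in range(2)])

agreement = max(np.mean(labels == labels_true), np.mean(labels != labels_true))
print(f"cluster/ground-truth agreement: {agreement:.3f}")
```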

  1. Analysis on n-gram statistics and linguistic features of whole genome protein sequences

    Institute of Scientific and Technical Information of China (English)

    DONG Qi-wen; WANG Xiao-long; LIN Lei

    2008-01-01

    To carry out statistical sequence analysis on the large number of genomic and proteomic sequences available for different organisms, the n-grams of whole genome protein sequences from 20 organisms were extracted. Their linguistic features were analyzed by two tests, the Zipf power law and Shannon entropy, developed for the analysis of natural languages and symbolic sequences. The natural genome proteins and artificial genome proteins were compared with each other, and several statistical features of n-grams were discovered. The results show that: the n-grams of whole genome protein sequences approximately follow the Zipf law when n is larger than 4; the Shannon n-gram entropy of natural genome proteins is lower than that of artificial proteins; a simple unigram model can distinguish different organisms; and there exist organism-specific usages of "phrases" in protein sequences. It is suggested that further detailed analysis of the n-grams of whole genome protein sequences will result in a powerful model for mapping the relationship of protein sequence, structure and function.
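
    A minimal sketch of the n-gram extraction and Zipf check described above, on toy protein sequences; a real analysis would run over whole-genome sequence sets and also compute Shannon n-gram entropies:

```python
from collections import Counter
import math

# Toy protein sequences standing in for whole-genome data.
seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
        "MSLLTEVETYVLSIIPSGPLKAEIAQRLEDVFA",
        "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEK"]

def ngrams(seq, n):
    """Yield all overlapping n-grams of a sequence."""
    return (seq[i:i + n] for i in range(len(seq) - n + 1))

counts = Counter(g for s in seqs for g in ngrams(s, 4))
ranked = counts.most_common()

# Crude Zipf exponent: least-squares slope of log(frequency) vs log(rank).
xs = [math.log(r + 1) for r in range(len(ranked))]
ys = [math.log(c) for _, c in ranked]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
        (n * sum(x * x for x in xs) - sum(xs) ** 2)
print(f"{len(ranked)} distinct 4-grams, Zipf slope ~ {slope:.2f}")
```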

  2. Frontal Crash Analysis of a Fully Detailed Car Model Based on Finite Element Method

    Institute of Scientific and Technical Information of China (English)

    Han Shan-Ling; Zhu Ping; Lin Zhong-Qin; Shi Yu-Liang

    2004-01-01

    This paper sets up a highly detailed finite element model of a car for frontal crashworthiness applications and explains its characteristics. The geometry model is preprocessed with Hypermesh software. The finite element solver selected for the simulation is LS-DYNA. After the crash simulation is carefully analyzed, a frontal crash experiment is performed to validate the finite element model. The simulation results are basically in agreement with the experimental results. Validation of the finite element model is crucial for further research on optimization of the automotive structure and lightweighting of the vehicle.

  3. Detailed analysis of a quench bomb for the study of aluminum agglomeration in solid propellants

    Science.gov (United States)

    Gallier, S.; Kratz, J.-G.; Quaglia, N.; Fouin, G.

    2016-07-01

    A standard quench bomb (QB) - widely used to characterize the condensed phase from metalized solid propellant combustion - is studied in detail. Experimental and numerical investigations proved that the collected particles are mostly unburned aluminum (Al) agglomerates despite large quenching distances. Particles are actually found to quench early, as the propellant surface is swept by the inert pressurant. Further improvements to the QB are proposed which allow measuring both Al agglomerates and alumina residue with the same setup. Finally, the results obtained on a typical aluminized ammonium perchlorate (AP) / hydroxyl-terminated polybutadiene (HTPB) propellant are briefly discussed.

  4. Dynamical Twisted Mass Fermions with Light Quarks: Simulation and Analysis Details

    CERN Document Server

    Boucaud, Ph; Farchioni, F; Frezzotti, R; Giménez, V; Herdoiza, G; Jansen, K; Lubicz, V; Michael, C; Münster, G; Palao, D; Rossi, G C; Scorzato, L; Shindler, A; Simula, S; Sudmann, T; Urbach, C; Wenger, U

    2008-01-01

    In a recent paper [hep-lat/0701012] we presented precise lattice QCD results of our European Twisted Mass Collaboration (ETMC). They were obtained by employing two mass-degenerate flavours of twisted mass fermions at maximal twist. In the present paper we give details on our simulations and the computation of physical observables. In particular, we discuss the problem of tuning to maximal twist, the techniques we have used to compute correlators and error estimates. In addition, we provide more information on the algorithm used, the autocorrelation times and scale determination, the evaluation of disconnected contributions and the description of our data by means of chiral perturbation theory formulae.

  5. Detailed analysis for a control rod worth of the gas turbine high temperature reactor (GTHTR300)

    Energy Technology Data Exchange (ETDEWEB)

    Nakata, Tetsuo; Katanishi, Shoji; Takada, Shoji; Yan, Xing; Kunitomi, Kazuhiko [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    2002-11-01

    GTHTR300 is a simplified and economical power plant based on an inherently safe 600 MWt reactor and a nearly 50% efficiency gas turbine power conversion cycle. The GTHTR300 core consists of an annular fuel region with center and outer side reflectors, so that it can be cooled effectively under depressurized accident conditions; all control rods are located in the side reflectors of the annular core. As the thermal neutron spectrum is strongly distorted in the reflector regions, an especially accurate calculation is required for the control rod worth evaluation. In this study, we applied detailed Monte Carlo calculations to a full core model and confirmed that our design method has sufficient accuracy. (author)

  6. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    smoothing (e.g., as in complex smoothing) with respect to the original RTF statistics. More specifically, the RTF statistics, derived after the complex smoothing calculation, are compared to the original statistics across space inside typical rooms, by varying the source, the receiver position… and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical… "anechoic" and "reverberant" audio speech signals, in order to model the alterations due to room acoustics. The above results are obtained from both in-situ room response measurements and controlled acoustical response simulations.

  7. On Conceptual Analysis as the Primary Qualitative Approach to Statistics Education Research in Psychology

    Science.gov (United States)

    Petocz, Agnes; Newbery, Glenn

    2010-01-01

    Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…

  8. Remote sensing of selective logging in Amazonia Assessing limitations based on detailed field observations, Landsat ETM+, and textural analysis.

    Science.gov (United States)

    Gregory P. Asner; Michael Keller; Rodrigo Pereira; Johan C. Zweede

    2002-01-01

    We combined a detailed field study of forest canopy damage with calibrated Landsat 7 Enhanced Thematic Mapper Plus (ETM+) reflectance data and texture analysis to assess the sensitivity of basic broadband optical remote sensing to selective logging in Amazonia. Our field study encompassed measurements of ground damage and canopy gap fractions along a chronosequence of...

  9. Combined statistical analysis of landslide release and propagation

    Science.gov (United States)

    Mergili, Martin; Rohmaneo, Mohammad; Chu, Hone-Jay

    2016-04-01

    Statistical methods - often coupled with stochastic concepts - are commonly employed to relate areas affected by landslides with environmental layers, and to estimate spatial landslide probabilities by applying these relationships. However, such methods only concern the release of landslides, disregarding their motion. Conceptual models for mass flow routing are used for estimating landslide travel distances and possible impact areas. Automated approaches combining release and impact probabilities are rare. The present work attempts to fill this gap by a fully automated procedure combining statistical and stochastic elements, building on the open source GRASS GIS software: (1) The landslide inventory is subset into release and deposition zones. (2) We employ a traditional statistical approach to estimate the spatial release probability of landslides. (3) We back-calculate the probability distribution of the angle of reach of the observed landslides, employing the software tool r.randomwalk. One set of random walks is routed downslope from each pixel defined as release area. Each random walk stops when leaving the observed impact area of the landslide. (4) The cumulative probability function (cdf) derived in (3) is used as input to route a set of random walks downslope from each pixel in the study area through the DEM, assigning the probability gained from the cdf to each pixel along the path (impact probability). The impact probability of a pixel is defined as the average impact probability of all sets of random walks impacting a pixel. Further, the average release probabilities of the release pixels of all sets of random walks impacting a given pixel are stored along with the area of the possible release zone. (5) We compute the zonal release probability by increasing the release probability according to the size of the release zone - the larger the zone, the larger the probability that a landslide will originate from at least one pixel within this zone. We
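
    A heavily simplified sketch of steps (3) and (4): walks are routed downslope on a synthetic DEM and stopped once the average angle of reach falls below a threshold. r.randomwalk implements a much richer rule set; the grid, angles, and stopping rule here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
yy, xx = np.mgrid[0:n, 0:n]
dem = 500.0 - 3.0 * yy + rng.normal(0, 1.0, (n, n))  # tilted noisy slope

def runout(start, reach_angle_deg, cell=10.0, steps=500):
    """One random walk routed downslope; stops below the angle of reach."""
    r, c = start
    z0, path = dem[r, c], [(r, c)]
    for _ in range(steps):
        # Candidate moves: the 8 neighbours that go downhill.
        nbrs = [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < n and 0 <= c + dc < n
                and dem[r + dr, c + dc] < dem[r, c]]
        if not nbrs:
            break
        r, c = nbrs[rng.integers(len(nbrs))]
        path.append((r, c))
        dist = cell * (len(path) - 1)   # crude travel distance
        if np.degrees(np.arctan2(z0 - dem[r, c], dist)) < reach_angle_deg:
            break   # walk has flattened out below the angle of reach
    return path

impacts = np.zeros((n, n))
for _ in range(200):                    # one set of walks from one cell
    for r, c in runout((5, 50), reach_angle_deg=15.0):
        impacts[r, c] += 1
print("cells reached:", int((impacts > 0).sum()))
```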

  10. Direct numerical simulation and statistical analysis of turbulent convection in lead-bismuth

    Energy Technology Data Exchange (ETDEWEB)

    Otic, I.; Grotzbach, G. [Forschungszentrum Karlsruhe GmbH, Institut fuer Kern-und Energietechnik (Germany)

    2003-07-01

    Improved turbulent heat flux models are required to develop and analyze the reactor concept of a lead-bismuth-cooled Accelerator-Driven System. Because of the specific properties of many liquid metals, we still have no sensors for accurate measurement of high-frequency velocity fluctuations. Hence, the development of the turbulent heat transfer models required in our CFD (computational fluid dynamics) tools also needs data from direct numerical simulations of turbulent flows. We use new simulation results for the model problem of Rayleigh-Benard convection to show some peculiarities of turbulent natural convection in lead-bismuth (Pr = 0.025). Simulations of this flow at sufficiently large turbulence levels became feasible only recently, because the flow requires the resolution of very small velocity scales while long-wave structures must also be recorded to capture the slow changes in the convective temperature field. The results are analyzed with regard to the principal convection and heat transfer features. They are also used for statistical analysis showing that the currently available modeling is indeed not adequate for these fluids. Based on knowledge of the details of the statistical features of turbulence in this convection type, and using the two-point correlation technique, a proposal for an improved statistical turbulence model is developed which is expected to better account for the peculiarities of heat transfer in turbulent convection of low Prandtl number fluids. (authors)

  11. For the Love of Statistics: Appreciating and Learning to Apply Experimental Analysis and Statistics through Computer Programming Activities

    Science.gov (United States)

    Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.

    2016-01-01

    For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics for environmental and biological sciences students through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…

  12. Developmental Coordination Disorder: Validation of a Qualitative Analysis Using Statistical Factor Analysis

    Directory of Open Access Journals (Sweden)

    Kathy Ahern

    2002-09-01

    Full Text Available This study investigates triangulation of the findings of a qualitative analysis by applying an exploratory factor analysis to themes identified in a phenomenological study. A questionnaire was developed from a phenomenological analysis of parents' experiences of parenting a child with Developmental Coordination Disorder (DCD). The questionnaire was administered to 114 parents of DCD children and data were analyzed using an exploratory factor analysis. The extracted factors provided support for the validity of the original qualitative analysis, and a commentary on the validity of the process is provided. The emerging description is of the compromises that were necessary to translate qualitative themes into statistical factors, and of the ways in which the statistical analysis suggests further qualitative study.

  13. Statistical analysis of simple repeats in the human genome

    Science.gov (United States)

    Piazza, F.; Liò, P.

    2005-03-01

    The human genome contains repetitive DNA at different levels of sequence length, number and dispersion. Highly repetitive DNA is particularly rich in homo- and di-nucleotide repeats, while middle repetitive DNA is rich in families of interspersed mobile elements hundreds of base pairs (bp) long, to which the Alu families belong. A link between homo- and di-polymeric tracts and mobile elements has recently been highlighted. In particular, the mobility of Alu repeats, which form 10% of the human genome, has been correlated with the length of poly(A) tracts located at one end of the Alu. These tracts have a rigid and non-bendable structure and have an inhibitory effect on nucleosomes, which normally compact the DNA. We performed a statistical analysis of the genome-wide distribution of lengths and inter-tract separations of poly(X) and poly(XY) tracts in the human genome. Our study shows that in humans the length distributions of these sequences reflect the dynamics of their expansion and DNA replication. By means of general tools from linguistics, we show that the latter play the role of highly significant content-bearing terms in the DNA text. Furthermore, we find that such tracts are positioned in a non-random fashion, with an apparent periodicity of 150 bases. This allows us to extend the link between repetitive, highly mobile elements such as Alus and low-complexity words in human DNA. More precisely, we show that Alus are sources of poly(X) tracts, which in turn affect in a subtle way the combination and diversification of gene expression and the fixation of multigene families.
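
    A minimal sketch of the tract statistics underlying such an analysis: scan a sequence for poly(X) runs above a minimum length and tabulate the length distribution. The sequence is a toy example:

```python
import re
from collections import Counter

# Toy DNA sequence standing in for genome-scale input.
seq = "TTTAAAAAACGTGCGAAAAAAAAACGTTTTTTCCCCAGGGAAAATTTTTTTTTT"

def poly_tracts(sequence, min_len=4):
    """Return (base, length) for every homopolymer run >= min_len."""
    pattern = r"(A|C|G|T)\1{%d,}" % (min_len - 1)
    return [(m.group(1), len(m.group(0)))
            for m in re.finditer(pattern, sequence)]

tracts = poly_tracts(seq)
print(tracts)                                   # each run found
print(Counter(length for _, length in tracts))  # length distribution
```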

  14. Emerging Trends and Statistical Analysis in Computational Modeling in Agriculture

    Directory of Open Access Journals (Sweden)

    Sunil Kumar

    2015-03-01

    Full Text Available In this paper the authors describe emerging trends in computational modelling used in the sphere of agriculture. Agricultural computational modelling, with the use of intelligence techniques to compute agricultural output from minimal input data, is gaining momentum because it reduces time by cutting down on multi-locational field trials as well as on labour and other inputs. Development of locally suitable integrated farming systems (IFS) is the utmost need of the day, particularly in India, where about 95% of farms are of small and marginal holding size. Optimization of the size and number of the various enterprises in the desired IFS model for a particular agro-climate is an essential component of research to sustain agricultural productivity, not only to feed the burgeoning population of the country but also to enhance nutritional security and farm returns for a quality life. A review of the literature pertaining to emerging trends in computational modelling applied in the field of agriculture is given below for the purpose of understanding its trends, mechanisms, behavior and applications. Computational modelling is increasingly effective for design and analysis of systems; it is an important tool for analyzing the effects of different scenarios of climate and management options on farming systems and their interactions. Further, the authors highlight the applications of computational modelling in integrated farming systems, crops, weather, soil, climate, horticulture and statistical methods used in agriculture, which can show the path for agricultural researchers and the rural farming community to replace some of the traditional techniques.

  15. Detailed interrogation of trypanosome cell biology via differential organelle staining and automated image analysis

    Directory of Open Access Journals (Sweden)

    Wheeler Richard J

    2012-01-01

    Full Text Available Abstract Background Many trypanosomatid protozoa are important human or animal pathogens. The well defined morphology and precisely choreographed division of trypanosomatid cells makes morphological analysis a powerful tool for analyzing the effect of mutations, chemical insults and changes between lifecycle stages. High-throughput image analysis of micrographs has the potential to accelerate collection of quantitative morphological data. Trypanosomatid cells have two large DNA-containing organelles, the kinetoplast (mitochondrial DNA) and nucleus, which provide useful markers for morphometric analysis; however, they need to be accurately identified and often lie in close proximity. This presents a technical challenge. Accurate identification and quantitation of the DNA content of these organelles is a central requirement of any automated analysis method. Results We have developed a technique based on double staining of the DNA with a minor-groove-binding (4′,6-diamidino-2-phenylindole, DAPI) and a base-pair-intercalating (propidium iodide, PI, or SYBR green) fluorescent stain and color deconvolution. This allows the identification of kinetoplast and nuclear DNA in the micrograph based on whether the organelle has DNA with a more A-T or G-C rich composition. Following unambiguous identification of the kinetoplasts and nuclei, the resulting images are amenable to quantitative automated analysis of kinetoplast and nucleus number and DNA content. On this foundation we have developed a demonstrative analysis tool capable of measuring kinetoplast and nucleus DNA content, size and position and cell body shape, length and width automatically. Conclusions Our approach to DNA staining and automated quantitative analysis of trypanosomatid morphology accelerated analysis of trypanosomatid protozoa. We have validated this approach using Leishmania mexicana, Crithidia fasciculata and wild-type and mutant Trypanosoma brucei. Automated analysis of T. brucei

  16. Confirmatory analysis and detail design of the magnet system for mirror fusion test facility (MFTF)

    Energy Technology Data Exchange (ETDEWEB)

    Tatro, R.E.; Baldi, R.W.

    1978-10-01

    This summary covers the six individual reports delivered to the LLL MFTF program staff. They are: (1) literature survey (helium heat transfer), (2) thermodynamic analysis, (3) structural analysis, (4) manufacturing/producibility study, (5) instrumentation plan and (6) quality assurance report. (MOW)

  17. Primordial black holes as a novel probe of primordial gravitational waves II: detailed analysis

    CERN Document Server

    Nakama, Tomohiro

    2016-01-01

    Recently we have proposed a novel method to probe primordial gravitational waves from upper bounds on the abundance of primordial black holes (PBHs). When the amplitude of primordial tensor perturbations generated in the early universe is very large, they induce large scalar perturbations through their second-order effects. If the amplitude of the resultant scalar perturbations is too large at the moment of their horizon reentry, then PBHs are overproduced to a level inconsistent with a variety of existing observations constraining the abundance of PBHs. This consideration leads to upper bounds on the amplitude of primordial tensor perturbations on super-horizon scales. In contrast to our recent paper, in which we only presented simple estimations of the upper bounds from PBHs, in this paper we present detailed derivations, solving the Einstein equations for scalar perturbations induced at second order in tensor perturbations. We also derive an approximate formula for the probability density function of in...

  18. System Design of a Trusted SoC and Detailed Analysis of its Secure State Transitions

    Directory of Open Access Journals (Sweden)

    Xianwen Yang

    2010-11-01

    Full Text Available According to the relevant criteria and principles for designing and evaluating trusted computing chips, we propose a new trusted SoC chip and give the implementation of its basic functional modules. In detail, we discuss the design of the trusted SoC security architecture and the main functional modules, such as the microprocessor, cryptographic function module, security management module and input/output interface, along with the most important part, the memory management unit. Moreover, we discuss the reliability of relevant parameters and the transfer strategy for the trusted root in chip development and application, together with the simulation and validation of the corresponding functions. Finally, we point out that one of the most important directions for further research is the trusted measurement of dynamic data and software running in a secure environment.

  19. Detailed Analysis of Wheat Straw Node and Internode for their Prospective Efficient Utilisation.

    Science.gov (United States)

    Ghaffar, Seyed Hamidreza; Fan, Mizi; Zhou, Yonghui; AboMadyan, Omar

    2017-09-27

    In order to utilise wheat straw efficiently, systematic examination of its cell wall components, chemical structures, morphology and their relation to the physicochemical and mechanical properties is necessary. Detailed characterization of node and internode reveals their distinct features, which can ultimately lead to their separate processing for enhanced efficiency and higher added-value biorefinery. In this study, distinct variations were found between the characteristics of node and internode and of the inner and outer surfaces. It was revealed that the node has higher extractives, Klason lignin and ash content than the internode; the higher contents of extractives and ash in the node are related to its thicker epidermis tissue. Hot-water followed by mild steam pre-treatment was used to examine the effects on the characteristics of wheat straw. The results showed: 1) a reduced level of waxes and Si (weight %) on the outer surface, and 2) a significantly lower (P < 0.05) extractives content in both internode and node.

  20. Model based, detailed fault analysis in the CERN PS complex equipment

    CERN Document Server

    Beharrell, M; Bouché, J M; Cupérus, J; Lelaizant, M; Mérard, L

    1995-01-01

    In the CERN PS Complex of accelerators, about a thousand devices of various types (power converters, RF cavities, beam measurement devices, vacuum systems, etc.) are controlled using the so-called Control Protocol, already described at previous conferences. This Protocol, a model-based equipment access standard, provides, amongst other facilities, a uniform and structured fault description and reporting feature. The faults are organized in categories according to their severity and are presented at two levels: the first level is global and identical for all devices; the second level is very detailed and adapted to the peculiarities of each single device. All the relevant information is provided by the equipment specialists and is appropriately stored in static and real-time databases; in this way a single set of data-driven application programs can always cope with existing and newly added equipment. Two classes of applications have been implemented; the first one is intended for control room alarm purposes,...

  1. RBioplot: an easy-to-use R pipeline for automated statistical analysis and data visualization in molecular biology and biochemistry

    Science.gov (United States)

    Zhang, Jing

    2016-01-01

    Background Statistical analysis and data visualization are two crucial aspects in molecular biology and biochemistry. For analyses that compare one dependent variable between standard (e.g., control) and one or multiple independent variables, a comprehensive yet highly streamlined solution is valuable. The computer programming language R is a popular platform for researchers to develop tools that are tailored specifically for their research needs. Here we present an R package, RBioplot, that takes raw input data for automated statistical analysis and plotting, highly compatible with various molecular biology and biochemistry lab techniques, such as, but not limited to, western blotting, PCR, and enzyme activity assays. Method The package is built on workflows operating on a simple raw data layout, with minimum user input or data manipulation required. The package is distributed through GitHub and can be easily installed with one single-line R command. A detailed installation guide is available at http://kenstoreylab.com/?page_id=2448. Users can also download demo datasets from the same website. Results and Discussion By integrating selected functions from existing statistical and data visualization packages with extensive customization, RBioplot features both statistical analysis and data visualization functionalities. Key properties of RBioplot include: fully automated and comprehensive statistical analysis, including normality test, equal variance test, Student's t-test and ANOVA (with post-hoc tests); fully automated histogram, heatmap and joint-point curve plotting modules; detailed output files for statistical analysis, data manipulation and high quality graphs; axis range finding and user-customizable tick settings; and high user-customizability. PMID:27703842

  2. RBioplot: an easy-to-use R pipeline for automated statistical analysis and data visualization in molecular biology and biochemistry

    Directory of Open Access Journals (Sweden)

    Jing Zhang

    2016-09-01

    Full Text Available Background Statistical analysis and data visualization are two crucial aspects in molecular biology and biochemistry. For analyses that compare one dependent variable between standard (e.g., control) and one or multiple independent variables, a comprehensive yet highly streamlined solution is valuable. The computer programming language R is a popular platform for researchers to develop tools that are tailored specifically for their research needs. Here we present an R package, RBioplot, that takes raw input data for automated statistical analysis and plotting, highly compatible with various molecular biology and biochemistry lab techniques, such as, but not limited to, western blotting, PCR, and enzyme activity assays. Method The package is built on workflows operating on a simple raw data layout, with minimum user input or data manipulation required. The package is distributed through GitHub and can be easily installed with one single-line R command. A detailed installation guide is available at http://kenstoreylab.com/?page_id=2448. Users can also download demo datasets from the same website. Results and Discussion By integrating selected functions from existing statistical and data visualization packages with extensive customization, RBioplot features both statistical analysis and data visualization functionalities. Key properties of RBioplot include: fully automated and comprehensive statistical analysis, including normality test, equal variance test, Student's t-test and ANOVA (with post-hoc tests); fully automated histogram, heatmap and joint-point curve plotting modules; detailed output files for statistical analysis, data manipulation and high quality graphs; axis range finding and user-customizable tick settings; and high user-customizability.
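
    A hedged sketch, in Python rather than R, of the analysis chain the package automates (normality test, equal-variance test, then t-test or ANOVA with post-hoc comparisons); the data, thresholds, and the Bonferroni post-hoc choice are illustrative assumptions, not RBioplot's internals:

```python
import numpy as np
from scipy import stats

# Synthetic enzyme-activity style data: one value per replicate per group.
rng = np.random.default_rng(7)
groups = {"control": rng.normal(1.00, 0.10, 8),
          "treat_A": rng.normal(1.25, 0.12, 8),
          "treat_B": rng.normal(1.40, 0.25, 8)}
data, names = list(groups.values()), list(groups)

normal = all(stats.shapiro(g).pvalue > 0.05 for g in data)
equal_var = stats.levene(*data).pvalue > 0.05
print(f"normality ok: {normal}, equal variances: {equal_var}")

if len(data) == 2:
    res = stats.ttest_ind(*data, equal_var=equal_var)  # pooled or Welch
else:
    res = stats.f_oneway(*data)                        # one-way ANOVA
print(f"omnibus p = {res.pvalue:.4f}")

# Post-hoc: Bonferroni-corrected pairwise t-tests (one plausible choice).
m = len(data) * (len(data) - 1) // 2
for i in range(len(data)):
    for j in range(i + 1, len(data)):
        p = stats.ttest_ind(data[i], data[j], equal_var=equal_var).pvalue
        print(f"{names[i]} vs {names[j]}: p = {min(1.0, p * m):.4f}")
```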

  3. Detailed Per-residue Energetic Analysis Explains the Driving Force for Microtubule Disassembly.

    Directory of Open Access Journals (Sweden)

    Ahmed T Ayoub

    2015-06-01

    Full Text Available Microtubules are long filamentous hollow cylinders whose surfaces form lattice structures of αβ-tubulin heterodimers. They perform multiple physiological roles in eukaryotic cells and are targets for therapeutic interventions. In our study, we carried out all-atom molecular dynamics simulations for arbitrarily long microtubules that have either GDP or GTP molecules in the E-site of β-tubulin. A detailed energy balance of the MM/GBSA inter-dimer interaction energy per residue contributing to the overall lateral and longitudinal structural stability was performed. The obtained results identified the key residues and tubulin domains according to their energetic contributions. They also identified the molecular forces that drive microtubule disassembly. At the tip of the plus end of the microtubule, the uneven distribution of longitudinal interaction energies within a protofilament generates a torque that bends tubulin outwardly with respect to the cylinder's axis, causing disassembly. In the presence of GTP, this torque is opposed by lateral interactions that prevent outward curling, thus stabilizing the whole microtubule. Once GTP hydrolysis reaches the tip of the microtubule (lateral cap), lateral interactions become much weaker, allowing tubulin dimers to bend outwards, causing disassembly. The role of magnesium in the process of outward curling has also been demonstrated. This study also showed that the microtubule seam is the most energetically labile inter-dimer interface and could serve as a trigger point for disassembly. Based on a detailed balance of the energetic contributions per amino acid residue in the microtubule, numerous other analyses could be performed to give additional insights into the properties of microtubule dynamic instability.

  4. A statistical framework for differential network analysis from microarray data

    Directory of Open Access Journals (Sweden)

    Datta Somnath

    2010-02-01

    Full Text Available Abstract Background It has long been well known that genes do not act alone; rather, groups of genes act in concert during a biological process. Consequently, the expression levels of genes are dependent on each other. Experimental techniques to detect such interacting pairs of genes have been in place for quite some time. With the advent of microarray technology, newer computational techniques to detect such interaction or association between gene expressions are being proposed, leading to association networks. While most microarray analyses look for genes that are differentially expressed, it is of potentially greater significance to identify how entire association network structures change between two or more biological settings, say normal versus diseased cell types. Results We provide a recipe for conducting a differential analysis of networks constructed from microarray data under two experimental settings. At the core of our approach lies a connectivity score that represents the strength of genetic association or interaction between two genes. We use this score to propose formal statistical tests for each of the following queries: (i) whether the overall modular structures of the two networks are different, (ii) whether the connectivity of a particular set of "interesting genes" has changed between the two networks, and (iii) whether the connectivity of a given single gene has changed between the two networks. A number of examples of this score are provided. We applied our method to two types of simulated data: Gaussian networks and networks based on differential equations. We show that, for appropriate choices of the connectivity scores and tuning parameters, our method works well on simulated data. We also analyze a real data set involving normal versus heavy mice and identify an interesting set of genes that may play key roles in obesity. Conclusions Examining changes in network structure can provide valuable information about the
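
    As one concrete reading of the connectivity-score idea (the paper allows several scores), the sketch below uses the sum of absolute correlations of a gene with all others, and a permutation test for query (iii), whether a single gene's connectivity differs between two conditions:

```python
import numpy as np

# Synthetic expression matrices (samples x genes) for two conditions,
# with an extra association injected into condition 2 for the test gene.
rng = np.random.default_rng(8)
n_samples, n_genes, gene = 40, 30, 0
a = rng.normal(size=(n_samples, n_genes))
b = rng.normal(size=(n_samples, n_genes))
b[:, 1] += 0.9 * b[:, gene]

def connectivity(x, g):
    """Sum of |correlation| between gene g and every other gene."""
    c = np.corrcoef(x, rowvar=False)
    return np.abs(np.delete(c[g], g)).sum()

observed = abs(connectivity(a, gene) - connectivity(b, gene))

# Permutation null: shuffle sample labels between the two conditions.
pooled = np.vstack([a, b])
null = []
for _ in range(500):
    idx = rng.permutation(2 * n_samples)
    pa, pb = pooled[idx[:n_samples]], pooled[idx[n_samples:]]
    null.append(abs(connectivity(pa, gene) - connectivity(pb, gene)))
p = (1 + sum(v >= observed for v in null)) / 501
print(f"observed diff = {observed:.2f}, permutation p = {p:.3f}")
```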

  5. Olive mill wastewater characteristics: modelling and statistical analysis

    Directory of Open Access Journals (Sweden)

    Martins-Dias, Susete

    2004-09-01

    Full Text Available A synthesis of the work carried out on Olive Mill Wastewater (OMW) characterisation is given, covering articles published over the last 50 years. Data on OMW characterisation found in the literature are summarised, and correlations between them and with phenolic compounds content are sought. This permits the characteristics of an OMW to be estimated from one simple measurement: the phenolic compounds concentration. A model based on OMW characterisations from six countries was developed, along with a model for Portuguese OMW. The statistical analysis of the correlations obtained indicates that the Chemical Oxygen Demand of a given OMW is a second-degree polynomial function of its phenolic compounds concentration. Tests to evaluate the significance of the regressions were carried out based on multivariable ANOVA, on the visual distribution of standardised residuals, and on their means for confidence levels of 95 and 99%, clearly validating these models. This modelling work will help in the future planning, operation and monitoring of an OMW treatment plant.
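
    The reported COD relationship is easy to reproduce in outline: fit a second-degree polynomial of phenolic concentration and use it to estimate COD from the single measurement. The data points below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Invented calibration data: phenolic concentration vs COD.
phenols = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.0, 9.0])        # g/L
cod = np.array([18.0, 30.0, 52.0, 85.0, 115.0, 160.0, 210.0])  # g O2/L

coeffs = np.polyfit(phenols, cod, deg=2)   # [a2, a1, a0]
pred = np.polyval(coeffs, phenols)
r2 = 1 - np.sum((cod - pred) ** 2) / np.sum((cod - cod.mean()) ** 2)
print(f"COD ~ {coeffs[0]:.2f}p^2 + {coeffs[1]:.2f}p + {coeffs[2]:.2f}, "
      f"R^2 = {r2:.3f}")

# Estimate a characteristic of a new OMW from one measurement:
print(f"predicted COD at 4 g/L phenolics: {np.polyval(coeffs, 4.0):.1f} g O2/L")
```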

  6. Statistical Design, Models and Analysis for the Job Change Framework.

    Science.gov (United States)

    Gleser, Leon Jay

    1990-01-01

    Proposes statistical methodology for testing Loughead and Black's "job change thermostat." Discusses choice of target population; relationship between job satisfaction and values, perceptions, and opportunities; and determinants of job change. (SK)

  7. Performance evaluation of a direct computed radiography system by means of physical characterization and contrast detail analysis

    Science.gov (United States)

    Rivetti, Stefano; Lanconelli, Nico; Bertolini, Marco; Borasi, Giovanni; Acchiappati, Domenico; Burani, Aldo

    2007-03-01

    The aim of this study is to determine the performance of a direct CR reader named "FCR Velocity U Focused Phosphor (FP)". The system is based on a CsBr columnar structured crystal, and its readout is based on "linescan technology" employing a wide-view CCD. The system's physical performance was tested by means of a quantitative analysis, with calculation of the modulation transfer function (MTF), noise power spectrum (NPS) and detective quantum efficiency (DQE). Image quality was assessed by performing a contrast-detail analysis. The results are compared with those obtained with the well-known Fuji FCR XG5000 CR system and the newer Kodak DirectView CR 975. For all the measurements the standard radiation quality RQA-5 was used. The relationship between signal amplitude and entrance air kerma is logarithmic for all the systems, and the response functions were used to linearize the images before the MTF (edge method) and NPS calculations. The contrast-detail analysis was carried out using the well-known CDRAD phantom and customized software designed for automatic computation of the contrast-detail curves. The three systems present similar MTFs, whereas the Fuji Velocity U FP system, thanks to its greater efficiency, behaves better in terms of NNPS, especially at low frequencies. That allows the system based on the columnar phosphor to provide a better DQE. The CDRAD analysis basically confirms that the structured phosphor used in the Velocity system improves the visibility of some details. This is especially true for medium and large details.

  8. Analysis of Statistical Methods Currently used in Toxicology Journals.

    Science.gov (United States)

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

    Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by published studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe the statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Sciences and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had a sample size of less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why the methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%); yet few studies conducted either a normality or an equal variance test. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.

  9. USING ARTIFICIAL NEURAL NETWORKS AS STATISTICAL TOOLS FOR ANALYSIS OF MEDICAL DATA

    Directory of Open Access Journals (Sweden)

    ANOUSHIRAVAN KAZEMNEZHAD

    2003-06-01

    Full Text Available Introduction: Artificial neural networks mimic the behavior of the brain. They are capable of prediction, feature recognition and classification. Therefore, neural networks appear to be serious rivals to statistical models such as regression and discriminant analysis. Methods: We introduce the biological neuron, generalize its function to artificial neurons, and describe the backpropagation algorithm for training networks in detail. Results: Based on two simulated data sets and one real data set, we built neural networks using backpropagation and compared them with regression models. Discussion: Neural networks can be considered a nonparametric method for data modeling, and they are potentially more powerful than regression for modeling, but more ambiguous in their interpretation.
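
    A deliberately tiny sketch of backpropagation training as described: one hidden layer, sigmoid activations, squared-error loss, trained on XOR. Sizes, seed and learning rate are arbitrary illustration choices:

```python
import numpy as np

# XOR training data: the classic non-linearly-separable toy problem.
rng = np.random.default_rng(9)
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([[0], [1], [1], [0]], float)

w1, w2 = rng.normal(0, 1, (2, 4)), rng.normal(0, 1, (4, 1))
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sig(x @ w1)                 # forward pass through hidden layer
    y = sig(h @ w2)                 # network output
    # Backward pass: propagate the error derivative layer by layer.
    d2 = (y - t) * y * (1 - y)
    d1 = (d2 @ w2.T) * h * (1 - h)
    w2 -= 0.5 * h.T @ d2            # gradient-descent weight updates
    w1 -= 0.5 * x.T @ d1

print(np.round(y.ravel(), 2))       # should approach [0, 1, 1, 0]
```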

  10. Light water reactor fuel analysis code FEMAXI-IV(Ver.2). Detailed structure and user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment]; Saitou, Hiroaki

    1997-11-01

    A light water reactor fuel behavior analysis code, FEMAXI-IV(Ver.2), was developed as an improved version of FEMAXI-IV. Development of FEMAXI-IV was completed in 1992, but a detailed description of the code structure and an input manual had not yet been made available to users. Here, the basic theories and structure, the models and numerical solutions applied to FEMAXI-IV(Ver.2), and the material properties adopted in the code are described in detail. In FEMAXI-IV(Ver.2), programming bugs in the previous FEMAXI-IV were eliminated, the pellet thermal conductivity model was renewed, and a model of thermal-stress restraint on FP gas release was incorporated. To facilitate effective and wide-ranging application of the code, its input/output methods are also described in detail, and sample output is included.

  11. Data Analysis Details (DS): SE57_DS01 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE57_DS01 Analysis of metabolite accumulation patterns Quantitative data for metabolite accumulation in the seeds or seed coats of the 14 accessions or species were obtained by UPLC-TQMS. Pea

  12. Algebraic Monte Carlo procedure reduces statistical analysis time and cost factors

    Science.gov (United States)

    Africano, R. C.; Logsdon, T. S.

    1967-01-01

    Algebraic Monte Carlo procedure statistically analyzes performance parameters in large, complex systems. The individual effects of input variables can be isolated and individual input statistics can be changed without having to repeat the entire analysis.
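
    One plausible reading of this idea, in modern terms, is to cache the sampled inputs and the algebraic form of the performance function so that changing one input's statistics, or isolating its effect, does not require repeating the whole analysis; the performance function and distributions below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

# Hypothetical algebraic performance function of three independent inputs.
def performance(a, b, c):
    return a * b + 0.5 * c**2

# Sample each input once and cache the draws.
a = rng.normal(1.0, 0.1, N)
b = rng.normal(2.0, 0.2, N)
c = rng.uniform(0.0, 1.0, N)
base = performance(a, b, c)
print(f"baseline: mean = {base.mean():.3f}, sd = {base.std():.3f}")

# Change only input b's statistics: reuse the cached draws of a and c
# instead of repeating the entire analysis.
b_wide = rng.normal(2.0, 0.4, N)
alt = performance(a, b_wide, c)
print(f"wider b:  mean = {alt.mean():.3f}, sd = {alt.std():.3f}")

# The comparison isolates b's individual contribution to output spread.
print(f"sd increase attributable to b: {alt.std() - base.std():.3f}")
```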

  13. Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images

    Science.gov (United States)

    Fischer, Bernd

    2004-01-01

    Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems

  14. Preliminary evaluation of crisis-relocation fallout-shelter options. Volume 2. Detailed analysis

    Energy Technology Data Exchange (ETDEWEB)

    Santini, D.J.; Clinch, J.M.; Davis, F.H.; Hill, L.G.; Lynch, E.P.; Tanzman, E.A.; Wernette, D.R.

    1982-12-01

    This report presents a preliminary, detailed evaluation of various shelter options for use if the President orders crisis relocation of the US urban population because of strong expectation of a nuclear war. The availability of livable shelter space at 40 ft² per person (congregate-care space) by state is evaluated. Options are evaluated for construction of fallout shelters allowing 10 ft² per person; such shelters are designed to provide 100% survival at projected levels of radioactive fallout. The FEMA concept of upgrading existing buildings to act as fallout shelters can, in principle, provide adequate shelter throughout most of the US. Exceptions are noted and remedies proposed. In terms of upgrading existing buildings to fallout shelter status, great benefits are possible by turning away from a standard national approach and adopting a more site-specific approach. Existing FEMA research provides a solid foundation for successful crisis relocation planning, but the program can be refined by making suitable modifications in its locational, engineering, and institutionally specific elements.

  15. Detailed Analysis of Early to Late-Time Spectra of Supernova 1993J

    CERN Document Server

    Matheson, T; Ho, L C; Barth, A J; Leonard, D C; Matheson, Thomas; Filippenko, Alexei V.; Ho, Luis C.; Barth, Aaron J.; Leonard, Douglas C.

    2000-01-01

    We present a detailed study of line structure in early to late-time spectra of Supernova (SN) 1993J. Spectra during the nebular phase, but within the first two years after explosion, exhibit small-scale structure in the emission lines of some species, notably oxygen and magnesium, showing that the ejecta of SN 1993J are clumpy. On the other hand, a lack of structure in emission lines of calcium implies that the source of calcium emission is uniformly distributed throughout the ejecta. These results are interpreted as evidence that oxygen emission originates in clumpy, newly synthesized material, while calcium emission arises from material pre-existing in the atmosphere of the progenitor. Spectra spanning the range 433-2454 days after the explosion show box-like profiles for the emission lines, clearly indicating circumstellar interaction in a roughly spherical shell. This is interpreted within the Chevalier & Fransson (1994) model for SNe interacting with mass lost during prior stellar winds. At very late...

  16. Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis

    Science.gov (United States)

    Blank, Beatrix; Kirchartz, Thomas; Lany, Stephan; Rau, Uwe

    2017-08-01

    The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. In this work, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012), 10.1103/PhysRevLett.108.068701] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.
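
    For context, the Shockley-Queisser baseline that this metric goes beyond can itself be computed by detailed balance from the band gap alone; the sketch below uses a 6000 K blackbody sun and illustrative constants, so the numbers are approximate and are not taken from the paper.

```python
import numpy as np
from scipy import integrate

# Physical constants (SI), with energies handled in eV for convenience.
H, C_LIGHT, Q, KB = 6.626e-34, 2.998e8, 1.602e-19, 8.617e-5
C0 = 2 * np.pi * Q**3 / (H**3 * C_LIGHT**2)  # flux prefactor, m^-2 s^-1 eV^-3
F_SUN = 2.16e-5                              # solid-angle fraction of the sun

def photon_flux(eg_ev, t_kelvin):
    """Blackbody photon flux above eg_ev [eV] into a hemisphere."""
    kt = KB * t_kelvin
    val, _ = integrate.quad(lambda e: e**2 / np.expm1(e / kt),
                            eg_ev, eg_ev + 40 * kt)
    return C0 * val

def sq_efficiency(eg_ev, t_sun=6000.0, t_cell=300.0):
    kt_sun = KB * t_sun
    p_in = F_SUN * C0 * Q * integrate.quad(
        lambda e: e**3 / np.expm1(e / kt_sun), 1e-3, 40 * kt_sun)[0]
    j_ph = Q * F_SUN * photon_flux(eg_ev, t_sun)   # photogenerated current
    j0 = Q * photon_flux(eg_ev, t_cell)            # radiative recombination
    v = np.linspace(0.0, eg_ev, 2000)
    power = (j_ph - j0 * np.exp(v / (KB * t_cell))) * v
    return power.max() / p_in

print(f"detailed-balance limit at Eg = 1.34 eV: {sq_efficiency(1.34):.3f}")
```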

  17. A detailed analysis of the recombination landscape of the button mushroom Agaricus bisporus var. bisporus.

    Science.gov (United States)

    Sonnenberg, Anton S M; Gao, Wei; Lavrijssen, Brian; Hendrickx, Patrick; Sedaghat-Tellgerd, Narges; Foulongne-Oriol, Marie; Kong, Won-Sik; Schijlen, Elio G W M; Baars, Johan J P; Visser, Richard G F

    2016-08-01

    The button mushroom (Agaricus bisporus) is one of the world's most cultivated mushroom species, but in spite of its economic importance, generation of new cultivars by outbreeding is exceptional. Previous genetic analyses of the white bisporus variety, including all cultivars and most wild isolates, revealed that crossing-over frequencies are low, which might explain the failure to introduce novel traits into existing cultivars. By generating two high-quality whole genome sequence assemblies (one de novo and the other by improving the existing reference genome) of the first commercial white hybrid, Horst U1, a detailed study of the crossover (CO) landscape was initiated. Using a set of 626 SNPs in a haploid offspring of 139 single spore isolates and whole genome sequencing of a limited number of homo- and heterokaryotic single spore isolates, we precisely mapped all COs, showing that they are almost exclusively restricted to regions of about 100 kb at the chromosome ends. Most basidia of A. bisporus var. bisporus produce two spores, which preferentially pair non-sister nuclei. Combined with the COs restricted to the chromosome ends, these spores retain most of the heterozygosity of the parent, thus explaining how present-day white cultivars are genetically so close to the first hybrid marketed in 1980. To our knowledge this is the first example of an organism which displays such a specific CO landscape.

  18. Detailed balance analysis of area de-coupled double tandem photovoltaic modules

    Science.gov (United States)

    Strandberg, Rune

    2015-01-01

    This paper describes how layers of area de-coupled top and bottom cells in photovoltaic tandem modules can increase the efficiency of two-terminal tandem devices. The point of the area de-coupling is to allow the number of top cells to differ from the number of bottom cells. Within each of the layers, the cells can be horizontally series-connected and the layers can then be current- or voltage-matched with each other in a tandem module. Using detailed balance modeling, it is shown that two-terminal tandem modules of this type can achieve the same theoretical efficiency as stacks of independently operated cells, often referred to as four-terminal cells. Optimal ratios of the number of bottom cells to the number of top cells are calculated. Finally, it is shown that modules with a bottom layer consisting of 60 cells with a band gap of 1.11 eV, resembling standard silicon modules, offer sufficient resolution to optimize the number of top cells and achieve high efficiency over a large range of top cell band gaps. This result extends the list of materials that can be used as top cells in two-terminal tandem modules with silicon bottom cells.
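
    As a toy illustration of why decoupling the cell counts helps, the sketch below picks the integer number of series-connected top cells whose string voltage best matches a 60-cell bottom layer for voltage-matched (parallel) coupling; the per-cell voltages are assumed placeholder values, not results from the paper.

```python
# Assumed per-cell maximum-power voltages (placeholders, not from the paper).
v_bottom, n_bottom = 0.65, 60      # ~standard 60-cell silicon bottom layer
v_top = 1.10                       # hypothetical wide-gap top cell

target = v_bottom * n_bottom       # bottom-layer string voltage [V]
n_top = round(target / v_top)      # nearest integer count of top cells
mismatch = abs(n_top * v_top - target) / target
print(f"n_top = {n_top}, relative voltage mismatch = {mismatch:.2%}")
```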

  19. Detailed Analysis of the Genetic and Epigenetic Signatures of iPSC-Derived Mesodiencephalic Dopaminergic Neurons

    Directory of Open Access Journals (Sweden)

    Reinhard Roessler

    2014-04-01

    Full Text Available Induced pluripotent stem cells (iPSCs) hold great promise for in vitro generation of disease-relevant cell types, such as the mesodiencephalic dopaminergic (mdDA) neurons involved in Parkinson’s disease. Although iPSC-derived midbrain DA neurons have been generated, detailed genetic and epigenetic characterizations of such neurons are lacking. The goal of this study was to examine the authenticity of iPSC-derived DA neurons obtained by established protocols. We FACS-purified mdDA (Pitx3Gfp/+) neurons derived from mouse iPSCs and primary mdDA (Pitx3Gfp/+) neurons to analyze and compare their genetic and epigenetic features. Although iPSC-derived DA neurons largely adopted the characteristics of their in vivo counterparts, relevant deviations in global gene expression and DNA methylation were found. Hypermethylated genes, mainly involved in neurodevelopment and basic neuronal functions, consequently showed reduced expression levels. Such abnormalities should be addressed, because they might affect long-term functionality and hamper the potential of iPSC-derived DA neurons for in vitro disease modeling or cell-based therapy.

  20. A Detailed Circuit Analysis of the Lawrence Livermore National Laboratory Building 141 Detonator Test Facility

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Wilson, M J; Wilson, J H

    2003-10-01

    A detailed electrical equivalent circuit of an as-built utility fault simulator is presented. Standard construction techniques for light industrial facilities were used to build a test-bed for evaluating utility power level faults into unintentional victims. The initial components or victims of interest are commercial detonators. Other possible candidates for fault response analyses include motors, power supplies, control systems, computers, or other electronic equipment. Measured Thevenin parameters of all interconnections provide the selected component values used in the model. Included in the model is an opening 10 HP motor circuit demonstrating voltage transients commonly seen on branch circuits from inductive loads common to industrial installations. Complex transmission lines were developed to represent real world transmission line effects possible from the associated branch circuits. To reduce the initial circuit stabilization delay a set of non-linear resistive elements are employed. The resulting model has assisted in confirming previous detonator safety work and supported the definition of critical parameters needed for continued safety assessment of victims to utility type power sources.

  1. Detailed analysis of the cell-inactivation mechanism by accelerated protons and light ions

    Energy Technology Data Exchange (ETDEWEB)

    Kundrat, Pavel [Institute of Physics, Academy of Sciences of the Czech Republic, Na Slovance 2, CZ-182 21 Praha 8 (Czech Republic)]

    2006-03-07

    A detailed study of the biological effects of diverse quality radiations, addressing their biophysical interpretation, is presented. Published survival data for V79 cells irradiated by monoenergetic protons, helium-3, carbon and oxygen ions and for CHO cells irradiated by carbon ions have been analysed using the probabilistic two-stage model of cell inactivation. Three different classes of DNA damage formed by traversing particles have been distinguished, namely severe single-track lesions which might lead to cell inactivation directly, less severe lesions where cell inactivation is caused by their combinations, and lesions of negligible severity that can be repaired easily. Probabilities of single ions forming these lesions have been assessed in dependence on their linear energy transfer (LET) values. Damage induction probabilities increase with atomic number and LET. While combined lesions play a crucial role at lower LET values, single-track damage dominates in high-LET regions. The yields of single-track lethal lesions for protons have been compared with Monte Carlo estimates of complex DNA lesions, indicating that lethal events correlate well with complex DNA double-strand breaks. The decrease in the single-track damage probability for protons of LET above approximately 30 keV μm⁻¹, suggested by limited experimental evidence, is discussed, together with the consequent differences in the mechanisms of biological effects between protons and heavier ions. Applications of the results in hadrontherapy treatment planning are outlined.

  2. Selection Metric for Photovoltaic Materials Screening Based on Detailed-Balance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lany, Stephan [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]; Blank, Beatrix [IEK5-Photovoltaics]; Kirchartz, Thomas [IEK5-Photovoltaics; University of Duisburg-Essen]; Rau, Uwe [IEK5-Photovoltaics]

    2017-08-31

    The success of recently discovered absorber materials for photovoltaic applications has been generating increasing interest in systematic materials screening over the last years. However, the key for a successful materials screening is a suitable selection metric that goes beyond the Shockley-Queisser theory that determines the thermodynamic efficiency limit of an absorber material solely by its band-gap energy. In this work, we develop a selection metric to quantify the potential photovoltaic efficiency of a material. Our approach is compatible with detailed balance and applicable in computational and experimental materials screening. We use the complex refractive index to calculate radiative and nonradiative efficiency limits and the respective optimal thickness in the high mobility limit. We compare our model to the widely applied selection metric by Yu and Zunger [Phys. Rev. Lett. 108, 068701 (2012)] with respect to their dependence on thickness, internal luminescence quantum efficiency, and refractive index. Finally, the model is applied to complex refractive indices calculated via electronic structure theory.

  3. Resolution requirements for monitor viewing of digital flat-panel detector radiographs: a contrast detail analysis

    Science.gov (United States)

    Peer, Siegfried; Steingruber, Iris; Gassner, Eva; Peer, Regina; Giacomuzzi, Salvatore M.

    2002-05-01

    Since the introduction of digital flat-panel detectors into clinical routine, the discussion on monitor specifications for primary soft-copy reading has gained new impetus. Major concerns exist for the viewing of tiny opacities such as pulmonary nodules. In this study CDRAD phantom images were acquired on a caesium iodide/amorphous silicon detector at varying exposure levels. Images were read three times by three observers on clinical 1K and 2K monitor workstations. All typical workstation functions, such as magnification and window/level setting, were applied during image reading. Correct detection ratios were calculated according to the CDRAD evaluation manual. Observer ratings were highest for high-dose exposure and 2K monitor reading. No significant difference was detected in the correct detection ratios of the observers. However, the difference between the two types of workstations (1K versus 2K monitors), although less than 3%, was significant at the 95% confidence level. This is in good accordance with recently published clinical studies. However, further clinical work will be needed to strengthen this laboratory-based impression. Given these subtle differences in low-contrast detail detection on 1K and 2K clinical PACS workstations, we should probably rethink the recommendations of various national boards for the use of 2K monitors.

  4. Detailed seismotectonic analysis of Sumatra subduction zone revealed by high precision earthquake location

    Science.gov (United States)

    Sagala, Ricardo Alfencius; Harjadi, P. J. Prih; Heryandoko, Nova; Sianipar, Dimas

    2017-07-01

    Sumatra is one of the regions of highest seismicity in Indonesia. The subduction of the Indo-Australian plate beneath the Eurasian plate in western Sumatra contributes many significant earthquakes that occur in this area. These earthquake events can be used to analyze the seismotectonics of the Sumatra subduction zone and its system. In this study we use the teleseismic double-difference method to obtain a more precise earthquake distribution in the Sumatra subduction zone. We use a 3D nested regional-global velocity model and a combination of data from both the ISC (International Seismological Centre) and BMKG (Agency for Meteorology, Climatology and Geophysics, Indonesia). We successfully relocated about 6886 earthquakes that occurred in the period 1981-2015. We consider these new locations to be more precise than the regular bulletin, and the relocation results show greatly reduced RMS travel-time residuals. Using these data, we construct a new seismotectonic map of Sumatra. A well-constrained geometry of the subducting slab, faults and volcanic arc can be obtained from the new bulletin. The results also show that many moderate-to-deep events occur at depths of 140-170 km, and we consider the relation of these slab events to the volcanic arc and the inland fault system. A reliable slab model is also built from a regression equation using the newly relocated data. We also analyze the spatial-temporal seismotectonics using b-value mapping, inspected in detail both horizontally and in vertical cross-section.
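
    The b-value mapping step typically rests on the maximum-likelihood estimator of Aki (1965) with a binning correction; a minimal sketch on synthetic magnitudes (not the relocated Sumatra catalog) follows.

```python
import numpy as np

rng = np.random.default_rng(3)
mc, dm = 4.5, 0.1        # completeness magnitude and catalog binning width

# Synthetic Gutenberg-Richter magnitudes with a true b-value of 1.0,
# binned to 0.1 units the way catalog magnitudes usually come.
mags = (mc - dm / 2) + rng.exponential(scale=1 / np.log(10), size=2000)
mags = np.round(mags / dm) * dm

# Aki (1965) maximum-likelihood estimator with Utsu's binning correction.
b = np.log10(np.e) / (mags.mean() - (mc - dm / 2))
b_err = b / np.sqrt(len(mags))      # first-order standard error
print(f"b = {b:.2f} +/- {b_err:.2f}")
```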

  5. Crucial role of detailed function, task, timeline, link and human vulnerability analyses in HRA. [Human Reliability Analysis (HRA)

    Energy Technology Data Exchange (ETDEWEB)

    Ryan, T.G.; Haney, L.N.; Ostrom, L.T.

    1992-01-01

    This paper addresses one major cause of large uncertainties in human reliability analysis (HRA) results, that is, the absence of detailed function, task, timeline, link and human vulnerability analyses. All too often this crucial step in the HRA process is done in a cursory fashion, using word of mouth or written procedures which may themselves incompletely or inaccurately represent the human action sequences and human error vulnerabilities being analyzed. The paper examines the potential contributions these detailed analyses can make in achieving quantitative and qualitative HRA results which are: (1) credible, that is, minimizing uncertainty; (2) auditable, that is, systematically linking quantitative results to the qualitative information from which they are derived; (3) capable of supporting root cause analyses on human reliability factors determined to be major contributors to risk; and (4) capable of repeated measures and of being combined with similar results from other analyses to examine HRA issues transcending individual systems and facilities. Based on experience analyzing test and commercial nuclear reactors and medical applications of nuclear technology, an iterative process is suggested for performing detailed function, task, timeline, link and human vulnerability analyses using documentation reviews, open-ended and structured interviews, direct observations, and group techniques. Finally, the paper concludes that detailed analyses done in this manner by knowledgeable human factors practitioners can contribute significantly to the credibility, auditability, causal factor analysis, and combining goals of HRA.

  6. Analysis of detailed aerodynamic field measurements using results from an aeroelastic code

    Energy Technology Data Exchange (ETDEWEB)

    Schepers, J.G. [Energy Research Centre, Petten (Netherlands)]; Feigl, L. [Ecotecnia S. coop.c.l. (Spain)]; Rooij, R. van; Bruining, A. [Delft Univ. of Technology (Netherlands)]

    2004-07-01

    In this article an analysis is given of aerodynamic field measurements on wind turbine blades. The analysis starts with a consistency check on the measurements, relating the measured local aerodynamic segment forces to the overall rotor loads; the results are found to be very consistent. Moreover, a comparison is made between the measured results and results calculated with an aeroelastic code. On the basis of this comparison, the aerodynamic modelling in the aeroelastic code could be improved. This holds in particular for the modelling of 3D stall effects, not only on the lift but also on the drag, and for the modelling of tip effects.

  7. Detailed analysis of the predictions of loop quantum cosmology for the primordial power spectra

    CERN Document Server

    Agullo, Ivan

    2015-01-01

    We provide an exhaustive numerical exploration of the predictions of loop quantum cosmology (LQC), with a post-bounce phase of inflation, for the primordial power spectra of scalar and tensor perturbations. We extend previous analyses by characterizing the phenomenologically relevant parameter space and by constraining it using observations. Furthermore, we characterize the shape of LQC corrections to observable quantities across this parameter space. Our analysis provides a framework to contrast the theory more accurately with forthcoming polarization data, and it also paves the way for the computation of other observables beyond the power spectra, such as non-Gaussianity.

  8. 3D geometry analysis of the medial meniscus--a statistical shape modeling approach.

    Science.gov (United States)

    Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N

    2014-10-01

    The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to design well-functioning anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation, and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between clusters 1, 2 and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence
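
    The core computation, PCA on aligned shape vectors followed by reconstruction of each shape as the mean plus a linear combination of principal components, can be sketched as follows; the random landmark data stand in for the study's 35 meniscus geometries.

```python
import numpy as np

rng = np.random.default_rng(4)
# Placeholder aligned shapes: 35 subjects, 100 landmarks, x/y/z stacked.
n_subjects, n_landmarks = 35, 100
shapes = rng.standard_normal((n_subjects, n_landmarks * 3))

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# SVD of the centered data matrix yields the principal components directly.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by PC1, PC2:", np.round(explained[:2], 3))

# Re-express subject 0 as mean shape + linear combination of the first k modes.
k = 5
scores = centered[0] @ vt[:k].T
approx = mean_shape + scores @ vt[:k]
print("reconstruction error with", k, "modes:",
      round(float(np.linalg.norm(approx - shapes[0])), 2))
```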

  9. 3D geometry analysis of the medial meniscus – a statistical shape modeling approach

    Science.gov (United States)

    Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N

    2014-01-01

    The geometry-dependent functioning of the meniscus indicates that detailed knowledge on 3D meniscus geometry and its inter-subject variation is essential to design well functioning anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. Also we performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between cluster 1, 2 and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence

  10. Detailed analysis of the African green monkey model of Nipah virus disease.

    Science.gov (United States)

    Johnston, Sara C; Briese, Thomas; Bell, Todd M; Pratt, William D; Shamblin, Joshua D; Esham, Heather L; Donnelly, Ginger C; Johnson, Joshua C; Hensley, Lisa E; Lipkin, W Ian; Honko, Anna N

    2015-01-01

    Henipaviruses are implicated in severe and frequently fatal pneumonia and encephalitis in humans. There are no approved vaccines or treatments available for human use, and testing of candidates requires the use of well-characterized animal models that mimic human disease. We performed a comprehensive and statistically-powered evaluation of the African green monkey model to define parameters critical to disease progression and the extent to which they correlate with human disease. African green monkeys were inoculated by the intratracheal route with 2.5 × 10⁴ plaque forming units of the Malaysia strain of Nipah virus. Physiological data captured using telemetry implants and assessed in conjunction with clinical pathology were consistent with shock, and histopathology confirmed widespread tissue involvement associated with systemic vasculitis in animals that succumbed to acute disease. In addition, relapse encephalitis was identified in 100% of animals that survived beyond the acute disease phase. Our data suggest that disease progression in the African green monkey is comparable to the variable outcome of Nipah virus infection in humans.

  11. Detailed analysis of the African green monkey model of Nipah virus disease.

    Directory of Open Access Journals (Sweden)

    Sara C Johnston

    Full Text Available Henipaviruses are implicated in severe and frequently fatal pneumonia and encephalitis in humans. There are no approved vaccines or treatments available for human use, and testing of candidates requires the use of well-characterized animal models that mimic human disease. We performed a comprehensive and statistically-powered evaluation of the African green monkey model to define parameters critical to disease progression and the extent to which they correlate with human disease. African green monkeys were inoculated by the intratracheal route with 2.5 × 10⁴ plaque forming units of the Malaysia strain of Nipah virus. Physiological data captured using telemetry implants and assessed in conjunction with clinical pathology were consistent with shock, and histopathology confirmed widespread tissue involvement associated with systemic vasculitis in animals that succumbed to acute disease. In addition, relapse encephalitis was identified in 100% of animals that survived beyond the acute disease phase. Our data suggest that disease progression in the African green monkey is comparable to the variable outcome of Nipah virus infection in humans.

  12. Bringing statistics up to speed with data in analysis of lymphocyte motility.

    Science.gov (United States)

    Letendre, Kenneth; Donnadieu, Emmanuel; Moses, Melanie E; Cannon, Judy L

    2015-01-01

    Two-photon (2P) microscopy provides immunologists with 3D video of the movement of lymphocytes in vivo. Motility parameters extracted from these videos allow detailed analysis of lymphocyte motility in lymph nodes and peripheral tissues. However, standard parametric statistical analyses such as the Student's t-test are often used incorrectly, and fail to take into account confounds introduced by the experimental methods, potentially leading to erroneous conclusions about T cell motility. Here, we compare the motility of WT T cells versus PKCθ-/-, CARMA1-/-, CCR7-/-, and PTX-treated T cells. We show that the fluorescent dyes used to label T cells have significant effects on T cell motility, and we demonstrate the use of factorial ANOVA as a statistical tool that can control for these effects. In addition, researchers often choose between the use of "cell-based" parameters, obtained by averaging multiple steps of a single cell over time (e.g. cell mean speed), or "step-based" parameters, in which all steps of a cell population (e.g. instantaneous speed) are grouped without regard for the cell track. Using mixed model ANOVA, we show that we can maintain cell-based analyses without losing the statistical power of step-based data. We find that as we use additional levels of statistical control, we can more accurately estimate the speed of T cells as they move in lymph nodes, as well as measure the impact of individual signaling molecules on T cell motility. As there is increasing interest in using computational modeling to understand T cell behavior in vivo, these quantitative measures not only give us a better determination of actual T cell movement, they may prove crucial for models to generate accurate predictions about T cell behavior.
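
    As a sketch of the mixed-model idea, a random intercept per cell track lets step-level speeds be analyzed without pseudo-replication; the synthetic data below, including the 'dye' factor, are invented for illustration (using statsmodels).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
# Synthetic step-level speeds: 40 cell tracks, 30 instantaneous steps each,
# with 'dye' as a hypothetical labeling-dye factor (all values invented).
rows = []
for cell in range(40):
    dye = "dyeA" if cell < 20 else "dyeB"
    cell_mean = 10.0 + (1.5 if dye == "dyeB" else 0.0) + rng.normal(0, 2)
    rows += [{"cell": cell, "dye": dye, "speed": cell_mean + rng.normal(0, 3)}
             for _ in range(30)]
df = pd.DataFrame(rows)

# A random intercept per cell keeps cell-based structure while using every
# step, which is the point of the mixed-model approach described above.
result = smf.mixedlm("speed ~ dye", df, groups=df["cell"]).fit()
print(result.summary())
```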

  13. Bringing statistics up to speed with data in analysis of lymphocyte motility.

    Directory of Open Access Journals (Sweden)

    Kenneth Letendre

    Full Text Available Two-photon (2P) microscopy provides immunologists with 3D video of the movement of lymphocytes in vivo. Motility parameters extracted from these videos allow detailed analysis of lymphocyte motility in lymph nodes and peripheral tissues. However, standard parametric statistical analyses such as the Student's t-test are often used incorrectly, and fail to take into account confounds introduced by the experimental methods, potentially leading to erroneous conclusions about T cell motility. Here, we compare the motility of WT T cells versus PKCθ-/-, CARMA1-/-, CCR7-/-, and PTX-treated T cells. We show that the fluorescent dyes used to label T cells have significant effects on T cell motility, and we demonstrate the use of factorial ANOVA as a statistical tool that can control for these effects. In addition, researchers often choose between the use of "cell-based" parameters, obtained by averaging multiple steps of a single cell over time (e.g. cell mean speed), or "step-based" parameters, in which all steps of a cell population (e.g. instantaneous speed) are grouped without regard for the cell track. Using mixed model ANOVA, we show that we can maintain cell-based analyses without losing the statistical power of step-based data. We find that as we use additional levels of statistical control, we can more accurately estimate the speed of T cells as they move in lymph nodes, as well as measure the impact of individual signaling molecules on T cell motility. As there is increasing interest in using computational modeling to understand T cell behavior in vivo, these quantitative measures not only give us a better determination of actual T cell movement, they may prove crucial for models to generate accurate predictions about T cell behavior.

  14. Statistics and data analysis for financial engineering with R examples

    CERN Document Server

    Ruppert, David

    2015-01-01

    The new edition of this influential textbook, geared towards graduate or advanced undergraduate students, teaches the statistics necessary for financial engineering. In doing so, it illustrates concepts using financial markets and economic data, R Labs with real-data exercises, and graphical and analytic methods for modeling and diagnosing modeling errors. Financial engineers now have access to enormous quantities of data. To make use of these data, the powerful methods in this book, particularly about volatility and risks, are essential. Strengths of this fully-revised edition include major additions to the R code and the advanced topics covered. Individual chapters cover, among other topics, multivariate distributions, copulas, Bayesian computations, risk management, multivariate volatility and cointegration. Suggested prerequisites are basic knowledge of statistics and probability, matrices and linear algebra, and calculus. There is an appendix on probability, statistics and linear algebra. Practicing fina...

  15. Statistical mechanics analysis of thresholding 1-bit compressed sensing

    CERN Document Server

    Xu, Yingying

    2016-01-01

    The one-bit compressed sensing framework aims to reconstruct a sparse signal by only using the sign information of its linear measurements. To compensate for the loss of scale information, past studies in the area have proposed recovering the signal by imposing an additional constraint on the L2-norm of the signal. Recently, an alternative strategy that captures scale information by introducing a threshold parameter to the quantization process was advanced. In this paper, we analyze the typical behavior of the thresholding 1-bit compressed sensing utilizing the replica method of statistical mechanics, so as to gain an insight for properly setting the threshold value. Our result shows that, fixing the threshold at a constant value yields better performance than varying it randomly when the constant is optimally tuned, statistically. Unfortunately, the optimal threshold value depends on the statistical properties of the target signal, which may not be known in advance. In order to handle this inconvenience, we ...
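
    The measurement model under analysis is easy to state in code; the sketch below only generates the thresholded one-bit measurements y = sign(Ax − θ) for a synthetic sparse signal (signal length, measurement count, and threshold are illustrative), leaving reconstruction and the replica analysis aside.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m, k = 256, 512, 8        # signal length, measurements, sparsity (toy sizes)

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(n)                      # sensing matrix

theta = 0.1                  # fixed threshold that carries the scale information
y = np.sign(A @ x - theta)   # one bit kept per measurement
print("fraction of +1 measurements:", (y > 0).mean())
```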

  16. Sensitivity Analysis and Statistical Convergence of a Saltating Particle Model

    CERN Document Server

    Maldonado, S

    2016-01-01

    Saltation models provide considerable insight into near-bed sediment transport. This paper outlines a simple, efficient numerical model of stochastic saltation, which is validated against previously published experimental data on saltation in a channel of nearly horizontal bed. Convergence tests are systematically applied to ensure the model is free from statistical errors emanating from the number of particle hops considered. Two criteria for statistical convergence are derived; according to the first criterion, at least $10^3$ hops appear to be necessary for convergent results, whereas $10^4$ saltations seem to be the minimum required in order to achieve statistical convergence in accordance with the second criterion. Two empirical formulae for lift force are considered: one dependent on the slip (relative) velocity of the particle multiplied by the vertical gradient of the horizontal flow velocity component; the other dependent on the difference between the squares of the slip velocity components at the to...
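
    A generic version of such a statistical convergence test, keeping the running mean of a hop statistic and stopping once it stabilizes, can be sketched as follows; the hop-length distribution and tolerance are stand-ins, not the paper's model or criteria.

```python
import numpy as np

rng = np.random.default_rng(7)

def hop_length():
    """Placeholder hop statistic; the real model would simulate a saltation."""
    return rng.lognormal(mean=0.0, sigma=0.6)

tol, batch = 1e-3, 500
samples, prev_mean = [], np.inf
while True:
    samples.extend(hop_length() for _ in range(batch))
    cur_mean = np.mean(samples)
    # Naive criterion: stop when adding a batch barely moves the running mean.
    if np.isfinite(prev_mean) and abs(cur_mean - prev_mean) / cur_mean < tol:
        break
    prev_mean = cur_mean
print(f"converged after {len(samples)} hops, mean hop length = {cur_mean:.3f}")
```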

  17. Analysis of Alignment Influence on 3-D Anthropometric Statistics

    Institute of Scientific and Technical Information of China (English)

    CAI Xiuwen; LI Zhizhong; CHANG Chien-Chi; DEMPSEY Patrick

    2005-01-01

    Three-dimensional (3-D) surface anthropometry can provide much more useful information for many applications, such as ergonomic product design, than traditional individual body dimension measurements. However, the traditional definition of the percentile calculation is designed only for 1-D anthropometric data estimates; the same approach cannot be applied directly to 3-D anthropometric statistics, as it could lead to misinterpretations. In this paper, the influence of alignment references on 3-D anthropometric statistics is analyzed mathematically. The analysis shows that different alignment reference points (for example, landmarks) for translation alignment can result in different object shapes when 3-D anthropometric data are processed for percentile values based on coordinates, and that dimension percentile calculations based on coordinate statistics are incompatible with those traditionally based on individual dimensions.

  18. Statistical analysis of natural disasters and related losses

    CERN Document Server

    Pisarenko, VF

    2014-01-01

    The study of disaster statistics and disaster occurrence is a complicated interdisciplinary field involving the interplay of new theoretical findings from several scientific fields like mathematics, physics, and computer science. Statistical studies on the mode of occurrence of natural disasters largely rely on fundamental findings in the statistics of rare events, which were derived in the 20th century. With regard to natural disasters, it is not so much the fact that the importance of this problem for mankind was recognized during the last third of the 20th century - the myths one encounters in ancient civilizations show that the problem of disasters has always been recognized - rather, it is the fact that mankind now possesses the necessary theoretical and practical tools to effectively study natural disasters, which in turn supports effective, major practical measures to minimize their impact. All the above factors have resulted in considerable progress in natural disaster research. Substantial accrued ma...

  19. Detailed Analysis of Solar Data Related to Historical Extreme Geomagnetic Storms: 1868 – 2010

    DEFF Research Database (Denmark)

    Lefèvre, Laure; Vennerstrøm, Susanne; Dumbović, Mateja

    2016-01-01

    An analysis of historical Sun–Earth connection events in the context of the most extreme space weather events of the last ∼ 150 years is presented. To identify the key factors leading to these extreme events, a sample of the most important geomagnetic storms was selected based mainly on the well-...

  20. Data Analysis Details (DS): SE40_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE40_DS1 PowerGet analysis for detection of all peaks Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by PowerFT ...

  1. Data Analysis Details (DS): SE41_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE41_DS1 PowerGet analysis for detection of all peaks Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by PowerFT ...

  2. Data Analysis Details (DS): SE13_DS3 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE13_DS3 PowerGet analysis for detection of all peaks (C3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  3. Data Analysis Details (DS): SE30_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE30_DS1 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  4. Data Analysis Details (DS): SE1_DS3 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE1_DS3 PowerGet analysis for detection of all peaks (C2) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Powe

  5. Data Analysis Details (DS): SE35_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE35_DS1 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  6. Data Analysis Details (DS): SE5_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE5_DS1 PowerGet analysis for annotation of peaks with MS/MS (A3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files

  7. Data Analysis Details (DS): SE10_DS2 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE10_DS2 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  8. Data Analysis Details (DS): SE29_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE29_DS1 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  9. Data Analysis Details (DS): SE8_DS2 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE8_DS2 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Powe

  10. Data Analysis Details (DS): SE31_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE31_DS1 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  11. Data Analysis Details (DS): SE33_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE33_DS1 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  12. Data Analysis Details (DS): SE7_DS3 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE7_DS3 PowerGet analysis for detection of all peaks (C4) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Powe

  13. Data Analysis Details (DS): SE12_DS2 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE12_DS2 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  14. Data Analysis Details (DS): SE36_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE36_DS1 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  15. Data Analysis Details (DS): SE6_DS3 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE6_DS3 PowerGet analysis for detection of all peaks (C4) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Powe

  16. Data Analysis Details (DS): SE12_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE12_DS1 PowerGet analysis for annotation of peaks with MS/MS (A3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text file

  17. Data Analysis Details (DS): SE16_DS2 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE16_DS2 PowerGet analysis for detection of all peaks (B3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  18. Data Analysis Details (DS): SE14_DS3 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE14_DS3 PowerGet analysis for detection of all peaks (C3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  19. Data Analysis Details (DS): SE17_DS3 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE17_DS3 PowerGet analysis for detection of all peaks (C3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow

  20. Data Analysis Details (DS): SE27_DS1 [Metabolonote[Archive

    Lifescience Database Archive (English)

    Full Text Available SE27_DS1 PowerGet analysis for detection of all peaks (C3) Raw data files are converted to text file by MSGet software without cut off value and peaks are extracted from the text files by Pow