WorldWideScience

Sample records for sophisticated statistical tools

  1. Automatically Assessing Lexical Sophistication: Indices, Tools, Findings, and Application

    Science.gov (United States)

    Kyle, Kristopher; Crossley, Scott A.

    2015-01-01

    This study explores the construct of lexical sophistication and its applications for measuring second language lexical and speaking proficiency. In doing so, the study introduces the Tool for the Automatic Analysis of LExical Sophistication (TAALES), which calculates text scores for 135 classic and newly developed lexical indices related to word…

  2. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: we design data models, develop infrastructures for data sharing, and build tools for data analysis. Statistical datasets curated by National

  3. Sophisticated Players and Sophisticated Agents

    NARCIS (Netherlands)

    Rustichini, A.

    1998-01-01

    A sophisticated player is an individual who takes the actions of opponents in a strategic situation as determined by the decisions of rational opponents, and acts accordingly. A sophisticated agent is rational in the choice of his action, but ignores the fact that he is part of a strategic

  4. Cancer Data and Statistics Tools

    Science.gov (United States)

    … Cancer Data and Statistics Tools. United States Cancer Statistics: Data Visualizations. The …

  5. The value of multivariate model sophistication

    DEFF Research Database (Denmark)

    Rombouts, Jeroen; Stentoft, Lars; Violante, Francesco

    2014-01-01

    We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 444 multivariate models that differ in their specification. In addition to investigating the value of model sophistication in terms of dollar losses directly, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performances.
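
    The abstract does not spell out the loss function, so purely as an illustration, here is a hedged Python sketch of ranking candidate models by a dollar pricing loss (root mean squared pricing error). The model names and prices are hypothetical, and the paper's model confidence set procedure is considerably more involved than this simple ranking.

    ```python
    import numpy as np

    # Hypothetical inputs: observed market option prices and model-implied
    # prices for two candidate multivariate volatility models.
    market = np.array([12.1, 8.4, 5.2, 3.3, 1.9])
    model_prices = {
        "DCC-Gaussian": np.array([12.5, 8.1, 5.6, 3.0, 2.2]),
        "DCC-Laplace":  np.array([12.2, 8.3, 5.4, 3.2, 2.0]),
    }

    # Dollar loss: root mean squared pricing error, in dollars.
    losses = {name: float(np.sqrt(np.mean((p - market) ** 2)))
              for name, p in model_prices.items()}

    # Rank models from best (smallest loss) to worst.
    for name, loss in sorted(losses.items(), key=lambda kv: kv[1]):
        print(f"{name}: ${loss:.3f} RMSE")
    ```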

  6. The tool for the automatic analysis of lexical sophistication (TAALES): version 2.0.

    Science.gov (United States)

    Kyle, Kristopher; Crossley, Scott; Berger, Cynthia

    2017-07-11

    This study introduces the second release of the Tool for the Automatic Analysis of Lexical Sophistication (TAALES 2.0), a freely available and easy-to-use text analysis tool. TAALES 2.0 is housed on a user's hard drive (allowing for secure data processing) and is available on most operating systems (Windows, Mac, and Linux). TAALES 2.0 adds 316 indices to the original tool. These indices are related to word frequency, word range, n-gram frequency, n-gram range, n-gram strength of association, contextual distinctiveness, word recognition norms, semantic network, and word neighbors. In this study, we validated TAALES 2.0 by investigating whether its indices could be used to model both holistic scores of lexical proficiency in free writes and word choice scores in narrative essays. The results indicated that the TAALES 2.0 indices could be used to explain 58% of the variance in lexical proficiency scores and 32% of the variance in word-choice scores. Newly added TAALES 2.0 indices, including those related to n-gram association strength, word neighborhood, and word recognition norms, featured heavily in these predictor models, suggesting that TAALES 2.0 represents a substantial upgrade.
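
    To make the idea of a frequency-based lexical index concrete, here is a minimal sketch of one such measure (mean log word frequency, where lower values indicate rarer, more sophisticated vocabulary). The tiny frequency dictionary and function name are hypothetical; TAALES itself draws on large reference corpora rather than a toy list.

    ```python
    import math
    import re

    # Hypothetical frequency norms (occurrences per million words).
    FREQ_NORMS = {"the": 50000.0, "cat": 120.0, "sat": 80.0, "ubiquitous": 3.5}

    def mean_log_frequency(text: str) -> float:
        """Average log10 frequency of known words; lower = more sophisticated."""
        words = re.findall(r"[a-z']+", text.lower())
        logs = [math.log10(FREQ_NORMS[w]) for w in words if w in FREQ_NORMS]
        return sum(logs) / len(logs) if logs else float("nan")

    print(mean_log_frequency("The ubiquitous cat sat"))
    ```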

  7. The Impact of Financial Sophistication on Adjustable Rate Mortgage Ownership

    Science.gov (United States)

    Smith, Hyrum; Finke, Michael S.; Huston, Sandra J.

    2011-01-01

    The influence of a financial sophistication scale on adjustable-rate mortgage (ARM) borrowing is explored. Descriptive statistics and regression analysis using recent data from the Survey of Consumer Finances reveal that ARM borrowing is driven by both the least and most financially sophisticated households but for different reasons. Less…

  8. xSyn: A Software Tool for Identifying Sophisticated 3-Way Interactions From Cancer Expression Data

    Directory of Open Access Journals (Sweden)

    Baishali Bandyopadhyay

    2017-08-01

    Background: Constructing gene co-expression networks from cancer expression data is important for investigating the genetic mechanisms underlying cancer. However, correlation coefficients or linear regression models are not able to model sophisticated relationships among gene expression profiles. Here, we address the 3-way interaction in which 2 genes' expression levels are clustered in different spatial locations under the control of a third gene's expression levels. Results: We present xSyn, a software tool for identifying such 3-way interactions from cancer gene expression data, based on an optimization procedure involving UPGMA (Unweighted Pair Group Method with Arithmetic Mean) and synergy. Its effectiveness is demonstrated by application to 2 real gene expression data sets. Conclusions: xSyn is a useful tool for decoding the complex relationships among gene expression profiles. xSyn is available at http://www.bdxconsult.com/xSyn.html.
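
    The abstract names UPGMA as a building block of xSyn's optimization. As a hedged illustration of that building block only (not of xSyn's synergy score), UPGMA corresponds to agglomerative clustering with average linkage, available in SciPy; the expression values below are made up.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import pdist

    # Hypothetical expression profiles: rows = samples, columns = 2 genes.
    expr = np.array([[1.0, 2.1], [1.2, 1.9], [5.0, 5.2], [4.8, 5.5]])

    # UPGMA = average-linkage clustering on pairwise Euclidean distances.
    tree = linkage(pdist(expr), method="average")
    labels = fcluster(tree, t=2, criterion="maxclust")
    print(labels)  # e.g. [1 1 2 2]: samples fall into two expression clusters
    ```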

  9. The Euclid Statistical Matrix Tool

    Directory of Open Access Journals (Sweden)

    Curtis Tilves

    2017-06-01

    Stataphobia, a term used to describe the fear of statistics and research methods, can result from a lack of proper training in statistical methods. Poor statistical methods training can have an effect on health policy decision making and may play a role in the low research productivity seen in developing countries. One way to reduce Stataphobia is to intervene in the teaching of statistics in the classroom; however, such an intervention must tackle several obstacles, including student interest in the material, multiple ways of learning materials, and language barriers. We present here the Euclid Statistical Matrix, a tool for combatting Stataphobia on a global scale. This free tool comprises popular statistical YouTube channels and web sources that teach and demonstrate statistical concepts in a variety of presentation methods. Working with international teams in Iran, Japan, Egypt, Russia, and the United States, we have also developed the Statistical Matrix in multiple languages to address language barriers to learning statistics. By utilizing already-established large networks, we are able to disseminate our tool to thousands of Farsi-speaking university faculty and students in Iran and the United States. Future dissemination of the Euclid Statistical Matrix throughout Central Asia, with support from local universities, may help to combat low research productivity in this region.

  10. [Statistics for statistics?--Thoughts about psychological tools].

    Science.gov (United States)

    Berger, Uwe; Stöbel-Richter, Yve

    2007-12-01

    Statistical methods take a prominent place in psychologists' education. Known as difficult to understand and hard to learn, these contents are feared by students. Those who do not aspire to a research career at a university quickly forget the drilled material. Furthermore, because it does not seem applicable to work with patients and other target groups at first glance, methodological education as a whole has often been questioned. For many psychological practitioners, statistical education makes sense only by enforcing respect from other professions, namely physicians. For their own work, statistics is rarely taken seriously as a professional tool. The reason seems clear: statistics treats numbers, while psychotherapy treats subjects. So, is statistics an end in itself? With this article, we try to answer the question of whether and how statistical methods are represented within psychotherapeutic and psychological research. We therefore analyzed 46 original articles from a complete volume of the journal Psychotherapy, Psychosomatics, Psychological Medicine (PPmP). Within the volume, 28 different analysis methods were applied, of which 89 per cent were directly based on statistics. Being able to write and critically read original articles, as the backbone of research, presumes a high degree of statistical education. To ignore statistics means to ignore research and, not least, to surrender one's own professional work to arbitrariness.

  11. Statistics for economics

    CERN Document Server

    Naghshpour, Shahdad

    2012-01-01

    Statistics is the branch of mathematics that deals with real-life problems. As such, it is an essential tool for economists. Unfortunately, the way you and many other economists learn the concept of statistics is not compatible with the way economists think and learn. The problem is worsened by the use of mathematical jargon and complex derivations. Here's a book that proves none of this is necessary. All the examples and exercises in this book are constructed within the field of economics, thus eliminating the difficulty of learning statistics with examples from fields that have no relation to business, politics, or policy. Statistics is, in fact, not more difficult than economics. Anyone who can comprehend economics can understand and use statistics successfully within this field, including you! This book utilizes Microsoft Excel to obtain statistical results, as well as to perform additional necessary computations. Microsoft Excel is not the software of choice for performing sophisticated statistical analy...

  12. Pension fund sophistication and investment policy

    NARCIS (Netherlands)

    de Dreu, J.; Bikker, J.A.

    This paper assesses the sophistication of pension funds’ investment policies using data on 748 Dutch pension funds during the 1999–2006 period. We develop three indicators of sophistication: gross rounding of investment choices, investments in alternative sophisticated asset classes and ‘home bias’.

  13. In Praise of the Sophists.

    Science.gov (United States)

    Gibson, Walker

    1993-01-01

    Discusses the thinking of the Greek Sophist philosophers, particularly Gorgias and Protagoras, and their importance and relevance for contemporary English instructors. Considers the problem of language as signs of reality in the context of Sophist philosophy. (HB)

  14. Straightforward statistics understanding the tools of research

    CERN Document Server

    Geher, Glenn

    2014-01-01

    Straightforward Statistics: Understanding the Tools of Research is a clear and direct introduction to statistics for the social, behavioral, and life sciences. Based on the author's extensive experience teaching undergraduate statistics, this book provides a narrative presentation of the core principles that provide the foundation for modern-day statistics. With step-by-step guidance on the nuts and bolts of computing these statistics, the book includes detailed tutorials how to use state-of-the-art software, SPSS, to compute the basic statistics employed in modern academic and applied researc

  15. The value of statistical tools to detect data fabrication

    NARCIS (Netherlands)

    Hartgerink, C.H.J.; Wicherts, J.M.; van Assen, M.A.L.M.

    2016-01-01

    We aim to investigate how statistical tools can help detect potential data fabrication in the social- and medical sciences. In this proposal we outline three projects to assess the value of such statistical tools to detect potential data fabrication and make the first steps in order to apply them

  16. Statistical learning and selective inference.

    Science.gov (United States)

    Taylor, Jonathan; Tibshirani, Robert J

    2015-06-23

    We describe the problem of "selective inference." This addresses the following challenge: Having mined a set of data to find potential associations, how do we properly assess the strength of these associations? The fact that we have "cherry-picked"--searched for the strongest associations--means that we must set a higher bar for declaring significant the associations that we see. This challenge becomes more important in the era of big data and complex statistical modeling. The cherry tree (dataset) can be very large and the tools for cherry picking (statistical learning methods) are now very sophisticated. We describe some recent new developments in selective inference and illustrate their use in forward stepwise regression, the lasso, and principal components analysis.
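
    The paper develops exact post-selection inference; the simplest valid remedy it improves upon is sample splitting, sketched below on synthetic data. Selecting the most correlated feature on one half and testing it on the other keeps the p-value honest despite the cherry-picking; this sketch illustrates the problem, not the authors' exact method.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, p = 200, 50
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)          # pure noise: no true associations

    # Cherry-pick the feature most correlated with y on the first half...
    half = n // 2
    corr = X[:half].T @ y[:half]
    j = int(np.argmax(np.abs(corr)))

    # ...then test it on the held-out half, where the selection is independent.
    r, pval = stats.pearsonr(X[half:, j], y[half:])
    print(f"selected feature {j}, held-out p-value = {pval:.3f}")
    ```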

  17. A Statistical Project Control Tool for Engineering Managers

    Science.gov (United States)

    Bauch, Garland T.

    2001-01-01

    This slide presentation reviews the use of a Statistical Project Control Tool (SPCT) for managing engineering projects. A literature review pointed to a definition of project success (i.e., a project is successful when the cost, schedule, technical performance, and quality satisfy the customer), to project success factors, and to traditional project control tools and performance measures, all detailed in the report. The essential problem is that, with resources becoming more limited and the number of projects increasing, project failure is rising; existing methods have limitations, so systematic methods are required. The objective of the work is to provide a new statistical project control tool for project managers. Graphs produced with the SPCT method, plotting results of 3 successful projects and 3 failed projects, are reviewed, with success and failure being defined by the owner.
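
    The presentation does not detail the SPCT's internal statistics, so as a hedged sketch only: a classic statistical-process-control building block for such a tool is a 3-sigma control chart on a project metric. The cost-variance numbers and baseline choice below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical monthly cost-variance observations for one project (%).
    cv = np.array([1.2, -0.8, 0.5, 2.1, -1.5, 0.9, 6.3, 0.4])

    mu = cv[:6].mean()                          # baseline: first 6 months
    sigma = cv[:6].std(ddof=1)
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma   # 3-sigma control limits

    for month, x in enumerate(cv):
        flag = "OUT OF CONTROL" if not (lcl <= x <= ucl) else ""
        print(f"month {month}: {x:+.1f} {flag}")
    ```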

  18. The Value of Multivariate Model Sophistication: An Application to pricing Dow Jones Industrial Average options

    DEFF Research Database (Denmark)

    Rombouts, Jeroen V.K.; Stentoft, Lars; Violante, Francesco

    We assess the predictive accuracy of a large number of multivariate volatility models in terms of pricing options on the Dow Jones Industrial Average. We measure the value of model sophistication in terms of dollar losses by considering a set of 248 multivariate models that differ … innovation for a Laplace innovation assumption improves the pricing in a smaller way. Apart from investigating directly the value of model sophistication in terms of dollar losses, we also use the model confidence set approach to statistically infer the set of models that delivers the best pricing performance.

  19. Statistical and Visualization Data Mining Tools for Foundry Production

    Directory of Open Access Journals (Sweden)

    M. Perzyk

    2007-07-01

    In recent years a rapid development of a new, interdisciplinary knowledge area called data mining has been observed. Its main task is extracting useful information from large amounts of previously collected data. The main possibilities and potential applications of data mining in the manufacturing industry are characterized. The main types of data mining techniques are briefly discussed, including statistical, artificial intelligence, database, and visualization tools. The statistical and visualization methods are presented in more detail, showing their general possibilities and advantages as well as characteristic examples of applications in foundry production. Results of the author's research are presented, aimed at validation of selected statistical tools that can be easily and effectively used in the manufacturing industry. A performance analysis of ANOVA- and contingency-table-based methods, dedicated to determination of the most significant process parameters as well as to detection of possible interactions among them, has been made. Several numerical tests have been performed using simulated data sets with assumed hidden relationships, as well as some real data, related to the strength of ductile cast iron, collected in a foundry. It is concluded that the statistical methods offer relatively easy and fairly reliable tools for extraction of that type of knowledge about foundry manufacturing processes. However, further research is needed, aimed at explaining some imperfections of the investigated tools and assessing their validity for more complex tasks.
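
    Both techniques the authors validate are available in SciPy. A minimal sketch with invented foundry-style data (an ANOVA across three settings of a process parameter, and a contingency table of parameter level versus defect occurrence):

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency, f_oneway

    # Hypothetical tensile-strength samples of ductile cast iron for three
    # levels of one process parameter.
    low = [410, 395, 402, 388]
    mid = [430, 441, 425, 436]
    high = [428, 433, 419, 440]
    F, p_anova = f_oneway(low, mid, high)

    # Hypothetical contingency table: parameter level (rows) vs. defect
    # occurrence (columns: no defect, defect).
    table = np.array([[30, 10],
                      [22, 18],
                      [12, 28]])
    chi2, p_chi2, dof, _ = chi2_contingency(table)

    print(f"ANOVA: F={F:.2f}, p={p_anova:.4f}")
    print(f"Contingency: chi2={chi2:.2f}, dof={dof}, p={p_chi2:.4f}")
    ```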

  20. The musicality of non-musicians: an index for assessing musical sophistication in the general population.

    Directory of Open Access Journals (Sweden)

    Daniel Müllensiefen

    Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of 'musical sophistication' which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI), to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.

  1. The musicality of non-musicians: an index for assessing musical sophistication in the general population.

    Science.gov (United States)

    Müllensiefen, Daniel; Gingras, Bruno; Musil, Jason; Stewart, Lauren

    2014-01-01

    Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of 'musical sophistication' which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI) to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication. Results are discussed in terms of theoretical accounts of implicit and statistical music learning and with regard to social conditions of sophisticated musical engagement.
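
    The Gold-MSI validation involves a full psychometric analysis; purely to illustrate one standard internal-consistency check used for such self-report instruments, here is a sketch of Cronbach's alpha on made-up Likert responses.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x questionnaire-items matrix of scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical Likert responses (5 respondents x 4 items).
    scores = np.array([[4, 5, 4, 4],
                       [2, 2, 3, 2],
                       [5, 4, 5, 5],
                       [3, 3, 2, 3],
                       [4, 4, 4, 5]])
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```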

  2. Instruction of Statistics via Computer-Based Tools: Effects on Statistics' Anxiety, Attitude, and Achievement

    Science.gov (United States)

    Ciftci, S. Koza; Karadag, Engin; Akdal, Pinar

    2014-01-01

    The purpose of this study was to determine the effect of statistics instruction using computer-based tools, on statistics anxiety, attitude, and achievement. This study was designed as quasi-experimental research and the pattern used was a matched pre-test/post-test with control group design. Data was collected using three scales: a Statistics…

  3. RooStats: Statistical Tools for the LHC

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    LHC data, with emphasis on discoveries, confidence intervals, and combined measurements in both the Bayesian and Frequentist approaches. The tools are built on top of the RooFit data modeling language and core ROOT mathematics libraries and persistence technology. These tools have been developed in collaboration with the LHC experiments and used by them to produce numerous physics results, such as the combination of ATLAS and CMS Higgs searches that resulted in a model with more than 200 parameters. We will review new developments which have been included in RooStats and the performance optimizations required to cope with such complex models used by the LHC experiments. We will also show the parallelization capability of these statistical tools using multiple processors via PROOF.

  4. Basic statistical tools in research and data analysis

    Directory of Open Access Journals (Sweden)

    Zulfiqar Ali

    2016-01-01

    Statistical methods involved in carrying out a study include planning, designing, collecting data, analysing, drawing meaningful interpretations, and reporting the research findings. Statistical analysis gives meaning to meaningless numbers, thereby breathing life into lifeless data. The results and inferences are precise only if proper statistical tests are used. This article tries to acquaint the reader with the basic research tools that are utilised while conducting various studies. The article covers a brief outline of the variables, an understanding of quantitative and qualitative variables, and the measures of central tendency. An idea of the sample size estimation, power analysis, and the statistical errors is given. Finally, there is a summary of parametric and non-parametric tests used for data analysis.
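
    As a concrete companion to the parametric/non-parametric summary, a minimal sketch comparing two independent groups both ways with SciPy (the measurements are invented):

    ```python
    from scipy import stats

    # Hypothetical outcome measurements in two independent groups.
    treatment = [5.1, 6.0, 5.8, 6.4, 5.5, 6.1]
    control = [4.8, 5.2, 4.9, 5.6, 5.0, 4.7]

    # Parametric: two-sample t-test (assumes roughly normal data).
    t, p_t = stats.ttest_ind(treatment, control)

    # Non-parametric counterpart: Mann-Whitney U (rank-based, no normality).
    u, p_u = stats.mannwhitneyu(treatment, control, alternative="two-sided")

    print(f"t-test p={p_t:.4f}; Mann-Whitney p={p_u:.4f}")
    ```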

  5. Sophistication and Performance of Italian Agri‐food Exports

    Directory of Open Access Journals (Sweden)

    Anna Carbone

    2012-06-01

    Nonprice competition is increasingly important in world food markets. Recently, the expression 'export sophistication' has been introduced in the economic literature to refer to a wide set of attributes that increase product value. An index has been proposed to measure sophistication in an indirect way, through the per capita GDP of exporting countries (Lall et al., 2006; Haussmann et al., 2007). The paper applies the sophistication measure to the Italian food export sector, moving from an analysis of trends and performance of Italian food exports. An original way to disentangle different components in the temporal variation of the sophistication index is also proposed. Results show that the sophistication index offers original insights on recent trends in world food exports and with respect to Italian core food exports.

  6. Systematization and sophistication of a comprehensive sensitivity analysis program. Phase 2

    International Nuclear Information System (INIS)

    Oyamada, Kiyoshi; Ikeda, Takao

    2004-02-01

    This study developed a detailed assessment by applying a comprehensive sensitivity analysis program to the reliability of TRU waste repository concepts in crystalline rock conditions. We examined each component and groundwater scenario of the geological repository and prepared systematic bases for examining reliability from the standpoint of comprehensiveness. Models and data were refined to examine the reliability. Based on an existing TRU waste repository concept, the effects of parameters on nuclide migration were quantitatively classified. The parameters to be decided quantitatively include the site characteristics of the natural barrier and the design specifications of the engineered barriers. Considering the feasibility of those specifications, reliability is re-examined for combinations of those parameters within a practical range. Future issues are: comprehensive representation of a hybrid geosphere model including fractured media and permeable matrix media; and further sophistication of tools to develop reliable combinations of parameters. It is important to continue this study because the disposal concepts and specifications for TRU-nuclide-containing waste at various sites must be determined rationally and safely through such studies. (author)

  7. The First Sophists and the Uses of History.

    Science.gov (United States)

    Jarratt, Susan C.

    1987-01-01

    Reviews the history of intellectual views on the Greek sophists in three phases: (1) their disparagement by Plato and Aristotle as the morally disgraceful "other"; (2) nineteenth century British positivists' reappraisal of these relativists as ethically and scientifically superior; and (3) twentieth century versions of the sophists as…

  8. Cumulative Dominance and Probabilistic Sophistication

    NARCIS (Netherlands)

    Wakker, P.P.; Sarin, R.H.

    2000-01-01

    Machina & Schmeidler (Econometrica, 60, 1992) gave preference conditions for probabilistic sophistication, i.e. decision making where uncertainty can be expressed in terms of (subjective) probabilities without commitment to expected utility maximization. This note shows that simpler and more general

  9. Does Investors' Sophistication Affect Persistence and Pricing of Discretionary Accruals?

    OpenAIRE

    Lanfeng Kao

    2007-01-01

    This paper examines whether the sophistication of market investors influences management's strategy on discretionary accounting choice, and thus changes the persistence of discretionary accruals. The results show that the persistence of discretionary accruals for firms faced with naive investors is lower than that for firms faced with sophisticated investors. The results also demonstrate that sophisticated investors indeed incorporate the implications of current earnings components into future ...

  10. The conceptualization and measurement of cognitive health sophistication.

    Science.gov (United States)

    Bodie, Graham D; Collins, William B; Jensen, Jakob D; Davis, Lashara A; Guntzviller, Lisa M; King, Andy J

    2013-01-01

    This article develops a conceptualization and measure of cognitive health sophistication--the complexity of an individual's conceptual knowledge about health. Study 1 provides initial validity evidence for the measure--the Healthy-Unhealthy Other Instrument--by showing its association with other cognitive health constructs indicative of higher health sophistication. Study 2 presents data from a sample of low-income adults to provide evidence that the measure does not depend heavily on health-related vocabulary or ethnicity. Results from both studies suggest that the Healthy-Unhealthy Other Instrument can be used to capture variability in the sophistication or complexity of an individual's health-related schematic structures on the basis of responses to two simple open-ended questions. Methodological advantages of the Healthy-Unhealthy Other Instrument and suggestions for future research are highlighted in the discussion.

  11. Obfuscation, Learning, and the Evolution of Investor Sophistication

    OpenAIRE

    Bruce Ian Carlin; Gustavo Manso

    2011-01-01

    Investor sophistication has lagged behind the growing complexity of retail financial markets. To explore this, we develop a dynamic model to study the interaction between obfuscation and investor sophistication in mutual fund markets. Taking into account different learning mechanisms within the investor population, we characterize the optimal timing of obfuscation for financial institutions who offer retail products. We show that educational initiatives that are directed to facilitate learnin...

  12. Probabilistic Sophistication, Second Order Stochastic Dominance, and Uncertainty Aversion

    OpenAIRE

    Simone Cerreia-Vioglio; Fabio Maccheroni; Massimo Marinacci; Luigi Montrucchio

    2010-01-01

    We study the interplay of probabilistic sophistication, second order stochastic dominance, and uncertainty aversion, three fundamental notions in choice under uncertainty. In particular, our main result, Theorem 2, characterizes uncertainty averse preferences that satisfy second order stochastic dominance, as well as uncertainty averse preferences that are probabilistically sophisticated.

  13. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as cyber-physical systems, which are often probabilistic in nature. This paper is about drastically increasing the scalability of statistical model checking, and making such scalability of analysis available to tools like Maude, where probabilistic systems can be specified at a high level as probabilistic rewrite theories. It presents PVeStA, an extension and parallelization of the VeStA statistical model checking tool [10]. PVeStA supports statistical model checking of probabilistic real-time systems specified as either: (i) discrete or continuous Markov Chains; or (ii) probabilistic rewrite theories in Maude. Furthermore, the properties that it can model check can be expressed in either: (i) PCTL/CSL, or (ii) the QuaTEx quantitative temporal logic. As our experiments show, the performance gains obtained from parallelization can be very high. © 2011 Springer-Verlag.
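
    PVeStA itself targets Maude specifications, but the statistical core of such tools can be sketched generically: estimate the probability that a property holds by independent simulation runs, with the sample size chosen from a Chernoff-Hoeffding bound. The toy "system" below is hypothetical and stands in for a real probabilistic model.

    ```python
    import math
    import random

    def smc_estimate(simulate, epsilon=0.05, delta=0.05):
        """Estimate P(property) within +/-epsilon at confidence 1-delta,
        using the Chernoff-Hoeffding sample-size bound."""
        n = math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))
        hits = sum(simulate() for _ in range(n))
        return hits / n, n

    # Toy probabilistic system: a message is delivered with probability 0.9;
    # the checked property is "delivered on the first attempt".
    def simulate():
        return random.random() < 0.9

    p_hat, runs = smc_estimate(simulate)
    print(f"P(property) ~= {p_hat:.3f} from {runs} independent runs")
    ```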

  14. Statistical thinking: tool for development of nursing as a science

    Directory of Open Access Journals (Sweden)

    Sonia Patricia Carreño Moreno

    2017-09-01

    Objective: To integrate findings of scientific literature that report on the importance of statistical thinking for the development of nursing as a science. Content synthesis: Literature review of articles published in indexed scientific journals between 1998 and 2017 in the databases LILACS, SAGE Journals, Wiley Online Library, Scopus, BIREME, SciELO, ScienceDirect, PubMed, CUIDEN® and ProQuest. 22 publications were included, and findings were extracted, classified, and simplified using descriptor codes, nominal codes, and emerging topics. The following six topics emerged from the searches: education for statistical thinking; statistical thinking for decision-making in practice; obstacles to the development of statistical thinking; skills necessary for statistical thinking; statistics in creating scientific knowledge; and challenges for the development of statistical thinking. Conclusion: In the current development of nursing as a science, statistical thinking has primarily been a useful tool for the research field and the training of researchers. Obstacles to the development of statistical thinking in nurse practitioners have been reported, revealing the need to link statistics with nursing practice. For this purpose, it is essential to prepare texts and statistics courses applied to the context of the discipline and its practice. Descriptors: Biostatistics; Statistics as Topic; Statistics; Science; Nursing (source: DeCS, BIREME).

  15. The role of sophisticated accounting system in strategy management

    OpenAIRE

    Naranjo Gil, David

    2004-01-01

    Organizations are designing more sophisticated accounting information systems to meet the strategic goals and enhance their performance. This study examines the effect of accounting information system design on the performance of organizations pursuing different strategic priorities. The alignment between sophisticated accounting information systems and organizational strategy is analyzed. The enabling effect of the accounting information system on performance is also examined. Relationships ...

  16. Financial Literacy and Financial Sophistication in the Older Population

    Science.gov (United States)

    Lusardi, Annamaria; Mitchell, Olivia S.; Curto, Vilsa

    2017-01-01

    Using a special-purpose module implemented in the Health and Retirement Study, we evaluate financial sophistication in the American population over the age of 50. We combine several financial literacy questions into an overall index to highlight which questions best capture financial sophistication and examine the sensitivity of financial literacy responses to framing effects. Results show that many older respondents are not financially sophisticated: they fail to grasp essential aspects of risk diversification, asset valuation, portfolio choice, and investment fees. Subgroups with notable deficits include women, the least educated, non-Whites, and those over age 75. In view of the fact that retirees increasingly must take on responsibility for their own retirement security, such meager levels of knowledge have potentially serious and negative implications. PMID:28553191

  17. Financial Literacy and Financial Sophistication in the Older Population.

    Science.gov (United States)

    Lusardi, Annamaria; Mitchell, Olivia S; Curto, Vilsa

    2014-10-01

    Using a special-purpose module implemented in the Health and Retirement Study, we evaluate financial sophistication in the American population over the age of 50. We combine several financial literacy questions into an overall index to highlight which questions best capture financial sophistication and examine the sensitivity of financial literacy responses to framing effects. Results show that many older respondents are not financially sophisticated: they fail to grasp essential aspects of risk diversification, asset valuation, portfolio choice, and investment fees. Subgroups with notable deficits include women, the least educated, non-Whites, and those over age 75. In view of the fact that retirees increasingly must take on responsibility for their own retirement security, such meager levels of knowledge have potentially serious and negative implications.

  18. A Climate Statistics Tool and Data Repository

    Science.gov (United States)

    Wang, J.; Kotamarthi, V. R.; Kuiper, J. A.; Orr, A.

    2017-12-01

    Researchers at Argonne National Laboratory and collaborating organizations have generated regional-scale, dynamically downscaled climate model output using Weather Research and Forecasting (WRF) version 3.3.1 at a 12 km horizontal spatial resolution over much of North America. The WRF model is driven by boundary conditions obtained from three independent global-scale climate models and two different future greenhouse gas emission scenarios, named representative concentration pathways (RCPs). The repository of results has a temporal resolution of three hours for all the simulations, includes more than 50 variables, is stored in Network Common Data Form (NetCDF) files, and has a data volume of nearly 600 TB. A condensed 800 GB set of NetCDF files was made for selected variables most useful for climate-related planning, including daily precipitation, relative humidity, solar radiation, maximum temperature, minimum temperature, and wind. The WRF model simulations are conducted for three 10-year time periods (1995-2004, 2045-2054, and 2085-2094) and two future scenarios (RCP4.5 and RCP8.5). An open-source tool was coded using Python 2.7.8 and ESRI ArcGIS 10.3.1 programming libraries to parse the NetCDF files, compute summary statistics, and output results as GIS layers. Eight sets of summary statistics were generated as examples for the contiguous U.S. states and much of Alaska, including number of days over 90°F, number of days with a heat index over 90°F, heat waves, monthly and annual precipitation, drought, extreme precipitation, multi-model averages, and model bias. This paper will provide an overview of the project to generate the main and condensed data repositories, describe the Python tool and how to use it, present the GIS results of the computed examples, and discuss some of the ways they can be used for planning. The condensed climate data, Python tool, computed GIS results, and documentation of the work are shared on the Internet.
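
    The repository's actual tool is built on Python 2.7.8 and ArcGIS, but the flavor of one of its summary statistics (days over 90°F) can be sketched with a generic NetCDF workflow. The file name, variable name, and Celsius units below are assumptions, not the repository's real layout.

    ```python
    import xarray as xr

    # Hypothetical daily-maximum-temperature file; assume degrees Celsius.
    ds = xr.open_dataset("wrf_tmax_daily_1995-2004.nc")
    tmax_f = ds["tmax"] * 9.0 / 5.0 + 32.0          # convert C -> F

    # Count days per grid cell with Tmax over 90 F, for each year.
    hot_days = (tmax_f > 90.0).groupby("time.year").sum("time")
    hot_days.to_netcdf("days_over_90F_by_year.nc")
    ```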

  19. Building Models in the Classroom: Taking Advantage of Sophisticated Geomorphic Numerical Tools Using a Simple Graphical User Interface

    Science.gov (United States)

    Roy, S. G.; Koons, P. O.; Gerbi, C. C.; Capps, D. K.; Tucker, G. E.; Rogers, Z. A.

    2014-12-01

    Sophisticated numerical tools exist for modeling geomorphic processes and linking them to tectonic and climatic systems, but they are often seen as inaccessible for users with an exploratory level of interest. We have improved the accessibility of landscape evolution models by producing a simple graphics user interface (GUI) that takes advantage of the Channel-Hillslope Integrated Landscape Development (CHILD) model. Model access is flexible: the user can edit values for basic geomorphic, tectonic, and climate parameters, or obtain greater control by defining the spatiotemporal distributions of those parameters. Users can make educated predictions by choosing their own parametric values for the governing equations and interpreting the results immediately through model graphics. This method of modeling allows users to iteratively build their understanding through experimentation. Use of this GUI is intended for inquiry and discovery-based learning activities. We discuss a number of examples of how the GUI can be used at the upper high school, introductory university, and advanced university level. Effective teaching modules initially focus on an inquiry-based example guided by the instructor. As students become familiar with the GUI and the CHILD model, the class can shift to more student-centered exploration and experimentation. To make model interpretations more robust, digital elevation models can be imported and direct comparisons can be made between CHILD model results and natural topography. The GUI is available online through the University of Maine's Earth and Climate Sciences website, through the Community Surface Dynamics Modeling System (CSDMS) model repository, or by contacting the corresponding author.

  20. A framework for evaluating innovative statistical and risk assessment tools to solve environment restoration problems

    International Nuclear Information System (INIS)

    Hassig, N.L.; Gilbert, R.O.; Pulsipher, B.A.

    1991-09-01

    Environmental restoration activities at the US Department of Energy (DOE) Hanford site face complex issues due to a history of varied contaminant disposal practices. Data collection and analysis required for site characterization, pathway modeling, and remediation selection decisions must deal with inherent uncertainties and unique problems associated with the restoration. A framework for working through the statistical aspects of the site characterization and remediation selection problems is needed. This framework would facilitate the selection of appropriate statistical tools for solving unique aspects of the environmental restoration problem. This paper presents a framework for selecting appropriate statistical and risk assessment methods. The following points will be made: (1) pathway modelers and risk assessors often recognize that ''some type'' of statistical method is required but do not work with statisticians on tool development in the early planning phases of the project; (2) statistical tool selection and development are problem-specific and often site-specific, further indicating a need for up-front involvement of statisticians; and (3) the right tool, applied in the right way, can minimize sampling costs, extract as much information as possible from the data that do exist, provide consistency and defensibility for the results, and give structure and quantitative measures to decision risks and uncertainties

  1. Cognitive Load and Strategic Sophistication

    OpenAIRE

    Allred, Sarah; Duffy, Sean; Smith, John

    2013-01-01

    We study the relationship between the cognitive load manipulation and strategic sophistication. The cognitive load manipulation is designed to reduce the subject's cognitive resources that are available for deliberation on a choice. In our experiment, subjects are placed under a large cognitive load (given a difficult number to remember) or a low cognitive load (given a number which is not difficult to remember). Subsequently, the subjects play a one-shot game, and then they are asked to recall...

  2. Moral foundations and political attitudes: The moderating role of political sophistication.

    Science.gov (United States)

    Milesi, Patrizia

    2016-08-01

    Political attitudes can be associated with moral concerns. This research investigated whether people's level of political sophistication moderates this association. Based on the Moral Foundations Theory, this article examined whether political sophistication moderates the extent to which reliance on moral foundations, as categories of moral concerns, predicts judgements about policy positions. With this aim, two studies examined four policy positions shown by previous research to be best predicted by the endorsement of Sanctity, that is, the category of moral concerns focused on the preservation of physical and spiritual purity. The results showed that reliance on Sanctity predicted political sophisticates' judgements, as opposed to those of unsophisticates, on policy positions dealing with equal rights for same-sex and unmarried couples and with euthanasia. Political sophistication also interacted with Fairness endorsement, which includes moral concerns for equal treatment of everybody and reciprocity, in predicting judgements about equal rights for unmarried couples, and interacted with reliance on Authority, which includes moral concerns for obedience and respect for traditional authorities, in predicting opposition to stem cell research. Those findings suggest that, at least for these particular issues, endorsement of moral foundations can be associated with political attitudes more strongly among sophisticates than unsophisticates. © 2015 International Union of Psychological Science.

  3. Infrastructure requirement of knowledge management system model of statistical learning tool (SLT) for education community

    Science.gov (United States)

    Abdullah, Rusli; Samah, Bahaman Abu; Bolong, Jusang; D'Silva, Jeffrey Lawrence; Shaffril, Hayrol Azril Mohamed

    2014-09-01

    Today, teaching and learning (T&L) using technology as a tool is becoming more important, especially in the field of statistics as a part of the subject matter in the higher education system environment. Even though there are many types of statistical learning tool (SLT) technology which can be used to support and enhance the T&L environment, there is a lack of a common standard knowledge management system serving as a knowledge portal for guidance, especially in relation to the infrastructure requirements of SLT in servicing the community of users (CoU), such as educators, students, and other parties who are interested in using this technology as a tool for their T&L. Therefore, there is a need for a common standard infrastructure requirement for a knowledge portal to help the CoU manage statistical knowledge: acquiring, storing, disseminating, and applying the statistical knowledge for their specific purposes. Furthermore, by having this infrastructure requirement of a knowledge portal model of SLT as guidance in promoting knowledge of best practice among the CoU, it can also enhance the quality and productivity of their work towards excellence of statistical knowledge application in the education system environment.

  4. Aristotle and Social-Epistemic Rhetoric: The Systematizing of the Sophistic Legacy.

    Science.gov (United States)

    Allen, James E.

    While Aristotle's philosophical views are more foundational than those of many of the Older Sophists, Aristotle's rhetorical theories inherit and incorporate many of the central tenets ascribed to Sophistic rhetoric, albeit in a more systematic fashion, as represented in the "Rhetoric." However, Aristotle was more than just a rhetorical…

  5. Spreadsheets as tools for statistical computing and statistics education

    OpenAIRE

    Neuwirth, Erich

    2000-01-01

    Spreadsheets are a ubiquitous program category, and we will discuss their use in statistics and statistics education on various levels, ranging from very basic examples to extremely powerful methods. Since the spreadsheet paradigm is very familiar to many potential users, using it as the interface to statistical methods can make statistics more easily accessible.

  6. The predictors of economic sophistication: media, interpersonal communication and negative economic experiences

    NARCIS (Netherlands)

    Kalogeropoulos, A.; Albæk, E.; de Vreese, C.H.; van Dalen, A.

    2015-01-01

    In analogy to political sophistication, it is imperative that citizens have a certain level of economic sophistication, especially in times of heated debates about the economy. This study examines the impact of different influences (media, interpersonal communication and personal experiences) on

  7. PAUL AND SOPHISTIC RHETORIC: A PERSPECTIVE ON HIS ...

    African Journals Online (AJOL)

    … use of modern rhetorical theories but analyses the letter in terms of the clas- … If a critical reader would have had the traditional anti-sophistic arsenal … -pressions and that 'rhetoric' is mainly a matter of communicating these thoughts.

  8. Isocratean Discourse Theory and Neo-Sophistic Pedagogy: Implications for the Composition Classroom.

    Science.gov (United States)

    Blair, Kristine L.

    With the recent interest in the fifth century B.C. theories of Protagoras and Gorgias come assumptions about the philosophical affinity of the Greek educator Isocrates to this pair of older sophists. Isocratean education in discourse, with its emphasis on collaborative political discourse, falls within recent definitions of a sophist curriculum.…

  9. SMEs and new ventures need business model sophistication

    DEFF Research Database (Denmark)

    Kesting, Peter; Günzel-Jensen, Franziska

    2015-01-01

    , and Spreadshirt, this article develops a framework that introduces five business model sophistication strategies: (1) uncover additional functions of your product, (2) identify strategic benefits for third parties, (3) take advantage of economies of scope, (4) utilize cross-selling opportunities, and (5) involve...

  10. Learning Axes and Bridging Tools in a Technology-Based Design for Statistics

    Science.gov (United States)

    Abrahamson, Dor; Wilensky, Uri

    2007-01-01

    We introduce a design-based research framework, "learning axes and bridging tools," and demonstrate its application in the preparation and study of an implementation of a middle-school experimental computer-based unit on probability and statistics, "ProbLab" (Probability Laboratory, Abrahamson and Wilensky 2002 [Abrahamson, D., & Wilensky, U.…

  11. Statistical relation between particle contaminations in ultra pure water and defects generated by process tools

    NARCIS (Netherlands)

    Wali, F.; Knotter, D. Martin; Wortelboer, Ronald; Mud, Auke

    2007-01-01

    Ultra pure water supplied inside the fab is used in different tools at different stages of processing. Data on the particles measured in ultra pure water were compared with the defect density on wafers processed on these tools, and a statistical relation was found. Keywords: yield, defect density,
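
    The abstract reports a statistical relation without naming the statistic; one hedged way to quantify such a relation is a rank correlation between particle counts and defect density, sketched with invented numbers:

    ```python
    from scipy import stats

    # Hypothetical paired measurements: particle counts in UPW at a tool,
    # and defect density (defects/cm^2) on wafers processed on that tool.
    particles = [12, 30, 8, 55, 22, 41, 17]
    defects = [0.04, 0.09, 0.03, 0.16, 0.07, 0.12, 0.05]

    r, p = stats.spearmanr(particles, defects)  # rank-based, outlier-robust
    print(f"Spearman rho={r:.2f}, p={p:.4f}")
    ```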

  12. The design and use of reliability data base with analysis tool

    Energy Technology Data Exchange (ETDEWEB)

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.

    1996-06-01

    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possible dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs.

  13. The design and use of reliability data base with analysis tool

    International Nuclear Information System (INIS)

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.

    1996-06-01

    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possible dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs
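
    The report proposes statistical tests for stationarity of failure histories without listing them in the abstract; one classical test for a trend in a point process of failures is the Laplace test, sketched below with hypothetical failure times. Whether the report uses this exact statistic is not stated.

    ```python
    import math

    def laplace_trend_test(event_times, T):
        """Laplace statistic for failure times on (0, T]; |U| > 1.96
        rejects stationarity (homogeneous Poisson) at the 5% level."""
        n = len(event_times)
        return (sum(event_times) / n - T / 2) / (T * math.sqrt(1 / (12 * n)))

    # Hypothetical component failure times (hours) over a 1000-hour window.
    failures = [80, 210, 350, 600, 740, 920]
    print(f"U = {laplace_trend_test(failures, 1000.0):.2f}")
    ```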

  14. STATISTICAL TOOLS FOR CLASSIFYING GALAXY GROUP DYNAMICS

    International Nuclear Information System (INIS)

    Hou, Annie; Parker, Laura C.; Harris, William E.; Wilman, David J.

    2009-01-01

    The dynamical state of galaxy groups at intermediate redshifts can provide information about the growth of structure in the universe. We examine three goodness-of-fit tests, the Anderson-Darling (A-D), Kolmogorov, and χ2 tests, in order to determine which statistical tool is best able to distinguish between groups that are relaxed and those that are dynamically complex. We perform Monte Carlo simulations of these three tests and show that the χ2 test is profoundly unreliable for groups with fewer than 30 members. Power studies of the Kolmogorov and A-D tests are conducted to test their robustness for various sample sizes. We then apply these tests to a sample of the second Canadian Network for Observational Cosmology Redshift Survey (CNOC2) galaxy groups and find that the A-D test is far more reliable and powerful at detecting real departures from an underlying Gaussian distribution than the more commonly used χ2 and Kolmogorov tests. We use this statistic to classify a sample of the CNOC2 groups and find that 34 of 106 groups are inconsistent with an underlying Gaussian velocity distribution, and thus do not appear relaxed. In addition, we compute velocity dispersion profiles (VDPs) for all groups with more than 20 members and compare the overall features of the Gaussian and non-Gaussian groups, finding that the VDPs of the non-Gaussian groups are distinct from those classified as Gaussian.
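
    The A-D test favored by the authors is available in SciPy; a minimal sketch on synthetic group velocities (the paper's analysis additionally involves power studies and calibration not reproduced here):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical line-of-sight velocities (km/s) for one 25-member group.
    velocities = rng.normal(loc=62000, scale=400, size=25)

    result = stats.anderson(velocities, dist="norm")
    crit5 = result.critical_values[list(result.significance_level).index(5.0)]
    print(f"A^2 = {result.statistic:.3f}; "
          f"non-Gaussian at 5% level: {result.statistic > crit5}")
    ```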

  15. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

    Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, the frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and created with a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.

  16. An integrated user-friendly ArcMAP tool for bivariate statistical modelling in geoscience applications

    Science.gov (United States)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.

    2015-03-01

    Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, bivariate statistical modeler (BSM), for the BSA technique is proposed. Three popular BSA techniques, the frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and created with a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
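
    Of the three techniques the tool wraps, the frequency ratio is the simplest to show in code: the hazard share of a class divided by its area share. A sketch with a made-up class raster and hazard inventory, both flattened to 1-D:

    ```python
    import numpy as np

    # Hypothetical rasters: class id per pixel and a binary hazard inventory.
    classes = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 3])
    hazard = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])

    # Frequency ratio per class: hazard share / area share.
    for c in np.unique(classes):
        in_class = classes == c
        fr = (hazard[in_class].sum() / hazard.sum()) / in_class.mean()
        print(f"class {c}: FR = {fr:.2f}")  # FR > 1: positively associated
    ```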

  17. A Review of Pathway-Based Analysis Tools That Visualize Genetic Variants

    Directory of Open Access Journals (Sweden)

    Elisa Cirillo

    2017-11-01

    Full Text Available Pathway analysis is a powerful method for data analysis in genomics, most often applied to gene expression analysis. It is also promising for single-nucleotide polymorphism (SNP) data analysis, such as genome-wide association study data, because it allows the interpretation of variants with respect to the biological processes in which the affected genes and proteins are involved. Such analyses support an interactive evaluation of the possible effects of variations on function, regulation or interaction of gene products. Current pathway analysis software often does not support data visualization of variants in pathways as an alternate method to interpret genetic association results, and specific statistical methods for pathway analysis of SNP data are not combined with these visualization features. In this review, we first describe the visualization options of the tools that were identified by a literature review, in order to provide insight for improvements in this developing field. Tool evaluation was performed using a computational epistatic dataset of gene–gene interactions for obesity risk. Next, we report the necessity to include in these tools statistical methods designed for pathway-based analysis with SNP data, expressly aiming to define features for more comprehensive pathway-based analysis tools. We conclude by recognizing that pathway analysis of genetic variation data requires a sophisticated combination of the most useful and informative visual aspects of the various tools evaluated.

  18. A Matlab user interface for the statistically assisted fluid registration algorithm and tensor-based morphometry

    Science.gov (United States)

    Yepes-Calderon, Fernando; Brun, Caroline; Sant, Nishita; Thompson, Paul; Lepore, Natasha

    2015-01-01

    Tensor-Based Morphometry (TBM) is an increasingly popular method for group analysis of brain MRI data. The main steps in the analysis consist of a nonlinear registration to align each individual scan to a common space, and a subsequent statistical analysis to determine morphometric differences, or differences in fiber structure, between groups. Recently, we implemented the Statistically-Assisted Fluid Registration Algorithm, or SAFIRA [1], which is designed for tracking morphometric differences among populations. To this end, SAFIRA allows the inclusion of statistical priors extracted from the populations being studied as regularizers in the registration. This flexibility and degree of sophistication limit the tool to expert use, even more so considering that SAFIRA was initially implemented in command-line mode. Here, we introduce a new, intuitive, easy-to-use, Matlab-based graphical user interface for SAFIRA's multivariate TBM. The interface also generates different choices for the TBM statistics, including both the traditional univariate statistics on the Jacobian matrix, and comparison of the full deformation tensors [2]. This software will be freely disseminated to the neuroimaging research community.

  19. Application of Statistical Tools for Data Analysis and Interpretation in Rice Plant Pathology

    Directory of Open Access Journals (Sweden)

    Parsuram Nayak

    2018-01-01

    Full Text Available There has been a significant advancement in the application of statistical tools in plant pathology during the past four decades. These tools include multivariate analysis of disease dynamics involving principal component analysis, cluster analysis, factor analysis, pattern analysis, discriminant analysis, multivariate analysis of variance, correspondence analysis, canonical correlation analysis, redundancy analysis, genetic diversity analysis, and stability analysis, which involves joint regression, additive main effects and multiplicative interactions, and genotype-by-environment interaction biplot analysis. The advanced statistical tools, such as non-parametric analysis of disease association, meta-analysis, Bayesian analysis, and decision theory, take an important place in the analysis of disease dynamics. Disease forecasting by simulation models for plant diseases has great potential in practical disease control strategies. Common mathematical tools such as monomolecular, exponential, logistic, Gompertz and linked differential equations take an important place in growth curve analysis of disease epidemics. The highly informative means of displaying a range of numerical data through the construction of box and whisker plots has been suggested. The probable applications of recent advanced tools of linear and non-linear mixed models, such as the linear mixed model, generalized linear model, and generalized linear mixed models, have been presented. The most recent technologies, such as micro-array analysis, though cost effective, provide estimates of gene expression for thousands of genes simultaneously and need attention by molecular biologists. Some of these advanced tools can be well applied in different branches of rice research, including crop improvement, crop production, crop protection, social sciences as well as agricultural engineering. The rice research scientists should take advantage of these new opportunities adequately in
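
    As an example of the growth curve models named above, the logistic model can be fitted to disease-severity observations to estimate the apparent infection rate. A minimal Python sketch with made-up severity data (the abstract names the models but reports no data):

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic(t, y0, r):
          """Logistic disease-progress curve: solution of dy/dt = r*y*(1 - y)."""
          return 1.0 / (1.0 + ((1.0 - y0) / y0) * np.exp(-r * t))

      t = np.array([0, 7, 14, 21, 28, 35], dtype=float)      # days after onset
      y = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.78])     # severity (0-1)
      popt, _ = curve_fit(logistic, t, y, p0=[0.01, 0.1])
      y0_hat, r_hat = popt
      print(f"initial severity y0 = {y0_hat:.3f}, infection rate r = {r_hat:.3f} per day")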

  20. Lies, damn lies and statistics

    International Nuclear Information System (INIS)

    Jones, M.D.

    2001-01-01

    Statistics are widely employed within archaeological research. This is becoming increasingly so as user-friendly statistical packages make increasingly sophisticated analyses available to non-statisticians. However, all statistical techniques are based on underlying assumptions of which the end user may be unaware. If statistical analyses are applied in ignorance of the underlying assumptions, there is the potential for highly erroneous inferences to be drawn. This does happen within archaeology, and here it is illustrated with the example of 'date pooling', a technique that has been widely misused in archaeological research. This misuse may have given rise to an inevitable and predictable misinterpretation of New Zealand's archaeological record. (author). 10 refs., 6 figs., 1 tab
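
    The 'date pooling' example can be made concrete. A standard pooling procedure is an inverse-variance weighted mean, and it is only valid if the dates first pass a chi-square homogeneity test, precisely the kind of underlying assumption an end user can overlook. A minimal Python sketch with hypothetical radiocarbon ages (not the paper's data):

      import numpy as np
      from scipy import stats

      def pool_dates(ages, errors):
          """Inverse-variance weighted pooling of radiocarbon ages, with the
          chi-square homogeneity test that must pass before pooling is valid."""
          ages = np.asarray(ages, dtype=float)
          w = 1.0 / np.asarray(errors, dtype=float) ** 2
          pooled = np.sum(w * ages) / np.sum(w)
          pooled_error = np.sqrt(1.0 / np.sum(w))
          t_stat = np.sum(w * (ages - pooled) ** 2)
          p_value = stats.chi2.sf(t_stat, df=len(ages) - 1)  # small p: do not pool
          return pooled, pooled_error, p_value

      print(pool_dates([650, 720, 680], [40, 50, 45]))   # hypothetical ages BP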

  1. Cognitive ability rivals the effect of political sophistication on ideological voting

    DEFF Research Database (Denmark)

    Hebbelstrup Rye Rasmussen, Stig

    2016-01-01

    This article examines the impact of cognitive ability on ideological voting. We find, using a US sample and a Danish sample, that the effect of cognitive ability rivals the effect of the traditionally strongest predictor of ideological voting, political sophistication. Furthermore, the results are consistent with the effect of cognitive ability being partly mediated by political sophistication. Much of the effect of cognitive ability nevertheless remains, and is not explained by differences in education or Openness to experience either. The implications of these results for democratic theory are discussed.

  2. Autonomic Differentiation Map: A Novel Statistical Tool for Interpretation of Heart Rate Variability

    Directory of Open Access Journals (Sweden)

    Daniela Lucini

    2018-04-01

    Full Text Available In spite of the large body of evidence suggesting Heart Rate Variability (HRV), alone or combined with blood pressure variability (providing an estimate of baroreflex gain), as a useful technique to assess the autonomic regulation of the cardiovascular system, there is still an ongoing debate about methodology, interpretation, and clinical applications. In the present investigation, we hypothesize that non-parametric and multivariate exploratory statistical manipulation of HRV data could provide a novel informational tool useful to differentiate normal controls from clinical groups, such as athletes, or subjects affected by obesity, hypertension, or stress. With a data-driven protocol in 1,352 ambulant subjects, we compute HRV and baroreflex indices from short-term data series as proxies of autonomic (ANS) regulation. We apply a three-step statistical procedure, first removing age and gender effects. Subsequently, by factor analysis, we extract four ANS latent domains that contain the large majority of the information (86.94%), subdivided into oscillatory (40.84%), amplitude (18.04%), pressure (16.48%), and pulse (11.58%) domains. Finally, we test the overall capacity to differentiate clinical groups vs. controls. To add practical value and improve readability, statistical results concerning individual discriminant ANS proxies and ANS differentiation profiles are displayed through dedicated graphical tools, i.e., the significance diagram and the ANS differentiation map, respectively. This approach, which simultaneously uses all available information about the system, shows which domains make up the difference in ANS discrimination: e.g., athletes differ from controls in all domains, but with a graded strength: maximal in the (normalized) oscillatory and in the pulse domains, slightly less in the pressure domain and minimal in the amplitude domain. The application of multiple (non-parametric and exploratory) statistical and graphical tools to ANS proxies defines
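
    The three-step procedure described above is easy to prototype. A minimal Python sketch with synthetic stand-in data (illustrative names only; this is not the authors' code): regress out age and gender, then extract latent domains by factor analysis; the third, discriminant step would operate on the resulting domain scores.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)
      n = 200
      covariates = np.column_stack([rng.normal(45, 12, n),      # age
                                    rng.integers(0, 2, n)])     # gender
      hrv_indices = rng.normal(size=(n, 10))                    # stand-in HRV proxies

      # step 1: remove age and gender effects, keep the residuals
      fit = LinearRegression().fit(covariates, hrv_indices)
      residuals = hrv_indices - fit.predict(covariates)

      # step 2: extract a small number of latent ANS domains
      z = StandardScaler().fit_transform(residuals)
      fa = FactorAnalysis(n_components=4, random_state=0).fit(z)
      domain_scores = fa.transform(z)   # step 3 would test these between groups
      print(domain_scores.shape)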

  3. Statistical tests for the Gaussian nature of primordial fluctuations through CBR experiments

    International Nuclear Information System (INIS)

    Luo, X.

    1994-01-01

    Information about the physical processes that generate the primordial fluctuations in the early Universe can be gained by testing the Gaussian nature of the fluctuations through cosmic microwave background radiation (CBR) temperature anisotropy experiments. One of the crucial aspects of density perturbations produced by the standard inflation scenario is that they are Gaussian, whereas seeds produced by topological defects left over from an early cosmic phase transition tend to be non-Gaussian. To carry out this test, sophisticated statistical tools are required. In this paper, we discuss several such statistical tools, including multivariate skewness and kurtosis, Euler-Poincaré characteristics, the three-point temperature correlation function, and Hotelling's T² statistic defined through bispectral estimates of a one-dimensional data set. The effect of noise present in the current data is discussed in detail and the COBE 53 GHz data set is analyzed. Our analysis shows that, on the large angular scales to which COBE is sensitive, the statistics are probably Gaussian. On small angular scales, the importance of Hotelling's T² statistic is stressed, and the minimum sample size required to test Gaussianity is estimated. Although the current data set available from various experiments at half-degree scales is still too small, improvement of the data set by roughly a factor of 2 will be enough to test Gaussianity statistically. On the arcminute scale, we analyze the recent RING data through bispectral analysis, and the result indicates a possible deviation from Gaussianity. Effects of point sources are also discussed. It is pointed out that the Gaussianity problem can be resolved in the near future by ground-based or balloon-borne experiments
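
    Two of the statistics named above, multivariate skewness and kurtosis, are commonly computed as Mardia's coefficients. A minimal Python sketch of a generic implementation (not the paper's pipeline):

      import numpy as np
      from scipy import stats

      def mardia_test(x):
          """Mardia's multivariate skewness and kurtosis tests of Gaussianity."""
          n, p = x.shape
          xc = x - x.mean(axis=0)
          s_inv = np.linalg.inv(np.cov(x, rowvar=False, bias=True))
          g = xc @ s_inv @ xc.T                     # matrix of g_ij terms
          b1 = (g ** 3).mean()                      # skewness coefficient
          b2 = (np.diag(g) ** 2).mean()             # kurtosis coefficient
          skew_stat = n * b1 / 6.0
          skew_p = stats.chi2.sf(skew_stat, p * (p + 1) * (p + 2) // 6)
          kurt_z = (b2 - p * (p + 2)) / np.sqrt(8.0 * p * (p + 2) / n)
          kurt_p = 2 * stats.norm.sf(abs(kurt_z))
          return skew_p, kurt_p

      x = np.random.default_rng(2).normal(size=(500, 3))
      print(mardia_test(x))   # both p-values should be large for Gaussian data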

  4. Detection of Cutting Tool Wear using Statistical Analysis and Regression Model

    Science.gov (United States)

    Ghani, Jaharah A.; Rizal, Muhammad; Nuawi, Mohd Zaki; Haron, Che Hassan Che; Ramli, Rizauddin

    2010-10-01

    This study presents a new method for detecting cutting tool wear based on measured cutting force signals. A statistical method, the Integrated Kurtosis-based Algorithm for Z-Filter technique (I-kaz), was used to develop a regression model and a 3D graphic presentation of the I-kaz 3D coefficient during the machining process. The machining tests were carried out on a Colchester Master Tornado T4 CNC turning machine under dry cutting conditions. A Kistler 9255B dynamometer was used to measure the cutting force signals, which were transmitted, analyzed, and displayed in the DasyLab software. Various force signals from the machining operation were analyzed, each yielding its own I-kaz 3D coefficient. This coefficient was examined and its relationship with the flank wear land (VB) was determined. A regression model was developed from this relationship, and its results show that the I-kaz 3D coefficient value decreases as tool wear increases. The result can then be used for real-time tool wear monitoring.
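
    A minimal Python sketch of the final regression step, using hypothetical coefficient/wear pairs (the paper reports the trend; the numbers here are invented for illustration):

      import numpy as np
      from scipy import stats

      # hypothetical paired observations: I-kaz 3D coefficient vs. flank wear VB (mm)
      ikaz = np.array([0.0042, 0.0039, 0.0035, 0.0030, 0.0026, 0.0021])
      vb = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])

      res = stats.linregress(vb, ikaz)
      print(f"slope = {res.slope:.4f}, r^2 = {res.rvalue ** 2:.3f}")
      # a negative slope reproduces the reported trend: the coefficient
      # decreases as flank wear grows, so it can be tracked in real time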

  5. PANDA-view: An easy-to-use tool for statistical analysis and visualization of quantitative proteomics data.

    Science.gov (United States)

    Chang, Cheng; Xu, Kaikun; Guo, Chaoping; Wang, Jinxia; Yan, Qi; Zhang, Jian; He, Fuchu; Zhu, Yunping

    2018-05-22

    Compared with the numerous software tools developed for identification and quantification of -omics data, there remains a lack of suitable tools for both downstream analysis and data visualization. To help researchers better understand the biological meanings in their -omics data, we present an easy-to-use tool, named PANDA-view, for both statistical analysis and visualization of quantitative proteomics data and other -omics data. PANDA-view contains various kinds of analysis methods such as normalization, missing value imputation, statistical tests, clustering and principal component analysis, as well as the most commonly-used data visualization methods including an interactive volcano plot. Additionally, it provides user-friendly interfaces for protein-peptide-spectrum representation of the quantitative proteomics data. PANDA-view is freely available at https://sourceforge.net/projects/panda-view/. 1987ccpacer@163.com and zhuyunping@gmail.com. Supplementary data are available at Bioinformatics online.

  6. Statistical investigations into the erosion of material from the tool in micro-electrical discharge machining

    DEFF Research Database (Denmark)

    Puthumana, Govindan

    2018-01-01

    This paper presents a statistical study of the erosion of material from the tool electrode in a micro-electrical discharge machining process. The work involves analysis of variance and analysis of means approaches applied to tool electrode wear rate results obtained from a designed experiment... Discharge current (Id) and discharge frequency (fd) control the erosion of material from the tool electrode. The material erosion from the tool electrode (Me) increases linearly with the discharge frequency. As the current index increases from 20 to 35, Me decreases linearly by 29% and then increases by 36%. The current index of 35 gives the minimum material erosion from the tool. It is observed that none of the two-factor interactions are significant in controlling the erosion of material from the tool.
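
    A minimal Python sketch of a two-factor analysis of variance of this kind, on made-up wear-rate data (the analysis-of-means step is omitted; only the ANOVA step is shown):

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.formula.api import ols

      # hypothetical design table: current index, discharge frequency, erosion Me
      rng = np.random.default_rng(3)
      df = pd.DataFrame({"current": np.repeat([20, 27, 35], 6),
                         "freq": np.tile([100, 150, 200], 6),
                         "Me": rng.normal(10.0, 1.0, 18)})

      model = ols("Me ~ C(current) * C(freq)", data=df).fit()
      print(sm.stats.anova_lm(model, typ=2))   # main effects and interaction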

  7. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection

    Science.gov (United States)

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-01-01

    With the number of scientific papers published in journals, conference proceedings, and the international literature ever increasing, authors and reviewers are not only provided with an abundance of information, but are unfortunately also continuously confronted with risks associated with the erroneous copying of another's material. In parallel, Information Communication Technology (ICT) tools provide researchers with novel and continuously more effective ways to analyze and present their work. Software tools for statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewer's and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial on specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software packages. PMID:21487489

  9. JULIDE: a software tool for 3D reconstruction and statistical analysis of autoradiographic mouse brain sections.

    Directory of Open Access Journals (Sweden)

    Delphine Ribes

    Full Text Available In this article we introduce JULIDE, a software toolkit developed to perform the 3D reconstruction, intensity normalization, volume standardization by 3D image registration and voxel-wise statistical analysis of autoradiographs of mouse brain sections. This software tool has been developed in the open-source ITK software framework and is freely available under a GPL license. The article presents the complete image processing chain from raw data acquisition to 3D statistical group analysis. Results of the group comparison in the context of a study on spatial learning are shown as an illustration of the data that can be obtained with this tool.

  10. The Relationship between Logistics Sophistication and Drivers of the Outsourcing of Logistics Activities

    Directory of Open Access Journals (Sweden)

    Peter Wanke

    2008-10-01

    Full Text Available A strong link has been established between operational excellence and the degree of sophistication of logistics organization, a function of factors such as performance monitoring, investment in Information Technology [IT] and the formalization of logistics organization, as proposed in the Bowersox, Daugherty, Dröge, Germain and Rogers (1992) Leading Edge model. At the same time, shippers have been increasingly outsourcing their logistics activities to third-party providers. This paper, based on a survey of large Brazilian shippers, addresses a gap in the literature by investigating the relationship between dimensions of logistics organization sophistication and drivers of logistics outsourcing. To this end, the dimensions behind the logistics sophistication construct were first investigated. Results from factor analysis led to the identification of six dimensions of logistics sophistication. By means of multivariate logistic regression analyses it was possible to relate some of these dimensions, such as the formalization of the logistics organization, to certain drivers of the outsourcing of logistics activities of Brazilian shippers, such as cost savings. These results indicate the possibility of segmenting shippers according to characteristics of their logistics organization, which may be particularly useful to logistics service providers.

  11. Lexical Complexity Development from Dynamic Systems Theory Perspective: Lexical Density, Diversity, and Sophistication

    Directory of Open Access Journals (Sweden)

    Reza Kalantari

    2017-10-01

    Full Text Available This longitudinal case study explored Iranian EFL learners' lexical complexity (LC) through the lens of Dynamic Systems Theory (DST). Fifty independent essays written by five intermediate to advanced female EFL learners in a TOEFL iBT preparation course over six months constituted the corpus of this study. Three Coh-Metrix indices (Graesser, McNamara, Louwerse, & Cai, 2004; McNamara & Graesser, 2012), three Lexical Complexity Analyzer indices (Lu, 2010, 2012; Lu & Ai, 2011), and four Vocabprofile indices (Cobb, 2000) were selected to measure different dimensions of LC. Results of repeated measures analysis of variance (RM ANOVA) indicated an improvement with regard to only lexical sophistication. Positive and significant relationships were found between time and mean values in the Academic Word List and Beyond-2000 indices as indicators of lexical sophistication. The remaining seven indices of LC, falling short of significance, tended to flatten over the course of this writing program. Correlation analyses among LC indices indicated that lexical density had positive correlations with lexical sophistication. However, lexical diversity revealed no significant correlation with either lexical density or lexical sophistication. This study suggests that the DST perspective provides a viable foundation for analyzing lexical complexity.

  12. Statistical physics

    CERN Document Server

    Guénault, Tony

    2007-01-01

    In this revised and enlarged second edition of an established text, Tony Guénault provides a clear and refreshingly readable introduction to statistical physics, an essential component of any first degree in physics. The treatment itself is self-contained and concentrates on an understanding of the physical ideas, without requiring a high level of mathematical sophistication. A straightforward quantum approach to statistical averaging is adopted from the outset (easier, the author believes, than the classical approach). The initial part of the book is geared towards explaining the equilibrium properties of a simple isolated assembly of particles. Thus, several important topics, for example an ideal spin-½ solid, can be discussed at an early stage. The treatment of gases gives full coverage to Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein statistics. Towards the end of the book the student is introduced to a wider viewpoint and new chapters are included on chemical thermodynamics, interactions in, for exam...

  13. Indexing Combined with Statistical Deflation as a Tool for Analysis of Longitudinal Data.

    Science.gov (United States)

    Babcock, Judith A.

    Indexing is a tool that can be used with longitudinal, quantitative data for analysis of relative changes and for comparisons of changes among items. For greater accuracy, raw financial data should be deflated into constant dollars prior to indexing. This paper demonstrates the procedures for indexing, statistical deflation, and the use of…
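
    The procedure is simple to state in code: deflate nominal values into constant base-year dollars, then express each deflated value relative to the base year. A minimal Python sketch with invented figures (a generic illustration, not the paper's data):

      import numpy as np

      years = [2015, 2016, 2017, 2018]
      nominal = np.array([120.0, 131.0, 140.0, 155.0])   # raw dollars
      cpi = np.array([100.0, 102.1, 104.3, 106.9])       # price index, base year first

      real = nominal / (cpi / cpi[0])      # deflate to constant base-year dollars
      index = 100.0 * real / real[0]       # index relative to the base year
      for year, value in zip(years, index):
          print(year, round(value, 1))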

  14. Calibrating the Difficulty of an Assessment Tool: The Blooming of a Statistics Examination

    Science.gov (United States)

    Dunham, Bruce; Yapa, Gaitri; Yu, Eugenia

    2015-01-01

    Bloom's taxonomy is proposed as a tool by which to assess the level of complexity of assessment tasks in statistics. Guidelines are provided for how to locate tasks at each level of the taxonomy, along with descriptions and examples of suggested test questions. Through the "Blooming" of an examination--that is, locating its constituent…

  15. A case study: application of statistical process control tool for determining process capability and sigma level.

    Science.gov (United States)

    Chopra, Vikram; Bairagi, Mukesh; Trivedi, P; Nagar, Mona

    2012-01-01

    Statistical process control is the application of statistical methods to the measurement and analysis of process variation. Various regulatory documents, such as the Validation Guidance for Industry (2011), International Conference on Harmonisation ICH Q10 (2009), the Health Canada guidelines (2009), the Health Science Authority, Singapore, Guidance for Product Quality Review (2008), and International Organization for Standardization ISO 9000:2005, provide regulatory support for the application of statistical process control for better process control and understanding. In this study, risk assessment, normal probability distributions, control charts, and capability charts are employed for the selection of critical quality attributes, determination of normal probability distribution, statistical stability, and capability of the production process, respectively. The objective of this study is to determine tablet production process quality in the form of sigma process capability. By interpreting data and graph trends, the forecasting of critical quality attributes, sigma process capability, and process stability were studied. The overall study contributes to an assessment of the process at the sigma level with respect to out-of-specification attributes produced. Finally, the study points to an area where the application of quality improvement and quality risk assessment principles for the achievement of six sigma-capable processes is possible. Statistical process control is the most advantageous tool for determination of the quality of any production process. This tool is new for the pharmaceutical tablet production process. In the case of pharmaceutical tablet production processes, the quality control parameters act as quality assessment parameters. Application of risk assessment provides selection of critical quality attributes among quality control parameters. Sequential application of normality distributions, control charts, and capability analyses provides a valid statistical
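
    A minimal Python sketch of the capability step, using a made-up tablet-weight sample and the common short-term convention that the sigma level is roughly three times Cpk (an illustration, not the study's data):

      import numpy as np

      def capability(data, lsl, usl):
          """Process capability indices from measured critical quality attributes."""
          mu, sigma = data.mean(), data.std(ddof=1)
          cp = (usl - lsl) / (6 * sigma)
          cpk = min(usl - mu, mu - lsl) / (3 * sigma)
          return cp, cpk

      rng = np.random.default_rng(4)
      weights = rng.normal(250.0, 2.5, 200)        # tablet weights in mg
      cp, cpk = capability(weights, lsl=240.0, usl=260.0)
      print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}, sigma level ~ {3 * cpk:.1f}")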

  16. CRITON : A Hypermedia Design Tool

    NARCIS (Netherlands)

    Avgeriou, Paris; Retalis, Symeon

    2005-01-01

    The WWW has turned into a development and run-time environment for large-scale and complex applications. Such sophisticated applications are being deployed in increasing numbers without having been developed according to appropriate methodologies, tools and quality standards. The reason is not only

  17. Angular-momentum nonclassicality by breaking classical bounds on statistics

    Energy Technology Data Exchange (ETDEWEB)

    Luis, Alfredo [Departamento de Optica, Facultad de Ciencias Fisicas, Universidad Complutense, E-28040 Madrid (Spain); Rivas, Angel [Departamento de Fisica Teorica I, Facultad de Ciencias Fisicas, Universidad Complutense, E-28040 Madrid (Spain)

    2011-10-15

    We derive simple practical procedures revealing the quantum behavior of angular momentum variables by the violation of classical upper bounds on the statistics. Data analysis is minimal, and definite conclusions are obtained without the evaluation of moments or any other more sophisticated procedures. These nonclassical tests are very general and independent of other typical quantum signatures of nonclassical behavior such as sub-Poissonian statistics, squeezing, or oscillatory statistics, being insensitive to the nonclassical behavior displayed by other variables.

  18. Structural reliability in context of statistical uncertainties and modelling discrepancies

    International Nuclear Information System (INIS)

    Pendola, Maurice

    2000-01-01

    Structural reliability methods have been greatly improved during recent years and have shown their ability to deal with uncertainties during the design stage or to optimize the functioning and maintenance of industrial installations. They are based on a mechanical model of the structural behavior according to the considered failure modes and on a probabilistic representation of the input parameters of this model. In practice, only limited statistical information is available to build the probabilistic representation, and different levels of sophistication of the mechanical model may be introduced. Thus, besides the physical randomness, other uncertainties occur in such analyses. The aim of this work is threefold: 1. first, to propose a methodology able to characterize the statistical uncertainties due to the limited number of data, in order to take them into account in the reliability analyses. The obtained reliability index measures the confidence in the structure given the statistical information available. 2. Second, to present a methodology that yields reliability results associated with a particular mechanical model while using a less sophisticated one. The objective is to decrease the computational effort required by the reference model. 3. Finally, to propose partial safety factors that evolve as a function of the number of statistical data available and of the sophistication level of the mechanical model that is used. The concepts are illustrated in the case of a welded pipe and in the case of a natural-draught cooling tower. The results show the interest of the methodologies in an industrial context. [fr]
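
    For readers unfamiliar with reliability indices, the following minimal Python sketch computes a failure probability and the corresponding reliability index for a toy resistance-minus-load limit state by crude Monte Carlo (an illustration of the concept, not the methodology of the thesis):

      import numpy as np
      from scipy import stats

      # limit state g = R - S (resistance minus load); failure when g < 0
      rng = np.random.default_rng(5)
      n = 1_000_000
      r = rng.normal(300.0, 30.0, n)      # resistance
      s = rng.normal(200.0, 25.0, n)      # load
      pf = np.mean(r - s < 0.0)           # failure probability
      beta = -stats.norm.ppf(pf)          # reliability index
      print(f"pf = {pf:.2e}, beta = {beta:.2f}")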

  19. Using assemblage data in ecological indicators: A comparison and evaluation of commonly available statistical tools

    Science.gov (United States)

    Smith, Joseph M.; Mather, Martha E.

    2012-01-01

    Ecological indicators are science-based tools used to assess how human activities have impacted environmental resources. For monitoring and environmental assessment, existing species assemblage data can be used to make these comparisons through time or across sites. An impediment to using assemblage data, however, is that these data are complex and need to be simplified in an ecologically meaningful way. Because multivariate statistics are purely mathematical relationships, statistical groupings may not make ecological sense and will not have utility as indicators. Our goal was to define a process to select defensible and ecologically interpretable statistical simplifications of assemblage data in which researchers and managers can have confidence. For this, we chose a suite of statistical methods, compared the groupings that resulted from these analyses, identified convergence among groupings, and then interpreted the groupings using species and ecological guilds. When we tested this approach using a statewide stream fish dataset, not all statistical methods worked equally well. For our dataset, logistic regression (Log), detrended correspondence analysis (DCA), cluster analysis (CL), and non-metric multidimensional scaling (NMDS) provided consistent, simplified output. Specifically, the Log, DCA, CL-1, and NMDS-1 groupings were ≥60% similar to each other, overlapped with the fluvial-specialist ecological guild, and contained a common subset of species. Groupings based on number of species (e.g., Log, DCA, CL and NMDS) outperformed groupings based on abundance [e.g., principal components analysis (PCA) and Poisson regression]. Although the specific methods that worked on our test dataset have generality, here we are advocating a process (e.g., identifying convergent groupings with redundant species composition that are ecologically interpretable) rather than the automatic use of any single statistical tool. We summarize this process in step-by-step guidance for the

  20. The New Toxicology of Sophisticated Materials: Nanotoxicology and Beyond

    Science.gov (United States)

    Maynard, Andrew D.; Warheit, David B.; Philbert, Martin A.

    2011-01-01

    It has long been recognized that the physical form of materials can mediate their toxicity—the health impacts of asbestiform materials, industrial aerosols, and ambient particulate matter are prime examples. Yet over the past 20 years, toxicology research has suggested complex and previously unrecognized associations between material physicochemistry at the nanoscale and biological interactions. With the rapid rise of the field of nanotechnology and the design and production of increasingly complex nanoscale materials, it has become ever more important to understand how the physical form and chemical composition of these materials interact synergistically to determine toxicity. As a result, a new field of research has emerged—nanotoxicology. Research within this field is highlighting the importance of material physicochemical properties in how dose is understood, how materials are characterized in a manner that enables quantitative data interpretation and comparison, and how materials move within, interact with, and are transformed by biological systems. Yet many of the substances that are the focus of current nanotoxicology studies are relatively simple materials that are at the vanguard of a new era of complex materials. Over the next 50 years, there will be a need to understand the toxicology of increasingly sophisticated materials that exhibit novel, dynamic and multifaceted functionality. If the toxicology community is to meet the challenge of ensuring the safe use of this new generation of substances, it will need to move beyond “nano” toxicology and toward a new toxicology of sophisticated materials. Here, we present a brief overview of the current state of the science on the toxicology of nanoscale materials and focus on three emerging toxicology-based challenges presented by sophisticated materials that will become increasingly important over the next 50 years: identifying relevant materials for study, physicochemical characterization, and

  1. Procles the Carthaginian: A North African Sophist in Pausanias’ Periegesis

    Directory of Open Access Journals (Sweden)

    Juan Pablo Sánchez Hernández

    2010-11-01

    Full Text Available Procles, cited by Pausanias (in the imperfect tense) about a display in Rome and for an opinion about Pyrrhus of Epirus, was probably not a historian of Hellenistic date, but a contemporary sophist whom Pausanias encountered in person in Rome.

  2. On the blind use of statistical tools in the analysis of globular cluster stars

    Science.gov (United States)

    D'Antona, Francesca; Caloi, Vittoria; Tailo, Marco

    2018-04-01

    As with most data analysis methods, the Bayesian method must be handled with care. We show that its application to determine stellar evolution parameters within globular clusters can lead to paradoxical results if used without the necessary precautions. This is a cautionary tale on the use of statistical tools for big data analysis.

  3. Does underground storage still require sophisticated studies?

    International Nuclear Information System (INIS)

    Marsily, G. de

    1997-01-01

    Most countries agree on the necessity of burying high- or medium-level wastes in geological layers situated a few hundred meters below ground level. The advantages and disadvantages of different types of rock, such as salt, clay, granite and volcanic material, are examined. Sophisticated studies are carried out to determine the best geological confinement, but questions arise about the period for which safety must be ensured. France has chosen three possible sites. These sites are geologically described in the article. The final site will be proposed after a testing phase of about 5 years in an underground facility. (A.C.)

  4. Evaluation of air quality in a megacity using statistics tools

    Science.gov (United States)

    Ventura, Luciana Maria Baptista; de Oliveira Pinto, Fellipe; Soares, Laiza Molezon; Luna, Aderval Severino; Gioda, Adriana

    2018-06-01

    Local physical characteristics (e.g., meteorology and topography) associated with particle concentrations are important for evaluating air quality in a region. Meteorology and topography affect air pollutant dispersion. This study used statistical tools (PCA, HCA, Kruskal-Wallis, the Mann-Whitney test, and others) to gain a better understanding of the relationship between fine particulate matter (PM2.5) levels and seasons, meteorological conditions and air basins. To our knowledge, it is one of the few studies performed in Latin America involving all of these parameters together. PM2.5 samples were collected at six sampling sites with different emission sources (industrial, vehicular, soil dust) in Rio de Janeiro, Brazil. The PM2.5 daily concentrations ranged from 1 to 61 µg m-3, with averages higher than the annual limit (15 µg m-3) for some of the sites. The results of the statistical evaluation showed that PM2.5 concentrations were not influenced by seasonality. Furthermore, the air basins defined previously were not confirmed, because some sites presented similar emission sources. Therefore, the air basins need to be redefined, since they are important to air quality management.
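
    A minimal Python sketch of one of the named tests, a Kruskal-Wallis comparison of PM2.5 levels across seasons, using synthetic concentrations (the study's data are not reproduced here):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      # hypothetical daily PM2.5 (µg/m3) grouped by season at one site
      summer = rng.gamma(4.0, 3.0, 90)
      winter = rng.gamma(4.2, 3.1, 90)
      spring = rng.gamma(3.9, 3.0, 90)

      h, p = stats.kruskal(summer, winter, spring)
      print(f"H = {h:.2f}, p = {p:.3f}")   # large p: no seasonal effect, as reported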

  6. Statistical energy as a tool for binning-free, multivariate goodness-of-fit tests, two-sample comparison and unfolding

    International Nuclear Information System (INIS)

    Aslan, B.; Zech, G.

    2005-01-01

    We introduce the novel concept of statistical energy as a statistical tool. We define the statistical energy of statistical distributions in a way analogous to that of electric charge distributions. Charges of opposite sign are in a state of minimum energy if they are equally distributed. This property is used to check whether two samples belong to the same parent distribution, to define goodness-of-fit tests, and to unfold distributions distorted by measurement. The approach is binning-free and especially powerful in multidimensional applications.
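
    A minimal Python sketch of the two-sample use of the concept: the sample energy statistic is near zero when both samples come from the same parent distribution and grows otherwise, with no binning in any dimension (a generic implementation, not the authors' code):

      import numpy as np
      from scipy.spatial.distance import cdist

      def energy_statistic(x, y):
          """Two-sample energy statistic: near zero when the samples share
          a parent distribution; works in any number of dimensions."""
          dxy = cdist(x, y).mean()
          dxx = cdist(x, x).mean()
          dyy = cdist(y, y).mean()
          return 2 * dxy - dxx - dyy

      rng = np.random.default_rng(7)
      a = rng.normal(0.0, 1.0, size=(200, 2))
      b = rng.normal(0.3, 1.0, size=(200, 2))
      print(energy_statistic(a, a[::-1]), energy_statistic(a, b))  # ~0 vs. > 0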

  7. Probing Chromatin-modifying Enzymes with Chemical Tools

    KAUST Repository

    Fischle, Wolfgang; Schwarzer, Dirk

    2016-01-01

    and represent promising drug targets in modern medicine. We summarize and discuss recent advances in the field of chemical biology that have provided chromatin research with sophisticated tools for investigating the composition, activity, and target sites

  8. Enhancing interest in statistics among computer science students using computer tool entrepreneur role play

    Science.gov (United States)

    Judi, Hairulliza Mohamad; Sahari @ Ashari, Noraidah; Eksan, Zanaton Hj

    2017-04-01

    Previous research in Malaysia indicates that there is a problem regarding attitudes towards statistics among students. Students did not show positive attitudes in the affective, cognitive, capability, value, interest and effort aspects, although they did well in the difficulty aspect. This issue should be given substantial attention because students' attitudes towards statistics may affect the teaching and learning process of the subject. Teaching statistics using role play is an appropriate attempt to improve attitudes towards statistics, to enhance the learning of statistical techniques and statistical thinking, and to increase generic skills. The objectives of the paper are to give an overview of role play in statistics learning and to assess the effect of these activities on students' attitudes and learning within an action research framework. The computer tool entrepreneur role play was conducted in a two-hour tutorial class session of first-year students in the Faculty of Information Sciences and Technology (FTSM), Universiti Kebangsaan Malaysia, enrolled in a Probability and Statistics course. The results show that most students felt that they had an enjoyable time in the role play. Furthermore, benefits and disadvantages of the role play activities were highlighted to complete the review. Role play is expected to serve as an important activity that takes into account students' experience, emotions and responses, providing useful information on how to modify students' thinking or behavior to improve learning.

  9. SWToolbox: A surface-water tool-box for statistical analysis of streamflow time series

    Science.gov (United States)

    Kiang, Julie E.; Flynn, Kate; Zhai, Tong; Hummel, Paul; Granato, Gregory

    2018-03-07

    This report is a user guide for the low-flow analysis methods provided with version 1.0 of the Surface Water Toolbox (SWToolbox) computer program. The software combines functionality from two software programs—U.S. Geological Survey (USGS) SWSTAT and U.S. Environmental Protection Agency (EPA) DFLOW. Both of these programs have been used primarily for computation of critical low-flow statistics. The main analysis methods are the computation of hydrologic frequency statistics such as the 7-day minimum flow that occurs on average only once every 10 years (7Q10), computation of design flows including biologically based flows, and computation of flow-duration curves and duration hydrographs. Other annual, monthly, and seasonal statistics can also be computed. The interface facilitates retrieval of streamflow discharge data from the USGS National Water Information System and outputs text reports for a record of the analysis. Tools for graphing data and screening tests are available to assist the analyst in conducting the analysis.
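
    A minimal Python sketch of the central statistic: the 7Q10 is derived from the annual minima of the 7-day moving-average flow. Here the 10-year quantile is taken empirically on a synthetic streamflow series, whereas SWToolbox fits a frequency distribution (typically log-Pearson III) to the annual minima:

      import numpy as np
      import pandas as pd

      def seven_q_ten(daily_flow):
          """7Q10: annual minimum 7-day mean flow with a 10-year recurrence,
          estimated here by the empirical 10th percentile of annual minima."""
          q7 = daily_flow.rolling(7).mean()
          annual_min = q7.groupby(q7.index.year).min().dropna()
          return annual_min.quantile(0.10)

      idx = pd.date_range("1990-01-01", "2019-12-31", freq="D")
      flow = pd.Series(np.random.default_rng(8).gamma(2.0, 50.0, len(idx)), index=idx)
      print(round(seven_q_ten(flow), 1))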

  10. Statistical-Mechanical Analysis of Pre-training and Fine Tuning in Deep Learning

    Science.gov (United States)

    Ohzeki, Masayuki

    2015-03-01

    In this paper, we present a statistical-mechanical analysis of deep learning. We elucidate some of the essential components of deep learning — pre-training by unsupervised learning and fine tuning by supervised learning. We formulate the extraction of features from the training data as a margin criterion in a high-dimensional feature-vector space. The self-organized classifier is then supplied with small amounts of labelled data, as in deep learning. Although we employ a simple single-layer perceptron model, rather than directly analyzing a multi-layer neural network, we find a nontrivial phase transition in the generalization error of the resultant classifier that depends on the number of unlabelled data. In this sense, we evaluate the efficacy of the unsupervised learning component of deep learning. The analysis is performed by the replica method, which is a sophisticated tool in statistical mechanics. We validate our result in the manner of deep learning, using a simple iterative algorithm to learn the weight vector on the basis of belief propagation.

  11. Tool Sequence Trends in Minimally Invasive Surgery: Statistical Analysis and Implications for Predictive Control of Multifunction Instruments

    Directory of Open Access Journals (Sweden)

    Carl A. Nelson

    2012-01-01

    Full Text Available This paper presents an analysis of 67 minimally invasive surgical procedures covering 11 different procedure types to determine patterns of tool use. A new graph-theoretic approach was taken to organize and analyze the data. By grouping surgeries by type, trends of common tool changes were identified. Using the concept of signal-to-noise ratio, these trends were found to be statistically strong. The tool-use trends were used to generate tool placement patterns for modular (multi-tool, cartridge-type) surgical tool systems, and the same 67 surgeries were numerically simulated to determine the optimality of these tool arrangements. The results indicate that aggregated tool-use data (by procedure type) can be employed to predict tool-use sequences with good accuracy, and they also indicate the potential for artificial intelligence as a means of preoperative and/or intraoperative planning. Furthermore, this suggests that the use of multifunction surgical tools can be optimized to streamline surgical workflow.

  12. Finding the Fabulous Few: Why Your Program Needs Sophisticated Research.

    Science.gov (United States)

    Pfizenmaier, Emily

    1981-01-01

    Fund raising, it is argued, needs sophisticated prospect research. Professional prospect researchers play an important role in helping to identify prospective donors and also in helping to stimulate interest in gift giving. A sample of an individual work-up on a donor and a bibliography are provided. (MLW)

  13. Reading wild minds: A computational assay of Theory of Mind sophistication across seven primate species.

    Directory of Open Access Journals (Sweden)

    Marie Devaine

    2017-11-01

    Full Text Available Theory of Mind (ToM), i.e. the ability to understand others' mental states, endows humans with highly adaptive social skills such as teaching or deceiving. Candidate evolutionary explanations have been proposed for the unique sophistication of human ToM among primates. For example, the Machiavellian intelligence hypothesis states that the increasing complexity of social networks may have induced a demand for sophisticated ToM. This type of scenario ignores neurocognitive constraints that may eventually be crucial limiting factors for ToM evolution. In contradistinction, the cognitive scaffolding hypothesis asserts that a species' opportunity to develop sophisticated ToM is mostly determined by its general cognitive capacity (on which ToM is scaffolded). However, the actual relationships between ToM sophistication and either brain volume (a proxy for general cognitive capacity) or social group size (a proxy for social network complexity) are unclear. Here, we let 39 individuals sampled from seven non-human primate species (lemurs, macaques, mangabeys, orangutans, gorillas and chimpanzees) engage in simple dyadic games against artificial ToM players (via a familiar human caregiver). Using computational analyses of primates' choice sequences, we found that the probability of exhibiting a ToM-compatible learning style is mainly driven by species' brain volume (rather than by social group size). Moreover, primates' social cognitive sophistication culminates in a precursor form of ToM, which still falls short of humans' fully developed ToM abilities.

  14. The use of machine learning and nonlinear statistical tools for ADME prediction.

    Science.gov (United States)

    Sakiyama, Yojiro

    2009-02-01

    Absorption, distribution, metabolism and excretion (ADME)-related failure of drug candidates is a major issue for the pharmaceutical industry today. Prediction of ADME by in silico tools has now become an inevitable paradigm to reduce cost and enhance efficiency in pharmaceutical research. Recently, machine learning as well as nonlinear statistical tools have been widely applied to predict routine ADME end points. To achieve accurate and reliable predictions, it is a prerequisite to understand the concepts, mechanisms and limitations of these tools. Here, we have devised a small synthetic nonlinear data set to help understand the mechanism of machine learning by 2D visualisation. We applied six machine learning methods to four different data sets. The methods include the naive Bayes classifier, classification and regression tree, random forest, Gaussian process, support vector machine and k nearest neighbour. The results demonstrated that ensemble learning and kernel machines displayed greater accuracy of prediction than classical methods, irrespective of the data set size. The importance of interaction with the engineering field is also addressed. The results described here provide insights into the mechanism of machine learning, which will enable appropriate usage in the future.
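
    A minimal Python sketch comparing the six method families named above on a synthetic stand-in for an ADME end point (scikit-learn versions of the methods; not the study's data or code):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.gaussian_process import GaussianProcessClassifier
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier

      # synthetic stand-in for a binary ADME end point (e.g., permeable vs. not)
      x, y = make_classification(n_samples=300, n_features=20, random_state=0)

      models = {"naive Bayes": GaussianNB(),
                "decision tree (CART)": DecisionTreeClassifier(random_state=0),
                "random forest": RandomForestClassifier(random_state=0),
                "Gaussian process": GaussianProcessClassifier(random_state=0),
                "SVM (kernel machine)": SVC(),
                "k nearest neighbour": KNeighborsClassifier()}
      for name, model in models.items():
          accuracy = cross_val_score(model, x, y, cv=5).mean()
          print(f"{name:22s} {accuracy:.3f}")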

  15. Uterine Cancer Statistics

    Science.gov (United States)

    ... the most commonly diagnosed gynecologic cancer. U.S. Cancer Statistics Data Visualizations Tool: The Data Visualizations tool makes ...

  16. Lexical Sophistication as a Multidimensional Phenomenon: Relations to Second Language Lexical Proficiency, Development, and Writing Quality

    Science.gov (United States)

    Kim, Minkyung; Crossley, Scott A.; Kyle, Kristopher

    2018-01-01

    This study conceptualizes lexical sophistication as a multidimensional phenomenon by reducing numerous lexical features of lexical sophistication into 12 aggregated components (i.e., dimensions) via a principal component analysis approach. These components were then used to predict second language (L2) writing proficiency levels, holistic lexical…

  17. Few remarks on chiral theories with sophisticated topology

    International Nuclear Information System (INIS)

    Golo, V.L.; Perelomov, A.M.

    1978-01-01

    Two classes of two-dimensional Euclidean chiral field theories are singled out: 1) the field phi(x) takes its values in a compact Hermitian symmetric space; 2) the field phi(x) takes its values in an orbit of the adjoint representation of a noncompact Lie group. The theories have sophisticated topological and rich analytical structures. They are considered with the help of topological invariants (topological charges). Explicit formulae for the topological charges are indicated, and a lower-bound estimate for the action is given

  18. SimHap GUI: an intuitive graphical user interface for genetic association analysis.

    Science.gov (United States)

    Carter, Kim W; McCaskie, Pamela A; Palmer, Lyle J

    2008-12-25

    Researchers wishing to conduct genetic association analysis involving single nucleotide polymorphisms (SNPs) or haplotypes are often confronted with a lack of user-friendly graphical analysis tools, so that sophisticated statistical and informatics expertise is required to perform relatively straightforward tasks. Tools such as the SimHap package for the R statistics language provide the necessary statistical operations to conduct sophisticated genetic analysis, but lack a graphical user interface that would allow anyone but a professional statistician to use them effectively. We have developed SimHap GUI, a cross-platform integrated graphical analysis tool for conducting epidemiological, single-SNP and haplotype-based association analysis. SimHap GUI features a novel workflow interface that guides the user through each logical step of the analysis process, making it accessible to both novice and advanced users. This tool provides a seamless interface to the SimHap R package, while providing enhanced functionality such as sophisticated data checking, automated data conversion, and real-time estimation of haplotype simulation progress. SimHap GUI provides a novel, easy-to-use, cross-platform solution for conducting a range of genetic and non-genetic association analyses. This provides a free alternative to commercial statistics packages that is specifically designed for genetic association analysis.

  19. Software Used to Generate Cancer Statistics - SEER Cancer Statistics

    Science.gov (United States)

    Videos that highlight topics and trends in cancer statistics and definitions of statistical terms. Also software tools for analyzing and reporting cancer statistics, which are used to compile SEER's annual reports.

  20. Building an asynchronous web-based tool for machine learning classification.

    Science.gov (United States)

    Weber, Griffin; Vinterbo, Staal; Ohno-Machado, Lucila

    2002-01-01

    Various unsupervised and supervised learning methods including support vector machines, classification trees, linear discriminant analysis and nearest neighbor classifiers have been used to classify high-throughput gene expression data. Simpler and more widely accepted statistical tools have not yet been used for this purpose, hence proper comparisons between classification methods have not been conducted. We developed free software that implements logistic regression with stepwise variable selection as a quick and simple method for initial exploration of important genetic markers in disease classification. To implement the algorithm and allow our collaborators in remote locations to evaluate and compare its results against those of other methods, we developed a user-friendly asynchronous web-based application with a minimal amount of programming using free, downloadable software tools. With this program, we show that classification using logistic regression can perform as well as other more sophisticated algorithms, and it has the advantages of being easy to interpret and reproduce. By making the tool freely and easily available, we hope to promote the comparison of classification methods. In addition, we believe our web application can be used as a model for other bioinformatics laboratories that need to develop web-based analysis tools in a short amount of time and on a limited budget.
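
    A minimal Python sketch of the idea, forward stepwise variable selection wrapped around logistic regression, on synthetic expression data (scikit-learn stands in here for the authors' web tool):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.linear_model import LogisticRegression

      # synthetic stand-in: 120 samples, 50 "genes", 5 of them informative
      x, y = make_classification(n_samples=120, n_features=50, n_informative=5,
                                 random_state=0)

      lr = LogisticRegression(max_iter=1000)
      selector = SequentialFeatureSelector(lr, n_features_to_select=5).fit(x, y)
      picked = np.flatnonzero(selector.get_support())
      lr.fit(x[:, picked], y)
      print("selected marker genes:", picked)
      print("training accuracy:", lr.score(x[:, picked], y))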

  1. U-Compare: share and compare text mining tools with UIMA

    Science.gov (United States)

    Kano, Yoshinobu; Baumgartner, William A.; McCrohon, Luke; Ananiadou, Sophia; Cohen, K. Bretonnel; Hunter, Lawrence; Tsujii, Jun'ichi

    2009-01-01

    Summary: Due to the increasing number of text mining resources (tools and corpora) available to biologists, interoperability issues between these resources are becoming significant obstacles to using them effectively. UIMA, the Unstructured Information Management Architecture, is an open framework designed to aid in the construction of more interoperable tools. U-Compare is built on top of the UIMA framework, and provides both a concrete framework for out-of-the-box text mining and a sophisticated evaluation platform allowing users to run specific tools on any target text, generating both detailed statistics and instance-based visualizations of outputs. U-Compare is a joint project, providing the world's largest, and still growing, collection of UIMA-compatible resources. These resources, originally developed by different groups for a variety of domains, include many famous tools and corpora. U-Compare can be launched straight from the web, without needing to be manually installed. All U-Compare components are provided ready-to-use and can be combined easily via a drag-and-drop interface without any programming. External UIMA components can also simply be mixed with U-Compare components, without distinguishing between locally and remotely deployed resources. Availability: http://u-compare.org/ Contact: kano@is.s.u-tokyo.ac.jp PMID:19414535

  2. Textbook-Bundled Metacognitive Tools: A Study of LearnSmart's Efficacy in General Chemistry

    Science.gov (United States)

    Thadani, Vandana; Bouvier-Brown, Nicole C.

    2016-01-01

    College textbook publishers increasingly bundle sophisticated technology-based study tools with their texts. These tools appear promising, but empirical work on their efficacy is needed. We examined whether LearnSmart, a study tool bundled with McGraw-Hill's textbook "Chemistry" (Chang & Goldsby, 2013), improved learning in an…

  3. STOCK EXCHANGE LISTING INDUCES SOPHISTICATION OF CAPITAL BUDGETING

    Directory of Open Access Journals (Sweden)

    Wesley Mendes-da-Silva

    2014-08-01

    Full Text Available This article compares capital budgeting techniques employed in listed and unlisted companies in Brazil. We surveyed the Chief Financial Officers (CFOs) of 398 listed companies and 300 large unlisted companies. Based on 91 respondents, the results suggest that CFOs of listed companies tend to use less simplistic methods (e.g., NPV and CAPM) more often, and that CFOs of unlisted companies are less likely to estimate the cost of equity, despite being large companies. These findings indicate that stock exchange listing may require greater sophistication of the capital budgeting process.
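
    For readers unfamiliar with the two techniques named above, a small illustrative computation (the figures are invented, not from the survey):

        # Cost of equity via CAPM, then NPV of a hypothetical project
        def capm(rf, beta, rm):
            return rf + beta * (rm - rf)            # expected return on equity

        def npv(rate, cashflows):                   # cashflows[0]: initial outlay
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        r = capm(rf=0.05, beta=1.2, rm=0.11)        # 12.2% discount rate
        print(round(npv(r, [-1000, 400, 400, 400, 400]), 2))  # accept if positive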

  4. Differential ethnic associations between maternal flexibility and play sophistication in toddlers born very low birth weight

    Science.gov (United States)

    Erickson, Sarah J.; Montague, Erica Q.; Maclean, Peggy C.; Bancroft, Mary E.; Lowe, Jean R.

    2013-01-01

    Children born very low birth weight (VLBW) are at risk for delays in the development of self-regulation and effective functional skills, and play serves as an important avenue of early intervention. The current study investigated associations between maternal flexibility and toddler play sophistication in Caucasian, Spanish speaking Hispanic, English speaking Hispanic, and Native American toddlers (18-22 months adjusted age) in a cross-sectional cohort of 73 toddlers born VLBW and their mothers. We found that the association between maternal flexibility and toddler play sophistication differed by ethnicity (F(3,65) = 3.34, p = .02). In particular, Spanish speaking Hispanic dyads evidenced a significant positive association between maternal flexibility and play sophistication of medium effect size. Results for Native Americans were parallel to those of Spanish speaking Hispanic dyads: the relationship between flexibility and play sophistication was positive and of small-medium effect size. Findings indicate that for Caucasians and English speaking Hispanics, flexibility evidenced a non-significant (negative and small effect size) association with toddler play sophistication. Significant follow-up contrasts revealed that the associations for Caucasian and English speaking Hispanic dyads were significantly different from those of the other two ethnic groups. Results remained unchanged after adjusting for the amount of maternal language, an index of maternal engagement and stimulation; and after adjusting for birth weight, gestational age, gender, test age, cognitive ability, as well as maternal age, education, and income. Our results provide preliminary evidence that ethnicity and acculturation may moderate the association between maternal interactive behavior such as flexibility and toddler developmental outcomes, as indexed by play sophistication. Addressing these association differences is particularly important in children born VLBW because interventions targeting parent interaction strategies such as…

  5. The CEO performance effect : Statistical issues and a complex fit perspective

    NARCIS (Netherlands)

    Blettner, D.P.; Chaddad, F.R.; Bettis, R.

    2012-01-01

    How CEOs affect strategy and performance is important to strategic management research. We show that sophisticated statistical analysis alone is problematic for establishing the magnitude and causes of CEO impact on performance. We discuss three problem areas that substantially distort the…

  6. Does a more sophisticated storm erosion model improve probabilistic erosion estimates?

    NARCIS (Netherlands)

    Ranasinghe, R.W.M.R.J.B.; Callaghan, D.; Roelvink, D.

    2013-01-01

    The dependency between the accuracy/uncertainty of storm erosion exceedance estimates obtained via a probabilistic model and the level of sophistication of the structural function (storm erosion model) embedded in the probabilistic model is assessed via the application of Callaghan et al.'s (2008)…

  7. Sophisticating a naive Liapunov function

    International Nuclear Information System (INIS)

    Smith, D.; Lewins, J.D.

    1985-01-01

    The art of the direct method of Liapunov to determine system stability is to construct a suitable Liapunov or V function, where V is to be positive definite (PD) and to shrink to a center, which may be conveniently chosen as the origin, and where its time derivative dV/dt is to be negative definite (ND). One aid to the art is to solve an approximation to the system equations in order to provide a candidate V function. It can happen, however, that dV/dt is not strictly ND but vanishes at a finite number of isolated points. Naively, one anticipates that stability has been demonstrated, since the trajectory of the system at such points is only momentarily tangential and immediately enters a region of inward directed trajectories. To demonstrate stability rigorously requires the construction of a sophisticated Liapunov function from what can be called the naive original choice. In this paper, the authors demonstrate the method of perturbing the naive function in the context of the well-known second-order oscillator and then apply the method to a more complicated problem based on a prompt jump model for a nuclear fission reactor.
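
    A textbook instance of the naive case (our illustration, not an example from the paper): for the damped oscillator x'' + c x' + x = 0 with c > 0, the natural candidate V = (x^2 + (x')^2)/2 is PD, but dV/dt = -c (x')^2 is only negative semi-definite, since it vanishes on the entire line x' = 0 rather than only at the origin. Perturbing the naive choice to V_eps = (x^2 + (x')^2)/2 + eps*x*x' keeps V_eps positive definite for sufficiently small eps > 0, while its derivative becomes dV_eps/dt = -(c - eps)(x')^2 - eps*x^2 - eps*c*x*x', a strictly negative definite quadratic form for small enough eps, so asymptotic stability follows rigorously.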

  8. Designing Solutions by a Student Centred Approach: Integration of Chemical Process Simulation with Statistical Tools to Improve Distillation Systems

    Directory of Open Access Journals (Sweden)

    Isabel M. Joao

    2017-09-01

    Full Text Available Projects thematically focused on simulation and statistical techniques for designing and optimizing chemical processes can be helpful in chemical engineering education in order to meet the needs of engineers. We argue that such projects strengthen a student-centred approach and boost higher-order thinking skills. This paper addresses the use of Aspen HYSYS by Portuguese chemical engineering master students to model distillation systems, together with statistical experimental design techniques, in order to optimize the systems, highlighting the value of applying problem-specific knowledge, simulation tools and sound statistical techniques. The paper summarizes the work developed by the students to model steady-state processes, model dynamic processes and optimize the distillation systems, emphasizing the benefits of the simulation tools and statistical techniques in helping the students learn how to learn. Students strengthened their domain-specific knowledge and became motivated to rethink and improve chemical processes in their future chemical engineering profession. We discuss the main advantages of the methodology from the students' and teachers' perspectives.

  9. Strategic sophistication of individuals and teams. Experimental evidence

    Science.gov (United States)

    Sutter, Matthias; Czermak, Simon; Feri, Francesco

    2013-01-01

    Many important decisions require strategic sophistication. We examine experimentally whether teams act more strategically than individuals. We let individuals and teams make choices in simple games, and also elicit first- and second-order beliefs. We find that teams play the Nash equilibrium strategy significantly more often, and that their choices are more often a best response to stated first-order beliefs. Distributional preferences make equilibrium play less likely. Using a mixture model, the estimated probability of playing strategically is 62% for teams, but only 40% for individuals. A model of noisy introspection reveals that teams differ from individuals in higher-order beliefs. PMID:24926100

  10. Statistical mechanical analysis of LMFBR fuel cladding tubes

    International Nuclear Information System (INIS)

    Poncelet, J.-P.; Pay, A.

    1977-01-01

    The most important design requirement on fuel pin cladding for LMFBRs is its mechanical integrity. Disruptive factors include internal pressure from mixed oxide fuel fission gas release, thermal stresses and high temperature creep, neutron-induced differential void-swelling as a source of stress in the cladding, irradiation creep of the stainless steel material, and corrosion by fission products. Under irradiation these load-restraining mechanisms are accentuated by stainless steel embrittlement and strength alterations. To account for the numerous uncertainties involved in the analysis by theoretical models and computer codes, statistical tools are unavoidably required, i.e. Monte Carlo simulation methods. Thanks to these techniques, uncertainties in nominal characteristics, material properties and environmental conditions can be linked up in a correct way and used for a more accurate conceptual design. First, a thermal creep damage index is set up through a sufficiently sophisticated physical analysis of the clad, including arbitrary time dependence of power and neutron flux as well as effects of sodium temperature, burnup and steel mechanical behavior. Although this strain limit approach implies a more general but time-consuming model, in return the net output is improved: clad temperature, stress and strain maxima, for example, may be easily assessed. A full spectrum of variables is statistically treated to account for their probability distributions. Creep damage probability may be obtained and can contribute to a quantitative estimation of fuel reliability.
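
    A toy sketch of the Monte Carlo idea outlined above (our illustration with invented distributions, not the paper's model):

        # Propagate input uncertainties into a creep damage index D;
        # D >= 1 is counted as exceeding the damage limit.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000
        stress = rng.normal(120.0, 8.0, n)        # MPa, hypothetical clad stress
        temp = rng.normal(900.0, 15.0, n)         # K, hypothetical clad temperature
        strength = rng.lognormal(np.log(200.0), 0.10, n)   # MPa, creep strength
        damage = stress * np.exp((temp - 900.0) / 150.0) / strength  # toy index
        print("P(damage >= 1) =", np.mean(damage >= 1.0))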

  11. Power spectra as a diagnostic tool in probing statistical/nonstatistical behavior in unimolecular reactions

    Science.gov (United States)

    Chang, Xiaoyen Y.; Sewell, Thomas D.; Raff, Lionel M.; Thompson, Donald L.

    1992-11-01

    The possibility of utilizing different types of power spectra obtained from classical trajectories as a diagnostic tool to identify the presence of nonstatistical dynamics is explored by using the unimolecular bond-fission reactions of 1,2-difluoroethane and the 2-chloroethyl radical as test cases. In previous studies, the reaction rates for these systems were calculated by using a variational transition-state theory and classical trajectory methods. A comparison of the results showed that 1,2-difluoroethane is a nonstatistical system, while the 2-chloroethyl radical behaves statistically. Power spectra for these two systems have been generated under various conditions. The characteristics of these spectra are as follows: (1) The spectra for the 2-chloroethyl radical are always broader and more coupled to other modes than is the case for 1,2-difluoroethane. This is true even at very low levels of excitation. (2) When an internal energy near or above the dissociation threshold is initially partitioned into a local C-H stretching mode, the power spectra for 1,2-difluoroethane broaden somewhat, but discrete and somewhat isolated bands are still clearly evident. In contrast, the analogous power spectra for the 2-chloroethyl radical exhibit a near complete absence of isolated bands. The general appearance of the spectrum suggests a very high level of mode-to-mode coupling, large intramolecular vibrational energy redistribution (IVR) rates, and global statistical behavior. (3) The appearance of the power spectrum for the 2-chloroethyl radical is unaltered regardless of whether the initial C-H excitation is in the CH2 or the CH2Cl group. This result also suggests statistical behavior. These results are interpreted to mean that power spectra may be used as a diagnostic tool to assess the statistical character of a system. The presence of a diffuse spectrum exhibiting a nearly complete loss of isolated structures indicates that the dissociation dynamics of the molecule will…
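
    The diagnostic itself is straightforward to reproduce on synthetic signals (our sketch; the paper used classical trajectories instead):

        # Spectral broadening distinguishes isolated from strongly coupled modes
        import numpy as np

        dt = 0.01
        t = np.arange(0, 100, dt)
        q_isolated = np.cos(2*np.pi*1.0*t) + 0.3*np.cos(2*np.pi*3.7*t)
        q_coupled = np.cos(2*np.pi*1.0*t + 2.0*np.sin(2*np.pi*0.3*t))  # FM coupling
        for name, q in (("isolated", q_isolated), ("coupled", q_coupled)):
            power = np.abs(np.fft.rfft(q))**2
            n_bins = np.sum(power > 0.01 * power.max())  # crude broadening measure
            print(f"{name}: {n_bins} frequency bins above 1% of the peak")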

  12. A Comparison of Several Statistical Tests of Reciprocity of Self-Disclosure.

    Science.gov (United States)

    Dindia, Kathryn

    1988-01-01

    Reports the results of a study that used several statistical tests of reciprocity of self-disclosure. Finds little evidence for reciprocity of self-disclosure, and concludes that either reciprocity is an illusion, or that different or more sophisticated methods are needed to detect it. (MS)

  13. Stupid statistics!

    Science.gov (United States)

    Tellinghuisen, Joel

    2008-01-01

    The method of least squares is probably the most powerful data analysis tool available to scientists. Toward a fuller appreciation of that power, this work begins with an elementary review of statistics fundamentals, and then progressively increases in sophistication as the coverage is extended to the theory and practice of linear and nonlinear least squares. The results are illustrated in application to data analysis problems important in the life sciences. The review of fundamentals includes the role of sampling and its connection to probability distributions, the Central Limit Theorem, and the importance of finite variance. Linear least squares are presented using matrix notation, and the significance of the key probability distributions (Gaussian, chi-square, and t) is illustrated with Monte Carlo calculations. The meaning of correlation is discussed, including its role in the propagation of error. When the data themselves are correlated, special methods are needed for the fitting, as they are also when fitting with constraints. Nonlinear fitting gives rise to non-normal parameter distributions, but the 10% Rule of Thumb suggests that such problems will be insignificant when the parameter is sufficiently well determined. Illustrations include calibration with linear and nonlinear response functions, the dangers inherent in fitting inverted data (e.g., Lineweaver-Burk equation), an analysis of the reliability of the van't Hoff analysis, the problem of correlated data in the Guggenheim method, and the optimization of isothermal titration calorimetry procedures using the variance-covariance matrix for experiment design. The work concludes with illustrations on assessing and presenting results.
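
    As a compact illustration of the matrix-notation treatment reviewed above (our sketch on synthetic heteroscedastic data):

        # beta = (X^T W X)^{-1} X^T W y with per-point weights 1/sigma_i^2
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 25)
        sigma = 0.2 + 0.05 * x                      # heteroscedastic errors
        y = 1.5 + 0.8 * x + rng.normal(0, sigma)
        X = np.column_stack([np.ones_like(x), x])   # design matrix for y = a + b*x
        W = np.diag(1 / sigma**2)
        cov = np.linalg.inv(X.T @ W @ X)            # variance-covariance matrix
        a, b = cov @ X.T @ W @ y
        print("a, b =", a, b, "std errors:", np.sqrt(np.diag(cov)))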

  14. Development Strategies for Tourism Destinations: Tourism Sophistication vs. Resource Investments

    OpenAIRE

    Rainer Andergassen; Guido Candela

    2010-01-01

    This paper investigates the effectiveness of development strategies for tourism destinations. We argue that resource investments unambiguously increase tourism revenues and that increasing the degree of tourism sophistication, that is, increasing the variety of tourism related goods and services, increases tourism activity and decreases the perceived quality of the destination's resource endowment, leading to an ambiguous effect on tourism revenues. We disentangle these two effects and charact…

  15. A New Statistical Tool: Scalar Score Function

    Czech Academy of Sciences Publication Activity Database

    Fabián, Zdeněk

    2011-01-01

    Roč. 2, - (2011), s. 109-116 ISSN 1934-7332 R&D Projects: GA ČR GA205/09/1079 Institutional research plan: CEZ:AV0Z10300504 Keywords : statistics * inference function * data characteristics * point estimates * heavy tails Subject RIV: BB - Applied Statistics, Operational Research

  16. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    International Nuclear Information System (INIS)

    Weathers, J.B.; Luck, R.; Weathers, J.W.

    2009-01-01

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
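
    A stripped-down sketch of the Monte Carlo estimation step (ours; the distributions and dimensions are invented for illustration):

        # Covariance of the comparison error e = experiment - model, by sampling
        import numpy as np

        rng = np.random.default_rng(42)
        n = 50_000
        sys_bias = rng.normal(0.0, 0.5, size=(n, 1))         # shared systematic error
        rand_err = rng.normal(0.0, [0.3, 0.4], size=(n, 2))  # per-quantity random error
        e = sys_bias + rand_err                              # two correlated errors
        cov = np.cov(e, rowvar=False)
        print("estimated covariance matrix:\n", cov)
        print("approx. 95% bounds:", 1.96 * np.sqrt(np.diag(cov)))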

  17. An exercise in model validation: Comparing univariate statistics and Monte Carlo-based multivariate statistics

    Energy Technology Data Exchange (ETDEWEB)

    Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com

    2009-11-15

    The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.

  18. Do organizations adopt sophisticated capital budgeting practices to deal with uncertainty in the investment decision? : A research note

    NARCIS (Netherlands)

    Verbeeten, Frank H M

    This study examines the impact of uncertainty on the sophistication of capital budgeting practices. While the theoretical applications of sophisticated capital budgeting practices (defined as the use of real option reasoning and/or game theory decision rules) have been well documented, empirical…

  19. Cognitive ergonomics of operational tools

    International Nuclear Information System (INIS)

    Luedeke, A.

    2012-01-01

    Control systems have become increasingly more powerful over the past decades. The availability of high data throughput and sophisticated graphical interactions has opened a variety of new possibilities. But has this helped to provide intuitive, easy to use applications to simplify the operation of modern large scale accelerator facilities? We will discuss what makes an application useful to operation and what is necessary to make a tool easy to use. We will show that even the implementation of a small number of simple application design rules can help to create ergonomic operational tools. The author is convinced that such tools do indeed help to achieve higher beam availability and better beam performance at all accelerator facilities. (author)

  20. Exploring students’ perceived and actual ability in solving statistical problems based on Rasch measurement tools

    Science.gov (United States)

    Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati

    2017-09-01

    One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This enables them to arrive at a conclusion and make a significant contribution and decision for society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypotheses, which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and is listed as one of the concepts students find difficult to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured using instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, owing to reasons that include a lack of understanding of confidence intervals and probability values.
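
    For reference, the dichotomous Rasch model underlying such analyses reduces to a simple logistic form (a minimal sketch, not the Winsteps implementation):

        # P(correct) for a person of ability theta on an item of difficulty b
        import math

        def rasch_p(theta, b):
            return 1.0 / (1.0 + math.exp(-(theta - b)))

        for b in (-1.0, 0.0, 2.0):   # easy, matched, and hard items
            print(f"b = {b:+.1f}: P(correct) = {rasch_p(0.0, b):.2f}")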

  1. Sophistic Ethics in the Technical Writing Classroom: Teaching "Nomos," Deliberation, and Action.

    Science.gov (United States)

    Scott, J. Blake

    1995-01-01

    Claims that teaching ethics is particularly important to technical writing. Outlines a classical, sophistic approach to ethics based on the theories and pedagogies of Protagoras, Gorgias, and Isocrates, which emphasizes the Greek concept of "nomos," internal and external deliberation, and responsible action. Discusses problems and…

  2. MyPMFs: a simple tool for creating statistical potentials to assess protein structural models.

    Science.gov (United States)

    Postic, Guillaume; Hamelryck, Thomas; Chomilier, Jacques; Stratmann, Dirk

    2018-05-29

    Evaluating the model quality of protein structures that evolve in environments with particular physicochemical properties requires scoring functions that are adapted to their specific residue compositions and/or structural characteristics. Thus, computational methods developed for structures from the cytosol cannot work properly on membrane or secreted proteins. Here, we present MyPMFs, an easy-to-use tool that allows users to train statistical potentials of mean force (PMFs) on the protein structures of their choice, with all parameters being adjustable. We demonstrate its use by creating an accurate statistical potential for transmembrane protein domains. We also show its usefulness to study the influence of the physical environment on residue interactions within protein structures. Our open-source software is freely available for download at https://github.com/bibip-impmc/mypmfs. Copyright © 2018. Published by Elsevier B.V.
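
    The general idea behind such knowledge-based potentials can be sketched in a few lines (our toy example, not the MyPMFs code; the pair labels and counts are invented):

        # E = -kT * ln(p_obs / p_ref) over residue-pair contact frequencies
        import math
        from collections import Counter

        def pmf_scores(observed_pairs, reference_pairs, kT=0.593):  # kcal/mol, ~298 K
            p_obs, p_ref = Counter(observed_pairs), Counter(reference_pairs)
            n_obs, n_ref = sum(p_obs.values()), sum(p_ref.values())
            return {pair: -kT * math.log((p_obs[pair] / n_obs) / (p_ref[pair] / n_ref))
                    for pair in p_obs if pair in p_ref}

        scores = pmf_scores([("A", "L"), ("A", "L"), ("K", "E")],
                            [("A", "L"), ("K", "E"), ("K", "E"), ("G", "G")])
        print(scores)   # negative = more favorable than the reference state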

  3. Sophisticated Fowl: The Complex Behaviour and Cognitive Skills of Chickens and Red Junglefowl

    Directory of Open Access Journals (Sweden)

    Laura Garnham

    2018-01-01

    Full Text Available The world’s most numerous bird, the domestic chicken, and their wild ancestor, the red junglefowl, have long been used as model species for animal behaviour research. Recently, this research has advanced our understanding of the social behaviour, personality, and cognition of fowl, and demonstrated their sophisticated behaviour and cognitive skills. Here, we overview some of this research, starting by describing research investigating the well-developed senses of fowl, before presenting how socially and cognitively complex they can be. The realisation that domestic chickens, our most abundant production animal, are behaviourally and cognitively sophisticated should encourage an increase in general appreciation of and fascination towards them. In turn, this should inspire increased use of them as both research and hobby animals, as well as improvements in their unfortunately often poor welfare.

  4. TU-FG-201-05: Varian MPC as a Statistical Process Control Tool

    International Nuclear Information System (INIS)

    Carver, A; Rowbottom, C

    2016-01-01

    Purpose: Quality assurance in radiotherapy requires the measurement of various machine parameters to ensure they remain within permitted values over time. In Truebeam release 2.0 the Machine Performance Check (MPC) was released, allowing beam output and machine axis movements to be assessed in a single test. We aim to evaluate the Varian Machine Performance Check (MPC) as a tool for Statistical Process Control (SPC). Methods: Varian's MPC tool was used on three Truebeam linacs and one EDGE linac for a period of approximately one year. MPC was commissioned against independent systems. After this period the data were reviewed to determine whether or not the MPC was useful as a process control tool. Individual tests were analysed with Shewhart control charts, using Matlab for the analysis. Principal component analysis was used to determine if a multivariate model was of any benefit in analysing the data. Results: Control charts were found to be useful to detect beam output changes, worn T-nuts and jaw calibration issues. Upper and lower control limits were defined at the 95% level. Multivariate SPC was performed using principal component analysis. We found little evidence of clustering beyond that which might be naively expected, such as beam uniformity and beam output. Whilst this makes multivariate analysis of little use, it suggests that each test gives independent information. Conclusion: The variety of independent parameters tested in MPC makes it a sensitive tool for routine machine QA. We have determined that using control charts in our QA programme would rapidly detect changes in machine performance. The use of control charts allows large quantities of tests to be performed on all linacs without visual inspection of all results. The use of control limits alerts users when data are inconsistent with previous measurements before they become out of specification. A. Carver has received a speaker's honorarium from Varian
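
    A minimal sketch of the individuals control chart used in this kind of analysis (synthetic output data and drift, not MPC measurements; the paper used Matlab, Python here):

        # Individuals chart: flag measurements outside 95% control limits
        import numpy as np

        rng = np.random.default_rng(7)
        output = 100 + rng.normal(0, 0.5, 60)   # daily beam output, % of baseline
        output[45:] += 1.2                      # simulated drift, e.g. a worn T-nut
        center = output[:30].mean()             # limits from an in-control period
        sigma = output[:30].std(ddof=1)
        ucl, lcl = center + 1.96 * sigma, center - 1.96 * sigma  # 95% limits
        flags = np.flatnonzero((output > ucl) | (output < lcl))
        print("out-of-control points at indices:", flags)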

  5. Assessing Epistemic Sophistication by Considering Domain-Specific Absolute and Multiplicistic Beliefs Separately

    Science.gov (United States)

    Peter, Johannes; Rosman, Tom; Mayer, Anne-Kathrin; Leichner, Nikolas; Krampen, Günter

    2016-01-01

    Background: Particularly in higher education, not only a view of science as a means of finding absolute truths (absolutism), but also a view of science as generally tentative (multiplicism) can be unsophisticated and obstructive for learning. Most quantitative epistemic belief inventories neglect this and understand epistemic sophistication as…

  6. Cintichem modified process - {sup 99}Mo precipitation step: application of statistical analysis tools over the reaction parameters

    Energy Technology Data Exchange (ETDEWEB)

    Teodoro, Rodrigo; Dias, Carla R.B.R.; Osso Junior, Joao A., E-mail: jaosso@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Fernandez Nunez, Eutimio Gustavo [Universidade de Sao Paulo (EP/USP), SP (Brazil). Escola Politecnica. Dept. de Engenharia Quimica

    2011-07-01

    Precipitation of {sup 99}Mo by {alpha}-benzoin oxime ({alpha}-Bz) is a standard precipitation method for molybdenum due to the high selectivity of this agent. Nowadays, statistical analysis tools are employed in analytical systems to prove their efficiency and feasibility. IPEN has a project aiming at the production of {sup 99}Mo via the fission of {sup 235}U. The processing uses as its first step the precipitation of {sup 99}Mo with {alpha}-Bz. This precipitation step involves many key reaction parameters. This work aims to develop the already known acidic route to produce {sup 99}Mo and to optimize the reaction parameters by applying statistical tools. In order to simulate {sup 99}Mo precipitation, the study was conducted in acidic media using HNO{sub 3}, {alpha}-Bz as precipitant agent and NaOH/1%H{sub 2}O{sub 2} as dissolver solution. Then, a Mo carrier, KMnO{sub 4} solutions and {sup 99}Mo tracer were added to the reaction flask. The reaction parameters ({alpha}-Bz/Mo ratio, Mo carrier, reaction time and temperature, and cooling reaction time before filtration) were evaluated under a fractional factorial design of resolution V. The best values of each reaction parameter were determined by response surface statistical planning. The precipitation and recovery yields of {sup 99}Mo were measured using an HPGe detector. Statistical analysis of the experimental data suggested that the {alpha}-Bz/Mo ratio, reaction time and temperature have a significant impact on {sup 99}Mo precipitation. Optimization planning showed that higher {alpha}-Bz/Mo ratios, room temperature, and shorter reaction times lead to higher {sup 99}Mo yields. (author)
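
    To make the design concrete, here is how a 2^(5-1) fractional factorial of resolution V over the five reaction parameters can be generated (a toy sketch with our own -1/+1 coding, not the authors' experimental plan):

        # Defining relation I = ABCDE gives the fifth factor as E = ABCD
        import itertools

        factors = ["aBz_Mo_ratio", "Mo_carrier", "time", "temperature", "cooling_time"]
        runs = []
        for a, b, c, d in itertools.product([-1, 1], repeat=4):
            e = a * b * c * d
            runs.append(dict(zip(factors, (a, b, c, d, e))))
        print(len(runs), "runs")   # 16 runs instead of 32 for the full factorial
        print(runs[0])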

  7. The State of Nursing Home Information Technology Sophistication in Rural and Nonrural US Markets.

    Science.gov (United States)

    Alexander, Gregory L; Madsen, Richard W; Miller, Erin L; Wakefield, Douglas S; Wise, Keely K; Alexander, Rachel L

    2017-06-01

    To test for significant differences in information technology sophistication (ITS) in US nursing homes (NHs) based on location. We administered a primary survey from January 2014 to July 2015 to NHs in each US state. The survey was cross-sectional and examined 3 dimensions (IT capabilities, extent of IT use, degree of IT integration) among 3 domains (resident care, clinical support, administrative activities) of ITS. ITS was broken down by NH location. Mean responses were compared across 4 NH categories (Metropolitan, Micropolitan, Small Town, and Rural) for all 9 ITS dimension-domain combinations. Least squares means and Tukey's method were used for multiple comparisons. The methods yielded 815/1,799 surveys (45% response rate). In every health care domain (resident care, clinical support, and administrative activities), statistically significant differences in facility ITS occurred between more (metropolitan or micropolitan) and less (small town or rural) populated areas. This study represents the most current national assessment of NH IT since 2004. Historically, NH IT has been used solely for administrative activities and much less for resident care and clinical support. However, the results are encouraging, as ITS in the other domains appears to be greater than previously imagined. © 2016 National Rural Health Association.

  8. The Impact of Services on Economic Complexity: Service Sophistication as Route for Economic Growth.

    Science.gov (United States)

    Stojkoski, Viktor; Utkovski, Zoran; Kocarev, Ljupco

    2016-01-01

    Economic complexity reflects the amount of knowledge that is embedded in the productive structure of an economy. By combining tools from network science and econometrics, a robust and stable relationship between a country's productive structure and its economic growth has been established. Here we report that not only goods but also services are important for predicting the rate at which countries will grow. By adopting a terminology which classifies manufactured goods and delivered services as products, we investigate the influence of services on the country's productive structure. In particular, we provide evidence that complexity indices for services are in general higher than those for goods, which is reflected in a general tendency to rank countries with developed service sector higher than countries with economy centred on manufacturing of goods. By focusing on country dynamics based on experimental data, we investigate the impact of services on the economic complexity of countries measured in the product space (consisting of both goods and services). Importantly, we show that diversification of service exports and its sophistication can provide an additional route for economic growth in both developing and developed countries.

  9. New applications of statistical tools in plant pathology.

    Science.gov (United States)

    Garrett, K A; Madden, L V; Hughes, G; Pfender, W F

    2004-09-01

    ABSTRACT The series of papers introduced by this one address a range of statistical applications in plant pathology, including survival analysis, nonparametric analysis of disease associations, multivariate analyses, neural networks, meta-analysis, and Bayesian statistics. Here we present an overview of additional applications of statistics in plant pathology. An analysis of variance based on the assumption of normally distributed responses with equal variances has been a standard approach in biology for decades. Advances in statistical theory and computation now make it convenient to appropriately deal with discrete responses using generalized linear models, with adjustments for overdispersion as needed. New nonparametric approaches are available for analysis of ordinal data such as disease ratings. Many experiments require the use of models with fixed and random effects for data analysis. New or expanded computing packages, such as SAS PROC MIXED, coupled with extensive advances in statistical theory, allow for appropriate analyses of normally distributed data using linear mixed models, and of discrete data with generalized linear mixed models. Decision theory offers a framework in plant pathology for contexts such as the decision about whether to apply or withhold a treatment. Model selection can be performed using Akaike's information criterion. Plant pathologists studying pathogens at the population level have traditionally been the main consumers of statistical approaches in plant pathology, but new technologies such as microarrays supply estimates of gene expression for thousands of genes simultaneously and present challenges for statistical analysis. Applications to the study of the landscape of the field and of the genome share the risk of pseudoreplication, the problem of determining the appropriate scale of the experimental unit and of obtaining sufficient replication at that scale.
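
    As a reminder of how the Akaike criterion mentioned above ranks candidate models (a minimal sketch; the log-likelihoods are invented):

        # AIC = 2k - 2 ln(L); the lower value wins
        def aic(k, log_lik):
            return 2 * k - 2 * log_lik

        models = {"logistic, 3 params": aic(3, -120.4),
                  "logistic + overdispersion, 4 params": aic(4, -117.1)}
        print(min(models.items(), key=lambda kv: kv[1]))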

  10. Aortic Aneurysm Statistics

    Science.gov (United States)

  11. Statistical analysis of electrical resistivity as a tool for estimating cement type of 12-year-old concrete specimens

    NARCIS (Netherlands)

    Polder, R.B.; Morales-Napoles, O.; Pacheco, J.

    2012-01-01

    Statistical tests on values of concrete resistivity can be used as a fast tool for estimating the cement type of old concrete. Electrical resistivity of concrete is a material property that describes the electrical resistance of concrete in a unit cell. Influences of binder type, water-to-binder…

  12. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    Science.gov (United States)

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but musically sophisticated speakers do show enhanced pitch discrimination compared to Finnish speakers with less musical experience, and greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences. PMID:28450829

  13. Statistical approach to quantum field theory. An introduction

    International Nuclear Information System (INIS)

    Wipf, Andreas

    2013-01-01

    Based on course-tested notes and pedagogical in style. Authored by a leading researcher in the field. Contains end-of-chapter problems and listings of short, useful computer programs. Over the past few decades the powerful methods of statistical physics and Euclidean quantum field theory have moved closer together, with common tools based on the use of path integrals. The interpretation of Euclidean field theories as particular systems of statistical physics has opened up new avenues for understanding strongly coupled quantum systems or quantum field theories at zero or finite temperatures. Accordingly, the first chapters of this book contain a self-contained introduction to path integrals in Euclidean quantum mechanics and statistical mechanics. The resulting high-dimensional integrals can be estimated with the help of Monte Carlo simulations based on Markov processes. The most commonly used algorithms are presented in detail so as to prepare the reader for the use of high-performance computers as an "experimental" tool for this burgeoning field of theoretical physics. Several chapters are then devoted to an introduction to simple lattice field theories and a variety of spin systems with discrete and continuous spins, where the ubiquitous Ising model serves as an ideal guide for introducing the fascinating area of phase transitions. As an alternative to the lattice formulation of quantum field theories, variants of the flexible renormalization group methods are discussed in detail. Since, according to our present-day knowledge, all fundamental interactions in nature are described by gauge theories, the remaining chapters of the book deal with gauge theories without and with matter. This text is based on course-tested notes for graduate students and, as…

  15. Reacting to Neighborhood Cues?: Political Sophistication Moderates the Effect of Exposure to Immigrants

    DEFF Research Database (Denmark)

    Danckert, Bolette; Dinesen, Peter Thisted; Sønderskov, Kim Mannemar

    2017-01-01

    … is founded on politically sophisticated individuals having a greater comprehension of news and other mass-mediated sources, which makes them less likely to rely on neighborhood cues as sources of information relevant for political attitudes. Based on a unique panel data set with fine-grained information…

  15. Data Tools and Apps

    Science.gov (United States)

    Listing of census data tools and apps, including American FactFinder, Census Business Builder, and Business Dynamics Statistics, a tool showing tabulations on establishments, firms, and employment.

  16. Tools for Assessing Readability of Statistics Teaching Materials

    Science.gov (United States)

    Lesser, Lawrence; Wagler, Amy

    2016-01-01

    This article provides tools and rationale for instructors in math and science to make their assessment and curriculum materials (more) readable for students. The tools discussed (MSWord, LexTutor, Coh-Metrix TEA) are readily available linguistic analysis applications that are grounded in current linguistic theory, but present output that can…

  17. Financial Sophistication and the Distribution of the Welfare Cost of Inflation

    OpenAIRE

    Paola Boel; Gabriele Camera

    2009-01-01

    The welfare cost of anticipated inflation is quantified in a calibrated model of the U.S. economy that exhibits tractable equilibrium dispersion in wealth and earnings. Inflation does not generate large losses in societal welfare, yet its impact varies noticeably across segments of society depending also on the financial sophistication of the economy. If money is the only asset, then inflation hurts mostly the wealthier and more productive agents, while those poorer and less productive may ev...

  18. Statistical analysis of surface roughness in turning based on cutting parameters and tool vibrations with response surface methodology (RSM)

    Science.gov (United States)

    Touati, Soufiane; Mekhilef, Slimane

    2018-03-01

    In this paper, we present an experimental study to determine the effect of the cutting conditions and tool vibration on the surface roughness in finish turning of 32CrMoV12-28 steel, using a YT15 carbide cutting tool. For this purpose, a quadratic model with interactions is elaborated, connecting surface roughness (Ra, Rz) with different combinations of cutting parameters such as cutting speed, feed rate and depth of cut, and with tool vibration in the radial (Vy) and tangential (Vz) cutting force directions. In order to express the degree of interaction of the cutting parameters and tool vibration, multiple linear regression and response surface methodology are adopted. The application of this statistical technique for predicting the surface roughness shows that the feed rate is the most dominant factor, followed by the cutting speed, while the depth of cut and tool vibrations have a secondary effect. The presented models are of practical interest, since they can be used in cutting process optimization.
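
    A compact sketch of fitting such a quadratic response-surface model by least squares (synthetic data; only feed rate and cutting speed are included for brevity):

        # Quadratic response surface for Ra in feed rate f and cutting speed v
        import numpy as np

        rng = np.random.default_rng(3)
        f = rng.uniform(0.08, 0.40, 30)    # feed rate, mm/rev
        v = rng.uniform(100, 350, 30)      # cutting speed, m/min
        Ra = 0.5 + 8*f - 0.004*v + 0.01*f*v + 15*f**2 + rng.normal(0, 0.05, 30)
        X = np.column_stack([np.ones_like(f), f, v, f*v, f**2, v**2])
        beta, *_ = np.linalg.lstsq(X, Ra, rcond=None)
        print("coefficients [1, f, v, f*v, f^2, v^2]:", np.round(beta, 4))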

  19. How Our Cognition Shapes and Is Shaped by Technology: A Common Framework for Understanding Human Tool-Use Interactions in the Past, Present, and Future

    Directory of Open Access Journals (Sweden)

    François Osiurak

    2018-03-01

    Full Text Available Over the course of evolution, humans have constantly developed and improved their technologies. This evolution began with the use of physical tools, those tools that increase our sensorimotor abilities (e.g., the first stone tools, modern knives, hammers, pencils). Although we still use some of these tools, we also employ in daily life more sophisticated tools for which we do not systematically understand the underlying physical principles (e.g., computers, cars). Current research is also turned toward the development of brain–computer interfaces directly linking our brain activity to machines (i.e., symbiotic tools). The ultimate goal of research on this topic is to identify the key cognitive processes involved in these different modes of interaction. As a primary step to fulfill this goal, we offer a first attempt at a common framework, based on the idea that humans shape technologies, which also shape us in return. The framework proposed is organized into three levels, describing how we interact when using physical (Past), sophisticated (Present), and symbiotic (Future) technologies. Here we emphasize the role played by technical reasoning and practical reasoning, two key cognitive processes that could nevertheless be progressively suppressed by the proficient use of sophisticated and symbiotic tools. We hope that this framework will provide a common ground for researchers interested in the cognitive basis of human tool-use interactions, from paleoanthropology to neuroergonomics.

  1. Statistical Process Control: A Quality Tool for a Venous Thromboembolic Disease Registry.

    Science.gov (United States)

    Posadas-Martinez, Maria Lourdes; Rojas, Liliana Paloma; Vazquez, Fernando Javier; De Quiros, Fernan Bernaldo; Waisman, Gabriel Dario; Giunta, Diego Hernan

    2016-01-01

    We aim to describe statistical process control as a quality tool for the Institutional Registry of Venous Thromboembolic Disease (IRTD), a registry developed in a community-care tertiary hospital in Buenos Aires, Argentina. The IRTD is a prospective cohort. The process of data acquisition began with the creation of a computerized alert generated whenever physicians requested an imaging or laboratory study to diagnose venous thromboembolism, which defined eligible patients. The process then followed a structured methodology for patient inclusion, evaluation, and posterior data entry. To control this process, process performance indicators were designed to be measured monthly. These included the number of eligible patients, the number of included patients, the median time to patient evaluation, and the percentage of patients lost to evaluation. Control charts were graphed for each indicator. The registry was evaluated over 93 months, in which 25,757 patients were reported and 6,798 patients met inclusion criteria. The median time to evaluation was 20 hours (SD, 12) and 7.7% of the total was lost to evaluation. Each indicator presented trends over time, caused by structural changes and improvement cycles, and therefore the central line suffered inflections. Statistical process control through process performance indicators allowed us to monitor the performance of the registry over time and to detect systematic problems. We postulate that this approach could be reproduced for other clinical registries.

  2. Laser Pointers: Low-Cost, Low-Tech Innovative, Interactive Instruction Tool

    Science.gov (United States)

    Zdravkovska, Nevenka; Cech, Maureen; Beygo, Pinar; Kackley, Bob

    2010-01-01

    This paper discusses the use of laser pointers at the Engineering and Physical Sciences Library, University of Maryland, College Park, as a personal response system (PRS) tool to encourage student engagement in and interactivity with one-shot, lecture-based information literacy sessions. Unlike more sophisticated personal response systems like…

  3. Putin’s Russia: Russian Mentality and Sophisticated Imperialism in Military Policies

    OpenAIRE

    Szénási, Lieutenant-Colonel Endre

    2016-01-01

    According to my experience, the Western world hopelessly fails to understand Russian mentality, or misinterprets it. During my analysis of the Russian way of thinking I devoted special attention to the examination of military mentality. I have connected the issue of the Russian way of thinking to the contemporary imperial policies of Putin’s Russia. I have also attempted to prove the level of sophistication of both. I hope that a better understanding of both the Russian mentality and imperi...

  4. Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Koo, Michelle [Univ. of California, Berkeley, CA (United States); Cao, Yu [California Inst. of Technology (CalTech), Pasadena, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Wu, Kesheng [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-09-17

    Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance of complex workflows involving terabytes or petabytes of workflow data or measurement data of the executions, over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from a genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.

  5. Stochastic simulations for the time evolution of systems which obey generalized statistics: fractional exclusion statistics and Gentile's statistics

    International Nuclear Information System (INIS)

    Nemnes, G A; Anghel, D V

    2010-01-01

    We present a stochastic method for the simulation of the time evolution in systems which obey generalized statistics, namely fractional exclusion statistics and Gentile's statistics. The transition rates are derived in the framework of canonical ensembles. This approach introduces a tool for describing interacting fermionic and bosonic systems in non-equilibrium as ideal FES systems, in a computationally efficient manner. The two types of statistics are analyzed comparatively, indicating their intrinsic thermodynamic differences and revealing key aspects related to the species size

  6. Statistical methods for the forensic analysis of striated tool marks

    Energy Technology Data Exchange (ETDEWEB)

    Hoeksema, Amy Beth [Iowa State Univ., Ames, IA (United States)

    2013-01-01

    In forensics, fingerprints can be used to uniquely identify suspects in a crime. Similarly, a tool mark left at a crime scene can be used to identify the tool that was used. However, the current practice of identifying matching tool marks involves visual inspection of marks by forensic experts, which can be a very subjective process. As a result, declared matches are often successfully challenged in court, so law enforcement agencies are particularly interested in encouraging research in more objective approaches. Our analysis is based on comparisons of profilometry data, essentially depth contours of a tool mark surface taken along a linear path. In current practice, for stronger support of a match or non-match, multiple marks are made in the lab under the same conditions by the suspect tool. We propose the use of a likelihood ratio test to analyze the difference between a sample of comparisons of lab tool marks to a field tool mark, against a sample of comparisons of two lab tool marks. Chumbley et al. (2010) point out that the angle of incidence between the tool and the marked surface can have a substantial impact on the tool mark and on the effectiveness of both manual and algorithmic matching procedures. To better address this problem, we describe how the analysis can be enhanced to model the effect of tool angle and allow for angle estimation for a tool mark left at a crime scene. With sufficient development, such methods may lead to more defensible forensic analyses.
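
    A heavily simplified sketch of the likelihood-ratio idea (ours; the scores and the different-tool model are invented, and real casework uses the profilometry-based similarity measures described above):

        # Gaussian likelihood ratio for a questioned (lab-vs-field) score
        import numpy as np
        from scipy.stats import norm

        lab_lab = np.array([0.82, 0.79, 0.85, 0.81, 0.88])  # same-tool scores
        lab_field = 0.80                                    # questioned comparison
        mu_s, sd_s = lab_lab.mean(), lab_lab.std(ddof=1)    # same-tool model
        mu_d, sd_d = 0.35, 0.12                             # different-tool model (assumed)
        lr = norm.pdf(lab_field, mu_s, sd_s) / norm.pdf(lab_field, mu_d, sd_d)
        print("likelihood ratio:", round(lr, 1))            # > 1 favours same tool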

  7. Statistical analysis with Excel for dummies

    CERN Document Server

    Schmuller, Joseph

    2013-01-01

    Take the mystery out of statistical terms and put Excel to work! If you need to create and interpret statistics in business or classroom settings, this easy-to-use guide is just what you need. It shows you how to use Excel's powerful tools for statistical analysis, even if you've never taken a course in statistics. Learn the meaning of terms like mean and median, margin of error, standard deviation, and permutations, and discover how to interpret the statistics of everyday life. You'll learn to use Excel formulas, charts, PivotTables, and other tools to make sense of everything fro…

  8. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    Science.gov (United States)

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
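
    The record does not specify exactly how candidate cluster values are fed into the Gini computation (that follows Han et al.'s approach, which is not reproduced here), so the sketch below shows only the standard Gini coefficient itself, applied to invented values:

```python
import numpy as np

def gini(x):
    """Standard Gini coefficient of a non-negative sample (0 = equal)."""
    x = np.sort(np.asarray(x, dtype=float))   # ascending order
    n = x.size
    # Rank-weighted form, equivalent to the mean-absolute-difference definition.
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical log-likelihood ratios of the candidate clusters reported
# under one maximum-reported-cluster-size setting.
print(round(gini([12.4, 3.1, 2.2, 1.5, 0.9]), 3))   # ~0.49
```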

  9. Statistics for wildlifers: how much and what kind?

    Science.gov (United States)

    Johnson, D.H.; Shaffer, T.L.; Newton, W.E.

    2001-01-01

    Quantitative methods are playing increasingly important roles in wildlife ecology and, ultimately, management. This change poses a challenge for wildlife practitioners and students who are not well-educated in mathematics and statistics. Here we give our opinions on what wildlife biologists should know about statistics, while recognizing that not everyone is inclined mathematically. For those who are, we recommend that they take mathematics coursework at least through calculus and linear algebra. They should take statistics courses that are focused conceptually, stressing the Why rather than the How of doing statistics. For less mathematically oriented wildlifers, introductory classes in statistical techniques will furnish some useful background in basic methods but may provide little appreciation of when the methods are appropriate. These wildlifers will have to rely much more on advice from statisticians. Far more important than knowing how to analyze data is an understanding of how to obtain and recognize good data. Regardless of the statistical education they receive, all wildlife biologists should appreciate the importance of controls, replication, and randomization in studies they conduct. Understanding these concepts requires little mathematical sophistication, but is critical to advancing the science of wildlife ecology.

  10. Cancer Statistics Animator

    Science.gov (United States)

    This tool allows users to animate cancer trends over time by cancer site and cause of death, race, and sex. Provides access to incidence, mortality, and survival. Select the type of statistic, variables, format, and then extract the statistics in a delimited format for further analyses.

  11. The relation between maturity and sophistication shall be properly dealt with in nuclear power development

    International Nuclear Information System (INIS)

    Li Yongjiang

    2009-01-01

    The paper analyses the advantages and disadvantages, in terms of safety and economy, of the second generation improved technologies and the third generation technologies mainly developed in China. The paper also discusses the maturity of the second generation improved technologies and the sophistication of the third generation technologies, respectively. Meanwhile, the paper proposes that the advantages and disadvantages of both should be carefully weighed and that the relationship between maturity and sophistication should be properly dealt with at the current stage. A two-step strategy should be adopted to solve the problem of insufficient nuclear power capacity while tracking and developing the third generation technologies, so as to ensure the sound and fast development of nuclear power. (authors)

  12. Electronic Systems for Spacecraft Vehicles: Required EDA Tools

    Science.gov (United States)

    Bachnak, Rafic

    1999-01-01

    The continuous increase in complexity of electronic systems is making the design and manufacturing of such systems more challenging than ever before. As a result, designers are finding it impossible to design efficient systems without the use of sophisticated Electronic Design Automation (EDA) tools. These tools offer integrated simulation of the electrical, mechanical, and manufacturing functions and lead to a correct-by-design methodology. This report identifies the EDA tools that would be needed to design, analyze, simulate, and evaluate electronic systems for spacecraft vehicles. In addition, the report presents recommendations to enhance the current JSC electronic design capabilities. This includes cost information and a discussion as to the impact, both positive and negative, of implementing the recommendations.

  13. Recent advances in systems metabolic engineering tools and strategies.

    Science.gov (United States)

    Chae, Tong Un; Choi, So Young; Kim, Je Woong; Ko, Yoo-Sung; Lee, Sang Yup

    2017-10-01

    Metabolic engineering has been playing increasingly important roles in developing microbial cell factories for the production of various chemicals and materials towards a sustainable chemical industry. Many tools and strategies are now available for systems metabolic engineering, which enables metabolic engineering at the systems level in more sophisticated and diverse ways by adopting the rapidly advancing methodologies and tools of systems biology, synthetic biology and evolutionary engineering. As an outcome, the development of more efficient microbial cell factories has become possible. Here, we review recent advances in systems metabolic engineering tools and strategies together with accompanying application examples. In addition, we describe how these tools and strategies work together in simultaneous and synergistic ways to develop novel microbial cell factories. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Design research in statistics education : on symbolizing and computer tools

    NARCIS (Netherlands)

    Bakker, A.

    2004-01-01

    The present knowledge society requires statistical literacy: the ability to interpret, critically evaluate, and communicate about statistical information and messages (Gal, 2002). However, research shows that students generally do not gain satisfactory statistical understanding. The research ...

  15. Basic elements of computational statistics

    CERN Document Server

    Härdle, Wolfgang Karl; Okhrin, Yarema

    2017-01-01

    This textbook on computational statistics presents tools and concepts of univariate and multivariate statistical data analysis with a strong focus on applications and implementations in the statistical software R. It covers mathematical, statistical as well as programming problems in computational statistics and contains a wide variety of practical examples. In addition to the numerous R snippets presented in the text, all computer programs (quantlets) and data sets for the book are available on GitHub and referred to in the book. This enables the reader to fully reproduce as well as modify and adjust all examples to their needs. The book is intended for advanced undergraduate and first-year graduate students as well as for data analysts new to the job who would like a tour of the various statistical tools in a data analysis workshop. The experienced reader with a good knowledge of statistics and programming might skip some sections on univariate models and enjoy the various mathematical roots of multivariate ...

  16. Industrial statistics with Minitab

    CERN Document Server

    Cintas, Pere Grima; Llabres, Xavier Tort-Martorell

    2012-01-01

    Industrial Statistics with MINITAB demonstrates the use of MINITAB as a tool for performing statistical analysis in an industrial context. This book covers introductory industrial statistics, exploring the most commonly used techniques alongside those that serve to give an overview of more complex issues. A plethora of examples in MINITAB are featured along with case studies for each of the statistical techniques presented. Industrial Statistics with MINITAB: Provides comprehensive coverage of user-friendly practical guidance to the essential statistical methods applied in industry. Explores ...

  17. OECD eXplorer: Making Regional Statistics Come Alive through a Geo-Visual Web-Tool

    Directory of Open Access Journals (Sweden)

    Monica Brezzi

    2011-06-01

    Full Text Available Recent advances in web-enabled graphics technologies have the potential to make a dramatic impact on developing highly interactive Geovisual Analytics applications for the Internet. An emerging and challenging application domain is geovisualization of regional (sub-national) statistics. Higher integration driven by institutional processes and economic globalisation is eroding national borders and creating competition along regional lines in the world market. Sound information at the sub-national level and benchmarking of regions across borders have gained importance in the policy agenda of many countries. In this paper, we introduce “OECD eXplorer” — an interactive tool for analyzing and communicating gained insights and discoveries about spatial-temporal and multivariate OECD regional data. This database is a potential treasure chest for policy-makers, researchers and citizens to gain a better understanding of a region’s structure and performance and to carry out analysis of territorial trends and disparities based on sound information comparable across countries. Many approaches and tools have been developed in spatial-related knowledge discovery, but generally they do not scale well with dynamic visualization of larger spatial data on the Internet. In this context, we introduce a web-compliant Geovisual Analytics toolkit that supports a broad collection of functional components for analysis, hypothesis generation and validation. The same tool enables the communication of results on the basis of a snapshot mechanism that captures, re-uses and shares task-related explorative findings. Further developments underway are in the creation of a generic, highly interactive web “eXplorer” platform that can be the foundation for easy customization of similar web applications using different geographical boundaries and indicators. Given this global dimension, a “generic eXplorer” will be a powerful tool to explore different territorial dimensions.

  18. StAR: a simple tool for the statistical comparison of ROC curves

    Directory of Open Access Journals (Sweden)

    Melo Francisco

    2008-06-01

    Full Text Available Abstract Background As in many different areas of science and technology, most important problems in bioinformatics rely on the proper development and assessment of binary classifiers. A generalized assessment of the performance of binary classifiers is typically carried out through the analysis of their receiver operating characteristic (ROC) curves. The area under the ROC curve (AUC) constitutes a popular indicator of the performance of a binary classifier. However, the assessment of the statistical significance of the difference between any two classifiers based on this measure is not a straightforward task, since not many freely available tools exist. Most existing software is either not free, difficult to use or not easy to automate when a comparative assessment of the performance of many binary classifiers is intended. This constitutes the typical scenario for the optimization of parameters when developing new classifiers and also for their performance validation through comparison to previous art. Results In this work we describe and release new software to assess the statistical significance of the observed difference between the AUCs of any two classifiers for a common task estimated from paired data or unpaired balanced data. The software is able to perform a pairwise comparison of many classifiers in a single run, without requiring any expert or advanced knowledge to use it. The software relies on a non-parametric test for the difference of the AUCs that accounts for the correlation of the ROC curves. The results are displayed graphically and can be easily customized by the user. A human-readable report is generated and the complete data resulting from the analysis are also available for download, which can be used for further analysis with other software. The software is released as a web server that can be used in any client platform and also as a standalone application for the Linux operating system. Conclusion A new software for ...
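
    StAR's own non-parametric test (which accounts for the correlation of the ROC curves) is not reproduced below; as a rough stand-in for paired data, the sketch uses a paired bootstrap over cases with scikit-learn's AUC, on synthetic scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
y = rng.integers(0, 2, n)              # invented labels
s1 = y + rng.normal(0, 1.0, n)         # classifier 1 scores
s2 = y + rng.normal(0, 1.3, n)         # classifier 2 scores (noisier)

obs = roc_auc_score(y, s1) - roc_auc_score(y, s2)
diffs = []
for _ in range(2000):                  # paired bootstrap over cases
    idx = rng.integers(0, n, n)
    if len(np.unique(y[idx])) < 2:
        continue                       # resample must contain both classes
    diffs.append(roc_auc_score(y[idx], s1[idx]) - roc_auc_score(y[idx], s2[idx]))
diffs = np.asarray(diffs)
p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())  # two-sided bootstrap p
print(f"dAUC = {obs:.3f}, bootstrap p ~ {p:.3f}")
```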

  19. Statistical data analysis handbook

    National Research Council Canada - National Science Library

    Wall, Francis J

    1986-01-01

    It must be emphasized that this is not a text book on statistics. Instead it is a working tool that presents data analysis in clear, concise terms which can be readily understood even by those without formal training in statistics...

  20. Close to the Clothes : Materiality and Sophisticated Archaism in Alexander van Slobbe’s Design Practices

    NARCIS (Netherlands)

    Baronian, M.-A.

    This article looks at the work of contemporary Dutch fashion designer Alexander van Slobbe (1959) and examines how, since the 1990s, his fashion practices have consistently and consciously put forward a unique reflection on questions related to materiality, sophisticated archaism, luxury, ...

  2. Multi-disciplinary communication networks for skin risk assessment in nursing homes with high IT sophistication.

    Science.gov (United States)

    Alexander, Gregory L; Pasupathy, Kalyan S; Steege, Linsey M; Strecker, E Bradley; Carley, Kathleen M

    2014-08-01

    The role of nursing home (NH) information technology (IT) in quality improvement has not been clearly established, and its impacts on communication between caregivers and on patient outcomes in these settings deserve further attention. In this research, we describe a mixed-method approach to explore communication strategies used by healthcare providers for resident skin risk in NHs with high IT sophistication (ITS). The sample included NHs participating in the statewide survey of ITS. We incorporated rigorous observation of 8- and 12-h shifts, and focus groups to identify how NH IT and a range of synchronous and asynchronous tools are used. Social network analysis tools and qualitative analysis were used to analyze the data and identify relationships between ITS dimensions and communication interactions between care providers. Two of the nine ITS dimensions (resident care-technological and administrative activities-technological) and total ITS were significantly negatively correlated with the number of unique interactions: as more processes in resident care and administrative activities are supported by technology, the lower the number of observed unique interactions. Additionally, four thematic areas emerged from staff focus groups that demonstrate how important IT is to resident care in these facilities, including providing resident-centered care, teamwork and collaboration, maintaining safety and quality, and using standardized information resources. Our findings confirm prior research that as technology support (resident care and administrative activities) and overall ITS increase, observed interactions between staff members decrease. Conversations during staff interviews focused on how technology facilitated resident-centered care through enhanced information sharing, greater virtual collaboration between team members, and improved care delivery. These results provide evidence for improving the design and implementation of IT in long term care systems to support ...

  3. Philosophy and the practice of Bayesian statistics.

    Science.gov (United States)

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2013-02-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. © 2012 The British Psychological Society.

  4. Insightful problem solving and creative tool modification by captive nontool-using rooks.

    Science.gov (United States)

    Bird, Christopher D; Emery, Nathan J

    2009-06-23

    The ability to use tools has been suggested to indicate advanced physical cognition in animals. Here we show that rooks, members of the corvid family that do not appear to use tools in the wild, are capable of insightful problem solving related to sophisticated tool use, including spontaneously modifying and using a variety of tools, shaping hooks out of wire, and using a series of tools in a sequence to gain a reward. It is remarkable that a species that does not use tools in the wild appears to possess an understanding of tools rivaling that of habitual tool users such as New Caledonian crows and chimpanzees. Our findings suggest that the ability to represent tools may be a domain-general cognitive capacity rather than an adaptive specialization, and they call into question the relationship between physical intelligence and wild tool use.

  5. Degrees of separation as a statistical tool for evaluating candidate genes.

    Science.gov (United States)

    Nelson, Ronald M; Pettersson, Mats E

    2014-12-01

    Selection of candidate genes is an important step in the exploration of complex genetic architecture. The number of gene networks available is increasing, and these can provide information to help with candidate gene selection. It is currently common to use the degree of connectedness in gene networks as validation in Genome Wide Association (GWA) and Quantitative Trait Locus (QTL) mapping studies. However, this can produce misleading results if not validated properly. Here we present a method and tool for validating gene pairs from GWA studies given the context of the network they co-occur in, ensuring that proposed interactions and gene associations are not statistical artefacts inherent to the specific gene network architecture. The CandidateBacon package provides an easy and efficient method to calculate the average degree of separation (DoS) between pairs of genes in currently available gene networks. We show how these empirical estimates of average connectedness are used to validate candidate gene pairs. Validating interacting genes by comparing their connectedness with the average connectedness in the gene network will provide support for said interactions by utilising the growing amount of gene network information available. Copyright © 2014 Elsevier Ltd. All rights reserved.
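
    A toy version of the DoS idea can be written with networkx instead of the authors' R package CandidateBacon: compute the average shortest-path distance of the candidate pairs and compare it with the distribution for random pairs from the same network. The network and the candidate pairs below are invented, and the sketch assumes the graph is connected:

```python
import random
import networkx as nx

# Invented stand-in for a gene network (dense enough to be connected).
G = nx.erdos_renyi_graph(200, 0.05, seed=1)
candidate_pairs = [(1, 17), (42, 130), (5, 88)]   # pairs flagged by a GWA study

def avg_dos(pairs):
    """Average degree of separation (shortest-path length) over pairs."""
    return sum(nx.shortest_path_length(G, a, b) for a, b in pairs) / len(pairs)

observed = avg_dos(candidate_pairs)

# Null distribution: average DoS of random pairs from the same network.
nodes = list(G.nodes)
null = [avg_dos([tuple(random.sample(nodes, 2))
                 for _ in candidate_pairs]) for _ in range(1000)]
p = sum(d <= observed for d in null) / len(null)
print(f"observed DoS = {observed:.2f}, empirical p = {p:.3f}")
```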

  6. Integrated statistical learning of metabolic ion mobility spectrometry profiles for pulmonary disease identification

    DEFF Research Database (Denmark)

    Hauschild, A.C.; Baumbach, Jan; Baumbach, J.

    2012-01-01

    sophisticated statistical learning techniques for VOC-based feature selection and supervised classification into patient groups. We analyzed breath data from 84 volunteers, each of them either suffering from chronic obstructive pulmonary disease (COPD), or both COPD and bronchial carcinoma (COPD + BC), as well ... as from 35 healthy volunteers, comprising a control group (CG). We standardized and integrated several statistical learning methods to provide a broad overview of their potential for distinguishing the patient groups. We found that there is strong potential for separating MCC/IMS chromatograms of healthy ... patients from healthy controls. We conclude that these statistical learning methods have a generally high accuracy when applied to well-structured, medical MCC/IMS data....

  7. Comment on the asymptotics of a distribution-free goodness of fit test statistic.

    Science.gov (United States)

    Browne, Michael W; Shapiro, Alexander

    2015-03-01

    In a recent article Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed that a proof by Browne (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) of the asymptotic distribution of a goodness of fit test statistic is incomplete because it fails to prove that the orthogonal component function employed is continuous. Jennrich and Satorra (Psychometrika 78: 545-552, 2013) showed how Browne's proof can be completed satisfactorily but this required the development of an extensive and mathematically sophisticated framework for continuous orthogonal component functions. This short note provides a simple proof of the asymptotic distribution of Browne's (British Journal of Mathematical and Statistical Psychology 37: 62-83, 1984) test statistic by using an equivalent form of the statistic that does not involve orthogonal component functions and consequently avoids all complicating issues associated with them.

  8. A Snapshot of Serial Rape: An Investigation of Criminal Sophistication and Use of Force on Victim Injury and Severity of the Assault.

    Science.gov (United States)

    de Heer, Brooke

    2016-02-01

    Prior research on rapes reported to law enforcement has identified criminal sophistication and the use of force against the victim as possible unique identifiers of serial rape versus one-time rape. This study sought to contribute to the current literature on reported serial rape by investigating how the level of criminal sophistication of the rapist and the use of force were associated with two important outcomes of rape: victim injury and overall severity of the assault. In addition, it was evaluated whether rapist and victim ethnicity affected these relationships. A nation-wide sample of serial rape cases reported to law enforcement collected by the Federal Bureau of Investigation (FBI) was analyzed (108 rapists, 543 victims). Results indicated that serial rapists typically used a limited amount of force against the victim and displayed a high degree of criminal sophistication. In addition, the more criminally sophisticated the perpetrator was, the more sexual acts he performed on his victim. Finally, rapes between a White rapist and White victim were found to exhibit higher levels of criminal sophistication and were more severe in terms of the number and types of sexual acts committed. These findings provide a more in-depth understanding of serial rape that can inform both academics and practitioners in the field about contributors to victim injury and severity of the assault. © The Author(s) 2014.

  9. Recurrence time statistics: versatile tools for genomic DNA sequence analysis.

    Science.gov (United States)

    Cao, Yinhe; Tung, Wen-Wen; Gao, J B

    2004-01-01

    With the completion of the human and a few model organisms' genomes, and the genomes of many other organisms waiting to be sequenced, it has become increasingly important to develop faster computational tools which are capable of easily identifying structures and extracting features from DNA sequences. One of the more important classes of structure in a DNA sequence is repeat-related. Often these have to be masked before protein coding regions along a DNA sequence are identified or redundant expressed sequence tags (ESTs) are sequenced. Here we report a novel recurrence time based method for sequence analysis. The method can conveniently study all kinds of periodicity and exhaustively find all repeat-related features in a genomic DNA sequence. An efficient codon index is also derived from the recurrence time statistics, which has the salient features of being largely species-independent and working well on very short sequences. Efficient codon indices are key elements of successful gene finding algorithms, and are particularly useful for determining whether a suspected EST belongs to a coding or non-coding region. We illustrate the power of the method by studying the genomes of E. coli, the yeast S. cerevisiae, the nematode worm C. elegans, and the human, Homo sapiens. Computationally, our method is very efficient. It allows us to carry out analysis of genomes on the whole genomic scale by a PC.
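
    The full method (and the codon index) is beyond a snippet, but the basic object, recurrence times of a word along a sequence, is easy to compute; the sequence below is a toy string, not a genome:

```python
def recurrence_times(seq, word):
    """Gaps between successive occurrences of `word` (overlaps allowed)."""
    hits = [i for i in range(len(seq) - len(word) + 1)
            if seq[i:i + len(word)] == word]
    return [b - a for a, b in zip(hits, hits[1:])]

seq = "ATGCGATATATGCGCGATATATGCG"       # toy sequence, not a real genome
print(recurrence_times(seq, "ATG"))     # [9, 11]; strong peaks in these gap
                                        # statistics reveal periodic repeats
```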

  10. The challenges of transportation/traffic statistics in Japan and directions for the future

    Directory of Open Access Journals (Sweden)

    Shigeru Kawasaki

    2015-07-01

    Full Text Available In order to respond to new challenges in transportation and traffic problems, it is essential to enhance the statistics in this field that provide the basis for policy research. Many of the statistics in this field in Japan are “official statistics” created by the government. This paper gives a review of the current status of transportation and traffic statistics (hereinafter called “transportation statistics” in short) in Japan. Furthermore, the paper discusses challenges for such statistics in the new environment and the direction that they should take in the future. For Japan’s transportation statistics to play vital roles in more sophisticated analyses, it is necessary to improve the environment that facilitates the use of microdata for analysis. It is also necessary to establish an environment where big data can be more easily used for the compilation of official statistics and for policy research. To achieve this end, close cooperation among the government, academia, and businesses will be essential.

  11. Data and Statistics: Heart Failure

    Science.gov (United States)

  12. Statistical Viewer: a tool to upload and integrate linkage and association data as plots displayed within the Ensembl genome browser

    Directory of Open Access Journals (Sweden)

    Hauser Elizabeth R

    2005-04-01

    Full Text Available Abstract Background To facilitate efficient selection and the prioritization of candidate complex disease susceptibility genes for association analysis, increasingly comprehensive annotation tools are essential to integrate, visualize and analyze vast quantities of disparate data generated by genomic screens, public human genome sequence annotation and ancillary biological databases. We have developed a plug-in package for Ensembl called "Statistical Viewer" that facilitates the analysis of genomic features and annotation in the regions of interest defined by linkage analysis. Results Statistical Viewer is an add-on package to the open-source Ensembl Genome Browser and Annotation System that displays disease study-specific linkage and/or association data as two-dimensional plots in new panels in the context of Ensembl's Contig View and Cyto View pages. An enhanced upload server facilitates the upload of statistical data, as well as additional feature annotation to be displayed in DAS tracks, in the form of Excel files. The Statistical View panel, drawn directly under the ideogram, plots lod score values for markers from a study of interest against their position in base pairs. A module called "Get Map" easily converts the genetic locations of markers to genomic coordinates. The graph, placed under the corresponding ideogram, features a synchronized vertical sliding selection box, seamlessly integrated into Ensembl's Contig- and Cyto-View pages, for choosing the region to be displayed in Ensembl's "Overview" and "Detailed View" panels. To resolve association and fine-mapping data plots, a "Detailed Statistic View" plot corresponding to the "Detailed View" may be displayed underneath. Conclusion Features mapping to regions of linkage are accentuated when Statistical Viewer is used in conjunction with the Distributed Annotation System (DAS) to display supplemental laboratory information such as differentially expressed disease ...

  13. Statistical Approaches Used to Assess the Equity of Access to Food Outlets: A Systematic Review.

    Science.gov (United States)

    Lamb, Karen E; Thornton, Lukar E; Cerin, Ester; Ball, Kylie

    2015-01-01

    Inequalities in eating behaviours are often linked to the types of food retailers accessible in neighbourhood environments. Numerous studies have aimed to identify if access to healthy and unhealthy food retailers is socioeconomically patterned across neighbourhoods, and thus a potential risk factor for dietary inequalities. Existing reviews have examined differences between methodologies, particularly focussing on neighbourhood and food outlet access measure definitions. However, no review has informatively discussed the suitability of the statistical methodologies employed; a key issue determining the validity of study findings. Our aim was to examine the suitability of statistical approaches adopted in these analyses. Searches were conducted for articles published from 2000-2014. Eligible studies included objective measures of the neighbourhood food environment and neighbourhood-level socio-economic status, with a statistical analysis of the association between food outlet access and socio-economic status. Fifty-four papers were included. Outlet accessibility was typically defined as the distance to the nearest outlet from the neighbourhood centroid, or as the number of food outlets within a neighbourhood (or buffer). To assess if these measures were linked to neighbourhood disadvantage, common statistical methods included ANOVA, correlation, and Poisson or negative binomial regression. Although all studies involved spatial data, few considered spatial analysis techniques or spatial autocorrelation. With advances in GIS software, sophisticated measures of neighbourhood outlet accessibility can be considered. However, approaches to statistical analysis often appear less sophisticated. Care should be taken to consider assumptions underlying the analysis and the possibility of spatially correlated residuals which could affect the results.

  14. Statistical Approaches Used to Assess the Equity of Access to Food Outlets: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Karen E. Lamb

    2015-07-01

    Full Text Available Background Inequalities in eating behaviours are often linked to the types of food retailers accessible in neighbourhood environments. Numerous studies have aimed to identify if access to healthy and unhealthy food retailers is socioeconomically patterned across neighbourhoods, and thus a potential risk factor for dietary inequalities. Existing reviews have examined differences between methodologies, particularly focussing on neighbourhood and food outlet access measure definitions. However, no review has informatively discussed the suitability of the statistical methodologies employed; a key issue determining the validity of study findings. Our aim was to examine the suitability of statistical approaches adopted in these analyses. Methods Searches were conducted for articles published from 2000-2014. Eligible studies included objective measures of the neighbourhood food environment and neighbourhood-level socio-economic status, with a statistical analysis of the association between food outlet access and socio-economic status. Results Fifty-four papers were included. Outlet accessibility was typically defined as the distance to the nearest outlet from the neighbourhood centroid, or as the number of food outlets within a neighbourhood (or buffer). To assess if these measures were linked to neighbourhood disadvantage, common statistical methods included ANOVA, correlation, and Poisson or negative binomial regression. Although all studies involved spatial data, few considered spatial analysis techniques or spatial autocorrelation. Conclusions With advances in GIS software, sophisticated measures of neighbourhood outlet accessibility can be considered. However, approaches to statistical analysis often appear less sophisticated. Care should be taken to consider assumptions underlying the analysis and the possibility of spatially correlated residuals which could affect the results.

  15. Reducing statistics anxiety and enhancing statistics learning achievement: effectiveness of a one-minute strategy.

    Science.gov (United States)

    Chiou, Chei-Chang; Wang, Yu-Min; Lee, Li-Tze

    2014-08-01

    Statistical knowledge is widely used in academia; however, statistics teachers struggle with the issue of how to reduce students' statistics anxiety and enhance students' statistics learning. This study assesses the effectiveness of a "one-minute paper strategy" in reducing students' statistics-related anxiety and in improving students' statistics-related achievement. Participants were 77 undergraduates from two classes enrolled in applied statistics courses. An experiment was implemented according to a pretest/posttest comparison group design. The quasi-experimental design showed that the one-minute paper strategy significantly reduced students' statistics anxiety and improved students' statistics learning achievement. The strategy was a better instructional tool than the textbook exercise for reducing students' statistics anxiety and improving students' statistics achievement.

  16. Exploring the predictive power of interaction terms in a sophisticated risk equalization model using regression trees.

    Science.gov (United States)

    van Veen, S H C M; van Kleef, R C; van de Ven, W P M M; van Vliet, R C J A

    2018-02-01

    This study explores the predictive power of interaction terms between the risk adjusters in the Dutch risk equalization (RE) model of 2014. Due to the sophistication of this RE-model and the complexity of the associations in the dataset (N = ~16.7 million), there are theoretically more than a million interaction terms. We used regression tree modelling, which has been applied rarely within the field of RE, to identify interaction terms that statistically significantly explain variation in observed expenses that is not already explained by the risk adjusters in this RE-model. The interaction terms identified were used as additional risk adjusters in the RE-model. We found evidence that interaction terms can improve the prediction of expenses overall and for specific groups in the population. However, the prediction of expenses for some other selective groups may deteriorate. Thus, interactions can reduce financial incentives for risk selection for some groups but may increase them for others. Furthermore, because regression trees are not robust, additional criteria are needed to decide which interaction terms should be used in practice. These criteria could be the right incentive structure for risk selection and efficiency or the opinion of medical experts. Copyright © 2017 John Wiley & Sons, Ltd.
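
    The Dutch RE model and claims data are not available here, so the sketch below only illustrates the general recipe on synthetic data: fit the additive model, then let a shallow regression tree search its residuals, where nested splits on two risk adjusters suggest an interaction term worth testing (all variable names and effect sizes are invented):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 20_000
X = rng.integers(0, 2, size=(n, 3)).astype(float)  # three binary risk adjusters
# Synthetic expenses with a hidden interaction between adjusters x0 and x1.
y = 100 + 50*X[:, 0] + 30*X[:, 1] + 80*X[:, 0]*X[:, 1] + rng.normal(0, 20, n)

resid = y - LinearRegression().fit(X, y).predict(X)  # additive model residuals
tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=500).fit(X, resid)
# A split on x0 followed by a split on x1 flags x0*x1 as a candidate term.
print(export_text(tree, feature_names=["x0", "x1", "x2"]))
```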

  17. WE-DE-201-02: A Statistical Analysis Tool for Plan Quality Verification in HDR Brachytherapy Forward Planning for Cervix Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Ma, R; Zhu, X; Li, S; Zheng, D; Lei, Y; Wang, S; Verma, V; Bennion, N; Wahl, A; Zhou, S [University of Nebraska Medical Center, Omaha, NE (United States)

    2016-06-15

    Purpose: High Dose Rate (HDR) brachytherapy forward planning is principally an iterative process; hence, plan quality is affected by planners’ experience and limited planning time, which may lead to sporadic errors and inconsistencies in planning. A statistical tool based on previously approved clinical treatment plans would help to maintain the consistency of planning quality and improve the efficiency of second checking. Methods: An independent dose calculation tool was developed from commercial software. Thirty-three previously approved cervical HDR plans with the same prescription dose (550cGy), applicator type, and treatment protocol were examined, and ICRU-defined reference point doses (bladder, vaginal mucosa, rectum, and points A/B) along with dwell times were collected. The dose calculation tool then calculated an appropriate range with a 95% confidence interval for each parameter obtained, which would be used as the benchmark for evaluating those parameters in future HDR treatment plans. Model quality was verified using five randomly selected approved plans from the same dataset. Results: Dose variations appear to be larger at the bladder and mucosa reference points than at the rectum. Most reference point doses from the verification plans fell within the predicted range, except the doses at two rectum points and two reference position A points (owing to rectal anatomical variations and clinical adjustment of prescription points, respectively). Similar results were obtained for tandem and ring dwell times despite relatively larger uncertainties. Conclusion: This statistical tool provides insight into the clinically acceptable range of cervical HDR plans, which could be useful in plan checking and in identifying potential planning errors, thus improving the consistency of plan quality.
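
    The benchmark logic described, deriving an acceptable range for each plan parameter from approved plans and flagging new plans that fall outside it, reduces to a few lines; the dose values below are invented and a normal model is assumed purely for illustration:

```python
import numpy as np

# Hypothetical bladder reference-point doses (cGy) from approved plans.
approved = np.array([310, 295, 330, 305, 322, 298, 315, 340, 308, 319])

mean, sd = approved.mean(), approved.std(ddof=1)
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd   # approximate 95% normal range

new_plan_dose = 402.0                          # value from the plan under review
outside = not (lo <= new_plan_dose <= hi)
print(f"range = [{lo:.0f}, {hi:.0f}] cGy, flagged: {outside}")
```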

  18. Introduction to statistical data analysis for the life sciences

    CERN Document Server

    Ekstrom, Claus Thorn

    2014-01-01

    This text provides a computational toolbox that enables students to analyze real datasets and gain the confidence and skills to undertake more sophisticated analyses. Although accessible with any statistical software, the text encourages a reliance on R. For those new to R, an introduction to the software is available in an appendix. The book also includes end-of-chapter exercises as well as an entire chapter of case exercises that help students apply their knowledge to larger datasets and learn more about approaches specific to the life sciences.

  19. Multivariate statistical tools for the radiometric features of volcanic islands

    International Nuclear Information System (INIS)

    Basile, S.; Brai, M.; Marrale, M.; Micciche, S.; Lanzo, G.; Rizzo, S.

    2009-01-01

    The Aeolian Islands represent a Quaternary volcanic arc related to the subduction of the Ionian plate beneath the Calabrian Arc. The geochemical variability of the islands has produced a broad spectrum of magmatic rocks. Volcanic products from calc-alkaline (CA) to calc-alkaline high in potassium (HKCA) are present throughout the Archipelago, but products belonging to the shoshonitic (SHO) and potassic (KS) series characterize the southern portion of Lipari, Vulcano and Stromboli. Tectonics also plays an important role in the differentiation of the islands. In this work, we review and cross-analyze the data on Lipari, Stromboli and Vulcano collected in measurement and sampling campaigns over the last years. Chemical data were obtained by X-ray fluorescence. High resolution gamma-ray spectrometry with germanium detectors was used to measure primordial radionuclide activities. The activity of primordial radionuclides in the volcanic products of these three islands is strongly dependent on their chemism. The highest contents are found in the more differentiated products (rhyolites). The CA products have lower concentrations, while the HKCA and shoshonitic product concentrations lie in between. Calculated dose rates have been correlated with the petrochemical features in order to gain further insight into the evolution and differentiation of volcanic products. Ratio matching and multivariate statistical analyses, such as Principal Component Analysis and the Minimum Spanning Tree, have been applied as additional tools to better describe the lithological affinities of the samples. (Author)
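
    Of the multivariate tools named, Principal Component Analysis is the most compact to demonstrate. The sketch below uses scikit-learn on an invented sample matrix; the variables are plausible oxide contents and radionuclide activities, but the numbers are not the paper's data:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Invented sample matrix: rows = rock samples; columns = SiO2 (wt%), K2O (wt%),
# and activities of 40K, 232Th, 238U (Bq/kg). Units differ, hence the scaling.
X = np.array([[74.1, 4.9, 1250, 55, 60],
              [58.3, 1.2,  420, 12, 14],
              [61.0, 2.8,  700, 25, 28],
              [73.5, 5.1, 1300, 58, 63],
              [55.9, 0.9,  380, 10, 11]])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores)  # the two high-silica, high-activity samples separate along PC1
```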

  20. The sophisticated control of the tram bogie on track

    Directory of Open Access Journals (Sweden)

    Radovan DOLECEK

    2015-09-01

    Full Text Available The paper deals with the routing control algorithms of a new conception of tram vehicle bogie. The main goals of these research activities are to reduce the wear of rail wheels and tracks, to reduce traction energy losses, and to increase running comfort. An experimental tram vehicle with a special bogie construction, powered by a traction battery, is used for these purposes. This vehicle has a rotary bogie with independently rotating wheels driven by permanent magnet synchronous motors and a solid axle. The wheel forces in the bogie are measured by a large number of sensors placed on the experimental tram vehicle. The designed control algorithms are currently implemented in the vehicle superset control system. The traction requirements and track characteristics affect these control algorithms. This control, including sophisticated routing, brings further improvements, which are verified and corrected according to individual traction and driving characteristics, and opens new possibilities.

  1. Sequence History Update Tool

    Science.gov (United States)

    Khanampompan, Teerapat; Gladden, Roy; Fisher, Forest; DelGuercio, Chris

    2008-01-01

    The Sequence History Update Tool performs Web-based sequence statistics archiving for Mars Reconnaissance Orbiter (MRO). Using a single UNIX command, the software takes advantage of sequencing conventions to automatically extract the needed statistics from multiple files. This information is then used to populate a PHP database, which is seamlessly formatted into a dynamic Web page. This tool replaces a previously tedious and error-prone process of manually editing HTML code to construct a Web-based table. Because the tool manages all of the statistics gathering and file delivery to and from multiple data sources spread across multiple servers, there is also a considerable saving of time and effort. With the Sequence History Update Tool, what previously took minutes is now done in less than 30 seconds, and a more accurate archival record of the sequence commanding for MRO is provided.

  2. Pedagogical Utilization and Assessment of the Statistic Online Computational Resource in Introductory Probability and Statistics Courses.

    Science.gov (United States)

    Dinov, Ivo D; Sanchez, Juana; Christou, Nicolas

    2008-01-01

    Technology-based instruction represents a recent pedagogical paradigm that is rooted in the realization that new generations are much more comfortable with, and excited about, new technologies. The rapid technological advancement over the past decade has fueled an enormous demand for the integration of modern networking, informational and computational tools with classical pedagogical instruments. Consequently, teaching with technology typically involves utilizing a variety of IT and multimedia resources for online learning, course management, electronic course materials, and novel tools for communication, engagement, experimentation, critical thinking and assessment. The NSF-funded Statistics Online Computational Resource (SOCR) provides a number of interactive tools for enhancing instruction in various undergraduate and graduate courses in probability and statistics. These resources include online instructional materials, statistical calculators, interactive graphical user interfaces, computational and simulation applets, and tools for data analysis and visualization. The tools provided as part of SOCR include conceptual simulations and statistical computing interfaces, which are designed to bridge between the introductory and the more advanced computational and applied probability and statistics courses. In this manuscript, we describe our designs for utilizing SOCR technology in instruction in a recent study. In addition, we present the results of the effectiveness of using SOCR tools at two different course intensity levels on three outcome measures: exam scores, student satisfaction and choice of technology to complete assignments. Learning styles assessment was completed at baseline. We used three very different designs for three different undergraduate classes. Each course included a treatment group, using the SOCR resources, and a control group, using classical instruction techniques. Our findings include marginal effects of the SOCR treatment per individual ...

  3. Reports on Cancer - Cancer Statistics

    Science.gov (United States)

    Interactive tools for access to statistics for a cancer site by gender, race, ethnicity, calendar year, age, state, county, stage, and histology. Statistics include incidence, mortality, prevalence, cost, risk factors, behaviors, tobacco use, and policies and are presented as graphs, tables, or maps.

  4. Statistics and probability with applications for engineers and scientists

    CERN Document Server

    Gupta, Bhisham C

    2013-01-01

    Introducing the tools of statistics and probability from the ground up An understanding of statistical tools is essential for engineers and scientists who often need to deal with data analysis over the course of their work. Statistics and Probability with Applications for Engineers and Scientists walks readers through a wide range of popular statistical techniques, explaining step-by-step how to generate, analyze, and interpret data for diverse applications in engineering and the natural sciences. Unique among books of this kind, Statistics and Probability ...

  5. Integration of modern statistical tools for the analysis of climate extremes into the web-GIS “CLIMATE”

    Science.gov (United States)

    Ryazanova, A. A.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The frequency of occurrence and magnitude of extreme precipitation and temperature events show positive trends in several geographical regions. These events must be analyzed and studied in order to better understand their impact on the environment, predict their occurrence, and mitigate their effects. For this purpose, we augmented the web-GIS “CLIMATE” with a dedicated statistical package developed in the R language. The web-GIS “CLIMATE” is a software platform for cloud storage, processing and visualization of distributed archives of spatial datasets. It is based on the combined use of web and GIS technologies with reliable procedures for searching, extracting, processing, and visualizing spatial data archives. The system provides a set of thematic online tools for the complex analysis of current and future climate changes and their effects on the environment. The package includes new powerful methods of time-dependent statistics of extremes, quantile regression and the copula approach for the detailed analysis of various climate extreme events. In particular, the very promising copula approach allows one to obtain the structural connections between the extremes and various environmental characteristics. The new statistical methods integrated into the web-GIS “CLIMATE” can significantly facilitate and accelerate the complex analysis of climate extremes using only a desktop PC connected to the Internet.
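
    Of the methods named, quantile regression is the simplest to show in a few lines. The web-GIS itself wraps an R package, per the record; the sketch below instead uses Python's statsmodels on synthetic temperature data to estimate a trend in the upper tail rather than in the mean:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
year = np.arange(1950, 2021)
# Synthetic summer-maximum temperatures whose upper tail widens over time.
tmax = 30 + 0.02 * (year - 1950) + rng.gumbel(0, 1.5 + 0.01 * (year - 1950))
df = pd.DataFrame({"year": year, "tmax": tmax})

# Trend in the 95th percentile of the extremes, not just in the mean.
fit = smf.quantreg("tmax ~ year", df).fit(q=0.95)
print(fit.params)   # slope of the 0.95 quantile with respect to time
```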

  6. QUALITY IMPROVEMENT USING STATISTICAL PROCESS CONTROL TOOLS IN GLASS BOTTLES MANUFACTURING COMPANY

    Directory of Open Access Journals (Sweden)

    Yonatan Mengesha Awaj

    2013-03-01

    Full Text Available In order to survive in a competitive market, improving the quality and productivity of products and processes is a must for any company. This study applies statistical process control (SPC) tools in the production processing line and on the final product in order to reduce defects by identifying where the highest waste occurs and to give suggestions for improvement. The approach used in this study comprised direct observation, thorough examination of the production process lines, brainstorming sessions, fishbone diagrams, information collected from potential customers and the company's workers through interviews and questionnaires, Pareto charts/analysis, and the construction of a control chart (p-chart). It was found that the company has many problems; specifically, there is high rejection or waste in the production processing line. The highest waste occurs in the melting process line, which causes loss due to trickle, and in the forming process line, which causes loss due to defective product rejection. The vital few problems were identified: blisters, double seam, stone, pressure failure and overweight. The principal aim of the study is to make the quality team aware of how to use SPC tools in problem analysis, especially to train the quality team on how to hold an effective brainstorming session, and to exploit these data in cause-and-effect diagram construction, Pareto analysis and control chart construction. The major causes of non-conformities and the root causes of the quality problems were specified, and possible remedies were proposed. Although the company has many constraints to implementing all suggestions for improvement within a short period of time, the company recognized that the suggestions will provide significant productivity improvement in the long run.
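
    Of the SPC tools applied, the p-chart has the simplest arithmetic: with average defective fraction p̄ and lot size n, the control limits are p̄ ± 3√(p̄(1−p̄)/n). A sketch on invented inspection counts:

```python
import numpy as np

n = 500                                          # bottles inspected per lot
defectives = np.array([18, 22, 15, 30, 19, 25, 41, 17, 21, 23])  # invented
p = defectives / n

p_bar = p.mean()
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl, lcl = p_bar + 3 * sigma, max(p_bar - 3 * sigma, 0.0)

for lot, frac in enumerate(p, 1):                # flag out-of-control lots
    if not lcl <= frac <= ucl:
        print(f"lot {lot}: p = {frac:.3f} outside [{lcl:.3f}, {ucl:.3f}]")
```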

  7. The GenABEL Project for statistical genomics.

    Science.gov (United States)

    Karssen, Lennart C; van Duijn, Cornelia M; Aulchenko, Yurii S

    2016-01-01

    Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from the formulation of methodological ideas to the application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including the use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the "core team", facilitating agile statistical omics methodology development and fast dissemination.

  8. The Concise Encyclopedia of Statistics

    CERN Document Server

    Dodge, Yadolah

    2008-01-01

    The Concise Encyclopedia of Statistics presents the essential information about statistical tests, concepts, and analytical methods in language that is accessible to practitioners and students of the vast community using statistics in medicine, engineering, physical science, life science, social science, and business/economics. The reference is alphabetically arranged to provide quick access to the fundamental tools of statistical methodology and biographies of famous statisticians. The more than 500 entries include definitions, history, mathematical details, limitations, examples, references, ...

  9. Sophisticated approval voting, ignorance priors, and plurality heuristics: a behavioral social choice analysis in a Thurstonian framework.

    Science.gov (United States)

    Regenwetter, Michel; Ho, Moon-Ho R; Tsetlin, Ilia

    2007-10-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two types of plurality heuristics to model approval voting behavior. When using a sincere plurality heuristic, voters simplify their decision process by voting for their single favorite candidate. When using a strategic plurality heuristic, voters strategically focus their attention on the 2 front-runners and vote for their preferred candidate among these 2. Using a hierarchy of Thurstonian random utility models, the authors implemented these different decision rules and tested them statistically on 7 real world approval voting elections. They cross-validated their key findings via a psychological Internet experiment. Although a substantial number of voters used the plurality heuristic in the real elections, they did so sincerely, not strategically. Moreover, even though Thurstonian models do not force such agreement, the results show, in contrast to common wisdom about social choice rules, that the sincere social orders by Condorcet, Borda, plurality, and approval voting are identical in all 7 elections and in the Internet experiment. PsycINFO Database Record (c) 2007 APA, all rights reserved.

  10. Temporal aspects of surface water quality variation using robust statistical tools.

    Science.gov (United States)

    Mustapha, Adamu; Aris, Ahmad Zaharin; Ramli, Mohammad Firuz; Juahir, Hafizan

    2012-01-01

    Robust statistical tools were applied to water quality datasets with the aim of determining the most significant parameters and their contribution to temporal water quality variation. Surface water samples were collected from four different sampling points during the dry and wet seasons and analyzed for their physicochemical constituents. Discriminant analysis (DA) provided better results with great discriminatory ability, using five parameters (P < 0.05) for the dry season and affording more than 96% correct assignation, and five and six parameters for forward and backward stepwise modes on the wet season data (P < 0.05), affording 68.20% and 82% correct assignation, respectively. Partial correlation results revealed strong (r(p) = 0.829) and moderate (r(p) = 0.614) relationships between five-day biochemical oxygen demand (BOD(5)) and chemical oxygen demand (COD), and between total solids (TS) and dissolved solids (DS), controlling for the linear effect of nitrogen in the form of ammonia (NH(3)) and conductivity for the dry and wet seasons, respectively. Multiple linear regression identified the contribution of each variable, with significant values r = 0.988, R(2) = 0.976 and r = 0.970, R(2) = 0.942 (P < 0.05) for the dry and wet seasons, respectively. A repeated measures t-test confirmed that the surface water quality varies significantly between the seasons (P < 0.05).
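
    The partial correlations reported (e.g., BOD5 vs. COD controlling for NH3 and conductivity) can be reproduced mechanically: regress both variables on the controls and correlate the residuals. The sketch below does this on synthetic data, not the study's measurements:

```python
import numpy as np

def partial_corr(x, y, controls):
    """Correlation of x and y after regressing out the control columns."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(0)
nh3 = rng.normal(1.0, 0.3, 200)              # invented water-chemistry series
cond = rng.normal(250, 40, 200)
bod5 = 2.0 * nh3 + rng.normal(0, 0.5, 200)
cod = 3.0 * nh3 + 0.6 * bod5 + rng.normal(0, 0.5, 200)

print(partial_corr(bod5, cod, np.column_stack([nh3, cond])))
```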

  11. A Framework for Assessing High School Students' Statistical Reasoning.

    Science.gov (United States)

    Chan, Shiau Wei; Ismail, Zaleha; Sumintono, Bambang

    2016-01-01

    Based on a synthesis of literature, earlier studies, analyses and observations on high school students, this study developed an initial framework for assessing students' statistical reasoning about descriptive statistics. Framework descriptors were established across five levels of statistical reasoning and four key constructs. The former consisted of idiosyncratic reasoning, verbal reasoning, transitional reasoning, procedural reasoning, and integrated process reasoning. The latter include describing data, organizing and reducing data, representing data, and analyzing and interpreting data. In contrast to earlier studies, this initial framework formulated a complete and coherent statistical reasoning framework. A statistical reasoning assessment tool was then constructed from this initial framework. The tool was administered to 10 tenth-grade students in a task-based interview. The initial framework was refined, and the statistical reasoning assessment tool was revised. The ten students then participated in the second task-based interview, and the data obtained were used to validate the framework. The findings showed that the students' statistical reasoning levels were consistent across the four constructs, and this result confirmed the framework's cohesion. Developed to contribute to statistics education, this newly developed statistical reasoning framework provides a guide for planning learning goals and designing instruction and assessments.

  12. Do we need statistics when we have linguistics?

    Directory of Open Access Journals (Sweden)

    Cantos Gómez Pascual

    2002-01-01

    Full Text Available Statistics is known to be a quantitative approach to research. However, most of the research done in the fields of language and linguistics is of a different kind, namely qualitative. Succinctly, qualitative analysis differs from quantitative analysis in that in the former no attempt is made to assign frequencies, percentages and the like to the linguistic features found or identified in the data. In quantitative research, linguistic features are classified and counted, and even more complex statistical models are constructed in order to explain these observed facts. In qualitative research, however, we use the data only for identifying and describing features of language usage and for providing real occurrences/examples of particular phenomena. In this paper, we shall try to show how quantitative methods and statistical techniques can supplement qualitative analyses of language. We shall attempt to present some mathematical and statistical properties of natural languages, and introduce some of the quantitative methods which are of the most value in working empirically with texts and corpora, illustrating the various issues with numerous examples and moving from the most basic descriptive techniques (frequency counts and percentages) to decision-taking techniques (chi-square and z-score) and to more sophisticated statistical language models (Type-Token/Lemma-Token/Lemma-Type formulae, cluster analysis and discriminant function analysis).
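
    A minimal sketch of two of the techniques named above, the type-token ratio and a chi-square comparison of word frequencies across two corpora, is given below; the toy corpora are invented and scipy is assumed available.

      from collections import Counter
      from scipy.stats import chi2_contingency

      # Two toy "corpora"; a real analysis would use proper tokenisation.
      corpus_a = "the cat sat on the mat and the dog sat too".split()
      corpus_b = "a dog and a cat met a dog near a mat".split()

      def type_token_ratio(tokens):
          # Basic lexical-diversity measure: distinct word forms / total words.
          return len(set(tokens)) / len(tokens)

      print("TTR A:", round(type_token_ratio(corpus_a), 3))
      print("TTR B:", round(type_token_ratio(corpus_b), 3))

      # Chi-square test: is 'the' used with a different relative frequency in
      # the two corpora?  A 2x2 table of (hits, misses) per corpus; note the
      # counts here are far too small for the approximation to be trusted.
      fa, fb = Counter(corpus_a), Counter(corpus_b)
      table = [[fa["the"], len(corpus_a) - fa["the"]],
               [fb["the"], len(corpus_b) - fb["the"]]]
      chi2, p, dof, _ = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, p = {p:.3f}")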

  13. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique - multi-level Gaussian process regression - on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in
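
    The multi-level Gaussian process idea sketched above (a cheap, biased auxiliary simulator corrected by a discrepancy model trained on scarce fine-level data) can be illustrated in a few lines. This is a simplified stand-in, not the authors' code: the two "solvers" are analytic toy functions, scikit-learn is assumed available, and the scaling factor between fidelity levels is fixed at one.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      # 1D stand-ins for a field u(x): a cheap coarse solver (biased) and a few
      # expensive "fine" samples, e.g. around a failed processor's subdomain.
      def u_fine(x):   return np.sin(8 * x) + x
      def u_coarse(x): return 0.8 * np.sin(8 * x) + x + 0.1   # limited-accuracy model

      x_c = np.linspace(0, 1, 25)[:, None]            # plentiful coarse data
      x_f = np.array([[0.1], [0.4], [0.6], [0.9]])    # scarce fine data

      # Level 1: GP fit to the coarse simulator.
      gp_c = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6)
      gp_c.fit(x_c, u_coarse(x_c.ravel()))

      # Level 2: GP fit to the discrepancy fine - coarse (scaling factor = 1).
      delta = u_fine(x_f.ravel()) - gp_c.predict(x_f)
      gp_d = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(x_f, delta)

      # Multi-level prediction fills in missing fine-level values.
      x_new = np.linspace(0, 1, 5)[:, None]
      pred = gp_c.predict(x_new) + gp_d.predict(x_new)
      print(np.c_[x_new.ravel(), pred, u_fine(x_new.ravel())])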

  14. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    Directory of Open Access Journals (Sweden)

    Thomas Akam

    2015-12-01

    Full Text Available The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
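
    A deliberately minimal simulation in the spirit of the analyses above: a purely model-free TD(0) agent on a skeleton of the two-step task. Its stay probabilities show a main effect of reward but no reward-by-transition interaction, the signature usually taken to separate model-free from model-based strategies. All parameters below are invented, and reward probabilities are held fixed rather than drifting as in the real task.

      import numpy as np

      rng = np.random.default_rng(1)

      # Two-step skeleton: first-step action 0/1 reaches second-step state 0/1
      # through a common (p=0.8) or rare (p=0.2) transition; each second-step
      # state pays reward with its own probability (fixed here for simplicity).
      p_common = 0.8
      reward_p = np.array([0.7, 0.3])
      q = np.zeros(2)          # model-free values of the first-step actions
      alpha = 0.2              # learning rate

      stay_after = {("common", 1): 0, ("common", 0): 0,
                    ("rare", 1): 0, ("rare", 0): 0}
      trials = {k: 0 for k in stay_after}

      prev = None
      for _ in range(5000):
          # Softmax (logistic) choice between the two first-step actions.
          a = int(rng.random() < 1 / (1 + np.exp(5 * (q[0] - q[1]))))
          common = rng.random() < p_common
          s = a if common else 1 - a          # common transition keeps the mapping
          r = int(rng.random() < reward_p[s])
          q[a] += alpha * (r - q[a])          # TD(0) update ignores the transition model

          if prev is not None:
              key = ("common" if prev["common"] else "rare", prev["r"])
              trials[key] += 1
              stay_after[key] += int(a == prev["a"])
          prev = {"a": a, "common": common, "r": r}

      # A purely model-free agent repeats rewarded actions regardless of
      # whether the last transition was common or rare.
      for k in sorted(stay_after):
          print(k, round(stay_after[k] / max(trials[k], 1), 3))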

  15. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    Science.gov (United States)

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  16. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    Science.gov (United States)

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.

  17. The Planetary Data System (PDS) Data Dictionary Tool (LDDTool)

    Science.gov (United States)

    Raugh, Anne C.; Hughes, John S.

    2017-10-01

    One of the major design goals of the PDS4 development effort was to provide an avenue for discipline specialists and large data preparers such as mission archivists to extend the core PDS4 Information Model (IM) to include metadata definitions specific to their own contexts. This capability is critical for the Planetary Data System - an archive that deals with a data collection that is diverse along virtually every conceivable axis. Amid such diversity, it is in the best interests of the PDS archive and its users that all extensions to the core IM follow the same design techniques, conventions, and restrictions as the core implementation itself. Notwithstanding, expecting every mission or discipline archivist seeking to define metadata for a new context to acquire expertise in information modeling, model-driven design, ontology, schema formulation, and PDS4 design conventions and philosophy is unrealistic, to say the least. To bridge that expertise gap, the PDS Engineering Node has developed the data dictionary creation tool known as “LDDTool”. This tool incorporates the same software used to maintain and extend the core IM, packaged with an interface that enables a developer to create his contextual information model using the same open, standards-based metadata framework PDS itself uses. Through this interface, the novice dictionary developer has immediate access to the common set of data types and unit classes for defining attributes, and a straightforward method for constructing classes. The more experienced developer, using the same tool, has access to more sophisticated modeling methods like abstraction and extension, and can define very sophisticated validation rules. We present the key features of the PDS Local Data Dictionary Tool, which both supports the development of extensions to the PDS4 IM and ensures their compatibility with the IM.

  18. Statistics? You Must Be Joking: The Application and Evaluation of Humor when Teaching Statistics

    Science.gov (United States)

    Neumann, David L.; Hood, Michelle; Neumann, Michelle M.

    2009-01-01

    Humor has been promoted as a teaching tool that enhances student engagement and learning. The present report traces the pathway from research to practice by reflecting upon various ways to incorporate humor into the face-to-face teaching of statistics. The use of humor in an introductory university statistics course was evaluated via interviews…

  19. Low Level RF Including a Sophisticated Phase Control System for CTF3

    CERN Document Server

    Mourier, J; Nonglaton, J M; Syratchev, I V; Tanner, L

    2004-01-01

    CTF3 (CLIC Test Facility 3), currently under construction at CERN, is a test facility designed to demonstrate the key feasibility issues of the CLIC (Compact LInear Collider) two-beam scheme. When completed, this facility will consist of a 150 MeV linac followed by two rings for bunch-interleaving, and a test stand where 30 GHz power will be generated. In this paper, the work that has been carried out on the linac's low power RF system is described. This includes, in particular, a sophisticated phase control system for the RF pulse compressor to produce a flat-top rectangular pulse over 1.4 µs.

  20. The Math Problem: Advertising Students' Attitudes toward Statistics

    Science.gov (United States)

    Fullerton, Jami A.; Kendrick, Alice

    2013-01-01

    This study used the Students' Attitudes toward Statistics Scale (STATS) to measure attitude toward statistics among a national sample of advertising students. A factor analysis revealed that four underlying factors make up the attitude-toward-statistics construct: "Interest & Future Applicability," "Confidence," "Statistical Tools," and "Initiative."…

  1. Methods of statistical physics

    CERN Document Server

    Akhiezer, Aleksandr I

    1981-01-01

    Methods of Statistical Physics is an exposition of the tools of statistical mechanics, which evaluates the kinetic equations of classical and quantized systems. The book also analyzes the equations of macroscopic physics, such as the equations of hydrodynamics for normal and superfluid liquids and macroscopic electrodynamics. The text gives particular attention to the study of quantum systems. This study begins with a discussion of problems of quantum statistics with a detailed description of the basics of quantum mechanics along with the theory of measurement. An analysis of the asymptotic be

  2. Terminology tools: state of the art and practical lessons.

    Science.gov (United States)

    Cimino, J J

    2001-01-01

    As controlled medical terminologies evolve from simple code-name-hierarchy arrangements into rich, knowledge-based ontologies of medical concepts, increased demands are placed on both the developers and users of the terminologies. In response, researchers have begun developing tools to address their needs. The aims of this article are to review previous work done to develop these tools and then to describe work done at Columbia University and New York Presbyterian Hospital (NYPH). Researchers working with the Systematized Nomenclature of Medicine (SNOMED), the Unified Medical Language System (UMLS), and NYPH's Medical Entities Dictionary (MED) have created a wide variety of terminology browsers, editors and servers to facilitate creation, maintenance and use of these terminologies. Although much work has been done, no generally available tools have yet emerged. Consensus on requirements for tool functions, especially terminology servers, is emerging. Tools at NYPH have been used successfully to support the integration of clinical applications and the merger of health care institutions. Significant advancement has occurred over the past fifteen years in the development of sophisticated controlled terminologies and the tools to support them. The tool set at NYPH provides a case study to demonstrate one feasible architecture.

  3. A statistical manual for chemists

    CERN Document Server

    Bauer, Edward

    1971-01-01

    A Statistical Manual for Chemists, Second Edition presents simple and fast statistical tools for data analysis of working chemists. This edition is organized into nine chapters and begins with an overview of the fundamental principles of the statistical techniques used in experimental data analysis. The subsequent chapters deal with the concept of statistical average, experimental design, and analysis of variance. The discussion then shifts to control charts, with particular emphasis on variable charts that are more useful to chemists and chemical engineers. A chapter focuses on the effect

  4. Sophisticated Approval Voting, Ignorance Priors, and Plurality Heuristics: A Behavioral Social Choice Analysis in a Thurstonian Framework

    Science.gov (United States)

    Regenwetter, Michel; Ho, Moon-Ho R.; Tsetlin, Ilia

    2007-01-01

    This project reconciles historically distinct paradigms at the interface between individual and social choice theory, as well as between rational and behavioral decision theory. The authors combine a utility-maximizing prescriptive rule for sophisticated approval voting with the ignorance prior heuristic from behavioral decision research and two…

  5. Bulk tank somatic cell counts analyzed by statistical process control tools to identify and monitor subclinical mastitis incidence.

    Science.gov (United States)

    Lukas, J M; Hawkins, D M; Kinsel, M L; Reneau, J K

    2005-11-01

    The objective of this study was to examine the relationship between monthly Dairy Herd Improvement (DHI) subclinical mastitis and new infection rate estimates and daily bulk tank somatic cell count (SCC) summarized by statistical process control tools. Dairy Herd Improvement Association test-day subclinical mastitis and new infection rate estimates, along with daily or every-other-day bulk tank SCC data, were collected for 12 mo of 2003 from 275 Upper Midwest dairy herds. Herds were divided into 5 herd production categories. A linear score [LNS = ln(BTSCC/100,000)/0.693147 + 3] was calculated for each individual bulk tank SCC. For both the raw SCC and the transformed data, the mean and sigma were calculated using the statistical quality control individual measurement and moving range chart procedure of Statistical Analysis System. One hundred eighty-three of the 275 herds in the study data set were then randomly selected, and the raw (method 1) and transformed (method 2) bulk tank SCC mean and sigma were used to develop models for predicting subclinical mastitis and new infection rate estimates. Herd production category was also included in all models as 5 dummy variables. Models were validated by calculating estimates of subclinical mastitis and new infection rates for the remaining 92 herds and plotting them against observed values of each of the dependent variables. Only herd production category and bulk tank SCC mean were significant and remained in the final models. High R² values (0.83 and 0.81 for methods 1 and 2, respectively) indicated a strong correlation between the bulk tank SCC and a herd's subclinical mastitis prevalence. The standard errors of the estimate were 4.02 and 4.28% for methods 1 and 2, respectively, and decreased with increasing herd production. As a case study, Shewhart Individual Measurement Charts were plotted from the bulk tank SCC to identify shifts in mastitis incidence. Four of 5 charts examined signaled a change in bulk tank SCC before
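
    The linear score and the individual-measurement/moving-range chart limits used in the study can be reproduced in a few lines; the sketch below uses invented daily SCC values and the standard d2 = 1.128 constant for moving ranges of two observations.

      import numpy as np

      # Daily bulk tank SCC (cells/mL); synthetic values for illustration.
      scc = np.array([180, 210, 195, 240, 220, 205, 400, 230, 215, 200]) * 1000.0

      # Linear score transform used in the study:
      lns = np.log(scc / 100_000) / 0.693147 + 3   # ln(x)/ln(2) + 3, a base-2 log score

      # Shewhart individual-measurement / moving-range chart limits.
      mr = np.abs(np.diff(lns))                    # moving ranges of successive scores
      sigma = mr.mean() / 1.128                    # d2 constant for subgroups of n = 2
      center = lns.mean()
      ucl, lcl = center + 3 * sigma, center - 3 * sigma
      print(f"center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
      print("out-of-control days:", np.where((lns > ucl) | (lns < lcl))[0])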

  6. Six sigma for organizational excellence a statistical approach

    CERN Document Server

    Muralidharan, K

    2015-01-01

    This book discusses the integrated concepts of statistical quality engineering and management tools. It will help readers to understand and apply the concepts of quality through project management and technical analysis, using statistical methods. Prepared in a ready-to-use form, the text will equip practitioners to implement the Six Sigma principles in projects. The concepts discussed are all critically assessed and explained, allowing them to be practically applied in managerial decision-making, and in each chapter, the objectives and connections to the rest of the work are clearly illustrated. To aid in understanding, the book includes a wealth of tables, graphs, descriptions and checklists, as well as charts and plots, worked-out examples and exercises. Perhaps the most distinctive feature of the book is its approach, using statistical tools, to explain the science behind Six Sigma project management, integrated with engineering concepts. The material on quality engineering and statistical management tools of...

  7. Data and Statistics: Women and Heart Disease

    Science.gov (United States)

    Resource page for data and statistics on women and heart disease, linking to heart disease and stroke fact sheets, state summaries, other data resources, and online tools.

  8. Functional statistics and related fields

    CERN Document Server

    Bongiorno, Enea; Cao, Ricardo; Vieu, Philippe

    2017-01-01

    This volume collects the latest methodological and applied contributions on functional, high-dimensional and other complex data, related statistical models and tools, as well as operator-based statistics. It contains selected and refereed contributions presented at the Fourth International Workshop on Functional and Operatorial Statistics (IWFOS 2017) held in A Coruña, Spain, from 15 to 17 June 2017. The series of IWFOS workshops was initiated by the Working Group on Functional and Operatorial Statistics at the University of Toulouse in 2008. Since then, many of the major advances in functional statistics and related fields have been periodically presented and discussed at the IWFOS workshops.

  9. AP statistics

    CERN Document Server

    Levine-Wissing, Robin

    2012-01-01

    All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep

  10. Medicaid Drug Claims Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Drug Claims Statistics CD is a useful tool that conveniently breaks Medicaid claim counts out by quarter and also includes an annual count.

  11. Use of Statistical Heuristics in Everyday Inductive Reasoning.

    Science.gov (United States)

    Nisbett, Richard E.; And Others

    1983-01-01

    In everyday reasoning, people use statistical heuristics (judgmental tools that are rough intuitive equivalents of statistical principles). Use of statistical heuristics is more likely when (1) sampling is clear, (2) the role of chance is clear, (3) statistical reasoning is normative for the event, or (4) the subject has had training in…

  12. Statistical methods in quality assurance

    International Nuclear Information System (INIS)

    Eckhard, W.

    1980-01-01

    During the different phases of a production process - planning, development and design, manufacturing, assembling, etc. - most decisions rest on a basis of statistics: the collection, analysis and interpretation of data. Statistical methods can be thought of as a kit of tools that help solve problems in the quality functions of the quality loop, with the aims of producing quality products and reducing quality costs. Various statistical methods are presented, and typical examples of their practical application are demonstrated. (RW)

  13. Basic statistics for social research

    CERN Document Server

    Hanneman, Robert A; Riddle, Mark D

    2012-01-01

    A core statistics text that emphasizes logical inquiry, not math. Basic Statistics for Social Research teaches core general statistical concepts and methods that all social science majors must master to understand (and do) social research. Its use of mathematics and theory is deliberately limited, as the authors focus on the use of concepts and tools of statistics in the analysis of social science data, rather than on the mathematical and computational aspects. Research questions and applications are taken from a wide variety of subfields in sociology, and each chapter is organized around

  14. High Accuracy Nonlinear Control and Estimation for Machine Tool Systems

    DEFF Research Database (Denmark)

    Papageorgiou, Dimitrios

    Component mass production has been the backbone of industry since the second industrial revolution, and machine tools are producing parts of widely varying size and design complexity. The ever-increasing level of automation in modern manufacturing processes necessitates the use of more sophisticated machine tool systems that are adaptable to different workspace conditions, while at the same time being able to maintain very narrow workpiece tolerances. The main topic of this thesis is to suggest control methods that can maintain required manufacturing tolerances, despite moderate wear and tear. The purpose is to ensure that full accuracy is maintained between service intervals and to advise when overhaul is needed. The thesis argues that the quality of manufactured components is directly related to the positioning accuracy of the machine tool axes, and it shows which low-level control architectures

  15. Purification through Emotions: The Role of Shame in Plato's "Sophist" 230B4-E5

    Science.gov (United States)

    Candiotto, Laura

    2018-01-01

    This article proposes an analysis of Plato's "Sophist" (230b4--e5) that underlines the bond between the logical and the emotional components of the Socratic "elenchus", with the aim of depicting the social valence of this philosophical practice. The use of emotions characterizing the 'elenctic' method described by Plato is…

  16. Statistics for Finance

    DEFF Research Database (Denmark)

    Lindström, Erik; Madsen, Henrik; Nielsen, Jan Nygaard

    Statistics for Finance develops students’ professional skills in statistics with applications in finance. Developed from the authors’ courses at the Technical University of Denmark and Lund University, the text bridges the gap between classical, rigorous treatments of financial mathematics that rarely connect concepts to data and books on econometrics and time series analysis that do not cover specific problems related to option valuation. The book discusses applications of financial derivatives pertaining to risk assessment and elimination. The authors cover various statistical and mathematical techniques, including linear and nonlinear time series analysis, stochastic calculus models, stochastic differential equations, Itō’s formula, the Black–Scholes model, the generalized method-of-moments, and the Kalman filter. They explain how these tools are used to price financial derivatives

  17. Scipion web tools: Easy to use cryo-EM image processing over the web.

    Science.gov (United States)

    Conesa Mingo, Pablo; Gutierrez, José; Quintana, Adrián; de la Rosa Trevín, José Miguel; Zaldívar-Peraza, Airén; Cuenca Alba, Jesús; Kazemi, Mohsen; Vargas, Javier; Del Cano, Laura; Segura, Joan; Sorzano, Carlos Oscar S; Carazo, Jose María

    2018-01-01

    Macromolecular structural determination by Electron Microscopy under cryogenic conditions is revolutionizing the field of structural biology, attracting the interest of a large community of potential users. Still, the path from raw images to density maps is complex, and sophisticated image processing suites are required in this process, often demanding the installation and understanding of different software packages. Here, we present Scipion Web Tools, a web-based set of tools/workflows derived from the Scipion image processing framework, specially tailored to nonexpert users in need of very precise answers at several key stages of the structural elucidation process. © 2017 The Protein Society.

  18. Statistical tools applied for the reduction of the defect rate of coffee degassing valves

    Directory of Open Access Journals (Sweden)

    Giorgio Olmi

    2015-04-01

    Full Text Available Coffee is a very common beverage exported all over the world: just after roasting, coffee beans are packed in plastic or paper bags, which then experience long transfers with long storage times. Fresh roasted coffee emits large amounts of CO2 for several weeks. This gas must be gradually released, to prevent package over-inflation and to preserve aroma; moreover, beans must be protected from oxygen coming from outside. Therefore, one-way degassing valves are applied to each package: their correct functionality is strictly related to the interference coupling between their bodies and covers and to the correct assembly of the other involved parts. This work takes inspiration from an industrial problem: a company that assembles valve components, supplied by different manufacturers, observed a high defect rate affecting its valve production. An integrated approach, consisting of the adoption of quality charts, an experimental campaign for the dimensional analysis of the mating parts, and statistical processing of the data, was necessary to tackle the question. In particular, a simple statistical tool was made available to predict the defect rate and to identify the best strategy for its reduction. The outcome was that requiring a strict protocol regarding the combinations of parts from different manufacturers for assembly would have been almost ineffective. Conversely, this study led to the identification of the weak point in the manufacturing process of the mating components and to the suggestion of a slight improvement to be performed, with the final result of a significant (one order of magnitude) decrease of the defect rate.
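
    The defect-rate prediction described above can be imitated with a simple Monte Carlo tolerance analysis of the interference coupling. The dimensions, tolerances, and acceptance band below are all hypothetical; the point is only the mechanics of propagating dimensional scatter to a predicted defect rate.

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical dimensions (mm) of the valve body bore and the cover plug;
      # the coupling must stay inside an interference band to seal correctly.
      n = 1_000_000
      bore = rng.normal(10.00, 0.015, n)    # body inner diameter
      plug = rng.normal(10.05, 0.020, n)    # cover outer diameter
      interference = plug - bore

      # Assembly is assumed defective if interference leaves [0.01, 0.09] mm.
      defective = (interference < 0.01) | (interference > 0.09)
      print(f"predicted defect rate: {defective.mean():.4%}")

      # Tightening the plug tolerance (the hypothetical weak point) by one third:
      plug2 = rng.normal(10.05, 0.020 / 1.5, n)
      defective2 = ((plug2 - bore) < 0.01) | ((plug2 - bore) > 0.09)
      print(f"after process improvement: {defective2.mean():.4%}")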

  19. Basics of modern mathematical statistics

    CERN Document Server

    Spokoiny, Vladimir

    2015-01-01

    This textbook provides a unified and self-contained presentation of the main approaches to and ideas of mathematical statistics. It collects the basic mathematical ideas and tools needed as a basis for more serious studies or even independent research in statistics. The majority of existing textbooks in mathematical statistics follow the classical asymptotic framework. Yet, as modern statistics has changed rapidly in recent years, new methods and approaches have appeared. The emphasis is on finite sample behavior, large parameter dimensions, and model misspecifications. The present book provides a fully self-contained introduction to the world of modern mathematical statistics, collecting the basic knowledge, concepts and findings needed for doing further research in the modern theoretical and applied statistics. This textbook is primarily intended for graduate and postdoc students and young researchers who are interested in modern statistical methods.

  20. [''R"--project for statistical computing

    DEFF Research Database (Denmark)

    Dessau, R.B.; Pipper, Christian Bressen

    2008-01-01

    An introduction to the R project for statistical computing (www.R-project.org) is presented. The main topics are: 1. To make the professional community aware of "R" as a potent and free software for graphical and statistical analysis of medical data; 2. Simple well-known statistical tests are fairly easy to perform in R, but more complex modelling requires programming skills; 3. R is seen as a tool for teaching statistics and implementing complex modelling of medical data among medical professionals. Publication date: 2008/1/28

  1. Kinetic Analysis of Dynamic Positron Emission Tomography Data using Open-Source Image Processing and Statistical Inference Tools.

    Science.gov (United States)

    Hawe, David; Hernández Fernández, Francisco R; O'Suilleabháin, Liam; Huang, Jian; Wolsztynski, Eric; O'Sullivan, Finbarr

    2012-05-01

    In dynamic mode, positron emission tomography (PET) can be used to track the evolution of injected radio-labelled molecules in living tissue. This is a powerful diagnostic imaging technique that provides a unique opportunity to probe the status of healthy and pathological tissue by examining how it processes substrates. The spatial aspect of PET is well established in the computational statistics literature. This article focuses on its temporal aspect. The interpretation of PET time-course data is complicated because the measured signal is a combination of vascular delivery and tissue retention effects. If the arterial time-course is known, the tissue time-course can typically be expressed in terms of a linear convolution between the arterial time-course and the tissue residue. In statistical terms, the residue function is essentially a survival function - a familiar life-time data construct. Kinetic analysis of PET data is concerned with estimation of the residue and associated functionals such as flow, flux, volume of distribution and transit time summaries. This review emphasises a nonparametric approach to the estimation of the residue based on a piecewise linear form. Rapid implementation of this by quadratic programming is described. The approach provides a reference for statistical assessment of widely used one- and two-compartmental model forms. We illustrate the method with data from two of the most well-established PET radiotracers, ¹⁵O-H₂O and ¹⁸F-fluorodeoxyglucose, used for assessment of blood perfusion and glucose metabolism respectively. The presentation illustrates the use of two open-source tools, AMIDE and R, for PET scan manipulation and model inference.
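
    The convolution model described above leads naturally to a constrained least-squares problem. The sketch below is a simplified stand-in for the article's piecewise-linear quadratic-programming fit: it discretises the convolution as a lower-triangular matrix and imposes nonnegativity only, using synthetic curves and scipy's NNLS solver.

      import numpy as np
      from scipy.linalg import toeplitz
      from scipy.optimize import nnls

      # Time grid (minutes) and a synthetic arterial input curve.
      dt = 0.25
      t = np.arange(0.0, 10.0, dt)
      ca = t * np.exp(-t)                        # stand-in arterial time-course

      # True residue: flow (K1 = 0.5) times an exponential survival function.
      true_res = 0.5 * np.exp(-0.3 * t)
      tac = dt * np.convolve(ca, true_res)[: len(t)]   # tissue curve = Ca * R
      tac += np.random.default_rng(0).normal(0.0, 1e-3, len(t))

      # The discretised convolution is a lower-triangular Toeplitz system;
      # requiring a nonnegative residue turns deconvolution into NNLS.
      A = dt * toeplitz(ca, np.zeros_like(ca))
      res_hat, _ = nnls(A, tac)
      print("estimated R(0) (~flow):", round(res_hat[0], 3), "true:", 0.5)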

  2. Statistical Models and Methods for Lifetime Data

    CERN Document Server

    Lawless, Jerald F

    2011-01-01

    Praise for the First Edition"An indispensable addition to any serious collection on lifetime data analysis and . . . a valuable contribution to the statistical literature. Highly recommended . . ."-Choice"This is an important book, which will appeal to statisticians working on survival analysis problems."-Biometrics"A thorough, unified treatment of statistical models and methods used in the analysis of lifetime data . . . this is a highly competent and agreeable statistical textbook."-Statistics in MedicineThe statistical analysis of lifetime or response time data is a key tool in engineering,

  3. Implementation of an adaptive training and tracking game in statistics teaching

    NARCIS (Netherlands)

    Groeneveld, C.M.; Kalz, M.; Ras, E.

    2014-01-01

    Statistics teaching in higher education has a number of challenges. An adaptive training, tracking and teaching tool in a gaming environment aims to address problems inherent in statistics teaching. This paper discusses the implementation of this tool in a large first year university programme and

  4. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    Energy Technology Data Exchange (ETDEWEB)

    Gould, Nathan [Department of Computer Science, The College of New Jersey, Ewing, NJ (United States); Hendy, Oliver [Department of Biology, The College of New Jersey, Ewing, NJ (United States); Papamichail, Dimitris, E-mail: papamicd@tcnj.edu [Department of Computer Science, The College of New Jersey, Ewing, NJ (United States)

    2014-10-06

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations.

  5. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    International Nuclear Information System (INIS)

    Gould, Nathan; Hendy, Oliver; Papamichail, Dimitris

    2014-01-01

    Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein-coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review, we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations.
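
    A first-generation, single-objective optimiser of the kind described above can be written in a few lines: replace each codon with the most frequent synonymous codon of the host. The usage table below is a three-amino-acid toy, not a real organism's table.

      # Greedy codon-usage optimisation; real tools balance many objectives
      # (restriction sites, GC content, secondary structure) with heuristics.
      CODON_USAGE = {   # toy relative frequencies for two amino acids and stop
          "L": {"CTG": 0.50, "CTC": 0.20, "TTA": 0.05},
          "K": {"AAA": 0.75, "AAG": 0.25},
          "*": {"TAA": 0.60, "TGA": 0.30},
      }

      def optimize(protein: str) -> str:
          # Pick the most frequent synonymous codon for each residue.
          return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                         for aa in protein)

      print(optimize("LKL*"))   # -> CTGAAACTGTAA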

  6. Tucker tensor analysis of Matern functions in spatial statistics

    KAUST Repository

    Litvinenko, Alexander

    2018-04-20

    Overview of low-rank Tucker tensor methods in spatial statistics: (1) motivation: improving statistical models; (2) motivation: disadvantages of matrices; (3) tools: the Tucker tensor format; (4) tensor approximation of the Matérn covariance function via FFT; (5) typical statistical operations in the Tucker tensor format; (6) numerical experiments.

  7. State-of-the-Art for Hygrothermal Simulation Tools

    Energy Technology Data Exchange (ETDEWEB)

    Boudreaux, Philip R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); New, Joshua Ryan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shrestha, Som S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Adams, Mark B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Pallin, Simon B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-02-01

    The hygrothermal (heat and moisture) performance of buildings can be assessed by utilizing simulation tools. There are currently a number of hygrothermal calculation tools available, which vary in their degree of sophistication and runtime requirements. This report investigates three of the most commonly used models (WUFI, HAMT, and EMPD) to assess their limitations and potential to generate physically realistic results, in order to prioritize improvements for EnergyPlus (which uses HAMT and EMPD). The outcome of the study shows that, of these three tools, WUFI has the greatest hygrothermal capabilities. Limitations of these tools were also assessed, including: WUFI’s inability to properly account for air leakage and transfer at surface boundaries; HAMT’s inability to handle air leakage, precipitation-related moisture problems, or condensation problems from high relative humidity; and multiple limitations for EMPD, a simplified method for estimating indoor temperature and humidity levels that is generally not used to estimate the hygrothermal performance of building envelope materials. In conclusion, of the three investigated simulation tools, HAMT has the greatest modeling potential and is open source; we have prioritized specific features that can enable EnergyPlus to model all relevant heat and moisture transfer mechanisms that impact the performance of building envelope components.

  8. Application of Statistics in Engineering Technology Programs

    Science.gov (United States)

    Zhan, Wei; Fink, Rainer; Fang, Alex

    2010-01-01

    Statistics is a critical tool for robustness analysis, measurement system error analysis, test data analysis, probabilistic risk assessment, and many other fields in the engineering world. Traditionally, however, statistics is not extensively used in undergraduate engineering technology (ET) programs, resulting in a major disconnect from industry…

  9. Extended statistical entropy analysis as a quantitative management tool for water resource systems

    Science.gov (United States)

    Sobantka, Alicja; Rechberger, Helmut

    2010-05-01

    The use of entropy in hydrology and water resources has found various applications. As water resource systems are inherently spatial and complex, a stochastic description of these systems is needed, and entropy theory enables the development of such a description by determining the least-biased probability distributions under limited knowledge and data. Entropy can also serve as a basis for risk and reliability analysis. Relative entropy has been variously interpreted as a measure of freedom of choice, uncertainty and disorder, information content, missing information, or information gain or loss. In the analysis of empirical data, entropy is another measure of dispersion, an alternative to the variance. As an evaluation tool, statistical entropy analysis (SEA) has been developed by previous workers to quantify the power of a process to concentrate chemical elements. Within this research programme, SEA is to be extended for application to chemical compounds and tested for its deficits and potentials in systems where water resources play an important role. The extended SEA (eSEA) will be developed first for the nitrogen balance in waste water treatment plants (WWTPs). Later applications to the emission of substances to water bodies such as groundwater (e.g. leachate from landfills) will also be possible. By applying eSEA to the nitrogen balance in a WWTP, all possible nitrogen compounds which may occur during the water treatment process are taken into account and quantified in their impact on the environment and human health. It has been shown that entropy-reducing processes are part of modern waste management. Generally, materials management should be performed in a way that avoids significant entropy rise. The entropy metric might also be used to perform benchmarking on WWTPs; the result of this management tool would be a determination of the efficiency of WWTPs. By improving and optimizing the efficiency
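
    The core of a statistical entropy calculation is small; the sketch below computes the Shannon entropy of a substance's distribution over a plant's output flows. The nitrogen loads are invented, and the published eSEA additionally weights each flow by its concentration, which this simplified version omits.

      import numpy as np

      # Hypothetical nitrogen loads (kg N/d) leaving a WWTP in different flows,
      # e.g. N2 off-gas, effluent nitrate, effluent ammonia, sludge nitrogen.
      loads = np.array([120.0, 15.0, 8.0, 2.0])

      def statistical_entropy(loads):
          """Shannon entropy of the substance distribution over flows (bits)."""
          p = loads / loads.sum()
          p = p[p > 0]
          return float(-(p * np.log2(p)).sum())

      h = statistical_entropy(loads)
      h_max = np.log2(len(loads))      # fully even split over the output flows
      print(f"H = {h:.3f} bits, relative entropy = {h / h_max:.2f}")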

  10. Business statistics for competitive advantage with Excel 2016 basics, model building, simulation and cases

    CERN Document Server

    Fraser, Cynthia

    2016-01-01

    The revised Fourth Edition of this popular textbook is redesigned with Excel 2016 to encourage business students to develop competitive advantages for use in their future careers as decision makers. Students learn to build models using logic and experience, produce statistics using Excel 2016 with shortcuts, and translate results into implications for decision makers. The textbook features new examples and assignments on global markets, including cases featuring Chipotle and Costco. Exceptional managers know that they can create competitive advantages by basing decisions on performance response under alternative scenarios, and managers need to understand how to use statistics to create such advantages. Statistics, from basic to sophisticated models, are illustrated with examples using real data such as students will encounter in their roles as managers. A number of examples focus on business in emerging global markets with particular emphasis on emerging markets in Latin America, China, and India. Results are...

  11. Introduction to statistics

    CERN Multimedia

    CERN. Geneva

    2005-01-01

    The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  12. Introduction to statistics

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The three lectures will present an introduction to statistical methods as used in High Energy Physics. As the time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  13. Library of sophisticated functions for analysis of nuclear spectra

    Science.gov (United States)

    Morháč, Miroslav; Matoušek, Vladislav

    2009-10-01

    In the paper we present a compact library for the analysis of nuclear spectra. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting. The functions can process one- and two-dimensional spectra. The software described in the paper comprises a number of conventional as well as newly developed methods needed to analyze experimental data. Program summary - Program title: SpecAnalysLib 1.1; Catalogue identifier: AEDZ_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDZ_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 42 154; No. of bytes in distributed program, including test data, etc.: 2 379 437; Distribution format: tar.gz; Programming language: C++; Computer: Pentium 3 PC 2.4 GHz or higher, Borland C++ Builder v. 6 (a precompiled Windows version is included in the distribution package); Operating system: Windows 32 bit versions; RAM: 10 MB; Word size: 32 bits; Classification: 17.6. Nature of problem: The demand for advanced, highly effective experimental data analysis functions is enormous. The library package represents one approach to giving physicists the possibility to use the advanced routines simply by calling them from their own programs. SpecAnalysLib is a collection of functions for the analysis of one- and two-parameter γ-ray spectra, but they can be used for other types of data as well. The library consists of sophisticated functions for background elimination, smoothing, peak searching, deconvolution, and peak fitting. Solution method: The algorithms of background estimation are based on the Sensitive Nonlinear Iterative Peak (SNIP) clipping algorithm. The smoothing algorithms are based on the convolution of the original data with several types of filters and algorithms based on discrete
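
    The SNIP background estimator named above is compact enough to sketch. The version below is a minimal one-dimensional variant with the usual log-log-square-root (LLS) operator and a linearly growing clipping window; the parameters and the toy spectrum are invented.

      import numpy as np

      def snip_background(spectrum, iterations=20):
          """Minimal 1-D SNIP background estimate: iteratively clip each
          channel against the mean of its neighbours at growing distance."""
          # LLS operator compresses the dynamic range before clipping.
          v = np.log(np.log(np.sqrt(spectrum + 1.0) + 1.0) + 1.0)
          for k in range(1, iterations + 1):
              left, right = np.roll(v, k), np.roll(v, -k)
              clipped = np.minimum(v, (left + right) / 2.0)
              clipped[:k], clipped[-k:] = v[:k], v[-k:]   # leave the edges alone
              v = clipped
          return (np.exp(np.exp(v) - 1.0) - 1.0) ** 2 - 1.0   # invert the LLS

      # Toy spectrum: smooth background plus two Gaussian peaks.
      x = np.arange(512)
      spec = (50 * np.exp(-x / 300) + 100 * np.exp(-((x - 128) / 5) ** 2)
              + 60 * np.exp(-((x - 300) / 7) ** 2))
      bg = snip_background(spec)
      # The difference approximately recovers the two peak amplitudes.
      print("net heights:", round(float((spec - bg)[128]), 1),
            round(float((spec - bg)[300]), 1))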

  14. Statistical methods for ranking data

    CERN Document Server

    Alvo, Mayer

    2014-01-01

    This book introduces advanced undergraduate, graduate students and practitioners to statistical methods for ranking data. An important aspect of nonparametric statistics is oriented towards the use of ranking data. Rank correlation is defined through the notion of distance functions and the notion of compatibility is introduced to deal with incomplete data. Ranking data are also modeled using a variety of modern tools such as CART, MCMC, EM algorithm and factor analysis. This book deals with statistical methods used for analyzing such data and provides a novel and unifying approach for hypotheses testing. The techniques described in the book are illustrated with examples and the statistical software is provided on the authors’ website.
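
    Rank correlation via distance functions, as mentioned above, reduces to counting discordant pairs. A minimal sketch (made-up rankings, plain Python) is given below; the normalisation shown yields Kendall's tau.

      from itertools import combinations

      def kendall_distance(r1, r2):
          """Number of item pairs ordered differently by the two rankings."""
          return sum((r1.index(a) - r1.index(b)) * (r2.index(a) - r2.index(b)) < 0
                     for a, b in combinations(r1, 2))

      def kendall_tau(r1, r2):
          # Normalised to [-1, 1]: 1 = identical order, -1 = reversed order.
          n = len(r1)
          return 1 - 4 * kendall_distance(r1, r2) / (n * (n - 1))

      judge_1 = ["A", "B", "C", "D"]   # hypothetical rankings of four items
      judge_2 = ["B", "A", "C", "D"]
      print(kendall_distance(judge_1, judge_2))   # 1 discordant pair
      print(round(kendall_tau(judge_1, judge_2), 3))   # 0.667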

  15. Photogrammetric computer vision statistics, geometry, orientation and reconstruction

    CERN Document Server

    Förstner, Wolfgang

    2016-01-01

    This textbook offers a statistical view on the geometry of multiple view analysis, required for camera calibration and orientation and for geometric scene reconstruction based on geometric image features. The authors have backgrounds in geodesy and also long experience with development and research in computer vision, and this is the first book to present a joint approach from the converging fields of photogrammetry and computer vision. Part I of the book provides an introduction to estimation theory, covering aspects such as Bayesian estimation, variance components, and sequential estimation, with a focus on the statistically sound diagnostics of estimation results essential in vision metrology. Part II provides tools for 2D and 3D geometric reasoning using projective geometry. This includes oriented projective geometry and tools for statistically optimal estimation and test of geometric entities and transformations and their relations, tools that are useful also in the context of uncertain reasoning in po

  16. One Step Construction of Agrobacterium Recombination-ready-plasmids (OSCAR), an efficient and robust tool for ATMT based gene deletion construction in fungi

    Science.gov (United States)

    The increasing availability of genomic data and the growing sophistication of analytical methodology in fungi have elevated the need for functional genomics tools in these organisms. Previously we reported a method called DelsGate for rapid preparation of deletion constructs for protoplast-mediated fungal transformation...

  17. A novel form of spontaneous tool use displayed by several captive greater vasa parrots (Coracopsis vasa).

    Science.gov (United States)

    Lambert, Megan L; Seed, Amanda M; Slocombe, Katie E

    2015-12-01

    Parrots are frequently cited for their sophisticated problem-solving abilities, but cases of habitual tool use among psittacines are scarce. We report the first evidence, to our knowledge, of tool use by greater vasa parrots (Coracopsis vasa). Several members of a captive population spontaneously adopted a novel tool-using technique by using pebbles and date pits either (i) to scrape on the inner surface of seashells, subsequently licking the resulting calcium powder from the tool, or (ii) as a wedge to break off smaller pieces of the shell for ingestion. Tool use occurred most frequently just prior to the breeding season, during which time numerous instances of tool transfer were also documented. These observations provide new insights into the tool-using capabilities of parrots and highlight the greater vasa parrot as a species of interest for studies of physical cognition. © 2015 The Author(s).

  18. Overview of Automotive Core Tools: Applications and Benefits

    Science.gov (United States)

    Doshi, Jigar A.; Desai, Darshak

    2017-08-01

    Continuous improvement of product and process quality is always a challenging and creative task in today's era of globalization. Various quality tools are available and used for this purpose; some of them are successful and a few of them are not. Considering the complexity of the continuous quality improvement (CQI) process, various new techniques are being introduced by industry, as well as proposed by researchers and academia: Lean Manufacturing, Six Sigma, and Lean Six Sigma are some of these techniques. In recent years, new tools have been adopted by industry, especially automotive, called Automotive Core Tools (ACT). The intention of this paper is to review the applications and benefits, along with existing research, of Automotive Core Tools, with special emphasis on continuous quality improvement. The methodology uses an extensive review of literature through reputed publications: journals, conference proceedings, research theses, etc. This paper provides an overview of ACT, its enablers and exertions, and how it evolved into sophisticated methodologies and benefits used in organisations. It should be of value to practitioners of Automotive Core Tools and to academics who are interested in how CQI can be achieved using ACT. It needs to be stressed here that this paper is not intended to scorn Automotive Core Tools; rather, its purpose is limited to providing a balance to the prevailing positive views toward ACT.

  19. Decommissioned Data Tools and Web Applications

    Science.gov (United States)

    Census Bureau reference page listing decommissioned data tools and web applications (e.g., American FactFinder) and pointing users to current data tools and apps, such as Census Business Builder, for employment, payroll, business-owner, and household statistics drawn from multiple surveys.

  20. Multivariate Statistical Methods as a Tool of Financial Analysis of Farm Business

    Czech Academy of Sciences Publication Activity Database

    Novák, J.; Sůvová, H.; Vondráček, Jiří

    2002-01-01

    Vol. 48, No. 1 (2002), pp. 9-12. ISSN 0139-570X. Institutional research plan: AV0Z1030915. Keywords: financial analysis * financial ratios * multivariate statistical methods * correlation analysis * discriminant analysis * cluster analysis. Subject RIV: BB - Applied Statistics, Operational Research

  1. Results of a multicentre randomised controlled trial of statistical process control charts and structured diagnostic tools to reduce ward-acquired meticillin-resistant Staphylococcus aureus: the CHART Project.

    Science.gov (United States)

    Curran, E; Harper, P; Loveday, H; Gilmour, H; Jones, S; Benneyan, J; Hood, J; Pratt, R

    2008-10-01

    Statistical process control (SPC) charts have previously been advocated for infection control quality improvement. To determine their effectiveness, a multicentre randomised controlled trial was undertaken to explore whether monthly SPC feedback from infection control nurses (ICNs) to healthcare workers on ward-acquired meticillin-resistant Staphylococcus aureus (WA-MRSA) colonisation or infection rates would produce any reductions in incidence. Seventy-five wards in 24 hospitals in the UK were randomised into three arms: (1) wards receiving SPC chart feedback; (2) wards receiving SPC chart feedback in conjunction with structured diagnostic tools; and (3) control wards receiving neither type of feedback. Twenty-five months of pre-intervention WA-MRSA data were compared with 24 months of post-intervention data. Statistically significant and sustained decreases in WA-MRSA rates were identified in all three arms, including the control wards, but with no significant difference between the control and intervention arms (P=0.23). There were significantly more post-intervention 'out-of-control' episodes (P=0.021) in the control arm (averages of 0.60, 0.28, and 0.28 for Control, SPC and SPC+Tools wards, respectively). Participants identified SPC charts as an effective communication tool and valuable for disseminating WA-MRSA data.

  2. Prerequisites for Systems Analysts: Analytic and Management Demands of a New Approach to Educational Administration.

    Science.gov (United States)

    Ammentorp, William

    There is much to be gained by using systems analysis in educational administration. Most administrators, presently relying on classical statistical techniques restricted to problems having few variables, should be trained to use more sophisticated tools such as systems analysis. The systems analyst, interested in the basic processes of a group or…

  3. Discover Space Weather and Sun's Superpowers: Using CCMC's innovative tools and applications

    Science.gov (United States)

    Mendoza, A. M. M.; Maddox, M. M.; Kuznetsova, M. M.; Chulaki, A.; Rastaetter, L.; Mullinix, R.; Weigand, C.; Boblitt, J.; Taktakishvili, A.; MacNeice, P. J.; Pulkkinen, A. A.; Pembroke, A. D.; Mays, M. L.; Zheng, Y.; Shim, J. S.

    2015-12-01

    Community Coordinated Modeling Center (CCMC) has developed a comprehensive set of tools and applications that are directly applicable to space weather and space science education. These tools, some of which were developed by our student interns, are capable of serving a wide range of student audiences, from middle school to postgraduate research. They include a web-based point of access to sophisticated space physics models and visualizations, and a powerful space weather information dissemination system, available on the web and as a mobile app. In this demonstration, we will use CCMC's innovative tools to engage the audience in real-time space weather analysis and forecasting and will share some of our interns' hands-on experiences while being trained as junior space weather forecasters. The main portals to CCMC's educational material are ccmc.gsfc.nasa.gov and iswa.gsfc.nasa.gov

  4. Statistical inference for financial engineering

    CERN Document Server

    Taniguchi, Masanobu; Ogata, Hiroaki; Taniai, Hiroyuki

    2014-01-01

    This monograph provides the fundamentals of statistical inference for financial engineering and covers some selected methods suitable for analyzing financial time series data. In order to describe the actual financial data, various stochastic processes, e.g. non-Gaussian linear processes, non-linear processes, long-memory processes, locally stationary processes etc. are introduced and their optimal estimation is considered as well. This book also includes several statistical approaches, e.g., discriminant analysis, the empirical likelihood method, control variate method, quantile regression, realized volatility etc., which have been recently developed and are considered to be powerful tools for analyzing the financial data, establishing a new bridge between time series and financial engineering. This book is well suited as a professional reference book on finance, statistics and statistical financial engineering. Readers are expected to have an undergraduate-level knowledge of statistics.

  5. Statistical fluctuations of an ocean surface inferred from shoes and ships

    Science.gov (United States)

    Lerche, Ian; Maubeuge, Frédéric

    1995-12-01

This paper shows that it is possible to roughly estimate some ocean properties using simple time-dependent statistical models of ocean fluctuations. Based on a real incident, the loss by a vessel of a container of Nike shoes in the North Pacific Ocean, a statistical model was tested on data sets consisting of the Nike shoes found by beachcombers a few months later. This statistical treatment of the shoes' motion allows one to infer velocity trends of the Pacific Ocean, together with their fluctuation strengths. The idea is to suppose that there is a mean bulk flow speed that can depend on location on the ocean surface and time. The fluctuations of the surface flow speed are then treated as statistically random. The distribution of shoes is described in space and time using Markov probability processes related to the mean and fluctuating ocean properties. The aim of the exercise is to provide some of the properties of the Pacific Ocean that are otherwise calculated using a sophisticated numerical model, OSCURS, where numerous data are needed. Relevant quantities are sharply estimated, which can be useful to (1) constrain output results from OSCURS computations, and (2) elucidate the behavior patterns of ocean flow characteristics on long time scales.
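
    The Markov model described above amounts to a drift-plus-noise random walk. A minimal simulation sketch, with the mean flow and fluctuation strength taken as hypothetical constants rather than functions of position and time, is:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical mean surface flow (km/day, eastward and northward) and
# fluctuation strength; in the paper's model both vary with location and time.
u_mean = np.array([8.0, 2.0])   # bulk drift velocity
sigma = 15.0                    # random-fluctuation scale (km/day)

n_shoes, n_days = 500, 180
pos = np.zeros((n_shoes, 2))    # all shoes start at the spill site

for _ in range(n_days):
    # Markov step: deterministic drift plus independent random fluctuation.
    pos += u_mean + sigma * rng.standard_normal((n_shoes, 2))

print("mean displacement (km):", pos.mean(axis=0))
print("spread (std, km):", pos.std(axis=0))
```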

  6. Modern applied statistics with S-plus

    CERN Document Server

    Venables, W N

    1994-01-01

S-Plus is a powerful environment for statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-Plus to perform statistical analyses and provides both an introduction to the use of S-Plus and a course in modern statistical methods. The aim of the book is to show how to use S-Plus as a powerful and graphical data analysis system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-Plus, both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets.

  7. "SOCRATICS" AS ADDRESSES OF ISOCRATES’ EPIDEICTIC SPEECHES (Against the Sophists, Encomium of Helen, Busiris

    Directory of Open Access Journals (Sweden)

    Anna Usacheva

    2012-06-01

Full Text Available This article analyses the three epideictic orations of Isocrates, which are in themselves a precious testimony to the quality of intellectual life at the close of the fourth century before Christ. To this period belong also the Socratics, who are generally seen as an important link between Socrates and Plato. The author of this article proposes a more productive approach to the study of Antisthenes, Euclid of Megara and other so-called Socratics, revealing them not as independent thinkers but rather as adherents of the sophistic school and also as teachers, thereby including them among those who took part in the educative activity of their time.

  8. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.
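
    As a minimal illustration of why robust methods matter (not drawn from the conference papers; data simulated), compare classical and robust location/scale estimates on a sample containing a few gross outliers:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(10.0, 1.0, 100)
data[:5] = 100.0  # five gross outliers

# Classical estimates are dragged far from the bulk of the data...
mean, std = data.mean(), data.std(ddof=1)
# ...while the median and the MAD (scaled to be consistent with the
# standard deviation for normal data) barely move.
median = np.median(data)
mad = 1.4826 * np.median(np.abs(data - median))

print(f"classical: mean={mean:.2f}, std={std:.2f}")
print(f"robust:    median={median:.2f}, MAD={mad:.2f}")
```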

  9. Flaws and fallacies in statistical thinking

    CERN Document Server

    Campbell, Stephen K

    2004-01-01

This book was written with a dual purpose: first, the author was motivated to relieve his distress over the faulty conclusions drawn from the frequent misuse of relatively simple statistical tools such as percents, graphs, and averages. Second, his objective was to create a nontechnical book that would help people make better-informed decisions by increasing their ability to judge the quality of statistical evidence. This volume achieves both, serving as a supplemental text for students taking their first course in statistics, and as a self-help guide for anyone wishing to evaluate statistical evidence.

  10. Using health statistics: a Nightingale legacy.

    Science.gov (United States)

    Schloman, B F

    2001-01-01

No more forceful example of the value of using health statistics to understand and improve health conditions exists than that displayed by Florence Nightingale. The recent book by Dossey (1999), Florence Nightingale: Mystic, Visionary, Healer, relates the dramatic tale of Nightingale's use of statistics to understand the causes of deaths in the Crimean War and of her advocacy to standardize the collection of medical data within the army and in civilian hospitals. For her, the use of health statistics was a major tool to improve health and influence public opinion.

  11. Undergraduate experiments on statistical optics

    International Nuclear Information System (INIS)

    Scholz, Ruediger; Friege, Gunnar; Weber, Kim-Alessandro

    2016-01-01

Since the pioneering experiments of Forrester et al (1955 Phys. Rev. 99 1691) and Hanbury Brown and Twiss (1956 Nature 177 27; Nature 178 1046), along with the introduction of the laser in the 1960s, the systematic analysis of random fluctuations of optical fields has developed to become an indispensable part of physical optics for gaining insight into features of the fields. In 1985 Joseph W Goodman prefaced his textbook on statistical optics with a strong commitment to the ‘tools of probability and statistics’ (Goodman 2000 Statistical Optics (New York: John Wiley and Sons Inc.)) in the education of advanced optics. Since then a wide range of novel undergraduate optical counting experiments and corresponding pedagogical approaches have been introduced to underpin the rapid growth of the interest in coherence and photon statistics. We propose low cost experimental steps that are a fair way off ‘real’ quantum optics, but that give deep insight into random optical fluctuation phenomena: (1) the introduction of statistical methods into undergraduate university optical lab work, and (2) the connection between the photoelectrical signal and the characteristics of the light source. We describe three experiments and theoretical approaches which may be used to pave the way for a well balanced growth of knowledge, providing students with an opportunity to enhance their abilities to adapt the ‘tools of probability and statistics’. (paper)
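
    A low-cost numerical counterpart to such counting experiments is easy to sketch (parameters hypothetical): coherent light yields Poissonian photocounts, whereas thermal light yields Bose-Einstein statistics with excess variance:

```python
import numpy as np

rng = np.random.default_rng(5)
n_bins, mean_count = 100_000, 5.0

# Coherent (laser) light: constant intensity -> Poissonian counts.
coherent = rng.poisson(mean_count, n_bins)

# Thermal (chaotic) light: exponentially fluctuating intensity ->
# Bose-Einstein counting statistics (a compound Poisson process).
thermal = rng.poisson(rng.exponential(mean_count, n_bins))

for name, c in [("coherent", coherent), ("thermal", thermal)]:
    mean, var = c.mean(), c.var()
    # Fano factor: 1 for Poisson light, 1 + <n> for thermal light.
    print(f"{name}: mean={mean:.2f}, var={var:.2f}, Fano={var / mean:.2f}")
```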

  12. Entropy statistics and information theory

    NARCIS (Netherlands)

    Frenken, K.; Hanusch, H.; Pyka, A.

    2007-01-01

Entropy measures provide important tools to indicate variety in distributions at particular moments in time (e.g., market shares) and to analyse evolutionary processes over time (e.g., technical change). Importantly, entropy statistics are suitable to decomposition analysis, which renders the measures applicable at several levels of aggregation simultaneously.
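
    The decomposition property mentioned above can be made concrete with a short sketch (market shares hypothetical): total Shannon entropy splits exactly into a between-group term plus a share-weighted within-group term:

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Hypothetical market shares, grouped into two technology classes.
shares = np.array([0.30, 0.20, 0.25, 0.15, 0.10])
groups = [np.array([0.30, 0.20]), np.array([0.25, 0.15, 0.10])]

H_total = shannon_entropy(shares)
P = np.array([g.sum() for g in groups])        # total share of each group
H_between = shannon_entropy(P)
# Within-group term: entropy of each group's internal distribution,
# weighted by the group's total share.
H_within = sum(Pg * shannon_entropy(g / Pg) for Pg, g in zip(P, groups))

print(f"H = {H_total:.4f} = between {H_between:.4f} + within {H_within:.4f}")
```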

  13. Software for statistical data analysis used in Higgs searches

    International Nuclear Information System (INIS)

    Gumpert, Christian; Moneta, Lorenzo; Cranmer, Kyle; Kreiss, Sven; Verkerke, Wouter

    2014-01-01

    The analysis and interpretation of data collected by the Large Hadron Collider (LHC) requires advanced statistical tools in order to quantify the agreement between observation and theoretical models. RooStats is a project providing a statistical framework for data analysis with the focus on discoveries, confidence intervals and combination of different measurements in both Bayesian and frequentist approaches. It employs the RooFit data modelling language where mathematical concepts such as variables, (probability density) functions and integrals are represented as C++ objects. RooStats and RooFit rely on the persistency technology of the ROOT framework. The usage of a common data format enables the concept of digital publishing of complicated likelihood functions. The statistical tools have been developed in close collaboration with the LHC experiments to ensure their applicability to real-life use cases. Numerous physics results have been produced using the RooStats tools, with the discovery of the Higgs boson by the ATLAS and CMS experiments being certainly the most popular among them. We will discuss tools currently used by LHC experiments to set exclusion limits, to derive confidence intervals and to estimate discovery significances based on frequentist statistics and the asymptotic behaviour of likelihood functions. Furthermore, new developments in RooStats and performance optimisation necessary to cope with complex models depending on more than 1000 variables will be reviewed
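
    RooStats itself is a C++/ROOT framework; as a language-neutral sketch of the asymptotic likelihood-ratio reasoning mentioned above, consider a toy Poisson counting experiment in which the discovery significance is quoted as Z = sqrt(q0) via Wilks' theorem (all numbers hypothetical):

```python
import math

def poisson_loglik(n, mu):
    # Log-likelihood up to a constant (log n! cancels in the ratio).
    return n * math.log(mu) - mu

# Hypothetical counting experiment: b expected background, n observed,
# s the best-fit signal (here s = n - b).
b, s, n = 100.0, 30.0, 130

lnL_sb = poisson_loglik(n, s + b)   # signal-plus-background hypothesis
lnL_b = poisson_loglik(n, b)        # background-only hypothesis
q0 = 2.0 * (lnL_sb - lnL_b)         # likelihood-ratio test statistic
Z = math.sqrt(q0)                   # asymptotic significance (Wilks)

print(f"q0 = {q0:.2f}, significance Z = {Z:.2f} sigma")
```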

  14. Computational Tools and Algorithms for Designing Customized Synthetic Genes

    Directory of Open Access Journals (Sweden)

    Nathan eGould

    2014-10-01

Full Text Available Advances in DNA synthesis have enabled the construction of artificial genes, gene circuits, and genomes of bacterial scale. Freedom in de novo design of synthetic constructs provides significant power in studying the impact of mutations in sequence features, and verifying hypotheses on the functional information that is encoded in nucleic and amino acids. To aid this goal, a large number of software tools of variable sophistication have been implemented, enabling the design of synthetic genes for sequence optimization based on rationally defined properties. The first generation of tools dealt predominantly with singular objectives such as codon usage optimization and unique restriction site incorporation. Recent years have seen the emergence of sequence design tools that aim to evolve sequences toward combinations of objectives. The design of optimal protein coding sequences adhering to multiple objectives is computationally hard, and most tools rely on heuristics to sample the vast sequence design space. In this review we study some of the algorithmic issues behind gene optimization and the approaches that different tools have adopted to redesign genes and optimize desired coding features. We utilize test cases to demonstrate the efficiency of each approach, as well as identify their strengths and limitations.
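
    The simplest of the singular objectives mentioned above, codon usage optimization, can be sketched as a greedy lookup. The partial codon table below is illustrative only; real tools trade this objective off against restriction sites, GC content, mRNA structure and other constraints:

```python
# Hypothetical (partial) codon-usage table: relative frequency of each codon
# for its amino acid in the target host. A real tool would use a full table
# derived from highly expressed genes of the host organism.
CODON_USAGE = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.74, "AAG": 0.26},
    "L": {"CTG": 0.47, "TTG": 0.13, "CTT": 0.12, "CTC": 0.10,
          "TTA": 0.14, "CTA": 0.04},
    "F": {"TTT": 0.58, "TTC": 0.42},
}

def optimize(protein):
    """Greedy 'one amino acid, one codon' optimization: always pick the
    most frequent codon for each residue."""
    return "".join(max(CODON_USAGE[aa], key=CODON_USAGE[aa].get)
                   for aa in protein)

print(optimize("MKLF"))  # -> ATGAAACTGTTT
```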

  15. SOCR: Statistics Online Computational Resource

    Directory of Open Access Journals (Sweden)

    Ivo D. Dinov

    2006-10-01

Full Text Available The need for hands-on computer laboratory experience in undergraduate and graduate statistics education has been firmly established in the past decade. As a result a number of attempts have been undertaken to develop novel approaches for problem-driven statistical thinking, data analysis and result interpretation. In this paper we describe an integrated educational web-based framework for: interactive distribution modeling, virtual online probability experimentation, statistical data analysis, visualization and integration. Following years of experience in statistical teaching at all college levels using established licensed statistical software packages, like STATA, S-PLUS, R, SPSS, SAS, Systat, etc., we have attempted to engineer a new statistics education environment, the Statistics Online Computational Resource (SOCR). This resource performs many of the standard types of statistical analysis, much like other classical tools. In addition, it is designed in a plug-in object-oriented architecture and is completely platform independent, web-based, interactive, extensible and secure. Over the past 4 years we have tested, fine-tuned and reanalyzed the SOCR framework in many of our undergraduate and graduate probability and statistics courses and have evidence that SOCR resources build students' intuition and enhance their learning.

  16. Probing Chromatin-modifying Enzymes with Chemical Tools

    KAUST Repository

    Fischle, Wolfgang

    2016-02-04

    Chromatin is the universal template of genetic information in all eukaryotic organisms. Chemical modifications of the DNA-packaging histone proteins and the DNA bases are crucial signaling events in directing the use and readout of eukaryotic genomes. The enzymes that install and remove these chromatin modifications as well as the proteins that bind these marks govern information that goes beyond the sequence of DNA. Therefore, these so-called epigenetic regulators are intensively studied and represent promising drug targets in modern medicine. We summarize and discuss recent advances in the field of chemical biology that have provided chromatin research with sophisticated tools for investigating the composition, activity, and target sites of chromatin modifying enzymes and reader proteins.

  17. HAMMER: Reweighting tool for simulated data samples

    CERN Document Server

    Duell, Stephan; Ligeti, Zoltan; Papucci, Michele; Robinson, Dean

    2016-01-01

Modern flavour physics experiments, such as Belle II or LHCb, require large samples of generated Monte Carlo events. Monte Carlo events are often processed in a sophisticated chain that includes a simulation of the detector response. The generation and reconstruction of large samples is resource-intensive and in principle would need to be repeated if, e.g., parameters of the underlying models change due to new measurements or new insights. To avoid having to regenerate large samples, we work on a tool, the Helicity Amplitude Module for Matrix Element Reweighting (HAMMER), which allows one to easily reweight existing events in the context of semileptonic b → q ℓ ν̄_ℓ analyses to new model parameters or new physics scenarios.
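
    HAMMER's own interface is not shown here, but the underlying idea of reweighting existing events rather than regenerating them is plain importance weighting, w = p_new(x) / p_old(x), as in this toy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy events: a kinematic variable x generated under an "old" model.
old_scale = 1.0
x = rng.exponential(old_scale, 10_000)

def pdf_expon(x, scale):
    return np.exp(-x / scale) / scale

# Reweight to a "new" model without regenerating: w = p_new(x) / p_old(x).
new_scale = 1.2
w = pdf_expon(x, new_scale) / pdf_expon(x, old_scale)

print("old mean:", x.mean())                          # ~1.0
print("reweighted mean:", np.average(x, weights=w))   # ~1.2
```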

  18. Physics analysis tools

    International Nuclear Information System (INIS)

    Kunz, P.F.

    1991-04-01

There are many tools used in analysis in High Energy Physics (HEP). They range from low-level tools such as a programming language to high-level ones such as a detector simulation package. This paper will discuss some aspects of these tools that are directly associated with the process of analyzing HEP data. Physics analysis tools cover the whole range from the simulation of the interactions of particles to the display and fitting of statistical data. For purposes of this paper, the process of analysis is broken down into five main stages. The categories are also classified as areas of generation, reconstruction, and analysis. Different detector groups use different terms for these stages, so it is useful to define what is meant by them in this paper. The particle generation stage is a simulation of the initial interaction, the production of particles, and the decay of the short-lived particles. The detector simulation stage simulates the behavior of an event in a detector. The track reconstruction stage does pattern recognition on the measured or simulated space points, calorimeter information, etc., and reconstructs track segments of the original event. The event reconstruction stage takes the reconstructed tracks, along with particle identification information, and assigns masses to produce 4-vectors. Finally, the display and fit stage displays statistical data accumulated in the preceding stages in the form of histograms, scatter plots, etc. The remainder of this paper will consider what analysis tools are available today, and what one might expect in the future. In each stage, the integration of the tools with other stages and the portability of the tool will be analyzed.

  19. COMPARATIVE STATISTICAL ANALYSIS OF GENOTYPES’ COMBINING

    Directory of Open Access Journals (Sweden)

    V. Z. Stetsyuk

    2015-05-01

The program provides for the creation of a desktop software complex for statistical calculations on a physician's personal computer. Modern methods and tools for the development of information systems are described.

  20. "Dear Fresher …"--How Online Questionnaires Can Improve Learning and Teaching Statistics

    Science.gov (United States)

    Bebermeier, Sarah; Nussbeck, Fridtjof W.; Ontrup, Greta

    2015-01-01

    Lecturers teaching statistics are faced with several challenges supporting students' learning in appropriate ways. A variety of methods and tools exist to facilitate students' learning on statistics courses. The online questionnaires presented in this report are a new, slightly different computer-based tool: the central aim was to support students…

  1. Decision support using nonparametric statistics

    CERN Document Server

    Beatty, Warren

    2018-01-01

This concise volume covers the nonparametric statistics topics that are most likely to be seen and used from a practical decision support perspective. While many degree programs require a course in parametric statistics, these methods are often inadequate for real-world decision making in business environments. Much of the data collected today by business executives (for example, customer satisfaction opinions) requires nonparametric statistics for valid analysis, and this book provides the reader with a set of tools that can be used to validly analyze all data, regardless of type. Through numerous examples and exercises, this book explains why nonparametric statistics will lead to better decisions and how they are used to reach a decision, with a wide array of business applications. Online resources include exercise data, spreadsheets, and solutions.
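
    As a small illustration of the book's theme (example data hypothetical), ordinal survey ratings are better compared with a rank-based test than with a two-sample t-test, whose distributional assumptions they violate:

```python
import numpy as np
from scipy import stats

# Hypothetical ordinal customer-satisfaction ratings (1-5) for two branches.
branch_a = np.array([4, 5, 3, 4, 5, 4, 2, 5, 4, 3])
branch_b = np.array([3, 2, 4, 3, 2, 3, 3, 1, 4, 2])

# The Mann-Whitney U test uses only the ranks of the observations.
u, p = stats.mannwhitneyu(branch_a, branch_b, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.4f}")
```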

  2. DMA Friends: the mobilization of statistics in a media innovation experiment in the museum sector

    Directory of Open Access Journals (Sweden)

    GERMAN Ronan

    2016-07-01

Full Text Available On January 23, 2013, the Dallas Museum of Art (DMA) returned to free general admission. This announcement coincided with the official launch of two programs: DMA Friends, a loyalty program, and DMA Partners, a membership program which is free of charge. The aim of this article is to propose an analysis, at the crossroads of media semiology, political science and the historical sociology of statistical rationality, with a view to studying the ways in which the DMA Friends program designers have mobilized the statistical argument to justify the soundness of their approach. In order to do so, they leaned on a range of media forms generated by a sophisticated techno-semiotic apparatus which represents, in statistical form, the behavior of visitors inside the museum. The program (and the whole instrumentation that sustains it) illustrates a media innovation experiment in the museum sector that questions the ways in which statistical work is mediated according to the communicational situations in which it is mobilized and enhanced.

  3. Dynamic principle for ensemble control tools.

    Science.gov (United States)

    Samoletov, A; Vasiev, B

    2017-11-28

    Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications including design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
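
    The paper's own dynamic principle is not reproduced here, but the goal of any thermostat, sampling the canonical measure, can be sketched with a standard Langevin scheme for a harmonic oscillator (Euler-Maruyama discretisation; parameters hypothetical). Equipartition provides the check <x^2> = kT/k:

```python
import numpy as np

rng = np.random.default_rng(7)

kT, m, k = 1.0, 1.0, 1.0        # temperature (k_B = 1), mass, spring constant
gamma, dt = 0.5, 1e-3           # friction coefficient, time step
n_steps = 500_000

x, v = 0.0, 0.0
xs = np.empty(n_steps)
kick = np.sqrt(2.0 * gamma * kT / m * dt)  # fluctuation-dissipation balance

for i in range(n_steps):
    # Euler-Maruyama step for the underdamped Langevin equation.
    v += (-(k / m) * x - gamma * v) * dt + kick * rng.standard_normal()
    x += v * dt
    xs[i] = x

# Canonical sampling check (equipartition): <x^2> should approach kT/k.
print("sampled <x^2> =", xs[n_steps // 2:].var(), " target:", kT / k)
```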

  4. Testing statistical hypotheses

    CERN Document Server

    Lehmann, E L

    2005-01-01

The third edition of Testing Statistical Hypotheses updates and expands upon the classic graduate text, emphasizing optimality theory for hypothesis testing and confidence sets. The principal additions include a rigorous treatment of large-sample optimality, together with the requisite tools. In addition, an introduction to the theory of resampling methods such as the bootstrap is developed. The sections on multiple testing and goodness-of-fit testing are expanded. The text is suitable for Ph.D. students in statistics and includes over 300 new problems out of a total of more than 760. E.L. Lehmann is Professor of Statistics Emeritus at the University of California, Berkeley. He is a member of the National Academy of Sciences and the American Academy of Arts and Sciences, and the recipient of honorary degrees from the University of Leiden, The Netherlands and the University of Chicago. He is the author of Elements of Large-Sample Theory and (with George Casella) of Theory of Point Estimation.

  5. Applied statistics for agriculture, veterinary, fishery, dairy and allied fields

    CERN Document Server

    Sahu, Pradip Kumar

    2016-01-01

This book is aimed at a wide range of readers who lack confidence in the mathematical and statistical sciences, particularly in the fields of Agriculture, Veterinary, Fishery, Dairy and other related areas. Its goal is to present the subject of statistics and its useful tools in various disciplines in such a manner that, after reading the book, readers will be equipped to apply the statistical tools to extract otherwise hidden information from their data sets with confidence. Starting with the meaning of statistics, the book introduces measures of central tendency, dispersion, association, sampling methods, probability, inference, designs of experiments and many other subjects of interest in a step-by-step and lucid manner. The relevant theories are described in detail, followed by a broad range of real-world worked-out examples, solved either manually or with the help of statistical packages. In closing, the book also includes a chapter on which statistical packages to use, depending on the user's respective needs.

  6. New LWD tools are just in time to probe for baby elephants

    Energy Technology Data Exchange (ETDEWEB)

    Ghiselin, D.

    1997-04-01

Development of sophisticated formation evaluation instrumentation for use while drilling has led to a stratification of while-drilling services. Measurement while drilling (MWD) comprises measurements of mechanical parameters like weight-on-bit, mud pressures, torque, vibration, hole angle and direction. Logging while drilling (LWD) describes resistivity, sonic, and radiation logging which rival wireline measurements in accuracy. A critical feature of LWD is the rate at which data can be telemetered to the surface. Early tools could only transmit 3 bits per second one way. In the last decade, the data rate has more than tripled. Despite these improvements, LWD tools have the ability to make many more measurements than can be telemetered in real time. The paper discusses the development of this technology and its applications.

  7. Evolution of tyre/road noise research in India: Investigations using statistical pass-by method and noise trailer

    Directory of Open Access Journals (Sweden)

    Vivek Khan

    2018-05-01

Full Text Available The objective of this research study was to investigate and analyze the acoustical characteristics of asphalt concrete and cement concrete surface types by two noise measurement techniques: statistical pass-by (SPB) and close proximity (CPX) methods. A noise trailer was devised and manufactured as part of the CPX methodology to evaluate tyre/pavement noise interaction at source. Two national highway test sections covering over 11 km of asphalt and cement concrete surfaces were selected to carry out the noise measurements, and the effects of vehicle speeds and/or sizes on the overall noise profiles were investigated. The major contribution of this first-of-its-kind study in India was the utilization of sophisticated tools and techniques to measure the tyre/pavement interaction noise at source through CPX, which helped correlate the influence of road surfaces on the generation of overall road traffic noise using the SPB technique. The SPB noise profiles revealed that noise pressure levels increased with increasing vehicle speeds and weights. The noise trailer CPX findings corroborated the results obtained from the SPB method in that the cement concrete surface produced higher noise at source than the asphalt concrete surface, by about 5 dBA. Further, there was about a 5 dBA differential in noise between the SPB and CPX methods for cement concrete pavement sections; also, there was about a 10 dBA differential in noise between the two methods for asphalt concrete pavement stretches. Keywords: Tyre/road noise, Statistical pass-by, Close proximity, Noise trailer, Asphalt concrete, Cement concrete

  8. Statistical Techniques for Project Control

    CERN Document Server

    Badiru, Adedeji B

    2012-01-01

A project can be simple or complex. In each case, proven project management processes must be followed. In all cases of project management implementation, control must be exercised in order to assure that project objectives are achieved. Statistical Techniques for Project Control seamlessly integrates qualitative and quantitative tools and techniques for project control. It fills the void that exists in the application of statistical techniques to project control. The book begins by defining the fundamentals of project management, then explores how to temper quantitative analysis with qualitative analysis.

  9. Evidence-Based Medicine as a Tool for Undergraduate Probability and Statistics Education.

    Science.gov (United States)

    Masel, J; Humphrey, P T; Blackburn, B; Levine, J A

    2015-01-01

Most students have difficulty reasoning about chance events, and misconceptions regarding probability can persist or even strengthen following traditional instruction. Many biostatistics classes sidestep this problem by prioritizing exploratory data analysis over probability. However, probability itself, in addition to statistics, is essential both to the biology curriculum and to informed decision making in daily life. One area in which probability is particularly important is medicine. Given the preponderance of pre-health students, in addition to more general interest in medicine, we capitalized on students' intrinsic motivation in this area to teach both probability and statistics. We use the randomized controlled trial as the centerpiece of the course, because it exemplifies the most salient features of the scientific method, and the application of critical thinking to medicine. The other two pillars of the course are biomedical applications of Bayes' theorem and science and society content. Backward design from these three overarching aims was used to select appropriate probability and statistics content, with a focus on eliciting and countering previously documented misconceptions in their medical context. Pretest/posttest assessments using the Quantitative Reasoning Quotient and Attitudes Toward Statistics instruments are positive, bucking several negative trends previously reported in statistics education. © 2015 J. Masel et al. CBE—Life Sciences Education © 2015 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
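
    The biomedical Bayes' theorem applications mentioned above typically center on diagnostic testing. A minimal sketch (test characteristics hypothetical) shows why a positive result from an accurate test for a rare disease can still imply a low probability of disease:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' theorem: P(disease | positive test)."""
    p_pos_given_d = sensitivity
    p_pos_given_not_d = 1.0 - specificity
    # Total probability of a positive test.
    p_pos = prevalence * p_pos_given_d + (1.0 - prevalence) * p_pos_given_not_d
    return prevalence * p_pos_given_d / p_pos

# A 99%-sensitive, 95%-specific test for a disease affecting 1 in 1000:
ppv = positive_predictive_value(0.001, 0.99, 0.95)
print(f"P(disease | positive) = {ppv:.3f}")  # ~0.019, far below most intuitions
```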

  10. Concept Maps in Introductory Statistics

    Science.gov (United States)

    Witmer, Jeffrey A.

    2016-01-01

    Concept maps are tools for organizing thoughts on the main ideas in a course. I present an example of a concept map that was created through the work of students in an introductory class and discuss major topics in statistics and relationships among them.

  11. Working with Proteins in silico: A Review of Online Available Tools for Basic Identification of Proteins

    Directory of Open Access Journals (Sweden)

    Caner Yavuz

    2017-01-01

Full Text Available The increase in online available bioinformatics tools for protein research creates an important opportunity for scientists to reveal characteristics of a protein of interest starting only from the predicted or known amino acid sequence, without fully depending on experimental approaches. There are many sophisticated tools used for diverse purposes; however, there are not enough reviews covering the tips and tricks for selecting and using the correct tools, as the literature mainly promotes new ones. In this review, with the aim of providing young scientists who have no specific experience in protein work with a reliable starting point for in silico analysis of a protein of interest, we summarize tools for annotation, identification of motifs and domains, determination of isoelectric point, molecular weight, subcellular localization, and post-translational modifications, focusing on the important points to be considered while selecting from online available tools.

  12. A Statistically Based Training Diagnostic Tool for Marine Aviation

    Science.gov (United States)

    2014-06-01

the behavioral and social sciences in pursuit of an assessment tool to measure the tactical cognitive skills of officers in the combat arms...

  13. Orangutans (Pongo spp.) may prefer tools with rigid properties to flimsy tools.

    Science.gov (United States)

    Walkup, Kristina R; Shumaker, Robert W; Pruetz, Jill D

    2010-11-01

    Preference for tools with either rigid or flexible properties was explored in orangutans (Pongo spp.) through an extension of D. J. Povinelli, J. E. Reaux, and L. A. Theall's (2000) flimsy-tool problem. Three captive orangutans were presented with three unfamiliar pairs of tools to solve a novel problem. Although each orangutan has spontaneously used tools in the past, the tools presented in this study were novel to the apes. Each pair of tools contained one tool with rigid properties (functional) and one tool with flimsy properties (nonfunctional). Solving the problem required selection of a rigid tool to retrieve a food reward. The functional tool was selected in nearly all trials. Moreover, two of the orangutans demonstrated this within the first test trials with each of the three tool types. Although further research is required to test this statistically, it suggests either a preexisting preference for rigid tools or comprehension of the relevant features required in a tool to solve the task. The results of this study demonstrate that orangutans can recognize, or learn to recognize, relevant tool properties and can choose an appropriate tool to solve a problem. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  14. Introduction to Statistics course

    CERN Multimedia

    CERN. Geneva HR-RFA

    2006-01-01

The four lectures will present an introduction to statistical methods as used in High Energy Physics. As time will be very limited, the course will seek mainly to define the important issues and to introduce the most widely used tools. Topics will include the interpretation and use of probability, estimation of parameters and testing of hypotheses.

  15. A sophisticated simulation for the fracture behavior of concrete material using XFEM

    Science.gov (United States)

    Zhai, Changhai; Wang, Xiaomin; Kong, Jingchang; Li, Shuang; Xie, Lili

    2017-10-01

    The development of a powerful numerical model to simulate the fracture behavior of concrete material has long been one of the dominant research areas in earthquake engineering. A reliable model should be able to adequately represent the discontinuous characteristics of cracks and simulate various failure behaviors under complicated loading conditions. In this paper, a numerical formulation, which incorporates a sophisticated rigid-plastic interface constitutive model coupling cohesion softening, contact, friction and shear dilatation into the XFEM, is proposed to describe various crack behaviors of concrete material. An effective numerical integration scheme for accurately assembling the contribution to the weak form on both sides of the discontinuity is introduced. The effectiveness of the proposed method has been assessed by simulating several well-known experimental tests. It is concluded that the numerical method can successfully capture the crack paths and accurately predict the fracture behavior of concrete structures. The influence of mode-II parameters on the mixed-mode fracture behavior is further investigated to better determine these parameters.

  16. Statistical methods to evaluate thermoluminescence ionizing radiation dosimetry data

    International Nuclear Information System (INIS)

    Segre, Nadia; Matoso, Erika; Fagundes, Rosane Correa

    2011-01-01

Ionizing radiation levels, evaluated through the exposure of CaF2:Dy thermoluminescence dosimeters (TLD-200), have been monitored at Centro Experimental Aramar (CEA), located at Ipero in Sao Paulo state, Brazil, since 1991, resulting in a large number of measurements by 2009 (more than 2,000). The volume of data, together with the dispersion inherent in any measurement process, reinforces the need for statistical tools to evaluate the results, a procedure also imposed by the Brazilian Standard CNEN-NN-3.01/PR-3.01-008, which regulates radiometric environmental monitoring. Thermoluminescence ionizing radiation dosimetry data are statistically compared in order to evaluate the potential environmental impact of CEA's activities. The statistical tools discussed in this work are box plots, control charts and analysis of variance. (author)
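
    Of the three tools named above, analysis of variance is the easiest to sketch. The following toy example (readings hypothetical) asks whether the mean dose differs across monitoring sites:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Hypothetical quarterly TLD dose readings (mSv) from three monitoring sites.
site_a = rng.normal(0.50, 0.05, 12)
site_b = rng.normal(0.52, 0.05, 12)
site_c = rng.normal(0.70, 0.05, 12)

# One-way analysis of variance: do the site means differ?
f, p = stats.f_oneway(site_a, site_b, site_c)
print(f"F = {f:.2f}, p = {p:.4g}")
```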

  17. TECHNIQUE OF THE STATISTICAL ANALYSIS OF INVESTMENT APPEAL OF THE REGION

    Directory of Open Access Journals (Sweden)

    А. А. Vershinina

    2014-01-01

Full Text Available This article presents a technique for the statistical analysis of a region's investment appeal with respect to foreign direct investment. The technique is defined, its stages are described, and the relevant mathematical-statistical tools are considered.

  18. Two statistics for evaluating parameter identifiability and error reduction

    Science.gov (United States)

    Doherty, John; Hunt, Randall J.

    2009-01-01

Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
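
    A minimal numerical sketch of the identifiability statistic, assuming a given weighted sensitivity matrix and a chosen solution-space dimension (both hypothetical here), is:

```python
import numpy as np

def identifiability(J, n_sv):
    """Parameter identifiability from the weighted sensitivity matrix J
    (rows = observations, columns = parameters): the direction cosine
    between each parameter's unit vector and its projection onto the
    calibration solution space spanned by the first n_sv right singular
    vectors."""
    _, _, Vt = np.linalg.svd(J, full_matrices=False)
    V1 = Vt[:n_sv]                     # basis of the solution space
    return np.sqrt((V1 ** 2).sum(axis=0))

# Hypothetical sensitivities of 4 observations to 3 parameters; the third
# parameter is barely sensed, so its identifiability is low.
J = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.1, 0.0],
              [0.9, 1.0, 0.0],
              [1.0, 0.9, 0.1]])
print(identifiability(J, n_sv=2))
```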

  19. The Use of Social Media for Communication In Official Statistics at European Level

    Directory of Open Access Journals (Sweden)

    Ionela-Roxana GLĂVAN

    2016-12-01

Full Text Available Social media tools are widespread in web communication and are gaining popularity in the communication process between public institutions and citizens. This study analyses how social media are used by Official Statistical Institutes to interact with citizens and disseminate information. A linear regression is performed to examine which social media platform (Twitter or Facebook) is the more effective tool in the communication process in the official statistics area. Our study suggests that Twitter is a more powerful tool than Facebook for enhancing the relationship between official statistics and citizens, in line with several other studies. Next, we analysed the characteristics of the Twitter network discussing "official statistics" using NodeXL, which revealed the unexploited potential of this network for official statistical agencies.

  20. The Media as an Invaluable Tool for Informal Earth System Science Education

    Science.gov (United States)

    James, E.; Gautier, C.

    2001-12-01

    One of the most widely utilized avenues for educating the general public about the Earth's environment is the media, be it print, radio or broadcast. Accurate and effective communication of issues in Earth System Science (ESS), however, is significantly hindered by the public's relative scientific illiteracy. Discussion of ESS concepts requires the laying down of a foundation of complex scientific information, which must first be conveyed to an incognizant audience before any strata of sophisticated social context can be appropriately considered. Despite such a substantial obstacle to be negotiated, the environmental journalist is afforded the unique opportunity of providing a broad-reaching informal scientific education to a largely scientifically uninformed population base. This paper will review the tools used by various environmental journalists to address ESS issues and consider how successful each of these approaches has been at conveying complex scientific messages to a general audience lacking sufficient scientific sophistication. Different kinds of media materials used to this effect will be analyzed for their ideas and concepts conveyed, as well as their effectiveness in reaching the public at large.

  1. A flexible statistics web processing service--added value for information systems for experiment data.

    Science.gov (United States)

    Heimann, Dennis; Nieschulze, Jens; König-Ries, Birgitta

    2010-04-20

Data management in the life sciences has evolved from simple storage of data to complex information systems providing additional functionality like analysis and visualization capabilities, demanding the integration of statistical tools. In many cases the statistical tools used are hard-coded within the system. That makes integration, substitution, or extension of tools expensive, because all changes have to be made in program code. Other systems use generic solutions for tool integration, but adapting them to another system usually requires rather extensive work. This paper shows a way to provide statistical functionality through a statistics web service, which can be easily integrated in any information system and set up using XML configuration files. The statistical functionality is extendable by simply adding the description of a new application to a configuration file. The service architecture, the data exchange process between client and service, and the addition of analysis applications to the underlying service provider are described. Furthermore, a practical example demonstrates the functionality of the service.

  2. Seismicity map tools for earthquake studies

    Science.gov (United States)

    Boucouvalas, Anthony; Kaskebes, Athanasios; Tselikas, Nikos

    2014-05-01

We report on the development of a new online set of tools for use within Google Maps, for earthquake research. We demonstrate this server-based, online platform (developed with PHP, Javascript, MySQL) and the new tools using a database system with earthquake data. The platform allows us to carry out statistical and deterministic analysis on earthquake data using Google Maps and to plot various seismicity graphs. The toolbox has been extended to draw on the map line segments, multiple straight lines horizontally and vertically, as well as multiple circles, including geodesic lines. The application is demonstrated using localized seismic data from the geographic region of Greece as well as other global earthquake data. The application also offers regional segmentation (NxN), which allows the study of earthquake clustering and of earthquake cluster shift within the segments in space. The platform offers many filters, such as plotting selected magnitude ranges or time periods. The plotting facility allows statistically based plots such as cumulative earthquake magnitude plots and earthquake magnitude histograms, calculation of 'b', etc. What is novel for the platform is the additional deterministic tools. Using the newly developed horizontal and vertical line and circle tools we have studied the spatial distribution trends of many earthquakes and we here show for the first time the link between Fibonacci Numbers and the spatiotemporal location of some earthquakes. The new tools are valuable for examining and visualizing trends in earthquake research, as they allow calculation of statistics as well as deterministic precursors. We plan to show many new results based on our newly developed platform.
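
    The 'b' referred to above is the Gutenberg-Richter b-value. Its standard maximum-likelihood (Aki) estimate can be sketched as follows, on a simulated catalogue rather than real data:

```python
import numpy as np

def b_value(magnitudes, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness threshold m_min."""
    m = np.asarray(magnitudes)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Simulated catalogue: magnitude excesses above M3 are exponential with
# rate b*ln(10), here with a true b of 1.
rng = np.random.default_rng(3)
mags = 3.0 + rng.exponential(1.0 / np.log(10.0), 5000)
print(f"b = {b_value(mags, 3.0):.2f}")  # ~1.0
```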

  3. Condenser: a statistical aggregation tool for multi-sample quantitative proteomic data from Matrix Science Mascot Distiller™.

    Science.gov (United States)

    Knudsen, Anders Dahl; Bennike, Tue; Kjeldal, Henrik; Birkelund, Svend; Otzen, Daniel Erik; Stensballe, Allan

    2014-05-30

    We describe Condenser, a freely available, comprehensive open-source tool for merging multidimensional quantitative proteomics data from the Matrix Science Mascot Distiller Quantitation Toolbox into a common format ready for subsequent bioinformatic analysis. A number of different relative quantitation technologies, such as metabolic (15)N and amino acid stable isotope incorporation, label-free and chemical-label quantitation are supported. The program features multiple options for curative filtering of the quantified peptides, allowing the user to choose data quality thresholds appropriate for the current dataset, and ensure the quality of the calculated relative protein abundances. Condenser also features optional global normalization, peptide outlier removal, multiple testing and calculation of t-test statistics for highlighting and evaluating proteins with significantly altered relative protein abundances. Condenser provides an attractive addition to the gold-standard quantitative workflow of Mascot Distiller, allowing easy handling of larger multi-dimensional experiments. Source code, binaries, test data set and documentation are available at http://condenser.googlecode.com/. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Statistical Physics in the Era of Big Data

    Science.gov (United States)

    Wang, Dashun

    2013-01-01

With the wealth of data provided by a wide range of high-throughput measurement tools and technologies, statistical physics of complex systems is entering a new phase, impacting in a meaningful fashion a wide range of fields, from cell biology to computer science to economics. In this dissertation, by applying tools and techniques developed in…

  5. PathMAPA: a tool for displaying gene expression and performing statistical tests on metabolic pathways at multiple levels for Arabidopsis

    Directory of Open Access Journals (Sweden)

    Ma Ligeng

    2003-11-01

Full Text Available Abstract Background To date, many genomic and pathway-related tools and databases have been developed to analyze microarray data. In published web-based applications to date, however, complex pathways have been displayed with static image files that may not be up-to-date or are time-consuming to rebuild. In addition, gene expression analyses focus on individual probes and genes with little or no consideration of pathways. These approaches reveal little information about pathways that are key to a full understanding of the building blocks of biological systems. Therefore, there is a need to provide useful tools that can generate pathways without manually building images and allow gene expression data to be integrated and analyzed at pathway levels for such experimental organisms as Arabidopsis. Results We have developed PathMAPA, a web-based application written in Java that can be easily accessed over the Internet. An Oracle database is used to store, query, and manipulate the large amounts of data that are involved. PathMAPA allows its users to (i) upload and populate microarray data into a database; (ii) integrate gene expression with enzymes of the pathways; (iii) generate pathway diagrams without building image files manually; (iv) visualize gene expressions for each pathway at enzyme, locus, and probe levels; and (v) perform statistical tests at pathway, enzyme and gene levels. PathMAPA can be used to examine Arabidopsis thaliana gene expression patterns associated with metabolic pathways. Conclusion PathMAPA provides two unique features for the gene expression analysis of Arabidopsis thaliana: (i) automatic generation of pathways associated with gene expression and (ii) statistical tests at pathway level. The first feature allows for the periodical updating of genomic data for pathways, while the second feature can provide insight into how treatments affect relevant pathways for the selected experiment(s).

  6. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement into a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism in the steam condensing rate, fission product washout and iodine plateout assumptions used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I 2 ) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated

  7. Sophisticated Communication in the Brazilian Torrent Frog Hylodes japi.

    Science.gov (United States)

    de Sá, Fábio P; Zina, Juliana; Haddad, Célio F B

    2016-01-01

    Intraspecific communication in frogs plays an important role in the recognition of conspecifics in general and of potential rivals or mates in particular and therefore with relevant consequences for pre-zygotic reproductive isolation. We investigate intraspecific communication in Hylodes japi, an endemic Brazilian torrent frog with territorial males and an elaborate courtship behavior. We describe its repertoire of acoustic signals as well as one of the most complex repertoires of visual displays known in anurans, including five new visual displays. Previously unknown in frogs, we also describe a bimodal inter-sexual communication system where the female stimulates the male to emit a courtship call. As another novelty for frogs, we show that in addition to choosing which limb to signal with, males choose which of their two vocal sacs will be used for visual signaling. We explain how and why this is accomplished. Control of inflation also provides additional evidence that vocal sac movement and color must be important for visual communication, even while producing sound. Through the current knowledge on visual signaling in Neotropical torrent frogs (i.e. hylodids), we discuss and highlight the behavioral diversity in the family Hylodidae. Our findings indicate that communication in species of Hylodes is undoubtedly more sophisticated than we expected and that visual communication in anurans is more widespread than previously thought. This is especially true in tropical regions, most likely due to the higher number of species and phylogenetic groups and/or to ecological factors, such as higher microhabitat diversity.

  8. Tools and procedures for visualization of proteins and other biomolecules.

    Science.gov (United States)

    Pan, Lurong; Aller, Stephen G

    2015-04-01

Proteins, peptides, and nucleic acids are biomolecules that drive biological processes in living organisms. An enormous amount of structural data for a large number of these biomolecules has been described with atomic precision in the form of structural "snapshots" that are freely available in public repositories. These snapshots can help explain how the biomolecules function, the nature of interactions between multi-molecular complexes, and even how small-molecule drugs can modulate the biomolecules for clinical benefits. Furthermore, these structural snapshots serve as inputs for sophisticated computer simulations to turn the biomolecules into moving, "breathing" molecular machines for understanding their dynamic properties in real-time computer simulations. In order for the researcher to take advantage of such a wealth of structural data, it is necessary to gain competency in the use of computer molecular visualization tools for exploring the structures and visualizing three-dimensional spatial representations. Here, we present protocols for using two common visualization tools--the Web-based Jmol and the stand-alone PyMOL package--as well as a few examples of other popular tools. Copyright © 2015 John Wiley & Sons, Inc.
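
    Of the two tools, PyMOL is scriptable from Python. A minimal sketch (assuming a local PyMOL installation; run with "pymol -cq script.py" or paste into the PyMOL console) that fetches a structure and renders an image is:

```python
from pymol import cmd

cmd.fetch("1ubq")            # download ubiquitin from the PDB
cmd.show_as("cartoon")       # cartoon representation of the backbone
cmd.spectrum("b")            # color by B-factor
cmd.orient()                 # center and orient the view
cmd.png("ubiquitin.png", width=1200, height=900, dpi=300, ray=1)
```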

  9. A Career in Statistics Beyond the Numbers

    CERN Document Server

    Hahn, Gerald J

    2012-01-01

A valuable guide to a successful career as a statistician. A Career in Statistics: Beyond the Numbers prepares readers for careers in statistics by emphasizing essential concepts and practices beyond the technical tools provided in standard courses and texts. This insider's guide from internationally recognized applied statisticians helps readers decide whether a career in statistics is right for them, provides hands-on guidance on how to prepare for such a career, and shows how to succeed on the job. The book provides non-technical guidance for a successful career. The authors' extensive industry experience informs the practical advice offered throughout.

  10. Statistical Decision Support Tools for System-Oriented Runway Management, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The feasibility of developing a statistical decision support system for traffic flow management in the terminal area and runway load balancing was demonstrated in...

  11. Software tools to aid Pascal and Ada program design

    Energy Technology Data Exchange (ETDEWEB)

    Jankowitz, H.T.

    1987-01-01

This thesis describes a software tool which analyses the style and structure of Pascal and Ada programs by ensuring that some minimum design requirements are fulfilled. The tool is used in much the same way as a compiler is used to teach students the syntax of a language, only in this case issues related to the design and structure of the program are of paramount importance. The tool operates by analyzing the design and structure of a syntactically correct program, automatically generating a report detailing changes that need to be made in order to ensure that the program is structurally sound. The author discusses how the model gradually evolved from a plagiarism detection system which extracted several measurable characteristics in a program to a model that analyzed the style of Pascal programs. In order to incorporate more sophisticated concepts like data abstraction, information hiding and data protection, this model was then extended to analyze the composition of Ada programs. The Ada model takes full advantage of facilities offered in the language, and by using this tool the standard and quality of written programs are raised whilst the fundamental principles of program design are grasped through a process of self-tuition.

  13. The GenABEL Project for statistical genomics [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Lennart C. Karssen

    2016-05-01

    Full Text Available Development of free/libre open source software is usually done by a community of people with an interest in the tool. For scientific software, however, this is less often the case. Most scientific software is written by only a few authors, often a student working on a thesis. Once the paper describing the tool has been published, the tool is no longer developed further and is left to its own devices. Here we describe the broad, multidisciplinary community we formed around a set of tools for statistical genomics. The GenABEL project for statistical omics actively promotes open interdisciplinary development of statistical methodology and its implementation in efficient and user-friendly software under an open source licence. The software tools developed within the project collectively make up the GenABEL suite, which currently consists of eleven tools. The open framework of the project actively encourages involvement of the community in all stages, from the formulation of methodological ideas to the application of software to specific data sets. A web forum is used to channel user questions and discussions, further promoting the use of the GenABEL suite. Developer discussions take place on a dedicated mailing list, and development is further supported by robust development practices, including the use of public version control, code review and continuous integration. Use of this open science model attracts contributions from users and developers outside the “core team”, facilitating agile statistical omics methodology development and fast dissemination.

  14. Semantic Web applications and tools for the life sciences: SWAT4LS 2010.

    Science.gov (United States)

    Burger, Albert; Paschke, Adrian; Romano, Paolo; Marshall, M Scott; Splendiani, Andrea

    2012-01-25

    As Semantic Web technologies mature and new releases of key elements, such as SPARQL 1.1 and OWL 2.0, become available, the Life Sciences continue to push the boundaries of these technologies with ever more sophisticated tools and applications. Unsurprisingly, therefore, interest in the SWAT4LS (Semantic Web Applications and Tools for the Life Sciences) activities has remained high, as was evident during the third international SWAT4LS workshop held in Berlin in December 2010. Contributors to this workshop were invited to submit extended versions of their papers, the best of which are now made available in this special supplement of BMC Bioinformatics. The papers reflect the wide range of work in this area, covering the storage and querying of Life Sciences data in RDF triple stores, tools for the development of biomedical ontologies, and the semantics-based integration of Life Sciences as well as clinical data.

  15. Theoretical physics 8 statistical physics

    CERN Document Server

    Nolting, Wolfgang

    2018-01-01

    This textbook offers a clear and comprehensive introduction to statistical physics, one of the core components of advanced undergraduate physics courses. It follows on naturally from the previous volumes in this series, using methods of probability theory and statistics to solve physical problems. The first part of the book gives a detailed overview on classical statistical physics and introduces all mathematical tools needed. The second part of the book covers topics related to quantized states, gives a thorough introduction to quantum statistics, followed by a concise treatment of quantum gases. Ideally suited to undergraduate students with some grounding in quantum mechanics, the book is enhanced throughout with learning features such as boxed inserts and chapter summaries, with key mathematical derivations highlighted to aid understanding. The text is supported by numerous worked examples and end of chapter problem sets. About the Theoretical Physics series Translated from the renowned and highly successf...

  16. Statistics Hacks Tips & Tools for Measuring the World and Beating the Odds

    CERN Document Server

    Frey, Bruce

    2008-01-01

    Want to calculate the probability that an event will happen? Be able to spot fake data? Prove beyond doubt whether one thing causes another? Or learn to be a better gambler? You can do that and much more with 75 practical and fun hacks packed into Statistics Hacks. These cool tips, tricks, and mind-boggling solutions from the world of statistics, measurement, and research methods will not only amaze and entertain you, but will give you an advantage in several real-world situations-including business.

  17. wft4galaxy: a workflow testing tool for galaxy.

    Science.gov (United States)

    Piras, Marco Enrico; Pireddu, Luca; Zanetti, Gianluigi

    2017-12-01

    Workflow managers for scientific analysis provide a high-level programming platform facilitating standardization, automation, collaboration and access to sophisticated computing resources. The Galaxy workflow manager provides a prime example of this type of platform. As compositions of simpler tools, workflows effectively comprise specialized computer programs implementing often very complex analysis procedures. To date, no simple way to automatically test Galaxy workflows and ensure their correctness has appeared in the literature. With wft4galaxy we offer a tool to bring automated testing to Galaxy workflows, making it feasible to add continuous integration to their development and ensuring that defects are detected promptly. wft4galaxy can be easily installed as a regular Python program or launched directly as a Docker container, with the latter reducing installation effort to a minimum. Available at https://github.com/phnmnl/wft4galaxy under the Academic Free License v3.0. marcoenrico.piras@crs4.it. © The Author 2017. Published by Oxford University Press.

  18. GIS tools for analyzing accidents and road design: A review

    Energy Technology Data Exchange (ETDEWEB)

    Satria, R.

    2016-07-01

    Road accidents, with their injuries and loss of life, are a significant unintended outcome of transportation systems. In recent years, the number of studies about tools for analyzing accidents and road design has increased considerably. Among these tools, Geographical Information Systems (GIS) stand out for their ability to perform complex spatial analyses. However, GIS has sometimes been used only as a geographical database to store and represent data about accidents and road characteristics. It has also been used to represent the results of statistical studies of accidents, but these statistical studies have not been carried out with GIS. Owing to its integrated statistical-analysis capabilities, GIS provides several advantages. First, it allows more careful and accurate data selection, screening and reduction, as well as spatial analysis of the results in pre- and post-processing. Second, GIS allows the development of spatial statistics that rely on geographically referenced data. In this paper, several GIS tools used to model accidents are examined. An understanding of these tools will help the analyst make a better decision about which tool to apply in each particular condition and context. (Author)
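    One of the simplest spatial statistics used in such accident studies is the Clark-Evans nearest-neighbour index, which compares the observed mean distance from each accident to its nearest neighbour against the distance expected under complete spatial randomness; values below 1 suggest clustering (e.g., accident black spots). A minimal sketch in Python, with invented coordinates and study-area size:

      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical accident locations (metres) in a 1 km x 1 km study area.
      pts = rng.uniform(0, 1000, size=(200, 2))
      area = 1000.0 * 1000.0

      # Observed mean nearest-neighbour distance.
      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
      np.fill_diagonal(d, np.inf)
      d_obs = d.min(axis=1).mean()

      # Expected mean distance under complete spatial randomness.
      density = len(pts) / area
      d_exp = 0.5 / np.sqrt(density)

      R = d_obs / d_exp  # R < 1: clustering; R > 1: regularity
      print(f"Clark-Evans index R = {R:.2f}")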

  19. EU-Korea FTA and Its Impact on V4 Economies. A Comparative Analysis of Trade Sophistication and Intra-Industry Trade

    Directory of Open Access Journals (Sweden)

    Michalski Bartosz

    2018-03-01

    Full Text Available This paper investigates selected short- and mid-term effects in trade in goods between the Visegrad countries (V4: the Czech Republic, Hungary, Poland and the Slovak Republic) and the Republic of Korea under the framework of the Free Trade Agreement between the European Union and the Republic of Korea. This Agreement is described in the “Trade for All” (2015: 9) strategy as the most ambitious trade deal ever implemented by the EU. The primary purpose of our analysis is to identify, compare, and evaluate the evolution of the technological sophistication of bilateral exports and imports. Another dimension of the paper concentrates on developments within intra-industry trade. Moreover, these objectives are approached taking into account the context of South Korean direct investment inflows to the V4. The evaluation of technological sophistication is based on UNCTAD’s methodology, while the intensity of intra-industry trade is measured by the GL-index and the identification of its subcategories (horizontal and vertical trade). The analysis covers the timespan 2001–2015. The novelty of the paper lies in the fact that the study of South Korean-V4 trade relations has not so far been carried out from this perspective. Thus this paper investigates interesting phenomena identified in the trade between the Republic of Korea (ROK) and the V4 economies. The main findings imply an impact of South Korean direct investments on trade. This is represented by the trade deficit of the V4 with the ROK and the structure of bilateral trade in terms of its technological sophistication. South Korean investments might also have had positive consequences for the evolution of IIT, particularly in the machinery sector. The political interpretation indicates that they may strengthen common threats associated with the middle-income trap, particularly the technological gap and the emphasis placed on lower costs of production.
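    The GL-index used above is the Grubel-Lloyd measure of intra-industry trade: for a product group with exports X and imports M, GL = 1 - |X - M| / (X + M), ranging from 0 (pure one-way trade) to 1 (fully two-way trade). A minimal sketch in Python; the sector names and trade values are invented for illustration:

      # Grubel-Lloyd index of intra-industry trade.
      def gl_index(exports: float, imports: float) -> float:
          """GL = 1 - |X - M| / (X + M); 0 = one-way trade, 1 = fully two-way."""
          return 1.0 - abs(exports - imports) / (exports + imports)

      # Invented V4-ROK trade values (million EUR).
      sectors = {"machinery": (820.0, 2400.0), "vehicles": (310.0, 290.0)}
      for name, (x, m) in sectors.items():
          print(f"{name}: GL = {gl_index(x, m):.2f}")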

  20. The Emergence of Contextual Social Psychology.

    Science.gov (United States)

    Pettigrew, Thomas F

    2018-07-01

    Social psychology experiences recurring so-called "crises." This article maintains that these episodes actually mark advances in the discipline; these "crises" have enhanced relevance and led to greater methodological and statistical sophistication. New statistical tools have allowed social psychologists to begin to achieve a major goal: placing psychological phenomena in their larger social contexts. This growing trend is illustrated with numerous recent studies; they demonstrate how cultures and social norms moderate basic psychological processes. Contextual social psychology is finally emerging.
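    A typical example of the statistical tools behind this contextual turn is the multilevel (mixed-effects) model, which lets a psychological outcome vary across social contexts such as cultures or countries. A minimal sketch with simulated data using statsmodels; all variable names and effect sizes are invented:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      # Simulate 30 respondents in each of 20 hypothetical cultural contexts.
      groups = np.repeat(np.arange(20), 30)
      context_effect = rng.normal(0, 0.5, 20)[groups]  # context-level variation
      x = rng.normal(size=groups.size)                 # individual-level predictor
      y = 0.3 * x + context_effect + rng.normal(size=groups.size)
      df = pd.DataFrame({"y": y, "x": x, "context": groups})

      # Random-intercept model: the baseline outcome shifts with context.
      model = smf.mixedlm("y ~ x", df, groups=df["context"]).fit()
      print(model.summary())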

  1. Google Advertising Tools Cashing in with AdSense and AdWords

    CERN Document Server

    Davis, Harold

    2010-01-01

    With this book, you'll learn how to take full advantage of Google AdWords and AdSense, the sophisticated online advertising tools used by thousands of large and small businesses. This new edition provides a substantially updated guide to advertising on the Web, including how it works in general, and how Google's advertising programs in particular help you make money. You'll find everything you need to work with AdWords, which lets you generate text ads to accompany specific search term results, and AdSense, which automatically delivers precisely targeted text and image ads to your website.

  2. Statistical analysis applied to safety culture self-assessment

    International Nuclear Information System (INIS)

    Macedo Soares, P.P.

    2002-01-01

    Interviews and opinion surveys are instruments used to assess the safety culture in an organization as part of the Safety Culture Enhancement Programme. Specific statistical tools are used to analyse the survey results. This paper presents an example of an opinion survey with the corresponding application of the statistical analysis and the conclusions obtained. Survey validation, frequency statistics, the Kolmogorov-Smirnov non-parametric test, Student's t-test and ANOVA mean-comparison tests, and the LSD post-hoc multiple comparison test are discussed. (author)
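    The same battery of tests is readily reproduced with open-source software; a minimal sketch with simulated survey scores using scipy (the department labels and score distributions are invented):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      # Simulated safety-culture scores for three hypothetical departments.
      ops, maint, admin = (rng.normal(mu, 0.8, 50) for mu in (3.9, 3.5, 3.7))

      # Kolmogorov-Smirnov test against a fitted normal distribution.
      print(stats.kstest(ops, "norm", args=(ops.mean(), ops.std(ddof=1))))

      # Student's t-test comparing two department means.
      print(stats.ttest_ind(ops, maint))

      # One-way ANOVA across all three departments.
      print(stats.f_oneway(ops, maint, admin))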

  3. HistFitter software framework for statistical data analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M. [CERN, Geneva (Switzerland); Besjes, G.J. [Radboud University Nijmegen, Nijmegen (Netherlands); Nikhef, Amsterdam (Netherlands); Cote, D. [University of Texas, Arlington (United States); Koutsman, A. [TRIUMF, Vancouver (Canada); Lorenz, J. [Ludwig-Maximilians-Universitaet Muenchen, Munich (Germany); Excellence Cluster Universe, Garching (Germany); Short, D. [University of Oxford, Oxford (United Kingdom)

    2015-04-15

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)
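    The statistical core of such a framework is a likelihood built from Poisson counts in signal and control regions, with the background normalization shared between them. The sketch below is not HistFitter's API; it is a minimal illustration of that control-region idea in Python with scipy, using invented counts:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import poisson

      # Invented observed counts: a background-dominated control region (CR)
      # constrains the background b expected in the signal region (SR).
      n_sr, n_cr, tau = 12, 100, 10.0  # tau: CR/SR background ratio

      def nll(params):
          # Negative log-likelihood for signal strength s and SR background b.
          s, b = params
          if s < 0 or b <= 0:
              return np.inf
          return -(poisson.logpmf(n_sr, s + b) + poisson.logpmf(n_cr, tau * b))

      fit = minimize(nll, x0=[1.0, n_cr / tau], method="Nelder-Mead")
      s_hat, b_hat = fit.x
      print(f"fitted signal = {s_hat:.2f}, fitted background = {b_hat:.2f}")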

  4. HistFitter software framework for statistical data analysis

    International Nuclear Information System (INIS)

    Baak, M.; Besjes, G.J.; Cote, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fit to data and interpreted with statistical tests. Internally HistFitter uses the statistics packages RooStats and HistFactory. A key innovation of HistFitter is its design, which is rooted in analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with multiple models at once that describe the data, HistFitter introduces an additional level of abstraction that allows for easy bookkeeping, manipulation and testing of large collections of signal hypotheses. Finally, HistFitter provides a collection of tools to present results with publication quality style through a simple command-line interface. (orig.)

  5. Environmental restoration and statistics: Issues and needs

    International Nuclear Information System (INIS)

    Gilbert, R.O.

    1991-10-01

    Statisticians have a vital role to play in environmental restoration (ER) activities. One facet of that role is to point out where additional work is needed to develop statistical sampling plans and data analyses that meet the needs of ER. This paper is an attempt to show where statistics fits into the ER process. The statistician, as a member of the ER planning team, works collaboratively with the team to develop the site characterization sampling design, so that data of the quality and quantity required by the specified data quality objectives (DQOs) are obtained. At the same time, the statistician works with the rest of the planning team to design and implement, when appropriate, the observational approach to streamline the ER process and reduce costs. The statistician will also provide the expertise needed to select or develop appropriate tools for statistical analysis that are suited for problems that are common to waste-site data. These data problems include highly heterogeneous waste forms, large variability in concentrations over space, correlated data, data that do not have a normal (Gaussian) distribution, and measurements below detection limits. Other problems include environmental transport and risk models that yield highly uncertain predictions, and the need to effectively communicate to the public highly technical information, such as sampling plans, site characterization data, statistical analysis results, and risk estimates. Even though some statistical analysis methods are available "off the shelf" for use in ER, these problems require the development of additional statistical tools, as discussed in this paper. 29 refs
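    As one concrete example of the tool development called for here, measurements below a detection limit can be handled by censored maximum likelihood rather than crude substitution. A minimal sketch for lognormal data in Python with scipy; the concentrations and detection limit are invented:

      import numpy as np
      from scipy import stats
      from scipy.optimize import minimize

      # Invented concentrations; three further samples were reported as "< 0.5".
      obs = np.array([2.1, 3.4, 0.9, 5.6, 1.7, 4.2, 2.8])
      dl, n_censored = 0.5, 3

      def nll(params):
          # Density terms for detects, CDF terms for non-detects (lognormal).
          mu, sigma = params
          if sigma <= 0:
              return np.inf
          ll = stats.norm.logpdf(np.log(obs), mu, sigma).sum() - np.log(obs).sum()
          ll += n_censored * stats.norm.logcdf(np.log(dl), mu, sigma)
          return -ll

      fit = minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead")
      print("fitted lognormal (mu, sigma):", fit.x)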

  6. Use of library statistics to support library and advisory services and ...

    African Journals Online (AJOL)

    Statistical information is a vital tool for the management and development of organizations. Keeping statistics of activities is basic to the survival and progress of a library and enables the library to measure its performance periodically. The National Library of Nigeria (NLN) places a high premium on the library statistics that it ...

  7. An introduction to statistical computing a simulation-based approach

    CERN Document Server

    Voss, Jochen

    2014-01-01

    A comprehensive introduction to sampling-based methods in statistical computing The use of computers in mathematics and statistics has opened up a wide range of techniques for studying otherwise intractable problems.  Sampling-based simulation techniques are now an invaluable tool for exploring statistical models.  This book gives a comprehensive introduction to the exciting area of sampling-based methods. An Introduction to Statistical Computing introduces the classical topics of random number generation and Monte Carlo methods.  It also includes some advanced met
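    The flavor of these sampling-based methods is easy to convey; a minimal Monte Carlo sketch in Python estimating an integral together with its standard error:

      import numpy as np

      rng = np.random.default_rng(7)

      # Monte Carlo estimate of I = integral of exp(-x^2) over [0, 1].
      n = 100_000
      x = rng.uniform(0.0, 1.0, n)
      samples = np.exp(-x ** 2)

      estimate = samples.mean()
      std_error = samples.std(ddof=1) / np.sqrt(n)
      print(f"I ~= {estimate:.5f} +/- {std_error:.5f}")  # true value ~0.74682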

  8. The use and misuse of statistical methodologies in pharmacology research.

    Science.gov (United States)

    Marino, Michael J

    2014-01-01

    Descriptive, exploratory, and inferential statistics are necessary components of hypothesis-driven biomedical research. Despite the ubiquitous need for these tools, the emphasis on statistical methods in pharmacology has become dominated by inferential methods, often chosen more for the availability of user-friendly software than for any understanding of the data set or the critical assumptions of the statistical tests. Such frank misuse of statistical methodology and the quest to reach the mystical α < 0.05 often stem from limited statistical training. Perhaps more critically, a poor understanding of statistical tools limits the conclusions that may be drawn from a study by divorcing the investigator from their own data. The net result is a decrease in quality and confidence in research findings, fueling recent controversies over the reproducibility of high-profile findings and effects that appear to diminish over time. The recent development of "omics" approaches leading to the production of massive, higher-dimensional data sets has amplified these issues, making it clear that new approaches are needed to appropriately and effectively mine this type of data. Unfortunately, statistical education in the field has not kept pace. This commentary provides a foundation for an intuitive understanding of statistics that fosters an exploratory approach and an appreciation for the assumptions of various statistical tests, in the hope of increasing the correct use of statistics, the application of exploratory data analysis, and the use of statistical study design, with the goal of increasing reproducibility and confidence in the literature. Copyright © 2013. Published by Elsevier Inc.

  9. Statistical aspects of food safety sampling

    NARCIS (Netherlands)

    Jongenburger, I.; Besten, den H.M.W.; Zwietering, M.H.

    2015-01-01

    In food safety management, sampling is an important tool for verifying control. Sampling by nature is a stochastic process. However, uncertainty regarding results is made even greater by the uneven distribution of microorganisms in a batch of food. This article reviews statistical aspects of
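    A standard statistical object in such sampling plans is the operating characteristic (OC) curve: the probability of accepting a batch as a function of its true contamination rate. A minimal sketch in Python for a two-class attributes plan; the plan parameters n and c are invented for illustration:

      from scipy.stats import binom

      # Two-class attributes plan: test n units, accept if at most c are positive.
      n, c = 20, 1

      # OC curve: P(accept) as a function of the true proportion contaminated p.
      for p in (0.01, 0.05, 0.10, 0.20):
          print(f"contamination {p:.0%}: P(accept) = {binom.cdf(c, n, p):.3f}")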

  10. Analytical and numerical tools for vacuum systems

    CERN Document Server

    Kersevan, R

    2007-01-01

    Modern particle accelerators have reached a level of sophistication which requires a thorough analysis of all their sub-systems. Among the latter, the vacuum system is often a major contributor to the operating performance of a particle accelerator. The vacuum engineer nowadays has a large choice of computational schemes and tools for the correct analysis, design, and engineering of the vacuum system. This paper is a review of the different types of algorithms and methodologies which have been developed and employed in the field since the birth of vacuum technology. The different levels of detail, from simple back-of-the-envelope calculations to more complex numerical analysis, are discussed by means of comparisons. The domain of applicability of each method is discussed, together with its pros and cons.

  11. Modern applied statistics with s-plus

    CERN Document Server

    Venables, W N

    1997-01-01

    S-PLUS is a powerful environment for the statistical and graphical analysis of data. It provides the tools to implement many statistical ideas which have been made possible by the widespread availability of workstations having good graphics and computational capabilities. This book is a guide to using S-PLUS to perform statistical analyses and provides both an introduction to the use of S-PLUS and a course in modern statistical methods. S-PLUS is available for both Windows and UNIX workstations, and both versions are covered in depth. The aim of the book is to show how to use S-PLUS as a powerful and graphical system. Readers are assumed to have a basic grounding in statistics, and so the book is intended for would-be users of S-PLUS, and both students and researchers using statistics. Throughout, the emphasis is on presenting practical problems and full analyses of real data sets. Many of the methods discussed are state-of-the-art approaches to topics such as linear and non-linear regression models, robust a...

  12. Dead time of dual detector tools

    International Nuclear Information System (INIS)

    Czubek, J.A.

    1994-01-01

    A theory of the dead time for a dual detector nuclear tool with analogue signal transmission is given in the paper. At least two different dead times exist in such tools: the dead time of the detectors (assumed identical to each other for the final computation) and the dead time of the signal transmission set-up. A method of two radioactive sources is proposed to measure these two different dead times. When the times used for measuring every count rate needed in the dead time determination algorithm are taken into account, the statistical accuracy of the dead time determination can be obtained. These estimations are performed by the computer simulation method. Two codes have been designed: DEADT2D (DEAD Time for 2 Detectors) and DEADT2DS (DEAD Time for 2 Detectors with Statistics). The first code calculates the dead time based on the recorded count rates only; the second does a 'simulation job' and provides information on the statistical distribution of the observed dead times. The theory and the numerical solutions were checked both by simulation calculations and by experiments performed with the ODSN-102 tool (the experiments were performed by T. Zorski). (Author)
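    For reference, the textbook two-source estimate of a single non-paralyzable dead time, neglecting background, is tau ~ (n1 + n2 - n12) / (2 n1 n2), where n1, n2 and n12 are the count rates with source 1 alone, source 2 alone, and both sources together. A minimal sketch in Python with invented count rates (this is the first-order textbook formula, not the DEADT2D algorithm):

      # Measured count rates (counts/s): source 1, source 2, both together.
      n1, n2, n12 = 12000.0, 11000.0, 22300.0  # invented values

      tau = (n1 + n2 - n12) / (2.0 * n1 * n2)  # seconds
      print(f"estimated dead time = {tau * 1e6:.2f} microseconds")

      # Non-paralyzable model: recover the true rate N from the observed rate n.
      def true_rate(n_obs: float, tau: float) -> float:
          return n_obs / (1.0 - n_obs * tau)

      print(f"corrected rate for source 1 = {true_rate(n1, tau):.0f} counts/s")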

  13. Roman sophisticated surface modification methods to manufacture silver counterfeited coins

    Science.gov (United States)

    Ingo, G. M.; Riccucci, C.; Faraldi, F.; Pascucci, M.; Messina, E.; Fierro, G.; Di Carlo, G.

    2017-11-01

    By means of the combined use of X-ray photoelectron spectroscopy (XPS), optical microscopy (OM) and scanning electron microscopy (SEM) coupled with energy dispersive X-ray spectroscopy (EDS), the surface and subsurface chemical and metallurgical features of silver counterfeited Roman Republican coins are investigated to decipher some aspects of the manufacturing methods and to evaluate the technological ability of the Roman metallurgists to produce thin silver coatings. The results demonstrate that over 2000 years ago important advances in the technology of thin layer deposition on metal substrates were attained by the Romans. The ancient metallurgists produced counterfeited coins by combining sophisticated micro-plating methods and tailored surface chemical modification based on the mercury-silvering process. The results reveal that the Romans were able systematically to manipulate alloys chemically and metallurgically at the micro scale to produce adherent precious metal layers with a uniform thickness of up to a few micrometers. The results converge to reveal that the production of forgeries was aimed primarily at saving as much precious metal as possible, allowing profitable large-scale production at lower cost. The driving forces could have been a lack of precious metals, an unexpected need to circulate coins for trade, and/or a combination of social, political and economic factors that required a change in the money supply. Finally, some useful information on corrosion products has been obtained to help select materials and methods for the conservation of these important witnesses of technology and economy.

  14. Stochastic tools in turbulence

    CERN Document Server

    Lumley, John L

    2012-01-01

    Stochastic Tools in Turbulence discusses the available mathematical tools to describe stochastic vector fields to solve problems related to these fields. The book deals with the needs of turbulence in relation to stochastic vector fields, particularly, on three-dimensional aspects, linear problems, and stochastic model building. The text describes probability distributions and densities, including Lebesgue integration, conditional probabilities, conditional expectations, statistical independence, lack of correlation. The book also explains the significance of the moments, the properties of the

  15. RSYST: From nuclear reactor calculations towards a highly sophisticated scientific software integration environment

    International Nuclear Information System (INIS)

    Noack, M.; Seybold, J.; Ruehle, R.

    1996-01-01

    The software environment RSYST was originally used to solve problems of reactor physics. The consideration of advanced scientific simulation requirements and the strict application of modern software design principles led to a system which is perfectly suitable for solving problems in various complex scientific problem domains. Starting with a review of the early days of RSYST, we describe its evolution, driven by the need for a software environment which combines the advantages of a high-performance database system with the capability to integrate sophisticated scientific and technical applications. The RSYST architecture is presented and the data modelling capabilities are described. To demonstrate the powerful possibilities and flexibility of the RSYST environment, we describe a wide range of RSYST applications, e.g., mechanical simulations of multibody systems, which are used in biomechanical research, civil engineering and robotics. In addition, a hypermedia system which is used for scientific and technical training and documentation is presented. (orig.)

  16. Statistical physics of pairwise probability models

    DEFF Research Database (Denmark)

    Roudi, Yasser; Aurell, Erik; Hertz, John

    2009-01-01

    (No Danish abstract available.) Statistical models for describing the probability distribution over the states of biological systems are commonly used for dimensional reduction. Among these models, pairwise models are very attractive in part because they can be fit using a reasonable amount of data: knowledge of the means and correlations between pairs of elements in the system is sufficient. Not surprisingly, then, using pairwise models for studying neural data has been the focus of many studies in recent years. In this paper, we describe how tools from statistical physics can be employed for studying
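    One such statistical-physics tool is the naive mean-field inversion, which estimates pairwise couplings directly from the inverse of the connected correlation matrix, J ~ -C^-1 on the off-diagonal. A minimal sketch in Python on simulated binary data (the data are independent noise, so the inferred couplings should hover near zero):

      import numpy as np

      rng = np.random.default_rng(3)
      # Simulated binary activity of 5 units over 10000 samples (illustrative).
      samples = (rng.random((10000, 5)) < 0.3).astype(float)

      # Connected correlation matrix C_ij = <x_i x_j> - <x_i><x_j>.
      C = np.cov(samples, rowvar=False)

      # Naive mean-field inversion: couplings from the inverse correlations.
      J = -np.linalg.inv(C)
      np.fill_diagonal(J, 0.0)  # only the pairwise couplings are of interest
      print(np.round(J, 3))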

  17. Effect size, confidence intervals and statistical power in psychological research.

    Directory of Open Access Journals (Sweden)

    Téllez A.

    2015-07-01

    Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that researchers use to evaluate hypotheses and make decisions to accept or reject them. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must attend to the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.
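    As a concrete companion to these recommendations, both the effect size and an a priori sample-size calculation take only a few lines; a minimal sketch in Python with simulated data (statsmodels is assumed to be available):

      import numpy as np
      from statsmodels.stats.power import TTestIndPower

      rng = np.random.default_rng(5)
      a, b = rng.normal(0.0, 1.0, 40), rng.normal(0.5, 1.0, 40)

      # Cohen's d with a pooled standard deviation.
      pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
      d = (b.mean() - a.mean()) / pooled_sd
      print(f"Cohen's d = {d:.2f}")

      # Sample size per group for 80% power at alpha = .05, two-sided, d = 0.5.
      n_needed = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                             power=0.8, alternative="two-sided")
      print(f"n per group: {n_needed:.0f}")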

  18. Using Statistical Process Control to Make Data-Based Clinical Decisions.

    Science.gov (United States)

    Pfadt, Al; Wheeler, Donald J.

    1995-01-01

    Statistical process control (SPC), which employs simple statistical tools and problem-solving techniques such as histograms, control charts, flow charts, and Pareto charts to implement continual product improvement procedures, can be incorporated into human service organizations. Examples illustrate use of SPC procedures to analyze behavioral data…

  19. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of a real parameter, large sample properties, and goodness-of-fit tests. It...

  20. Application of Parallel Hierarchical Matrices in Spatial Statistics and Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2018-04-20

    Parallel H-matrices in spatial statistics: (1) motivation: improving the statistical model; (2) tools: hierarchical matrices [Hackbusch 1999]; (3) the Matern covariance function and the joint Gaussian likelihood; (4) identification of unknown parameters via maximization of the Gaussian log-likelihood; (5) implementation with HLIBPro.
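    The core of items (3)-(4), maximizing a joint Gaussian log-likelihood under a Matern covariance, fits in a few lines at small scale; the H-matrix machinery exists precisely because this dense version does not scale. A dense sketch in Python for the Matern nu = 3/2 kernel with simulated locations:

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(11)
      # Simulated observation sites in the unit square.
      pts = rng.uniform(size=(60, 2))
      r = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)

      def matern32(r, ell):
          # Matern covariance, smoothness nu = 3/2, length scale ell.
          a = np.sqrt(3.0) * r / ell
          return (1.0 + a) * np.exp(-a)

      # Draw one field realization with true length scale 0.3.
      K_true = matern32(r, 0.3) + 1e-8 * np.eye(60)
      y = np.linalg.cholesky(K_true) @ rng.normal(size=60)

      def neg_loglik(ell):
          K = matern32(r, ell) + 1e-8 * np.eye(60)
          _, logdet = np.linalg.slogdet(K)
          return 0.5 * (logdet + y @ np.linalg.solve(K, y))

      best = minimize_scalar(neg_loglik, bounds=(0.05, 2.0), method="bounded")
      print(f"maximum-likelihood length scale = {best.x:.3f}")  # true value 0.3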

  1. Statistical process control in nursing research.

    Science.gov (United States)

    Polit, Denise F; Chaboyer, Wendy

    2012-02-01

    In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
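    The workhorse for individual clinical measurements is the XmR (individuals and moving range) chart, whose limits come from the average moving range rather than the overall standard deviation. A minimal sketch in Python; the weekly counts are invented:

      import numpy as np

      # Invented weekly counts of a target behavior before an intervention.
      x = np.array([14, 12, 15, 13, 16, 12, 14, 15, 11, 13, 14, 12])

      center = x.mean()
      mr_bar = np.abs(np.diff(x)).mean()  # average moving range

      # Standard XmR limits: center +/- 2.66 * average moving range.
      ucl, lcl = center + 2.66 * mr_bar, center - 2.66 * mr_bar
      print(f"CL = {center:.1f}, UCL = {ucl:.1f}, LCL = {lcl:.1f}")

      # A post-intervention value outside the limits signals special cause.
      new = 20
      print("special cause" if not lcl <= new <= ucl else "common cause")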

  2. A Statistical Toolkit for Data Analysis

    International Nuclear Information System (INIS)

    Donadio, S.; Guatelli, S.; Mascialino, B.; Pfeiffer, A.; Pia, M.G.; Ribon, A.; Viarengo, P.

    2006-01-01

    The present project aims to develop an open-source and object-oriented software Toolkit for statistical data analysis. Its statistical testing component contains a variety of Goodness-of-Fit tests, from Chi-squared to Kolmogorov-Smirnov, to less well known but generally much more powerful tests such as Anderson-Darling, Goodman, Fisz-Cramer-von Mises, Kuiper, and Tiku. Thanks to the component-based design and the usage of standard abstract interfaces for data analysis, this tool can be used by other data analysis systems or integrated in experimental software frameworks. The Toolkit has been released and is downloadable from the web. In this paper we describe the statistical details of the algorithms, the computational features of the Toolkit, and the code validation.
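    The Toolkit itself is a C++ component, but the same families of tests can be tried out quickly elsewhere; a minimal two-sample comparison in Python with scipy, on simulated data:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(9)
      a = rng.normal(0.0, 1.0, 300)  # reference sample
      b = rng.normal(0.2, 1.1, 300)  # test sample, slightly shifted and wider

      # Two-sample goodness-of-fit tests of increasing power.
      print(stats.ks_2samp(a, b))              # Kolmogorov-Smirnov
      print(stats.cramervonmises_2samp(a, b))  # Cramer-von Mises
      print(stats.anderson_ksamp([a, b]))      # Anderson-Darling (k-sample)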

  3. Curve ball baseball, statistics, and the role of chance in the game

    CERN Document Server

    Bennett, Jay

    2001-01-01

    In its formative years, from the 1970s through the 1990s, sabermetrics was primarily an amateur undertaking. Publications were aimed at a relatively small audience of baseball fans. To be sure, this ever-growing group of aficionados brought a lot of sophistication to baseball analysis, and were constantly looking for statistical insights beyond the listings of the top ten batters found in popular newspapers and magazines. But their influence on the baseball profession was very limited. A few consultants like Craig Wright developed temporary relationships with various teams, but none were able to stay long enough to create a permanent sabermetrician staff position. (See Rob Neyer's November 11, 2002, article on ESPN.com.) All of this changed, however, in 2002 with the hiring of Bill James by the Boston Red Sox. With that move, we have seen the admittance of the foremost proponent of sabermetrics into the top echelon of professional baseball management. The art and science of careful statistical analysi...

  4. Probability and statistics: A reminder

    International Nuclear Information System (INIS)

    Clement, B.

    2013-01-01

    The main purpose of these lectures is to provide the reader with the tools needed for data analysis in the framework of physics experiments. Basic concepts are introduced together with examples of application in experimental physics. The lecture is divided into two parts: probability and statistics. It is built on the introduction from 'data analysis in experimental sciences' given in [1]. (authors)

  5. Statistical methods for spatio-temporal systems

    CERN Document Server

    Finkenstadt, Barbel

    2006-01-01

    Statistical Methods for Spatio-Temporal Systems presents current statistical research issues on spatio-temporal data modeling and will promote advances in research and a greater understanding between the mechanistic and the statistical modeling communities.Contributed by leading researchers in the field, each self-contained chapter starts with an introduction of the topic and progresses to recent research results. Presenting specific examples of epidemic data of bovine tuberculosis, gastroenteric disease, and the U.K. foot-and-mouth outbreak, the first chapter uses stochastic models, such as point process models, to provide the probabilistic backbone that facilitates statistical inference from data. The next chapter discusses the critical issue of modeling random growth objects in diverse biological systems, such as bacteria colonies, tumors, and plant populations. The subsequent chapter examines data transformation tools using examples from ecology and air quality data, followed by a chapter on space-time co...

  6. BUSINESS INTELLIGENCE TOOLS FOR DATA ANALYSIS AND DECISION MAKING

    Directory of Open Access Journals (Sweden)

    DEJAN ZDRAVESKI

    2011-04-01

    Full Text Available Every business is dynamic in nature and is affected by various external and internal factors. These factors include external market conditions, competitors, internal restructuring and re-alignment, operational optimization and paradigm shifts in the business itself. New regulations and restrictions, in combination with the above factors, contribute to the constant evolutionary nature of compelling, business-critical information; the kind of information that an organization needs to sustain and thrive. Business intelligence (“BI”) is a broad term that encapsulates the process of gathering information pertaining to a business and the market it functions in. This information, when collated and analyzed in the right manner, can provide vital insights into the business and can be a tool to improve efficiency, reduce costs, reduce time lags and bring many positive changes. A business intelligence application helps to achieve precisely that. Successful organizations maximize the use of their data assets through business intelligence technology. The first data warehousing and decision support tools introduced companies to the power and benefits of accessing and analyzing their corporate data. Business users at every level found new, more sophisticated ways to analyze and report on the information mined from their vast data warehouses. Choosing a Business Intelligence offering is an important decision for an enterprise, one that will have a significant impact throughout the enterprise. The choice of a BI offering will affect people up and down the chain of command (senior management, analysts, and line managers) and across functional areas (sales, finance, and operations). It will affect business users, application developers, and IT professionals. BI applications include the activities of decision support systems (DSS), query and reporting, online analytical processing (OLAP), statistical analysis, forecasting, and data mining. Another way of phrasing this is

  7. A Case Study on E - Banking Security – When Security Becomes Too Sophisticated for the User to Access Their Information

    OpenAIRE

    Aaron M. French

    2012-01-01

    While eBanking security continues to increase in sophistication to protect against threats, the usability of eBanking decreases, resulting in poor security behaviors by the users. The current research evaluates security risks and measures taken for eBanking solutions. A case study is presented describing how increased complexity decreases vulnerabilities online but increases vulnerabilities from internal threats and eBanking users

  8. Probability density cloud as a geometrical tool to describe statistics of scattered light.

    Science.gov (United States)

    Yaitskova, Natalia

    2017-04-01

    First-order statistics of scattered light is described using the representation of the probability density cloud, which visualizes a two-dimensional distribution for complex amplitude. The geometric parameters of the cloud are studied in detail and are connected to the statistical properties of phase. The moment-generating function for intensity is obtained in a closed form through these parameters. An example of exponentially modified normal distribution is provided to illustrate the functioning of this geometrical approach.

  9. Statistical Analysis and Comparison of Harmonics Measured in Offshore Wind Farms

    DEFF Research Database (Denmark)

    Kocewiak, Lukasz Hubert; Hjerrild, Jesper; Bak, Claus Leth

    2011-01-01

    The paper shows a statistical analysis of harmonic components measured in different offshore wind farms. Harmonic analysis is a complex task and requires many aspects, such as measurements, data processing, modeling and validation, to be taken into consideration. The paper describes the measurement process... and shows a sophisticated analysis of representative harmonic measurements from Avedøre Holme, Gunfleet Sands and Burbo Bank wind farms. The nature of the generation and behavior of harmonic components in offshore wind farms is clearly presented and explained based on a probabilistic approach. Some issues regarding... commonly applied standards are also put forward in the discussion. Based on measurements and data analysis, it is shown that a general overview of wind farm harmonic behaviour cannot be obtained from single-value measurements alone, as suggested in the standards, but requires more descriptive...

  10. Implementation of Statistics in Business and Industry

    OpenAIRE

    BOVAS, ABRAHAM

    2007-01-01

    Statisticians have devised many tools for application, and these are available to be utilized for general business improvement and industrial problem solving. However, there is a wide gap between the available tools and what is practiced in business and industrial organizations. Thus it is important for statisticians to direct serious attention to bridging this gap if statistics is to be relevant in business and industry and to society at large. In this paper we look at some ideas for imp...

  11. Experiments with Analytic Centers: A confluence of data, tools and help in using them.

    Science.gov (United States)

    Little, M. M.; Crichton, D. J.; Hines, K.; Cole, M.; Quam, B. M.

    2017-12-01

    Traditional repositories have been primarily focused on data stewardship. Over the past two decades, data scientists have attempted to overlay a superstructure to make these repositories more amenable to analysis tasks, with limited success. This poster will summarize lessons learned and some realizations regarding what it takes to create an analytic center. As the volume of Earth Science data grows and the sophistication of analytic tools improves, a pattern has emerged that indicates different science communities uniquely apply a selection of tools to the data to produce scientific results. Infrequently do the experiences of one group help steer other groups. How can the information technology community seed these domains with tools that conform to the thought processes and experiences of that particular science group? What types of successful technology infusions have occurred, and how does technology get adopted? AIST has been experimenting with the management of this analytic center process; this paper will summarize the results and indicate a direction for future infusion attempts.

  12. Statistical Image Analysis of Tomograms with Application to Fibre Geometry Characterisation

    DEFF Research Database (Denmark)

    Emerson, Monica Jane

    The goal of this thesis is to develop statistical image analysis tools to characterise the micro-structure of complex materials used in energy technologies, with a strong focus on fibre composites. These quantification tools are based on extracting geometrical parameters defining structures from 2D...... with high resolution both in space and time to observe fast micro-structural changes. This thesis demonstrates that statistical image analysis combined with X-ray CT opens up numerous possibilities for understanding the behaviour of fibre composites under real life conditions. Besides enabling...

  13. Statistical comparisons of Savannah River anemometer data applied to quality control of instrument networks

    International Nuclear Information System (INIS)

    Porch, W.M.; Dickerson, M.H.

    1976-08-01

    Continuous monitoring of extensive meteorological instrument arrays is a requirement in the study of important mesoscale atmospheric phenomena. The phenomena include pollution transport prediction from continuous area sources or one-time releases of toxic materials, and wind energy prospecting in areas of topographic enhancement of the wind. Quality control techniques that can be applied to these data to determine whether the instruments are operating within their prescribed tolerances were investigated. Savannah River Plant data were analyzed with both independent and comparative statistical techniques. The independent techniques calculate the mean, standard deviation, moments about the mean, kurtosis, skewness, probability density distribution, cumulative probability and power spectra. The comparative techniques include covariance, cross-spectral analysis and two-dimensional probability density. At present the calculating and plotting routines for these statistical techniques do not reside in a single code, so it is difficult to ascribe independent memory size and computation time accurately. However, given the flexibility of a data system which includes simple and fast-running statistics at the instrument end of the data network (ASF) and more sophisticated techniques at the computational end (ACF), a proper balance will be attained. These techniques are described in detail and preliminary results are presented
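    Most of the independent and comparative statistics listed here are one-liners in a modern numerical stack; a minimal sketch in Python comparing two simulated anemometer records (the offset and noise of the second sensor are invented):

      import numpy as np
      from scipy import signal, stats

      rng = np.random.default_rng(13)
      ref = 5.0 + rng.normal(0.0, 1.0, 4096)          # reference anemometer, m/s
      other = ref + 0.4 + rng.normal(0.0, 0.3, 4096)  # neighbor with a drift

      # Independent statistics for each instrument.
      for name, x in (("ref", ref), ("other", other)):
          print(name, f"mean={x.mean():.2f}", f"sd={x.std(ddof=1):.2f}",
                f"skew={stats.skew(x):.2f}", f"kurt={stats.kurtosis(x):.2f}")

      # Comparative statistics: covariance and spectral coherence.
      print("covariance:", np.cov(ref, other)[0, 1])
      f, coh = signal.coherence(ref, other, fs=1.0)
      print("mean coherence:", coh.mean())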

  14. Nurturing Opportunity Identification for Business Sophistication in a Cross-disciplinary Study Environment

    Directory of Open Access Journals (Sweden)

    Karine Oganisjana

    2012-12-01

    Full Text Available Opportunity identification is the key element of the entrepreneurial process; therefore the issue of developing this skill in students is a crucial task in contemporary European education, which has recognized entrepreneurship as one of the lifelong learning key competences. The earlier opportunity identification becomes a habitual way of thinking and behavior across a broad range of contexts, the more likely it is that an entrepreneurial disposition will steadily reside in students. In order to nurture opportunity identification in students so that they are able to organize sophisticated businesses in the future, certain demands ought to be placed on the teacher as well – the person who is to promote these qualities in their students. The paper reflects some findings of a research project conducted within the framework of a workplace learning project for the teachers of one of Riga's secondary schools (Latvia). The main goal of the project was to teach the teachers to identify hidden inner links between apparently unrelated things, phenomena and events within the 10th grade study curriculum, connect them together, and create new opportunities. The creation and solution of cross-disciplinary tasks were the means for achieving this goal.

  15. Fuzzy statistical decision-making theory and applications

    CERN Document Server

    Kabak, Özgür

    2016-01-01

    This book offers a comprehensive reference guide to fuzzy statistics and fuzzy decision-making techniques. It provides readers with all the necessary tools for making statistical inference in the case of incomplete information or insufficient data, where classical statistics cannot be applied. The respective chapters, written by prominent researchers, explain a wealth of both basic and advanced concepts including: fuzzy probability distributions, fuzzy frequency distributions, fuzzy Bayesian inference, fuzzy mean, mode and median, fuzzy dispersion, fuzzy p-value, and many others. To foster a better understanding, all the chapters include relevant numerical examples or case studies. Taken together, they form an excellent reference guide for researchers, lecturers and postgraduate students pursuing research on fuzzy statistics. Moreover, by extending all the main aspects of classical statistical decision-making to its fuzzy counterpart, the book presents a dynamic snapshot of the field that is expected to stimu...

  16. Collecting Virtual Reference Statistics with an IM Chat-Bot

    Directory of Open Access Journals (Sweden)

    Mason R.K. Hall

    2008-06-01

    Full Text Available A perennial problem in libraries is capturing accurate statistics. This article addresses this problem with the creative use of Web 2.0 tools: Meebo and AOL Instant Messenger. It describes the development and implementation of an instant messaging "stat-bot" that prompts staff to record virtual reference statistics via IM. Step-by-step guidelines and the Perl script are provided.

  17. eSACP - a new Nordic initiative towards developing statistical climate services

    Science.gov (United States)

    Thorarinsdottir, Thordis; Thejll, Peter; Drews, Martin; Guttorp, Peter; Venälainen, Ari; Uotila, Petteri; Benestad, Rasmus; Mesquita, Michel d. S.; Madsen, Henrik; Fox Maule, Cathrine

    2015-04-01

    The Nordic research council NordForsk has recently announced its support for a new 3-year research initiative on "statistical analysis of climate projections" (eSACP). eSACP will focus on developing e-science tools and services based on statistical analysis of climate projections for the purpose of helping decision-makers and planners in the face of expected future challenges in regional climate change. The motivation behind the project is the growing recognition in our society that forecasts of future climate change are associated with various sources of uncertainty, and that any long-term planning and decision-making dependent on a changing climate must account for this. At the same time there is an obvious gap between scientists from different fields, and between practitioners, in terms of understanding how climate information relates to different parts of the "uncertainty cascade". In eSACP we will develop generic e-science tools and statistical climate services to facilitate the use of climate projections by decision-makers and scientists from all fields for climate impact analyses and for the development of robust adaptation strategies which properly (in a statistical sense) account for the inherent uncertainty. The new tool will be publicly available and include functionality to utilize the extensive and dynamically growing repositories of data, using state-of-the-art statistical techniques to quantify the uncertainty and innovative approaches to visualize the results. Such a tool will not only be valuable for future assessments and underpin the development of dedicated climate services, but will also assist the scientific community in making its case more clearly on the consequences of our changing climate to policy makers and the general public. The eSACP project is led by Thordis Thorarinsdottir, Norwegian Computing Center, and also includes the Finnish Meteorological Institute, the Norwegian Meteorological Institute, the Technical University of Denmark

  18. PULSim: User-Based Adaptable Simulation Tool for Railway Planning and Operations

    Directory of Open Access Journals (Sweden)

    Yong Cui

    2018-01-01

    Full Text Available Simulation methods are widely used in the field of railway planning and operations. Currently, several commercial software tools are available that not only provide functionality for railway simulation but also enable further evaluation and optimisation of the network for scheduling, dispatching, and capacity research. However, the various tools are all lacking with respect to the standards they utilise as well as their published interfaces. For an end-user, the basic mechanisms and the assumptions built into a simulation tool are unknown, which means that the true potential of these software tools is limited. One of the most critical issues is users' inability to define a sophisticated workflow that integrates several rounds of simulation with adjustable parameters and settings. This paper develops and describes a user-based, customisable platform. As preconditions of the platform, the design aspects for modelling the components of a railway system and building the workflow of railway simulation are elaborated in detail. Based on the model and the workflow, an integrated simulation platform with open interfaces is developed. Users and researchers gain the ability to rapidly develop their own algorithms, supported by the tailored simulation process in a flexible manner. The productivity of using simulation tools for further evaluation and optimisation will be significantly improved through the user-adaptable open interfaces.

  19. Nanobody-derived nanobiotechnology tool kits for diverse biomedical and biotechnology applications.

    Science.gov (United States)

    Wang, Yongzhong; Fan, Zhen; Shao, Lei; Kong, Xiaowei; Hou, Xianjuan; Tian, Dongrui; Sun, Ying; Xiao, Yazhong; Yu, Li

    2016-01-01

    Owing to the peculiar properties of the nanobody, including nanoscale size, robust structure, stable and soluble behavior in aqueous solution, reversible refolding, high affinity and specificity for a single cognate target, superior accessibility to cryptic clefts, and deep tissue penetration, as well as a sustainable source, it has become an ideal research tool for the development of sophisticated nanobiotechnologies. Currently, the nanobody has evolved into versatile research and application tool kits for diverse biomedical and biotechnology applications. Various nanobody-derived formats, including the nanobody itself, radionuclide- or fluorescent-labeled nanobodies, nanobody homo- or heteromultimers, nanobody-coated nanoparticles, and nanobody-displayed bacteriophages, have been successfully demonstrated as powerful nanobiotechnological tool kits for basic biomedical research, targeted drug delivery and therapy, disease diagnosis, bioimaging, and agricultural and plant protection. These applications indicate a special advantage of the nanobody-derived technologies, already surpassing the "me-too" products of other equivalent binders, such as full-length antibodies, single-chain variable fragments, antigen-binding fragments, targeting peptides, and DNA-based aptamers. In this review, we summarize the current state of the art in nanobody research, focusing on nanobody structural features, nanobody production approaches, nanobody-derived nanobiotechnology tool kits, and the potentially diverse applications in biomedicine and biotechnology. The future trends, challenges, and limitations of the nanobody-derived nanobiotechnology tool kits are also discussed.

  20. Statistics and Probability Theory In Pursuit of Engineering Decision Support

    CERN Document Server

    Faber, Michael Havbro

    2012-01-01

    This book provides the reader with the basic skills and tools of statistics and probability in the context of engineering modeling and analysis. The emphasis is on the application, and the reasoning behind the application, of these skills and tools for the purpose of enhancing decision making in engineering. The purpose of the book is to ensure that the reader will acquire the required theoretical basis and technical skills so as to feel comfortable with the theory of basic statistics and probability. Moreover, in this book, as opposed to many standard books on the same subject, the perspective is to focus on the use of the theory for the purpose of engineering model building and decision making. This work is suitable for readers with little or no prior knowledge on the subject of statistics and probability.

  1. STATISTICS IN SERVICE QUALITY ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Dragana Gardašević

    2012-09-01

    Full Text Available For any quality evaluation in sports, science, education, and so on, it is useful to collect data in order to construct a strategy to improve the quality of services offered to the user. For this purpose, we use statistical software packages to process the collected data in order to increase customer satisfaction. The principle is demonstrated with the example of student satisfaction ratings at Belgrade Polytechnic, with students as users assessing the quality of the institution. Here, the emphasis is on statistical analysis as a tool for quality control and improvement, not on the interpretation of results. Therefore, the above can be used as a model in sport to improve overall results.

  2. Statistic techniques of process control for MTR type

    International Nuclear Information System (INIS)

    Oliveira, F.S.; Ferrufino, F.B.J.; Santos, G.R.T.; Lima, R.M.

    2002-01-01

    This work aims at introducing improvements to the fabrication of MTR-type fuel plates by applying statistical process control techniques. The work was divided into four steps, whose data were analyzed for: fabrication of U3O8 fuel plates; fabrication of U3Si2 fuel plates; rolling of small lots of fuel plates; and the application of statistical tools and standard specifications to perform a comparative study of these processes. (author)

  3. Integrated Wind Power Planning Tool

    DEFF Research Database (Denmark)

    Rosgaard, M. H.; Hahmann, Andrea N.; Nielsen, T. S.

    This poster describes the status as of April 2012 of the Public Service Obligation (PSO) funded project PSO 10464 "Integrated Wind Power Planning Tool". The project goal is to integrate a mesoscale numerical weather prediction (NWP) model with a statistical tool in order to better predict short...... term power variation from offshore wind farms, as well as to conduct forecast error assessment studies in preparation for later implementation of such a feature in an existing simulation model. The addition of a forecast error estimation feature will further increase the value of this tool, as it...

  4. The Use of Social Media for Communication In Official Statistics at European Level

    OpenAIRE

    Ionela-Roxana GLĂVAN; Andreea MIRICĂ; Bogdan Narcis FÎRȚESCU

    2016-01-01

    Social media tools are widespread in web communication and are gaining popularity in the communication process between public institutions and citizens. This study conducts an analysis of how social media is used by official statistical institutes to interact with citizens and disseminate information. A linear regression technique is performed to examine which social media platform (Twitter or Facebook) is the more effective tool in the communication process in the official statistics area. O...

  5. Statistical learning as a tool for rehabilitation in spatial neglect.

    Directory of Open Access Journals (Sweden)

    Albulena Shaqiri

    2013-05-01

    Full Text Available We propose that neglect includes a disorder of representational updating. Representational updating refers to our ability to build mental models and adapt those models to changing experience. This updating ability depends on the processes of priming, working memory, and statistical learning. These processes in turn interact with our capabilities for sustained attention and precise temporal processing. We review evidence showing that all these non-spatial abilities are impaired in neglect, and we discuss how recognition of such deficits can lead to novel approaches for rehabilitating neglect.

  6. Bayesian models a statistical primer for ecologists

    CERN Document Server

    Hobbs, N Thompson

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probabili

  7. PVeStA: A Parallel Statistical Model Checking and Quantitative Analysis Tool

    KAUST Repository

    AlTurki, Musab; Meseguer, José

    2011-01-01

    Statistical model checking is an attractive formal analysis method for probabilistic systems such as, for example, cyber-physical systems which are often probabilistic in nature. This paper is about drastically increasing the scalability

  8. Statistical process control for serially correlated data

    NARCIS (Netherlands)

    Wieringa, Jakob Edo

    1999-01-01

    Statistical Process Control (SPC) aims at quality improvement through reduction of variation. The best known tool of SPC is the control chart. Over the years, the control chart has proved to be a successful practical technique for monitoring process measurements. However, its usefulness in practice

  9. "Statistical Techniques for Particle Physics" (2/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  10. "Statistical Techniques for Particle Physics" (1/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  11. "Statistical Techniques for Particle Physics" (4/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  12. "Statistical Techniques for Particle Physics" (3/4)

    CERN Multimedia

    CERN. Geneva

    2009-01-01

    This series will consist of four 1-hour lectures on statistics for particle physics. The goal will be to build up to techniques meant for dealing with problems of realistic complexity while maintaining a formal approach. I will also try to incorporate usage of common tools like ROOT, RooFit, and the newly developed RooStats framework into the lectures. The first lecture will begin with a review of the basic principles of probability, some terminology, and the three main approaches towards statistical inference (Frequentist, Bayesian, and Likelihood-based). I will then outline the statistical basis for multivariate analysis techniques (the Neyman-Pearson lemma) and the motivation for machine learning algorithms. Later, I will extend simple hypothesis testing to the case in which the statistical model has one or many parameters (the Neyman Construction and the Feldman-Cousins technique). From there I will outline techniques to incorporate background uncertainties. If time allows, I will touch on the statist...

  13. Beam diagnostic tools for the negative hydrogen ion source test facility ELISE

    International Nuclear Information System (INIS)

    Nocentini, Riccardo; Fantz, Ursel; Franzen, Peter; Froeschle, Markus; Heinemann, Bernd; Riedl, Rudolf; Ruf, Benjamin; Wuenderlich, Dirk

    2013-01-01

    Highlights: ► We present an overview of beam diagnostic tools foreseen for the new testbed ELISE. ► A sophisticated diagnostic calorimeter allows beam profile measurement. ► A tungsten wire mesh in the beam path provides a qualitative picture of the beam. ► Stripping losses and beam divergence are measured by Hα Doppler shift spectroscopy. -- Abstract: The test facility ELISE, presently being commissioned at IPP, is a first step in the R&D roadmap for the RF driven ion source and extraction system of the ITER NBI system. The "half-size" ITER-like test facility includes a negative hydrogen ion source that can be operated for 1 h. ELISE is expected to extract an ion beam of 20 A at 60 kV for 10 s every 3 min, therefore delivering a total power of 1.2 MW. The extraction area has a geometry that closely reproduces the ITER design, with the same width and half the height, i.e. 1 m × 1 m. This paper presents an overview of the beam diagnostic tools foreseen for ELISE. For the commissioning phase, a simple beam dump with basic diagnostic capabilities has been installed. In the second phase, the beam dump will be replaced by a more sophisticated diagnostic calorimeter to allow beam profile measurement. Additionally, a tungsten wire mesh will be introduced in the beam path to provide a qualitative picture of beam size and position. Stripping losses and beam divergence will be measured by means of Hα Doppler shift spectroscopy. An absolute calibration is foreseen in order to measure beam intensity.

  14. Statistical shape analysis with applications in R

    CERN Document Server

    Dryden, Ian L

    2016-01-01

    A thoroughly revised and updated edition of this introduction to modern statistical methods for shape analysis Shape analysis is an important tool in the many disciplines where objects are compared using geometrical features. Examples include comparing brain shape in schizophrenia; investigating protein molecules in bioinformatics; and describing growth of organisms in biology. This book is a significant update of the highly regarded 'Statistical Shape Analysis' by the same authors. The new edition lays the foundations of landmark shape analysis, including geometrical concepts and statistical techniques, and extends to include analysis of curves, surfaces, images and other types of object data. Key definitions and concepts are discussed throughout, and the relative merits of different approaches are presented. The authors have included substantial new material on recent statistical developments and offer numerous examples throughout the text. Concepts are introduced in an accessible manner, while reta...

  15. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...

  16. Easily configured real-time CPOE Pick Off Tool supporting focused clinical research and quality improvement.

    Science.gov (United States)

    Rosenbaum, Benjamin P; Silkin, Nikolay; Miller, Randolph A

    2014-01-01

    Real-time alerting systems typically warn providers about abnormal laboratory results or medication interactions. For more complex tasks, institutions create site-wide 'data warehouses' to support quality audits and longitudinal research. Sophisticated systems like i2b2 or Stanford's STRIDE utilize data warehouses to identify cohorts for research and quality monitoring. However, substantial resources are required to install and maintain such systems. For more modest goals, an organization desiring merely to identify patients with 'isolation' orders, or to determine patients' eligibility for clinical trials, may adopt a simpler, limited approach based on processing the output of one clinical system, and not a data warehouse. We describe a limited, order-entry-based, real-time 'pick off' tool, utilizing public domain software (PHP, MySQL). Through a web interface the tool assists users in constructing complex order-related queries and auto-generates corresponding database queries that can be executed at recurring intervals. We describe successful application of the tool for research and quality monitoring.
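
    A minimal sketch of the tool's core idea, a user-defined order query executed at a recurring interval, is shown below. The real tool is built on PHP and MySQL; this Python stand-in uses sqlite3 for self-containment, and the table and column names are hypothetical.

        # Hypothetical sketch of a recurring "pick off" query loop
        # (the actual tool is PHP/MySQL; schema names are invented).
        import sqlite3
        import time

        QUERY = ("SELECT patient_id, order_name FROM orders "
                 "WHERE order_name LIKE '%isolation%'")

        def poll(db_path, interval_s=300):
            while True:
                with sqlite3.connect(db_path) as conn:
                    # Report every order matching the user-defined query.
                    for patient_id, order_name in conn.execute(QUERY):
                        print(f"match: patient {patient_id}: {order_name}")
                time.sleep(interval_s)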

  17. Statistics in Matlab a primer

    CERN Document Server

    Cho, MoonJung

    2014-01-01

    Contents: List of Tables; Preface; MATLAB Basics (Desktop Environment; Getting Help and Other Documentation; Data Import and Export; Data I/O via the Command Line; The Import Wizard; Examples of Data I/O in MATLAB; Data I/O with the Statistics Toolbox; More Functions for Data I/O; Data in MATLAB; Data Objects in Base MATLAB; Accessing Data Elements; Examples of Joining Data Sets; Data Types in the Statistics Toolbox; Object-Oriented Programming; Miscellaneous Topics; File and Workspace Management; Punctuation in MATLAB; Arithmetic Operators; Functions in MATLAB; Summary and Further Reading); Visualizing Data (Basic Plot Functions; Plotting 2-D Data; Plotting 3-D Data; Examples; Scatter Plots; Basic 2-D and 3-D Scatter Plots; Scatter Plot Matrix; Examples; GUIs for Graphics; Simple Plot Editing; Plotting Tools Interface; PLOTS Tab; Summary and Further Reading); Descriptive Statistics (Measures of Location; Means, Medians, and Modes; Examples; Measures of Dispersion; Range; Variance and Standard Deviation; Covariance and Correlation; Examples; Describing the Distribution...)

  18. Predictive Data Tools Find Uses in Schools

    Science.gov (United States)

    Sparks, Sarah D.

    2011-01-01

    The use of analytic tools to predict student performance is exploding in higher education, and experts say the tools show even more promise for K-12 schools, in everything from teacher placement to dropout prevention. Use of such statistical techniques is hindered in precollegiate schools, however, by a lack of researchers trained to help…

  19. Advanced statistics for tokamak transport colinearity and tokamak to tokamak variation

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1989-01-01

    This paper is an expository introduction to advanced statistics and scaling laws and their application to tokamak devices. Topics of discussion are as follows: implicit assumptions in the standard analysis; advanced regression techniques; specialized tools in statistics and their applications in fusion physics; and improved datasets for transport studies

  20. An Entropy-Based Statistic for Genomewide Association Studies

    OpenAIRE

    Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao

    2005-01-01

    Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard χ2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the difference...
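
    The contrast between a linear allele-frequency statistic and an entropy-based one can be illustrated with a minimal sketch; the allele counts below are hypothetical, and the exact nonlinear statistic is the one defined in the paper itself.

        # Compare Shannon entropy of allele-frequency distributions in
        # cases vs. controls with the standard chi-square test
        # (illustrative only; counts are hypothetical).
        import numpy as np
        from scipy.stats import chi2_contingency

        def shannon_entropy(freqs):
            f = np.asarray(freqs, dtype=float)
            f = f[f > 0]  # zero-safe: drop empty categories
            return -np.sum(f * np.log(f))

        cases = np.array([120, 80])     # allele counts A, a in cases
        controls = np.array([90, 110])  # allele counts A, a in controls

        gap = abs(shannon_entropy(cases / cases.sum())
                  - shannon_entropy(controls / controls.sum()))
        chi2, p, _, _ = chi2_contingency(np.vstack([cases, controls]))
        print(f"entropy gap = {gap:.4f}, chi2 = {chi2:.2f}, p = {p:.3g}")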

  1. Establishing statistical models of manufacturing parameters

    International Nuclear Information System (INIS)

    Senevat, J.; Pape, J.L.; Deshayes, J.F.

    1991-01-01

    This paper reports on the effect of pilgering and cold-work parameters on contractile strain ratio and mechanical properties that were investigated using a large population of Zircaloy tubes. Statistical models were established between: contractile strain ratio and tooling parameters, mechanical properties (tensile test, creep test) and cold-work parameters, and mechanical properties and stress-relieving temperature
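
    The kind of model described, a regression linking a response such as the contractile strain ratio to tooling and cold-work parameters, can be sketched as follows; the variables and data are synthetic placeholders, not the paper's measurements.

        # Ordinary least squares fit of a response to two process
        # parameters (all data below are synthetic placeholders).
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        reduction = rng.uniform(60, 80, n)   # % cold-work reduction (hypothetical)
        q_factor = rng.uniform(1.0, 3.0, n)  # pilgering Q-factor (hypothetical)
        csr = 1.2 + 0.01 * reduction + 0.15 * q_factor + rng.normal(0, 0.05, n)

        # Design matrix with intercept, then least-squares solution.
        X = np.column_stack([np.ones(n), reduction, q_factor])
        beta, *_ = np.linalg.lstsq(X, csr, rcond=None)
        print("intercept, reduction, Q coefficients:", np.round(beta, 3))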

  2. Your company's history as a leadership tool.

    Science.gov (United States)

    Seaman, John T; Smith, George David

    2012-12-01

    When the history of an organization comes up, it's usually in connection with an anniversary--just part of the "balloons and fireworks" (as one business leader characterized his company's bicentennial celebration, knowing that the investment of time and money would have little staying power). A fast-changing world leaves little time for nostalgia and irrelevant details--or, worse, strategies for winning the last war. But the authors, business historians at the Winthrop Group, assert that leaders with no patience for history are missing a vital truth: A sophisticated understanding of the past is one of the most powerful tools they have for shaping the future. The job of leaders, most would agree, is to inspire collective efforts and devise smart strategies for the future. History can be profitably employed on both fronts. As a leader strives to get people working together productively, communicating the history of the enterprise can instill a sense of identity and purpose and suggest the goals that will resonate. In its most familiar form, as a narrative about the past, history is a rich explanatory tool with which executives can make a case for change and motivate people to overcome challenges. Taken to a higher level, it also serves as a potent problem-solving tool, one that offers pragmatic insights, valid generalizations, and meaningful perspectives--a way to cut through management fads and the noise of the moment to what really matters.

  3. Introduction to statistics using interactive MM*Stat elements

    CERN Document Server

    Härdle, Wolfgang Karl; Rönz, Bernd

    2015-01-01

    MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...

  4. Statistical Methods for Particle Physics (4/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  5. Statistical Methods for Particle Physics (1/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  6. Statistical Methods for Particle Physics (2/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  7. Statistical Methods for Particle Physics (3/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  8. T.I.M.S: TaqMan Information Management System, tools to organize data flow in a genotyping laboratory

    Science.gov (United States)

    Monnier, Stéphanie; Cox, David G; Albion, Tim; Canzian, Federico

    2005-01-01

    Background Single Nucleotide Polymorphism (SNP) genotyping is a major activity in biomedical research. The Taqman technology is one of the most commonly used approaches. It produces large amounts of data that are difficult to process by hand. Laboratories not equipped with a Laboratory Information Management System (LIMS) need tools to organize the data flow. Results We propose a package of Visual Basic programs focused on sample management and on the parsing of input and output TaqMan files. The code is written in Visual Basic, embedded in the Microsoft Office package, and it allows anyone to have access to those tools, without any programming skills and with basic computer requirements. Conclusion We have created useful tools focused on the management of TaqMan genotyping data, a critical issue in genotyping laboratories without a more sophisticated and expensive system, such as a LIMS. PMID:16221298

  9. Representative volume size: A comparison of statistical continuum mechanics and statistical physics

    Energy Technology Data Exchange (ETDEWEB)

    AIDUN,JOHN B.; TRUCANO,TIMOTHY G.; LO,CHI S.; FYE,RICHARD M.

    1999-05-01

    In this combination background and position paper, the authors argue that careful work is needed to develop accurate methods for relating the results of fine-scale numerical simulations of material processes to meaningful values of macroscopic properties for use in constitutive models suitable for finite element solid mechanics simulations. To provide a definite context for this discussion, the problem is couched in terms of the lack of general objective criteria for identifying the size of the representative volume (RV) of a material. The objective of this report is to lay out at least the beginnings of an approach for applying results and methods from statistical physics to develop concepts and tools necessary for determining the RV size, as well as alternatives to RV volume-averaging for situations in which the RV is unmanageably large. The background necessary to understand the pertinent issues and statistical physics concepts is presented.

  10. Appennino: A GIS Tool for Analyzing Wildlife Habitat Use

    Directory of Open Access Journals (Sweden)

    Marco Ferretti

    2012-01-01

    Full Text Available The aim of the study was to test Appennino, a tool used to evaluate the habitats of animals through compositional analysis. This free tool calculates an animal's habitat use within the GIS platform for ArcGIS and saves and exports the results of the comparative land uses to other statistical software. The Visual Basic for Applications programming language was employed to prepare the ESRI ArcGIS 9.x utility. The tool was tested on a dataset of 546 pheasant positions obtained from a study carried out in Tuscany (Italy). The tool automatically gave the same results as those obtained by calculating the surfaces in ESRI ArcGIS, exporting the data from ArcGIS, and then using a commercial spreadsheet and/or statistical software to calculate the animal's habitat use, with a considerable reduction in time.

  11. UTILIZATION OF QUALITY TOOLS: DOES SECTOR AND SIZE MATTER?

    Directory of Open Access Journals (Sweden)

    Luis Fonseca

    2015-12-01

    Full Text Available This research focuses on the influence of company sector and size on the level of utilization of Basic and Advanced Quality Tools. The paper starts with a literature review and then presents the methodology used for the survey. Based on the responses from 202 managers of Portuguese ISO 9001:2008 Quality Management System certified organizations, statistical tests were performed. Results show, at a 95% confidence level, that industry and services have a similar proportion of use of Basic and Advanced Quality Tools. Concerning size, bigger companies show a stronger tendency to use Advanced Quality Tools than smaller ones. For Basic Quality Tools, there was no statistically significant difference at a 95% confidence level across company sizes. The three Basic Quality Tools with the highest utilization were check sheets, flow charts, and histograms (for services) or control charts (for industry); however, 22% of the surveyed organizations reported not using Basic Quality Tools, which highlights a major improvement opportunity for these companies. Additional studies addressing motivations, benefits, and barriers for Quality Tools application should be undertaken for further validation and understanding of these results.
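
    A test of the kind reported, comparing the proportion of tool users across two sectors at a 95% confidence level, can be sketched as follows; the counts are hypothetical, not the survey's actual data.

        # Two-proportion z-test sketch (hypothetical counts).
        from statsmodels.stats.proportion import proportions_ztest

        users = [55, 48]     # respondents using Advanced Quality Tools
        totals = [101, 101]  # respondents per sector (industry, services)
        z, p = proportions_ztest(users, totals)
        print(f"z = {z:.2f}, p = {p:.3f}")
        if p > 0.05:
            print("No significant difference at the 95% confidence level.")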

  12. Statistics for Petroleum Engineers and Geoscientists

    International Nuclear Information System (INIS)

    Jensen, J.L.; Lake, L.W.; Corbett, P.W.M.; Goggin, D.J.

    2000-01-01

    Geostatistics is a common tool in reservoir characterisation. Several texts discuss the subject; however, this book differs in its approach and audience from currently available material. Starting from the basics of statistics, it covers only those topics needed for the two goals of the text: to exhibit the diagnostic potential of statistics and to introduce the important features of statistical modeling. This revised edition contains expanded discussions of some materials, in particular conditional probabilities, Bayes Theorem, correlation, and Kriging. The coverage of estimation, variability, and modeling applications has been updated. Seventy examples illustrate concepts and show the role of geology in providing important information for data analysis and model building. Four reservoir case studies conclude the presentation, illustrating the application and importance of the earlier material. This book can help petroleum professionals develop more accurate models, leading to lower sampling costs.

  13. Sophisticated Epistemologies of Physics versus High-Stakes Tests: How Do Elite High School Students Respond to Competing Influences about How to Learn Physics?

    Science.gov (United States)

    Yerdelen-Damar, Sevda; Elby, Andrew

    2016-01-01

    This study investigates how elite Turkish high school physics students claim to approach learning physics when they are simultaneously (i) engaged in a curriculum that led to significant gains in their epistemological sophistication and (ii) subject to a high-stakes college entrance exam. Students reported taking surface (rote) approaches to…

  14. Statistical Analysis of Deep Drilling Process Conditions Using Vibrations and Force Signals

    Directory of Open Access Journals (Sweden)

    Syafiq Hazwan

    2016-01-01

    Full Text Available Cooling systems are a key element in the hot forming process of Ultra High Strength Steels (UHSS). Normally, cooling channels are made using a deep drilling technique. Although deep twist drilling offers higher productivity than other drilling techniques, its main problem is premature tool breakage, which affects production quality. In this paper, an analysis of deep twist drill process parameters such as cutting speed, feed rate, and depth of cut, using statistical analysis to identify the tool condition, is presented. A comparison between two different tool geometries is also studied. Measured data from vibration and force sensors are analyzed through several statistical parameters such as root mean square (RMS), mean, kurtosis, standard deviation, and skewness. It was found that kurtosis and skewness are the most appropriate parameters for representing deep twist drill tool conditions from the vibration and force data. The condition of the deep twist drill process was classified as good, blunt, or fractured. It was also found that the tool geometry parameters affect the performance of the drill. The results of this study are useful in determining a suitable analysis method for developing an online tool condition monitoring system that identifies the tertiary tool life stage and helps avoid premature tool fracture during the drilling process.
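
    The signal features named in the study are each one line of code on a sampled signal; the sketch below computes them on a synthetic vibration trace, since the study's sensor data are not reproduced here.

        # RMS, mean, kurtosis, standard deviation, and skewness of a
        # (synthetic) vibration signal.
        import numpy as np
        from scipy.stats import kurtosis, skew

        rng = np.random.default_rng(1)
        t = np.linspace(0, 200, 10_000)
        signal = rng.normal(0, 1, t.size) + 0.3 * np.sin(t)

        features = {
            "mean": np.mean(signal),
            "rms": np.sqrt(np.mean(signal ** 2)),
            "std": np.std(signal),
            "skewness": skew(signal),
            "kurtosis": kurtosis(signal),  # excess kurtosis: 0 for a Gaussian
        }
        for name, value in features.items():
            print(f"{name:>8s}: {value: .4f}")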

  15. Statistical model for prediction of hearing loss in patients receiving cisplatin chemotherapy.

    Science.gov (United States)

    Johnson, Andrew; Tarima, Sergey; Wong, Stuart; Friedland, David R; Runge, Christina L

    2013-03-01

    This statistical model might be used to predict cisplatin-induced hearing loss, particularly in patients undergoing concomitant radiotherapy. To create a statistical model based on pretreatment hearing thresholds to provide an individual probability for hearing loss from cisplatin therapy and, secondarily, to investigate the use of hearing classification schemes as predictive tools for hearing loss. Retrospective case-control study. Tertiary care medical center. A total of 112 subjects receiving chemotherapy and audiometric evaluation were evaluated for the study. Of these subjects, 31 met inclusion criteria for analysis. The primary outcome measurement was a statistical model providing the probability of hearing loss following the use of cisplatin chemotherapy. Fifteen of the 31 subjects had significant hearing loss following cisplatin chemotherapy. American Academy of Otolaryngology-Head and Neck Society and Gardner-Robertson hearing classification schemes revealed little change in hearing grades between pretreatment and posttreatment evaluations for subjects with or without hearing loss. The Chang hearing classification scheme could effectively be used as a predictive tool in determining hearing loss with a sensitivity of 73.33%. Pretreatment hearing thresholds were used to generate a statistical model, based on quadratic approximation, to predict hearing loss (C statistic = 0.842, cross-validated = 0.835). The validity of the model improved when only subjects who received concurrent head and neck irradiation were included in the analysis (C statistic = 0.91). A calculated cutoff of 0.45 for predicted probability has a cross-validated sensitivity and specificity of 80%. Pretreatment hearing thresholds can be used as a predictive tool for cisplatin-induced hearing loss, particularly with concomitant radiotherapy.
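
    The workflow described, fitting a probability model on pretreatment thresholds, scoring it by the C statistic (the area under the ROC curve), and applying a probability cutoff, can be sketched as follows; the data are simulated stand-ins, and the quadratic term only mimics the paper's quadratic approximation.

        # Logistic model with a quadratic term; C statistic and 0.45 cutoff
        # (simulated data, illustrative only).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(2)
        n = 31
        thresh = rng.normal(30, 15, n)  # pretreatment threshold, dB HL (hypothetical)
        p_true = 1 / (1 + np.exp(-(thresh - 30) / 8))
        loss = rng.binomial(1, p_true)  # 1 = hearing loss after cisplatin

        X = np.column_stack([thresh, thresh ** 2])
        prob = LogisticRegression().fit(X, loss).predict_proba(X)[:, 1]
        print("C statistic:", round(roc_auc_score(loss, prob), 3))
        predicted_loss = prob >= 0.45   # cutoff reported in the abstract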

  16. Recent advances in microbial production of fuels and chemicals using tools and strategies of systems metabolic engineering

    DEFF Research Database (Denmark)

    Cho, Changhee; Choi, So Young; Luo, Zi Wei

    2015-01-01

    The advent of various systems metabolic engineering tools and strategies has enabled more sophisticated engineering of microorganisms for the production of industrially useful fuels and chemicals. Advances in systems metabolic engineering have been made in overproducing natural chemicals...... and producing novel non-natural chemicals. In this paper, we review the tools and strategies of systems metabolic engineering employed for the development of microorganisms for the production of various industrially useful chemicals belonging to fuels, building block chemicals, and specialty chemicals......, in particular focusing on those reported in the last three years. It was aimed at providing the current landscape of systems metabolic engineering and suggesting directions to address future challenges towards successfully establishing processes for the bio-based production of fuels and chemicals from renewable...

  17. Reuse, Recycle, Reweigh: Combating Influenza through Efficient Sequential Bayesian Computation for Massive Data

    OpenAIRE

    Tom, Jennifer A.; Sinsheimer, Janet S.; Suchard, Marc A.

    2010-01-01

    Massive datasets in the gigabyte and terabyte range combined with the availability of increasingly sophisticated statistical tools yield analyses at the boundary of what is computationally feasible. Compromising in the face of this computational burden by partitioning the dataset into more tractable sizes results in stratified analyses, removed from the context that justified the initial data collection. In a Bayesian framework, these stratified analyses generate intermediate realizations, of...

  18. MIV TOOL: A RENDEZ-VOUS SIMULATOR FOR MANOEUVRING OF AN INSPECTION VEHICLE IN GEO

    DEFF Research Database (Denmark)

    Ravazzotti, Maria Teresa; Neefs, Marc; Jørgensen, John Leif

    1996-01-01

    In the frame of the studies ESA is currently conducting on the servicing of non-cooperative spacecraft in geostationary orbits, a simulator is being set up to support the analysis and development of safe techniques for Manoeuvring, during approach and circumflight, an Inspection Vehicle (MIV tool......), including on-board and teleoperated control. The main aspects of the study include the design of the automatic and teleoperated GNC, with allocation of tasks to the space and ground segment and to the Human Operator (HO), the Man Machine Interface (MMI), a sophisticated model of the on-board CCD camera...

  19. Old tools for sophisticated diagnosis: Electrocardiography for the assessment of myocardial viability

    International Nuclear Information System (INIS)

    Margonato, A.; Chierchia, S.

    1996-01-01

    The identification of residual myocardial viability in patients with a previous myocardial infarction has important clinical implications. Various methods have been developed for the detection of viable myocardium, however most of them are expensive and available only to high-tech centers. In the attempt to obtain reliable information at a low cost, exercise-ECG has been proposed as a useful technique. The results of a series of studies show that ST segment elevation and ventricular arrhythmias elicited by exercise are reliable signs of the presence of reversible myocardial damage

  20. Statistical methods for quality assurance basics, measurement, control, capability, and improvement

    CERN Document Server

    Vardeman, Stephen B

    2016-01-01

    This undergraduate statistical quality assurance textbook clearly shows with real projects, cases and data sets how statistical quality control tools are used in practice. Among the topics covered is a practical evaluation of measurement effectiveness for both continuous and discrete data. Gauge Reproducibility and Repeatability methodology (including confidence intervals for Repeatability, Reproducibility and the Gauge Capability Ratio) is thoroughly developed. Process capability indices and corresponding confidence intervals are also explained. In addition to process monitoring techniques, experimental design and analysis for process improvement are carefully presented. Factorial and Fractional Factorial arrangements of treatments and Response Surface methods are covered. Integrated throughout the book are rich sets of examples and problems that help readers gain a better understanding of where and how to apply statistical quality control tools. These large and realistic problem sets in combination with the...

  1. Networking—a statistical physics perspective

    Science.gov (United States)

    Yeung, Chi Ho; Saad, David

    2013-03-01

    Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches as they have been developed specifically to deal with nonlinear large-scale systems. This review aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications.

  2. Networking—a statistical physics perspective

    International Nuclear Information System (INIS)

    Yeung, Chi Ho; Saad, David

    2013-01-01

    Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches as they have been developed specifically to deal with nonlinear large-scale systems. This review aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. (topical review)

  3. Direct Learning of Systematics-Aware Summary Statistics

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    Complex machine learning tools, such as deep neural networks and gradient boosting algorithms, are increasingly being used to construct powerful discriminative features for High Energy Physics analyses. These methods are typically trained with simulated or auxiliary data samples by optimising some classification or regression surrogate objective. The learned feature representations are then used to build a sample-based statistical model to perform inference (e.g. interval estimation or hypothesis testing) over a set of parameters of interest. However, the effectiveness of this approach can be reduced by the presence of known uncertainties that cause differences between training and experimental data, included in the statistical model via nuisance parameters. This work presents an end-to-end algorithm, which leverages existing deep learning technologies but directly aims to produce inference-optimal sample-summary statistics. By including the statistical model and a differentiable approximation of ...

  4. Tools for Analyzing Computing Resource Management Strategies and Algorithms for SDR Clouds

    Science.gov (United States)

    Marojevic, Vuk; Gomez-Miguelez, Ismael; Gelonch, Antoni

    2012-09-01

    Software defined radio (SDR) clouds centralize the computing resources of base stations. The computing resource pool is shared between radio operators and dynamically loads and unloads digital signal processing chains for providing wireless communications services on demand. Each new user session request requires, in particular, the allocation of computing resources for executing the corresponding SDR transceivers. The huge amount of computing resources of SDR cloud data centers and the numerous session requests at certain hours of a day require efficient computing resource management. We propose a hierarchical approach, where the data center is divided into clusters that are managed in a distributed way. This paper presents a set of computing resource management tools for analyzing computing resource management strategies and algorithms for SDR clouds. We use the tools for evaluating different strategies and algorithms. The results show that more sophisticated algorithms can achieve higher resource occupations and that a tradeoff exists between cluster size and algorithm complexity.

  5. A statistical method for the detection of alternative splicing using RNA-seq.

    Directory of Open Access Journals (Sweden)

    Liguo Wang

    2010-01-01

    Full Text Available Deep sequencing of the transcriptome (RNA-seq) provides an unprecedented opportunity to interrogate plausible mRNA splicing patterns by mapping RNA-seq reads to exon junctions (hereafter, junction reads). In most previous studies, exon junctions were detected using the quantitative information of junction reads. The quantitative criterion (e.g. a minimum of two junction reads), although straightforward and widely used, usually results in high false positive and false negative rates, owing to the complexity of the transcriptome. Here, we introduce a new metric, namely Minimal Match on Either Side of exon junction (MMES), to measure the quality of each junction read, and subsequently implement an empirical statistical model to detect exon junctions. When applied to a large dataset (>200M reads) consisting of mouse brain, liver, and muscle mRNA sequences, and using independent transcript databases as positive controls, our method proved to be considerably more accurate than previous ones, especially in detecting junctions originating from low-abundance transcripts. Our results were also confirmed by real-time RT-PCR assay. The MMES metric can be used either in this empirical statistical model or in other, more sophisticated classifiers, such as logistic regression.
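
    The spirit of the MMES metric can be captured in a few lines: for a read spanning a junction, score the smaller number of bases aligned on either side, so that reads barely overhanging one exon score low. This sketch uses simplified 0-based coordinates and may differ in detail from the published definition.

        # Minimal Match on Either Side (MMES) of an exon junction.
        def mmes(read_start, read_end, junction_pos):
            """Smaller of the base counts a read aligns on each side of
            the junction (0-based; junction between pos-1 and pos)."""
            left = junction_pos - read_start   # bases on the upstream exon
            right = read_end - junction_pos    # bases on the downstream exon
            return min(left, right)

        # A 36-bp read with only 4 bases on the downstream exon scores 4;
        # the same read centered on the junction scores 18.
        print(mmes(100, 136, 132))  # 4  -> low-quality junction read
        print(mmes(100, 136, 118))  # 18 -> high-quality junction read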

  6. Preserved statistical learning of tonal and linguistic material in congenital amusia.

    Science.gov (United States)

    Omigie, Diana; Stewart, Lauren

    2011-01-01

    Congenital amusia is a lifelong disorder whereby individuals have pervasive difficulties in perceiving and producing music. In contrast, typical individuals display a sophisticated understanding of musical structure, even in the absence of musical training. Previous research has shown that they acquire this knowledge implicitly, through exposure to music's statistical regularities. The present study tested the hypothesis that congenital amusia may result from a failure to internalize statistical regularities - specifically, lower-order transitional probabilities. To explore the specificity of any potential deficits to the musical domain, learning was examined with both tonal and linguistic material. Participants were exposed to structured tonal and linguistic sequences and, in a subsequent test phase, were required to identify items which had been heard in the exposure phase, as distinct from foils comprising elements that had been present during exposure, but presented in a different temporal order. Amusic and control individuals showed comparable learning, for both tonal and linguistic material, even when the tonal stream included pitch intervals around one semitone. However analysis of binary confidence ratings revealed that amusic individuals have less confidence in their abilities and that their performance in learning tasks may not be contingent on explicit knowledge formation or level of awareness to the degree shown in typical individuals. The current findings suggest that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors.

  7. Preserved Statistical Learning of Tonal and Linguistic Material in Congenital Amusia

    Directory of Open Access Journals (Sweden)

    Diana Omigie

    2011-06-01

    Full Text Available Congenital amusia is a lifelong disorder whereby individuals have pervasive difficulties in perceiving and producing music. In contrast, typical individuals display a sophisticated understanding of musical structure, even in the absence of musical training. Previous research has shown that they acquire this knowledge implicitly, through exposure to music’s statistical regularities. The present study tested the hypothesis that congenital amusia may result from a failure to internalize statistical regularities - specifically, lower-order transitional probabilities. To explore the specificity of any potential deficits to the musical domain, learning was examined with both tonal and linguistic material. Participants were exposed to structured tonal and linguistic sequences and, in a subsequent test phase, were required to identify items which had been heard in the exposure phase, as distinct from foils comprising elements that had been present during exposure, but presented in a different temporal order. Amusic and control individuals showed comparable learning, for both tonal and linguistic material, even when the tonal stream included pitch intervals around one semitone. However analysis of binary confidence ratings revealed that amusic individuals have less confidence in their abilities and that their performance in learning tasks may not be contingent on explicit knowledge formation or level of awareness to the degree shown in typical individuals. The current findings suggest that the difficulties amusic individuals have with real-world music cannot be accounted for by an inability to internalize lower-order statistical regularities but may arise from other factors.

  8. Statistics of resonances and time reversal reconstruction in aluminum acoustic chaotic cavities

    NARCIS (Netherlands)

    Antoniuk, O.; Sprik, R.

    2010-01-01

    The statistical properties of wave propagation in classical chaotic systems are of fundamental interest in physics and are the basis for diagnostic tools in materials science. The statistical properties depend, in particular, on the presence of time reversal invariance in the system, which can be

  9. Multivariate Analysis, Mass Balance Techniques, and Statistical Tests as Tools in Igneous Petrology: Application to the Sierra de las Cruces Volcanic Range (Mexican Volcanic Belt)

    Science.gov (United States)

    Velasco-Tapia, Fernando

    2014-01-01

    Magmatic processes have usually been identified and evaluated using qualitative or semiquantitative geochemical or isotopic tools based on a restricted number of variables. However, a more complete and quantitative view could be reached applying multivariate analysis, mass balance techniques, and statistical tests. As an example, in this work a statistical and quantitative scheme is applied to analyze the geochemical features for the Sierra de las Cruces (SC) volcanic range (Mexican Volcanic Belt). In this locality, the volcanic activity (3.7 to 0.5 Ma) was dominantly dacitic, but the presence of spheroidal andesitic enclaves and/or diverse disequilibrium features in majority of lavas confirms the operation of magma mixing/mingling. New discriminant-function-based multidimensional diagrams were used to discriminate tectonic setting. Statistical tests of discordancy and significance were applied to evaluate the influence of the subducting Cocos plate, which seems to be rather negligible for the SC magmas in relation to several major and trace elements. A cluster analysis following Ward's linkage rule was carried out to classify the SC volcanic rocks geochemical groups. Finally, two mass-balance schemes were applied for the quantitative evaluation of the proportion of the end-member components (dacitic and andesitic magmas) in the comingled lavas (binary mixtures). PMID:24737994

  10. Multivariate Analysis, Mass Balance Techniques, and Statistical Tests as Tools in Igneous Petrology: Application to the Sierra de las Cruces Volcanic Range (Mexican Volcanic Belt

    Directory of Open Access Journals (Sweden)

    Fernando Velasco-Tapia

    2014-01-01

    Full Text Available Magmatic processes have usually been identified and evaluated using qualitative or semiquantitative geochemical or isotopic tools based on a restricted number of variables. However, a more complete and quantitative view could be reached by applying multivariate analysis, mass balance techniques, and statistical tests. As an example, in this work a statistical and quantitative scheme is applied to analyze the geochemical features of the Sierra de las Cruces (SC) volcanic range (Mexican Volcanic Belt). In this locality, the volcanic activity (3.7 to 0.5 Ma) was dominantly dacitic, but the presence of spheroidal andesitic enclaves and/or diverse disequilibrium features in the majority of lavas confirms the operation of magma mixing/mingling. New discriminant-function-based multidimensional diagrams were used to discriminate tectonic setting. Statistical tests of discordancy and significance were applied to evaluate the influence of the subducting Cocos plate, which seems to be rather negligible for the SC magmas in relation to several major and trace elements. A cluster analysis following Ward's linkage rule was carried out to classify the SC volcanic rocks into geochemical groups. Finally, two mass-balance schemes were applied for the quantitative evaluation of the proportion of the end-member components (dacitic and andesitic magmas) in the comingled lavas (binary mixtures).
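
    One step in the scheme, grouping samples by composition with Ward's linkage rule, can be sketched as follows; the two-variable compositions are invented placeholders rather than the SC data.

        # Hierarchical clustering with Ward's linkage on hypothetical
        # major-element compositions (wt% SiO2, MgO).
        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage

        rng = np.random.default_rng(3)
        dacites = rng.normal([65.0, 2.0], [1.0, 0.4], size=(10, 2))
        andesites = rng.normal([59.0, 4.5], [1.0, 0.5], size=(6, 2))
        X = np.vstack([dacites, andesites])

        Z = linkage(X, method="ward")
        groups = fcluster(Z, t=2, criterion="maxclust")
        print(groups)  # two geochemical groups recovered from composition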

  11. Teaching biology through statistics: application of statistical methods in genetics and zoology courses.

    Science.gov (United States)

    Colon-Berlingeri, Migdalisel; Burrowes, Patricia A

    2011-01-01

    Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the undergraduate biology curriculum. The curricular revision included changes in the suggested course sequence, addition of statistics and precalculus as prerequisites to core science courses, and incorporating interdisciplinary (math-biology) learning activities in genetics and zoology courses. In this article, we describe the activities developed for these two courses and the assessment tools used to measure the learning that took place with respect to biology and statistics. We distinguished the effectiveness of these learning opportunities in helping students improve their understanding of the math and statistical concepts addressed and, more importantly, their ability to apply them to solve a biological problem. We also identified areas that need emphasis in both biology and mathematics courses. In light of our observations, we recommend best practices that biology and mathematics academic departments can implement to train undergraduates for the demands of modern biology.

  12. Using Visual Analogies To Teach Introductory Statistical Concepts

    Directory of Open Access Journals (Sweden)

    Jessica S. Ancker

    2017-07-01

    Full Text Available Introductory statistical concepts are some of the most challenging to convey in quantitative literacy courses. Analogies supplemented by visual illustrations can be highly effective teaching tools. This literature review shows that to exploit the power of analogies, teachers must select analogies familiar to the audience, explicitly link the analog with the target concept, and avert misconceptions by explaining where the analogy fails. We provide guidance for instructors and a series of visual analogies for use in teaching medical and health statistics.

  13. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    Science.gov (United States)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers under various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because long processing durations cause performance degradation. This requires testers (the persons who carry out the tests) to know precisely how long a packet is held by the various network nodes. Without any tool's help, this task is time-consuming and error prone. We therefore propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any packet drops.
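
    The holding-duration calculation the tool supports reduces to matching the same packet at two observation points and differencing their synchronized timestamps; the record format below is hypothetical.

        # Per-node packet holding durations from two timestamped captures.
        def holding_durations(ingress, egress):
            """ingress/egress map a packet identifier to a timestamp (s)."""
            return {pid: egress[pid] - t_in
                    for pid, t_in in ingress.items() if pid in egress}

        ingress = {"pkt-1": 10.0021, "pkt-2": 10.0043}
        egress = {"pkt-1": 10.0087, "pkt-2": 10.0191}
        print(holding_durations(ingress, egress))  # time spent inside the node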

  14. Statistical Methods and Tools for Hanford Staged Feed Tank Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Matthew S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brigantic, Robert T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-01

    This report summarizes work conducted by Pacific Northwest National Laboratory to technically evaluate the current approach to staged feed sampling of high-level waste (HLW) sludge to meet waste acceptance criteria (WAC) for transfer from tank farms to the Hanford Waste Treatment and Immobilization Plant (WTP). The current sampling and analysis approach is detailed in the document titled Initial Data Quality Objectives for WTP Feed Acceptance Criteria, 24590-WTP-RPT-MGT-11-014, Revision 0 (Arakali et al. 2011). The goal of this current work is to evaluate and provide recommendations to support a defensible, technical and statistical basis for the staged feed sampling approach that meets WAC data quality objectives (DQOs).

  15. Topology for statistical modeling of petascale data.

    Energy Technology Data Exchange (ETDEWEB)

    Pascucci, Valerio (University of Utah, Salt Lake City, UT); Mascarenhas, Ajith Arthur; Rusek, Korben (Texas A&M University, College Station, TX); Bennett, Janine Camille; Levine, Joshua (University of Utah, Salt Lake City, UT); Pebay, Philippe Pierre; Gyulassy, Attila (University of Utah, Salt Lake City, UT); Thompson, David C.; Rojas, Joseph Maurice (Texas A&M University, College Station, TX)

    2011-07-01

    This document presents current technical progress and dissemination of results for the Mathematics for Analysis of Petascale Data (MAPD) project titled 'Topology for Statistical Modeling of Petascale Data', funded by the Office of Science Advanced Scientific Computing Research (ASCR) Applied Math program. Many commonly used algorithms for mathematical analysis do not scale well enough to accommodate the size or complexity of petascale data produced by computational simulations. The primary goal of this project is thus to develop new mathematical tools that address both the petascale size and uncertain nature of current data. At a high level, our approach is based on the complementary techniques of combinatorial topology and statistical modeling. In particular, we use combinatorial topology to filter out spurious data that would otherwise skew statistical modeling techniques, and we employ advanced algorithms from algebraic statistics to efficiently find globally optimal fits to statistical models. This document summarizes the technical advances we have made to date that were made possible in whole or in part by MAPD funding. These technical contributions can be divided loosely into three categories: (1) advances in the field of combinatorial topology, (2) advances in statistical modeling, and (3) new integrated topological and statistical methods.

  16. When not to copy: female fruit flies use sophisticated public information to avoid mated males

    Science.gov (United States)

    Loyau, Adeline; Blanchet, Simon; van Laere, Pauline; Clobert, Jean; Danchin, Etienne

    2012-10-01

    Semen limitation (lack of semen to fertilize all of a female's eggs) imposes high fitness costs on female partners. Females should therefore avoid mating with semen-limited males. This can be achieved by using public information extracted from watching individual males' previous copulating activities. This adaptive preference should be flexible given that semen limitation is temporary. We first demonstrate that the number of offspring produced by male Drosophila melanogaster gradually decreases over successive copulations. We then show that females avoid mating with males they have just watched copulating and that visual public cues are sufficient to elicit this response. Finally, after males were given the time to replenish their sperm reserves, females no longer avoided the males they had previously seen copulating. These results suggest that female fruit flies may have evolved sophisticated behavioural processes of resistance to semen-limited males, and demonstrate unsuspected adaptive context-dependent mate choice in an invertebrate.

  17. Theoretical study of electronic transitions using simple and sophisticated methods

    Directory of Open Access Journals (Sweden)

    Nelson H. Morgon

    2013-01-01

    In this paper, the use of both simple and sophisticated models in the study of electronic transitions was explored for a set of molecular systems: C2H4, C4H4, C4H6, C6H6, C6H8, "C8", C60, and [H2NCHCH(CHCH)kCHNH2]+, where k = 0 to 4. The simple model of the free particle (1D, 2D, and 3D boxes, rings, or spherical surfaces), considering the boundary conditions, was found to yield results similar to those of sophisticated theoretical methods such as EOM-CCSD/6-311++G** or TD(NStates=5,Root=1)-M06-2X/6-311++G**.

  18. T.I.M.S: TaqMan Information Management System, tools to organize data flow in a genotyping laboratory

    Directory of Open Access Journals (Sweden)

    Albion Tim

    2005-10-01

    Abstract Background Single Nucleotide Polymorphism (SNP) genotyping is a major activity in biomedical research. The TaqMan technology is one of the most commonly used approaches. It produces large amounts of data that are difficult to process by hand. Laboratories not equipped with a Laboratory Information Management System (LIMS) need tools to organize the data flow. Results We propose a package of Visual Basic programs focused on sample management and on the parsing of input and output TaqMan files. The code is written in Visual Basic, embedded in the Microsoft Office package, and it allows anyone to have access to those tools, without any programming skills and with basic computer requirements. Conclusion We have created useful tools focused on the management of TaqMan genotyping data, a critical issue in genotyping laboratories without a more sophisticated and expensive system, such as a LIMS.

  19. Bayesian models based on test statistics for multiple hypothesis testing problems.

    Science.gov (United States)

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
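
    A toy version of the core idea (not the authors' exact model): treat the observed test statistics as a mixture of a null and an alternative density, compute each test's posterior null probability, and reject the largest set whose average posterior null probability stays below the target Bayesian FDR. The mixture parameters are assumed known here; in practice they would be estimated.

        import numpy as np
        from scipy import stats

        def bayesian_fdr_reject(z, pi0=0.9, mu1=3.0, sd1=1.0, alpha=0.05):
            f0 = stats.norm.pdf(z, 0.0, 1.0)             # null density
            f1 = stats.norm.pdf(z, mu1, sd1)             # alternative density
            post_null = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)
            order = np.argsort(post_null)                # most significant first
            cum_fdr = np.cumsum(post_null[order]) / np.arange(1, z.size + 1)
            ok = np.nonzero(cum_fdr <= alpha)[0]         # prefixes within target
            return order[:ok[-1] + 1] if ok.size else order[:0]

        rng = np.random.default_rng(0)
        z = np.concatenate([rng.normal(0, 1, 900), rng.normal(3, 1, 100)])
        print(bayesian_fdr_reject(z).size, "null hypotheses rejected")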

  20. BEST: Next-Generation Biomedical Entity Search Tool for Knowledge Discovery from Biomedical Literature.

    Directory of Open Access Journals (Sweden)

    Sunwon Lee

    As the volume of publications rapidly increases, searching for relevant information from the literature becomes more challenging. To complement standard search engines such as PubMed, it is desirable to have an advanced search tool that directly returns relevant biomedical entities such as targets, drugs, and mutations rather than a long list of articles. Some existing tools submit a query to PubMed and process retrieved abstracts to extract information at query time, resulting in a slow response time and limited coverage of only a fraction of the PubMed corpus. Other tools preprocess the PubMed corpus to speed up the response time; however, they are not constantly updated, and thus produce outdated results. Further, most existing tools cannot process sophisticated queries such as searches for mutations that co-occur with query terms in the literature. To address these problems, we introduce BEST, a biomedical entity search tool. BEST returns, as a result, a list of 10 different types of biomedical entities including genes, diseases, drugs, targets, transcription factors, miRNAs, and mutations that are relevant to a user's query. To the best of our knowledge, BEST is the only system that processes free text queries and returns up-to-date results in real time, including mutation information in the results. BEST is freely accessible at http://best.korea.ac.kr.

  1. Quality assurance tool for organ at risk delineation in radiation therapy using a parametric statistical approach.

    Science.gov (United States)

    Hui, Cheukkai B; Nourzadeh, Hamidreza; Watkins, William T; Trifiletti, Daniel M; Alonso, Clayton E; Dutta, Sunil W; Siebers, Jeffrey V

    2018-02-26

    To develop a quality assurance (QA) tool that identifies inaccurate organ at risk (OAR) delineations. The QA tool computed volumetric features from prior OAR delineation data from 73 thoracic patients to construct a reference database. All volumetric features of the OAR delineation are computed in three-dimensional space. Volumetric features of a new OAR are compared with respect to those in the reference database to discern delineation outliers. A multicriteria outlier detection system warns users of specific delineation outliers based on combinations of deviant features. Fifteen independent experimental sets including automatic, propagated, and clinically approved manual delineation sets were used for verification. The verification OARs included manipulations to mimic common errors. Three experts reviewed the experimental sets to identify and classify errors, first without the QA tool and then, 1 week later, with it. In the cohort of manual delineations with manual manipulations, the QA tool detected 94% of the mimicked errors. Overall, it detected 37% of the minor and 85% of the major errors. The QA tool improved reviewer error detection sensitivity from 61% to 68% for minor errors (P = 0.17), and from 78% to 87% for major errors (P = 0.02). The QA tool assists users to detect potential delineation errors. QA tool integration into clinical procedures may reduce the frequency of inaccurate OAR delineation, and potentially improve safety and quality of radiation treatment planning. © 2018 American Association of Physicists in Medicine.
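
    The statistical core can be pictured as outlier screening of volumetric features against the reference database. A simplified sketch with an invented three-feature set (volume, centroid offset, craniocaudal extent); the tool's actual features and multicriteria rules are richer:

        import numpy as np

        def delineation_warning(reference, new, z_thresh=3.0, min_flags=2):
            """reference: (n_patients, n_features) array; new: (n_features,)."""
            mu = reference.mean(axis=0)
            sd = reference.std(axis=0, ddof=1)
            flags = np.abs((new - mu) / sd) > z_thresh    # deviant features
            return flags, bool(flags.sum() >= min_flags)  # multicriteria warning

        rng = np.random.default_rng(0)
        ref = rng.normal([120.0, 0.0, 45.0], [10.0, 2.0, 5.0], size=(73, 3))
        new = np.array([60.0, 8.5, 46.0])                 # suspicious delineation
        print(delineation_warning(ref, new))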

  2. Statistical Symbolic Execution with Informed Sampling

    Science.gov (United States)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

    Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose informed sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space, and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
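
    Stripped of the symbolic machinery, the statistical layer amounts to Monte Carlo estimation of a reachability probability with a Bayesian posterior. An illustrative stand-in (not Symbolic PathFinder itself); informed sampling would additionally fold in the exactly analyzed, pruned paths as known probability mass:

        import random
        from scipy import stats

        def estimate_reach_probability(sample_path, n_samples=10_000):
            hits = sum(sample_path() for _ in range(n_samples))
            a, b = 1 + hits, 1 + n_samples - hits     # Beta(1,1) prior
            lo, hi = stats.beta.ppf([0.025, 0.975], a, b)
            return hits / n_samples, (lo, hi)         # estimate, 95% interval

        # Toy "program": the target event occurs on roughly 3% of paths.
        random.seed(1)
        print(estimate_reach_probability(lambda: random.random() < 0.03))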

  3. The Systems Biology Research Tool: evolvable open-source software

    Directory of Open Access Journals (Sweden)

    Wright Jeremiah

    2008-06-01

    Abstract Background Research in the field of systems biology requires software for a variety of purposes. Software must be used to store, retrieve, analyze, and sometimes even to collect the data obtained from system-level (often high-throughput) experiments. Software must also be used to implement mathematical models and algorithms required for simulation and theoretical predictions at the system level. Results We introduce a free, easy-to-use, open-source, integrated software platform called the Systems Biology Research Tool (SBRT) to facilitate the computational aspects of systems biology. The SBRT currently performs 35 methods for analyzing stoichiometric networks and 16 methods from fields such as graph theory, geometry, algebra, and combinatorics. New computational techniques can be added to the SBRT via process plug-ins, providing a high degree of evolvability and a unifying framework for software development in systems biology. Conclusion The Systems Biology Research Tool represents a technological advance for systems biology. This software can be used to make sophisticated computational techniques accessible to everyone (including those with no programming ability), to facilitate cooperation among researchers, and to expedite progress in the field of systems biology.

  4. Medical facility statistics in Japan.

    Science.gov (United States)

    Hamajima, Nobuyuki; Sugimoto, Takuya; Hasebe, Ryo; Myat Cho, Su; Khaing, Moe; Kariya, Tetsuyoshi; Mon Saw, Yu; Yamamoto, Eiko

    2017-11-01

    Medical facility statistics provide essential information to policymakers, administrators, academics, and practitioners in the field of health services. In Japan, the Health Statistics Office of the Director-General for Statistics and Information Policy at the Ministry of Health, Labour and Welfare generates these statistics. Although the statistics are widely available in both Japanese and English, the methodology described in the technical reports is primarily in Japanese and is not fully described in English. This article aimed to describe these processes for readers in the English-speaking world. The Health Statistics Office routinely conducts two surveys called the Hospital Report and the Survey of Medical Institutions. The subjects of the former are all the hospitals and clinics with long-term care beds in Japan. It comprises a Patient Questionnaire focusing on the numbers of inpatients, admissions, discharges, and outpatients in one month, and an Employee Questionnaire, which asks about the number of employees as of October 1. The Survey of Medical Institutions consists of the Dynamic Survey, which focuses on the opening and closing of facilities every month, and the Static Survey, which focuses on staff, facilities, and services as of October 1, as well as the number of inpatients as of September 30 and the total number of outpatients during September. All hospitals, clinics, and dental clinics are requested to submit the Static Survey questionnaire every three years. These surveys are useful tools for collecting essential information, as well as providing occasions to implicitly inform facilities of the direction of government policy.

  5. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    Science.gov (United States)

    Fushing, Hsieh; McAssey, Michael P; Beisner, Brianne; McCowan, Brenda

    2011-03-15

    We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.

  6. Ranking network of a captive rhesus macaque society: a sophisticated corporative kingdom.

    Directory of Open Access Journals (Sweden)

    Hsieh Fushing

    2011-03-01

    We develop a three-step computing approach to explore a hierarchical ranking network for a society of captive rhesus macaques. The computed network is sufficiently informative to address the question: Is the ranking network for a rhesus macaque society more like a kingdom or a corporation? Our computations are based on a three-step approach. These steps are devised to deal with the tremendous challenges stemming from the transitivity of dominance as a necessary constraint on the ranking relations among all individual macaques, and the very high sampling heterogeneity in the behavioral conflict data. The first step simultaneously infers the ranking potentials among all network members, which requires accommodation of heterogeneous measurement error inherent in behavioral data. Our second step estimates the social rank for all individuals by minimizing the network-wide errors in the ranking potentials. The third step provides a way to compute confidence bounds for selected empirical features in the social ranking. We apply this approach to two sets of conflict data pertaining to two captive societies of adult rhesus macaques. The resultant ranking network for each society is found to be a sophisticated mixture of both a kingdom and a corporation. Also, for validation purposes, we reanalyze conflict data from twenty longhorn sheep and demonstrate that our three-step approach is capable of correctly computing a ranking network by eliminating all ranking error.

  7. Multivariate statistics: high-dimensional and large-sample approximations

    CERN Document Server

    Fujikoshi, Yasunori; Shimizu, Ryoichi

    2010-01-01

    A comprehensive examination of high-dimensional analysis of multivariate methods and their real-world applications Multivariate Statistics: High-Dimensional and Large-Sample Approximations is the first book of its kind to explore how classical multivariate methods can be revised and used in place of conventional statistical tools. Written by prominent researchers in the field, the book focuses on high-dimensional and large-scale approximations and details the many basic multivariate methods used to achieve high levels of accuracy. The authors begin with a fundamental presentation of the basic

  8. Technical session: the Atomika TXRF tool series

    International Nuclear Information System (INIS)

    Dobler, M. URL: www.atomika.com

    2000-01-01

    ATOMIKA Instruments GmbH holds worldwide competence as a renowned producer of high-performance metrology tools and analytic devices. ATOMIKA's TXRF products are widely accepted for elemental contamination monitoring on semiconductor materials as well as in chemical analysis. More than 100 companies and institutes have their analytical work based on TXRF tools made by ATOMIKA Instruments. ATOMIKA's TXRF 8300W/8200W wafer contamination monitors are the result of an evolution based on a background of 20 years of competence. Built for the semiconductor industry, the TXRF 8300W/8200W detect metal contaminants on 300 mm or 200 mm silicon wafer surfaces with the highest possible sensitivity. Operating under ambient conditions, with a sealed x-ray tube, and having their own minienvironment (FOUP, or SMIF respectively), the TXRF 8300W/8200W are optimally suited for in-line use. Fab automation (GEM/SECS) is supported by predefined measurement recipes and fully automatic routines. High throughput and uptimes, an ergonomic design according to the SEMI standard, plus an unrivaled small footprint of 1.1 m² make the TXRF 8300W/8200W most efficient and economic solutions for industrial wafer monitoring. As the specific tool for multielement trace and thin layer analysis, the ATOMIKA TXRF 8030C provides simultaneous and fast determination of all elements within the range from sodium to uranium. Sophisticated measurement instrumentation provides detection limits down to the ppt range. On the other hand, performance is decisively facilitated by features such as automatic switching of primary radiation, predefined measurement recipes, or software-driven optimization of the entire measurement process. These features make the TXRF 8030C a valuable analytic tool for a wide range of applications: contamination in water, dust or sediments; quantitative screening in the chemical industry; toxic elements in tissues and biological fluids; radioactive elements; process chemicals in the semiconductor industry.

  9. Quantifying traces of tool use: a novel morphometric analysis of damage patterns on percussive tools.

    Directory of Open Access Journals (Sweden)

    Matthew V Caruana

    Percussive technology continues to play an increasingly important role in understanding the evolution of tool use. Comparing the archaeological record with extractive foraging behaviors in nonhuman primates has focused on percussive implements as a key to investigating the origins of lithic technology. Despite this, archaeological approaches towards percussive tools have been obscured by a lack of standardized methodologies. Central to this issue have been the use of qualitative, non-diagnostic techniques to identify percussive tools from archaeological contexts. Here we describe a new morphometric method for distinguishing anthropogenically-generated damage patterns on percussive tools from naturally damaged river cobbles. We employ a geomatic approach through the use of three-dimensional scanning and geographical information systems software to statistically quantify the identification process in percussive technology research. This will strengthen current technological analyses of percussive tools in archaeological frameworks and open new avenues for translating behavioral inferences of early hominins from percussive damage patterns.

  10. Quantifying Traces of Tool Use: A Novel Morphometric Analysis of Damage Patterns on Percussive Tools

    Science.gov (United States)

    Caruana, Matthew V.; Carvalho, Susana; Braun, David R.; Presnyakova, Darya; Haslam, Michael; Archer, Will; Bobe, Rene; Harris, John W. K.

    2014-01-01

    Percussive technology continues to play an increasingly important role in understanding the evolution of tool use. Comparing the archaeological record with extractive foraging behaviors in nonhuman primates has focused on percussive implements as a key to investigating the origins of lithic technology. Despite this, archaeological approaches towards percussive tools have been obscured by a lack of standardized methodologies. Central to this issue have been the use of qualitative, non-diagnostic techniques to identify percussive tools from archaeological contexts. Here we describe a new morphometric method for distinguishing anthropogenically-generated damage patterns on percussive tools from naturally damaged river cobbles. We employ a geomatic approach through the use of three-dimensional scanning and geographical information systems software to statistically quantify the identification process in percussive technology research. This will strengthen current technological analyses of percussive tools in archaeological frameworks and open new avenues for translating behavioral inferences of early hominins from percussive damage patterns. PMID:25415303

  11. Closing Discussion "The Future of Veterans Studies"

    OpenAIRE

    Committee Panel

    2014-01-01

    Our conference theme for 2014 is Humanizing the Discourse, a title that speaks to a two-fold aim. We hope to foster increasingly sophisticated dialogue regarding veterans. This requires recognizing the individual humanity of people who can sometimes be turned into one-dimensional caricatures behind headlines, statistics, and stereotypes. In particular, this year we invited contributors to draw on the tools of the arts, humanities, and social sciences in addressing veterans’ issues and shaping...

  12. Signal detection theory as a tool for successful student selection

    NARCIS (Netherlands)

    Van Ooijen-van Der Linden, Linda; Van Der Smagt, Maarten J.; Woertman, Liesbeth; Te Pas, Susan F.

    2017-01-01

    Prediction accuracy of academic achievement for admission purposes requires adequate sensitivity and specificity of admission tools, yet the available information on the validity and predictive power of admission tools is largely based on studies using correlational and regression statistics. The

  13. Making Your Tools Useful to a Broader Audience

    Science.gov (United States)

    Lyness, M. D.; Broten, M. J.

    2006-12-01

    With the increasing growth of Web Services and SOAP, the ability to connect and reuse computational and visualization tools from all over the world via Web interfaces that can be easily displayed in any current browser has provided the means to construct an ideal online research environment. The age-old question of usability is a major determining factor in whether a particular tool finds great success in its community. An interface that can be understood purely by a user's intuition is desirable and more closely obtainable than ever before. Through the use of increasingly sophisticated web-oriented technologies including JavaScript, AJAX, and the DOM, web interfaces are able to harness the advantages of the Internet along with the functional capabilities of native applications such as menus, partial page changes, background processing, and visual effects, to name a few. Also, with computers becoming a normal part of the educational process, companies such as Google and Microsoft give us a synthetic intuition as a foundation for new designs. Understanding the way earth science researchers know how to use computers will allow the VLab portal (http://vlab.msi.umn.edu) and other projects to create interfaces that will get used. To provide detailed communication with the users of VLab's computational tools, projects like the Porky Portlet (http://www.gorerle.com/vlab-wiki/index.php?title=Porky_Portlet) were spawned to empower users with a fully-detailed, interactive visual representation of progressing workflows. With the well-thought-out design of such tools and interfaces, researchers around the world will become accustomed to new, highly engaging, visual web-based research environments.

  14. COMPARISON OF STATISTICALLY CONTROLLED MACHINING SOLUTIONS OF TITANIUM ALLOYS USING USM

    Directory of Open Access Journals (Sweden)

    R. Singh

    2010-06-01

    The purpose of the present investigation is to compare statistically controlled machining solutions for titanium alloys using ultrasonic machining (USM). In this study, the previously developed Taguchi model for USM of titanium and its alloys has been investigated and compared. Relationships between the material removal rate, tool wear rate, surface roughness and other controllable machining parameters (power rating, tool type, slurry concentration, slurry type, slurry temperature and slurry size) have been deduced. The results of this study suggest that at the best settings of controllable machining parameters for titanium alloys (based upon the Taguchi design), the machining solution with USM is statistically controlled, which is not observed for other settings of input parameters on USM.
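
    For orientation, the Taguchi analysis behind such comparisons scores each response with a signal-to-noise ratio: larger-the-better for material removal rate, smaller-the-better for tool wear and surface roughness. A hedged sketch with synthetic replicates (not the study's data):

        import numpy as np

        def sn_larger_the_better(y):                  # e.g., material removal rate
            return -10 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

        def sn_smaller_the_better(y):                 # e.g., surface roughness
            return -10 * np.log10(np.mean(np.asarray(y) ** 2))

        mrr = [1.2, 1.4, 1.3]                         # replicates at one setting
        ra = [0.80, 0.90, 0.85]
        print(sn_larger_the_better(mrr), sn_smaller_the_better(ra))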

  15. Statistical methods for forecasting

    CERN Document Server

    Abraham, Bovas

    2009-01-01

    The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists. "This book, it must be said, lives up to the words on its advertising cover: 'Bridging the gap between introductory, descriptive approaches and highly advanced theoretical treatises, it provides a practical, intermediate level discussion of a variety of forecasting tools, and explains how they relate to one another, both in theory and practice.' It does just that!" -Journal of the Royal Statistical Society "A well-written work that deals with statistical methods and models that can be used to produce short-term forecasts, this book has wide-ranging applications. It could be used in the context of a study of regression, forecasting, and time series ...

  16. Recent advances in microbial production of fuels and chemicals using tools and strategies of systems metabolic engineering.

    Science.gov (United States)

    Cho, Changhee; Choi, So Young; Luo, Zi Wei; Lee, Sang Yup

    2015-11-15

    The advent of various systems metabolic engineering tools and strategies has enabled more sophisticated engineering of microorganisms for the production of industrially useful fuels and chemicals. Advances in systems metabolic engineering have been made in overproducing natural chemicals and producing novel non-natural chemicals. In this paper, we review the tools and strategies of systems metabolic engineering employed for the development of microorganisms for the production of various industrially useful chemicals belonging to fuels, building block chemicals, and specialty chemicals, in particular focusing on those reported in the last three years. It was aimed at providing the current landscape of systems metabolic engineering and suggesting directions to address future challenges towards successfully establishing processes for the bio-based production of fuels and chemicals from renewable resources. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. CORSSA: The Community Online Resource for Statistical Seismicity Analysis

    Science.gov (United States)

    Michael, Andrew J.; Wiemer, Stefan

    2010-01-01

    Statistical seismology is the application of rigorous statistical methods to earthquake science with the goal of improving our knowledge of how the earth works. Within statistical seismology there is a strong emphasis on the analysis of seismicity data in order to improve our scientific understanding of earthquakes and to improve the evaluation and testing of earthquake forecasts, earthquake early warning, and seismic hazards assessments. Given the societal importance of these applications, statistical seismology must be done well. Unfortunately, a lack of educational resources and available software tools makes it difficult for students and new practitioners to learn about this discipline. The goal of the Community Online Resource for Statistical Seismicity Analysis (CORSSA) is to promote excellence in statistical seismology by providing the knowledge and resources necessary to understand and implement the best practices, so that the reader can apply these methods to their own research. This introduction describes the motivation for and vision of CORSSA. It also describes its structure and contents.

  18. Application of Parallel Hierarchical Matrices and Low-Rank Tensors in Spatial Statistics and Parameter Identification

    KAUST Repository

    Litvinenko, Alexander

    2018-03-12

    Part 1: Parallel H-matrices in spatial statistics: (1) motivation: improve the statistical model; (2) tools: hierarchical matrices; (3) the Matern covariance function and the joint Gaussian likelihood; (4) identification of unknown parameters by maximizing the Gaussian log-likelihood; (5) implementation with HLIBPro. Part 2: Low-rank Tucker tensor methods in spatial statistics.
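
    The Part 1 workflow can be prototyped densely in a few lines; the point of hierarchical matrices is to make the covariance algebra scale, for which plain dense NumPy stands in here. Locations, data, and the smoothness value below are synthetic assumptions:

        import numpy as np
        from scipy.optimize import minimize_scalar
        from scipy.stats import multivariate_normal
        from sklearn.gaussian_process.kernels import Matern

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(200, 2))               # spatial locations
        z = rng.multivariate_normal(np.zeros(200),
                                    Matern(length_scale=0.3, nu=1.5)(X))

        def neg_loglik(ell):                               # Gaussian log-likelihood
            C = Matern(length_scale=ell, nu=1.5)(X) + 1e-8 * np.eye(len(X))
            return -multivariate_normal.logpdf(z, mean=np.zeros(len(X)), cov=C)

        fit = minimize_scalar(neg_loglik, bounds=(0.05, 1.0), method="bounded")
        print("estimated Matern length-scale:", round(fit.x, 3))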

  19. Foundations and applications of statistics: an introduction using R

    CERN Document Server

    Pruim, Randall

    2018-01-01

    Foundations and Applications of Statistics simultaneously emphasizes both the foundational and the computational aspects of modern statistics. Engaging and accessible, this book is useful to undergraduate students with a wide range of backgrounds and career goals. The exposition immediately begins with statistics, presenting concepts and results from probability along the way. Hypothesis testing is introduced very early, and the motivation for several probability distributions comes from p-value computations. Pruim develops the students' practical statistical reasoning through explicit examples and through numerical and graphical summaries of data that allow intuitive inferences before introducing the formal machinery. The topics have been selected to reflect the current practice in statistics, where computation is an indispensable tool. In this vein, the statistical computing environment R is used throughout the text and is integral to the exposition. Attention is paid to developing students' mathem...

  20. Memory-type control charts in statistical process control

    NARCIS (Netherlands)

    Abbas, N.

    2012-01-01

    The control chart is the most important statistical tool for managing business processes. It is a graph of measurements of a quality characteristic of the process on the vertical axis plotted against time on the horizontal axis. The graph is completed with control limits that mark the bounds of expected variation. Once
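
    A minimal individuals-chart illustration of that idea, with synthetic numbers: limits are set from an in-control baseline using the moving-range estimate of sigma, and new points falling outside them are flagged.

        import numpy as np

        baseline = np.array([10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.95])
        center = baseline.mean()
        sigma = np.abs(np.diff(baseline)).mean() / 1.128   # moving-range estimate
        ucl, lcl = center + 3 * sigma, center - 3 * sigma  # control limits

        new = np.array([10.05, 9.9, 11.2])                 # incoming measurements
        flags = (new > ucl) | (new < lcl)
        print(f"UCL={ucl:.2f}, LCL={lcl:.2f}, out of control: {new[flags]}")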

  1. Statistical analysis of aging trend of mechanical properties in ethylene propylene rubber-insulated safety-related cables sampled from containments (Denryoko Chuo Kenkyusho Hokoku, December 2013 issue)

    International Nuclear Information System (INIS)

    Fuse, Norikazu; Kanegami, Masaki; Misaka, Hideki; Homma, Hiroya; Okamoto, Tatsuki

    2013-01-01

    As for the polymeric insulations used in nuclear power plant safety cables, it is known that the present prediction model sometimes estimates the service life conservatively. In order to refine the model to reflect the aging that actually occurs in containments, the disconnects between predictions and reality need to be clarified. In the present paper, a statistical analysis has been carried out on the aging status of various insulations removed from domestic containments. Aging in the operational environment is found to be slower than expected from accelerated aging test results. The temperature dependence of the estimated lifetime on an Arrhenius plot also suggests that the dominant elementary chemical reaction differs between the two aging conditions, which results in an apparent difference in activation energies and pre-exponential factors. Two kinds of issues need to be clarified for this model sophistication: the change with temperature of the predominant degradation chemistry, and the effect of the low oxygen concentration environment in boiling water reactor type containments. (author)
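
    The lifetime model underlying such predictions is Arrhenius-type: ln(lifetime) is linear in 1/(kT), so the fitted activation energy extrapolates accelerated-test lifetimes down to service temperature. A sketch with synthetic numbers (not the report's data):

        import numpy as np

        k_B = 8.617e-5                              # Boltzmann constant, eV/K
        T = np.array([393.0, 408.0, 423.0])         # accelerated aging temps, K
        life = np.array([12000.0, 5000.0, 2200.0])  # lifetimes to end point, h

        Ea, lnA = np.polyfit(1.0 / (k_B * T), np.log(life), 1)
        T_service = 323.0                           # assumed containment temp, K
        predicted = np.exp(lnA + Ea / (k_B * T_service))
        print(f"Ea = {Ea:.2f} eV, extrapolated service life = {predicted:.3g} h")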

  2. Vortex dynamics and Lagrangian statistics in a model for active turbulence.

    Science.gov (United States)

    James, Martin; Wilczek, Michael

    2018-02-14

    Cellular suspensions such as dense bacterial flows exhibit a turbulence-like phase under certain conditions. We study this phenomenon of "active turbulence" statistically by using numerical tools. Following Wensink et al. (Proc. Natl. Acad. Sci. U.S.A. 109, 14308 (2012)), we model active turbulence by means of a generalized Navier-Stokes equation. Two-point velocity statistics of active turbulence, in both the Eulerian and the Lagrangian frames, are explored. We characterize the scale-dependent features of two-point statistics in this system. Furthermore, we extend this statistical study with measurements of vortex dynamics in this system. Our observations suggest that the large-scale statistics of active turbulence is close to Gaussian with sub-Gaussian tails.

  3. Interoperable mesh and geometry tools for advanced petascale simulations

    International Nuclear Information System (INIS)

    Diachin, L; Bauer, A; Fix, B; Kraftcheck, J; Jansen, K; Luo, X; Miller, M; Ollivier-Gooch, C; Shephard, M S; Tautges, T; Trease, H

    2007-01-01

    SciDAC applications have a demonstrated need for advanced software tools to manage the complexities associated with sophisticated geometry, mesh, and field manipulation tasks, particularly as computer architectures move toward the petascale. The Center for Interoperable Technologies for Advanced Petascale Simulations (ITAPS) will deliver interoperable and interchangeable mesh, geometry, and field manipulation services that are of direct use to SciDAC applications. The premise of our technology development goal is to provide such services as libraries that can be used with minimal intrusion into application codes. To develop these technologies, we focus on defining a common data model and data-structure neutral interfaces that unify a number of different services such as mesh generation and improvement, front tracking, adaptive mesh refinement, shape optimization, and solution transfer operations. We highlight the use of several ITAPS services in SciDAC applications

  4. The use of statistical models in heavy-ion reactions studies

    International Nuclear Information System (INIS)

    Stokstad, R.G.

    1984-01-01

    This chapter reviews the use of statistical models to describe nuclear level densities and the decay of equilibrated nuclei. The statistical models of nuclear structure and nuclear reactions presented here have wide application in the analysis of heavy-ion reaction data. Applications are illustrated with examples of gamma-ray decay, the emission of light particles and heavier clusters of nucleons, and fission. In addition to the compound nucleus, the treatment of equilibrated fragments formed in binary reactions is discussed. The statistical model is shown to be an important tool for the identification of products from nonequilibrium decay

  5. Statistical thermodynamics -- A tool for understanding point defects in intermetallic compounds

    International Nuclear Information System (INIS)

    Ipser, H.; Krachler, R.

    1996-01-01

    The principles of the derivation of statistical-thermodynamic models to interpret the compositional variation of thermodynamic properties in non-stoichiometric intermetallic compounds are discussed. Two types of models are distinguished: the Bragg-Williams type, where the total energy of the crystal is taken as the sum of the interaction energies of all nearest-neighbor pairs of atoms, and the Wagner-Schottky type, where the internal energy, the volume, and the vibrational entropy of the crystal are assumed to be linear functions of the numbers of atoms or vacancies on the different sublattices. A Wagner-Schottky type model is used for the description of two examples with different crystal structures: for β'-FeAl (with B2-structure) defect concentrations and their variation with composition are derived from the results of measurements of the aluminum vapor pressure, the resulting values are compared with results of other independent experimental methods; for Rh3Te4 (with an NiAs-derivative structure) the defect mechanism responsible for non-stoichiometry is worked out by application of a theoretical model to the results of tellurium vapor pressure measurements. In addition it is shown that the shape of the activity curve indicates a certain sequence of superstructures. In principle, there are no limitations to the application of statistical thermodynamics to experimental thermodynamic data as long as these are available with sufficient accuracy, and as long as it is ensured that the distribution of the point defects is truly random, i.e. that there are no aggregates of defects

  6. Software Tool Implementing the Fuzzy AHP Method in Ecological Risk Assessment

    Directory of Open Access Journals (Sweden)

    Radionovs Andrejs

    2017-12-01

    Due to the increased spread of invasive animals and plants in the territory of Latvia, the necessity of ecological risk assessment related to such spread has grown lately. In cases with sufficient statistical data, the risk assessment may be successfully performed on the basis of statistical methods. The amount of statistical data in the context of the spread of invasive animals and plants is rather poor; therefore, the only method of ecological risk assessment remains the subjective judgement of experts. The present paper proposes a programming tool for ecological risk analysis developed by the authors. With the help of this programming tool, the method of the Fuzzy Analytic Hierarchy Process (AHP) is implemented. The elements of the pairwise comparison matrix are allowed to be expressed by triangular and trapezoidal fuzzy sets. The presented tool makes it possible to design the fuzzy pairwise comparison matrix and process the results in a user-friendly way.
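
    One common way to turn such a fuzzy pairwise matrix into criterion weights is Buckley's fuzzy geometric-mean method; the sketch below covers triangular numbers (l, m, u) with a made-up 2x2 matrix, and is not necessarily the exact algorithm inside the described tool:

        import numpy as np

        def fuzzy_ahp_weights(M):
            """M: (n, n, 3) array of triangular fuzzy comparisons (l, m, u)."""
            g = np.prod(M, axis=1) ** (1.0 / M.shape[1])   # fuzzy geometric means
            total = g.sum(axis=0)                          # (sum_l, sum_m, sum_u)
            w = g / total[::-1]        # divide (l, m, u) by (sum_u, sum_m, sum_l)
            return w.mean(axis=1)      # centroid defuzzification

        # Criterion 1 judged between equally and moderately more important.
        M = np.array([[[1, 1, 1], [1, 2, 3]],
                      [[1/3, 1/2, 1], [1, 1, 1]]])
        w = fuzzy_ahp_weights(M)
        print(w / w.sum())             # normalized crisp weights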

  7. Analysis and classification of ECG-waves and rhythms using circular statistics and vector strength

    Directory of Open Access Journals (Sweden)

    Janßen Jan-Dirk

    2017-09-01

    The most common way to analyse heart rhythm is to calculate the RR interval and the heart rate variability. For further evaluation, descriptive statistics are often used. Here we introduce a new and more natural heart rhythm analysis tool that is based on circular statistics and vector strength. Vector strength is a tool to measure the periodicity or lack of periodicity of a signal. We divide the signal into non-overlapping window segments and project the detected R-waves around the unit circle using the complex exponential function and the median RR interval. In addition, we calculate the vector strength and apply circular statistics as well as an angular histogram to the R-wave vectors. This approach enables an intuitive visualization and analysis of rhythmicity. Our results show that ECG waves and rhythms can be easily visualized, analysed and classified by circular statistics and vector strength.
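
    Condensed to code, the described projection is only a few lines: take the median RR interval as the period, map each R-wave time to a phase on the unit circle, and report the magnitude of the mean phasor (1 for perfect periodicity, near 0 for none). The R-wave times below are synthetic:

        import numpy as np

        def vector_strength(r_times):
            period = np.median(np.diff(r_times))    # median RR interval
            phases = 2 * np.pi * np.asarray(r_times) / period
            return np.abs(np.mean(np.exp(1j * phases)))

        regular = np.arange(0.0, 10.0, 0.8)         # steady rhythm, seconds
        rng = np.random.default_rng(2)
        jittered = regular + rng.normal(0.0, 0.15, regular.size)
        print(vector_strength(regular), vector_strength(jittered))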

  8. Operation Statistics of the CERN Accelerators Complex for 2003

    CERN Document Server

    CERN. Geneva; Baird, S A; Rey, A; Steerenberg, R; CERN. Geneva. AB Department

    2004-01-01

    This report gives an overview of the performance of the different accelerators (Linacs, PS Booster, PS, AD and SPS) of the CERN Accelerator Complex for 2003. It includes scheduled activities, beam availabilities, beam intensities and an analysis of faults and breakdowns by system and by beam. More information is available via the OP Statistics Tool: http://eLogbook.web.cern.ch/eLogbook/statistics.php and on the SPS home page: http://ab-div-op-sps.web.cern.ch/ab-div-op-sps/SPSss.html

  9. Final Report, DOE Early Career Award: Predictive modeling of complex physical systems: new tools for statistical inference, uncertainty quantification, and experimental design

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2016-08-31

    Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling feasible in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.

  10. Propensity score to detect baseline imbalance in cluster randomized trials: the role of the c-statistic.

    Science.gov (United States)

    Leyrat, Clémence; Caille, Agnès; Foucher, Yohann; Giraudeau, Bruno

    2016-01-22

    Despite randomization, baseline imbalance and confounding bias may occur in cluster randomized trials (CRTs). Covariate imbalance may jeopardize the validity of statistical inferences if it occurs on prognostic factors. Thus, the diagnosis of such an imbalance is essential to adjust the statistical analysis if required. We developed a tool based on the c-statistic of the propensity score (PS) model to detect global baseline covariate imbalance in CRTs and assess the risk of confounding bias. We performed a simulation study to assess the performance of the proposed tool and applied this method to analyze the data from 2 published CRTs. The proposed method had good performance for large sample sizes (n = 500 per arm) and when the number of unbalanced covariates was not too small compared with the total number of baseline covariates (≥40% of unbalanced covariates). We also provide a strategy for preselection of the covariates to include in the PS model to enhance imbalance detection. The proposed tool could be useful in deciding whether covariate adjustment is required before performing statistical analyses of CRTs.
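
    A brief sketch of the diagnostic itself (variable names invented): fit a propensity score model predicting trial arm from baseline covariates and read off its c-statistic, i.e. the ROC AUC; values well above 0.5 point to baseline imbalance that may warrant covariate adjustment.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        X = rng.normal(size=(1000, 5))           # baseline covariates
        arm = rng.integers(0, 2, size=1000)      # randomized arm labels
        X[arm == 1, 0] += 0.4                    # induce imbalance on one covariate

        ps = LogisticRegression(max_iter=1000).fit(X, arm).predict_proba(X)[:, 1]
        print("c-statistic:", round(roc_auc_score(arm, ps), 3))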

  11. Uncertainty Quantification and Statistical Engineering for Hypersonic Entry Applications

    Science.gov (United States)

    Cozmuta, Ioana

    2011-01-01

    NASA has invested significant resources in developing and validating a mathematical construct for TPS margin management: a) Tailorable for low/high reliability missions; b) Tailorable for ablative/reusable TPS; c) Uncertainty Quantification and Statistical Engineering are valuable tools not exploited enough; and d) Need to define strategies combining both Theoretical Tools and Experimental Methods. The main reason for this lecture is to give a flavor of where UQ and SE could contribute and hope that the broader community will work with us to improve in these areas.

  12. Statistical analysis of dynamic parameters of the core

    International Nuclear Information System (INIS)

    Ionov, V.S.

    2007-01-01

    Transients of various types were investigated for the cores of zero-power critical facilities at RRC KI and at NPPs. The dynamic parameters of the neutron transients were explored by statistical analysis tools. The records have sufficient duration and comprise a few channels for chamber currents and reactivity, as well as some channels for technological parameters. From these values the inverse period, reactivity, neutron lifetime, reactivity coefficients and some reactivity effects are determined, and the values of the measured dynamic parameters were reconstructed as a result of the analysis. The following mathematical means of statistical analysis were used: approximation (A), filtration (F), rejection (R), estimation of descriptive statistics parameters (DSP), correlation characteristics (KK), regression analysis (KP), prognosis (P) and statistical criteria (SC). The calculation procedures were implemented in MATLAB. The sources of methodical and statistical errors are presented: inadequacy of the model, the precision of the neutron-physical parameters, features of the registered processes, the mathematical model used in reactivity meters, the technique for processing the registered data, etc. Examples of the results of the statistical analysis are given. Problems of the validity of the methods used for the definition and certification of the values of statistical parameters and dynamic characteristics are considered (Authors)

  13. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  14. Information flows at OS level unmask sophisticated Android malware

    OpenAIRE

    Viet Triem Tong, Valérie; Trulla, Aurélien; Leslous, Mourad; Lalande, Jean-François

    2017-01-01

    The detection of new Android malware is far from being a relaxing job. Indeed, each day new Android malware appear on the market, and it remains difficult to identify them quickly. Unfortunately, users still suffer from the lack of truly efficient tools able to detect zero-day malware that has no known signature. The difficulty is that most of the existing approaches rely on static analysis, coupled with the ability of malware to hide their malicious code. Thus, we believe that i...

  15. Introducing SONS, a Tool for Operational Taxonomic Unit-Based Comparisons of Microbial Community Memberships and Structures

    OpenAIRE

    Schloss, Patrick D.; Handelsman, Jo

    2006-01-01

    The recent advent of tools enabling statistical inferences to be drawn from comparisons of microbial communities has enabled the focus of microbial ecology to move from characterizing biodiversity to describing the distribution of that biodiversity. Although statistical tools have been developed to compare community structures across a phylogenetic tree, we lack tools to compare the memberships and structures of two communities at a particular operational taxonomic unit (OTU) definition. Furt...

  16. GoCxx: a tool to easily leverage C++ legacy code for multicore-friendly Go libraries and frameworks

    International Nuclear Information System (INIS)

    Binet, Sébastien

    2012-01-01

    Current HENP libraries and frameworks were written before multicore systems became widely deployed and used. From this environment, a ‘single-thread’ processing model naturally emerged, but the implicit assumptions it encouraged are greatly impairing our ability to scale in a multicore/manycore world. Writing scalable code in C++ for multicore architectures, while doable, is no panacea. Sure, C++11 will improve on the current situation (by standardizing on std::thread, introducing lambda functions and defining a memory model), but it will do so at the price of further complicating an already quite sophisticated language. This level of sophistication has probably already strongly motivated analysis groups to migrate to CPython, hoping for its current limitations with respect to multicore scalability to be either lifted (Global Interpreter Lock removal) or for the advent of a new Python VM better tailored for this kind of environment (PyPy, Jython, …). Could HENP migrate to a language with none of the deficiencies of C++ (build time, deployment, low level tools for concurrency) and with the fast turn-around time, simplicity and ease of coding of Python? This paper will try to make the case for Go - a young open source language with built-in facilities to easily express and expose concurrency - being such a language. We introduce GoCxx, a tool leveraging gcc-xml's output to automate the tedious work of creating Go wrappers for foreign languages, a critical task for any language wishing to leverage legacy and field-tested code. We will conclude with the first results of applying GoCxx to real C++ code.

  17. Academic Training Lecture: Statistical Methods for Particle Physics

    CERN Multimedia

    PH Department

    2012-01-01

    2, 3, 4 and 5 April 2012 Academic Training Lecture  Regular Programme from 11:00 to 12:00 -  Bldg. 222-R-001 - Filtration Plant Statistical Methods for Particle Physics by Glen Cowan (Royal Holloway) The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena.  Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties.  The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  18. Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508

    International Nuclear Information System (INIS)

    Hayek, A; Al Bokhaiti, M; Schwarz, M H; Boercsoek, J

    2012-01-01

    With the publication and enforcement of the standard IEC 61508 for safety-related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures for the evaluation of the safety integrity level (SIL) of safety-related systems, measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and the mean time to failure (MTTF) in conformance with IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4 (one-out-of-four) architecture presented in recent work. Therefore, sophisticated calculations for the required parameters are introduced. The provided 1oo4 architecture represents an advanced safety architecture based on on-chip redundancy, which is 3-failure safe. This means that at least one of the four channels has to work correctly in order to trigger the safety function.
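
    For orientation, a simplified IEC 61508-style average PFD for a 1-out-of-N channel group, neglecting common-cause failures, diagnostics and repair (which the paper's full treatment covers), is PFD ~ (lambda_DU * T_I)^N / (N + 1):

        def pfd_1oon(lambda_du, t_i, n):
            """lambda_du: dangerous undetected failure rate (1/h);
            t_i: proof-test interval (h); n: redundancy (4 for 1oo4)."""
            return (lambda_du * t_i) ** n / (n + 1)

        # Toy parameters, not the paper's: 1e-6/h failure rate, annual proof test.
        print(pfd_1oon(1e-6, 8760, 4))   # about 1.2e-9 for this parameter set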

  19. HistFitter software framework for statistical data analysis

    CERN Document Server

    Baak, M.; Côte, D.; Koutsman, A.; Lorenz, J.; Short, D.

    2015-01-01

    We present a software framework for statistical data analysis, called HistFitter, that has been used extensively by the ATLAS Collaboration to analyze big datasets originating from proton-proton collisions at the Large Hadron Collider at CERN. Since 2012 HistFitter has been the standard statistical tool in searches for supersymmetric particles performed by ATLAS. HistFitter is a programmable and flexible framework to build, book-keep, fit, interpret and present results of data models of nearly arbitrary complexity. Starting from an object-oriented configuration, defined by users, the framework builds probability density functions that are automatically fitted to data and interpreted with statistical tests. A key innovation of HistFitter is its design, which is rooted in core analysis strategies of particle physics. The concepts of control, signal and validation regions are woven into its very fabric. These are progressively treated with statistically rigorous built-in methods. Being capable of working with mu...

  20. SoS contract verification using statistical model checking

    Directory of Open Access Journals (Sweden)

    Alessandro Mignogna

    2013-11-01

    Full Text Available Exhaustive formal verification for systems of systems (SoS) is impractical and cannot be applied on a large scale. In this paper we propose to use statistical model checking for efficient verification of SoS. We address three relevant aspects for systems of systems: (1) the model of the SoS, which includes stochastic aspects; (2) the formalization of the SoS requirements in the form of contracts; (3) the tool-chain to support statistical model checking for SoS. We adapt the SMC technique for application to heterogeneous SoS. We extend the UPDM/SysML specification language to express the SoS requirements that the implemented strategies over the SoS must satisfy. The requirements are specified with a new contract language specifically designed for SoS, targeting a high-level English-pattern language but relying on an accurate semantics given by the standard temporal logics. The contracts are verified against the UPDM/SysML specification using the Statistical Model Checker (SMC) PLASMA combined with the simulation engine DESYRE, which integrates heterogeneous behavioral models through the Functional Mock-up Interface (FMI) standard. The tool-chain allows computing an estimation of the satisfiability of the contracts by the SoS. The results help the system architect to trade off different solutions to guide the evolution of the SoS.
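
    The core statistical model checking idea, estimating the probability that a contract holds from independent simulation runs with a Chernoff-Hoeffding sample-size bound, can be sketched generically (plain Python with a hypothetical toy system; this is not the PLASMA/DESYRE tool-chain):

        # Generic illustration of statistical model checking: estimate the
        # probability that a stochastic system satisfies a contract by
        # simulating N independent runs.
        import math
        import random

        def simulate_run():
            # Placeholder stochastic system: a random walk that must stay
            # below a threshold for 100 steps (hypothetical contract).
            x = 0.0
            for _ in range(100):
                x += random.gauss(0.0, 1.0)
                if x > 15.0:
                    return False   # contract violated
            return True

        def smc_estimate(epsilon=0.01, delta=0.05):
            # Chernoff-Hoeffding bound: N runs give an estimate within
            # +/- epsilon of the true probability with confidence 1 - delta.
            n = math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))
            successes = sum(simulate_run() for _ in range(n))
            return successes / n, n

        p_hat, n = smc_estimate()
        print(f"P(contract satisfied) ~ {p_hat:.3f} from {n} runs")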

  1. Statistical uncertainties and unrecognized relationships

    International Nuclear Information System (INIS)

    Rankin, J.P.

    1985-01-01

    Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures

  2. Lightweight and Statistical Techniques for PetaScale Debugging

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which had already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques, originally developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project, to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis.

  3. Analysis of room transfer function and reverberant signal statistics

    DEFF Research Database (Denmark)

    Georganti, Eleftheria; Mourjopoulos, John; Jacobsen, Finn

    2008-01-01

    For some time now, statistical analysis has been a valuable tool in analyzing room transfer functions (RTFs). This work examines existing statistical time-frequency models and techniques for RTF analysis (e.g., Schroeder's stochastic model and the standard deviation over frequency bands for the RTF magnitude and phase). RTF fractional octave smoothing, as with 1/3-octave analysis, may lead to RTF simplifications that can be useful for several audio applications, such as room compensation, room modeling, and auralisation purposes. The aim of this work is to identify the relationship of the optimal response and the corresponding ratio of the direct and reverberant signal. In addition, this work examines the statistical quantities for speech and audio signals prior to their reproduction within rooms and when recorded in rooms. Histograms and other statistical distributions are used to compare RTF minima of typical

  4. Comparative Investigation on Tool Wear during End Milling of AISI H13 Steel with Different Tool Path Strategies

    Science.gov (United States)

    Adesta, Erry Yulian T.; Riza, Muhammad; Avicena

    2018-03-01

    Tool wear prediction plays a significant role in the machining industry for proper planning and control of machining parameters and optimization of cutting conditions. This paper aims to investigate the effect of two tool path strategies, contour-in and zigzag, on tool wear during the pocket milling process. The experiments were carried out on a CNC vertical machining centre using PVD-coated carbide inserts. Cutting speed, feed rate and depth of cut were set to vary. For this experiment with three factors at three levels, a Response Surface Methodology (RSM) design of experiments with a standard Central Composite Design (CCD) was employed. The results obtained indicate that tool wear increases significantly at the higher range of feed per tooth compared to cutting speed and depth of cut. This result of the experimental work was then confirmed statistically by developing an empirical model. The prediction model for the response variable of tool wear for the contour-in strategy developed in this research shows good agreement with the experimental work.

  5. A physical tool for severe accident mitigation studies

    Energy Technology Data Exchange (ETDEWEB)

    Marie, N., E-mail: nathalie.marie@cea.fr [CEA, DEN, DER, F-13108 Saint Paul Lez Durance (France); Bachrata, A. [CEA, DEN, DER, F-13108 Saint Paul Lez Durance (France); Seiler, J.M. [CEA, DEN, DTN, F-38054 Grenoble (France); Barjot, F. [EDF R&D, SINETICS, F-93141 Clamart (France); Marrel, A. [CEA, DEN, DER, F-13108 Saint Paul Lez Durance (France); Gossé, S. [CEA, DEN, DPC, F-91191 Gif Sur Yvette (France); Bertrand, F. [CEA, DEN, DER, F-13108 Saint Paul Lez Durance (France)

    2016-12-01

    Highlights: • Physical tool for mitigation studies devoted to SFR safety. • Physical models to describe the material discharge from the core. • Comparison to SIMMER-III results. • Studies for ASTRID safety assessment and support to core design. - Abstract: Within the framework of the Generation IV Sodium-cooled Fast Reactor (SFR) R&D program of CEA, the core behavior in case of severe accidents is being assessed. Such transients are usually simulated with mechanistic codes (such as SIMMER-III). As a complement to this code, which gives the reference accidental transient, a physico-statistical approach is currently followed, its final objective being to derive the variability of the main results of interest for safety. This approach involves a fast-running simulation of extended accident sequences coupling low-dimensional physical models to advanced statistical analysis techniques. In this context, this paper presents such a low-dimensional physical tool (models and simulation results) dedicated to molten core material discharge. This 0D tool handles heat transfers from molten (possibly boiling) pools, fuel crust evolution, phase separation/mixing of fuel/steel pools, radial thermal erosion of mitigation tubes, discharge of core materials and associated axial thermal erosion of mitigation tubes. All modules are coupled with a global neutronic evolution model of the degraded core. This physical tool is used to study and to define mitigation features (function of tubes devoted to mitigation inside the core, impact of absorbers falling into the degraded core…) to avoid energetic core recriticality during a secondary phase of a potential severe accident. In the future, this physical tool, associated with statistical treatment of the effect of uncertainties, would enable sensitivity analysis studies. This physical tool is described before presenting its comparison against SIMMER-III code results, including a space- and energy-dependent neutron transport kinetic

  6. A MORET tool to assist code bias estimation

    International Nuclear Information System (INIS)

    Fernex, F.; Richet, Y.; Letang, E.

    2003-01-01

    This new Graphical User Interface (GUI), developed in JAVA, is one of the post-processing tools for the MORET4 code. It aims to help users estimate the importance of the k_eff bias due to the code in order to better define the upper safety limit. Moreover, it allows visualizing the distance between an actual configuration case and evaluated critical experiments. This tool depends on a database of validated experiments, on sets of physical parameters and on various statistical tools allowing interpolation of the calculation bias over the database or display of the projections of experiments on a reduced base of parameters. The development of this tool is still in progress. (author)

  7. Challenges in dental statistics: survey methodology topics

    Directory of Open Access Journals (Sweden)

    Giuseppe Pizzo

    2013-12-01

    Full Text Available This paper gathers some contributions concerning survey methodology in dental research, as discussed during the first Workshop of the SISMEC STATDENT working group on statistical methods and applications in dentistry, held in Ancona on the 28th of September 2011. The first contribution deals with the European Global Oral Health Indicators Development (EGOHID) Project, which proposed a comprehensive and standardized system of epidemiological tools (questionnaires and clinical forms) for national data collection on oral health in Europe. The second contribution regards the design and conduct of trials to evaluate the clinical efficacy and safety of toothbrushes and mouthrinses. Finally, a flexible and effective tool used to trace dental age reference charts tailored to Italian children is presented.

  8. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract Background: It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome-scale scanning. Results: We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of the statistical significance of cis-regulatory module sites. Conclusion: The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the STORM (Search Tool for Occurrences of Regulatory Motifs) and MODSTORM software.
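
    The first step described above, scoring a single binding-site match against an empirical null distribution, can be illustrated with a toy sketch (hypothetical position weight matrix and a uniform background; this is not the STORM implementation):

        # Toy sketch: empirical p-value for a single binding-site match score.
        import random

        BASES = "ACGT"
        # Hypothetical position weight matrix: log-odds score per base.
        PWM = [{"A": 1.2, "C": -0.8, "G": -0.8, "T": 0.1},
               {"A": -0.5, "C": 1.0, "G": -0.5, "T": 0.0},
               {"A": 0.2, "C": -1.0, "G": 1.1, "T": -0.9}]

        def score(site):
            return sum(col[base] for col, base in zip(PWM, site))

        def empirical_p_value(observed, n_null=100_000):
            # Null model: sites drawn uniformly from the background alphabet.
            width = len(PWM)
            hits = sum(
                score("".join(random.choice(BASES) for _ in range(width))) >= observed
                for _ in range(n_null))
            return (hits + 1) / (n_null + 1)   # add-one to avoid p = 0

        print(f"p-value of site 'ACG': {empirical_p_value(score('ACG')):.4f}")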

  9. OPTHYLIC: An Optimised Tool for Hybrid Limits Computation

    Science.gov (United States)

    Busato, Emmanuel; Calvet, David; Theveneaux-Pelzer, Timothée

    2018-05-01

    A software tool, computing observed and expected upper limits on Poissonian process rates using a hybrid frequentist-Bayesian CLs method, is presented. This tool can be used for simple counting experiments where only signal, background and observed yields are provided or for multi-bin experiments where binned distributions of discriminating variables are provided. It allows the combination of several channels and takes into account statistical and systematic uncertainties, as well as correlations of systematic uncertainties between channels. It has been validated against other software tools and analytical calculations, for several realistic cases.
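
    The hybrid frequentist-Bayesian CLs method for the simple counting case can be illustrated with a toy pseudo-experiment scan (our own sketch with assumed yields; not OPTHYLIC's code or API):

        # Toy illustration of the hybrid frequentist-Bayesian CLs idea for a
        # single counting experiment. The background uncertainty is
        # marginalized by sampling ("Bayesian"); the limit itself is set
        # frequentist-style with pseudo-experiments.
        import numpy as np

        rng = np.random.default_rng(42)

        def cls(s, n_obs, b_mean, b_sigma, n_toys=20_000):
            # Hybrid part: fluctuate the background within its uncertainty.
            b = np.clip(rng.normal(b_mean, b_sigma, n_toys), 0.0, None)
            n_sb = rng.poisson(s + b)    # toys under signal + background
            n_b = rng.poisson(b)         # toys under background only
            cl_sb = np.mean(n_sb <= n_obs)
            cl_b = np.mean(n_b <= n_obs)
            return cl_sb / cl_b if cl_b > 0 else 0.0

        def upper_limit(n_obs, b_mean, b_sigma, alpha=0.05):
            # Coarse scan for the 95% CL upper limit on the signal yield.
            for s in np.arange(0.0, 30.0, 0.1):
                if cls(s, n_obs, b_mean, b_sigma) < alpha:
                    return s
            return np.inf

        limit = upper_limit(n_obs=5, b_mean=3.0, b_sigma=0.5)
        print(f"95% CLs upper limit: s < {limit:.1f}")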

  10. Can Low Frequency Measurements Be Good Enough? - A Statistical Assessment of Citizen Hydrology Streamflow Observations

    Science.gov (United States)

    Davids, J. C.; Rutten, M.; Van De Giesen, N.

    2016-12-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and relatively accurate but expensive monitoring equipment at limited numbers of sites. Consequently, the spatial coverage of the data is limited and costs are high. Achieving adequate maintenance of sophisticated monitoring equipment often exceeds local technical and resource capacity, and permanently deployed monitoring equipment is susceptible to vandalism, theft, and other hazards. Rather than using expensive, vulnerable installations at a few points, SmartPhones4Water (S4W), a form of Citizen Hydrology, leverages widely available mobile technology to gather hydrologic data at many sites in a manner that is repeatable and scalable. However, there is currently a limited understanding of the impact of decreased observational frequency on the accuracy of key streamflow statistics like minimum flow, maximum flow, and runoff. As a first step towards evaluating the tradeoffs between traditional continuous monitoring approaches and emerging Citizen Hydrology methods, we randomly selected 50 active U.S. Geological Survey (USGS) streamflow gauges in California. We used historical 15-minute flow data from 01/01/2008 through 12/31/2014 to develop minimum flow, maximum flow, and runoff values (7-year totals) for each gauge. In order to mimic lower-frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, along with their respective distributions, from 50 subsample iterations with four different subsampling intervals (i.e., daily, three-day, weekly, and monthly). Based on our results we conclude that, depending on the types of questions being asked and the watershed characteristics, Citizen Hydrology streamflow measurements can provide useful and accurate information. Depending on watershed characteristics, minimum flows were reasonably estimated with subsample intervals ranging from
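
    The subsampling procedure described above can be sketched as follows (synthetic flow series and simplified statistics, not the USGS data or the authors' code):

        # Sketch of the subsampling experiment: take a high-frequency flow
        # series, repeatedly subsample it at a lower frequency, and compare
        # the resulting statistics with the "true" high-frequency values.
        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic 15-minute streamflow for one year (96 samples/day).
        t = np.arange(365 * 96)
        flow = 10 + 5 * np.sin(2 * np.pi * t / (365 * 96)) + rng.gamma(2.0, 1.0, t.size)

        def subsample_stats(series, interval, n_iter=50):
            # Bootstrap-style subsampling: random offset, fixed interval.
            mins, maxs = [], []
            for _ in range(n_iter):
                start = rng.integers(0, interval)
                sub = series[start::interval]
                mins.append(sub.min())
                maxs.append(sub.max())
            return np.mean(mins), np.mean(maxs)

        print(f"true min/max: {flow.min():.1f} / {flow.max():.1f}")
        for label, step in [("daily", 96), ("weekly", 96 * 7), ("monthly", 96 * 30)]:
            mn, mx = subsample_stats(flow, step)
            print(f"{label:>8}: mean subsampled min/max = {mn:.1f} / {mx:.1f}")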

  11. Applied regression analysis a research tool

    CERN Document Server

    Pantula, Sastry; Dickey, David

    1998-01-01

    Least squares estimation, when used appropriately, is a powerful research tool. A deeper understanding of the regression concepts is essential for achieving optimal benefits from a least squares analysis. This book builds on the fundamentals of statistical methods and provides appropriate concepts that will allow a scientist to use least squares as an effective research tool. Applied Regression Analysis is aimed at the scientist who wishes to gain a working knowledge of regression analysis. The basic purpose of this book is to develop an understanding of least squares and related statistical methods without becoming excessively mathematical. It is the outgrowth of more than 30 years of consulting experience with scientists and many years of teaching an applied regression course to graduate students. Applied Regression Analysis serves as an excellent text for a service course on regression for non-statisticians and as a reference for researchers. It also provides a bridge between a two-semester introduction to...

  12. Nuclear medicine statistics

    International Nuclear Information System (INIS)

    Martin, P.M.

    1977-01-01

    Numerical description of medical and biologic phenomena is proliferating. Laboratory studies on patients now yield measurements of at least a dozen indices, each with its own normal limits. Within nuclear medicine, numerical analysis as well as numerical measurement and the use of computers are becoming more common. While the digital computer has proved to be a valuable tool for the measurement and analysis of imaging and radioimmunoassay data, it has created more work in that users now ask for more detailed calculations and for indices that measure the reliability of quantified observations. The following material is presented with the intention of providing a straightforward methodology to determine values for some useful parameters and to estimate the errors involved. The process used is that of asking relevant questions and then providing answers by illustrations. It is hoped that this will help the reader avoid an error of the third kind, that is, the error of statistical misrepresentation or inadvertent deception. This occurs most frequently in cases where the right answer is found to the wrong question. The purposes of this chapter are: (1) to provide some relevant statistical theory, using a terminology suitable for the nuclear medicine field; (2) to demonstrate the application of a number of statistical methods to the kinds of data commonly encountered in nuclear medicine; (3) to provide a framework to assist the experimenter in choosing the method and the questions most suitable for the experiment at hand; and (4) to present a simple approach for a quantitative quality control program for scintillation cameras and other radiation detectors.

  13. Distortion in fingerprints: a statistical investigation using shape measurement tools.

    Science.gov (United States)

    Sheets, H David; Torres, Anne; Langenburg, Glenn; Bush, Peter J; Bush, Mary A

    2014-07-01

    Friction ridge impression appearance can be affected due to the type of surface touched and pressure exerted during deposition. Understanding the magnitude of alterations, regions affected, and systematic/detectable changes occurring would provide useful information. Geometric morphometric techniques were used to statistically characterize these changes. One hundred and fourteen prints were obtained from a single volunteer and impressed with heavy, normal, and light pressure on computer paper, soft gloss paper, 10-print card stock, and retabs. Six hundred prints from 10 volunteers were rolled with heavy, normal, and light pressure on soft gloss paper and 10-print card stock. Results indicate that while different substrates/pressure levels produced small systematic changes in fingerprints, the changes were small in magnitude: roughly the width of one ridge. There were no detectable changes in the degree of random variability of prints associated with either pressure or substrate. In conclusion, the prints transferred reliably regardless of pressure or substrate. © 2014 American Academy of Forensic Sciences.

  14. RADSS: an integration of GIS, spatial statistics, and network service for regional data mining

    Science.gov (United States)

    Hu, Haitang; Bao, Shuming; Lin, Hui; Zhu, Qing

    2005-10-01

    Regional data mining, which aims at the discovery of knowledge about spatial patterns, clusters or associations between regions, is nowadays widely applied in the social sciences, such as sociology, economics, epidemiology and criminology. Many applications in the regional and other social sciences are more concerned with spatial relationships than with precise geographical locations. Based on the spatial continuity rule derived from Tobler's first law of geography - observations at two sites tend to be more similar to each other if the sites are close together than if far apart - spatial statistics, as an important means of spatial data mining, allows users to extract interesting and useful information such as spatial pattern, spatial structure, spatial association, spatial outliers and spatial interaction from vast amounts of spatial or non-spatial data. Therefore, by integrating spatial statistical methods, geographical information systems will become more powerful in gaining further insights into the nature of the spatial structure of regional systems, and will help researchers to be more careful when selecting appropriate models. However, the lack of such tools holds back the application of spatial data analysis techniques and the development of new methods and models (e.g., spatio-temporal models). Herein, we make an attempt to develop such integrated software and apply it to complex system analysis for the Poyang Lake Basin. This paper presents a framework for integrating GIS, spatial statistics and network service in regional data mining, as well as their implementation. After discussing the spatial statistics methods involved in regional complex system analysis, we introduce RADSS (Regional Analysis and Decision Support System), our new regional data mining tool, by integrating GIS, spatial statistics and network service. RADSS includes the functions of spatial data visualization, exploratory spatial data analysis, and

  15. An 'electronic' extramural course in epidemiology and medical statistics.

    Science.gov (United States)

    Ostbye, T

    1989-03-01

    This article describes an extramural university course in epidemiology and medical statistics taught using a computer conferencing system, microcomputers and data communications. Computer conferencing was shown to be a powerful, yet quite easily mastered, vehicle for distance education. It allows health personnel who are unable to attend regular classes due to geographical or time constraints to take part in an interactive learning environment at low cost. This overcomes part of the intellectual and social isolation associated with traditional correspondence courses. The teaching of epidemiology and medical statistics is well suited to computer conferencing, even if the asynchronicity of the medium makes discussion of the most complex statistical concepts a little cumbersome. Computer conferencing may also prove to be a useful tool for teaching other medical and health-related subjects.

  16. A new formalism for nonextensive physical systems: Tsallis thermostatistics

    International Nuclear Information System (INIS)

    Tirnakli, U.; Bueyuekkilic, F.; Demirhan, D.

    1999-01-01

    Although Boltzmann-Gibbs (BG) statistics provides a suitable tool which enables us to handle a large number of physical systems satisfactorily, it has some basic restrictions. Recently a nonextensive thermostatistics has been proposed by C. Tsallis to handle nonextensive physical systems and, up to now, besides the generalization of some of the conventional concepts, the formalism has been successful in a number of physical applications. In this study, our effort is to introduce Tsallis thermostatistics in some detail and to emphasize its achievements on physical systems by noting the recent developments along this line.
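
    For reference, the entropic form at the heart of the formalism (the standard textbook definition, not reproduced from this record) can be written as

        S_q = k \, \frac{1 - \sum_i p_i^{\,q}}{q - 1},
        \qquad
        \lim_{q \to 1} S_q = -k \sum_i p_i \ln p_i ,

    so that the Boltzmann-Gibbs entropy is recovered as q -> 1, while for q != 1 the entropy is nonextensive: for independent subsystems A and B, S_q(A+B) = S_q(A) + S_q(B) + (1-q) S_q(A) S_q(B) / k.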

  17. Statistical Mechanics of Turbulent Flows

    International Nuclear Information System (INIS)

    Cambon, C

    2004-01-01

    counterparts at the molecular level. In addition, equations are given for multicomponent reacting systems. The chapter ends with miscellaneous topics, including DNS, (the idea of) the energy cascade, and RANS. Chapter 5 is devoted to stochastic models for the large scales of turbulence. Langevin-type models for velocity (and particle position) are presented, and their various consequences for second-order single-point correlations (Reynolds stress components, Kolmogorov constant) are discussed. These models are then presented for the scalar. The chapter ends with compressible high-speed flows and various models, ranging from k-ε to hybrid RANS-pdf. Stochastic models for small-scale turbulence are addressed in chapter 6. These models are based on the concept of a filtered density function (FDF) for the scalar, and a more conventional SGS (sub-grid-scale) model for the velocity in LES. The final chapter, chapter 7, is entitled 'The unification of turbulence models' and aims at reconciling large-scale and small-scale modelling. This book offers a timely survey of techniques in modern computational fluid mechanics for turbulent flows with reacting scalars. It should be of interest to engineers, while the discussion of the underlying tools, namely pdfs, stochastic and statistical equations, should also be attractive to applied mathematicians and physicists. The book's emphasis on local pdfs and stochastic Langevin models gives a consistent structure to the book and allows the author to cover almost the whole spectrum of practical modelling in turbulent CFD. On the other hand, one might regret that non-local issues are not mentioned explicitly, or even briefly. These problems range from the presence of pressure-strain correlations in the Reynolds stress transport equations to the presence of two-point pdfs in the single-point pdf equation derived from the Navier-Stokes equations. (One may recall that, even without scalar transport, a general closure problem for turbulence statistics

  18. Statistical methods and computing for big data

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
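
    The online-updating idea for stream data can be illustrated with the classic one-pass update of the mean and variance (Welford's algorithm; the article's extension to variable selection is not shown):

        # Minimal sketch of online updating: summary statistics are updated
        # one observation at a time, so the full dataset never needs to fit
        # in memory.
        class OnlineMeanVar:
            def __init__(self):
                self.n = 0
                self.mean = 0.0
                self.m2 = 0.0   # running sum of squared deviations

            def update(self, x):
                self.n += 1
                delta = x - self.mean
                self.mean += delta / self.n
                self.m2 += delta * (x - self.mean)

            @property
            def variance(self):
                return self.m2 / (self.n - 1) if self.n > 1 else float("nan")

        stream = OnlineMeanVar()
        for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
            stream.update(x)
        print(f"mean={stream.mean:.2f} var={stream.variance:.2f}")  # 5.00, 4.57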

  19. Statistical methods and computing for big data.

    Science.gov (United States)

    Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing; Yan, Jun

    2016-01-01

    Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with focuses on the open source R and R packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay.

  20. The clinical value of large neuroimaging data sets in Alzheimer's disease.

    Science.gov (United States)

    Toga, Arthur W

    2012-02-01

    Rapid advances in neuroimaging and cyberinfrastructure technologies have brought explosive growth in the Web-based warehousing, availability, and accessibility of imaging data on a variety of neurodegenerative and neuropsychiatric disorders and conditions. There has been a prolific development and emergence of complex computational infrastructures that serve as repositories of databases and provide critical functionalities such as sophisticated image analysis algorithm pipelines and powerful three-dimensional visualization and statistical tools. The statistical and operational advantages of collaborative, distributed team science in the form of multisite consortia push this approach in a diverse range of population-based investigations. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Nonequilibrium Statistical Operator Method and Generalized Kinetic Equations

    Science.gov (United States)

    Kuzemsky, A. L.

    2018-01-01

    We consider some principal problems of nonequilibrium statistical thermodynamics in the framework of the Zubarev nonequilibrium statistical operator approach. We present a brief comparative analysis of some approaches to describing irreversible processes based on the concept of nonequilibrium Gibbs ensembles and their applicability to describing nonequilibrium processes. We discuss the derivation of generalized kinetic equations for a system in a heat bath. We obtain and analyze a damped Schrödinger-type equation for a dynamical system in a heat bath. We study the dynamical behavior of a particle in a medium taking the dissipation effects into account. We consider the scattering problem for neutrons in a nonequilibrium medium and derive a generalized Van Hove formula. We show that the nonequilibrium statistical operator method is an effective, convenient tool for describing irreversible processes in condensed matter.

  2. Challenges in dental statistics: survey methodology topics

    OpenAIRE

    Pizzo, Giuseppe; Milani, Silvano; Spada, Elena; Ottolenghi, Livia

    2013-01-01

    This paper gathers some contributions concerning survey methodology in dental research, as discussed during the first Workshop of the SISMEC STATDENT working group on statistical methods and applications in dentistry, held in Ancona on the 28th September 2011.The first contribution deals with the European Global Oral Health Indicators Development (EGOHID) Project which proposed a comprehensive and standardized system of epidemiological tools (questionnaires and clinical forms) for national da...

  3. The art of data analysis how to answer almost any question using basic statistics

    CERN Document Server

    Jarman, Kristin H

    2013-01-01

    A friendly and accessible approach to applying statistics in the real world. With an emphasis on critical thinking, The Art of Data Analysis: How to Answer Almost Any Question Using Basic Statistics presents fun and unique examples, guides readers through the entire data collection and analysis process, and introduces basic statistical concepts along the way. Leaving proofs and complicated mathematics behind, the author portrays the more engaging side of statistics and emphasizes its role as a problem-solving tool. In addition, light-hearted case studies

  4. Renyi statistics in equilibrium statistical mechanics

    International Nuclear Information System (INIS)

    Parvan, A.S.; Biro, T.S.

    2010-01-01

    The Renyi statistics in the canonical and microcanonical ensembles is examined, both in general and in particular for the ideal gas. In the microcanonical ensemble the Renyi statistics is equivalent to the Boltzmann-Gibbs statistics. Using the exact analytical results for the ideal gas, it is shown that in the canonical ensemble, taking the thermodynamic limit, the Renyi statistics is also equivalent to the Boltzmann-Gibbs statistics. Furthermore, it satisfies the requirements of equilibrium thermodynamics, i.e. the thermodynamical potential of the statistical ensemble is a homogeneous function of first degree of its extensive variables of state. We conclude that the Renyi statistics arrives at the same thermodynamical relations as those stemming from the Boltzmann-Gibbs statistics in this limit.

  5. Codifference as a practical tool to measure interdependence

    Science.gov (United States)

    Wyłomańska, Agnieszka; Chechkin, Aleksei; Gajda, Janusz; Sokolov, Igor M.

    2015-03-01

    Correlation and spectral analysis represent the standard tools to study interdependence in statistical data. However, for stochastic processes with heavy-tailed distributions, such that the variance diverges, these tools are inadequate. Heavy-tailed processes are ubiquitous in nature and finance. We here discuss the codifference as a convenient measure to study statistical interdependence, and we aim to give a short introductory review of its properties. Taking different known stochastic processes as generic examples, we present explicit formulas for their codifferences. We show that for Gaussian processes the codifference is equivalent to the covariance. For processes with finite variance these two measures behave similarly with time. For processes with infinite variance the covariance does not exist; the codifference, however, remains well defined. We demonstrate the practical importance of the codifference by extracting this function from simulated as well as real data taken from the turbulent plasma of a fusion device and from a financial market. We conclude that the codifference serves as a convenient practical tool to study interdependence for stochastic processes with both infinite and finite variances.
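
    An empirical estimator built directly from the definition of the codifference via empirical characteristic functions can be sketched as follows (a plain-NumPy illustration under the usual definition CD(X,Y) = ln E[e^{i(X-Y)}] - ln E[e^{iX}] - ln E[e^{-iY}]; not the authors' code):

        # Empirical codifference from empirical characteristic functions.
        import numpy as np

        def codifference(x, y):
            # ecf(z) estimates E[exp(i*z)] over the sample z.
            ecf = lambda z: np.mean(np.exp(1j * z))
            return np.real(np.log(ecf(x - y)) - np.log(ecf(x)) - np.log(ecf(-y)))

        rng = np.random.default_rng(1)
        # Correlated Gaussian sample: codifference should approach covariance.
        cov = [[1.0, 0.6], [0.6, 1.0]]
        x, y = rng.multivariate_normal([0, 0], cov, size=200_000).T
        print(f"codifference ~ {codifference(x, y):.3f} (covariance 0.6)")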

  6. QInfer: Statistical inference software for quantum applications

    Directory of Open Access Journals (Sweden)

    Christopher Granade

    2017-04-01

    Full Text Available Characterizing quantum systems through experimental data is critical to applications as diverse as metrology and quantum computing. Analyzing this experimental data in a robust and reproducible manner is made challenging, however, by the lack of readily-available software for performing principled statistical analysis. We improve the robustness and reproducibility of characterization by introducing an open-source library, QInfer, to address this need. Our library makes it easy to analyze data from tomography, randomized benchmarking, and Hamiltonian learning experiments either in post-processing, or online as data is acquired. QInfer also provides functionality for predicting the performance of proposed experimental protocols from simulated runs. By delivering easy-to-use characterization tools based on principled statistical analysis, QInfer helps address many outstanding challenges facing quantum technology.
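
    The style of analysis QInfer automates can be conveyed by a deliberately library-free sketch (plain NumPy with a hypothetical experiment schedule; this is not QInfer's actual API): sequential Bayesian estimation of a qubit precession frequency from binary outcomes with likelihood P(1 | omega, t) = sin^2(omega*t/2).

        # Grid-based sequential Bayesian inference for a precession frequency.
        import numpy as np

        rng = np.random.default_rng(7)
        true_omega = 0.7

        grid = np.linspace(0.0, 1.0, 1000)            # discretized parameter space
        posterior = np.ones_like(grid) / grid.size    # uniform prior

        for t in np.linspace(1.0, 50.0, 40):          # experiment schedule (assumed)
            p1 = np.sin(true_omega * t / 2) ** 2
            outcome = rng.random() < p1               # simulated measurement
            like = np.sin(grid * t / 2) ** 2
            posterior *= like if outcome else (1 - like)
            posterior /= posterior.sum()              # Bayes update, renormalized

        est = np.sum(grid * posterior)
        sd = np.sqrt(np.sum((grid - est) ** 2 * posterior))
        print(f"omega = {est:.3f} +/- {sd:.3f} (true {true_omega})")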

  7. Introducing SONS, a tool for operational taxonomic unit-based comparisons of microbial community memberships and structures.

    Science.gov (United States)

    Schloss, Patrick D; Handelsman, Jo

    2006-10-01

    The recent advent of tools enabling statistical inferences to be drawn from comparisons of microbial communities has enabled the focus of microbial ecology to move from characterizing biodiversity to describing the distribution of that biodiversity. Although statistical tools have been developed to compare community structures across a phylogenetic tree, we lack tools to compare the memberships and structures of two communities at a particular operational taxonomic unit (OTU) definition. Furthermore, current tests of community structure do not indicate the similarity of the communities but only report the probability of a statistical hypothesis. Here we present a computer program, SONS, which implements nonparametric estimators for the fraction and richness of OTUs shared between two communities.

  8. Using the Signal Tools and Statistical Tools to Redefine the 24 Solar Terms in Peasant Calendar by Analyzing Surface Temperature and Precipitation

    Science.gov (United States)

    Huang, J. Y.; Tung, C. P.

    2017-12-01

    There is an important book called the "Peasant Calendar" in Chinese society. The Peasant Calendar is originally based on the orbit of the Sun, and each year is divided into 24 solar terms. Each term has its own special meaning and conception. For example, "Spring Begins" means the end of winter and the beginning of spring. In Taiwan, the 24 solar terms play an important role in agriculture because farmers use the Peasant Calendar to decide when to sow. However, each solar term in Taiwan is currently fixed at about 15 days. This convention neither captures the temporal variability of the climate nor truly reflects the regional climate characteristics of different areas. The number of days in each solar term should be more flexible. Since weather is associated with climate, all weather phenomena can be regarded as a signal composed of multiple fluctuations. In this research, 30 years of observation data of surface temperature and precipitation from 1976 to 2016 are used. The data are cut into different time series, such as a week, a month, six months to one year, and so on. Signal analysis tools such as wavelets, change-point analysis and the Fourier transform are used to determine the length of each solar term. After determining the days of each solar term, statistical tests are used to find the relationships between the length of the solar terms and climate variability (e.g., ENSO and PDO). For example, one of the solar terms, called "Major Heat", should typically last more than 20 days in Taiwan due to global warming and the heat island effect. An improved Peasant Calendar can help farmers make better decisions, control crop schedules and use farmland more efficiently. For instance, warmer conditions can accelerate the accumulation of accumulated temperature, which is the key to the crop's growth stages. The results can also be used for disaster reduction (e.g., preventing agricultural damage) and water resources projects.

  9. Bayesian models: A statistical primer for ecologists

    Science.gov (United States)

    Hobbs, N. Thompson; Hooten, Mevin B.

    2015-01-01

    Bayesian modeling has become an indispensable tool for ecological research because it is uniquely suited to deal with complexity in a statistically coherent way. This textbook provides a comprehensive and accessible introduction to the latest Bayesian methods, in language ecologists can understand. Unlike other books on the subject, this one emphasizes the principles behind the computations, giving ecologists a big-picture understanding of how to implement this powerful statistical approach. Bayesian Models is an essential primer for non-statisticians. It begins with a definition of probability and develops a step-by-step sequence of connected ideas, including basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and inference from single and multiple models. This unique book places less emphasis on computer coding, favoring instead a concise presentation of the mathematical statistics needed to understand how and why Bayesian analysis works. It also explains how to write out properly formulated hierarchical Bayesian models and use them in computing, research papers, and proposals. This primer enables ecologists to understand the statistical principles behind Bayesian modeling and apply them to research, teaching, policy, and management. It presents the mathematical and statistical foundations of Bayesian modeling in language accessible to non-statisticians; covers basic distribution theory, network diagrams, hierarchical models, Markov chain Monte Carlo, and more; deemphasizes computer coding in favor of basic principles; and explains how to write out properly factored statistical expressions representing Bayesian models.

  10. UPPAAL-SMC: Statistical Model Checking for Priced Timed Automata

    DEFF Research Database (Denmark)

    Bulychev, Petr; David, Alexandre; Larsen, Kim Guldstrand

    2012-01-01

    on a series of extensions of the statistical model checking approach generalized to handle real-time systems and estimate undecidable problems. UPPAAL-SMC comes together with a friendly user interface that allows a user to specify complex problems in an efficient manner as well as to get feedback in the form of probability distributions and to compare probabilities to analyze performance aspects of systems. The focus of the survey is on the evolution of the tool – including modeling and specification formalisms as well as techniques applied – together with applications of the tool to case studies.

  11. The Use of Statistical Process Control Tools for Analysing Financial Statements

    Directory of Open Access Journals (Sweden)

    Niezgoda Janusz

    2017-06-01

    Full Text Available This article presents the proposed application of one type of modified Shewhart control chart to the monitoring of changes in the aggregated level of financial ratios. The x̅ control chart has been used as the basis of the analysis. The variable examined in this chart is the arithmetic mean of the sample. The author proposes to substitute it with a synthetic measure that is determined from the selected ratios. As the ratios mentioned above are expressed in different units and have different characters, the author applies standardisation. The results of selected comparative analyses are presented for both bankrupts and non-bankrupts. They indicate the possibility of using control charts as an auxiliary tool in financial analyses.
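
    The control-chart logic described above can be sketched with synthetic data (illustrative ratios and a simple equal-weight synthetic measure; the article's specific ratio weighting is not reproduced):

        # Sketch of an x-bar control chart on a standardized synthetic measure.
        import numpy as np

        rng = np.random.default_rng(3)
        # Rows: reporting periods; columns: three financial ratios (synthetic).
        ratios = rng.normal([1.5, 0.4, 2.0], [0.3, 0.1, 0.5], size=(24, 3))

        # Standardize each ratio, then aggregate into one synthetic measure.
        z = (ratios - ratios.mean(axis=0)) / ratios.std(axis=0, ddof=1)
        synthetic = z.mean(axis=1)

        center = synthetic.mean()
        sigma = synthetic.std(ddof=1)
        ucl, lcl = center + 3 * sigma, center - 3 * sigma   # 3-sigma limits

        for i, value in enumerate(synthetic, start=1):
            flag = " <-- signal" if not (lcl <= value <= ucl) else ""
            print(f"period {i:2d}: {value:+.2f}{flag}")
        print(f"CL={center:+.2f}  UCL={ucl:+.2f}  LCL={lcl:+.2f}")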

  12. PROSA: A computer program for statistical analysis of near-real-time-accountancy (NRTA) data

    International Nuclear Information System (INIS)

    Beedgen, R.; Bicking, U.

    1987-04-01

    The computer program PROSA (Program for Statistical Analysis of NRTA Data) is a tool to decide, on the basis of statistical considerations, whether or not a loss of material might have occurred in a given sequence of materials balance periods. The evaluation of the material balance data is based on statistical test procedures. In PROSA, three truncated sequential tests are applied to a sequence of material balances. The manual describes the statistical background of PROSA and how to use the computer program on an IBM PC with DOS 3.1. (orig.)

  13. INNOVATIVE APPROACH TO EDUCATION AND TEACHING OF STATISTICS

    Directory of Open Access Journals (Sweden)

    Andrea Jindrová

    2010-06-01

    Full Text Available Educational and tutorial programs are developing together with the changing world of information technology; adapting to and accepting new possibilities and needs is a necessary course. The use of online learning tools can amplify our teaching resources and create new types of learning opportunities that did not exist in the pre-Internet age. The world is full of information, which needs to be constantly updated. Virtualisation of study materials enables us to update and manage them quickly and easily. As an advantage, we see an asynchronous approach towards learning materials that can be tailored to students' needs and adjusted according to their time and availability. The specificity of statistical learning lies in the various statistical software programs. The high technical demands of these programs require tutorials (instructional presentations), which can help students learn how to use them efficiently. An instructional presentation may be understood as a demonstration of how a statistical software program works. This is one of the options that students may use to simplify control of, and navigation through, a statistical system. Thanks to instructional presentations, students will be able to transfer their theoretical statistical knowledge into practical situations and real life and, therefore, improve their personal development process. The goal of this tutorial is to show an innovative approach to the teaching of statistics at the Czech University of Life Sciences. The use of presentations and their benefits for students was evaluated according to results obtained from a questionnaire survey completed by students of the 4th grade of the Faculty of Economics and Management. The aim of this pilot survey was to evaluate the benefits of these instructional presentations and the students' interest in using them. The information obtained was used as essential data for the evaluation of the efficiency of this new approach. Firstly

  14. Econometric Assessment of "One Minute" Paper as a Pedagogic Tool

    Science.gov (United States)

    Das, Amaresh

    2010-01-01

    This paper presents an econometric test of the one-minute paper used as a tool to manage and assess instruction in my statistics class. One of our findings is that the one-minute paper, when tested using an OLS estimate in a controlled vs. experimental design framework, is found to be statistically significant and effective in enhancing…

  15. Reporting and analyzing statistical uncertainties in Monte Carlo-based treatment planning

    International Nuclear Information System (INIS)

    Chetty, Indrin J.; Rosu, Mihaela; Kessler, Marc L.; Fraass, Benedick A.; Haken, Randall K. ten; Kong, Feng-Ming; McShan, Daniel L.

    2006-01-01

    Purpose: To investigate methods of reporting and analyzing statistical uncertainties in doses to targets and normal tissues in Monte Carlo (MC)-based treatment planning. Methods and Materials: Methods for quantifying statistical uncertainties in dose, such as uncertainty specification to specific dose points, or to volume-based regions, were analyzed in MC-based treatment planning for 5 lung cancer patients. The effect of statistical uncertainties on target and normal tissue dose indices was evaluated. The concept of uncertainty volume histograms for targets and organs at risk was examined, along with its utility, in conjunction with dose volume histograms, in assessing the acceptability of the statistical precision in dose distributions. The uncertainty evaluation tools were extended to four-dimensional planning for application on multiple instances of the patient geometry. All calculations were performed using the Dose Planning Method MC code. Results: For targets, generalized equivalent uniform doses and mean target doses converged at 150 million simulated histories, corresponding to relative uncertainties of less than 2% in the mean target doses. For the normal lung tissue (a volume-effect organ), mean lung dose and normal tissue complication probability converged at 150 million histories despite the large range in the relative organ uncertainty volume histograms. For 'serial' normal tissues such as the spinal cord, large fluctuations exist in point dose relative uncertainties. Conclusions: The tools presented here provide useful means for evaluating statistical precision in MC-based dose distributions. Tradeoffs between uncertainties in doses to targets, volume-effect organs, and 'serial' normal tissues must be considered carefully in determining acceptable levels of statistical precision in MC-computed dose distributions
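
    One common way to quantify such statistical uncertainty, the batch method, can be sketched as follows (toy per-history depositions, not the DPM code or the paper's patient data):

        # Batch-method sketch: uncertainty of a Monte Carlo dose estimate at
        # one voxel, from the spread of independent batch means.
        import numpy as np

        rng = np.random.default_rng(5)

        def mc_dose(n_histories):
            # Stand-in for per-history energy depositions in one voxel.
            return rng.exponential(1.0, n_histories)

        n_batches, per_batch = 10, 100_000
        batch_means = np.array([mc_dose(per_batch).mean() for _ in range(n_batches)])

        dose = batch_means.mean()
        # Standard error of the mean over independent batches.
        sigma = batch_means.std(ddof=1) / np.sqrt(n_batches)
        print(f"dose = {dose:.4f} +/- {sigma:.4f} ({100 * sigma / dose:.2f}%)")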

  16. Statistical inference and visualization in scale-space for spatially dependent images

    KAUST Repository

    Vaughan, Amy; Jun, Mikyoung; Park, Cheolwoo

    2012-01-01

    SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests

  17. A κ-generalized statistical mechanics approach to income analysis

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2009-02-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
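
    For orientation, the κ-exponential underlying the formalism (standard Kaniadakis definition; the paper's exact parametrization may differ in notation) is

        \exp_\kappa(x) = \left( \sqrt{1 + \kappa^2 x^2} + \kappa x \right)^{1/\kappa},
        \qquad 0 \le \kappa < 1,

    which reduces to the ordinary exponential as κ -> 0. A κ-generalized income model of the type described uses it in the complementary cumulative distribution P(X > x) = \exp_\kappa(-\beta x^\alpha), which behaves exponentially for low and middle incomes and crosses over to a Pareto power law in the upper tail.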

  18. A κ-generalized statistical mechanics approach to income analysis

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2009-01-01

    This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low–middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful

  19. A Divergence Statistics Extension to VTK for Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pebay, Philippe Pierre [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bennett, Janine Camille [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report follows the series of previous documents ([PT08, BPRT09b, PT09, BPT09, PT10, PB13]), where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
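
    The kind of discrepancy the engine quantifies can be illustrated with the Kullback-Leibler divergence between a binned empirical distribution and a theoretical one (plain-Python sketch; the VTK engine's own divergence choices may differ):

        # KL divergence between an empirical histogram and a standard normal.
        import numpy as np
        from math import erf, sqrt

        rng = np.random.default_rng(11)
        observed = rng.normal(0.1, 1.1, 10_000)      # empirical sample

        bins = np.linspace(-5, 5, 41)
        p, _ = np.histogram(observed, bins=bins)
        p = p / p.sum()

        # Theoretical ("ideal") bin probabilities for a standard normal.
        cdf = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
        q = np.diff([cdf(b) for b in bins])
        q = q / q.sum()

        mask = p > 0                                  # convention: 0*log(0) = 0
        kl = np.sum(p[mask] * np.log(p[mask] / q[mask]))
        print(f"D_KL(empirical || N(0,1)) = {kl:.4f} nats")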

  20. Growth Curve Models and Applications : Indian Statistical Institute

    CERN Document Server

    2017-01-01

    Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas.   There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally r...

  1. Excel 2016 for engineering statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching engineering statistics effectively. Similar to the previously published Excel 2013 for Engineering Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical engineering problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in engineering courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Engineering Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and...

  2. Excel 2016 for business statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This book shows the capabilities of Microsoft Excel in teaching business statistics effectively. Similar to the previously published Excel 2010 for Business Statistics, this book is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical business problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in business courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Business Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each ch...

  3. Excel 2016 for marketing statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2016-01-01

    This is the first book to show the capabilities of Microsoft Excel in teaching marketing statistics effectively. It is a step-by-step exercise-driven guide for students and practitioners who need to master Excel to solve practical marketing problems. If understanding statistics isn’t your strongest suit, you are not especially mathematically-inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in marketing courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2016 for Marketing Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs the reader t...

  4. Excel 2013 for engineering statistics a guide to solving practical problems

    CERN Document Server

    Quirk, Thomas J

    2015-01-01

    This is the first book to show the capabilities of Microsoft Excel to teach engineering statistics effectively. It is a step-by-step, exercise-driven guide for students and practitioners who need to master Excel to solve practical engineering problems. If understanding statistics isn’t your strongest suit, if you are not especially mathematically inclined, or if you are wary of computers, this is the right book for you. Excel, a widely available computer program for students and managers, is also an effective teaching and learning tool for quantitative analyses in engineering courses. Its powerful computational ability and graphical functions make learning statistics much easier than in years past. However, Excel 2013 for Engineering Statistics: A Guide to Solving Practical Problems is the first book to capitalize on these improvements by teaching students and managers how to apply Excel to statistical techniques necessary in their courses and work. Each chapter explains statistical formulas and directs...

  5. Statistical approach for collaborative tests, reference material certification procedures

    International Nuclear Information System (INIS)

    Fangmeyer, H.; Haemers, L.; Larisse, J.

    1977-01-01

    The first part introduces the different aspects of organizing and executing intercomparison tests of chemical or physical quantities. This is followed by a description of a statistical procedure for handling the data collected in a circular analysis. Finally, an example demonstrates how the tool can be applied and which conclusions can be drawn from the results obtained
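
    Although the record does not spell out its statistical procedure, a common scoring approach in collaborative and proficiency tests is to compare each laboratory's result against a robust consensus value. The sketch below illustrates that idea with robust z-scores; the laboratory values, the 2/3 flagging limits, and the use of the normalized IQR are illustrative assumptions, not the procedure from this report.

```python
# Minimal sketch of a common scoring approach for interlaboratory
# (collaborative) tests: robust z-scores against a consensus value.
# Hypothetical lab results for one measurand; not from the report.
import numpy as np

def robust_z_scores(results):
    """Score each laboratory against the median consensus value,
    using the normalized IQR as a robust spread estimate."""
    results = np.asarray(results, dtype=float)
    consensus = np.median(results)
    q25, q75 = np.percentile(results, [25, 75])
    niqr = 0.7413 * (q75 - q25)  # normalized IQR ~ robust std. dev.
    return (results - consensus) / niqr

labs = [10.2, 9.8, 10.1, 10.0, 11.6, 9.9]   # hypothetical reported values
for lab, z in enumerate(robust_z_scores(labs), start=1):
    flag = "OK" if abs(z) <= 2 else ("questionable" if abs(z) <= 3 else "unsatisfactory")
    print(f"Lab {lab}: z = {z:+.2f} ({flag})")
```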

  6. Methodological difficulties of conducting agroecological studies from a statistical perspective

    DEFF Research Database (Denmark)

    Bianconi, A.; Dalgaard, Tommy; Manly, Bryan F J

    2013-01-01

    Statistical methods for analysing agroecological data might not be able to help agroecologists to solve all of the current problems concerning crop and animal husbandry, but such methods could well help agroecologists to assess, tackle, and resolve several agroecological issues in a more reliable...... and accurate manner. Therefore, our goal in this paper is to discuss the importance of statistical tools for alternative agronomic approaches, because alternative approaches, such as organic farming, should not only be promoted by encouraging farmers to deploy agroecological techniques, but also by providing...

  7. Statistical analysis of dragline monitoring data

    Energy Technology Data Exchange (ETDEWEB)

    Mirabediny, H.; Baafi, E.Y. [University of Tehran, Tehran (Iran)]

    1998-07-01

    Dragline monitoring systems are normally the best tool used to collect data on the machine performance and operational parameters of a dragline operation. This paper discusses results of a time study using data from a dragline monitoring system captured over a four month period. Statistical summaries of the time study in terms of average values, standard deviation and frequency distributions showed that the mode of operation and the geological conditions have a significant influence on the dragline performance parameters. 6 refs., 14 figs., 3 tabs.
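
    As an illustration of the kind of time-study summary described above, the following sketch computes averages, standard deviations, and a frequency distribution of cycle times grouped by operating mode. The column names and values are hypothetical; real monitoring systems record many more parameters.

```python
# Minimal sketch of a dragline time-study summary: means, standard
# deviations, and frequency distributions per operating mode.
# All field names and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "mode":        ["side-cast", "side-cast", "chop-cut", "chop-cut", "side-cast"],
    "cycle_time":  [55.2, 61.0, 72.5, 69.8, 58.3],   # seconds per cycle
    "swing_angle": [95, 110, 140, 135, 100],          # degrees
})

# Average and spread of each performance parameter, per operating mode.
summary = records.groupby("mode").agg(["mean", "std"])
print(summary)

# Frequency distribution of cycle times (counts in 10 s bins).
bins = pd.cut(records["cycle_time"], bins=range(50, 90, 10))
print(bins.value_counts().sort_index())
```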

  8. Storytelling, statistics and hereditary thought: the narrative support of early statistics.

    Science.gov (United States)

    López-Beltrán, Carlos

    2006-03-01

    This paper's main contention is that some basically methodological developments in science which are apparently distant and unrelated can be seen as part of a sequential story. Focusing on general inferential and epistemological matters, the paper links occurrences separated in both time and space, connected by formal and representational issues rather than social or disciplinary links. It focuses on a few limited aspects of several cognitive practices in medical and biological contexts separated by geography, disciplines and decades, but connected by long-term transdisciplinary representational and inferential structures and constraints. The paper intends to show that a given set of knowledge claims, based on organizing empirical data statistically, can be seen to have been underpinned by a previous, more familiar, and probably more natural, narrative handling of similar evidence. To achieve that, this paper moves from medicine in France in the late eighteenth and early nineteenth century to the second half of the nineteenth century in England among gentleman naturalists, following its subject: the shift from narrative depiction of hereditary transmission of physical peculiarities to posterior statistical articulations of the same phenomena. Some early defenders of heredity as an important (if not the most important) causal presence in the understanding of life adopted singular narratives, in the form of case stories from medical and natural history traditions, to flesh out a special kind of causality peculiar to heredity. This work tries to reconstruct historically the rationale that drove the use of such narratives. It then shows that when this rationale was methodologically challenged, its basic narrative and probabilistic underpinnings were transferred to the statistical quantificational tools that took their place.

  9. Understanding Statistics - Cancer Statistics

    Science.gov (United States)

    Annual reports of U.S. cancer statistics including new cases, deaths, trends, survival, prevalence, lifetime risk, and progress toward Healthy People targets, plus statistical summaries for a number of common cancer types.

  10. Simulation and statistical analysis for the optimization of nitrogen liquefaction plant with cryogenic Claude cycle using process modeling tool: ASPEN HYSYS

    International Nuclear Information System (INIS)

    Joshi, D.M.

    2017-01-01

    Cryogenic technology is used for the liquefaction of many gases and has several applications in food process engineering. Temperatures below 123 K are considered to be in the field of cryogenics. Extremely low temperatures are a basic need for many industrial processes and have several applications, such as superconductivity of magnets, space, medicine and gas industries. Several methods can be used to obtain the low temperatures required for liquefaction of gases. The basic liquefaction process is cooling or refrigerating a gas to a temperature below its critical temperature so that liquid can be formed at some suitable pressure below the critical pressure. Different cryogenic cycle configurations are designed to obtain liquefied gases at different temperatures. Each of the cryogenic cycles, such as the Linde cycle, Claude cycle, Kapitza cycle or modified Claude cycle, has its own advantages and disadvantages. The placement of heat exchangers, the Joule-Thomson valve and the turboexpander determines the configuration of a cryogenic cycle, and each configuration has its own efficiency according to the application. Here, a nitrogen liquefaction plant is used for the analysis. The process modeling tool ASPEN HYSYS provides a software simulation approach before the actual implementation of the plant in the field. This paper presents the simulation and statistical analysis of the Claude cycle with the process modeling tool ASPEN HYSYS, covering the technique used to optimize the liquefaction of the plant. The simulation results so obtained can be used as a reference for the design and optimization of the nitrogen liquefaction plant. Efficient liquefaction will give the best performance and productivity to the plant.
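
    The record relies on ASPEN HYSYS, which is proprietary, so no HYSYS code is shown here. As a rough stand-in for the underlying energy balance, the sketch below evaluates the textbook liquid-yield expression for an ideal Claude cycle; the enthalpy values and expander fraction are hypothetical placeholders, not nitrogen property data, which a real analysis would pull from a property package such as the one HYSYS provides.

```python
# Hedged sketch: classic energy-balance liquid yield of an ideal Claude
# cycle, y = (h1-h2)/(h1-hf) + x*(h3-he)/(h1-hf). All enthalpy values
# below are hypothetical placeholders (kJ/kg).

def claude_liquid_yield(h1, h2, h3, he, hf, x):
    """Fraction of compressed gas liquefied per pass.

    h1: compressor inlet (ambient P, ambient T)
    h2: compressor outlet (high P, ambient T)
    h3: expander inlet, he: expander outlet
    hf: saturated liquid
    x:  mass fraction diverted through the turboexpander
    """
    base = (h1 - h2) / (h1 - hf)          # Linde (Joule-Thomson) contribution
    expander = x * (h3 - he) / (h1 - hf)  # extra cooling from expansion work
    return base + expander

# Hypothetical numbers purely to show the shape of the calculation:
y = claude_liquid_yield(h1=450.0, h2=430.0, h3=380.0, he=300.0, hf=29.0, x=0.6)
print(f"liquid yield per unit mass compressed: {y:.3f}")
```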

  11. Simulation and statistical analysis for the optimization of nitrogen liquefaction plant with cryogenic Claude cycle using process modeling tool: ASPEN HYSYS

    Science.gov (United States)

    Joshi, D. M.

    2017-09-01

    Cryogenic technology is used for the liquefaction of many gases and has several applications in food process engineering. Temperatures below 123 K are considered to be in the field of cryogenics. Extremely low temperatures are a basic need for many industrial processes and have several applications, such as superconductivity of magnets, space, medicine and gas industries. Several methods can be used to obtain the low temperatures required for liquefaction of gases. The basic liquefaction process is cooling or refrigerating a gas to a temperature below its critical temperature so that liquid can be formed at some suitable pressure below the critical pressure. Different cryogenic cycle configurations are designed to obtain liquefied gases at different temperatures. Each of the cryogenic cycles, such as the Linde cycle, Claude cycle, Kapitza cycle or modified Claude cycle, has its own advantages and disadvantages. The placement of heat exchangers, the Joule-Thomson valve and the turboexpander determines the configuration of a cryogenic cycle, and each configuration has its own efficiency according to the application. Here, a nitrogen liquefaction plant is used for the analysis. The process modeling tool ASPEN HYSYS provides a software simulation approach before the actual implementation of the plant in the field. This paper presents the simulation and statistical analysis of the Claude cycle with the process modeling tool ASPEN HYSYS, covering the technique used to optimize the liquefaction of the plant. The simulation results so obtained can be used as a reference for the design and optimization of the nitrogen liquefaction plant. Efficient liquefaction will give the best performance and productivity to the plant.

  12. Statistical analysis of natural radiation levels inside the UNICAMP campus through the use of Geiger-Muller counter

    International Nuclear Information System (INIS)

    Fontolan, Juliana A.; Biral, Antonio Renato P.

    2013-01-01

    It is known that the distribution of time intervals between random, unrelated events leads to the Poisson distribution. This work aims to study the distribution in time intervals of events resulting from the radioactive decay of atoms present in UNICAMP environments where activities involving the use of ionizing radiation are performed. The proposal is to survey the distribution of these events at different locations across the university using a Geiger-Muller tube. In a next step, the distributions obtained are evaluated using non-parametric statistics (chi-square and Kolmogorov-Smirnov tests). For analyses involving correlations, we intend to use the ANOVA (Analysis of Variance) statistical tool. Measurements were performed at six different places within the Campinas campus using a Geiger-Muller counter in count mode with a time window of 20 seconds. Through the chi-square and Kolmogorov-Smirnov statistical tests, carried out in the EXCEL program, it was observed that the distributions do in fact follow a Poisson distribution. Finally, the next step is to perform analyses involving correlations using the statistical tool ANOVA
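
    A minimal version of the goodness-of-fit check described above can be written as follows. The counts are simulated rather than taken from the UNICAMP survey, and scipy stands in for the EXCEL implementation the authors used; a fuller analysis would also pool sparse tail bins before applying the chi-square test.

```python
# Minimal sketch of the record's goodness-of-fit check: do counts per
# fixed time window follow a Poisson distribution? Data are simulated,
# not the UNICAMP measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
counts = rng.poisson(lam=4.2, size=500)   # simulated counts per 20 s window

lam_hat = counts.mean()                   # MLE of the Poisson rate
values, observed = np.unique(counts, return_counts=True)

# Expected frequencies under Poisson(lam_hat), rescaled so totals match.
# (A fuller analysis would pool bins with small expected counts.)
expected = stats.poisson.pmf(values, lam_hat) * counts.size
expected *= observed.sum() / expected.sum()

# One extra degree of freedom is lost for the estimated rate (ddof=1).
chi2, p = stats.chisquare(observed, expected, ddof=1)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```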

  13. A flexible tool for hydraulic and water quality performance analysis of green infrastructure

    Science.gov (United States)

    Massoudieh, A.; Alikhani, J.

    2017-12-01

    Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. To be used to evaluate the effect of design configurations on the long-term performance of GIs, models should be able to represent processes within GIs with good fidelity. In this presentation, a sophisticated, yet flexible tool for hydraulic and water quality assessment of GIs will be introduced. The tool can be used by design engineers and researchers to capture and explore the effect of design factors and properties of the media employed on the performance of GI systems at a relatively small scale. We deemed it essential to have a flexible GI modeling tool that is capable of simulating GI system components and specific biogeochemical processes affecting contaminants, such as evapotranspiration, plant uptake, reactions, and particle-associated transport, accurately, while maintaining a high degree of flexibility to account for the myriad of GI alternatives. The mathematical framework for a stand-alone GI performance assessment tool has been developed and will be demonstrated. The process-based model framework developed here can be used to model a diverse range of GI practices such as stormwater ponds, green roofs, retention ponds, bioretention systems, infiltration trenches, permeable pavements and other custom-designed combinatory systems. An example of the application of the system to evaluate the performance of a rain-garden system will be demonstrated.
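
    The tool itself is far more sophisticated, but the hydraulic core of many GI models is a storage ("bucket") water balance. The sketch below is a minimal, hypothetical bioretention-cell balance with inflow, infiltration, evapotranspiration, and overflow; all parameters and the inflow series are invented for illustration.

```python
# Hedged sketch: minimal storage ("bucket") balance for a bioretention
# cell, tracking storage under inflow, infiltration, evapotranspiration,
# and overflow. All parameters and the inflow series are hypothetical.
def simulate_bucket(inflow, capacity=0.10, infil_rate=0.01,
                    et_rate=0.002, dt=1.0):
    """Hourly water balance; depths in meters over the cell footprint."""
    storage, series = 0.0, []
    for q_in in inflow:
        storage += q_in * dt
        losses = min(storage, (infil_rate + et_rate) * dt)
        storage -= losses
        overflow = max(0.0, storage - capacity)   # bypass once full
        storage -= overflow
        series.append((storage, overflow))
    return series

hyetograph = [0.0, 0.02, 0.05, 0.08, 0.03, 0.0, 0.0]  # m/h inflow depths
for hour, (s, o) in enumerate(simulate_bucket(hyetograph)):
    print(f"t={hour:2d} h  storage={s:.3f} m  overflow={o:.3f} m")
```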

  14. Personalizing oncology treatments by predicting drug efficacy, side-effects, and improved therapy: mathematics, statistics, and their integration.

    Science.gov (United States)

    Agur, Zvia; Elishmereni, Moran; Kheifetz, Yuri

    2014-01-01

    Despite its great promise, personalized oncology still faces many hurdles, and it is increasingly clear that targeted drugs and molecular biomarkers alone yield only modest clinical benefit. One reason is the complex relationships between biomarkers and the patient's response to drugs, obscuring the true weight of the biomarkers in the patient's overall response. This complexity can be disentangled by computational models that integrate the effects of personal biomarkers into a simulator of drug-patient dynamic interactions, for predicting the clinical outcomes. Several computational tools have been developed for personalized oncology, notably evidence-based tools for simulating pharmacokinetics, Bayesian-estimated tools for predicting survival, etc. We describe representative statistical and mathematical tools, and discuss their merits, shortcomings and preliminary clinical validation attesting to their potential. Yet, the individualization power of mathematical models alone, or statistical models alone, is limited. More accurate and versatile personalization tools can be constructed by a new application of the statistical/mathematical nonlinear mixed effects modeling (NLMEM) approach, which until recently has been used only in drug development. Using these advanced tools, clinical data from patient populations can be integrated with mechanistic models of disease and physiology, for generating personal mathematical models. Upon a more substantial validation in the clinic, this approach will hopefully be applied in personalized clinical trials, P-trials, hence aiding the establishment of personalized medicine within the mainstream of clinical oncology. © 2014 Wiley Periodicals, Inc.
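
    As a crude illustration of the population-to-individual idea behind NLMEM, the sketch below uses a simple two-stage stand-in: fit each simulated patient's tumor growth rate separately, then summarize the population. Real NLMEM estimates fixed and random effects jointly in dedicated pharmacometric software; all data and parameters here are simulated.

```python
# Hedged two-stage stand-in for the population->individual idea behind
# NLMEM. Tumor volumes follow exponential growth with patient-specific
# rates; stage 1 fits each patient, stage 2 summarizes the population.
# All data are simulated.
import numpy as np

rng = np.random.default_rng(3)
times = np.array([0, 30, 60, 90, 120])   # days
pop_rate, iiv = 0.012, 0.30              # typical rate; between-patient SD (log scale)

patient_rates = []
for _ in range(40):
    k_true = pop_rate * np.exp(rng.normal(0, iiv))    # patient-specific rate
    volume = np.exp(k_true * times) * np.exp(rng.normal(0, 0.05, times.size))
    # Stage 1: per-patient fit via log-linear regression.
    k_hat = np.polyfit(times, np.log(volume), 1)[0]
    patient_rates.append(k_hat)

# Stage 2: population summary that would feed personalized predictions.
print(f"population growth rate: {np.mean(patient_rates):.4f} /day "
      f"(between-patient SD {np.std(patient_rates):.4f})")
```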

  15. Sophisticated Search Capabilities in the ADS Abstract Service

    Science.gov (United States)

    Eichhorn, G.; Accomazzi, A.; Grant, C. S.; Henneken, E.; Kurtz, M. J.; Murray, S. S.

    2003-12-01

    The ADS provides access to over 940,000 references from astronomy and planetary sciences publications and 1.5 million records from physics publications. It is funded by NASA and provides free access to these references, as well as to 2.4 million scanned pages from the astronomical literature. These include most of the major astronomy and several planetary sciences journals, as well as many historical observatory publications. The references now include the abstracts from all volumes of the Journal of Geophysical Research (JGR) since the beginning of 2002. We get these abstracts on a regular basis. The Kluwer journal Solar Physics has been scanned back to volume 1 and is available through the ADS. We have extracted the reference lists from this and many other journals and included them in the reference and citation database of the ADS. We have recently scanned Earth, Moon and Planets, another Kluwer journal, and will scan other Kluwer journals in the future as well. We plan to extract references from these journals in the near future. The ADS has many sophisticated query features. These allow the user to formulate complex queries. Using results lists to get further information about the selected articles provides the means to quickly find important and relevant articles from the database. Three advanced feedback queries are available from the bottom of the ADS results list (in addition to regular feedback queries already available from the abstract page and from the bottom of the results list): 1. Get reference list for selected articles: This query returns all known references for the selected articles (or for all articles in the first list). The resulting list will be ranked according to how often each article is referred to and will show the most referenced articles in the field of study that created the first list. It presumably shows the most important articles in that field. 2. Get citation list for selected articles: This returns all known articles
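
    The ranking behind the "get reference list for selected articles" query can be pictured in a few lines of code: pool the reference lists of the selected articles and sort the cited works by how often they are referred to. The bibcodes below are made up for illustration; this is not ADS code.

```python
# Toy sketch of the ranking idea: pool reference lists of a selection
# and rank cited works by citation frequency. Bibcodes are invented.
from collections import Counter

reference_lists = {
    "2003A&A...400....1X": ["1998ApJ...500...10A", "1999MNRAS.300...5B"],
    "2003ApJ...590...2Y":  ["1998ApJ...500...10A", "2000A&A...360...7C"],
    "2003AJ....125...3Z":  ["1998ApJ...500...10A", "1999MNRAS.300...5B"],
}
counts = Counter(ref for refs in reference_lists.values() for ref in refs)
for bibcode, n in counts.most_common():
    print(f"{n:3d}  {bibcode}")
```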

  16. Noise level and MPEG-2 encoder statistics

    Science.gov (United States)

    Lee, Jungwoo

    1997-01-01

    Most source material in the movie and broadcasting industries is still in analog film or tape format, which typically contains random noise originating from film, CCD cameras, and tape recording. The performance of the MPEG-2 encoder may be significantly degraded by this noise. It is also affected by the scene type, which includes spatial and temporal activity. The statistical properties of noise originating from cameras and tape players are analyzed, and models for the two types of noise are developed. The relationships between the noise, the scene type, and encoder statistics for a number of MPEG-2 parameters, such as motion vector magnitude, prediction error, and quantizer scale, are discussed. This analysis is intended to be a tool for designing robust MPEG encoding algorithms, such as preprocessing and rate control.
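
    The core observation, that random source noise inflates the motion-compensated prediction error the encoder must code, can be demonstrated with a toy experiment. The frames below are synthetic and motion compensation is idealized; a real study would use actual film/tape sources and a full MPEG-2 motion search.

```python
# Toy demonstration: additive noise raises the energy of the
# motion-compensated prediction residual, degrading encoder efficiency.
# Frames are synthetic; motion compensation is idealized.
import numpy as np

rng = np.random.default_rng(7)
frame = rng.integers(0, 256, size=(64, 64)).astype(float)
next_frame = np.roll(frame, shift=1, axis=1)       # pure 1-pixel pan

def residual_energy(ref, cur, noise_sigma):
    noisy_ref = ref + rng.normal(0, noise_sigma, ref.shape)
    noisy_cur = cur + rng.normal(0, noise_sigma, cur.shape)
    # Ideal motion compensation for the known pan; the residual is what
    # the DCT/quantizer would have to code.
    predicted = np.roll(noisy_ref, shift=1, axis=1)
    return np.mean((noisy_cur - predicted) ** 2)

for sigma in [0, 2, 5, 10]:
    print(f"noise sigma={sigma:2d}  mean residual energy="
          f"{residual_energy(frame, next_frame, sigma):8.1f}")
```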

  17. Common misconceptions about data analysis and statistics.

    Science.gov (United States)

    Motulsky, Harvey J

    2014-11-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: 1. P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. 2. Overemphasis on P values rather than on the actual size of the observed effect. 3. Overuse of statistical hypothesis testing, and being seduced by the word "significant". 4. Overreliance on standard errors, which are often misunderstood.
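
    Mistake 1 is easy to demonstrate by simulation: if you test after every batch of new replicates and stop as soon as p < 0.05, the false-positive rate climbs well above the nominal 5% even when there is no true effect. The sketch below is purely illustrative and not taken from the paper.

```python
# Simulation of p-hacking by adding replicates: test repeatedly and stop
# as soon as p < 0.05, even though both groups have no real effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_experiment(max_n=100, step=10, alpha=0.05):
    a, b = [], []
    while len(a) < max_n:
        a.extend(rng.normal(0, 1, step))   # both groups drawn from N(0, 1)
        b.extend(rng.normal(0, 1, step))
        if stats.ttest_ind(a, b).pvalue < alpha:
            return True                     # "significant" -- stop and publish
    return False

trials = 2000
fp = sum(hacked_experiment() for _ in range(trials)) / trials
print(f"false-positive rate with optional stopping: {fp:.1%}")  # well above 5%
```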

  18. Common misconceptions about data analysis and statistics.

    Science.gov (United States)

    Motulsky, Harvey J

    2015-02-01

    Ideally, any experienced investigator with the right tools should be able to reproduce a finding published in a peer-reviewed biomedical science journal. In fact, the reproducibility of a large percentage of published findings has been questioned. Undoubtedly, there are many reasons for this, but one reason may be that investigators fool themselves due to a poor understanding of statistical concepts. In particular, investigators often make these mistakes: (1) P-Hacking. This is when you reanalyze a data set in many different ways, or perhaps reanalyze with additional replicates, until you get the result you want. (2) Overemphasis on P values rather than on the actual size of the observed effect. (3) Overuse of statistical hypothesis testing, and being seduced by the word "significant". (4) Overreliance on standard errors, which are often misunderstood.

  19. Demonstration of a software design and statistical analysis methodology with application to patient outcomes data sets.

    Science.gov (United States)

    Mayo, Charles; Conners, Steve; Warren, Christopher; Miller, Robert; Court, Laurence; Popple, Richard

    2013-11-01

    With the emergence of clinical outcomes databases as tools utilized routinely within institutions comes the need for software tools to support automated statistical analysis of these large data sets and interinstitutional exchange from independent federated databases to support data pooling. In this paper, the authors present a design approach and analysis methodology that addresses both issues. A software application was constructed to automate the analysis of patient outcomes data using a wide range of statistical metrics, combining the use of C#.Net and R code. The accuracy and speed of the code were evaluated using benchmark data sets. The approach provides the data needed to evaluate combinations of statistical measurements for their ability to identify patterns of interest in the data. Through application of the tools to a benchmark data set for dose-response threshold and to SBRT lung data sets, an algorithm was developed that uses receiver operating characteristic curves to identify a threshold value and combines the use of contingency tables, Fisher exact tests, Welch t-tests, and Kolmogorov-Smirnov tests to filter the large data set to identify values demonstrating dose-response. Kullback-Leibler divergences were used to provide additional confirmation. The work demonstrates the viability of the design approach and of the software tool for the analysis of large data sets.
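
    A minimal sketch of the filtering pipeline described above: choose a dose threshold from a receiver operating characteristic curve via Youden's index, then test the resulting 2x2 contingency table with Fisher's exact test. The dose-toxicity data are simulated, not the SBRT lung data sets from the paper.

```python
# Sketch of the ROC-threshold-plus-contingency-table filter: pick a dose
# cut-point via Youden's J, then run Fisher's exact test on the 2x2 table.
# Dose/toxicity pairs are simulated for illustration.
import numpy as np
from scipy import stats
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
n = 200
dose = rng.uniform(0, 60, n)                 # Gy, hypothetical dose metric
p_tox = 1 / (1 + np.exp(-(dose - 35) / 5))   # latent dose-response curve
toxicity = rng.random(n) < p_tox

fpr, tpr, thresholds = roc_curve(toxicity, dose)
best = np.argmax(tpr - fpr)                  # Youden's J statistic
cut = thresholds[best]

high = dose >= cut
table = [[np.sum(high & toxicity),  np.sum(high & ~toxicity)],
         [np.sum(~high & toxicity), np.sum(~high & ~toxicity)]]
odds, p = stats.fisher_exact(table)
print(f"threshold = {cut:.1f} Gy, odds ratio = {odds:.2f}, p = {p:.2e}")
```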

  20. Summary statistics for end-point conditioned continuous-time Markov chains

    DEFF Research Database (Denmark)

    Hobolth, Asger; Jensen, Jens Ledet

    Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behavior and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between...... two states and the distribution of the total number of jumps) for discretely observed continuous time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue...... decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular we develop a framework that allows for analyses of rather general summary statistics using the uniformization method....
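
    Method (iii) can be sketched directly: the expected time spent in a state c on [0, T], conditional on the endpoints X(0) = a and X(T) = b, is E[T_c] = (1/P_ab(T)) * int_0^T P_ac(t) P_cb(T - t) dt, which can be approximated by quadrature over matrix exponentials. The 3-state rate matrix below is a toy example, not one of the paper's applications.

```python
# Hedged sketch of method (iii): expected time in state c on [0, T],
# end-point conditioned, via integrals of matrix exponentials evaluated
# with a trapezoid rule. The rate matrix Q is a toy 3-state example.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  1.0, -2.0]])   # rows sum to zero
T, a, b, c = 1.0, 0, 2, 1
grid = np.linspace(0.0, T, 201)

Ps = [expm(Q * t) for t in grid]     # transition matrices on the grid
# Integrand P_ac(t) * P_cb(T - t); the grid is symmetric, so T - grid[i]
# corresponds to index -1 - i.
integrand = np.array([Ps[i][a, c] * Ps[-1 - i][c, b] for i in range(len(grid))])
expected_time = np.trapz(integrand, grid) / Ps[-1][a, b]
print(f"E[time in state {c} | X(0)={a}, X(T)={b}] = {expected_time:.4f}")
```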