So, Jiyeon; Jeong, Se-Hoon; Hwang, Yoori
2017-04-01
The extant empirical research examining the effectiveness of statistical and exemplar-based health information is largely inconsistent. Under the premise that the inconsistency may be due to an unacknowledged moderator (O'Keefe, 2002), this study examined the moderating role of outcome-relevant involvement (Johnson & Eagly, 1989) in the effects of statistical and exemplified risk information on risk perception. Consistent with predictions based on the elaboration likelihood model (Petty & Cacioppo, 1984), findings from an experiment (N = 237) concerning alcohol consumption risks showed that statistical risk information predicted the risk perceptions of individuals with high, rather than low, involvement, while exemplified risk information predicted the risk perceptions of those with low, rather than high, involvement. Moreover, statistical risk information contributed to a negative attitude toward drinking via increased risk perception only for highly involved individuals, while exemplified risk information influenced attitude through the same mechanism only for individuals with low involvement. Theoretical and practical implications for health risk communication are discussed.
Statistical significance versus clinical relevance.
van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G
2017-04-01
In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs us about the likelihood of the data given the null hypothesis, and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
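The frequentist reading described above can be checked with a minimal simulation (an illustration, not part of the paper): when the null hypothesis is in fact true, a well-calibrated test at the 5% level rejects in about 5% of repeated studies. The z-test helper and sample sizes below are arbitrary choices for the sketch.

```python
import math
import random

def z_test_p(sample, sigma=1.0, mu0=0.0):
    """Two-sided p-value for H0: mean == mu0, with known sigma (z-test)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n) / sigma
    # Standard normal survival probability via the error function
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

random.seed(42)
trials = 10_000
false_positives = 0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(30)]  # H0 is true here
    if z_test_p(sample) < 0.05:
        false_positives += 1

print(false_positives / trials)  # close to 0.05, as the interpretation predicts
```

Note that nothing in this simulation tells us whether any single null hypothesis is true or false; it only confirms the long-run error rate, which is exactly the distinction the ASA statement stresses.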
Is Information Still Relevant?
Ma, Lia
2013-01-01
Introduction: The term "information" in information science does not share the characteristics of a nomenclature term: it does not bear a generally accepted definition and it does not serve as the basis and assumptions for research studies. As the data deluge has arrived, is the concept of information still relevant for information…
Wildemuth, Barbara M.
2009-01-01
A user's interaction with a DL is often initiated as the result of the user experiencing an information need of some kind. Aspects of that experience and how it might affect the user's interactions with the DL are discussed in this module. In addition, users continuously make decisions about and evaluations of the materials retrieved from a DL, relative to their information needs. Relevance judgments, and their relationship to the user's information needs, are discussed in this module.
Information theory and statistics
Kullback, Solomon
1997-01-01
Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. Reprint of the 1968 second, revised edition.
Dynamic statistical information theory
XING; Xiusan
2006-01-01
In recent years we extended Shannon static statistical information theory to dynamic processes and established a Shannon dynamic statistical information theory, whose core is the evolution law of dynamic entropy and dynamic information. We also proposed a corresponding Boltzmann dynamic statistical information theory. Based on the fact that the state variable evolution equations of the respective dynamic systems, i.e. the Fokker-Planck equation and the Liouville diffusion equation, can be regarded as their information symbol evolution equations, we derived the nonlinear evolution equations of Shannon dynamic entropy density and dynamic information density and the nonlinear evolution equations of Boltzmann dynamic entropy density and dynamic information density, which describe respectively the evolution law of dynamic entropy and dynamic information. The evolution equations of these two kinds of dynamic entropy and dynamic information show in unison that the time rate of change of the dynamic entropy densities is caused by their drift, diffusion and production in state variable space inside the systems and in coordinate space in the transmission processes; and that the time rate of change of the dynamic information densities originates from their drift, diffusion and dissipation in state variable space inside the systems and in coordinate space in the transmission processes. Entropy and information have thus been combined with the state of the systems and its law of motion. Furthermore, we presented the formulas of the two kinds of entropy production rates and information dissipation rates, and the expressions of the two kinds of drift information flows and diffusion information flows. We proved that the two kinds of information dissipation rates (or the decrease rates of the total information) are equal to their corresponding entropy production rates (or the increase rates of the total entropy) in the same dynamic system. We obtained the formulas of the two kinds of dynamic mutual information and dynamic channel…
Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.
2017-01-01
Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.
Has Financial Statement Information become Less Relevant?
Thinggaard, Frank; Damkier, Jesper
…as the total market-adjusted return that could be earned from investment strategies based on foreknowledge of financial statement information. It answers the question: are investments based on financial statement information able to capture progressively less information in security returns over time…? The sample is based on non-financial companies listed on the Copenhagen Stock Exchange in the period 1984-2002. Our analyses show that all the applied accounting measures are value-relevant, as investment strategies based on the information earn positive market-adjusted returns in our sample period… The results provide some indication of a decline in the value-relevance of earnings information in the 1984-2001 period, and mixed, but not statistically reliable, evidence for accounting measures where book value information and asset values are also extracted from financial statements. The results seem…
[Clinical research IV. Relevancy of the statistical test chosen].
Talavera, Juan O; Rivas-Ruiz, Rodolfo
2011-01-01
When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically addresses the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three groups: a) those in which you want to show differences between groups, or within a group before and after a maneuver; b) those that seek to show the relationship (correlation) between variables; and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is Student's t test for independent samples. But if the comparison concerns the frequency of females (binomial variable), then the appropriate statistical test is the χ² test.
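The two worked examples above can be sketched in code. The test statistics are computed from scratch so the snippet stands alone; the patient data and the 2×2 table are hypothetical, and in practice one would read p-values off library routines such as those in scipy.stats.

```python
import math
import random

def t_stat(a, b):
    """Student's t statistic for two independent samples (pooled variance)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def chi2_stat(table):
    """Chi-squared statistic for a 2x2 contingency table [[a, b], [c, d]]."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

random.seed(1)
# Hypothetical ages of SLE patients with / without neurological disease
age_with = [random.gauss(38, 8) for _ in range(40)]
age_without = [random.gauss(33, 8) for _ in range(40)]
print(t_stat(age_with, age_without))  # compare to the t critical value (about 1.99 at alpha = 0.05, df = 78)

# Hypothetical 2x2 table of female/male counts in the two groups
counts = [[30, 10], [22, 18]]
print(chi2_stat(counts))  # compare to the chi-squared critical value (3.84 at alpha = 0.05, df = 1)
```

The point of the article is exactly this branching: a quantitative outcome compared across two groups leads to the t statistic, a dichotomous outcome to the χ² statistic.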
Statistical Computing in Information Society
Domański Czesław
2015-12-01
In the presence of massive data coming with high heterogeneity, we need to change our statistical thinking and statistical education in order to adapt both classical statistics and software development to new challenges. Significant developments include open data, big data and data visualisation, and they are changing the nature of the evidence that is available, the ways in which it is presented and the skills needed for its interpretation. The amount of information is not the most important issue: the real challenge is the combination of the amount and the complexity of data. Moreover, a need arises to know how uncertain situations should be dealt with and what decisions should be taken when information is insufficient (which can also be observed for large datasets). In the paper we discuss the idea of computational statistics as a new approach to statistical teaching and we try to answer the question of how best to prepare the next generation of statisticians.
Predict! Teaching Statistics Using Informal Statistical Inference
Makar, Katie
2013-01-01
Statistics is one of the most widely used topics for everyday life in the school mathematics curriculum. Unfortunately, the statistics taught in schools focuses on calculations and procedures before students have a chance to see it as a useful and powerful tool. Researchers have found that a dominant view of statistics is as an assortment of tools…
Textual information access statistical models
Gaussier, Eric
2013-01-01
This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.
Statistical Aspects of Information Integration
2009-12-17
…measurements at multiple measurement sites provide potentially valuable predictive information on functional disabilities in epilepsy patients…among the top ten sellers for Springer, the largest and one of the most prominent publishers in statistics. Publications: Books: Kolaczyk, E.D. (2009). Statistical Analysis of Network Data: Methods and Models. New York, Springer. Refereed Journal Articles: Nanai, N., Kolaczyk, E.D., and Kasif, S…
Relevance: An Interdisciplinary and Information Science Perspective
Howard Greisdorf
2000-01-01
Although relevance has represented a key concept in the field of information science for evaluating information retrieval effectiveness, the broader context established by interdisciplinary frameworks could provide greater depth and breadth to ongoing research in the field. This work provides an overview of the nature of relevance in the field of information science, with a cursory view of how cross-disciplinary approaches to relevance could represent avenues for further investigation into the evaluative characteristics of relevance as a means for enhanced understanding of human information behavior.
Statistical regularities attract attention when task-relevant
Andrea eAlamia
2016-02-01
Visual attention seems essential for learning the statistical regularities in our environment, a process known as statistical learning. However, how attention is allocated when exploring a novel visual scene whose statistical structure is unknown remains unclear. In order to address this question, we investigated visual attention allocation during a task in which we manipulated the conditional probability of occurrence of colored stimuli, unbeknownst to the subjects. Participants were instructed to detect a target colored dot among two dots moving along separate circular paths. We evaluated implicit statistical learning, i.e., the effect of color predictability on reaction times (RT), and recorded eye position concurrently. Attention allocation was indexed by comparing the Mahalanobis distance between the position, velocity and acceleration of the eyes and those of the two colored dots. We found that learning of the conditional probabilities occurred very early in the course of the experiment, as shown by the fact that, already from the first block, predictable stimuli were detected with shorter RT than unpredictable ones. In terms of attentional allocation, we found that the predictive stimulus attracted gaze only when it was informative about the occurrence of the target, but not when it predicted the occurrence of a task-irrelevant stimulus. This suggests that attention allocation was influenced by regularities only when they were instrumental in performing the task. Moreover, we found that the attentional bias towards task-relevant predictive stimuli occurred at a very early stage of learning, concomitantly with the first effects of learning on RT. In conclusion, these results show that statistical regularities capture visual attention only after a few occurrences, provided these regularities are instrumental to perform the task.
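The Mahalanobis index used above measures the distance between two feature vectors while accounting for the covariance of the features. A minimal sketch, with invented gaze/stimulus vectors and an invented (diagonal) inverse covariance matrix standing in for quantities the study estimated from data:

```python
import math

def mahalanobis(u, v, cov_inv):
    """Mahalanobis distance between vectors u and v, given an inverse covariance matrix."""
    d = [a - b for a, b in zip(u, v)]
    # Quadratic form d^T * cov_inv * d
    q = sum(d[i] * cov_inv[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))
    return math.sqrt(q)

# Hypothetical feature vectors: (position, velocity, acceleration)
gaze     = [1.0, 0.2, 0.05]
stimulus = [0.4, 0.1, 0.00]
cov_inv  = [[4.0, 0.0, 0.0],    # hypothetical inverse covariance; diagonal for simplicity,
            [0.0, 25.0, 0.0],   # i.e. independent features with variances 0.25, 0.04, 0.01
            [0.0, 0.0, 100.0]]
print(mahalanobis(gaze, stimulus, cov_inv))
```

With an identity covariance the measure reduces to the ordinary Euclidean distance; the weighting by the inverse covariance is what makes it suitable for comparing kinematic features on very different scales, as in the eye-tracking analysis.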
Teaching the Relevance of Statistics through Consumer-Oriented Research.
Beins, Bernard
1985-01-01
Statistics was made more relevant and interesting to participants in a clinical psychology course by having the students go out and find instances of statistical and research applications in products advertised by different companies. The course is described. (Author/RM)
A Compositional Relevance Model for Adaptive Information Retrieval
Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)
1994-01-01
There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.
Disability: concepts and statistical information
Giordana Baldassarre
2008-06-01
Background: The measurement and definition of disability is difficult due to its objective and subjective characteristics. In Italy, three different perspectives have been developed during the last 40 years. These various perspectives have had an effect, not only on how to measure disability, but also on policies to improve the social integration of people with disabilities.
Methods: This paper examines the various conceptual models behind the definition of disability and the differences in the estimated number of persons with disabilities. In addition, it analyses, in accordance with the International Classification of Functioning, Disability and Health (ICF), the European and international initiatives undertaken to harmonize the definitions of disability.
Discussion: There are various bodies and central government agencies that either hold management data or carry out systematic statistical surveys and disability surveys. Statistically speaking, the worst aspect of this scenario is that it creates confusion and uncertainty among the end users of this data, namely the policy makers. At the international level, the statistical data on disability is scarcely comparable among countries, despite huge efforts by international organisations to harmonize classifications and definitions of disability.
Conclusions: Statistical and administrative surveys provide information flows using a different definition and label based on a conceptual model that reflects the time period in which they were implemented. The use of different prescriptive definitions of disability produces different counts of persons with disabilities in Italy. For this reason it is important to interpret the data correctly and choose the appropriate cross section that best represents the population on which to focus attention.
Quantum information theory and quantum statistics
Petz, D. [Alfred Renyi Institute of Mathematics, Budapest (Hungary)
2008-07-01
Based on lectures given by the author, this book focuses on providing reliable introductory explanations of key concepts of quantum information theory and quantum statistics - rather than on results. The mathematically rigorous presentation is supported by numerous examples and exercises and by an appendix summarizing the relevant aspects of linear analysis. Assuming that the reader is familiar with the content of standard undergraduate courses in quantum mechanics, probability theory, linear algebra and functional analysis, the book addresses graduate students of mathematics and physics as well as theoretical and mathematical physicists. Conceived as a primer to bridge the gap between statistical physics and quantum information, a field to which the author has contributed significantly himself, it emphasizes concepts and thorough discussions of the fundamental notions to prepare the reader for deeper studies, not least through the selection of well-chosen exercises. (orig.)
Probing for Relevance: Information Theories and Translation
Daniel Dejica
2009-06-01
Recent studies claim that the more translators know about the structure and the dynamics of discourse, the more readily and accurately they can translate both the content and the spirit of a text. Similarly, international research projects highlight directions of research which aim at helping translators make reasonable and consistent decisions as to the relevance and reliability of source text features in the target text. Other recent studies stress the importance of developing information structure theories for translation. In line with such current research desiderata, the aim of this article is to test the relevance of information theories for translation. In the first part, information theories are presented from different linguistic perspectives. In the second part, their relevance for translation is tested on a series of texts by examining how they have been or can be applied to translation. The last part presents the conclusions of the analysis.
Relevance, Pertinence and Information System Development
Kemp, D. A.
1974-01-01
The difference between pertinence and relevance is discussed. Other pairs of terms and the differences between their members are examined, and the suggestion is made that such studies could increase our understanding of the theory of information systems, and thence lead to practical improvements. (Author)
IDENTIFICATION OF INFORMATION RELEVANT FOR INTERNATIONAL MARKETING
Jovanovic, Z.
2016-07-01
A basic ingredient of any market selection program is the availability of market information. As a general observation, the sources of international market and product information can be characterized as overwhelming, the problem being to identify the relevant data when needed. For international marketers this identification problem can be partly solved through the establishment of computerized databases, which must be continually screened and updated. For selecting new markets, and to support ongoing decisions, marketing decision support systems have been developed to simplify the whole process.
The Reasoning behind Informal Statistical Inference
Makar, Katie; Bakker, Arthur; Ben-Zvi, Dani
2011-01-01
Informal statistical inference (ISI) has been a frequent focus of recent research in statistics education. Considering the role that context plays in developing ISI calls into question the need to be more explicit about the reasoning that underpins ISI. This paper uses educational literature on informal statistical inference and philosophical…
ACCOUNTING INFORMATION RELEVANCE ON CAPITAL MARKETS
Ciprian-Dan COSTEA
2015-06-01
The research in accounting with specific application on capital markets represents a special resort of accounting research. The development of such studies was favored by the evolution and strong growth of capital markets in our daily contemporary life and by the extension of base accounting concepts to the international level. In such circumstances, studies regarding the evolution of concepts like value relevance, efficient markets, accounting information and its dissemination, and fair value are welcome in the field of accounting research with applicability to capital markets. This study outlines some positions on this topic of accounting research.
Reversible Watermarking Using Statistical Information
Kurugollu Fatih
2010-01-01
In most reversible watermarking methods, a compressed location map is exploited in order to ensure reversibility. Moreover, in some methods a header containing critical information is appended to the payload for the extraction and recovery process. Such schemes have a highly fragile nature; that is, changing a single bit in the watermarked data may prohibit recovery of the original host as well as the embedded watermark. In this paper, we propose a new scheme in which the use of a compressed location map is removed entirely. In addition, the amount of auxiliary data is decreased by employing adjacent pixel information. Therefore, in addition to quality improvement, independent authentication of different regions of a watermarked image is possible.
Earthquake forecasting: Statistics and Information
Gertsik, V; Krichevets, A
2013-01-01
We present an axiomatic approach to earthquake forecasting in terms of multi-component random fields on a lattice. This approach provides a method for constructing point estimates and confidence intervals for conditional probabilities of strong earthquakes under conditions on the levels of precursors. Also, it provides an approach for setting a multilevel alarm system and hypothesis testing for binary alarms. We use a method of comparison for different earthquake forecasts in terms of the increase of Shannon information. 'Forecasting' and 'prediction' of earthquakes are equivalent in this approach.
Earthquake forecasting: statistics and information
Vladimir Gertsik
2016-01-01
The paper presents a decision rule forming a mathematical basis of the earthquake forecasting problem. We develop an axiomatic approach to earthquake forecasting in terms of multicomponent random fields on a lattice. This approach provides a method for constructing point estimates and confidence intervals for conditional probabilities of strong earthquakes under conditions on the levels of precursors. Also, it provides an approach for setting a multilevel alarm system and hypothesis testing for binary alarms. We use a method of comparison for different algorithms of earthquake forecasts in terms of the increase of Shannon information. 'Forecasting' (the calculation of probabilities) and 'prediction' (the declaring of alarms) of earthquakes are equivalent in this approach.
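Scoring forecasts "in terms of the increase of Shannon information" can be illustrated with the standard average information gain (log-score advantage) of probabilistic forecasts over a constant base rate. This is a generic sketch with invented numbers, not necessarily the authors' exact measure:

```python
import math

def information_gain(forecast_probs, outcomes, base_rate):
    """Average Shannon information gain (in bits) of probabilistic forecasts
    over a constant base-rate forecast, via the logarithmic score."""
    total = 0.0
    for p, y in zip(forecast_probs, outcomes):
        if y:
            total += math.log2(p / base_rate)           # event occurred
        else:
            total += math.log2((1 - p) / (1 - base_rate))  # event did not occur
    return total / len(outcomes)

# Hypothetical toy record: four periods, a strong event in the last one
probs = [0.1, 0.1, 0.2, 0.8]   # forecast probabilities of a strong event
hits  = [0, 0, 0, 1]           # observed outcomes
print(information_gain(probs, hits, base_rate=0.25))  # positive: forecasts beat the base rate
```

A forecast identical to the base rate scores exactly zero bits, which makes this a natural yardstick for comparing competing forecasting algorithms.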
Timing of Information Presentation in Learning Statistics
Kester, Liesbeth; Kirschner, Paul A.; van Merrienboer, Jeroen J. G.
2004-01-01
This study in the domain of statistics compares four information presentation formats in a 2 × 2 factorial design: timing of supportive information (before or during task practice) × timing of procedural information (before or during task practice). Seventy-two psychology and education students (7 male and 65 female; mean age 18.5 years, SD =…
Timing of Information Presentation in Learning Statistics
Kester, L.; Kirschner, P.A.; Merriënboer, J.J.G.
2004-01-01
This study in the domain of statistics compares four information presentation formats in a 2 × 2 factorial design: timing of supportive information (before or during task practice) × timing of procedural information (before or during task practice). Seventy-two psychology and education students (7 m
Statistical Symbolic Execution with Informed Sampling
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
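The statistical core of this approach, Monte Carlo sampling with Bayesian estimation of an event probability, can be sketched as follows. This illustrates only the estimation step, not symbolic path exploration or the informed-sampling pruning; the toy "program" and all names are hypothetical.

```python
import random

def bayesian_event_probability(run_once, trials, alpha=1.0, beta=1.0):
    """Monte Carlo estimate of the probability that a program event occurs,
    using a Beta(alpha, beta) prior updated to its posterior mean."""
    hits = sum(1 for _ in range(trials) if run_once())
    # Posterior is Beta(alpha + hits, beta + trials - hits); return its mean
    return (alpha + hits) / (alpha + beta + trials)

# Hypothetical program under analysis: the "assert violation" fires
# when a random 8-bit input falls below 64 (true probability 0.25).
random.seed(0)
estimate = bayesian_event_probability(lambda: random.randrange(256) < 64, trials=2000)
print(estimate)  # close to 0.25
```

The full Beta posterior (not just its mean) is what supports the hypothesis testing and credible intervals mentioned in the abstract; informed sampling then improves convergence by replacing the sampled probability mass of pruned paths with exact analysis.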
Relevance of information in informed consent to digestive endoscopy.
Stroppa, I
2000-09-01
In the field of instrumental methodologies, digestive endoscopy is a widely applied diagnostic and therapeutic investigation, involving ethical and medico-legal problems connected with its performance. In the light of the present doctor-patient relationship, we therefore wished to reconsider the relevant meaning of the preventive information which is indispensable for obtaining the patient's consent to the doctor's action. The aim of the present paper is to provide adequate knowledge for whoever is about to undergo endoscopic examination, by introducing new informative forms and a new system for their distribution, without negatively affecting the patient's state of anxiety. We have tried to attribute greater responsibility to the doctor requesting the examination in providing information for the patient, and to underline, in the case of complications, the important conduct of the endoscopy specialist, who must not fail to obtain new informed consent before submitting the patient to any action directed towards treatment of the specific complication. If ignored, these medico-legal aspects can give rise to the doctor's liability in both clinical and penal contexts.
Information Theory and Statistical Physics - Lecture Notes
Merhav, Neri
2010-01-01
This document consists of lecture notes for a graduate course, which focuses on the relations between Information Theory and Statistical Physics. The course is aimed at EE graduate students in the area of Communications and Information Theory, as well as to graduate students in Physics who have basic background in Information Theory. Strong emphasis is given to the analogy and parallelism between Information Theory and Statistical Physics, as well as to the insights, the analysis tools and techniques that can be borrowed from Statistical Physics and `imported' to certain problem areas in Information Theory. This is a research trend that has been very active in the last few decades, and the hope is that by exposing the student to the meeting points between these two disciplines, we will enhance his/her background and perspective to carry out research in the field. A short outline of the course is as follows: Introduction; Elementary Statistical Physics and its Relation to Information Theory; Analysis Tools in ...
Signal Enhancement as Minimization of Relevant Information Loss
Geiger, Bernhard C
2012-01-01
We introduce the notion of relevant information loss for the purpose of casting the signal enhancement problem in information-theoretic terms. We show that many algorithms from machine learning can be reformulated using relevant information loss, which allows their application to the aforementioned problem. As a particular example we analyze principal component analysis for dimensionality reduction, discuss its optimality, and show that the relevant information loss can indeed vanish if the relevant information is concentrated on a lower-dimensional subspace of the input space.
Statistics of Evolving Populations and Their Relevance to Flood Risk
Robert E. Criss
2016-01-01
ABSTRACT: Statistical methods are commonly used to evaluate natural populations and environmental variables, yet these must recognize temporal trends in population character to be appropriate in an evolving world. New equations presented here define the statistical measures of aggregate historical populations affected by linear changes in population means and standard deviations. These can be used to extract the statistical character of present-day populations, needed to define modern variability and risk, from tables of historical data that are dominated by measurements made when conditions were different. As an example, many factors such as climate change and in-channel structures are causing flood levels to rise, so realistic estimation of future flood levels must take such secular changes into account. The new equations provide estimates of water levels for "100-year" floods in the USA Midwest that are 0.5 to 2 m higher than official calculations that routinely assume population stationarity. These equations also show that flood levels will continue to rise by several centimeters per year. This rate is nearly ten times faster than the rise of sea level, and thus represents one of the fastest and most damaging rates of change that is documented by robust data.
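The core idea of this entry (a historical record whose mean drifts linearly understates present-day levels) can be illustrated with a small sketch. This is illustrative only: the detrend-by-regression approach and all variable names are my assumptions, not the paper's actual equations.

```python
import random
import statistics

def present_day_stats(years, levels):
    """Fit a linear trend level = a + b*year by least squares, then
    estimate the present-day mean (the trend value at the latest year)
    and the variability (std. dev. of the detrended residuals)."""
    my, ml = statistics.fmean(years), statistics.fmean(levels)
    b = (sum((y - my) * (l - ml) for y, l in zip(years, levels))
         / sum((y - my) ** 2 for y in years))
    a = ml - b * my
    residuals = [l - (a + b * y) for y, l in zip(years, levels)]
    return a + b * max(years), statistics.stdev(residuals), b

# Synthetic record: flood stage rising ~3 cm/year around a noisy base level.
random.seed(1)
years = list(range(1920, 2016))
levels = [5.0 + 0.03 * (y - 1920) + random.gauss(0, 0.4) for y in years]
mean_now, spread, trend = present_day_stats(years, levels)
naive = statistics.fmean(levels)  # stationarity assumption: plain average
# mean_now exceeds naive, because the plain average is dominated by
# measurements made when levels were lower.
```

The naive average over the whole table mixes old and new conditions; detrending recovers a present-day estimate plus a residual spread usable for risk calculations.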
Value Relevance of Accounting Information in the United Arab Emirates
Jamal Barzegari Khanagha
2011-01-01
This paper examines the value relevance of accounting information in the pre- and post-implementation periods of International Financial Reporting Standards using the regression and portfolio approaches...
Information and exponential families in statistical theory
Barndorff-Nielsen, O
2014-01-01
First published by Wiley in 1978, this book is being re-issued with a new Preface by the author. The roots of the book lie in the writings of RA Fisher both as concerns results and the general stance to statistical science, and this stance was the determining factor in the author's selection of topics. His treatise brings together results on aspects of statistical information, notably concerning likelihood functions, plausibility functions, ancillarity, and sufficiency, and on exponential families of probability distributions.
Statistical functions and relevant correlation coefficients of clearness index
Pavanello, Diego; Zaaiman, Willem; Colli, Alessandra; Heiser, John; Smith, Scott
2015-08-01
This article presents a statistical analysis of the sky conditions, during the years 2010 to 2012, for three different locations: the Joint Research Centre site in Ispra (Italy, European Solar Test Installation - ESTI laboratories), the site of the National Renewable Energy Laboratory in Golden (Colorado, USA) and the site of Brookhaven National Laboratories in Upton (New York, USA). The key parameter is the clearness index kT, a dimensionless expression of the global irradiance impinging upon a horizontal surface at a given instant of time. In the first part, the sky conditions are characterized using daily averages, giving a general overview of the three sites. In the second part the analysis is performed using data sets with a short-term resolution of 1 sample per minute, demonstrating remarkable properties of the statistical distributions of the clearness index, reinforced by a proof using fuzzy logic methods. Subsequently, some time-dependent correlations between different meteorological variables are presented in terms of the Pearson and Spearman correlation coefficients, and a new coefficient is introduced.
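The two correlation measures named in this abstract can be computed in a few lines of pure Python; a minimal sketch (the toy clearness-index values are invented for illustration, not taken from the article's datasets):

```python
import math

def pearson(x, y):
    """Pearson's r: linear correlation of the raw values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    """Average ranks (ties share the mean of their rank positions)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson's r applied to the ranks, so it captures
    any monotone (not just linear) association."""
    return pearson(ranks(x), ranks(y))

# A monotone but nonlinear relation: rho is exactly 1, r is below 1.
kt = [0.1, 0.2, 0.3, 0.5, 0.7, 0.8]
power = [v ** 3 for v in kt]
```

The example shows why both coefficients are reported together: for a monotone nonlinear dependence, Spearman's rho saturates at 1 while Pearson's r stays strictly below it.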
Machine learning for relevance of information in crisis response
C.P.M. Netten
2015-01-01
Efficient communication during crisis response situations is a major challenge for involved emergency responders. Lack of relevant information or too much irrelevant information hampers the emergency responders’ decision-making process, workflow and situational awareness. Despite efforts to better c
Information transport in classical statistical systems
Wetterich, C
2016-01-01
In many materials or equilibrium statistical systems the information of boundary conditions is lost inside the bulk of the material. In contrast, we describe here static classical statistical probability distributions for which bulk properties depend on boundary conditions. Such "static memory materials" can be realized if no unique equilibrium state exists. The propagation of information from the boundary to the bulk is described by a classical wave function or a density matrix, which obey generalized Schr\\"odinger or von Neumann equations. For static memory materials the evolution within a subsector is unitary, as characteristic for the time evolution in quantum mechanics. The space-dependence in static memory materials can be used as an analogue representation of the time evolution in quantum mechanics - such materials are "quantum simulators". For example, an asymmetric Ising model represents the time evolution of relativistic fermions in two-dimensional Minkowski space.
Maxwell's Daemon: information versus particle statistics.
Plesch, Martin; Dahlsten, Oscar; Goold, John; Vedral, Vlatko
2014-11-11
Maxwell's daemon is a popular personification of a principle connecting information gain and extractable work in thermodynamics. A Szilard Engine is a particular hypothetical realization of Maxwell's daemon, which is able to extract work from a single thermal reservoir by measuring the position of particle(s) within the system. Here we investigate the role of particle statistics in the whole process; namely, how the extractable work changes if instead of classical particles fermions or bosons are used as the working medium. We give a unifying argument for the optimal work in the different cases: the extractable work is determined solely by the information gain of the initial measurement, as measured by the mutual information, regardless of the number and type of particles which constitute the working substance.
Value Relevance of Accounting Information in the United Arab Emirates
Jamal Barzegari Khanagha
2011-01-01
This paper examines the value relevance of accounting information in the pre- and post-implementation periods of International Financial Reporting Standards, using the regression and portfolio approaches, for a sample of UAE companies. The results obtained from a combination of regression and portfolio approaches show that accounting information is value relevant in the UAE stock market. A comparison of the results for the periods before and after adoption, based on both regression and portfolio approaches, shows a decline in the value relevance of accounting information after the reform in accounting standards. This could be interpreted to mean that adopting IFRS in the UAE did not improve the value relevance of accounting information. However, results based on the portfolio approach show that cash flows' incremental information content increased in the post-IFRS period.
Factors Affecting the Value Relevance of Accounting Information
Mahmoud Dehghan Nayeri; Ali Faal Ghayoumi; Mohammad Ali Bidari
2012-01-01
The present study examines the factors affecting the value relevance of accounting information for investors in the Tehran Stock Exchange over a period of six years. In this study, the effect of four factors (being profitable or loss generating, company size, earnings stability and company growth) on the value relevance of accounting information has been studied. For this purpose the Ohlson model and cumulative regression analysis are used in order to examine the hypotheses and as the basis ...
Intelligent Support for Solving Classification Differences in Statistical Information Integration
Jonker, C.M.; Verwaart, D.
2003-01-01
Integration of heterogeneous statistics is essential for political decision making on all levels. Like in intelligent information integration in general, the problem is to combine information from different autonomous sources, using different ontologies. However, in statistical information integrati
Software Helps Retrieve Information Relevant to the User
Mathe, Natalie; Chen, James
2003-01-01
The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for a platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.
Annual statistical information 1996; Informe estatistico anual 1996
NONE
1997-12-31
This annual statistical report aims to disseminate information about the evolution of the generation, transmission and distribution systems and about the electric power market of Parana State, Brazil, in 1996. Electric power consumption in the distribution area of the Parana Power Company (COPEL) grew by about 6.7%. Electric power production at the COPEL plants was 42.2% higher than in 1995, owing to the flows recorded in the Iguacu river and to the long period of reduced inflows experienced by the reservoirs of the Southern region during the year. This report presents statistical data on the following topics: a) electric power balance of Parana State; b) electric power balance of COPEL - own generation, interchange, electric power requirement, direct distribution and the electric system. 6 graphs, 3 maps, 61 tabs.; e-mail: splcnmr at mail.copel.br
Information Theory and Statistical Mechanics Revisited
Zhou, Jian
2016-01-01
We derive Bose-Einstein statistics and Fermi-Dirac statistics by Principle of Maximum Entropy applied to two families of entropy functions different from the Boltzmann-Gibbs-Shannon entropy. These entropy functions are identified with special cases of modified Naudts' $\\phi$-entropy.
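For contrast with the modified $\phi$-entropies used in this entry, the standard Boltzmann-Gibbs-Shannon route that they generalize can be summarized as a textbook sketch (this is the classical derivation, not the paper's variant):

```latex
\max_{\{p_i\}} \; S = -\sum_i p_i \ln p_i
\quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i\,\epsilon_i = U
\;\Longrightarrow\;
p_i = \frac{e^{-\beta \epsilon_i}}{Z}, \qquad Z = \sum_i e^{-\beta \epsilon_i}.
```

Applying the same principle mode by mode with quantum counting gives the familiar mean occupations $\bar n(\epsilon) = 1/\bigl(e^{\beta(\epsilon-\mu)} \mp 1\bigr)$, with the minus sign for bosons (Bose-Einstein) and the plus sign for fermions (Fermi-Dirac).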
Information Measures for Statistical Orbit Determination
Mashiku, Alinda K.
2013-01-01
Current Space Situational Awareness (SSA) faces the huge task of tracking an increasing number of space objects. The tracking of space objects requires frequent and accurate monitoring for orbit maintenance and collision avoidance, using methods for statistical orbit determination. Statistical orbit determination enables us to obtain…
Giovanna Badia
2011-03-01
database for a search topic, which was calculated as a percentage of the total number of unique results found in all four database searches;
• availability (the number of relevant full text articles obtained from the database search results, calculated as a percentage of the total number of relevant results found in the database);
• retrievability (the number of relevant full text articles obtained from the database search results, calculated as a percentage of the total number of relevant full text articles found from all four database searches);
• effectiveness (the probable odds that a database will obtain relevant search results);
• efficiency (the probable odds that a database will obtain both unique and relevant search results); and
• accessibility (the probable odds that the full text of the relevant references obtained from the database search are available electronically or in print via the user's library).
Students decided whether the search results were relevant to their topic by using a "yes/no" scale. Only record titles were used to make relevancy judgments.
Main Results – Friedman's Test and odds ratios were used to compare the performance of BNI, CINAHL, MEDLINE, and EMBASE when searching for information about nursing topics. These two statistical measures demonstrated the following:
• BNI had the best average score for the precision, availability, effectiveness, and accessibility of search results;
• CINAHL scored the highest for the novelty, retrievability, and efficiency of results, and ranked second place for all the other criteria;
• MEDLINE excelled in the areas of recall and originality, and ranked second place for novelty and retrievability; and
• EMBASE did not obtain the highest, or second highest, score for any of the criteria.
Conclusion – According to the authors, these results suggest that none of the databases studied can be considered the most useful for searching undergraduate nursing topics. CINAHL and
Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel
2016-12-01
Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications.
THE RELEVANCE OF ECONOMIC INFORMATION IN ANALYZING THE ECONOMIC PERFORMANCE
PATRUTA MIRCEA IOAN
2016-12-01
The performance analysis is based on an informational system, which provides financial information in various formats and with various applicabilities. We intend to formulate a set of important characteristics of financial information, along with identifying a set of relevant financial rates and indicators used to appreciate the performance level of a company. Economic performance can be interpreted in different ways at each level of analysis. Generally, it refers to economic growth, increased productivity and profitability. The growth of labor productivity, or increased production per worker, is a measure of efficient use of resources in value creation.
Research methodology in dentistry: Part II - The relevance of statistics in research
Jogikalmat Krithikadatta
2012-01-01
The lifeline of original research depends on adept statistical analysis. However, there have been reports of statistical misconduct in studies that could arise from an inadequate understanding of the fundamentals of statistics. There have been several such reports across the medical and dental literature. This article aims at encouraging the reader to approach statistics from its logic rather than its theoretical perspective. The article also provides information on statistical misuse in the Journal of Conservative Dentistry between the years 2008 and 2011.
The pricing relevance of insider information; Die Preiserheblichkeit von Insiderinformationen
Kruse, Dominik
2011-07-01
The publication attempts to describe the discussion so far concerning the feature of pricing relevance and to develop it further with the aid of new research approaches. First, a theoretical outline is presented of the elementary regulation problem of insider trading, its historical development, and the regulation goals of the WpHG. This is followed by an analysis of the concrete specifications of the law. In view of the exemplary role of US law, a country with long experience in regulation of the capital market, the materiality doctrine of US insider law is examined in some detail. The goals and development of the doctrine are reviewed in the light of court rulings. The third part outlines the requirements of German law for forecasting the pricing relevance of insider information, while the final part presents a critical review of the current regulations on pricing relevance. (orig./RHM)
Bootstrapping agency: How control-relevant information affects motivation.
Karsh, Noam; Eitam, Baruch; Mark, Ilya; Higgins, E Tory
2016-10-01
How does information about one's control over the environment (e.g., having an own-action effect) influence motivation? The control-based response selection framework was proposed to predict and explain such findings. Its key tenet is that control-relevant information modulates both the frequency and speed of responses by determining whether a perceptual event is an outcome of one's actions or not. To test this framework empirically, the current study examines whether and how temporal and spatial contiguity/predictability (previously established as being important for one's sense of agency) modulate motivation from control. In 5 experiments, participants responded to a cue, potentially triggering a perceptual effect. Temporal (Experiments 1a-c) and spatial (Experiments 2a and b) contiguity/predictability between actions and their potential effects were experimentally manipulated. The influence of these control-relevant factors was measured, both indirectly (through their effect on explicit judgments of agency) and directly on response time and response frequency. The pattern of results was highly consistent with the control-based response selection framework in suggesting that control-relevant information reliably modulates the impact of "having an effect" on different levels of action selection. We discuss the implications of this study for the notion of motivation from control and for the empirical work on the sense of agency. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Statistical Sources of Information on Canadian Public Libraries.
Kokich, George J. V.
1981-01-01
Discusses difficulties involved in locating sources of statistical information on public libraries in Canada and describes a study conducted to identify and evaluate various sources of such information. An 18-item bibliography is provided. (CHC)
Iterative Filtering of Retrieved Information to Increase Relevance
Robert Zeidman
2007-12-01
Efforts have been underway for years to find more effective ways to retrieve information from large knowledge domains. This effort is now being driven particularly by the Internet and the vast amount of information that is available to unsophisticated users. In the early days of the Internet, some effort involved allowing users to enter Boolean equations of search terms into search engines, for example, rather than just a list of keywords. More recently, effort has focused on understanding a user's desires from past search histories in order to narrow searches. Also there has been much effort to improve the ranking of results based on some measure of relevancy. This paper discusses using iterative filtering of retrieved information to focus in on useful information. This work was done for finding source code correlation and the author extends his findings to Internet searching and e-commerce. The paper presents specific information about a particular filtering application and then generalizes it to other forms of information retrieval.
Extensive Generalization of Statistical Mechanics Based on Incomplete Information Theory
Qiuping A. Wang
2003-06-01
Statistical mechanics is generalized on the basis of an additive information theory for incomplete probability distributions. The incomplete normalization is used to obtain a generalized entropy. The concomitant incomplete statistical mechanics is applied to some physical systems in order to show the effect of the incompleteness of information. It is shown that this extensive generalized statistics can be useful for correlated electron systems in the weak coupling regime.
Perceived Relevance of an Introductory Information Systems Course to Prospective Business Students
Irene Govender
2013-12-01
The study is designed to examine students’ perceptions of the introductory Information Systems (IS) course. It was an exploratory study in which 67 students participated. A quantitative approach was followed making use of questionnaires for the collection of data. Using the theory of reasoned action as a framework, the study explores the factors that influence non-IS major students’ perceived relevance of the IS introductory course. The analysis of collected data included descriptive and inferential statistics. Using multiple regression analysis, the results suggest that overall, the independent variables, relevance of the content, previous IT knowledge, relevance for professional practice, IT preference in courses and peers’ influence may account for 72% of the explanatory power for the dependent variable, perceived relevance of the IS course. In addition, the results have shown some strong predictors (IT preference and peers’ influence) that influence students’ perceived relevance of the IS course. Practical work was found to be a strong mediating variable toward positive perceptions of IS. The results of this study suggest that students do indeed perceive the introductory IS course to be relevant and match their professional needs, but more practical work would enhance their learning. Implications for theory and practice are discussed as a result of the behavioural intention to perceive the IS course to be relevant and eventually to recruit more IS students.
Temporal and Statistical Information in Causal Structure Learning
McCormack, Teresa; Frosch, Caren; Patrick, Fiona; Lagnado, David
2015-01-01
Three experiments examined children's and adults' abilities to use statistical and temporal information to distinguish between common cause and causal chain structures. In Experiment 1, participants were provided with conditional probability information and/or temporal information and asked to infer the causal structure of a 3-variable mechanical…
Evaluation Statistics Computed for the Wave Information Studies (WIS)
2016-07-01
Wave models, including those of WIS, are influenced by meteorological forcing parameters and by the representation of the geographic area (e.g., bathymetry). Applications of statistical metrics to wave model evaluation are found in Zambresky (1989) and Cardone et al. (1996). This note describes the statistical metrics used by the Wave Information Studies (WIS) and produced as part of the model evaluation process.
On the relevance of the maximum entropy principle in non-equilibrium statistical mechanics
Auletta, Gennaro; Rondoni, Lamberto; Vulpiani, Angelo
2017-07-01
At first glance, the maximum entropy principle (MEP) apparently allows us to derive, or justify in a simple way, fundamental results of equilibrium statistical mechanics. Because of this, a school of thought considers the MEP as a powerful and elegant way to make predictions in physics and other disciplines, rather than a useful technical tool like others in statistical physics. From this point of view the MEP appears as an alternative and more general predictive method than the traditional ones of statistical physics. Actually, careful inspection shows that such success is due to a series of fortunate facts that characterize the physics of equilibrium systems, but which are absent in situations not described by Hamiltonian dynamics, or generically in nonequilibrium phenomena. Here we discuss several important examples in nonequilibrium statistical mechanics in which the MEP leads to incorrect predictions, proving that it does not have a predictive nature. We conclude that, in these paradigmatic examples, an approach that uses a detailed analysis of the relevant aspects of the dynamics cannot be avoided.
Bayesian Information Criterion as an Alternative way of Statistical Inference
Nadejda Yu. Gubanova
2012-05-01
The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. The comparison of ANOVA and BIC results for a psychological experiment is discussed.
Bayesian Information Criterion as an Alternative way of Statistical Inference
Nadejda Yu. Gubanova; Simon Zh. Simavoryan
2012-01-01
The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. The comparison of ANOVA and BIC results for a psychological experiment is discussed.
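A minimal numerical sketch of how BIC-based model comparison works (a generic two-model Gaussian example of my own construction; it does not reproduce the article's experimental data):

```python
import math
import random

def gaussian_loglik(data, mu, sigma):
    """Log-likelihood of data under N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def bic(loglik, k, n):
    """Schwarz's Bayesian information criterion: lower is better."""
    return k * math.log(n) - 2 * loglik

random.seed(0)
a = [random.gauss(0.0, 1.0) for _ in range(100)]  # group A
b = [random.gauss(1.0, 1.0) for _ in range(100)]  # group B, mean shifted by 1
both = a + b
n = len(both)

# Model 0: one shared mean (k = 2 parameters: mu, sigma)
mu0 = sum(both) / n
s0 = math.sqrt(sum((x - mu0) ** 2 for x in both) / n)
bic0 = bic(gaussian_loglik(both, mu0, s0), 2, n)

# Model 1: separate group means, shared sigma (k = 3 parameters)
mua, mub = sum(a) / len(a), sum(b) / len(b)
resid = [x - mua for x in a] + [x - mub for x in b]
s1 = math.sqrt(sum(r ** 2 for r in resid) / n)
bic1 = bic(gaussian_loglik(a, mua, s1) + gaussian_loglik(b, mub, s1), 3, n)
# bic1 < bic0: the two-mean model wins despite its extra parameter.
```

Unlike an NHST p-value, the BIC comparison weighs model fit against parameter count directly, and the same machinery extends to any pair of candidate models.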
Information Geometric Complexity of a Trivariate Gaussian Statistical Model
Domenico Felice
2014-05-01
We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics.
Inclusion of Relevance Information in the Term Discrimination Model.
Biru, Tesfaye; And Others
1989-01-01
Discusses the effect of including relevance data on the calculation of term discrimination values in bibliographic databases. Algorithms that calculate the ability of index terms to discriminate between relevant and non-relevant documents are described and tested. The results are discussed in terms of the relationship between term frequency and…
Fisher-Schroedinger models for statistical encryption of covert information
Venkatesan, R. C.
2007-04-01
The theoretical framework for a principled procedure to encrypt/decrypt covert information (code) into/from the null spaces of a hierarchy of statistical distributions possessing ill-conditioned eigenstructures is suggested. The statistical distributions are inferred using incomplete constraints, employing the generalized nonextensive thermostatistics (NET) Fisher information as the measure of uncertainty. The hierarchy of inferred statistical distributions possesses a quantum mechanical connotation for unit values of the nonextensivity parameter. A systematic strategy to encrypt/decrypt code via unitary projections into the null spaces of the ill-conditioned eigenstructures is presented.
INFORMATION SUPPORT OF REAL ESTATE MARKET STATISTICAL RESEARCH
Shibirina, S.
2010-01-01
The article considers the significance of, and suggests directions for, using information in statistical research on the real estate market. Special attention is paid to the interconnections between information support and real estate market participants, and to the stages of market analysis that characterize the realty market.
Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.
Harman, Donna; And Others
1991-01-01
Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…
A Framework for Thinking about Informal Statistical Inference
Makar, Katie; Rubin, Andee
2009-01-01
Informal inferential reasoning has shown some promise in developing students' deeper understanding of statistical processes. This paper presents a framework to think about three key principles of informal inference--generalizations "beyond the data," probabilistic language, and data as evidence. The authors use primary school classroom…
Information Geometry and Chaos on Negatively Curved Statistical Manifolds
Cafaro, Carlo
2007-01-01
A novel information-geometric approach to chaotic dynamics on curved statistical manifolds based on Entropic Dynamics (ED) is suggested. Furthermore, an information-geometric analogue of the Zurek-Paz quantum chaos criterion is proposed. It is shown that the hyperbolicity of a non-maximally symmetric 6N-dimensional statistical manifold M_{s} underlying an ED Gaussian model describing an arbitrary system of 3N non-interacting degrees of freedom leads to linear information-geometric entropy growth and to exponential divergence of the Jacobi vector field intensity, quantum and classical features of chaos respectively.
MERGING FUZZY STATISTICAL DATA WITH IMPRECISE PRIOR INFORMATION
Olgierd HRYNIEWICZ
2006-01-01
Solving complex decision problems requires the use of information from different sources. Usually this information is uncertain, and statistical or probabilistic methods are needed for its processing. However, in many cases a decision maker faces not only uncertainty of a random nature but also imprecision in the description of input data, which is rather of a linguistic nature. Therefore, there is a need to merge uncertainties of both types into one mathematical model. In the paper we present a methodology for merging information from imprecisely reported statistical data and imprecisely formulated fuzzy prior information. Moreover, we also consider the case of imprecisely defined loss functions. The proposed methodology may be considered as the application of fuzzy statistical methods to decision making in systems analysis.
Rabinowitz, Daniel
2003-05-01
The focus of this work is the TDT-type and family-based test statistics used for adjusting for potential confounding due to population heterogeneity or misspecified allele frequencies. A variety of heuristics have been used to motivate and derive these statistics, and the statistics have been developed for a variety of analytic goals. There appears to be no general theoretical framework, however, that may be used to evaluate competing approaches. Furthermore, there is no framework to guide the development of efficient TDT-type and family-based methods for analytic goals for which methods have not yet been proposed. The purpose of this paper is to present a theoretical framework that serves both to identify the information which is available to methods that are immune to confounding due to population heterogeneity or misspecified allele frequencies, and to inform the construction of efficient unbiased tests in novel settings. The development relies on the existence of a characterization of the null hypothesis in terms of a completely specified conditional distribution of transmitted genotypes. An important observation is that, with such a characterization, when the conditioning event is unobserved or incomplete, there is statistical information that cannot be exploited by any exact conditional test. The main technical result of this work is an approach to computing test statistics for local alternatives that exploit all of the available statistical information. Copyright 2003 Wiley-Liss, Inc.
Concepts and recent advances in generalized information measures and statistics
Kowalski, Andres M
2013-01-01
Since the introduction of the information measure widely known as Shannon entropy, quantifiers based on information theory and concepts such as entropic forms and statistical complexities have proven to be useful in diverse scientific research fields. This book contains introductory tutorials suitable for the general reader, together with chapters dedicated to the basic concepts of the most frequently employed information measures or quantifiers and their recent applications to different areas, including physics, biology, medicine, economics, communication and social sciences. As these quantif
A protocol for classifying ecologically relevant marine zones, a statistical approach
Verfaillie, Els; Degraer, Steven; Schelfaut, Kristien; Willems, Wouter; Van Lancker, Vera
2009-06-01
Mapping ecologically relevant zones in the marine environment has become increasingly important. Biological data are however often scarce, and alternatives are being sought in optimal classifications of abiotic variables. The concept of 'marine landscapes' is based on a hierarchical classification of geological, hydrographic and other physical data. This approach is however subject to many assumptions and subjective decisions. An objective protocol for zonation is proposed here, in which abiotic variables are subjected to a statistical approach using principal components analysis (PCA) and a cluster analysis. The optimal number of clusters (or zones) is defined using the Calinski-Harabasz criterion. The methodology has been applied to datasets of the Belgian part of the North Sea (BPNS), a shallow sandy shelf environment with a sandbank-swale topography. The BPNS was classified into 8 zones that represent well the natural variability of the seafloor. The internal cluster consistency was validated with a split-run procedure, with more than 99% correspondence between the validation and the original dataset. The ecological relevance of 6 out of the 8 zones was demonstrated using indicator species analysis. The proposed protocol, as exemplified for the BPNS, can easily be applied to other areas and provides a strong knowledge basis for environmental protection and management of the marine environment. A SWOT-analysis, showing the strengths, weaknesses, opportunities and threats of the protocol, was performed.
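The core of the protocol just summarized (PCA, clustering, and selection of the number of zones with the Calinski-Harabasz criterion) can be sketched in a few lines of Python. The synthetic data below are hypothetical stand-ins for the real abiotic BPNS layers; variable counts and the candidate range of k are illustrative assumptions.

```python
# Hedged sketch of a PCA + clustering zonation workflow with the
# Calinski-Harabasz criterion; synthetic data, not the BPNS layers.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
# 500 "grid cells" x 5 abiotic variables, drawn from 4 well-separated
# environmental regimes (stand-ins for depth, grain size, etc.)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(125, 5))
               for c in (0.0, 3.0, 6.0, 9.0)])

pcs = PCA(n_components=3).fit_transform(X)      # dimension reduction
scores = {}
for k in range(2, 10):                          # candidate zone counts
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pcs)
    scores[k] = calinski_harabasz_score(pcs, labels)

best_k = max(scores, key=scores.get)            # zone count maximizing the CH index
```

On this toy data the criterion recovers the four planted regimes; on real abiotic layers the peak of the index would play the role of the optimal number of marine zones.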
Statistical Models of Fracture Relevant to Nuclear-Grade Graphite: Review and Recommendations
Nemeth, Noel N.; Bratton, Robert L.
2011-01-01
The nuclear-grade (low-impurity) graphite needed for the fuel element and moderator material for next-generation (Gen IV) reactors displays large scatter in strength and a nonlinear stress-strain response from damage accumulation. This response can be characterized as quasi-brittle. In this expanded review, relevant statistical failure models for various brittle and quasi-brittle material systems are discussed with regard to strength distribution, size effect, multiaxial strength, and damage accumulation. This includes descriptions of the Weibull, Batdorf, and Burchell models as well as models that describe the strength response of composite materials, which involves distributed damage. Results from lattice simulations are included for a physics-based description of material breakdown. Consideration is given to the predicted transition between brittle and quasi-brittle damage behavior versus the density of damage (level of disorder) within the material system. The literature indicates that weakest-link-based failure modeling approaches appear to be reasonably robust in that they can be applied to materials that display distributed damage, provided that the level of disorder in the material is not too large. The Weibull distribution is argued to be the most appropriate statistical distribution to model the stochastic-strength response of graphite.
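As a concrete illustration of the weakest-link Weibull model the review recommends, the sketch below simulates two-parameter Weibull strength data and the resulting size effect on characteristic strength. The modulus, scale, and volume ratio are invented for illustration and are not graphite values.

```python
# Hedged sketch of two-parameter Weibull strength statistics:
# Pf(s) = 1 - exp(-(s/s0)**m). Parameters are assumed, not measured.
import numpy as np

m, s0 = 10.0, 100.0                         # Weibull modulus and scale (MPa)
rng = np.random.default_rng(1)
strengths = s0 * rng.weibull(m, size=2000)  # simulated specimen strengths

# At s = s0 the failure probability is 1 - 1/e ~ 0.632 by construction
pf_at_s0 = float(np.mean(strengths <= s0))

# Weakest-link size effect: a specimen with 8x the stressed volume has
# its characteristic strength reduced by the factor (1/8)**(1/m)
size_ratio = 8.0
strength_ratio = size_ratio ** (-1.0 / m)   # ~0.81 for m = 10
```

The low strength ratio for modest m shows why the scatter (modulus) matters as much as the mean strength when scaling from test coupons to components.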
Advances in statistical multisource-multitarget information fusion
Mahler, Ronald PS
2014-01-01
This is the sequel to the 2007 Artech House bestselling title, Statistical Multisource-Multitarget Information Fusion. That earlier book was a comprehensive resource for an in-depth understanding of finite-set statistics (FISST), a unified, systematic, and Bayesian approach to information fusion. The cardinalized probability hypothesis density (CPHD) filter, which was first systematically described in the earlier book, has since become a standard multitarget detection and tracking technique, especially in research and development. Since 2007, FISST has inspired a considerable amount of research
Reasoning about Informal Statistical Inference: One Statistician's View
Rossman, Allan J.
2008-01-01
This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…
Identifying statistical dependence in genomic sequences via mutual information estimates
Aktulga, H M; Lyznik, L A; Szpankowski, L; Grama, A Y; Szpankowski, W
2007-01-01
Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5' untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unkno...
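A toy version of this mutual-information methodology can be written directly from empirical symbol frequencies. The alphabet, the correlation scheme, and the sequence length below are invented for illustration; the paper's actual threshold function is not reproduced here.

```python
# Hedged sketch: plug-in mutual information (bits) between two symbol
# sequences, contrasted for a dependent and an independent pair.
from collections import Counter
import math
import random

def mutual_information(x, y):
    """Empirical MI in bits between equal-length symbol sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

random.seed(0)
x = [random.choice("ACGT") for _ in range(5000)]
# y "reads off" whether x falls in {A, C}, with 10% noise
y = [("AT" if b in "AC" else "GC")[random.random() < 0.1] for b in x]
z = [random.choice("ACGT") for _ in range(5000)]  # independent control

mi_dep = mutual_information(x, y)   # close to 1 bit for this construction
mi_ind = mutual_information(x, z)   # near zero (finite-sample bias only)
```

Thresholding the estimate against the value obtained for shuffled or independent segments, as in the control above, is a simple stand-in for the significance test the authors describe.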
Loop calculus in statistical physics and information science.
Chertkov, Michael; Chernyak, Vladimir Y
2006-06-01
Considering a discrete and finite statistical model of a general position we introduce an exact expression for the partition function in terms of a finite series. The leading term in the series is the Bethe-Peierls (belief propagation) (BP) contribution; the rest are expressed as loop contributions on the factor graph and calculated directly using the BP solution. The series unveils a small parameter that often makes the BP approximation so successful. Applications of the loop calculus in statistical physics and information science are discussed.
The relevance of irrelevant information in the dictator game
Ramalingam, Abhijit
2012-01-01
We examine the sensitivity of the dictator game to information provided to subjects. We investigate if individuals internalize completely irrelevant information about players when making allocation decisions. Subjects are provided with their score and the scores of recipients on a quiz prior to making decisions in multiple dictator games. Quiz scores have no bearing on the game or on players' endowments and hence represent extraneous information. We find that dictators reward good performance...
SOLVING PROBLEMS OF STATISTICS WITH THE METHODS OF INFORMATION THEORY
Lutsenko Y. V.
2015-02-01
Full Text Available The article presents a theoretical substantiation, methods of numerical calculation and a software implementation for solving problems of statistics, in particular the study of statistical distributions, by methods of information theory. On the basis of empirical data, we have determined by calculation the number of observations used for the analysis of statistical distributions. The proposed method of calculating the amount of information is not based on assumptions about the independence of observations or the normality of the distribution, i.e., it is non-parametric and ensures correct modeling of nonlinear systems; it also allows comparable processing of heterogeneous data (measured in scales of different types) of both numeric and non-numeric nature that are measured in different units. Thus, ASC-analysis and the "Eidos" system constitute a modern, ready-for-implementation innovative technology for solving problems of statistics by methods of information theory. This article can be used as a description of laboratory work in disciplines such as: intelligent systems; knowledge engineering and intelligent systems; intelligent technologies and knowledge representation; knowledge representation in intelligent systems; foundations of intelligent systems; introduction to neuromathematics and methods of neural networks; fundamentals of artificial intelligence; intelligent technologies in science and education; knowledge management; and automated system-cognitive analysis with the "Eidos" intelligent system, which the author is currently developing, as well as in other disciplines associated with the transformation of data into information, its transformation into knowledge, and the application of this knowledge to solve problems of identification, forecasting, decision making and research of the modeled subject area (which is virtually all subjects in all fields of science
Wildfire Prediction to Inform Fire Management: Statistical Science Challenges
Taylor, S. W.; Douglas G. Woolford; Dean, C. B.; Martell, David L.
2013-01-01
Wildfire is an important system process of the earth that occurs across a wide range of spatial and temporal scales. A variety of methods have been used to predict wildfire phenomena during the past century to better our understanding of fire processes and to inform fire and land management decision-making. Statistical methods have an important role in wildfire prediction due to the inherent stochastic nature of fire phenomena at all scales. Predictive models have exploited several so...
Computational Information Geometry in Statistics: Theory and Practice
Frank Critchley
2014-05-01
Full Text Available A broad view of the nature and potential of computational information geometry in statistics is offered. This new area suitably extends the manifold-based approach of classical information geometry to a simplicial setting, in order to obtain an operational universal model space. Additional underlying theory and illustrative real examples are presented. In the infinite-dimensional case, challenges inherent in this ambitious overall agenda are highlighted and promising new methodologies indicated.
relevance of information warfare models to critical infrastructure ...
ismith
Department of Homeland Security defines critical infrastructure as “the assets, systems, and networks, whether physical or virtual, so vital to the United States that ... result in noticeable effects in the information, energy or physical distribution ...
50 CFR 424.13 - Sources of information and relevant data.
2010-10-01
Sources of information and relevant data. When considering any revision of the lists, the Secretary shall ..., administrative reports, maps or other graphic materials, information received from experts on the subject, and ...
Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.
Neuwald, Andrew F; Altschul, Stephen F
2016-12-01
Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).
DOES VOLUNTARY DISCLOSURE LEVEL AFFECT THE VALUE RELEVANCE OF ACCOUNTING INFORMATION?
2011-01-01
This paper seeks to explore whether voluntary disclosure level affects the value relevance of accounting information from an investor's perspective on Kuwait Stock Exchange (KSE). Based on the assumption that an increased focus on the informational needs of investors should increase the value relevance of the information contained in financial statements, we expect that value relevance will increase along with increases in the level of voluntary disclosure. As a consequence, we expect that gre...
Tadaki, Kohtaro, E-mail: tadaki@kc.chuo-u.ac.j [Research and Development Initiative, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551 (Japan)
2010-12-01
The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in this interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T an element of (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical rigor as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which realizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
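For orientation, the central object in this framework is a Boltzmann-type partition function over the domain of an optimal prefix-free machine U, with Chaitin's Omega recovered at T = 1; the formula below is a sketch following Tadaki's cited papers and should be checked against them:

$$ Z(T) = \sum_{p \,\in\, \mathrm{dom}\,U} 2^{-|p|/T} $$

where |p| is the length in bits of a halting program p, and the temperature T plays the role of a compression rate.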
Rendering Information Literacy Relevant: A Case-Based Pedagogy
Spackman, Andy; Camacho, Leticia
2009-01-01
The authors describe the use of case studies in a program of extracurricular library instruction and explain the benefits of case teaching in developing information literacy. The paper presents details of example cases and analyzes surveys to evaluate the impact of case teaching on student satisfaction. (Contains 3 tables.)
The relevance of visual information on learning sounds in infancy
ter Schure, S.M.M.
2016-01-01
Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This
The relevance of visual information on learning sounds in infancy
S.M.M. ter Schure
2016-01-01
Newborn infants are sensitive to combinations of visual and auditory speech. Does this ability to match sounds and sights affect how infants learn the sounds of their native language? And are visual articulations the only type of visual information that can influence sound learning? This dissertatio
Paradigms for adaptive statistical information designs: practical experiences and strategies.
Wang, Sue-Jane; Hung, H M James; O'Neill, Robert
2012-11-10
In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed-design paradigm for drug development. Much interest in adaptive designs is in those studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals, and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information
Revealing Relationships among Relevant Climate Variables with Information Theory
Knuth, Kevin H; Curry, Charles T; Huyser, Karen A; Wheeler, Kevin R; Rossow, William B
2013-01-01
A primary objective of the NASA Earth-Sun Exploration Technology Office is to understand the observed Earth climate variability, thus enabling the determination and prediction of the climate's response to both natural and human-induced forcing. We are currently developing a suite of computational tools that will allow researchers to calculate, from data, a variety of information-theoretic quantities such as mutual information, which can be used to identify relationships among climate variables, and transfer entropy, which indicates the possibility of causal interactions. Our tools estimate these quantities along with their associated error bars, the latter of which is critical for describing the degree of uncertainty in the estimates. This work is based upon optimal binning techniques that we have developed for piecewise-constant, histogram-style models of the underlying density functions. Two useful side benefits have already been discovered. The first allows a researcher to determine whether there exist suf...
Revealing Relationships among Relevant Climate Variables with Information Theory
Knuth, Kevin H.; Golera, Anthony; Curry, Charles T.; Huyser, Karen A.; Kevin R. Wheeler; Rossow, William B.
2005-01-01
The primary objective of the NASA Earth-Sun Exploration Technology Office is to understand the observed Earth climate variability, thus enabling the determination and prediction of the climate's response to both natural and human-induced forcing. We are currently developing a suite of computational tools that will allow researchers to calculate, from data, a variety of information-theoretic quantities such as mutual information, which can be used to identify relationships among climate variables, and transfer entropy, which indicates the possibility of causal interactions. Our tools estimate these quantities along with their associated error bars, the latter of which is critical for describing the degree of uncertainty in the estimates. This work is based upon optimal binning techniques that we have developed for piecewise-constant, histogram-style models of the underlying density functions. Two useful side benefits have already been discovered. The first allows a researcher to determine whether there exist sufficient data to estimate the underlying probability density. The second permits one to determine an acceptable degree of round-off when compressing data for efficient transfer and storage. We also demonstrate how mutual information and transfer entropy can be applied so as to allow researchers not only to identify relations among climate variables, but also to characterize and quantify their possible causal interactions.
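The histogram ("piecewise-constant") mutual-information estimate described in this abstract can be sketched as follows. The series, the correlation strength, and the fixed bin count are illustrative assumptions; the authors' optimal-binning and error-bar machinery is not reproduced here.

```python
# Hedged sketch: mutual information (nats) from a 2-D histogram density
# model, checked against the closed form for bivariate Gaussians.
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)   # corr(x, y) = 0.8 by construction

def hist_mi(x, y, bins=20):
    """Plug-in MI in nats from a piecewise-constant (histogram) model."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

mi = hist_mi(x, y)
# For Gaussians, MI = -0.5 * ln(1 - rho**2); a sanity check, not a proof
theory = -0.5 * np.log(1 - 0.8 ** 2)
```

The gap between the estimate and the Gaussian closed form illustrates why bin choice and uncertainty estimates, which the abstract emphasizes, matter in practice.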
New perspectives on sustainable development and barriers to relevant information.
Regier, H A; Bronson, E A
1992-03-01
Sustainable development may mean different things to people with different worldviews. We sketch four worldviews, drawing on a schema developed by Bryan Norton. Of these four worldviews, i.e. exploitist, utilitist, integrist and inherentist, the third is the most consistent with the Brundtland Report (WCED 1987) and the Great Lakes Water Quality Agreement (GLWQA 1987). The integrist perspective combines analytic reductionistic study with comparative contextual study, with emphasis on the latter. This integrative approach moves from over-reliance on utilist information services such as impact assessment towards transactive study. Our own compromise emphasizes a stress-response approach to a partial understanding of complex cultural-natural interactions within ecosystems. Both cultural and natural attributes of ecosystems must be addressed. Currently the federal Canadian government tends toward an exploitist worldview; current government R&D funding and subsidies reflect this view. Old-fashioned scientists who rely on a monocular analytical vision of the world's minutiae may find contextual historical study offensive; these scientists hold sway on some advisory boards and hence research funding. Difficulty in finding funding for integrist information services should not be interpreted as a lack of need for this information; rather this difficulty results from resistance to a changing worldview.
Wildfire Prediction to Inform Fire Management: Statistical Science Challenges
Taylor, S W; Dean, C B; Martell, David L
2013-01-01
Wildfire is an important system process of the earth that occurs across a wide range of spatial and temporal scales. A variety of methods have been used to predict wildfire phenomena during the past century to better our understanding of fire processes and to inform fire and land management decision-making. Statistical methods have an important role in wildfire prediction due to the inherent stochastic nature of fire phenomena at all scales. Predictive models have exploited several sources of data describing fire phenomena. Experimental data are scarce; observational data are dominated by statistics compiled by government fire management agencies, primarily for administrative purposes and increasingly from remote sensing observations. Fires are rare events at many scales. The data describing fire phenomena can be zero-heavy and nonstationary over both space and time. Users of fire modeling methodologies are mainly fire management agencies often working under great time constraints, thus, complex models have t...
Identifying Statistical Dependence in Genomic Sequences via Mutual Information Estimates
Wojciech Szpankowski
2007-12-01
Full Text Available Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, they are used for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5′ untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unknown alternative splicing mechanisms or structural scaffolds. Second, using data from the FBI's combined DNA index system (CODIS), we demonstrate that our approach is particularly well suited for the problem of discovering short tandem repeats—an application of importance in genetic profiling.
Identifying Statistical Dependence in Genomic Sequences via Mutual Information Estimates
Kontoyiannis Ioannis
2007-01-01
Full Text Available Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, they are used for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5′ untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unknown alternative splicing mechanisms or structural scaffolds. Second, using data from the FBI's combined DNA index system (CODIS), we demonstrate that our approach is particularly well suited for the problem of discovering short tandem repeats—an application of importance in genetic profiling.
Diagnostically relevant facial gestalt information from ordinary photos.
Ferry, Quentin; Steinberg, Julia; Webber, Caleb; FitzPatrick, David R; Ponting, Chris P; Zisserman, Andrew; Nellåker, Christoffer
2014-06-24
Craniofacial characteristics are highly informative for clinical geneticists when diagnosing genetic diseases. As a first step towards the high-throughput diagnosis of ultra-rare developmental diseases, we introduce an automatic approach that implements recent developments in computer vision. This algorithm extracts phenotypic information from ordinary non-clinical photographs and, using machine learning, models human facial dysmorphisms in a multidimensional 'Clinical Face Phenotype Space'. The space locates patients in the context of known syndromes and thereby facilitates the generation of diagnostic hypotheses. Consequently, the approach will aid clinicians by greatly narrowing (by 27.6-fold) the search space of potential diagnoses for patients with suspected developmental disorders. Furthermore, this Clinical Face Phenotype Space allows the clustering of patients by phenotype even when no known syndrome diagnosis exists, thereby aiding disease identification. We demonstrate that this approach provides a novel method for inferring causative genetic variants from clinical sequencing data through functional genetic pathway comparisons. DOI: http://dx.doi.org/10.7554/eLife.02020.001.
Statistical relevance of vorticity conservation with the Hamiltonian particle-mesh method
Dubinkina, S.; Frank, J.E.
2009-01-01
We conduct long simulations with a Hamiltonian particle-mesh method for ideal fluid flow, to determine the statistical mean vorticity field. Lagrangian and Eulerian statistical models are proposed for the discrete dynamics, and these are compared against numerical experiments. The observed results a
Statistical relevance of vorticity conservation with the Hamiltonian particle-mesh method
Dubinkina, S.; Frank, J.E.
2010-01-01
We conduct long-time simulations with a Hamiltonian particle-mesh method for ideal fluid flow, to determine the statistical mean vorticity field of the discretization. Lagrangian and Eulerian statistical models are proposed for the discrete dynamics, and these are compared against numerical experime
Statistical relevance of vorticity conservation in the Hamiltonian particle-mesh method
S. Dubinkina; J. Frank
2010-01-01
We conduct long-time simulations with a Hamiltonian particle-mesh method for ideal fluid flow, to determine the statistical mean vorticity field of the discretization. Lagrangian and Eulerian statistical models are proposed for the discrete dynamics, and these are compared against numerical experime
Value relevance of accounting information: evidence from South Eastern European countries
Pervan, Ivica; Bartulović, Marijana
2014-01-01
In this article the authors analysed value relevance of accounting information based on a sample of 97 corporations listed on one of the following capital markets: Ljubljana Stock Exchange, Zagreb Stock Exchange, Sarajevo Stock Exchange, Banja Luka Stock Exchange and Belgrade Stock Exchange. Research results show that accounting information is value relevant on all the observed markets. Value relevance analysis for the period 2005–2010 has shown that there was no increase in the explanatory p...
Smylie, Janet; Firestone, Michelle
Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is a double standard however with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges including misclassification errors and non-response bias systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations.
Testing the idea of privileged awareness of self-relevant information.
Stein, Timo; Siebold, Alisha; van Zoest, Wieske
2016-03-01
Self-relevant information is prioritized in processing. Some have suggested the mechanism driving this advantage is akin to the automatic prioritization of physically salient stimuli in information processing (Humphreys & Sui, 2015). Here we investigate whether self-relevant information is prioritized for awareness under continuous flash suppression (CFS), as has been found for physical salience. Gabor patches with different orientations were first associated with the labels You or Other. Participants were more accurate in matching the self-relevant association, replicating previous findings of self-prioritization. However, breakthrough into awareness from CFS did not differ between self- and other-associated Gabors. These findings demonstrate that self-relevant information has no privileged access to awareness. Rather than modulating the initial visual processes that precede and lead to awareness, the advantage of self-relevant information may better be characterized as prioritization at later processing stages.
The system for statistical analysis of logistic information
Khayrullin Rustam Zinnatullovich
2015-05-01
The current problem for managers in logistics and trading companies is improving operational business performance and developing logistics support for sales. Developing logistics support for sales entails designing and implementing a set of works for the development of existing warehouse facilities, including both a detailed description of the work performed and the timing of its implementation. Logistics engineering of a warehouse complex includes such tasks as: determining the number and types of technological zones; calculating the required number of loading-unloading places; developing storage structures; developing pre-sales preparation zones; developing specifications of storage types; selecting loading-unloading equipment; detailed planning of the warehouse logistics system; creating architectural-planning decisions; selecting information-processing equipment, etc. The currently used ERP and WMS systems do not solve the full list of logistics engineering problems. In this regard, the development of specialized software products that take into account the specifics of warehouse logistics, and the subsequent integration of this software with ERP and WMS systems, is a current task. In this paper we propose a system for statistical analysis of logistics information, designed to meet the challenges of logistics engineering and planning, and intended to improve the efficiency of the operating business and the development of logistics support for sales. The system is based on methods of statistical data processing; methods of assessment and prediction of logistics performance; methods for determining and calculating the data required for registration, storage and processing of metal products; as well as methods for planning the reconstruction and development
A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model
Mathe, Nathalie; Chen, James
1994-01-01
Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge about contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it requires no prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
Shvidenko, Anatoly; Schepaschenko, Dmitry; Baklanov, Alexander
2014-05-01
PEEX, as a long-term multidisciplinary integrated study, needs a systems design of a relevant information background. The development of an Integrated Land Information System (ILIS) for the region, as an initial step toward future advanced integrated observing systems, is considered a promising way forward. The ILIS could serve (1) to introduce a unified system of classification and quantification of environment, ecosystems and landscapes; (2) as a benchmark for tracing the dynamics of land use, land cover and ecosystem parameters, particularly for forests; (3) as a systems background for empirical assessment of indicators of interest (e.g., components of biogeochemical cycles); (4) for comparison, harmonization and mutual constraint of the results obtained by different methods; (5) for parameterization of surface fluxes for the atmosphere-land system; (6) for use in diverse models and for model validation; (7) for downscaling of available information to a required scale; (8) for understanding of gradients for up-scaling of "point" data, etc. The ILIS is presented in the form of a multi-layer and multi-scale GIS that includes a hybrid land cover (HLC) by a definite date and corresponding legends and attributive databases. The HLC is based on a relevant combination of a "multi" remote sensing concept that includes sensors of different types and resolutions, and ground data. The ILIS includes, inter alia, (1) a general geographical and biophysical description of the territory (landscapes, soil, vegetation, hydrology, bioclimatic zones, permafrost, etc.); (2) diverse datasets of in situ measurements; (3) sets of empirical and semi-empirical aggregation and auxiliary models; (4) data from different inventories and surveys (forest inventory, land account, results of forest monitoring); (5) spatial and temporal descriptions of anthropogenic and natural disturbances; (6) climatic data with relevant temporal resolution, etc. The ILIS should include only the data with known
A statistical information-based clustering approach in distance space
YUE Shi-hong; LI Ping; GUO Ji-dong; ZHOU Shui-geng
2005-01-01
Clustering, as a powerful data mining technique for discovering interesting data distributions and patterns in an underlying database, is used in many fields, such as statistical data analysis, pattern recognition, image processing, and other business applications. Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al., 1996) is a high-performance clustering method for dealing with spatial data, although it leaves many problems to be solved. For example, DBSCAN requires a user-specified threshold whose computation by current methods such as OPTICS (Ankerst et al., 1999) is extremely time-consuming, and the performance of DBSCAN under different norms has yet to be examined. In this paper, we first developed a method based on statistical information of the distance space in the database to determine the necessary threshold. Then our examination of DBSCAN's performance under different norms showed that there is a determinable relation between them. Finally, we used two artificial databases to verify the effectiveness and efficiency of the proposed methods.
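The general idea the abstract describes — deriving DBSCAN's distance threshold from statistics of the data's distance space rather than asking the user for it — can be sketched as follows. The 90th-percentile rule on 4th-nearest-neighbour distances is an illustrative assumption, not the authors' actual method, and the tiny DBSCAN here is a textbook re-implementation, not their code:

```python
import numpy as np

def dbscan_labels(D, eps, min_pts):
    """Minimal DBSCAN over a precomputed distance matrix D."""
    n = len(D)
    labels = np.full(n, -1)                    # -1 = noise / unvisited
    core = (D <= eps).sum(axis=1) >= min_pts   # neighbour counts include self
    cid = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        stack, labels[i] = [i], cid            # grow a new cluster from i
        while stack:
            j = stack.pop()
            for q in np.flatnonzero(D[j] <= eps):
                if labels[q] == -1:
                    labels[q] = cid
                    if core[q]:                # only core points expand further
                        stack.append(q)
        cid += 1
    return labels

rng = np.random.default_rng(0)
# two well-separated Gaussian blobs as toy "spatial data"
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)

# "statistical information of the distance space": pick eps as a quantile
# of each point's 4th-nearest-neighbour distance (illustrative rule)
eps = np.quantile(np.sort(D, axis=1)[:, 4], 0.9)
labels = dbscan_labels(D, eps, min_pts=4)
n_clusters = labels.max() + 1
```

The threshold adapts to the data's own density statistics, which is the point of a statistical-information-based approach: no user-supplied eps is needed.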
Andrade, Leonardo Rosa; Maia, Adelena Gonçalves; Lucio, Paulo Sérgio
2017-02-01
This research investigated the relevance of four hydrological variables to the performance of a domestic rainwater harvesting (DRWH) system. The hydrological variables investigated are average annual rainfall (P), precipitation concentration degree (PCD), antecedent dry weather period (ADWP), and the ratio of dry days to rainy days (nD/nR). Principal component analysis was used to group water-saving efficiency into a select set of variables, and the relevance of the hydrological variables to water-saving efficiency was studied using canonical correlation analysis. P associated with PCD, ADWP, or nD/nR attained a better correlation with water-saving efficiency than P alone. We conclude that empirical models that represent a large combination of roof-surface areas, rainwater-tank sizes, water demands, and rainfall regimes should also consider a variable for precipitation temporal variability, and treat it as an independent variable.
Wang Hui-Song; Zeng Gui-Hua
2008-01-01
In this paper, the effect of imperfect channel state information at the receiver, caused by noise and other interference, on the multi-access channel capacity is analysed through a statistical-mechanical approach. Replica analyses focus on analytically studying how the minimum mean square error (MMSE) channel estimation error appears in a multiuser channel capacity formula, and the relevant mathematical expressions are derived. At the same time, numerical simulation results are presented to validate the replica analyses. The simulation results show how system parameters, such as channel estimation error, system load and signal-to-noise ratio, affect the channel capacity.
Choy, Samantha Low; O'Leary, Rebecca; Mengersen, Kerrie
2009-01-01
Bayesian statistical modeling has several benefits within an ecological context. In particular, when observed data are limited in sample size or representativeness, then the Bayesian framework provides a mechanism to combine observed data with other "prior" information. Prior information may be obtained from earlier studies, or in their absence, from expert knowledge. This use of the Bayesian framework reflects the scientific "learning cycle," where prior or initial estimates are updated when new data become available. In this paper we outline a framework for statistical design of expert elicitation processes for quantifying such expert knowledge, in a form suitable for input as prior information into Bayesian models. We identify six key elements: determining the purpose and motivation for using prior information; specifying the relevant expert knowledge available; formulating the statistical model; designing effective and efficient numerical encoding; managing uncertainty; and designing a practical elicitation protocol. We demonstrate this framework applies to a variety of situations, with two examples from the ecological literature and three from our experience. Analysis of these examples reveals several recurring important issues affecting practical design of elicitation in ecological problems.
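The "combine observed data with prior information" step the abstract describes has a concrete minimal instance: encode an expert's elicited mean and spread as a moment-matched Beta prior for a probability, then update it with a small data set via the conjugate Beta-Binomial rule. The elicited numbers and survey counts below are illustrative values, not from the paper:

```python
# Expert elicitation: "occupancy probability is about 0.3, give or take 0.1".
# Moment-match a Beta(a, b) prior to that mean and standard deviation.
mean, sd = 0.3, 0.1                       # elicited summaries (assumed values)
nu = mean * (1 - mean) / sd**2 - 1        # Beta "effective sample size" a + b
a, b = mean * nu, (1 - mean) * nu         # prior Beta(a, b) = Beta(6, 14)

# Limited observed data: 9 occupied sites out of 20 surveyed (hypothetical).
successes, trials = 9, 20
a_post, b_post = a + successes, b + trials - successes
post_mean = a_post / (a_post + b_post)    # pulled between prior 0.3 and data 0.45
```

The prior acts like `nu` = 20 pseudo-observations, so with only 20 real observations the expert's knowledge and the data carry comparable weight — exactly the regime the authors target, where sample size is limited.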
Ju, Boryung
2013-06-01
Relevance has a long history of scholarly investigation and discussion in information science. One of its notable concepts is that of 'user-based' relevance. The purpose of this study is to examine how users construct their perspective on the concept of relevance; to analyze what the constituent elements (facets) of relevance are, in terms of core-periphery status; and to compare the differing constructions of two groups of users (information users vs. information professionals) as viewed through a social representations theory perspective. Data were collected from 244 information users and 123 information professionals through use of a free word association method. Three methods were employed to analyze the data: (1) content analysis was used to elicit 26 categories (facets) of the concept of relevance; (2) structural analysis of social representations was used to determine the core-periphery status of those facets in terms of coreness, sum of similarity, and weighted frequency; and (3) maximum tree analysis was used to present and compare the differences between the two groups. The categories elicited in this study overlap with those from previous relevance studies, while the findings of the core-periphery analysis show that Topicality, User-needs, Reliability/Credibility, and Importance are configured as core concepts for the information user group, while Topicality, User-needs, Reliability/Credibility, and Currency are core concepts for the information professional group. Differences between the social representations of relevance revealed that Topicality was similar to User-needs and to Importance. Author is closely related to Title, while Reliability/Credibility is linked with Currency. Easiness/Clarity is similar to Accuracy. Overall, information users and professionals function with a similar social collective of shared meanings for the concept of relevance. The overall findings identify the core and periphery concepts of relevance and their
On Using Genetic Algorithms for Multimodal Relevance Optimization in Information Retrieval.
Boughanem, M.; Christment, C.; Tamine, L.
2002-01-01
Presents a genetic relevance optimization process performed in an information retrieval system that uses genetic techniques for solving multimodal problems (niching) and query reformulation techniques. Explains that the niching technique allows the process to reach different relevance regions of the document space, and that query reformulations…
The Statistical Information of Dynamic Efficiency and Monitoring Enterprises (动态绩效统计信息与企业监控)
Li, Ping (李萍)
2003-01-01
The author discusses the content that the establishment of dynamic efficiency statistics should focus on, the development of the relevant statistical information, and an approach to building a platform through which the government can dynamically monitor performance in state-owned and state-controlled enterprises using portal information technology.
Takahama, Satoshi; Ruggeri, Giulia; Dillner, Ann M.
2016-07-01
Various vibrational modes present in molecular mixtures of laboratory and atmospheric aerosols give rise to complex Fourier transform infrared (FT-IR) absorption spectra. Such spectra can be chemically informative, but they often require sophisticated algorithms for quantitative characterization of aerosol composition. Naïve statistical calibration models developed for quantification employ the full suite of wavenumbers available from a set of spectra, leading to loss of mechanistic interpretation between chemical composition and the resulting changes in absorption patterns that underpin their predictive capability. Using sparse representations of the same set of spectra, alternative calibration models can be built in which only a select group of absorption bands are used to make quantitative prediction of various aerosol properties. Such models are desirable as they allow us to relate predicted properties to their underlying molecular structure. In this work, we present an evaluation of four algorithms for achieving sparsity in FT-IR spectroscopy calibration models. Sparse calibration models exclude unnecessary wavenumbers from infrared spectra during the model building process, permitting identification and evaluation of the most relevant vibrational modes of molecules in complex aerosol mixtures required to make quantitative predictions of various measures of aerosol composition. We study two types of models: one which predicts alcohol COH, carboxylic COH, alkane CH, and carbonyl CO functional group (FG) abundances in ambient samples based on laboratory calibration standards and another which predicts thermal optical reflectance (TOR) organic carbon (OC) and elemental carbon (EC) mass in new ambient samples by direct calibration of infrared spectra to a set of ambient samples reserved for calibration. We describe the development and selection of each calibration model and evaluate the effect of sparsity on prediction performance. Finally, we ascribe
Influencing Tomorrow: A Study of Emerging Influence Techniques and Their Relevance to United States Information Operations
2015-06-12
…groups and governmental institutions; the possibility of economic loss directed at entrepreneurs; or the prospect of United States undue influence or…
From Quality to Information Quality in Official Statistics
Kenett Ron S.
2016-12-01
The term quality of statistical data, developed and used in official statistics and international organizations such as the International Monetary Fund (IMF) and the Organisation for Economic Co-operation and Development (OECD), refers to the usefulness of summary statistics generated by producers of official statistics. Similarly, in the context of survey quality, official agencies such as Eurostat, the National Center for Science and Engineering Statistics (NCSES), and Statistics Canada have created dimensions for evaluating the quality of a survey and its ability to report 'accurate survey data'.
Enzo eBrunetti
2013-06-01
During monitoring of discourse, the detection of the relevance of incoming lexical information can be critical for its incorporation into updated mental representations in memory. Because in these situations the relevance of lexical information is defined by abstract rules maintained in memory, it is critical to understand how an abstract level of knowledge maintained in mind mediates the detection of lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information, based on 'kept in mind' rules deriving from more abstract semantic information. We tested our hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities derived from implicit aspects. We used a categorization task in which semantic relevance was defined in advance by the congruency between a category kept in mind (abstract knowledge) and the lexical-semantic information presented. Our results show that during the detection of relevant lexical information, phase synchronization of neuronal oscillations selectively increases in the delta and theta frequency bands during the interval of semantic analysis. These increments were independent of the semantic category maintained in memory, had a temporal profile specific to each subject, and were mainly induced, as they had no effect on the evoked mean global field power. Also, recruitment of an increased number of electrode pairs was a robust observation during the detection of semantically contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule may be mediated by increasing global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended number of cortical regions.
Hans Lehmann, Ph.D.
2005-11-01
This paper builds on the belief that rigorous Information Systems (IS) research can help practitioners better understand and adapt to emerging situations. Contrary to the view of rigour and relevance as a dichotomy, it is maintained that IS researchers have a third choice: namely, to be both relevant and rigorous. The paper proposes ways in which IS research can contribute to easing practitioners' burden of adapting to change by providing timely, relevant, and rigorous research. It is argued that synergy between relevance and rigour is possible, and that classic grounded theory methodology in combination with case-based data provides a good framework for rigorous and relevant research of emerging phenomena in information systems.
Arbona, A.; Bona, C.; Miñano, B.; Plastino, A.
2014-09-01
The definition of complexity through Statistical Complexity Measures (SCM) has recently seen major improvements. Mostly, the effort is concentrated in measures on time series. We propose a SCM definition for spatial dynamical systems. Our definition is in line with the trend to combine entropy with measures of structure (such as disequilibrium). We study the behaviour of our definition against the vectorial noise model of Collective Motion. From a global perspective, we show how our SCM is minimal at both the microscale and macroscale, while it reaches a maximum at the ranges that define the mesoscale in this model. From a local perspective, the SCM is minimum both in highly ordered and disordered areas, while it reaches a maximum at the edges between such areas. These characteristics suggest this is a good candidate for detecting the mesoscale of arbitrary dynamical systems as well as regions where the complexity is maximal in such systems.
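The entropy-times-structure construction the abstract mentions can be illustrated with the classic LMC statistical complexity for a discrete distribution — a simpler relative of the spatial SCM proposed in the paper, shown here only to make the "minimal at order and at disorder, maximal in between" behaviour concrete:

```python
import math

def lmc_complexity(p):
    """LMC statistical complexity: normalized Shannon entropy H times
    disequilibrium D (squared distance from the uniform distribution)."""
    n = len(p)
    H = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
    D = sum((pi - 1 / n) ** 2 for pi in p)
    return H * D

# vanishes at both extremes, mirroring the micro/macroscale minima:
ordered = lmc_complexity([1.0, 0.0, 0.0, 0.0])  # perfect order: H = 0
uniform = lmc_complexity([0.25] * 4)            # perfect disorder: D = 0
mixed = lmc_complexity([0.7, 0.1, 0.1, 0.1])    # intermediate: positive
```

Any intermediate distribution gives a strictly positive value, which is the signature the authors exploit to locate the mesoscale.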
The role of control groups in mutagenicity studies: matching biological and statistical relevance.
Hauschke, Dieter; Hothorn, Torsten; Schäfer, Juliane
2003-06-01
The statistical test of the conventional hypothesis of "no treatment effect" is commonly used in the evaluation of mutagenicity experiments. Failing to reject the hypothesis often leads to the conclusion in favour of safety. The major drawback of this indirect approach is that what is controlled by a prespecified level alpha is the probability of erroneously concluding hazard (producer risk). However, the primary concern of safety assessment is the control of the consumer risk, i.e. limiting the probability of erroneously concluding that a product is safe. In order to restrict this risk, safety has to be formulated as the alternative, and hazard, i.e. the opposite, has to be formulated as the hypothesis. The direct safety approach is examined for the case when the corresponding threshold value is expressed either as a fraction of the population mean for the negative control, or as a fraction of the difference between the positive and negative controls.
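The "direct" formulation the abstract advocates — safety as the alternative hypothesis, hazard as the null, with the threshold expressed as a fraction of the negative-control mean — can be sketched as a one-sided shifted test. The fraction 0.2, the normal-approximation critical value, and the toy measurements are illustrative assumptions, not the paper's procedure or data:

```python
import math
import statistics as st

def direct_safety_test(treat, neg_ctrl, fraction=0.2, alpha_crit=1.645):
    """One-sided test of H0: mean(treat) - mean(neg) >= theta (hazard)
    vs H1: difference < theta (safety), with theta a fraction of the
    negative-control mean. Normal approximation for the critical value."""
    theta = fraction * st.mean(neg_ctrl)
    diff = st.mean(treat) - st.mean(neg_ctrl)
    se = math.sqrt(st.variance(treat) / len(treat)
                   + st.variance(neg_ctrl) / len(neg_ctrl))
    z = (diff - theta) / se
    return z < -alpha_crit   # rejecting hazard limits the consumer risk

# hypothetical mutagenicity measurements, treatment vs negative control
treat = [10.1, 9.9, 10.0, 10.2, 9.8] * 4
neg = [10.0, 10.1, 9.9, 10.05, 9.95] * 4
concluded_safe = direct_safety_test(treat, neg)
```

Note the asymmetry: failing this test does not prove hazard; it merely withholds the safety conclusion, which is exactly how the direct approach controls the consumer risk rather than the producer risk.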
Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David
1999-01-01
Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
2013-02-26
Food and Drug Administration Statistics Forum—2013; Public Conference. AGENCY: Food and Drug Administration, HHS. ACTION: Notice of public conference, "Statistics Forum—2013." The purpose of the conference is to discuss relevant statistical issues associated with … and to provide an open forum for the timely discussion of topics of mutual theoretical and practical interest to…
Samimi, Parnia; Ravana, Sri Devi
2014-01-01
Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experiments. In the classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable and low-cost evaluation of retrieval systems. Crowdsourcing, as a novel method of data acquisition, is broadly used in many research fields. It has been proven that crowdsourcing is an inexpensive and quick solution, as well as a reliable alternative, for creating relevance judgments. One of the crowdsourcing applications in IR is judging the relevance of query-document pairs. In order to have a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely, with an emphasis on quality control. This paper explores different factors that influence the accuracy of relevance judgments produced by workers, and how to improve the reliability of judgments in a crowdsourcing experiment.
Value Relevance of Accounting Information in the Pre- and Post-IFRS Accounting Periods
2010-01-01
This paper examines the value relevance of accounting information in the pre- and post-periods of International Financial Reporting Standards implementation, using the models of Easton and Harris (1991) and Feltham and Ohlson (1995) for a sample of Greek companies. The results of the paper indicate that the effects of the IFRS reduced the incremental information content of book values of equity for stock prices. However, earnings' incremental information content increased for the post-IFRS perio...
The relevance of music information representation metadata from the perspective of expert users
Camila Monteiro de Barros
The general goal of this research was to verify which metadata elements of music information representation are relevant for its retrieval from the perspective of expert music users. Based on bibliographical research, a comprehensive metadata set of music information representation was developed and transformed into a questionnaire for data collection, which was applied to students and professors of the Graduate Program in Music at the Federal University of Rio Grande do Sul. The results show that the most relevant information for expert music users is related to identification and authorship responsibilities. The respondents from the Composition and Interpretative Practice areas agree with these results, while the respondents from the Musicology/Ethnomusicology and Music Education areas also consider the metadata related to the historical context of composition relevant.
Martinez, Rafael; Rodriguez, Francisco de Borja; Camacho, David
2007-01-01
The main contribution of this paper is to design an Information Retrieval (IR) technique based on Algorithmic Information Theory (using the Normalized Compression Distance, NCD), statistical techniques (outliers), and a novel organization of the database structure. The paper shows how these can be integrated to retrieve information from generic databases using long (text-based) queries. Two important problems are analyzed in the paper. On the one hand, how to detect "false positives" when the distance among documents is very low yet there is no actual similarity. On the other hand, we propose a way to structure a document database whose similarity-distance estimation depends on the length of the selected text. Finally, the experimental evaluations carried out to study these problems are shown.
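The Normalized Compression Distance at the core of this approach can be sketched with an off-the-shelf compressor. The documents and query below are toy stand-ins, and zlib is just one of several compressors one could plug in:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, approximated with zlib.
    Similar strings compress better concatenated than separately."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# toy document base and a long text-based query (hypothetical content)
docs = {
    "stats": b"p-values and confidence intervals in statistical inference " * 20,
    "cooking": b"slice the onions and simmer the broth gently " * 20,
}
query = b"confidence intervals and p-values for inference " * 20

# rank documents by compression distance to the query, closest first
ranked = sorted(docs, key=lambda k: ncd(query, docs[k]))
```

Because real compressors are imperfect, `ncd(x, x)` is small but not exactly zero — one reason the paper pairs the distance with statistical outlier techniques to catch spurious low-distance matches.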
The Common Body of Knowledge: A Framework to Promote Relevant Information Security Research
Kenneth J. Knapp
2007-03-01
This study proposes using an established common body of knowledge (CBK) as one means of organizing the information security literature. Consistent with calls for more relevant information systems (IS) research, this industry-developed framework can motivate future research towards topics that are important to the security practitioner. In this review, forty-eight articles from ten IS journals from 1995 to 2004 are selected and cross-referenced to the ten domains of the information security CBK. Further, we distinguish articles as empirical research, frameworks, or tutorials. Generally, this study identified a need for additional empirical research in every CBK domain, including topics related to legal aspects of information security. Specifically, this study identified a need for additional IS security research relating to applications development, physical security, operations security, and business continuity. The CBK framework is inherently practitioner oriented, and using it will promote relevancy by steering IS research towards topics important to practitioners. This is important considering the frequent calls by prominent information systems scholars for more relevant research. Few research frameworks have emerged from the literature that specifically classify the diversity of security threats and range of problems that businesses today face. With the recent surge of interest in security, the need for a comprehensive framework that also promotes relevant research can be of great value.
Icek Ajzen; Thomas C. Brown; Lori H. Rosenthal
1996-01-01
A laboratory experiment examined the potential for information bias in contingent valuation (CV). Consistent with the view that information about a public or private good can function as a persuasive communication, willingness to pay (WTP) was found to increase with the quality of arguments used to describe the good, especially under conditions of high personal...
Bilal Kimouche
2016-03-01
Purpose: The paper aims to explore whether intangible items recognised in financial statements are value-relevant to investors in the French context, and whether these items affect the value relevance of accounting information. Design/methodology/approach: Empirical data were collected from a sample of French listed companies over the nine-year period 2005 to 2013. Starting from Ohlson's (1995) model, correlation analysis and linear multiple regressions were applied. Findings: We find that intangibles and traditional accounting measures as a whole are value relevant. However, the amortization and impairment charges of intangibles and cash flows do not affect the market values of French companies, unlike the other variables, which affect market values positively and substantially. Also, goodwill and book values are more associated with market values than intangible assets and earnings, respectively. Finally, we find that intangibles have improved the value relevance of accounting information. Practical implications: French legislators should give more attention to intangibles in order to enrich the content of financial statements and increase the pertinence of accounting information. Auditors should give more attention to the examination of intangibles in order to certify the amounts related to intangibles in financial statements, and hence enhance their reliability, which provides adequate guarantees for investors to use them in decision making. Originality/value: The paper uses recently available financial data and proposes an improvement concerning the measurement of the incremental value relevance of intangible items.
Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts
2006-02-22
…and Delfiner 1999). Covariance structures that are more complex can be accommodated, and we discuss some of the options in Section 5…
Fisher information and statistical inference for phase-type distributions
Bladt, Mogens; Esparza, Luz Judith R; Nielsen, Bo Friis
2011-01-01
This paper is concerned with statistical inference for both continuous and discrete phase-type distributions. We consider maximum likelihood estimation, where traditionally the expectation-maximization (EM) algorithm has been employed. Certain numerical aspects of this method are revised and we p...
大日方, 隆
2007-01-01
The purpose of this paper is to confirm how international academics evaluate the Japanese accounting system. The paper surveys prior studies on international comparisons of accounting information (including Japan) and re-examines the empirical findings on the usefulness of earnings information in Japan, focusing on the value relevance of earnings. Many researchers have pointed out that code law, investor protection in financial regulation environments and Japanese corporate governance, ...
Non-Equilibrium Statistical Mechanics Inspired by Modern Information Theory
Oscar C. O. Dahlsten
2013-12-01
Full Text Available A collection of recent papers revisit how to quantify the relationship between information and work in the light of modern information theory, so-called single-shot information theory. This is an introduction to those papers, from the perspective of the author. Many of the results may be viewed as a quantification of how much work a generalized Maxwell’s daemon can extract as a function of its extra information. These expressions do not in general involve the Shannon/von Neumann entropy but rather quantities from single-shot information theory. In a limit of large systems composed of many identical and independent parts the Shannon/von Neumann entropy is recovered.
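The distinction drawn above can be illustrated numerically: single-shot results are governed by quantities such as the min-entropy rather than the Shannon/von Neumann entropy, and the two only coincide for flat distributions. A minimal sketch (illustrative distribution, not taken from the papers discussed):

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits; the large-i.i.d.-limit quantity."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def min_entropy(p):
    """Single-shot quantity H_min = -log2(max_i p_i); governs, e.g.,
    guaranteed (worst-case) work extraction rather than the average."""
    return -math.log2(max(p))

p = [0.7, 0.1, 0.1, 0.1]      # illustrative state occupation probabilities
H = shannon_entropy(p)
Hmin = min_entropy(p)
```

For many identical independent copies the single-shot quantities converge to the Shannon/von Neumann entropy, matching the limit noted in the abstract.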
Downward, L.; Booth, C.H.; Lukens, W.W.; Bridges, F.
2006-07-25
A general problem when fitting EXAFS data is determining whether particular parameters are statistically significant. The F-test is an excellent way of determining relevancy in EXAFS because it only relies on the ratio of the fit residual of two possible models, and therefore the data errors approximately cancel. Although this test is widely used in crystallography (there, it is often called a 'Hamilton test') and has been properly applied to EXAFS data in the past, it is very rarely applied in EXAFS analysis. We have implemented a variation of the F-test adapted for EXAFS data analysis in the RSXAP analysis package, and demonstrate its applicability with a few examples, including determining whether a particular scattering shell is warranted, and differentiating between two possible species or two possible structures in a given shell.
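The F-test described here compares the fit residuals of two nested models, so the (approximately common) data errors cancel in the ratio. A minimal sketch of the statistic (hypothetical residuals and parameter counts, not from an actual EXAFS fit):

```python
def f_statistic(res0, res1, n_pts, p0, p1):
    """F-test for nested fits. res0, res1: sum-of-squares residuals of the
    simpler (p0-parameter) and fuller (p1-parameter) model; n_pts: number of
    independent data points. Large F favours the fuller model."""
    num = (res0 - res1) / (p1 - p0)
    den = res1 / (n_pts - p1)
    return num / den

# Hypothetical example: does adding a scattering shell (2 extra parameters)
# reduce the residual enough to be statistically warranted?
F = f_statistic(res0=0.040, res1=0.025, n_pts=20, p0=5, p1=7)
# F would then be compared with the critical value of the F(p1-p0, n_pts-p1)
# distribution at the chosen confidence level.
```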
Statistical physics of networks, information and complex systems
Ecke, Robert E [Los Alamos National Laboratory
2009-01-01
In this project we explore the mathematical methods and concepts of statistical physics that are finding abundant applications across the scientific and technological spectrum, from soft condensed matter systems and bio-informatics to economic and social systems. Our approach exploits the considerable similarity of concepts between statistical physics and computer science, allowing for a powerful multi-disciplinary approach that draws its strength from cross-fertilization and multiple interactions of researchers with different backgrounds. The work on this project takes advantage of the newly appreciated connection between computer science and statistics and addresses important problems in data storage, decoding, optimization, the information processing properties of the brain, the interface between quantum and classical information science, the verification of large software programs, modeling of complex systems including disease epidemiology, resource distribution issues, and the nature of highly fluctuating complex systems. Common themes that the project has been emphasizing are (i) neural computation, (ii) network theory and its applications, and (iii) a statistical physics approach to information theory. The project's efforts focus on the general problem of optimization and variational techniques, algorithm development, and information-theoretic approaches to quantum systems. These efforts are responsible for fruitful collaborations and the nucleation of science efforts that span multiple divisions such as EES, CCS, D, T, ISR and P. This project supports the DOE mission in Energy Security and Nuclear Non-Proliferation by developing novel information science tools for communication, sensing, and interacting complex networks such as the internet or energy distribution system. The work also supports programs in Threat Reduction and Homeland Security.
Hayslett, H T
1991-01-01
Statistics covers the basic principles of statistics. The book starts by tackling the importance of statistics and its two kinds; the presentation of sample data; the definition, illustration, and explanation of several measures of location; and the measures of variation. The text then discusses elementary probability, the normal distribution, and the normal approximation to the binomial. Testing of statistical hypotheses is explained, including tests of hypotheses about the theoretical proportion of successes in a binomial population and about the theoretical mean of a normal population. The text the...
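The measures of location and variation, and the normal approximation to the binomial, that the book covers can be sketched with a small illustrative example (the data below are made up for demonstration):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    """Middle value; average of the two middle values for even n."""
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def sample_variance(xs):
    """Unbiased sample variance (divides by n - 1)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def binom_normal_approx_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p) via the normal approximation
    N(np, np(1-p)) with a continuity correction of 0.5."""
    mu, sigma = n * p, math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

data = [2, 4, 4, 4, 5, 5, 7, 9]          # illustrative sample
p_tail = binom_normal_approx_ge(60, 100, 0.5)  # P(>= 60 heads in 100 fair flips)
```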
Wood, Wendy; And Others
Research literature shows that people with access to attitude-relevant information in memory are able to draw on relevant beliefs and prior experiences when analyzing a persuasive message. This suggests that people who can retrieve little attitude-relevant information should be less able to engage in systematic processing. Two experiments were…
Quantum field theory, statistical physics, and information theory
Toyoda, Tadashi [Tokai Univ., Kanagawa (Japan)
2001-05-01
It is shown that the one-particle Matsubara temperature Green's function can be regarded as a Fisher information matrix on the basis of the quantum generalization of relative entropy due to Watanabe and Neumann.
Hoelzer, Simon; Schweiger, Ralf Kurt; Boettcher, Hanno; Rieger, Joerg; Dudeck, Joachim
2002-01-01
Due to the information overload and the unstructured access to (medical) information on the internet, it is hardly possible to find problem-relevant medical information in an appropriate time (e.g. during a consultation). The web offers a mixture of web pages, forums, newsgroups and databases. The search for problem-relevant information in a certain knowledge area encounters two basic problems. On the one hand, one has to find, in the jungle of information, resources relevant to the individual clinical case (treatment, diagnosis, therapeutic option, etc.). The second problem consists of being able to judge the quality of the individual contents of internet pages. On the basis of the different informational needs of health care professionals and patients, a catalog of internet resources was created for tumor diseases such as lung cancer (small cell and non-small cell carcinoma), colorectal cancer and thyroid cancer. Explicit and implicit metainformation, if available, such as the title of the document, language, date or keywords, is stored in the database. The database entries are editorially revised, so that further specific metainformation is available for information retrieval. Our pragmatic approach of searching, editing, and archiving internet content is still necessary, since most web documents are based on HTML, which does not allow for structuring (medical) information and assigning metainformation sufficiently. The use of specific metainformation is crucial in order to improve the recall and precision of internet searches. In the future, XML and related technologies (RDF) will meet these requirements.
Chebat, Jean-Charles; Vercollier, Sarah Drissi; Gélinas-Chebat, Claire
2003-06-01
The effects of drama versus lecture format in public service advertisements are studied in a 2 (format) x 2 (malaria vs AIDS) factorial design. Two structural equation models are built (one for each level of self-relevance), showing two distinct patterns. In both low and high self-relevant situations, empathy plays a key role. Under low self-relevance conditions, drama enhances information processing through empathy. Under high self-relevance conditions, the advertisement format has neither significant cognitive nor empathetic effects. The information processing generated by the highly relevant topic affects viewers' empathy, which in turn affects the attitude toward the advertisement and the behavioral intent. As predicted by the Elaboration Likelihood Model, the advertisement format enhances attitudes and information processing mostly under low self-relevance conditions. Under low self-relevance conditions, empathy enhances information processing, while under high self-relevance, the converse relation holds.
Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.
Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel
2015-05-15
We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements.
Surface Electromyographic Onset Detection Based On Statistics and Information Content
López, Natalia M.; Orosco, Eugenio; di Sciascio, Fernando
2011-12-01
The correct detection of the onset of muscular contraction is a diagnostic tool for neuromuscular diseases and an action trigger for controlling myoelectric devices. In this work, entropy and information-content concepts were applied in algorithmic methods for automatic onset detection in surface electromyographic signals.
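A minimal sketch of such an onset detector follows (synthetic signal; the RMS thresholding rule, window length, and histogram-entropy measure are illustrative assumptions, not the authors' algorithm):

```python
import math

def window_entropy(w, bins=8):
    """Shannon entropy (bits) of an amplitude histogram over one window;
    activity raises amplitude spread and hence the information content."""
    lo, hi = min(w), max(w)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for x in w:
        counts[min(int((x - lo) / (hi - lo) * bins), bins - 1)] += 1
    n = len(w)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def detect_onset(signal, win=10, threshold=2.0):
    """Return the first window start index whose RMS exceeds `threshold`
    times the baseline RMS (taken from the first window), else None."""
    def rms(w):
        return math.sqrt(sum(x * x for x in w) / len(w))
    baseline = rms(signal[:win])
    for i in range(win, len(signal) - win + 1, win):
        if rms(signal[i:i + win]) > threshold * baseline:
            return i
    return None

# Synthetic sEMG-like trace: low-amplitude noise, then a burst at sample 30
sig = [0.05 * ((-1) ** k) for k in range(30)] + [0.8 * ((-1) ** k) for k in range(30)]
onset = detect_onset(sig)
```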
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Bayesian statistics and information fusion for GPS-denied navigation
Copp, Brian Lee
It is well known that satellite navigation systems are vulnerable to disruption due to jamming, spoofing, or obstruction of the signal. The desire for robust navigation of aircraft in GPS-denied environments has motivated the development of feature-aided navigation systems, in which measurements of environmental features are used to complement the dead reckoning solution produced by an inertial navigation system. Examples of environmental features which can be exploited for navigation include star positions, terrain elevation, terrestrial wireless signals, and features extracted from photographic data. Feature-aided navigation represents a particularly challenging estimation problem because the measurements are often strongly nonlinear, and the quality of the navigation solution is limited by the knowledge of nuisance parameters which may be difficult to model accurately. As a result, integration approaches based on the Kalman filter and its variants may fail to give adequate performance. This project develops a framework for the integration of feature-aided navigation techniques using Bayesian statistics. In this approach, the probability density function for aircraft horizontal position (latitude and longitude) is approximated by a two-dimensional point mass function defined on a rectangular grid. Nuisance parameters are estimated using a hypothesis based approach (Multiple Model Adaptive Estimation) which continuously maintains an accurate probability density even in the presence of strong nonlinearities. The effectiveness of the proposed approach is illustrated by the simulated use of terrain referenced navigation and wireless time-of-arrival positioning to estimate a reference aircraft trajectory. Monte Carlo simulations have shown that accurate position estimates can be obtained in terrain referenced navigation even with a strongly nonlinear altitude bias. The integration of terrain referenced and wireless time-of-arrival measurements is described along with
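The point-mass approach described above reduces, at each measurement, to Bayes' rule applied cell by cell on the latitude/longitude grid, which is what lets it tolerate strongly nonlinear measurements. A toy sketch of a terrain-referenced update (synthetic 3x3 grid and values; not the project's implementation):

```python
import math

def bayes_grid_update(prior, likelihood):
    """Point-mass Bayes rule on a 2-D grid: posterior ∝ prior × likelihood,
    renormalized so the probability masses sum to one."""
    post = [[prior[i][j] * likelihood[i][j] for j in range(len(prior[0]))]
            for i in range(len(prior))]
    total = sum(sum(row) for row in post)
    return [[p / total for p in row] for row in post]

def gaussian_likelihood(z, h, sigma):
    """Unnormalized Gaussian measurement likelihood."""
    return math.exp(-0.5 * ((z - h) / sigma) ** 2)

# Toy terrain map: terrain[i][j] is the elevation at each grid cell,
# and z is a noisy elevation measurement beneath the aircraft.
terrain = [[100, 120, 140], [110, 130, 150], [120, 140, 160]]
prior = [[1 / 9] * 3 for _ in range(3)]          # uniform prior position
z = 131.0
lik = [[gaussian_likelihood(z, terrain[i][j], sigma=5.0) for j in range(3)]
       for i in range(3)]
post = bayes_grid_update(prior, lik)             # mass concentrates near 130 m
```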
Task relevance of emotional information affects anxiety-linked attention bias in visual search.
Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies
2017-01-01
Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.
Full counting statistics of information content and particle number
Utsumi, Yasuhiro
2017-08-01
We consider a bipartite quantum conductor and discuss the joint probability distribution of a particle number in a subsystem and the self-information associated with the reduced density matrix of the subsystem. By extending the multicontour Keldysh Green-function technique, we calculate the Rényi entropy of a positive integer order M subjected to the particle number constraint, from which we derive the joint probability distribution. For energy-independent transmission, we derive the time dependence of the accessible entanglement entropy, or the conditional entropy. We analyze the joint probability distribution for energy-dependent transmission probability at the steady state under the coherent resonant tunneling and the incoherent sequential tunneling conditions. We also discuss the probability distribution of the efficiency, which measures the information content transferred by a single electron.
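For a classical probability distribution (for instance the eigenvalues of the reduced density matrix), the order-M Rényi entropy is S_M = ln(Σ_i p_i^M)/(1 − M), which approaches the von Neumann/Shannon entropy as M → 1. A minimal numerical sketch (illustrative distribution, unrelated to the conductor model):

```python
import math

def renyi_entropy(p, M):
    """Order-M Rényi entropy (in nats) of a probability distribution;
    for a subsystem, p would be the reduced-density-matrix eigenvalues."""
    assert M > 0 and M != 1
    return math.log(sum(x ** M for x in p)) / (1 - M)

def shannon(p):
    """The M -> 1 limit (von Neumann/Shannon entropy, in nats)."""
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.5, 0.25, 0.25]
s2 = renyi_entropy(p, 2)            # order-2 ("collision") entropy
s_near1 = renyi_entropy(p, 1.0001)  # numerically approaches shannon(p)
```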
Fuchs, Julia; Cermak, Jan; Andersen, Hendrik
2017-04-01
This study aims at untangling the impacts of external dynamics and local conditions on cloud properties in the Southeast Atlantic (SEA) by combining satellite and reanalysis data using multivariate statistics. The understanding of clouds and their determinants at different scales is important for constraining the Earth's radiative budget, and thus prominent in climate-system research. In this study, SEA stratocumulus cloud properties are observed not only as the result of local environmental conditions but also as affected by external dynamics and spatial origins of air masses entering the study area. In order to assess to what extent cloud properties are impacted by aerosol concentration, air mass history, and meteorology, a multivariate approach is conducted using satellite observations of aerosol and cloud properties (MODIS, SEVIRI), information on aerosol species composition (MACC) and meteorological context (ERA-Interim reanalysis). To account for the often-neglected but important role of air mass origin, information on air mass history based on HYSPLIT modeling is included in the statistical model. This multivariate approach is intended to lead to a better understanding of the physical processes behind observed stratocumulus cloud properties in the SEA.
Popa Dorina
2012-07-01
Full Text Available The main objective of our work is the conceptual description of the performance of an economic entity in financial and non-financial terms. During our approach we have shown that it is not sufficient to analyze the performance of a company only in financial terms, as the performance reflected in financial reports sometimes does not coincide with the real situation of the company. In this case, the cause of the differences has to be found among the influences of other, non-financial information. Mainly following the great financial scandals, distrust in the reliability of financial-accounting information has grown strongly, and thus business performance measurement cannot be the exclusive domain of the criteria of financial analysis, but must be done in a comprehensive way, based both on financial criteria and on non-financial ones (intangible assets, social responsibility of the company). Using non-financial criteria has led to the occurrence of new types of analysis, namely extra-financial analysis. Thus, enterprise performance is not subject only to the material and financial resources managed and controlled by the entities, but to the complex of intangible resources that companies have created through their previous work. Extra-financial analysis has to face difficulties arising mainly from the existence of non-financial indicators that are very little normalized, and from the lack of uniformity of practice in the field. In determining the extra-financial performance indicators, one has to observe the manifestation and evolution of the company's relationships with its partners/environment. In order to analyze performance measurement by financial and non-financial indicators, we chose as a case study a company in Bihor county, listed on the Bucharest Stock Exchange. The results of our study show that Romanian entities are increasingly interested in measuring performance, and after the extra-financial analysis we concluded that the company had set...
2014-06-01
Data to Information (D2I), and Quality of Service (QoS) Enabled Dissemination (QED). IV. MODELING AND APPLYING DIRECTED QUALIFICATION FOR ANALYTICS ... document or key/value pair stores such as Hadoop or Cassandra. Semantic inferencing can create a form of analytics by applying an ontology relevant ... logic analytics within semantic data sets. Many web-oriented popularity, similarity, and clustering analytics appear to be well suited for semantic
Relevance of Information Systems Strategic Planning Practices in E-Business Contexts
Ganesan Kannabiran; Srinivasan Sundar
2011-01-01
Increasing global competition and advances in Internet technologies have led organizations to consider e-business strategies. However, evolving e-business strategies have been identified as a critical issue faced by corporate planners. The relevance and use of IS (Information Systems) strategic planning practices in the context of e-business have been debated among researchers. In this paper, the authors investigate whether organizations can successfully improve the IS value in the e-busine...
Anil K. Tripathi
2012-09-01
Full Text Available Multimedia information may have multiple semantics depending on context, temporal interest and user preferences. Hence, we exploit the plausibility of the context associated with a semantic concept in retrieving relevant information. We propose an Affective Feature Based Implicit Contextual Semantic Relevance Feedback (AICSRF) framework to investigate whether audio and speech, along with visual features, could determine the current context in which the user wants to retrieve information, and to further investigate whether affective feedback could be employed as an implicit source of evidence in the CSRF cycle to increase the system's contextual semantic understanding. We introduce an Emotion Recognition Unit (ERU) that comprises a spatiotemporal Gabor filter to capture spontaneous facial expressions and an emotional word recognition system that uses phonemes to recognize spoken emotional words. We propose a Contextual Query Perfection Scheme (CQPS) to learn and refine the current context, which could be used in query perfection in the RF cycle to understand the semantics of the query on the basis of relevance judgments made by the ERU. Observations suggest that CQPS in AICSRF, incorporating such affective features, reduces the search space, and hence retrieval time, and increases the system's contextual semantic understanding.
Malpas, P J
2008-07-01
Within the medical, legal and bioethical literature, there has been an increasing concern that the information derived from genetic tests may be used to unfairly discriminate against individuals seeking various kinds of insurance; particularly health and life insurance. Consumer groups, the general public and those with genetic conditions have also expressed these concerns, specifically in the context of life insurance. While it is true that all insurance companies may have an interest in the information obtained from genetic tests, life insurers potentially have a very strong incentive to (want to) use genetic information to rate applicants, as individuals generally purchase their own cover and may want to take out very large policies. This paper critically focuses on genetic information in the context of life insurance. We consider whether genetic information differs in any relevant way from other kinds of non-genetic information required by and disclosed to life insurance companies by potential clients. We will argue that genetic information should not be treated any differently from other types of health information already collected from those wishing to purchase life insurance cover.
Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.
2004-01-01
Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research.
Zhu, Xiaowei; Iungo, G. Valerio; Leonardi, Stefano; Anderson, William
2017-02-01
For a horizontally homogeneous, neutrally stratified atmospheric boundary layer (ABL), aerodynamic roughness length, z_0, is the effective elevation at which the streamwise component of mean velocity is zero. A priori prediction of z_0 based on topographic attributes remains an open line of inquiry in planetary boundary-layer research. Urban topographies - the topic of this study - exhibit spatial heterogeneities associated with variability of building height, width, and proximity with adjacent buildings; such variability renders a priori, prognostic z_0 models appealing. Here, large-eddy simulation (LES) has been used in an extensive parametric study to characterize the ABL response (and z_0) to a range of synthetic, urban-like topographies wherein statistical moments of the topography have been systematically varied. Using LES results, we determined the hierarchical influence of topographic moments relevant to setting z_0. We demonstrate that standard deviation and skewness are important, while kurtosis is negligible. This finding is reconciled with a model recently proposed by Flack and Schultz (J Fluids Eng 132:041203-1-041203-10, 2010), who demonstrate that z_0 can be modelled with standard deviation and skewness, and two empirical coefficients (one for each moment). We find that the empirical coefficient related to skewness is not constant, but exhibits a dependence on standard deviation over certain ranges. For idealized, quasi-uniform cubic topographies and for complex, fully random urban-like topographies, we demonstrate strong performance of the generalized Flack and Schultz model against contemporary roughness correlations.
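The model form discussed above, roughness as a function of the standard deviation and skewness of the surface elevation, can be sketched as follows. The coefficients A and B below are illustrative placeholders (not the published Flack and Schultz values, which the study shows are not universally constant anyway), and the height sample is synthetic:

```python
def moments(heights):
    """Standard deviation and skewness of a surface-elevation sample
    (population moments, dividing by n)."""
    n = len(heights)
    m = sum(heights) / n
    var = sum((h - m) ** 2 for h in heights) / n
    sd = var ** 0.5
    skew = sum((h - m) ** 3 for h in heights) / (n * sd ** 3)
    return sd, skew

def equivalent_roughness(sd, skew, A=4.4, B=1.4):
    """Flack & Schultz-type correlation k_s = A * sd * (1 + skew)^B.
    A and B are empirical coefficients; the values here are placeholders."""
    return A * sd * (1 + skew) ** B

# Synthetic urban-like height sample: mostly low with a few tall elements,
# giving positive skewness.
heights = [0.0, 0.1, 0.0, 0.3, 0.0, 0.2, 0.0, 0.4]
sd, sk = moments(heights)
ks = equivalent_roughness(sd, sk)
```

The study's finding that the skewness coefficient depends on the standard deviation would correspond to making B a function of sd rather than a constant.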
Van Wynsberge, Simon; Gilbert, Antoine; Guillemot, Nicolas; Heintz, Tom; Tremblay-Boyer, Laura
2017-07-01
Extensive biological field surveys are costly and time consuming. To optimize sampling and ensure regular monitoring on the long term, identifying informative indicators of anthropogenic disturbances is a priority. In this study, we used 1800 candidate indicators by combining metrics measured from coral, fish, and macro-invertebrate assemblages surveyed from 2006 to 2012 in the vicinity of an ongoing mining project in the Voh-Koné-Pouembout lagoon, New Caledonia. We performed a power analysis to identify a subset of indicators which would best discriminate temporal changes due to a simulated chronic anthropogenic impact. Only 4% of tested indicators were likely to detect a 10% annual decrease of values with sufficient power (>0.80). Corals generally exerted higher statistical power than macro-invertebrates and fishes because of lower natural variability and higher occurrence. For the same reasons, higher taxonomic ranks provided higher power than lower taxonomic ranks. Nevertheless, a number of families of common sedentary or sessile macro-invertebrates and fishes also performed well in detecting changes: Echinometridae, Isognomidae, Muricidae, Tridacninae, Arcidae, and Turbinidae for macro-invertebrates and Pomacentridae, Labridae, and Chaetodontidae for fishes. Interestingly, these families did not provide high power in all geomorphological strata, suggesting that the ability of indicators in detecting anthropogenic impacts was closely linked to reef geomorphology. This study provides a first operational step toward identifying statistically relevant indicators of anthropogenic disturbances in New Caledonia's coral reefs, which can be useful in similar tropical reef ecosystems where little information is available regarding the responses of ecological indicators to anthropogenic disturbances.
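A power analysis of the kind described can be approximated by Monte Carlo: simulate an indicator with and without a 10% annual decrease, and count how often a one-sided slope test detects the decline. A hedged sketch (synthetic noise model and parameters, not the study's data or method):

```python
import random

def slope(ys):
    """Least-squares slope of ys against time indices 0..n-1."""
    n = len(ys)
    xm = (n - 1) / 2
    ym = sum(ys) / n
    sxx = sum((i - xm) ** 2 for i in range(n))
    return sum((i - xm) * (y - ym) for i, y in enumerate(ys)) / sxx

def power_to_detect_decline(baseline=100.0, decline=0.10, cv=0.15,
                            years=7, sims=2000, alpha=0.05, seed=1):
    """Monte Carlo power of a one-sided slope test: the critical value is
    the alpha-quantile of slopes simulated under no decline (the null).
    cv is the indicator's natural coefficient of variation."""
    rng = random.Random(seed)

    def simulate(annual_decline):
        return [baseline * (1 - annual_decline) ** t * rng.gauss(1.0, cv)
                for t in range(years)]

    null_slopes = sorted(slope(simulate(0.0)) for _ in range(sims))
    crit = null_slopes[int(alpha * sims)]   # ~5th percentile of null slopes
    hits = sum(slope(simulate(decline)) < crit for _ in range(sims))
    return hits / sims

power = power_to_detect_decline()
```

Lowering cv (as for corals, with their lower natural variability) raises the power, which mirrors the study's finding that low-variability, high-occurrence indicators discriminate impacts best.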
Elaboration of a guide including relevant project and logistic information: a case study
Costa, Tchaikowisky M. [Faculdade de Tecnologia e Ciencias (FTC), Itabuna, BA (Brazil); Bresci, Claudio T.; Franca, Carlos M.M. [PETROBRAS, Rio de Janeiro, RJ (Brazil)
2009-07-01
For every mobilization of a new enterprise it is necessary to quickly obtain the greatest amount of relevant information regarding location and availability of infrastructure, logistics, and work site amenities. Among this information are reports elaborated for management of the enterprise (organizational chart, work schedule, objectives, contacts, etc.), as well as the geographic, socio-economic and cultural characteristics of the area to be developed, such as territorial extension, land aspects, local population, roads and amenities (fuel stations, restaurants and hotels), the infrastructure of the cities (health, education, entertainment, housing, transport, etc.) and, logistically, the distance between cities, the estimated travel time, ROW access maps and notable points, among other relevant information. With the idea of making this information available to everyone involved in the enterprise, a rapid guide containing all the information mentioned above was elaborated for GASCAC Spread 2A and made available in all the vehicles used to transport employees and visitors to the spread. With this, everyone quickly received the majority of the necessary information in one place, in a practical, quick, and precise manner, since the information is always used and controlled by the same person. This study includes the model used in the gas pipeline GASCAC Spread 2A project and the methodology used to draft and update the information. In addition, a file in GIS format was prepared containing all necessary planning, execution and tracking information for enterprise activities, from social communication to the execution of the works previously mentioned. Part of the GIS file information was uploaded to Google Earth so as to disclose the information to a greater group of people, bearing in mind that this program is free of charge and easy to use. (author)
Marcel Ginotti Pires
2017-01-01
Full Text Available This article discusses the factors present in the post-merger integration of Systems and Information Technology (SIT) that lead to positive and negative results in mergers and acquisitions (M&A). The research comprised three of the largest acquiring banks in Brazil. We adopted two methods of research: a qualitative one, to operationalize the theoretical concepts, and a quantitative one, to test the hypotheses. We interviewed six bank executives who had relevant experience in M&A processes. Subsequently, we applied questionnaires to IT professionals who were involved in the SIT integration processes. The results showed that the quality and expertise of the integration teams and the management of the integration were the most relevant factors in the processes, with positive results for increased efficiency and increased SIT capacity. Negative results were due to failures in exploiting learning opportunities, the loss of employees, and the scant documentation of integration procedures.
Kubicek, Katrina; Beyer, William J.; Weiss, George; Iverson, Ellen; Kipke, Michele D.
2010-01-01
A growing body of research has investigated the effectiveness of abstinence-only sexual education. There remains a dearth of research on the relevant sexual health information available to young men who have sex with men (YMSM). Drawing on a mixed-methods study with 526 YMSM, this study explores how and where YMSM receive relevant information on…
Gaps in policy-relevant information on burden of disease in children: a systematic review.
Rudan, Igor; Lawn, Joy; Cousens, Simon; Rowe, Alexander K; Boschi-Pinto, Cynthia; Tomasković, Lana; Mendoza, Walter; Lanata, Claudio F; Roca-Feltrer, Arantxa; Carneiro, Ilona; Schellenberg, Joanna A; Polasek, Ozren; Weber, Martin; Bryce, Jennifer; Morris, Saul S; Black, Robert E; Campbell, Harry
Valid information about cause-specific child mortality and morbidity is an essential foundation for national and international health policy. We undertook a systematic review to investigate the geographical dispersion of and time trends in publication for policy-relevant information about children's health and to assess associations between the availability of reliable data and poverty. We identified data available on Jan 1, 2001, and published since 1980, for the major causes of morbidity and mortality in young children. Studies with relevant data were assessed against a set of inclusion criteria to identify those likely to provide unbiased estimates of the burden of childhood disease in the community. Only 308 information units from more than 17,000 papers identified were regarded as possible unbiased sources for estimates of childhood disease burden. The geographical distribution of these information units revealed a pattern of small well-researched populations surrounded by large areas with little available information. No reliable population-based data were identified from many of the world's poorest countries, which account for about a third of all deaths of children worldwide. The number of new studies diminished over the last 10 years investigated. The number of population-based studies yielding estimates of burden of childhood disease from less developed countries was low. The decreasing trend over time suggests reductions in research investment in this sphere. Data are especially sparse from the world's least developed countries with the highest child mortality. Guidelines are needed for the conduct of burden-of-disease studies together with an international research policy that gives increased emphasis to global equity and coverage so that knowledge can be generated from all regions of the world.
Geospatial Information Relevant to the Flood Protection Available on The Mainstream Web
Kliment, Tomáš; Gálová, Linda; Ďuračiová, Renata; Fencík, Róbert; Kliment, Marcel
2014-03-01
Flood protection is one of several disciplines in which geospatial data is a crucial component. Its management, processing and sharing form the foundation for efficient use; therefore, special attention is required in the development of effective, precise, standardized, and interoperable models for the discovery and publishing of data on the Web. This paper describes the design of a methodology to discover Open Geospatial Consortium (OGC) services on the Web and collect descriptive information, i.e., metadata, in a geocatalogue. A pilot implementation of the proposed methodology - a geocatalogue of geospatial information provided by OGC services discovered on Google (hereinafter "Geocatalogue") - was used to search for available resources relevant to the area of flood protection. The result is an analysis of the availability of resources discovered through their metadata collected from the OGC services (WMS, WFS, etc.) and the resources they provide (WMS layers, WFS objects, etc.) within the domain of flood protection.
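The OGC service discovery the paper describes rests on standard capability documents. As an illustrative sketch (the endpoint URL and the stripped-down XML below are hypothetical, and real WMS 1.3.0 capabilities documents use XML namespaces), one can build a GetCapabilities request and harvest layer names for a geocatalogue:

```python
from urllib.parse import urlencode
from xml.etree import ElementTree

def capabilities_url(endpoint):
    """Build a standard OGC WMS GetCapabilities request URL."""
    params = {"service": "WMS", "request": "GetCapabilities", "version": "1.3.0"}
    return endpoint + "?" + urlencode(params)

def layer_names(capabilities_xml):
    """Collect layer <Name> elements from a simplified, namespace-stripped
    WMS capabilities document."""
    root = ElementTree.fromstring(capabilities_xml)
    return [n.text for n in root.iter("Name") if n.text]

# Hypothetical endpoint and a minimal capabilities snippet for illustration.
url = capabilities_url("https://example.org/wms")
sample = ("<WMS_Capabilities><Capability><Layer><Name>flood_zones</Name>"
          "</Layer></Capability></WMS_Capabilities>")
```

In a real harvester the URL would be fetched over HTTP and the response parsed with proper namespace handling before the metadata is stored in the geocatalogue.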
Autism spectrum disorder updates – relevant information for early interventionists to consider
Allen-Meares, Paula; MacDonald, Megan; McGee, Kristin
2016-10-01
Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by deficits in social communication skills as well as repetitive, restricted or stereotyped behaviors (1). Early interventionists are often found at the forefront of assessment, evaluation and early intervention services for children with ASD. The role of an early intervention specialist may include assessing developmental history, providing group and individual counseling, working in partnership with families on home, school, and community environments, mobilizing school and community resources, and assisting in the development of positive early intervention strategies (2, 3). The commonality among these roles resides in the importance of providing up-to-date, relevant information to families and children. The purpose of this review is to provide pertinent up-to-date knowledge for early interventionists to help inform practice in working with individuals with ASD, including common behavioral models of intervention.
Selective theta-synchronization of choice-relevant information subserves goal-directed behavior
Womelsdorf, Thilo
2010-11-01
Theta activity reflects a state of rhythmic modulation of excitability at the level of single neuron membranes, within local neuronal groups and between distant nodes of a neuronal network. A wealth of evidence has shown that during theta states distant neuronal groups synchronize, forming networks of spatially confined neuronal clusters at specific time periods during task performance. Here, we show that a functional commonality of networks engaging in theta rhythmic states is that they emerge around decision points, reflecting rhythmic synchronization of choice-relevant information. Decision points characterize a point in time shortly before a subject chooses to select one action over another, i.e., when automatic behavior is terminated and the organism reactivates multiple sources of information to evaluate the evidence for available choices. As such, decision processes require the coordinated retrieval of choice-relevant information, including (i) the retrieval of stimulus evaluations (stimulus-reward associations) and reward expectancies about future outcomes, (ii) the retrieval of past and prospective memories (e.g., stimulus-stimulus associations), (iii) the reactivation of contextual task rule representations (e.g., stimulus-response mappings), along with (iv) an ongoing assessment of sensory evidence. An increasing number of studies reveal that retrieval of these multiple types of information proceeds within a few theta cycles through synchronized spiking activity across limbic, striatal and cortical processing nodes. The outlined evidence suggests that evolving spatially and temporally specific theta synchronization could serve as the critical correlate underlying the selection of a choice during goal-directed behavior.
Spirig, Christoph; Bhend, Jonas
2015-04-01
Climate information indices (CIIs) represent a way to communicate climate conditions to specific sectors and the public. As such, CIIs provide actionable information to stakeholders in an efficient way. Due to their non-linear nature, such CIIs can behave differently than the underlying variables, such as temperature. At the same time, CIIs do not involve impact models with different sources of uncertainties. As part of the EU project EUPORIAS (EUropean Provision Of Regional Impact Assessment on a Seasonal-to-decadal timescale) we have developed examples of seasonal forecasts of CIIs. We present forecasts and analyses of the skill of seasonal forecasts for CIIs that are relevant to a variety of economic sectors and a range of stakeholders: heating and cooling degree days as proxies for energy demand, various precipitation and drought-related measures relevant to agriculture and hydrology, a wild fire index, a climate-driven mortality index and wind-related indices tailored to renewable energy producers. Common to all examples is the finding of limited forecast skill over Europe, highlighting the challenge of providing added-value services to stakeholders operating in Europe. The reasons for the lack of forecast skill vary: often we find little skill in the underlying variable(s) precisely in those areas that are relevant for the CII; in other cases the nature of the CII is particularly demanding for predictions, as seen in the case of counting measures such as frost days or cool nights. On the other hand, several results suggest there may be some predictability in sub-regions for certain indices. Several of the exemplary analyses show potential for skillful forecasts and prospects for improvement by investing in post-processing. Furthermore, in those cases where CII forecasts showed skill similar to that of the underlying meteorological variables, the CII forecasts provide added value from a user perspective.
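Degree-day indices of the kind listed above are simple sums over daily mean temperatures. A minimal sketch, assuming the commonly used 18 °C base temperature (the project's exact definition may differ):

```python
def degree_days(daily_mean_temps, base=18.0):
    """Heating and cooling degree days from daily mean temperatures (deg C).

    HDD accumulates how far each day falls below the base temperature,
    CDD how far each day rises above it; both are common proxies for
    heating and cooling energy demand.
    """
    hdd = sum(max(0.0, base - t) for t in daily_mean_temps)
    cdd = sum(max(0.0, t - base) for t in daily_mean_temps)
    return hdd, cdd

# Example: a cool stretch followed by a warm spell.
temps = [10.0, 12.0, 15.0, 18.0, 21.0, 25.0, 30.0]
hdd, cdd = degree_days(temps)  # HDD = 17.0, CDD = 22.0
```

The non-linearity the abstract mentions is visible here: the `max(0, ...)` truncation means a forecast of the index can behave quite differently from a forecast of mean temperature.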
Interfaces between statistical analysis packages and the ESRI geographic information system
Masuoka, E.
1980-01-01
Interfaces between ESRI's geographic information system (GIS) data files and real valued data files written to facilitate statistical analysis and display of spatially referenced multivariable data are described. An example of data analysis which utilized the GIS and the statistical analysis system is presented to illustrate the utility of combining the analytic capability of a statistical package with the data management and display features of the GIS.
Social and Economic Statistics in the United Kingdom: A Review of Information Sources.
Tanenbaum, Eric; Nunez, Alfonso
1982-01-01
A new system is needed to monitor socioeconomic statistical data for the United Kingdom (UK). The current state of UK socioeconomic statistics, an assessment of methods used to communicate available information, and the resource requirements of a successful monitoring service are discussed. (AM)
Use of Statistical Information for Damage Assessment of Civil Engineering Structures
Kirkegaard, Poul Henning; Andersen, P.
This paper considers the problem of damage assessment of civil engineering structures using statistical information. The aim of the paper is to review how researchers recently have tried to solve the problem. It is pointed out that the problem consists of not only how to use the statistical...
Statistical analysis of geographic information with ArcView GIS and ArcGIS
Wong, David W. S; Lee, Jay
2005-01-01
... of its capabilities for spatial-quantitative synthesis. Now, David Wong and Jay Lee update their comprehensive handbook with Statistical Analysis of Geographic Information with ArcView GIS and ArcGIS...
Fisher information and quantum-classical field theory: classical statistics similarity
Syska, J. [Department of Field Theory and Particle Physics, Institute of Physics, University of Silesia, Uniwersytecka 4, 40-007 Katowice (Poland)
2007-07-15
A classical-statistics indication of the impossibility of deriving quantum mechanics from classical mechanics is proved. The formalism of the statistical Fisher information is used. Next, the Fisher information is proposed as a tool for the construction of a self-consistent field theory that joins quantum theory and classical field theory. (copyright 2007 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
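The statistical Fisher information invoked here can be estimated numerically as the variance of the score function. A small sketch for the mean parameter of a Gaussian model, where the exact value is known to be 1/σ² (the example is illustrative and unrelated to the field-theoretic construction itself):

```python
import numpy as np

def fisher_information_mc(theta, sigma, n_draws=200_000, seed=0):
    """Monte Carlo estimate of the Fisher information of a N(theta, sigma^2)
    model with respect to the mean: I(theta) = Var[score] = 1/sigma^2."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, sigma, size=n_draws)
    score = (x - theta) / sigma**2  # d/dtheta of the log-likelihood, per draw
    return float(score.var())

est = fisher_information_mc(theta=1.0, sigma=2.0)  # exact value: 1/4
```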
Kosteniuk, Julie G.; Morgan, Debra G.; D'Arcy, Carl K.
2013-01-01
Objectives: The research determined (1) the information sources that family physicians (FPs) most commonly use to update their general medical knowledge and to make specific clinical decisions, and (2) the information sources FPs found to be most physically accessible, intellectually accessible (easy to understand), reliable (trustworthy), and relevant to their needs. Methods: A cross-sectional postal survey of 792 FPs and locum tenens, in full-time or part-time medical practice, currently practicing or on leave of absence in the Canadian province of Saskatchewan was conducted during the period of January to April 2008. Results: Of 666 eligible physicians, 331 completed and returned surveys, resulting in a response rate of 49.7% (331/666). Medical textbooks and colleagues in the main patient care setting were the top 2 sources for the purpose of making specific clinical decisions. Medical textbooks were most frequently considered by FPs to be reliable (trustworthy), and colleagues in the main patient care setting were most physically accessible (easy to access). Conclusions: When making specific clinical decisions, FPs were most likely to use information from sources that they considered to be reliable and generally physically accessible, suggesting that FPs can best be supported by facilitating easy and convenient access to high-quality information. PMID:23405045
Argaud, Jean-Philippe
2015-01-01
The goal of this study is to determine the amount of information needed to obtain a relevant parameter optimisation by data assimilation for physical models in neutronic diffusion calculations, and to determine what information best reaches the optimum accuracy at the cheapest cost. To evaluate the quality of the optimisation, we study the covariance matrix that represents the accuracy of the optimised parameter. This matrix is a classical output of the data assimilation procedure, and it is the main information about the accuracy and sensitivity of the optimal parameter determination. From these studies, we present some results collected from the neutronic simulation of nuclear power plants. On the basis of the configurations studied, it has been shown that with data assimilation we can determine a global strategy to optimise the quality of the result with respect to the amount of information provided. The consequence of this is a cost reduction in terms of measurement and/or computing time with respect to the basic approach.
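For a linear-Gaussian assimilation step, the accuracy of the optimised parameter is summarised by the analysis-error covariance A = (B⁻¹ + HᵀR⁻¹H)⁻¹. A minimal numerical sketch of how added observations shrink that covariance (the matrices below are illustrative, not taken from the neutronic study):

```python
import numpy as np

def posterior_covariance(B, H, R):
    """Analysis-error covariance of a linear-Gaussian data assimilation step:
    A = (B^-1 + H^T R^-1 H)^-1, the standard BLUE/3D-Var result."""
    Binv = np.linalg.inv(B)
    Rinv = np.linalg.inv(R)
    return np.linalg.inv(Binv + H.T @ Rinv @ H)

B = np.diag([1.0, 1.0])      # prior (background) covariance of two parameters
H1 = np.array([[1.0, 0.0]])  # one observation of the first parameter only
R = np.array([[0.5]])        # observation-error covariance
A1 = posterior_covariance(B, H1, R)
# The observed parameter's variance drops (1 -> 1/3); the unobserved one stays at 1.
```

Comparing such covariances across candidate measurement sets is one way to trade off information content against measurement cost, in the spirit of the study.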
ZHAO Zhen-shan; XU Guo-zhi
2007-01-01
In real multiple-input multiple-output (MIMO) systems, perfect channel state information (CSI) may be costly or impossible to acquire. However, the channel statistical information can be considered relatively stationary during long-term transmission. The statistical information can be obtained at the receiver and fed back to the transmitter, and does not require frequent updates. By exploiting channel mean and covariance information at the transmitter simultaneously, this paper investigates the optimal transmission strategy for spatially correlated MIMO channels. An upper bound on the ergodic capacity is derived and taken as the performance criterion. Simulation results are also given to show the performance improvement of the optimal transmission strategy.
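The ergodic capacity used as the performance criterion can be approximated by Monte Carlo simulation. A baseline sketch with equal power allocation over i.i.d. Rayleigh fading (the paper's optimal strategy, which exploits mean and covariance information, would instead shape the input covariance; this block only shows the capacity computation itself):

```python
import numpy as np

def ergodic_capacity(nt, nr, snr, trials=2000, seed=1):
    """Monte Carlo ergodic capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO
    channel with equal power allocation: E[log2 det(I + snr/nt * H H^H)]."""
    rng = np.random.default_rng(seed)
    caps = []
    for _ in range(trials):
        # Complex Gaussian channel matrix, unit-variance entries.
        H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
        M = np.eye(nr) + (snr / nt) * (H @ H.conj().T)
        caps.append(np.log2(np.linalg.det(M).real))
    return float(np.mean(caps))

c22 = ergodic_capacity(2, 2, snr=10.0)  # 2x2 array at linear SNR 10 (10 dB)
c44 = ergodic_capacity(4, 4, snr=10.0)  # 4x4 array, roughly double the capacity
```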
Bram Pynoo
2013-06-01
In view of the tremendous potential benefits of clinical information systems (CIS) for the quality of patient care, it is hard to understand why not every CIS is embraced by its targeted users, the physicians. The aim of this study is to propose a framework for assessing hospital physicians' CIS acceptance that can serve as guidance for future research in this area. To this end, a review of the relevant literature was performed in the ISI Web of Science database. Eleven studies were retained from an initial dataset of 797 articles. Results show that, just as in business settings, there are four core groups of variables that influence physicians' acceptance of a CIS: its usefulness and ease of use, social norms, and factors in the working environment that facilitate use of the CIS (such as providing computers/workstations, compatibility between the new and existing system, ...). We also identified some additional variables as predictors of CIS acceptance.
Changing Statistical Significance with the Amount of Information: The Adaptive α Significance Level.
Pérez, María-Eglée; Pericchi, Luis Raúl
2014-02-01
We put forward an adaptive alpha which changes with the amount of sample information. This calibration may be interpreted as a Bayes/non-Bayes compromise, and leads to statistical consistency. The calibration can also be used to produce confidence intervals whose size takes into consideration the amount of observed information.
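To illustrate the qualitative behaviour of a significance level that tightens as sample information grows, here is a toy rule; it is NOT the authors' Bayes/non-Bayes calibration, only a hypothetical stand-in with the same monotone behaviour:

```python
import math

def adaptive_alpha(n, alpha0=0.05, n0=100):
    """Illustrative adaptive significance level that shrinks with sample size:
    alpha_n = alpha0 * sqrt(n0 / n). This mimics the qualitative idea
    (stricter thresholds with more information, which avoids the fixed-alpha
    inconsistency of rejecting tiny effects at huge n) but is NOT the exact
    calibration proposed in the paper."""
    return alpha0 * math.sqrt(n0 / n)

a_small = adaptive_alpha(100)     # 0.05 at the reference sample size
a_large = adaptive_alpha(10_000)  # 0.005 with 100x the data
```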
Chocholik, Joan K.; Bouchard, Susan E.; Tan, Joseph K. H.; Ostrow, David N.
1999-01-01
Objectives: To determine the relevant weighted goals and criteria for use in the selection of an automated patient care information system (PCIS) using a modified Delphi technique to achieve consensus.
Gerasimenko S.
2013-01-01
This paper considers questions connected with facilitating people's access to information about the standard of living. In particular, it suggests using the information of the System of National Accounts together with statistical methods. It is stressed that information about the standard of living should be amplified with characteristics of the effectiveness of the management of social-economic development.
Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory
2016-05-12
Three areas were investigated. First, new memory models of discrete-time and finitely-valued information sources are ... computational and storage complexities are proved. Second, a statistical method is developed to estimate the memory depth of discrete-time and continuously ...
Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.
2004-01-01
Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research. Des
Mbondji, Peter Ebongue; Kebede, Derege; Soumbey-Alley, Edoh William; Zielinski, Chris; Kouvividila, Wenceslas; Lusamba-Dikassa, Paul-Samson
2014-05-01
To identify key data sources of health information and describe their availability in countries of the World Health Organization (WHO) African Region. An analytical review on the availability and quality of health information data sources in countries; from experience, observations, literature and contributions from countries. Forty-six Member States of the WHO African Region. No participants. The state of data sources, including censuses, surveys, vital registration and health care facility-based sources. In almost all countries of the Region, there is a heavy reliance on household surveys for most indicators, with more than 121 household surveys having been conducted in the Region since 2000. Few countries have civil registration systems that permit adequate and regular tracking of mortality and causes of death. Demographic surveillance sites function in several countries, but the data generated are not integrated into the national health information system because of concerns about representativeness. Health management information systems generate considerable data, but the information is rarely used because of concerns about bias, quality and timeliness. To date, 43 countries in the Region have initiated Integrated Disease Surveillance and Response. A multitude of data sources are used to track progress towards health-related goals in the Region, with heavy reliance on household surveys for most indicators. Countries need to develop comprehensive national plans for health information that address the full range of data needs and data sources and that include provision for building national capacities for data generation, analysis, dissemination and use. © The Royal Society of Medicine.
Thissen, U.; Wopereis, S.; van Berg, S.A.A.; Bobeldijk, I.; Kleemann, R.; Kooistra, T.; van Dijk, K.W.; van Ommen, B.; Smilde, A.K.
2009-01-01
Background: In the fields of life sciences, so-called designed studies are used for studying complex biological systems. The data derived from these studies comply with a study design aimed at generating relevant information while diminishing unwanted variation (noise). Knowledge about the study
Radian Belu; Darko Koracin
2013-01-01
The main objective of the study was to investigate spatial and temporal characteristics of the wind speed and direction in complex terrain that are relevant to wind energy assessment and development, as well as to wind energy system operation, management, and grid integration. Wind data from five tall meteorological towers located in Western Nevada, USA, operated from August 2003 to March 2008, were used in the analysis. The multiannual average wind speeds did not show a significantly increasing trend ...
Sheridan, Heather
2014-08-01
The present study explored the ability of expert and novice chess players to rapidly distinguish between regions of a chessboard that were relevant to the best move on the board, and regions of the board that were irrelevant. Accordingly, we monitored the eye movements of expert and novice chess players while they selected White's best move for a variety of chess problems. To manipulate relevancy, we constructed two different versions of each chess problem in the experiment, and we counterbalanced these versions across participants. These two versions of each problem were identical except that a single piece was changed from a bishop to a knight. This subtle change reversed the relevancy map of the board, such that regions that were relevant in one version of the board were now irrelevant (and vice versa). Using this paradigm, we demonstrated that both the experts and novices spent more time fixating the relevant relative to the irrelevant regions of the board. However, the experts were faster at detecting relevant information than the novices, as shown by the finding that experts (but not novices) were able to distinguish between relevant and irrelevant information during the early part of the trial. These findings further demonstrate the domain-related perceptual processing advantage of chess experts, using an experimental paradigm that allowed us to manipulate relevancy under tightly controlled conditions.
Kumaran, Dharshan; Banino, Andrea; Blundell, Charles; Hassabis, Demis; Dayan, Peter
2016-12-07
Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one's own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus. In contrast, we observed domain-general coding of rank in the amygdala and hippocampus, even when the task did not require it. Our findings reveal the computations underlying a core aspect of social cognition and provide new evidence that self-relevant information may indeed be afforded a unique representational status in the brain. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
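The chess-style rating baseline the authors compare against can be sketched with the classic Elo update (the standard textbook formula, not the paper's exact reinforcement learning model):

```python
def elo_expected(r_a, r_b):
    """Expected score of player A against player B under the Elo rating model:
    a logistic function of the rating difference, scaled so that a 400-point
    gap corresponds to 10:1 odds."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32.0):
    """One rating update after an observed outcome (1 win, 0 loss, 0.5 draw).
    Each player moves by k times the prediction error, so total rating is
    conserved."""
    e_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

r1, r2 = elo_update(1400.0, 1400.0, 1.0)  # equal players, A wins: 1416 vs 1384
```

A Bayesian tracker of "power" would instead maintain a full posterior over each individual's latent strength rather than a point estimate like this.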
Angela Brand
2006-12-01
Healthcare delivery systems are facing fundamental challenges. New ways of organising these systems, based on the different needs of stakeholders, are required to meet these challenges. While medicine is currently undergoing remarkable developments from its morphological and phenotype orientation to a molecular and genotype orientation, promoting the importance of prognosis and prediction, the discussion about the relevance of genome-based information and technologies for the health care system as a whole, and especially for public health, is still in its infancy. The following article discusses the relevance of genome-based information and technologies for individual health information management, health policy development and effective health services.
Geometrodynamics of Information on Curved Statistical Manifolds and its Applications to Chaos
Cafaro, C
2008-01-01
A novel information-geometrodynamical approach to chaotic dynamics (IGAC) on curved statistical manifolds based on Entropic Dynamics (ED) is presented and a new definition of information geometrodynamical entropy (IGE) as a measure of chaoticity is proposed. The general classical formalism is illustrated in a relatively simple example. It is shown that the hyperbolicity of a non-maximally symmetric 6N-dimensional statistical manifold M_{s} underlying an ED Gaussian model describing an arbitrary system of 3N degrees of freedom leads to linear information-geometric entropy growth and to exponential divergence of the Jacobi vector field intensity, quantum and classical features of chaos respectively. An information-geometric analogue of the Zurek-Paz quantum chaos criterion in the classical reversible limit is proposed. This analogy is illustrated applying the IGAC to a set of n-uncoupled three-dimensional anisotropic inverted harmonic oscillators characterized by a Ohmic distributed frequency spectrum.
Omole Moses Kayode
2012-10-01
This prospective study was carried out in a state hospital, Sokemu, Abeokuta, to determine the relevance of a Drug Information Centre (DIC) to the practice of health care professionals in the hospital. A total of 120 questionnaires were administered to the hospital's health care professionals. The total number of respondents was 107, corresponding to 89.2% of the total population, with years of experience in service ranging from 5 to 15 years. Eighty-five (79.4%) believed that the Drug Information Centre was relevant to their professional practice, 12 (11.2%) believed that it was not relevant, while 10 (9.4%) were not sure of the relevance of the DIC to their professional practice. Forty-three (19.2%) respondents required the latest information on drugs, 25 (11.2%) required information on side effects, 29 (12.9%) on dosage form, 27 (12.1%) on dosage regimen, 12 (5.4%) on indications, 27 (12.1%) on contra-indications, 19 (8.5%) on brand names, and 21 (9.4%) required drug information on all the listed areas. Forty-seven (43.9%) claimed that they obtained drug information from relevant textbooks, 74 (69.2%) from colleagues, 16 (15.0%) from the internet, 23 (21.5%) from journals, and the largest number, 102 (95.3%), claimed they obtained information from the pharmacists who are medical representatives of pharmaceutical companies as well as from hospital pharmacists. The Drug Information Centre was found to be relevant to the practice of health care professionals at a state hospital, Sokemu, in Abeokuta, Nigeria. Hypothesis testing showed a significant relationship (p < 0.05).
Bengali-English Relevant Cross Lingual Information Access Using Finite Automata
Banerjee, Avishek; Bhattacharyya, Swapan; Hazra, Simanta; Mondal, Shatabdi
2010-10-01
CLIR techniques search unrestricted texts and typically extract terms and relationships from bilingual electronic dictionaries or bilingual text collections, and use them to translate query and/or document representations into a compatible set of representations with a common feature set. In this paper, we focus on a dictionary-based approach, using a bilingual data dictionary in combination with statistics-based methods to avoid the problem of ambiguity; the development of the human-computer interface aspects of NLP (Natural Language Processing) is also an aim of this paper. Intelligent web search in a regional language like Bengali depends upon two major aspects: CLIA (Cross-Language Information Access) and NLP. In our previous work with IIT Kharagpur, we developed content-based CLIA in which content-based searching is trained on Bengali corpora with the help of a Bengali data dictionary. Here we introduce intelligent search, which aims to recognize the sense of the meaning of a sentence and offers a better real-life approach towards human-computer interaction.
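The dictionary-based step can be sketched as a naive OR-expansion of each query term, leaving ambiguity resolution to later statistical ranking; the Bengali-English entries below are hypothetical, romanised for illustration:

```python
def translate_query(query, bilingual_dict):
    """Naive dictionary-based query translation: each source-language term is
    replaced by the list of all its dictionary translations (an OR-expansion).
    Out-of-vocabulary terms are passed through unchanged, as is common for
    named entities in CLIR."""
    translated = []
    for term in query.lower().split():
        translated.append(bilingual_dict.get(term, [term]))
    return translated

# Hypothetical bilingual dictionary entries (romanised Bengali -> English).
bn_en = {"boi": ["book"], "jol": ["water", "rain"]}
q = translate_query("boi jol", bn_en)  # [["book"], ["water", "rain"]]
```

A statistics-based disambiguation step would then score each combination of translations, e.g. by co-occurrence counts in a target-language corpus.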
Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science
Ju, Boryung; Jin, Tao
2013-01-01
Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…
Knowledge-Sharing Intention among Information Professionals in Nigeria: A Statistical Analysis
Tella, Adeyinka
2016-01-01
In this study, the researcher administered a survey and developed and tested a statistical model to examine the factors that determine the intention of information professionals in Nigeria to share knowledge with their colleagues. The result revealed correlations between the overall score for intending to share knowledge and other…
Márcio André Veras Machado
2015-04-01
The usefulness of financial statements depends, fundamentally, on the degree of relevance of the information they disclose to users. Thus, studies that measure the relevance of accounting information to the users of financial statements are of some importance. One line of research within this subject is ascertaining the relevance and importance of accounting information for the capital markets: if a particular item of accounting information is minimally reflected in the price of a share, it is because this information has relevance, at least at a certain level of significance, for investors and analysts of the capital markets. This study aims to analyze the relevance, in the Brazilian capital markets, of the information content of the Value Added Statement (VAS - referred to in Brazil as the Demonstração do Valor Adicionado, or DVA). It analyzed the ratio between stock price and wealth created per share (WCPS), using linear regressions, for the period 2005-2011, for non-financial listed companies included in Melhores & Maiores ('Biggest & Best', an annual listing published by Exame Magazine in Brazil). As a secondary objective, this article seeks to establish whether WCPS represents a better indication of a company's result than net profit per share (in this study, referred to as NPPS). The empirical evidence that was found supports the concept that the VAS has relevant information content, because it shows a capacity to explain variation in the share price of the companies studied. Additionally, the relationship between WCPS and the stock price was shown to be significant, even after the inclusion of the control variables stockholders' equity per share (abbreviated in this study to SEPS) and NPPS. Finally, the evidence found indicates that the market reacts more to WCPS than to NPPS. Thus, the results obtained give some indication that, for the Brazilian capital markets, WCPS may be a better proxy
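The price-level regressions behind such value-relevance tests are ordinary least squares fits of share price on per-share accounting variables. A synthetic sketch (the data are simulated, not the study's sample, and the single-regressor model omits the SEPS and NPPS controls):

```python
import numpy as np

def ols_fit(x, y):
    """Ordinary least squares fit of y = a + b*x, returning (a, b); the kind
    of price-level regression used in value-relevance studies."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(coef[0]), float(coef[1])

# Simulated example: price responds to wealth created per share (WCPS).
rng = np.random.default_rng(42)
wcps = rng.uniform(1.0, 10.0, size=200)
price = 2.0 + 1.5 * wcps + rng.normal(0.0, 0.1, size=200)
a, b = ols_fit(wcps, price)  # recovers a slope near 1.5
```

In the study's framing, a significantly positive slope on WCPS (after controls) is what "relevant information content" means empirically.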
Structural analysis of health-relevant policy-making information exchange networks in Canada.
Contandriopoulos, Damien; Benoît, François; Bryant-Lukosius, Denise; Carrier, Annie; Carter, Nancy; Deber, Raisa; Duhoux, Arnaud; Greenhalgh, Trisha; Larouche, Catherine; Leclerc, Bernard-Simon; Levy, Adrian; Martin-Misener, Ruth; Maximova, Katerina; McGrail, Kimberlyn; Nykiforuk, Candace; Roos, Noralou; Schwartz, Robert; Valente, Thomas W; Wong, Sabrina; Lindquist, Evert; Pullen, Carolyn; Lardeux, Anne; Perroux, Melanie
2017-09-20
Health systems worldwide struggle to identify, adopt, and implement, in a timely and system-wide manner, the best evidence-informed policy-level practices. Yet there is still only limited evidence about individual and institutional best practices for fostering the use of scientific evidence in policy-making processes. The present project is the first national-level attempt to (1) map and structurally analyze, quantitatively, health-relevant policy-making networks that connect evidence production, synthesis, interpretation, and use; (2) qualitatively investigate the interaction patterns of a subsample of actors with high centrality metrics within these networks to develop an in-depth understanding of evidence circulation processes; and (3) combine these findings in order to assess a policy network's "absorptive capacity" regarding scientific evidence and integrate them into a conceptually sound and empirically grounded framework. The project is divided into two research components. The first component is based on quantitative analysis of the ties (relationships) that link nodes (participants) in a network. Network data will be collected through a multi-step snowball sampling strategy. Data will be analyzed structurally using social network mapping and analysis methods. The second component is based on qualitative interviews with a subsample of the Web survey participants having central, bridging, or atypical positions in the network. Interviews will focus on the process through which evidence circulates and enters practice. Results from both components will then be integrated through an assessment of the network's and subnetworks' effectiveness in identifying, capturing, interpreting, sharing, reframing, and recodifying scientific evidence in policy-making processes. Knowledge developed from this project has the potential both to strengthen the scientific understanding of how policy-level knowledge transfer and exchange functions and to provide significantly improved advice
Statistical Mechanics and Information-Theoretic Perspectives on Complexity in the Earth System
Konstantinos Eftaxias
2013-11-01
This review provides a summary of methods originating in (non-equilibrium) statistical mechanics and information theory, which have recently found successful applications to quantitatively studying complexity in various components of the complex system Earth. Specifically, we discuss two classes of methods: (i) entropies of different kinds (e.g., on the one hand the classical Shannon and Rényi entropies, as well as the non-extensive Tsallis entropy based on symbolic dynamics techniques and, on the other hand, approximate entropy, sample entropy and fuzzy entropy); and (ii) measures of statistical interdependence and causality (e.g., mutual information and generalizations thereof, transfer entropy, momentary information transfer). We review a number of applications and case studies utilizing the above-mentioned methodological approaches for studying contemporary problems in some exemplary fields of the Earth sciences, highlighting the potentials of different techniques.
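Two of the entropy measures this review names can be illustrated on a symbolized time series. The sketch below is ours, not the review's code, and the toy symbol sequence is invented; it shows the classical Shannon entropy and the non-extensive Tsallis entropy with index q.

```python
# Shannon entropy H = -sum p*log2(p) and Tsallis entropy
# S_q = (1 - sum p^q) / (q - 1), computed from symbol frequencies
# of a (toy) symbolic-dynamics sequence.
import math
from collections import Counter

def shannon_entropy(symbols):
    n = len(symbols)
    probs = [c / n for c in Counter(symbols).values()]
    return -sum(p * math.log2(p) for p in probs)

def tsallis_entropy(symbols, q=2.0):
    n = len(symbols)
    probs = [c / n for c in Counter(symbols).values()]
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)

seq = "ABABABABCCABAB"  # invented symbolized record
print(shannon_entropy(seq), tsallis_entropy(seq))
```

As q approaches 1, the Tsallis entropy recovers the Shannon entropy (in natural-log units), which is why it is described as a non-extensive generalization.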
Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics
Wolpert, David H.
2005-01-01
A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.
Williams, Amanda
2014-01-01
The purpose of the current research was to investigate the relationship between preference for numerical information (PNI), math self-concept, and six types of statistics anxiety in an attempt to establish support for the nomological validity of the PNI. Correlations indicate that four types of statistics anxiety were strongly related to PNI, and…
Axel Osses
2013-10-01
In this work, we present combined statistical indexes for evaluating air quality monitoring networks based on concepts derived from information theory and the Kullback–Leibler divergence. More precisely, we introduce: (1) the standard measure of complementary mutual information, or 'specificity' index; (2) a new measure of information gain, or 'representativity' index; (3) the information gaps associated with the evolution of a network; and (4) the normalised information distance used in clustering analysis. All these information concepts are illustrated by applying them to 14 yr of data collected by the air quality monitoring network in Santiago de Chile (33.5° S, 70.5° W, 500 m a.s.l.). We find that downtown stations, located in a relatively flat area of the Santiago basin, generally show high 'representativity' and low 'specificity', whereas the contrary is found for a station located in a canyon to the east of the basin, consistent with known emission and circulation patterns of Santiago. We also show interesting applications of information gain to the analysis of the evolution of a network, where the choice of background information is also discussed, and of mutual information distance to the classification of stations. Our analyses show that information measures such as those presented here should, of course, be used in a complementary way when addressing the analysis of an air quality network for planning and evaluation purposes.
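The mutual-information ingredient behind such network indexes can be sketched directly from the identity I(X;Y) = H(X) + H(Y) - H(X,Y). This is an illustrative sketch only: the two "station" records below are invented, discretized into low/mid/high bins, and the code is not taken from the paper.

```python
# Mutual information between two monitoring stations' discretized
# concentration records, via I(X;Y) = H(X) + H(Y) - H(X,Y).
import math
from collections import Counter

def entropy(seq):
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in Counter(seq).values())

def mutual_information(x, y):
    # Joint entropy is the entropy of the paired (x_t, y_t) symbols.
    return entropy(x) + entropy(y) - entropy(list(zip(x, y)))

# Invented records: pollution levels binned as low/mid/high at two stations.
st_a = ['L', 'L', 'M', 'H', 'H', 'M', 'L', 'M', 'H', 'L', 'M', 'H']
st_b = ['L', 'M', 'M', 'H', 'H', 'M', 'L', 'M', 'H', 'L', 'L', 'H']
print(round(mutual_information(st_a, st_b), 3))
```

A station whose record shares little mutual information with the rest of the network carries complementary ('specific') information, which is the intuition behind index (1) above.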
Providing Decision-Relevant Information for a State Climate Change Action Plan
Wake, C.; Frades, M.; Hurtt, G. C.; Magnusson, M.; Gittell, R.; Skoglund, C.; Morin, J.
2008-12-01
Carbon Solutions New England (CSNE), a public-private partnership formed to promote collective action to achieve a low carbon society, has been working with the Governor appointed New Hampshire Climate Change Policy Task Force (NHCCTF) to support the development of a state Climate Change Action Plan. CSNE's role has been to quantify the potential carbon emissions reduction, implementation costs, and cost savings at three distinct time periods (2012, 2025, 2050) for a range of strategies identified by the Task Force. These strategies were developed for several sectors (transportation and land use, electricity generation and use, building energy use, and agriculture, forestry, and waste). New Hampshire's existing and projected economic and population growth are well above the regional average, creating additional challenges for the state to meet regional emission reduction targets. However, by pursuing an ambitious suite of renewable energy and energy efficiency strategies, New Hampshire may be able to continue growing while reducing emissions at a rate close to 3% per year up to 2025. This suite includes efficiency improvements in new and existing buildings, a renewable portfolio standard for electricity generation, avoiding forested land conversion, fuel economy gains in new vehicles, and a reduction in vehicle miles traveled. Most (over 80%) of these emission reduction strategies are projected to provide net economic savings in 2025. A collaborative and iterative process was developed among the key partners in the project. The foundation for the project's success included: a diverse analysis team with leadership that was committed to the project, an open source analysis approach, weekly meetings and frequent communication among the partners, interim reporting of analysis, and an established and trusting relationship among the partners, in part due to collaboration on previous projects. To develop decision-relevant information for the Task Force, CSNE addressed
Nadia Tahernia; Morteza Khodabin; Noorbakhsh Mirzaei; Morteza Eskandari-Ghadi
2012-04-01
By analyzing the seismic catalogue of Iran, the probability distributions of interoccurrence times of earthquakes were investigated for different seismotectonic settings. Several probability distributions were applied to data from major seismotectonic provinces in different cut-off magnitudes and the distribution parameters were determined through the method of maximum likelihood. With the help of goodness-of-fit tests (AIC and BIC criteria based on information theory, Kolmogorov–Smirnov test) and the coefficient of determination, we have found that the gamma statistics and generalized normal statistics coexist in interoccurrence time statistics. Our results imply that a transition from a generalized normal regime to a gamma regime occurs if the threshold magnitude in certain seismotectonic regions (Alborz–Azarbayejan, Zagros, and Central-East Iran) is changed.
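The model-comparison workflow in this abstract (fit candidate distributions, then rank them with information criteria) can be sketched compactly. This is our illustration, not the authors' code: the waiting times are invented, the gamma parameters use a method-of-moments shortcut in place of full maximum likelihood, and only the exponential is compared against the gamma (the generalized normal is omitted for brevity).

```python
# Compare an exponential and a gamma model for earthquake interoccurrence
# times using AIC = 2k - 2*lnL (lower is better). Data are invented.
import math

times = [2.1, 0.7, 3.5, 1.2, 5.9, 0.4, 2.8, 1.9, 4.4, 0.9,
         3.1, 2.5, 1.4, 6.2, 0.6, 2.2, 3.8, 1.1, 2.9, 1.7]
n = len(times)
mean = sum(times) / n
var = sum((t - mean) ** 2 for t in times) / n

# Exponential: MLE of the rate is 1/mean; one free parameter.
lam = 1.0 / mean
ll_exp = sum(math.log(lam) - lam * t for t in times)
aic_exp = 2 * 1 - 2 * ll_exp

# Gamma: method-of-moments shape k and scale theta; two free parameters.
k, theta = mean ** 2 / var, var / mean
ll_gam = sum((k - 1) * math.log(t) - t / theta
             - k * math.log(theta) - math.lgamma(k) for t in times)
aic_gam = 2 * 2 - 2 * ll_gam

print(round(aic_exp, 2), round(aic_gam, 2))  # lower AIC = preferred model
```

BIC works the same way with the penalty 2k replaced by k*ln(n), which is the second criterion the authors report.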
The Problems in Chinese Government Financial Information Disclosure and Relevant Proposals
Zhe Wang
2016-03-01
Government financial information is an important part of government information, fully reporting the government's operational efficiency and how tax revenues are used. This paper analyses the problems in Chinese government financial information disclosure and the necessity of reform in detail. It also provides several proposals for improving the Chinese governmental financial report and the financial information disclosure system.
Xuemin Zhuang
2015-05-01
Purpose: The purpose of this article is to study whether a natural relationship exists between fair value and a corporation's external market. A series of special phenomena in the application of fair value aroused our research interest; we present evidence on how competition affects the value relevance of fair value information. Design/methodology/approach: This study takes gains and losses from fair value changes and calculates the ratio DFVPSit as a proxy variable for fair value. In order to examine the mutual influence between the degree of industry competition and the value relevance of fair value, while reducing the impact of multicollinearity, we built a regression model around the hypothesis that, other conditions being equal, fair value information has greater value relevance when the degree of industry competition is greater. To test the hypothesis, we compare the absolute values of the DFVPSit coefficients to judge the value relevance of fair value information: the greater the absolute value, the stronger the relation between changes in fair value gains and losses per share and stock prices. Findings: The higher the degree of competition in an industry, the more value relevant the fair value information. There is also evidence that fair value information often presents a negative correlation with the stock price. Originality/value: The main contribution of the article is to show that we need not only to formulate and implement high-quality fair value accounting standards suited both to national conditions and to international practice, but also to further improve companies' external governance mechanisms to promote the value relevance of fair value information.
Hamer, Harold A.; Mayer, John P.; Huston, Wilber B.
1961-01-01
Results of a statistical analysis of horizontal-tail loads on a fighter airplane are presented. The data were obtained from a number of operational training missions with flight at altitudes up to about 50,000 feet and at Mach numbers up to 1.22. The analysis was performed to determine the feasibility of calculating horizontal-tail load from data on the flight conditions and airplane motions. In the analysis the calculated loads are compared with the measured loads for the different types of missions performed. The loads were calculated by two methods: a direct approach and a Monte Carlo technique. In the direct method, a time history of tail load is calculated from time-history measurements of the flight quantities. In the Monte Carlo method, frequencies of occurrence of tail loads of given magnitudes are derived from statistical information on the flight quantities; this method could be useful for extending loads information for the design of prospective airplanes. The procedures used and some of the problems associated with the data analysis are discussed. The results indicate that the accuracy of loads, regardless of the method used for calculation, is largely dependent on the knowledge of the pertinent airplane aerodynamic characteristics and center-of-gravity location. In addition, reliable Monte Carlo results require an adequate sample of statistical data and a knowledge of the more important statistical dependencies between the various flight conditions and airplane motions.
Examples of the Application of Nonparametric Information Geometry to Statistical Physics
Giovanni Pistone
2013-09-01
We review a nonparametric version of Amari’s information geometry in which the set of positive probability densities on a given sample space is endowed with an atlas of charts to form a differentiable manifold modeled on Orlicz Banach spaces. This nonparametric setting is used to discuss the setting of typical problems in machine learning and statistical physics, such as black-box optimization, Kullback-Leibler divergence, Boltzmann-Gibbs entropy and the Boltzmann equation.
RaptorX: exploiting structure information for protein alignment by statistical inference
Peng, Jian; Xu, Jinbo
2011-01-01
This paper presents RaptorX, a statistical method for template-based protein modeling that improves alignment accuracy by exploiting structural information in a single or multiple templates. RaptorX consists of three major components: single-template threading, alignment quality prediction and multiple-template threading. This paper summarizes the methods employed by RaptorX and presents its CASP9 result analysis, aiming to identify major bottlenecks with RaptorX and template-based modeling a...
Krishnan, Ananthanarayan; Gandour, Jackson T
2014-12-01
Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long
77 FR 42339 - Improving Contracting Officers' Access to Relevant Integrity Information
2012-07-18
... information about contractor business ethics in the Federal Awardee Performance and Integrity Information System (FAPIIS). FAPIIS is designed to facilitate the Government's ability to evaluate the business ethics of prospective contractors and protect the Government from awarding contracts to contractors...
Access to Attitude-Relevant Information in Memory as a Determinant of Attitude-Behavior Consistency.
Kallgren, Carl A.; Wood, Wendy
Recent reserach has attempted to determine systematically how attitudes influence behavior. This research examined whether access to attitude-relevant beliefs and prior experiences would mediate the relation between attitudes and behavior. Subjects were 49 college students with a mean age of 27 who did not live with their parents or in…
Friedman, Alon
2016-01-01
Statistics for Library and Information Services, written for non-statisticians, provides logical, user-friendly, and step-by-step instructions to make statistics more accessible for students and professionals in the field of Information Science. It emphasizes concepts of statistical theory and data collection methodologies, but also extends to the topics of visualization creation and display, so that the reader will be able to better conduct statistical analysis and communicate his/her findings. The book is tailored for information science students and professionals. It has specific examples of dataset sets, scripts, design modules, data repositories, homework assignments, and a glossary lexicon that matches the field of Information Science. The textbook provides a visual road map that is customized specifically for Information Science instructors, students, and professionals regarding statistics and visualization. Each chapter in the book includes full-color illustrations on how to use R for the statistical ...
Köpetz, Catalina; Kruglanski, Arie W
2008-05-01
Three studies investigated the process by which categorical and individuating information impacts impression formation. The authors assumed that (a) both types of information are functionally equivalent in serving as evidence for interpersonal judgments and (b) their use is determined by their accessibility and perceived applicability to the impression's target. The first study constituted an extended replication of Pavelchak's experiment, and it showed that its results, initially interpreted to suggest the primacy in impression formation of category over trait information, may have been prompted by differential accessibility of the category versus trait information in some experimental conditions of the original research. Studies 2 and 3 additionally explored the role of informational accessibility manipulated in different ways. Study 3 demonstrated also that the effect of accessibility is qualified by the information's apparent relevance to the judgmental target.
Wegwarth, Odette; Gigerenzer, Gerd
2018-01-01
other misleading statistics, motivated by conflicts of interest and defensive medicine that do not promote informed physicians and patients. What can be done? Every medical school should teach its students how to understand evidence in general and health statistics in particular. To cultivate informed patients, elementary and high schools should start teaching the mathematics of uncertainty-statistical thinking. Guidelines about complete and transparent reporting in journals, brochures, and the media need to be better enforced, and laws need to be changed in order to protect patients and doctors alike against the practice of defensive medicine instead of encouraging it. A critical mass of informed citizens will not resolve all healthcare problems, but it can constitute a major triggering factor for better care.
Statistics Information Management System (智能统计分析系统)
曹占峰; 刘海涛; 张启伟
2015-01-01
Statistics work is fundamental to enterprise informatization, and as the requirements placed on it grow, the technical modernization of statistics faces major challenges. Taking a unified data platform as its core, this paper addresses the problems in the electric power statistics process that fail to satisfy client needs. A statistics information management system is established that combines business intelligence with data statistics functions. The system unifies data storage and standardizes indicator management; through its user interface, users can perform report generation, data queries, and application analysis on enterprise business data. The system has been put into service for statistics work, reducing workload and improving work efficiency and quality, which validates the effectiveness of this research.
Cafaro, C
2008-01-01
In this paper, we review our novel information geometrodynamical approach to chaos (IGAC) on curved statistical manifolds and we emphasize the usefulness of our information-geometrodynamical entropy (IGE) as an indicator of chaoticity in a simple application. Furthermore, knowing that integrable and chaotic quantum antiferromagnetic Ising chains are characterized by asymptotic logarithmic and linear growths of their operator space entanglement entropies, respectively, we apply our IGAC to present an alternative characterization of such systems. Remarkably, we show that in the former case the IGE exhibits asymptotic logarithmic growth while in the latter case the IGE exhibits asymptotic linear growth. At this stage of its development, IGAC remains an ambitious unifying information-geometric theoretical construct for the study of chaotic dynamics with several unsolved problems. However, based on our recent findings, we believe it could provide an interesting, innovative and potentially powerful way to study and...
Cafaro, C.; Ali, S. A.
2008-12-01
In this paper, we review our novel information-geometrodynamical approach to chaos (IGAC) on curved statistical manifolds and we emphasize the usefulness of our information-geometrodynamical entropy (IGE) as an indicator of chaoticity in a simple application. Furthermore, knowing that integrable and chaotic quantum antiferromagnetic Ising chains are characterized by asymptotic logarithmic and linear growths of their operator space entanglement entropies, respectively, we apply our IGAC to present an alternative characterization of such systems. Remarkably, we show that in the former case the IGE exhibits asymptotic logarithmic growth while in the latter case the IGE exhibits asymptotic linear growth. At this stage of its development, IGAC remains an ambitious unifying information-geometric theoretical construct for the study of chaotic dynamics with several unsolved problems. However, based on our recent findings, we believe that it could provide an interesting, innovative and potentially powerful way to study and understand the very important and challenging problems of classical and quantum chaos.
On the Estimation and Use of Statistical Modelling in Information Retrieval
Petersen, Casper
Automatic text processing often relies on assumptions about the distribution of some property (such as term frequency) in the data being processed. In information retrieval (IR), such assumptions may be attributed to (i) the absence of principled approaches for determining the correct statistical distribution, and (ii) the fact that making such assumptions does not seem to impact IR effectiveness. However, if such assumptions are not validated, any subsequent calculations, deductions or modelling become less accurate for the task at hand. To remove the need for such assumptions, this thesis first introduces a statistically principled method for selecting the best-fitting distribution. The thesis then demonstrates that integrating knowledge about the best-fitting distribution into IR leads to superior results compared to existing strong baselines on multiple datasets. Overall, this thesis concludes...
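One common ingredient of principled distribution selection is a goodness-of-fit distance between the empirical CDF and each candidate model. The sketch below is our illustration under stated assumptions, not the thesis's method: the term-frequency sample is invented, only two candidate families are fitted (by simple moment/ML estimates), and the Kolmogorov-Smirnov distance is used as the selection criterion.

```python
# Pick the candidate distribution whose CDF is closest (in K-S distance)
# to the empirical CDF of a toy term-frequency sample.
import math

freqs = sorted([1, 1, 1, 2, 2, 3, 3, 4, 5, 7, 9, 12, 18, 30])
n = len(freqs)
mean = sum(freqs) / n
logs = [math.log(f) for f in freqs]
mu_ln = sum(logs) / n
sd_ln = math.sqrt(sum((l - mu_ln) ** 2 for l in logs) / n)

# Candidate model CDFs with parameters fitted from the sample.
candidates = {
    "exponential": lambda x: 1 - math.exp(-x / mean),
    "lognormal": lambda x: 0.5 * (1 + math.erf(
        (math.log(x) - mu_ln) / (sd_ln * math.sqrt(2)))),
}

def ks_distance(cdf):
    # Largest gap between the empirical CDF (checked on both sides of
    # each jump) and the model CDF.
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(freqs))

best = min(candidates, key=lambda name: ks_distance(candidates[name]))
print(best, {name: round(ks_distance(c), 3) for name, c in candidates.items()})
```

A full treatment would also attach a significance level to the distance and consider heavier-tailed families (e.g., power laws), which are common for term frequencies.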
Mello, P.A.; Pereyra, P.; Seligman, T.H.
1985-05-01
Ensembles of scattering S-matrices have been used in the past to describe the statistical fluctuations exhibited by many nuclear-reaction cross sections as a function of energy. In recent years, there have been attempts to construct these ensembles explicitly in terms of S, by directly proposing a statistical law for S. In the present paper, it is shown that, for an arbitrary number of channels, one can incorporate, in the ensemble of S-matrices, the conditions of flux conservation, time-reversal invariance, causality, ergodicity, and the requirement that the ensemble average coincide with the optical scattering matrix. Since these conditions do not specify the ensemble uniquely, the ensemble that has maximum information-entropy is dealt with among those that satisfy the above requirements. Some applications to few-channel problems and comparisons to Monte-Carlo calculations are presented.
Information Retrieval eXperience (IRX): Towards a Human-Centered Personalized Model of Relevance
Sluis, van der Frans; Broek, van den Egon L.; Dijk, van Betsy; Hoeber, O.; Li, Y.; Huang, X.J.
2010-01-01
We approach Information Retrieval (IR) from a User eXperience (UX) perspective. Through introducing a model for Information Retrieval eXperience (IRX), this paper operationalizes a perspective on IR that reaches beyond topicality. Based on a document's topicality, complexity, and emotional value, a
Kim, Seokyeon; Jeong, Seongmin; Woo, Insoo; Jang, Yun; Maciejewski, Ross; Ebert, David
2017-02-08
Geographic visualization research has focused on a variety of techniques to represent and explore spatiotemporal data. The goal of those techniques is to enable users to explore events and interactions over space and time in order to facilitate the discovery of patterns, anomalies and relationships within the data. However, it is difficult to extract and visualize data flow patterns over time for non-directional statistical data without trajectory information. In this work, we develop a novel flow analysis technique to extract, represent, and analyze flow maps of non-directional spatiotemporal data unaccompanied by trajectory information. We estimate a continuous distribution of these events over space and time, and extract flow fields for spatial and temporal changes utilizing a gravity model. Then, we visualize the spatiotemporal patterns in the data by employing flow visualization techniques. The user is presented with temporal trends of geo-referenced discrete events on a map. As such, overall spatiotemporal data flow patterns help users analyze geo-referenced temporal events, such as disease outbreaks, crime patterns, etc. To validate our model, we discard the trajectory information in an origin-destination dataset and apply our technique to the data and compare the derived trajectories and the original. Finally, we present spatiotemporal trend analysis for statistical datasets including twitter data, maritime search and rescue events, and syndromic surveillance.
Statistical Information and Uncertainty: A Critique of Applications in Experimental Psychology
Donald Laming
2010-04-01
This paper presents, first, a formal exploration of the relationships between information (statistically defined), statistical hypothesis testing, the use of hypothesis testing in reverse as an investigative tool, channel capacity in a communication system, uncertainty, the concept of entropy in thermodynamics, and Bayes’ theorem. This exercise brings out the close mathematical interrelationships between different applications of these ideas in diverse areas of psychology. Subsequent illustrative examples are grouped under (a) the human operator as an ideal communications channel, (b) the human operator as a purely physical system, and (c) Bayes’ theorem as an algorithm for combining information from different sources. Some tentative conclusions are drawn about the usefulness of information theory within these different categories. (a) The idea of the human operator as an ideal communications channel has long been abandoned, though it provides some lessons that still need to be absorbed today. (b) Treating the human operator as a purely physical system provides a platform for the quantitative exploration of many aspects of human performance by analogy with the analysis of other physical systems. (c) The use of Bayes’ theorem to calculate the effects of prior probabilities and stimulus frequencies on human performance is probably misconceived, but it is difficult to obtain results precise enough to resolve this question.
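The "Bayes' theorem as an algorithm" reading in point (c) can be made concrete with a minimal sketch. This example is ours, not the paper's: the signal-detection setting and all probabilities below are invented to show how a prior stimulus frequency combines with an observation likelihood.

```python
# Bayes' theorem as an algorithm for combining information:
# P(H|D) = P(D|H) P(H) / sum_h P(D|h) P(h).
def bayes_posterior(priors, likelihoods):
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(joint.values())          # normalizing constant P(D)
    return {h: j / z for h, j in joint.items()}

# Toy signal-detection setting: a rare signal with a fairly diagnostic cue.
priors = {"signal": 0.1, "noise": 0.9}        # stimulus frequencies
likelihoods = {"signal": 0.8, "noise": 0.2}   # P(cue | hypothesis)
post = bayes_posterior(priors, likelihoods)
print({h: round(p, 3) for h, p in post.items()})
```

Note how the low prior keeps the posterior for "signal" well below the likelihood of 0.8; whether human observers actually combine frequencies this way is exactly what the paper questions.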
Langenburg, Glenn; Champod, Christophe; Genessay, Thibault
2012-06-10
The aim of this research was to evaluate how fingerprint analysts would incorporate information from newly developed tools into their decision making processes. Specifically, we assessed effects using the following: (1) a quality tool to aid in the assessment of the clarity of the friction ridge details, (2) a statistical tool to provide likelihood ratios representing the strength of the corresponding features between compared fingerprints, and (3) consensus information from a group of trained fingerprint experts. The measured variables for the effect on examiner performance were the accuracy and reproducibility of the conclusions against the ground truth (including the impact on error rates) and the analyst accuracy and variation for feature selection and comparison. The results showed that participants using the consensus information from other fingerprint experts demonstrated more consistency and accuracy in minutiae selection. They also demonstrated higher accuracy, sensitivity, and specificity in the decisions reported. The quality tool also affected minutiae selection (which, in turn, had limited influence on the reported decisions); the statistical tool did not appear to influence the reported decisions.
Henriques, Ana; Oliveira, Hélia
2016-01-01
This paper reports on the results of a study investigating the potential to embed Informal Statistical Inference in statistical investigations, using TinkerPlots, for assisting 8th grade students' informal inferential reasoning to emerge, particularly their articulations of uncertainty. Data collection included students' written work on a…
Amy Rouinfar
2014-09-01
This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. The study produced two major findings. First, short duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems, but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, these short duration visual cues, when administered repeatedly over multiple training problems, resulted in participants becoming more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes.
Finnerty, Justin John; Peyser, Alexander; Carloni, Paolo
2015-01-01
Cation selective channels constitute the gate for ion currents through the cell membrane. Here we present an improved statistical mechanical model based on atomistic structural information, cation hydration state and without tuned parameters that reproduces the selectivity of biological Na+ and Ca2+ ion channels. The importance of the inclusion of step-wise cation hydration in these results confirms the essential role partial dehydration plays in the bacterial Na+ channels. The model, proven reliable against experimental data, could be straightforwardly used for designing Na+ and Ca2+ selective nanopores.
Majda, Andrew J.; Qi, Di
2016-02-01
Turbulent dynamical systems with a large phase space and a high degree of instabilities are ubiquitous in climate science and engineering applications. Statistical uncertainty quantification (UQ) of the response to changes in forcing or uncertain initial data in such complex turbulent systems requires the use of imperfect models, due to both the lack of physical understanding and the overwhelming computational demands of Monte Carlo simulation with a large-dimensional phase space. Thus, the systematic development of reduced low-order imperfect statistical models for UQ in turbulent dynamical systems is a grand challenge. This paper applies a recent mathematical strategy for calibrating imperfect models in a training phase and accurately predicting the response by combining information theory and linear statistical response theory in a systematic fashion. A systematic hierarchy of simple imperfect statistical closure schemes for UQ in these problems is designed and tested; the schemes are built through new local and global statistical energy conservation principles combined with statistical equilibrium fidelity. The forty-mode Lorenz 96 (L-96) model, which mimics forced baroclinic turbulence, is utilized as a test bed for the calibration and prediction phases of the hierarchy of computationally cheap imperfect closure models, both in the full phase space and in a reduced three-dimensional subspace containing the most energetic modes. In all phase spaces, the nonlinear response of the true model is captured accurately for the mean and variance by the systematic closure model, while alternative methods based on the fluctuation-dissipation theorem alone are much less accurate. For the reduced-order model for UQ in the three-dimensional subspace for L-96, the systematic low-order imperfect closure models coupled with the training strategy provide the highest predictive skill over other existing methods for general forced response yet have simple design principles based on a
Renner, Gerolf; Irblich, Dieter
2016-11-01
Test Reviews in Child Psychology: Test Users Wish to Obtain Practical Information Relevant to their Respective Field of Work This study investigated to what extent diagnosticians use reviews of psychometric tests for children and adolescents, how they evaluate their quality, and what they expect concerning content. Test users (n = 323) from different areas of work (notably social pediatrics, early intervention, special education, speech and language therapy) rated test reviews as one of the most important sources of information. Readers of test reviews value practically oriented descriptions and evaluations of tests that are relevant to their respective field of work. They expect independent reviews that critically discuss opportunities and limits of the tests under scrutiny. The results show that authors of test reviews should not only have a background in test theory but should also be familiar with the practical application of tests in various settings.
Social relevance: toward understanding the impact of the individual in an information cascade
Hall, Robert T.; White, Joshua S.; Fields, Jeremy
2016-05-01
Information Cascades (IC) through a social network occur due to the decision of users to disseminate content. We define this decision process as User Diffusion (UD). IC models typically describe an information cascade by treating a user as a node within a social graph, where a node's reception of an idea is represented by some activation state. The probability of activation then becomes a function of a node's connectedness to other activated nodes as well as, potentially, the history of activation attempts. We enrich this Coarse-Grained User Diffusion (CGUD) model by applying actor type logics to the nodes of the graph. The resulting Fine-Grained User Diffusion (FGUD) model utilizes prior research in actor typing to generate a predictive model regarding the future influence a user will have on an Information Cascade. Furthermore, we introduce a measure of Information Resonance that is used to aid in predictions regarding user behavior.
The relevance of cartographic scale in interactive and multimedia cartographic information systems
Mirjanka Lechthaler
2004-09-01
The application of new technologies in the processes of gathering, analysing, transforming, visualizing and communicating spatial data and geoinformation offers a great challenge for cartography. Cartographic information provision, as described in several cartographic models and included in cartographic information systems, depends on graphical presentation/visualization at certain scales. This necessitates defining the capacity or content borders (geometry and semantics) for cognition and communication. At the same time, we need and use maps as a vehicle for transporting spatial and temporal information. Graphic constructions of geoanalogies, linked with interaction, multimedia sequences and animations, support effective geocommunication, bridging the gaps imposed by having to work at pre-defined scales. This paper illustrates two interactive information systems, which were conceptualised and prototyped at the Institute of Cartography and Geomedia Technique, Vienna University of Technology.
LaPelle, Nancy R; Luckmann, Roger; Simpson, E Hatheway; Martin, Elaine R
2006-04-05
Movement towards evidence-based practices in many fields suggests that public health (PH) challenges may be better addressed if credible information about health risks and effective PH practices is readily available. However, research has shown that many PH information needs are unmet. In addition to reviewing relevant literature, this study performed a comprehensive review of existing information resources and collected data from two representative PH groups, focusing on identifying current practices, expressed information needs, and ideal systems for information access. Nineteen individual interviews were conducted among employees of two domains in a state health department--communicable disease control and community health promotion. Subsequent focus groups gathered additional data on preferences for methods of information access and delivery as well as information format and content. Qualitative methods were used to identify themes in the interview and focus group transcripts. Informants expressed similar needs for improved information access including single portal access with a good search engine; automatic notification regarding newly available information; access to best practice information in many areas of interest that extend beyond biomedical subject matter; improved access to grey literature as well as to more systematic reviews, summaries, and full-text articles; better methods for indexing, filtering, and searching for information; and effective ways to archive information accessed. Informants expressed a preference for improving systems with which they were already familiar such as PubMed and listservs rather than introducing new systems of information organization and delivery. A hypothetical ideal model for information organization and delivery was developed based on informants' stated information needs and preferred means of delivery. Features of the model were endorsed by the subjects who reviewed it. Many critical information needs of PH
Solution of Multiple-Point Statistics to Extracting Information from Remotely Sensed Imagery
Ge Yong; Bai Hexiang; Cheng Qiuming
2008-01-01
Two phenomena, similar objects with different spectra and different objects with similar spectra, often make it difficult to separate and identify all types of geographical objects using spectral information alone. Therefore, there is a need to incorporate spatial structural and spatial association properties of the surfaces of objects into image processing to improve the accuracy of classification of remotely sensed imagery. In the current article, a new method is proposed on the basis of the principle of multiple-point statistics for combining spectral information and spatial information for image classification. The method was validated by applying it to a case study on road extraction based on Landsat TM imagery taken over the Chinese Yellow River delta on August 8, 1999. The classification results have shown that this new method provides overall better results than traditional methods such as the maximum likelihood classifier (MLC).
Introduction of statistical information in a syntactic analyzer for document image recognition
Maroneze, André O.; Coüasnon, Bertrand; Lemaitre, Aurélie
2011-01-01
This paper presents an improvement to document layout analysis systems, offering a possible solution to Sayre's paradox (which states that an element "must be recognized before it can be segmented; and it must be segmented before it can be recognized"). This improvement, based on stochastic parsing, allows integration of statistical information, obtained from recognizers, during syntactic layout analysis. We present how this fusion of numeric and symbolic information in a feedback loop can be applied to syntactic methods to improve document description expressiveness. To limit combinatorial explosion during exploration of solutions, we devised an operator that allows optional activation of the stochastic parsing mechanism. Our evaluation on 1250 handwritten business letters shows this method allows the improvement of global recognition scores.
J. A. Reshi
2014-12-01
In this paper, a new class of Size-biased Generalized Gamma (SBGG) distributions is defined. The Size-biased Generalized Gamma (SBGG) distribution, a particular case of the weighted Generalized Gamma distribution taking the weights as the variate values, has been defined. The important statistical properties, including hazard functions, reverse hazard functions, mode, moment generating function, characteristic function, Shannon’s entropy, generalized entropy and Fisher’s information matrix of the new model, have been derived and studied. Here, we also study SBGG entropy estimation and the Akaike and Bayesian information criteria. A likelihood ratio test for size-biasedness is conducted. The estimation of parameters is obtained by employing the classical methods of estimation, especially the method of moments and the maximum likelihood estimator.
2011-04-25
... strategies to assure safety of the U.S. supply of blood and blood components, tissues, cells, and organs... and public health domains. This RFI is for information and planning purposes only and is not a... areas: Identifying strategies for protecting recipients and living donor health; Identifying processes...
Combining brains: a survey of methods for statistical pooling of information.
Lazar, Nicole A; Luna, Beatriz; Sweeney, John A; Eddy, William F
2002-06-01
More than one subject is scanned in a typical functional brain imaging experiment. How can the scientist make best use of the acquired data to map the specific areas of the brain that become active during the performance of different tasks? It is clear that we can gain both scientific and statistical power by pooling the images from multiple subjects; furthermore, for the comparison of groups of subjects (clinical patients vs healthy controls, children of different ages, left-handed people vs right-handed people, as just some examples), it is essential to have a "group map" to represent each population and to form the basis of a statistical test. While the importance of combining images for these purposes has been recognized, there has not been an organized attempt on the part of neuroscientists to understand the different statistical approaches to this problem, which have various strengths and weaknesses. In this paper we review some popular methods for combining information, and demonstrate the surveyed techniques on a sample data set. Given a combination of brain images, the researcher needs to interpret the result and decide on areas of activation; the question of thresholding is critical here and is also explored.
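As a concrete illustration of the kind of combining rule such a survey covers, the following sketch implements Fisher's method, which pools independent per-subject p-values into one group-level p-value. The choice of Fisher's method here is ours, for illustration only; the survey compares several such techniques, each with its own strengths.

```python
import math

def fisher_combine(p_values):
    """Fisher's method: pool k independent p-values into one.

    The statistic T = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom under the joint null.
    """
    k = len(p_values)
    t = -2.0 * sum(math.log(p) for p in p_values)
    # Chi-square survival function for even df = 2k has a closed form:
    # P(X > t) = exp(-t/2) * sum_{i=0}^{k-1} (t/2)^i / i!
    half = t / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Example: three subjects' p-values for the same voxel, none individually
# decisive, pooled into a single group-level p-value.
combined = fisher_combine([0.04, 0.10, 0.07])
```

Note how the pooled value can be smaller than any individual p-value: consistent moderate evidence across subjects accumulates, which is exactly the statistical power gain the abstract describes.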
RELEVANCE OF THE ACCOUNTING INFORMATION FOR THE MERGERS AND ACQUISITIONS IN ROMANIA
ALIN EMANUEL ARTENE
2012-11-01
In the Romanian economic environment in which entities are operating, we can find a large number of factors that obstruct the development of many economic entities, such as the perpetual economic and financial crisis, bureaucracy, inflation, competition and many others. In our opinion, one solution to overcome the difficulties Romanian economic entities are facing is merger or acquisition. The relevance of advanced accounting processes such as mergers and acquisitions can be a support for many small and medium-sized enterprises which in the last year have functioned at the same parameters as the year before. Through merger, this type of entity can harmonize its production process, find ways to develop new and improved products and production processes and, last but not least, increase the share of research and development among Romanian economic entities.
Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.
Frieden, B Roy; Gatenby, Robert A
2013-10-01
Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived, assuming as a working hypothesis that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
Hartcher-O'Brien, Jess; Di Luca, Massimiliano; Ernst, Marc O.
2014-01-01
Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies (i.e., considering the time of onset/offset of signals) might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates. PMID:24594578
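The optimal weighting scheme the abstract refers to is the standard reliability-weighted average from cue-combination theory. A minimal sketch follows; the duration values and variances are made up for illustration and are not taken from the study.

```python
def integrate_cues(mu_a, var_a, mu_v, var_v):
    """Optimal (maximum-likelihood) fusion of two redundant cues.

    Each cue is weighted by its relative precision (inverse variance);
    the fused estimate is unbiased and has the lowest possible variance.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    mu_hat = w_a * mu_a + w_v * mu_v
    var_hat = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return mu_hat, var_hat

# Hypothetical auditory and visual duration estimates (in ms):
# audition is the more precise cue, so the fused duration lies closer to it,
# and the fused variance is smaller than either cue's alone.
mu, var = integrate_cues(mu_a=500.0, var_a=100.0, mu_v=560.0, var_v=400.0)
```

The key signature of optimal integration, which such studies test empirically, is the last property: the bimodal estimate is more precise than the best unimodal one.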
Bunget Ovidiu Constantin
2013-07-01
The objective of IAS 29 is to establish specific standards for entities reporting in the currency of a hyperinflationary economy, so that the financial information provided is meaningful. Our empirical analysis encompasses a hyperinflationary economy covering a wide variety of hyperinflationary conditions.
Three subsets of sequence complexity and their relevance to biopolymeric information
Trevors Jack T
2005-08-01
Genetic algorithms instruct sophisticated biological organization. Three qualitative kinds of sequence complexity exist: random (RSC), ordered (OSC), and functional (FSC). FSC alone provides algorithmic instruction. Random and Ordered Sequence Complexities lie at opposite ends of the same bi-directional sequence complexity vector. Randomness in sequence space is defined by a lack of Kolmogorov algorithmic compressibility. A sequence is compressible because it contains redundant order and patterns. Law-like cause-and-effect determinism produces highly compressible order. Such forced ordering precludes both information retention and freedom of selection so critical to algorithmic programming and control. Functional Sequence Complexity requires this added programming dimension of uncoerced selection at successive decision nodes in the string. Shannon information theory measures the relative degrees of RSC and OSC. Shannon information theory cannot measure FSC. FSC is invariably associated with all forms of complex biofunction, including biochemical pathways, cycles, positive and negative feedback regulation, and homeostatic metabolism. The algorithmic programming of FSC, not merely its aperiodicity, accounts for biological organization. No empirical evidence exists of either RSC or OSC ever having produced a single instance of sophisticated biological organization. Organization invariably manifests FSC rather than successive random events (RSC) or low-informational self-ordering phenomena (OSC).
Megan Sapp Nelson
2013-03-01
Objective – This study aims to determine if the timing of library in-class presentations makes a difference in the type and quality of resources students use for each of four assignments in an introductory speech class. This comparison of content delivery timing contrasts a single, 50-minute lecture early in the semester with four approximately 12-minute lectures offered just before each assignment. Methods – First-year engineering students taking Fundamentals of Speech Communication provided the study group. Each speech assignment required students to turn in an outline and list of references. The list of references for each student was given to the librarians, after the assignments were appropriately anonymized, for analysis of resource type, quality of resource, and completeness of citation. Researchers coded a random sample of bibliographies from the assignments using a framework to identify resource type (book, periodical, Web, facts & figures, unknown) and quality, based on intended audience and purpose (scholarly, entertainment, persuasion/bias), and compared them to each other to determine if a difference was evident. The authors coordinated what material would be presented to the students to minimize variation between the sections. Results – The study found a statistically significant difference between groups of students, demonstrating that the frequent, short library instruction sessions produce an increased use of high-quality content. Similarly, the sections with multiple library interactions show more use of periodicals than websites, while completeness of references is not significantly different across teaching methods. Conclusions – More frequent and timely interaction between students and library instruction increases the quality of sources used and the completeness of the citations written. While researchers found statistically significant differences, the use of a citation coding framework developed for specific engineering
Using climate information for improved health in Africa: relevance, constraints and opportunities
Stephen J. Connor
2006-11-01
Good health status is one of the primary aspirations of human social development and, as a consequence, health indicators are key components of the human development indices by which we measure progress toward sustainable development. Certain diseases and ill health are associated with particular environmental and climate conditions. The timeframe of the Millennium Development Goals (MDGs) demands that the risks to health associated with current climate variability are more fully understood and acted upon to improve the focus of resources in climate-sensitive disease control, especially in sub-Saharan Africa, where good epidemiological surveillance data are lacking. In the absence of high-quality epidemiological data on malaria distribution in Africa, climate information has long been used to develop malaria risk maps illustrating the climatic suitability boundaries for endemic transmission. However, experience to date has shown that it is difficult in terms of availability, timing and cost to obtain meteorological observations from national meteorological services in Africa. National health services generally find the costs of purchasing these data prohibitive given their competing demands for resources across the spectrum of health service requirements. Some national health services have tried to overcome this access problem by using proxies derived from satellites, which tend to be available freely, in 'near-real-time', and therefore offer much promise for monitoring applications. This paper discusses the issues related to climate and health, reviews the current use of climate information for malaria endemic and epidemic surveillance, and presents examples of operational use of climate information for malaria control in Africa based on Geographical Information Systems and Remote Sensing.
On the relevance of Gibson's affordance concept for geographical information science (GISc).
Jonietz, David; Timpf, Sabine
2015-09-01
J. J. Gibson's concept of affordances has provided a theoretical basis for various studies in geographical information science (GISc). This paper sets out to explain its popularity from a GISc perspective. Based on a short review of previous work, it will be argued that its main contributions to GISc are twofold, including an action-centered view of spatial entities and the notion of agent-environment mutuality. Using the practical example of pedestrian behavior simulation, new potentials for using and extending affordances are discussed.
CONTEMPORARY APPROACHES OF COMPANY PERFORMANCE ANALYSIS BASED ON RELEVANT FINANCIAL INFORMATION
Sziki Klara; Kiss Melinda; Popa Dorina
2012-01-01
In this paper we chose to present two components of the financial statements: the profit and loss account and the cash flow statement. These summary documents and different indicators calculated based on them allow us to formulate assessments on the performance and profitability of various functions and levels of the company's activity. This paper aims to support the hypothesis that the accounting information presented in the profit and loss account and in the cash flow statement is an appr...
Pan, Xuequn; Cimino, James J
2014-01-01
Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
Kim, Junmo; Fisher, John W; Yezzi, Anthony; Cetin, Müjdat; Willsky, Alan S
2005-10-01
In this paper, we present a new information-theoretic approach to image segmentation. We cast the segmentation problem as the maximization of the mutual information between the region labels and the image pixel intensities, subject to a constraint on the total length of the region boundaries. We assume that the probability densities associated with the image pixel intensities within each region are completely unknown a priori, and we formulate the problem based on nonparametric density estimates. Due to the nonparametric structure, our method does not require the image regions to have a particular type of probability distribution and does not require the extraction and use of a particular statistic. We solve the information-theoretic optimization problem by deriving the associated gradient flows and applying curve evolution techniques. We use level-set methods to implement the resulting evolution. The experimental results based on both synthetic and real images demonstrate that the proposed technique can solve a variety of challenging image segmentation problems. Furthermore, our method, which does not require any training, performs as well as methods based on training.
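The quantity being maximized can be illustrated on discrete data: the empirical mutual information between region labels and quantized pixel intensities, computed from their joint histogram. This is only an analogy to the paper's objective (the method itself uses nonparametric continuous density estimates and curve evolution, not histograms).

```python
import math
from collections import Counter

def mutual_information(labels, intensities):
    """Empirical mutual information I(L; X), in bits, between region
    labels and quantized pixel intensities, from the joint histogram."""
    n = len(labels)
    joint = Counter(zip(labels, intensities))
    pl = Counter(labels)
    px = Counter(intensities)
    mi = 0.0
    for (l, x), c in joint.items():
        # p(l,x) / (p(l) p(x)) simplifies to c*n / (count_l * count_x)
        mi += (c / n) * math.log2(c * n / (pl[l] * px[x]))
    return mi

# A perfectly informative labeling: each label maps to one intensity bin,
# so the labels carry 1 bit about the (binary) intensities.
labels      = [0, 0, 1, 1]
intensities = [10, 10, 200, 200]
mi = mutual_information(labels, intensities)
```

A segmentation whose labels are independent of the intensities would score zero here, which is why maximizing this quantity drives the boundary toward a label/intensity alignment.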
Water Quality attainment Information from Clean Water Act Statewide Statistical Surveys
U.S. Environmental Protection Agency — Designated uses assessed by statewide statistical surveys and their state and national attainment categories. Statewide statistical surveys are water quality...
Lan, Ganhui; Tu, Yuhai
2016-05-01
preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network-the main players (nodes) and their interactions (links)-in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and development of quantitative in vivo techniques such as various florescence technology, these requirements are starting to be realized in different biological systems. The possible close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. 
We will also
B. S. Raghuram
1963-07-01
Information theory, which originated in telecommunication studies, is a branch of mathematical statistics with many applications to statistical inference. The three fundamental problems are: (i) development of statistical measures of information capacity in a communication system, (ii) the transmission problem of information in a system, and (iii) analytical study of reception from a statistical decision point of view. This paper is an attempt to present a comprehensive study of all three aspects. In addition, the application of sequential analysis, especially with reference to radar signal detection and range estimation, has been briefly discussed. Finally, from the point of view of signal reception in the case of a radar, the problem has been considered as a statistical decision study. In conclusion, the computational problems as well as certain comparative studies have been briefly touched upon. Illustrative examples are given and graphs are shown wherever necessary.
Graffelman, Jan; Sánchez, Milagros; Cook, Samantha; Moreno, Victor
2013-01-01
In genetic association studies, tests for Hardy-Weinberg proportions are often employed as a quality control checking procedure. Missing genotypes are typically discarded prior to testing. In this paper we show that inference for Hardy-Weinberg proportions can be biased when missing values are discarded. We propose to use multiple imputation of missing values in order to improve inference for Hardy-Weinberg proportions. For imputation we employ a multinomial logit model that uses information from allele intensities and/or neighbouring markers. Analysis of an empirical data set of single nucleotide polymorphisms possibly related to colon cancer reveals that missing genotypes are not missing completely at random. Deviation from Hardy-Weinberg proportions is mostly due to a lack of heterozygotes. Inbreeding coefficients estimated by multiple imputation of the missings are typically lowered with respect to inbreeding coefficients estimated by discarding the missings. Accounting for missings by multiple imputation qualitatively changed the results of 10 to 17% of the statistical tests performed. Estimates of inbreeding coefficients obtained by multiple imputation showed high correlation with estimates obtained by single imputation using an external reference panel. Our conclusion is that imputation of missing data leads to improved statistical inference for Hardy-Weinberg proportions.
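The Hardy-Weinberg checks the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' multinomial-logit imputation method: a chi-square goodness-of-fit statistic for Hardy-Weinberg proportions at a biallelic marker, together with the inbreeding coefficient F that the study reports as lowered after imputation. Function names and the toy genotype counts are assumptions for illustration.

```python
def hwe_chisq(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Chi-square statistic for Hardy-Weinberg proportions at a
    biallelic marker (assumes both alleles are observed)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # frequency of allele A
    q = 1 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def inbreeding_coefficient(n_aa: int, n_ab: int, n_bb: int) -> float:
    """Estimate F as 1 - (observed / expected heterozygosity)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)
    expected_het = 2 * p * (1 - p)
    return 1 - (n_ab / n) / expected_het
```

A heterozygote deficit, the deviation the study found most often, yields a positive F: for counts (30, 40, 30) the expected heterozygote count under HWE is 50, so F is about 0.2.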
CONTEMPORARY APPROACHES OF COMPANY PERFORMANCE ANALYSIS BASED ON RELEVANT FINANCIAL INFORMATION
Sziki Klara
2012-12-01
In this paper we chose to present two components of the financial statements: the profit and loss account and the cash flow statement. These summary documents and the different indicators calculated based on them allow us to formulate assessments of the performance and profitability of various functions and levels of the company's activity. This paper aims to support the hypothesis that the accounting information presented in the profit and loss account and in the cash flow statement is an appropriate source for assessing company performance. The purpose of this research is to answer the question linked to the main hypothesis: is it the profit and loss statement or the cash flow account that better reflects the performance of a business? Based on the specialty literature studied, we attempted a conceptual, analytical and practical approach to the term performance, overviewing some terminological acceptations of the term as well as the main indicators of performance analysis based on the profit and loss account and the cash flow statement: aggregated indicators, also known as intermediary balances of administration, economic rate of return, rate of financial profitability, rate of return through cash flows, operating cash flow rate, and the rate of generating operating cash out of gross operating result. At the same time, we took a comparative approach to the profit and loss account and the cash flow statement, outlining the main advantages and disadvantages of these documents. In order to demonstrate the above theoretical assessments, we chose to analyze these indicators based on information from the financial statements of SC Sinteza SA, a company in Bihor county listed on the Bucharest Stock Exchange.
Hobden, Sally
2014-01-01
Information on the HIV/AIDS epidemic in Southern Africa is often interpreted through a veil of secrecy and shame and, I argue, with flawed understanding of basic statistics. This research determined the levels of statistical literacy evident in 316 future Mathematical Literacy teachers' explanations of the median in the context of HIV/AIDS…
Cheong, B. J.; Koh, Y. J.; Kim, H. S.; Koh, S. H.; Kang, D. H.; Kang, T. W. [Cheju National Univ., Jeju (Korea, Republic of)
2004-02-15
The goal of this study is to estimate the relevance and influence of the existing regulation and the RI-PBR on the institutionalization of the regulatory system. This study reviews the current regulatory system and the status of the RI-PBR implementation of the US NRC and Korea based upon SECY Papers, the Risk Informed Regulation Implementation Plan (RIRIP) of the US NRC, and other domestic studies. The recent trends of the individual technologies regarding the RI-PBR and RIA are also summarized.
The European Charter for Regional or Minority Languages: Still Relevant in the Information Age?
Sarah McMonagle
2012-12-01
The impact of new information and communication technologies on European societies could not have been foreseen at the time the European Charter for Regional or Minority Languages (ECRML) was adopted two decades ago. Although the text of the ECRML contains no reference to such technologies, they clearly have a role in the context of linguistic communication, given their current social ubiquity. The measures outlined in the ECRML concerning, inter alia, media and cultural facilities are precisely those being affected by the new media landscape. We can therefore be certain that the internet has some sort of impact on regional and minority languages in Europe, yet detailed assessments of this impact at the policy level are lacking. This article seeks to uncover the extent to which the Committee of Experts of the ECRML assesses the impact of the internet on those languages that have been selected by state parties for protection and promotion under the provisions of the ECRML. Findings show that references to the internet have increased in the reports of the Committee of Experts since monitoring began. However, the role of new technologies in inhibiting or facilitating regional and minority languages is seldom evaluated.
Stevanović Ivana
2012-01-01
The paper emphasises the importance of „good statistics of juvenile justice“ as one of the bases for a clearer overview of the juvenile crime situation, in order to create unified policies at local and national levels for the suppression and prevention of this phenomenon, and to create appropriate areas for action in terms of improving the system reform. The author gives a particular review of the „Global Indicators in Juvenile Justice“, which present the basic set of data and a comparative tool providing a basis for the assessment and evaluation of services and policies in the field of juvenile justice, and highlights the importance of the compatibility of „national indicators“ with them. Particular attention in the paper is devoted to the overview and analysis of the measures necessary to improve this field, which were prepared and delivered to the relevant ministries and institutions by the Council for Monitoring and Promoting the Work of Bodies Engaged in Criminal Proceedings and Enforcement of Juvenile Criminal Sanctions Involving Juveniles - the Juvenile Justice Council (hereinafter: the Council). It points, first of all, to the suggestions made by the Council to the Ministry of Justice with the aim of improving the Program for automated record keeping; the necessary changes to the Court Rules and certain amendments to Forms SK-3 and SK-4 of the Statistical Office of the Republic of Serbia are also presented. [Project of the Ministry of Science of the Republic of Serbia, no. 47011: Crime in Serbia - phenomenology, risks and possibilities of social intervention]
Statistical shape and texture model of quadrature phase information for prostate segmentation.
Ghose, Soumya; Oliver, Arnau; Martí, Robert; Lladó, Xavier; Freixenet, Jordi; Mitra, Jhimli; Vilanova, Joan C; Comet-Batlle, Josep; Meriaudeau, Fabrice
2012-01-01
Prostate volume estimation from segmentation of transrectal ultrasound (TRUS) images aids in diagnosis and treatment of prostate hypertrophy and cancer. Computer-aided accurate and computationally efficient prostate segmentation in TRUS images is a challenging task, owing to low signal-to-noise ratio, speckle noise, calcifications, and heterogeneous intensity distribution in the prostate region. A multi-resolution framework using texture features in a parametric deformable statistical model of shape and appearance was developed to segment the prostate. Local phase information of log-Gabor quadrature filter extracted texture of the prostate region in TRUS images. Large bandwidth of log-Gabor filter ensures easy estimation of local orientations, and zero response for a constant signal provides invariance to gray level shift. This aids in enhanced representation of the underlying texture information of the prostate unaffected by speckle noise and imaging artifacts. The parametric model of the propagating contour is derived from principal component analysis of prior shape and texture information of the prostate from the training data. The parameters were modified using prior knowledge of the optimization space to achieve segmentation. The proposed method achieves a mean Dice similarity coefficient value of 0.95 ± 0.02 and mean absolute distance of 1.26 ± 0.51 millimeter when validated with 24 TRUS images of 6 data sets in a leave-one-patient-out validation framework. The proposed method for prostate TRUS image segmentation is computationally efficient and provides accurate prostate segmentations in the presence of intensity heterogeneities and imaging artifacts.
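The mean Dice similarity coefficient of 0.95 ± 0.02 quoted above is a standard overlap metric between a computed segmentation and a ground-truth mask. A minimal sketch, not the authors' code (representing masks as sets of pixel coordinates is an assumption for illustration):

```python
def dice_coefficient(mask_a: set, mask_b: set) -> float:
    """Dice similarity coefficient 2|A ∩ B| / (|A| + |B|) between two
    binary segmentation masks given as sets of (row, col) pixels."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))
```

Identical masks score 1.0, disjoint masks 0.0, and a half-overlapping pair scores 0.5, which is why values near 0.95 indicate close agreement with manual delineation.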
Uneke, Chigozie Jesse; Ezeoha, Abel Ebeh; Uro-Chukwu, Henry; Ezeonu, Chinonyelum Thecla; Ogbu, Ogbonnaya; Onwe, Friday; Edoga, Chima
2015-01-01
Information and communication technology (ICT) tools are known to facilitate communication, the processing of information, and the sharing of knowledge by electronic means. In Nigeria, the lack of adequate capacity in the use of ICT by health sector policymakers constitutes a major impediment to the uptake of research evidence into the policymaking process. The objective of this study was to improve the knowledge and capacity of policymakers to access and utilize policy-relevant evidence. A modified "before and after" intervention study design was used, in which outcomes were measured on the target participants both before and after the intervention was implemented. A 4-point Likert scale according to the degree of adequacy (1 = grossly inadequate, 4 = very adequate) was employed. This study was conducted in Ebonyi State, south-eastern Nigeria, and the participants were career health policymakers. A two-day intensive ICT training workshop was organized for policymakers, with 52 participants in attendance. Topics covered included: (i) intersectoral partnership/collaboration; (ii) engaging ICT in evidence-informed policy making; (iii) use of ICT for evidence synthesis; and (iv) capacity development on the use of computers, the internet, and other ICT. The pre-workshop mean of knowledge and capacity for use of ICT ranged from 2.19-3.05, while the post-workshop mean ranged from 2.67-3.67 on the 4-point scale. The percentage increase in mean knowledge and capacity at the end of the workshop ranged from 8.3%-39.1%. Findings of this study suggest that policymakers' ICT competence relevant to evidence-informed policymaking can be enhanced through a training workshop.
Francisco Viacava
2002-01-01
Assuming that health statistics should be considered as an organized data system composed of vital statistics, administrative records, and morbidity and mortality data, the relevance of generating population-based statistics on health status and use of health services is stressed, as a compensation for the limitations of health-services data that are based on users of the health system, or of vital statistics not always associated with social variables. Health surveys are the only way to generate health information for evaluating health system performance and for monitoring social inequalities in health and in the use of health services. Some main issues concerning the use of health surveys in other countries are pointed out, along with the methods employed to build a health statistics system as part of the health sector reform. Some of the possibilities and limitations of the Brazilian health information system are discussed as well.
Spatial mapping of statistical information using the concept of equivalence
Nadejda Yu. Gubanova
2011-05-01
The article focuses on problems in the use of statistical tests for revealing equivalence. The combined application of equivalence and statistical difference tests in spatial mapping is discussed.
Daniela Haluza
2015-11-01
Individual skin health attitudes are influenced by various factors, including public education campaigns, mass media, family, and friends. Evidence-based, educative information materials assist communication and decision-making in doctor-patient interactions. The present study aims at assessing the prevailing use of skin health information materials and sources and their impact on skin health knowledge, motives to tan, and sun protection. We conducted a questionnaire survey among a representative sample of Austrian residents. Print media and television were perceived as the two most relevant sources of skin health information, whereas the source physician was ranked third. Picking the information source physician increased participants' skin health knowledge (p = 0.025) and sun-protective behavior (p < 0.001). The study results highlight the demand for targeted health messages to attain lifestyle changes towards photo-protective habits. Providing resources that encourage pro-active counseling in everyday doctor-patient communication could increase skin health knowledge and sun-protective behavior and, thus, curb the rise in skin cancer incidence rates.
Proutski Vitali
2010-12-01
Background: To date, there are no clinically reliable predictive markers of response to the current treatment regimens for advanced colorectal cancer. The aim of the current study was to compare and assess the power of transcriptional profiling using a generic microarray and a disease-specific transcriptome-based microarray. We also examined the biological and clinical relevance of the disease-specific transcriptome. Methods: DNA microarray profiling was carried out on isogenic sensitive and 5-FU-resistant HCT116 colorectal cancer cell lines using the Affymetrix HG-U133 Plus2.0 array and the Almac Diagnostics Colorectal cancer disease-specific Research tool. In addition, DNA microarray profiling was also carried out on pre-treatment metastatic colorectal cancer biopsies using the colorectal cancer disease-specific Research tool. The two microarray platforms were compared based on detection of probesets and biological information. Results: The results demonstrated that the disease-specific transcriptome-based microarray was able to out-perform the generic genomic-based microarray on a number of levels, including detection of transcripts and pathway analysis. In addition, the disease-specific microarray contains a high percentage of antisense transcripts, and further analysis demonstrated that a number of these exist in sense:antisense pairs. Comparison between cell line models and metastatic CRC patient biopsies further demonstrated that a number of the identified sense:antisense pairs were also detected in CRC patient biopsies, suggesting potential clinical relevance. Conclusions: Analysis from our in vitro and clinical experiments has demonstrated that many transcripts exist in sense:antisense pairs, including IGF2BP2, which may have a direct regulatory function in the context of colorectal cancer. While the functional relevance of the antisense transcripts has been established by many studies, their functional role is currently unclear.
Castillo-Ortiz, Jose Dionisio; Valdivia-Nuno, Jose de Jesus; Ramirez-Gomez, Andrea; Garagarza-Mariscal, Heber; Gallegos-Rios, Carlos; Flores-Hernandez, Gabriel; Hernandez-Sanchez, Luis; Brambila-Barba, Victor; Castaneda-Sanchez, Jose Juan; Barajas-Ochoa, Zalathiel; Suarez-Rico, Angel; Sanchez-Gonzalez, Jorge Manuel; Ramos-Remus, Cesar
Education is a major health determinant and one of the main independent outcome predictors in rheumatoid arthritis (RA). The use of the Internet by patients has grown exponentially in the last decade. To assess the characteristics, legibility and quality of the information available in Spanish on the Internet regarding rheumatoid arthritis. The search was performed on Google using the phrase rheumatoid arthritis. Information from the first 30 pages was evaluated according to a pre-established format (relevance, scope, authorship, type of publication and financial objective). The quality and legibility of the pages were assessed using two validated tools, DISCERN and INFLESZ respectively. Data extraction was performed by senior medical students and evaluation was achieved by consensus. The Google search returned 323 hits but only 63% were considered relevant; 80% of these were information sites (71% discussed exclusively RA, 44% conventional treatment and 12% alternative therapies) and 12.5% had a primary financial interest. Sixty percent of the sites were created by nonprofit organizations and 15% by medical associations. Web sites posted by medical institutions from the United States of America were better positioned in Spanish (Arthritis Foundation 4th position and American College of Rheumatology 10th position) than web sites posted by Spanish-speaking countries. There is a risk of disinformation for patients with RA who use the Internet. We identified a window of opportunity for rheumatology medical institutions from Spanish-speaking countries to have a more prominent societal involvement in the education of their patients with RA. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
DHLAS: A web-based information system for statistical genetic analysis of HLA population data.
Thriskos, P; Zintzaras, E; Germenis, A
2007-03-01
DHLAS (database HLA system) is a user-friendly, web-based information system for the analysis of human leukocyte antigen (HLA) data from population studies. DHLAS has been developed using Java and the R system; it runs on a Java Virtual Machine and its web-based user interface is powered by the servlet engine Tomcat. It utilizes Struts, a Model-View-Controller framework, and uses several GNU packages to perform several of its tasks. The database engine it relies upon for fast access is MySQL, but others can be used as well. The system estimates metrics, performs statistical testing and produces graphs required for HLA population studies: (i) Hardy-Weinberg equilibrium (calculated using both asymptotic and exact tests), (ii) genetic distances (Euclidean or Nei), (iii) phylogenetic trees using the unweighted pair group method with averages and the neighbor-joining method, (iv) linkage disequilibrium (pairwise and overall, including variance estimations), (v) haplotype frequencies (estimated using the expectation-maximization algorithm) and (vi) discriminant analysis. The main merit of DHLAS is the incorporation of a database; thus, the data can be stored and manipulated along with integrated genetic data analysis procedures. In addition, it has an open architecture allowing the inclusion of other functions and procedures.
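As one illustration of the metrics such a system computes, Nei's standard genetic distance between two populations can be sketched from its textbook definition, D = -ln(Jxy / sqrt(Jx * Jy)), with the J terms averaged over loci. This is a hedged minimal implementation, not DHLAS code; the input layout (a list of per-locus allele-frequency lists) is an assumption.

```python
import math

def nei_distance(freqs_x, freqs_y):
    """Nei's standard genetic distance between two populations.

    freqs_x, freqs_y: per-locus allele-frequency lists, e.g.
    [[0.6, 0.4], [0.1, 0.9]] for two biallelic loci.
    """
    jx = jy = jxy = 0.0
    for locus_x, locus_y in zip(freqs_x, freqs_y):
        jx += sum(p * p for p in locus_x)   # within-population identity, X
        jy += sum(q * q for q in locus_y)   # within-population identity, Y
        jxy += sum(p * q for p, q in zip(locus_x, locus_y))  # between-pop
    n_loci = len(freqs_x)
    identity = (jxy / n_loci) / math.sqrt((jx / n_loci) * (jy / n_loci))
    return -math.log(identity)
```

Identical allele-frequency profiles give a distance of zero; diverged profiles give increasingly positive values, which is what feeds the UPGMA and neighbor-joining tree construction mentioned above.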
Li, Jiangyuan; Petropulu, Athina P.; Poor, H. Vincent
2011-03-01
Cooperative beamforming in relay networks is considered, in which a source transmits to its destination with the help of a set of cooperating nodes. The source first transmits locally. The cooperating nodes that receive the source signal retransmit a weighted version of it in an amplify-and-forward (AF) fashion. Assuming knowledge of the second-order statistics of the channel state information, beamforming weights are determined so that the signal-to-noise ratio (SNR) at the destination is maximized subject to two different power constraints, i.e., a total (source and relay) power constraint, and individual relay power constraints. For the former constraint, the original problem is transformed into a problem of one variable, which can be solved via Newton's method. For the latter constraint, the original problem is transformed into a homogeneous quadratically constrained quadratic programming (QCQP) problem. In this case, it is shown that when the number of relays does not exceed three the global solution can always be constructed via semidefinite programming (SDP) relaxation and the matrix rank-one decomposition technique. For the cases in which the SDP relaxation does not generate a rank one solution, two methods are proposed to solve the problem: the first one is based on the coordinate descent method, and the second one transforms the QCQP problem into an infinity norm maximization problem in which a smooth finite norm approximation can lead to the solution using the augmented Lagrangian method.
Itoh, M
2000-10-01
The pattern of the mood-congruent effect in an autobiographical memory recall task was investigated. Each subject was randomly assigned to one of three experimental conditions: positive mood, negative mood (induced with music), and control groups (no specific mood). Subjects were then presented with a word at a time from a list of trait words, which were pleasant or unpleasant. They decided whether they could recall any of their autobiographical memories related to the word, and responded with "yes" or "no" buttons as rapidly and accurately as possible. After the task, they were given five minutes for an incidental free recall test. Results indicated that the mood-congruent effect was found regardless of whether there was an autobiographical memory related to the word or not in both positive and negative mood states. The effect of moods on self-relevant information processing was discussed.
V. K. Hohlov
2014-01-01
The article studies a neural network approach to obtaining the statistical characteristics of input vector implementations of signal and noise under ill-conditioned matrices of correlation moments, in order to solve the problems of selecting and reducing the dimension of vectors of informative features in the detection and recognition of signals and noise based on regression methods. The scientific novelty lies in applying neural network algorithms to the efficient solution of problems of selecting informative features and determining the parameters of regression algorithms under degenerate or ill-conditioned data with unknown expectations and covariance matrices. The article proposes using a single-layer neural network with no zero weights and activation functions to calculate the initial regression characteristics and the mean-square error of multiple initial regression representations, which are necessary to justify the selection of informative features, reduce the dimension of feature vectors, and implement the regression algorithms. It is shown that when direct links between the inputs and their corresponding neurons are excluded, the weight coefficients of the neuron inputs in the trained network are the coefficients of initial multiple regression, and the mean-square error of the multiple initial regression representations is calculated at the outputs of the neurons. The article considers the conditioning of the problem of calculating the inverse of the matrix of correlation moments, and defines a condition number characterizing the relative error of the stated task. The problem of the conditioning of the matrix of correlation moments of informative signal features and noise arises when solving the problem of finding the multiple coefficients of initial regression (MCIR) and the residual mean-square values of the multiple regression representations. For obtaining the MCIR and finding the residual mean-square values, the matrix of correlation moments of
Wells, W. T.; Borman, K. L.; Mitchell, R. D.; Dempsey, D. J.
1979-01-01
The statistical variations in the sample gate outputs of the GEOS-3 satellite altimeter were studied for possible sea state information. After examination of a large number of statistical characteristics of the altimeter waveforms, it was found that the best predictor of significant wave height H-1/3 in the range of 0 to 3 meters was the 75th percentile of sample-and-hold gate number 11.
Mark William Perlin
2015-01-01
Background: DNA mixtures of two or more people are a common type of forensic crime scene evidence. A match statistic that connects the evidence to a criminal defendant is usually needed for court. Jurors rely on this strength of match to help decide guilt or innocence. However, the reliability of unsophisticated match statistics for DNA mixtures has been questioned. Materials and Methods: The most prevalent match statistic for DNA mixtures is the combined probability of inclusion (CPI), used by crime labs for over 15 years. When testing 13 short tandem repeat (STR) genetic loci, the CPI^-1 value is typically around a million, regardless of DNA mixture composition. However, actual identification information, as measured by a likelihood ratio (LR), spans a much broader range. This study examined probability of inclusion (PI) mixture statistics for 517 locus experiments drawn from 16 reported cases and compared them with LR locus information calculated independently on the same data. The log(PI^-1) values were examined and compared with corresponding log(LR) values. Results: The LR and CPI methods were compared in case examples of false inclusion, false exclusion, a homicide, and criminal justice outcomes. Statistical analysis of crime laboratory STR data shows that inclusion match statistics exhibit a truncated normal distribution having zero center, with little correlation to actual identification information. By the law of large numbers (LLN), CPI^-1 increases with the number of tested genetic loci, regardless of DNA mixture composition or match information. These statistical findings explain why CPI is relatively constant, with implications for DNA policy, criminal justice, the cost of crime, and crime prevention. Conclusions: Forensic crime laboratories have generated CPI statistics on hundreds of thousands of DNA mixture evidence items. However, this commonly used match statistic behaves like a random generator of inclusionary values, following the LLN.
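The LLN behavior described above follows directly from CPI's definition as a product of per-locus inclusion probabilities: each additional locus multiplies in another factor below one, so CPI^-1 grows with locus count whether or not the loci carry identification information. A hedged numeric illustration (the per-locus PI of 0.35 is hypothetical, not drawn from the study's data):

```python
import math

def cpi(per_locus_pi):
    """Combined probability of inclusion across independent loci:
    the product of the per-locus inclusion probabilities."""
    product = 1.0
    for pi in per_locus_pi:
        product *= pi
    return product

# Hypothetical per-locus PI of 0.35 at each of 13 STR loci:
pis = [0.35] * 13
combined = cpi(pis)
print(f"CPI^-1 = {1 / combined:.3g}")  # on the order of a million
print(f"log10(CPI^-1) = {13 * -math.log10(0.35):.2f}")
```

Because the product shrinks geometrically in the number of loci, CPI^-1 lands near a million for any 13-locus mixture with moderate per-locus PI values, which is the near-constancy the study critiques.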
Cheong, B. J.; Kang, J. M.; Kim, H. S.; Koh, S. H.; Kang, D. H.; Park, C. H. [Cheju Univ., Jeju (Korea, Republic of)
2003-02-15
The goal of this study is to estimate the relevance and influence of the existing regulation and the RI-PBR on the institutionalization of the regulatory system. This study reviews the current regulatory system and the status of the RI-PBR implementation of the US NRC and Korea based upon SECY Papers, the Risk Informed Regulation Implementation Plan (RIRIP) of the US NRC, and other domestic studies. In order to investigate the perceptions, knowledge level, and grounds for regulatory change, a survey of Korean nuclear utilities, researchers and regulators was performed on the perception of the RIR. The questionnaire was composed of 50 questions regarding personal details on work experience, level of education and specific field of work; level of knowledge of the risk-informed, performance-based regulation (RI-PBR); the perception of the current regulation (its effectiveness, level of procedure, flexibility, dependency on the regulator and personal view); and the perception of the RI-PBR, such as flexibility of regulation, introduction time and effect of the RI-PBR, safety improvement, public perception, parts of the existing regulatory system that should be changed, etc. In total, 515 respondents answered from all sectors of the nuclear field: utilities, engineering companies, research institutes, and regulatory bodies.
An Exercise in Exploring Big Data for Producing Reliable Statistical Information.
Rey-Del-Castillo, Pilar; Cardeñosa, Jesús
2016-06-01
The availability of copious data about many human, social, and economic phenomena is considered an opportunity for the production of official statistics. National statistical organizations and other institutions are more and more involved in new projects for developing what is sometimes seen as a possible change of paradigm in the way statistical figures are produced. Nevertheless, there are hardly any systems in production using Big Data sources. Arguments of confidentiality, data ownership, representativeness, and others make it a difficult task to get results in the short term. Using Call Detail Records from Ivory Coast as an illustration, this article shows some of the issues that must be dealt with when producing statistical indicators from Big Data sources. A proposal of a graphical method to evaluate one specific aspect of the quality of the computed figures is also presented, demonstrating that the visual insight provided improves the results obtained using other traditional procedures.
Nathan Paula M
2010-06-01
Background: Mathematical models of infection that consider targeted interventions are exquisitely dependent on the assumed mixing patterns of the population. We report on a pilot study designed to assess three different methods (one retrospective, two prospective) for obtaining contact data relevant to the determination of these mixing patterns. Methods: 65 adults were asked to record their social encounters in each location visited during 6 study days using a novel method whereby a change in the physical location of the study participant triggered data entry. Using a cross-over design, all participants recorded encounters on 3 days in a paper diary and 3 days using an electronic recording device (PDA). Participants were randomised to the first prospective recording method. Results: Both methods captured more contacts than a pre-study questionnaire, but ascertainment using the paper diary was superior to the PDA (mean difference 4.52, 95% CI 0.28 to 8.77). Paper diaries were found more acceptable to the participants than the PDA. Statistical analysis confirms that our results are broadly consistent with those reported from large-scale European surveys. An association with household size was observed (trend 0.14, 95% CI 0.06 to 0.22). Conclusions: The study's location-based reporting design allows greater scope than other methods for examining differences in the characteristics of encounters over a range of environments. Improved parameterisation of dynamic transmission models gained from work of this type will aid the development of more robust decision support tools to assist health policy makers and planners.
Troyan, V.N.
1982-01-01
For the first time, materials are generalized concerning statistical methods of processing seismic information, which are increasingly widely used in prospecting for minerals (oil, gas, ore) in regions of complex structure and great depth. The methods provide reliable identification of useful signals against a background of interference. The fundamentals of constructing the algorithms and programs used in interpretation are examined, together with their efficiency.
Boysen, Guy A.
2015-01-01
Student evaluations of teaching are among the most accepted and important indicators of college teachers' performance. However, faculty and administrators can overinterpret small variations in mean teaching evaluations. The current research examined the effect of including statistical information on the interpretation of teaching evaluations.…
Brown, Christopher C.
2011-01-01
As federal government information is increasingly migrating to online formats, libraries are providing links to this content via URLs or persistent URLs (PURLs) in their online public access catalogs (OPACs). Clickthrough statistics that accumulated as users visited links to online content in the University of Denver's library OPAC were gathered…
19 CFR 103.31 - Information on vessel manifests and summary statistical reports.
2010-04-01
..., they may request and obtain from Customs, information from vessel manifests, subject to the rules set... purchases. Refunds will not be provided. Information regarding the technical specifications of the CD-ROMS... 19 Customs Duties 1 2010-04-01 2010-04-01 false Information on vessel manifests and summary...
Hawthorne L. Beyer; Jeff Jenness; Samuel A. Cushman
2010-01-01
Spatial information systems (SIS) is a term that describes a wide diversity of concepts, techniques, and technologies related to the capture, management, display and analysis of spatial information. It encompasses technologies such as geographic information systems (GIS), global positioning systems (GPS), remote sensing, and relational database management systems (...
Mathematical statistics and stochastic processes
Bosq, Denis
2013-01-01
Generally, books on mathematical statistics are restricted to the case of independent identically distributed random variables. In this book however, both this case AND the case of dependent variables, i.e. statistics for discrete and continuous time processes, are studied. This second case is very important for today's practitioners.Mathematical Statistics and Stochastic Processes is based on decision theory and asymptotic statistics and contains up-to-date information on the relevant topics of theory of probability, estimation, confidence intervals, non-parametric statistics and rob
My Opinions about Statistical Informization Application Construction
田艳
2004-01-01
The paper demonstrates the problems facing statistical informization application systems, analyses the causes of these problems and possible ways to standardize the existing statistical informization systems, and puts forward the suggestion of developing, in accordance with established rules, a set of unified general-purpose statistical software and software tools.
Perruchet, Pierre; Poulin-Charronnat, Benedicte
2012-01-01
Endress and Mehler (2009) reported that when adult subjects are exposed to an unsegmented artificial language composed from trisyllabic words such as ABX, YBC, and AZC, they are unable to distinguish between these words and what they coined as the "phantom-word" ABC in a subsequent test. This suggests that statistical learning generates knowledge…
Using Information Technology in Teaching of Business Statistics in Nigeria Business School
Hamadu, Dallah; Adeleke, Ismaila; Ehie, Ike
2011-01-01
This paper discusses the use of Microsoft Excel software in the teaching of statistics in the Faculty of Business Administration at the University of Lagos, Nigeria. Problems associated with existing traditional methods are identified and a novel pedagogy using Excel is proposed. The advantages of using this software over other specialized…
Basic properties and information theory of Audic-Claverie statistic for analyzing cDNA arrays
Tiňo, Peter
2009-01-01
Background The Audic-Claverie method [1] has been and still continues to be a popular approach for detection of differentially expressed genes in the SAGE framework. The method is based on the assumption that under the null hypothesis tag counts of the same gene in two libraries come from the same but unknown Poisson distribution. The problem is that each SAGE library represents only a single measurement. We ask: Given that the tag count samples from SAGE libraries are extremely limited, how useful actually is the Audic-Claverie methodology? We rigorously analyze the A-C statistic that forms a backbone of the methodology and represents our knowledge of the underlying tag generating process based on one observation. Results We show that the A-C statistic and the underlying Poisson distribution of the tag counts share the same mode structure. Moreover, the K-L divergence from the true unknown Poisson distribution to the A-C statistic is minimized when the A-C statistic is conditioned on the mode of the Poisson distribution. Most importantly, the expectation of this K-L divergence never exceeds 1/2 bit. Conclusion A rigorous underpinning of the Audic-Claverie methodology has been missing. Our results constitute a rigorous argument supporting the use of Audic-Claverie method even though the SAGE libraries represent very sparse samples. PMID:19775462
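The A-C statistic described above has a closed form for equal library sizes, p(y|x) = (x+y)! / (x! y! 2^(x+y+1)). A minimal numerical sketch (the value λ = 5 is chosen arbitrarily for illustration) checks that the statistic is a proper distribution and that conditioning on the Poisson mode reduces the K-L divergence, as the abstract states:

```python
from math import lgamma, log, exp, log2

def ac_prob(y, x):
    # Audic-Claverie predictive probability p(y|x) for equal library sizes:
    # p(y|x) = (x+y)! / (x! * y! * 2^(x+y+1))
    return exp(lgamma(x + y + 1) - lgamma(x + 1) - lgamma(y + 1)
               - (x + y + 1) * log(2))

def poisson_pmf(y, lam):
    return exp(-lam + y * log(lam) - lgamma(y + 1))

def kl_bits(lam, x, ymax=200):
    # K-L divergence (in bits) from Poisson(lam) to the A-C statistic given x
    return sum(poisson_pmf(y, lam) * log2(poisson_pmf(y, lam) / ac_prob(y, x))
               for y in range(ymax))

lam = 5.0
mode = int(lam)  # mode of Poisson(5)
print(round(sum(ac_prob(y, mode) for y in range(200)), 6))  # ~1.0
print(kl_bits(lam, mode) < kl_bits(lam, mode + 10))  # → True: conditioning on the mode helps
```

The sum over y confirms that p(·|x) is a genuine probability distribution (it is a negative binomial with r = x + 1, p = 1/2), and the divergence comparison illustrates the mode-conditioning result.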
Altvater-Mackensen, Nicole; Jessen, Sarah; Grossmann, Tobias
2017-01-01
Infants' perception of faces becomes attuned to the environment during the first year of life. However, the mechanisms that underpin perceptual narrowing for faces are only poorly understood. Considering the developmental similarities seen in perceptual narrowing for faces and speech and the role that statistical learning has been shown to play…
Kuss, O
2015-03-30
Meta-analyses with rare events, especially those that include studies with no event in one ('single-zero') or even both ('double-zero') treatment arms, are still a statistical challenge. In the case of double-zero studies, researchers generally delete these studies or use continuity corrections to avoid them. A number of arguments against both options have been given, and statistical methods that use the information from double-zero studies without using continuity corrections have been proposed. In this paper, we collect them and compare them by simulation. This simulation study tries to mirror real-life situations as completely as possible by deriving true underlying parameters from empirical data on actually performed meta-analyses. It is shown that for each of the commonly encountered effect estimators, valid statistical methods are available that use the information from double-zero studies without using continuity corrections. Interestingly, all of them are truly random-effects models, so even the current standard method for very sparse data recommended by the Cochrane Collaboration, the Yusuf-Peto odds ratio, can be improved upon. For actual analysis, we recommend using beta-binomial regression methods to arrive at summary estimates for the odds ratio, the relative risk, or the risk difference. Methods that ignore information from double-zero studies or use continuity corrections should no longer be used. We illustrate the situation with an example where the original analysis ignores 35 double-zero studies, and a superior analysis discovers a clinically relevant advantage of off-pump surgery in coronary artery bypass grafting. Copyright © 2014 John Wiley & Sons, Ltd.
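The continuity-correction practice that this abstract argues against can be seen in a minimal fixed-effect sketch (the 2x2 tables below are invented for illustration; this shows the criticized standard approach, not the recommended beta-binomial method). A double-zero study enters with log odds ratio 0 and a large variance, so it contributes almost no information:

```python
import math

def log_or_with_correction(a, b, c, d, cc=0.5):
    # 2x2 table: a, b = events / non-events (treatment); c, d = events / non-events (control).
    # Apply the 0.5 continuity correction only when a zero cell is present.
    if 0 in (a, b, c, d):
        a, b, c, d = (x + cc for x in (a, b, c, d))
    log_or = math.log(a * d / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d
    return log_or, var

# Three hypothetical studies; the last one is 'double-zero' (no events in either arm)
studies = [(2, 98, 5, 95), (1, 199, 4, 196), (0, 50, 0, 50)]
num = den = 0.0
for a, b, c, d in studies:
    lo, v = log_or_with_correction(a, b, c, d)
    num += lo / v
    den += 1 / v
pooled_or = math.exp(num / den)
# The double-zero study gets log OR = 0 with a large variance -- effectively
# no weight in the pooled estimate, which is the behaviour criticized above.
print(round(pooled_or, 3))
```

Under the corrected inverse-variance scheme the double-zero table yields exactly log OR = 0 with variance greater than 4, which is why such studies are effectively discarded.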
76 FR 21780 - Agency Information Collection Activities: Bureau of Justice Statistics
2011-04-18
... be asked to fill out an online survey gathering facility-level characteristics. Sampled youth in... burden hours associated with this collection (including gathering facility-level information,...
Schepaschenko, D.; McCallum, I.; Shvidenko, A.; Kraxner, F.; Fritz, S.
2009-04-01
There is a critical need for accurate land cover information for resource assessment, biophysical modeling, greenhouse gas studies, and for estimating possible terrestrial responses and feedbacks to climate change. However, practically all existing land cover datasets have quite a high level of uncertainty and suffer from a lack of important details that does not allow for relevant parameterization, e.g., data derived from different forest inventories. The objective of this study is to develop a methodology in order to create a hybrid land cover dataset at the level which would satisfy requirements of the verified terrestrial biota full greenhouse gas account (Shvidenko et al., 2008) for large regions i.e. Russia. Such requirements necessitate a detailed quantification of land classes (e.g., for forests - dominant species, age, growing stock, net primary production, etc.) with additional information on uncertainties of the major biometric and ecological parameters in the range of 10-20% and a confidence interval of around 0.9. The approach taken here allows the integration of different datasets to explore synergies and in particular the merging and harmonization of land and forest inventories, ecological monitoring, remote sensing data and in-situ information. The following datasets have been integrated: Remote sensing: Global Land Cover 2000 (Fritz et al., 2003), Vegetation Continuous Fields (Hansen et al., 2002), Vegetation Fire (Sukhinin, 2007), Regional land cover (Schmullius et al., 2005); GIS: Soil 1:2.5 Mio (Dokuchaev Soil Science Institute, 1996), Administrative Regions 1:2.5 Mio, Vegetation 1:4 Mio, Bioclimatic Zones 1:4 Mio (Stolbovoi & McCallum, 2002), Forest Enterprises 1:2.5 Mio, Rivers/Lakes and Roads/Railways 1:1 Mio (IIASA's data base); Inventories and statistics: State Land Account (FARSC RF, 2006), State Forest Account - SFA (FFS RF, 2003), Disturbances in forests (FFS RF, 2006). The resulting hybrid land cover dataset at 1-km resolution comprises
Kershenbaum Aaron
2005-06-01
Background The rapid publication of important research in the biomedical literature makes it increasingly difficult for researchers to keep current with significant work in their area of interest. Results This paper reports a scalable method for the discovery of protein-protein interactions in Medline abstracts, using a combination of text analytics, statistical and graphical analysis, and a set of easily implemented rules. Applying these techniques to 12,300 abstracts, a precision of 0.61 and a recall of 0.97 were obtained (f = 0.74); when allowing for two-hop and three-hop relations discovered by graphical analysis, the precision was 0.74 (f = 0.83). Conclusion This combination of linguistic and statistical approaches appears to provide the highest precision and recall thus far reported in detecting protein-protein relations using text analytic approaches.
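The reported f-values appear to be the harmonic mean of precision and recall (the F_beta score with beta = 1, an assumption on our part); a minimal sketch reproduces them to within rounding:

```python
def f_measure(precision, recall, beta=1.0):
    # Weighted harmonic mean of precision and recall (F_beta score)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Figures reported in the abstract (single-hop, then with 2-/3-hop relations)
print(round(f_measure(0.61, 0.97), 2))  # close to the reported 0.74
print(round(f_measure(0.74, 0.97), 2))  # close to the reported 0.83
```

The tiny discrepancies with the published values suggest the authors rounded or truncated slightly differently.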
On the Estimation and Use of Statistical Modelling in Information Retrieval
Petersen, Casper
This thesis introduces a statistically principled method for selecting the best-fitting distribution. It then demonstrates that integrating knowledge about the best-fitting distribution into IR leads to superior results compared to existing strong baselines on multiple datasets. Overall, the thesis concludes that assumptions regarding the distribution of dataset properties can be replaced with an effective, efficient and principled method for determining the best-fitting distribution, and that using this distribution can lead to improved retrieval performance.
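The idea of selecting a best-fitting distribution by a principled criterion can be sketched with maximum-likelihood fits compared by AIC (the two candidate families, the simulated data, and the use of AIC are illustrative assumptions, not necessarily the thesis's method):

```python
import numpy as np

def aic_exponential(x):
    # Closed-form MLE for the exponential rate, then AIC = 2k - 2*loglik
    lam = 1.0 / x.mean()
    ll = len(x) * np.log(lam) - lam * x.sum()
    return 2 * 1 - 2 * ll  # one parameter

def aic_lognormal(x):
    # Fit a normal to log(x); include the Jacobian of the log transform
    logx = np.log(x)
    mu, sigma = logx.mean(), logx.std()
    ll = (-0.5 * len(x) * np.log(2 * np.pi * sigma**2)
          - ((logx - mu) ** 2).sum() / (2 * sigma**2)
          - logx.sum())
    return 2 * 2 - 2 * ll  # two parameters

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=0.8, size=5000)  # heavy-tailed toy data
fits = {"exponential": aic_exponential(data), "lognormal": aic_lognormal(data)}
print(min(fits, key=fits.get))  # → lognormal
```

On data actually drawn from a lognormal, the criterion correctly prefers the lognormal family over the exponential.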
Thomas Albrecht
2013-01-01
Full Text Available We present a novel method for nonrigid registration of 3D surfaces and images. The method can be used to register surfaces by means of their distance images, or to register medical images directly. It is formulated as a minimization problem of a sum of several terms representing the desired properties of a registration result: smoothness, volume preservation, matching of the surface, its curvature, and possible other feature images, as well as consistency with previous registration results of similar objects, represented by a statistical deformation model. While most of these concepts are already known, we present a coherent continuous formulation of these constraints, including the statistical deformation model. This continuous formulation renders the registration method independent of its discretization. The finite element discretization we present is, while independent of the registration functional, the second main contribution of this paper. The local discontinuous Galerkin method has not previously been used in image registration, and it provides an efficient and general framework to discretize each of the terms of our functional. Computational efficiency and modest memory consumption are achieved thanks to parallelization and locally adaptive mesh refinement. This allows for the first time the use of otherwise prohibitively large 3D statistical deformation models.
E. N. Klochkova
2016-01-01
Current global trends also influence the Russian economy, which has fully entered the era of an emerging information society. The development and broad application of information and communication technologies (ICT) is a global tendency of world development and is crucial for increasing the competitiveness of the economy, expanding the opportunities for its integration into the world economic system, and improving the efficiency of public administration and local self-government. There is now no alternative to the development of the information society. Expanded use of ICT is a condition of transition to a new economic mode, a factor in the growth of citizens' quality of life and of labor productivity, and an instrument for protecting national interests. In recent years ICT has become an effective tool in the economic relations arising in the production, distribution, exchange and consumption of goods between economic actors. The widespread introduction of information technologies into the economic activity of society stimulates profound infrastructure changes across the global economic space. Today most countries aim at forming an information society, and the highest-priority directions of development are the creation of e-government and the implementation of information technologies in education, culture and health care. Indicators of information society development change dynamically both in the Russian Federation and in most foreign countries, and competition for companies' presence in the international market is becoming tougher. An important task of Russia's further social and economic development is to improve the quality of information exchange in various spheres of activity on the basis of the effective development of the ICT sector.
Wartenberg Daniel
2006-11-01
Background To communicate population-based cancer statistics, cancer researchers have a long tradition of presenting data in a spatial representation, or map. Historically, health data were presented in printed atlases in which the map producer selected the content and format. The availability of geographic information systems (GIS) with comprehensive mapping and spatial analysis capability for desktop and Internet mapping has greatly expanded the number of producers and consumers of health maps, including policymakers and the public. Because health maps, particularly ones that show elevated cancer rates, have historically raised public concerns, it is essential that these maps be designed to be accurate, clear, and interpretable for the broad range of users who may view them. This article focuses on designing maps to communicate effectively. It is based on years of research into the use of health maps for communicating among public health researchers. Results The basics of designing maps that communicate effectively are similar to the basics of any mode of communication. Tasks include deciding on the purpose, knowing the audience and its characteristics, choosing a medium suitable for both the purpose and the audience, and finally testing the map design to ensure that it suits the purpose with the intended audience and communicates accurately and effectively. Special considerations for health maps include ensuring confidentiality and reflecting the uncertainty of small-area statistics. Statistical maps need to be based on sound practices and principles developed by the statistical and cartographic communities. Conclusion The biggest challenge is to ensure that maps of health statistics inform without misinforming. Advances in the sciences of cartography, statistics, and visualization of spatial data are constantly expanding the toolkit available to mapmakers to meet this challenge. Asking potential users to answer questions or to talk
American National Standards Institute; National Information Standards Organization (Estados Unidos)
2013-01-01
This standard identifies the categories of basic statistical data on libraries reported at the U.S. national level, and provides associated definitions of terms. In doing so, it addresses the following areas: reporting unit and target population, human resources, collection resources, infrastructure, finances, and services. In addition, the standard identifies new measures related to network services, databases, and performance. The standard is not intended to be exhaustive in ...
Karl L Evans
Winter habitat use and the magnitude of migratory connectivity are important parameters when assessing drivers of the marked declines in avian migrants. Such information is unavailable for most species. We use a stable isotope approach to assess these factors for three declining African-Eurasian migrants whose winter ecology is poorly known: wood warbler Phylloscopus sibilatrix, house martin Delichon urbicum and common swift Apus apus. Spatially segregated breeding populations of wood warblers (sampled across an 800 km transect) and of house martins and common swifts (sampled across a 3,500 km transect) exhibited statistically identical intra-specific carbon and nitrogen isotope ratios in winter-grown feathers. Such patterns are compatible with a high degree of migratory connectivity, but could arise if species use isotopically similar resources at different locations. Wood warbler carbon isotope ratios are more depleted than is typical for African-Eurasian migrants and are compatible with use of moist lowland forest. The very limited variance in these ratios indicates specialisation on isotopically restricted resources, which may drive the similarity in wood warbler populations' stable isotope ratios and increase susceptibility to environmental change within the wintering grounds. House martins were previously considered to primarily use moist montane forest during the winter, but this seems unlikely given the enriched nature of their carbon isotope ratios. House martins use a narrower isotopic range of resources than the common swift, indicative of increased specialisation or a relatively limited wintering range; both factors could increase house martins' vulnerability to environmental change. The marked variance in isotope ratios within each common swift population contributes to the lack of population-specific signatures and indicates that the species is less vulnerable to environmental change in sub-Saharan Africa than our other focal species. Our findings
Bruno Del Papa
2014-01-01
This dissertation explores some applications of statistical mechanics and information theory tools to topics of interest in anthropology, social sciences, and economics. We intended to develop mathematical and computational models with empirical and theoretical bases aiming to identify important features of two problems: the transitions between egalitarian and hierarchical societies and the emergence of money in human societies. Anthropological data suggest the existence of a correlation ...
Rezch' ikov, V.B.; Bagautdinov, G.M.; Gerasimova, L.F.; Sayfullina, L.I.
1978-01-01
A functional program algorithm called ''analysis of statistical information on the operation of drill bits and bottomhole motors'' has been developed at the IVTS institute TatNIPINEFT' as part of an automated system for designing borehole construction. The possibility of introducing this program into operation before the development of the system as a whole is indicated.
2014-01-01
Cutaneous leishmaniasis (CL) is a neglected tropical disease strongly associated with poverty. Treatment is problematic and no vaccine is available. Ethiopia has seen new outbreaks in areas previously not known to be endemic, often with co-infection by the human immunodeficiency virus (HIV) with rates reaching 5.6% of the cases. The present study concerns the development of a risk model based on environmental factors using geographical information systems (GIS), statistical analysis and model...
Ali Kharrazi; Fath, Brian D.; Harald Katzmair
2016-01-01
Despite its ambiguities, the concept of resilience is of critical importance to researchers, practitioners, and policy-makers in dealing with dynamic socio-ecological systems. In this paper, we critically examine the three empirical approaches of (i) panarchy; (ii) ecological information-based network analysis; and (iii) statistical evidence of resilience to three criteria determined for achieving a comprehensive understanding and application of this concept. These criteria are the ability: (...
Optimal Mass Transport for Statistical Estimation, Image Analysis, Information Geometry, and Control
2017-01-10
Improvement of Information and Methodical Provision of Macro-economic Statistical Analysis
Tiurina Dina M.
2014-02-01
The article generalises and analyses the main shortcomings of the modern system of macro-statistical analysis based on the use of the system of national accounts and the balance of the national economy. On the basis of a historical analysis of the formation of the indicators of the system of national accounts, the article shows that the problems with its practical use have both regional and global causes. To overcome the impossibility of accounting for quality of life, the article offers a system of quality indicators based on the general perception of wellbeing as the population's assurance of its own solvency, together with representative sampling of economic subjects.
Heidema, A.G.; Thissen, U.; Boer, J.M.; Bouwman, F.G.; Feskens, E.J.M.; Mariman, E.C.
2009-01-01
In this study, we applied the multivariate statistical tool Partial Least Squares (PLS) to analyze the relative importance of 83 plasma proteins in relation to coronary heart disease (CHD) mortality and the intermediate end points body mass index, HDL-cholesterol and total cholesterol. From a Dutch
Levett-Jones, Tracy; Kenny, Raelene; Van der Riet, Pamela; Hazelton, Michael; Kable, Ashley; Bourgeois, Sharon; Luxford, Yoni
2009-08-01
This paper profiles a study that explored nursing students' information and communication technology competence and confidence. It presents selected findings that focus on students' attitudes towards information and communication technology as an educational methodology and their perceptions of its relevance to clinical practice. Information and communication technology is integral to contemporary nursing practice. Development of these skills is important to ensure that graduates are 'work ready' and adequately prepared to practice in increasingly technological healthcare environments. This was a mixed methods study. Students (n=971) from three Australian universities were surveyed using an instrument designed specifically for the study, and 24 students participated in focus groups. The focus group data revealed that a number of students were resistant to the use of information and communication technology as an educational methodology and lacked the requisite skills and confidence to engage successfully with this educational approach. Survey results indicated that 26 per cent of students were unsure about the relevance of information and communication technology to clinical practice and only 50 per cent felt 'very confident' using a computer. While the importance of information and communication technology to student's learning and to their preparedness for practice has been established, it is evident that students' motivation is influenced by their level of confidence and competence, and their understanding of the relevance of information and communication technology to their future careers.
Epstein, Richard H; Dexter, Franklin; Hofer, Ira S; Rodriguez, Luis I; Schwenk, Eric S; Maga, Joni M; Hindman, Bradley J
2017-06-08
Perioperative hypothermia may increase the incidences of wound infection, blood loss, transfusion, and cardiac morbidity. U.S. national quality programs for perioperative normothermia specify the presence of at least 1 "body temperature" ≥35.5°C during the interval from 30 minutes before to 15 minutes after the anesthesia end time. Using data from 4 academic hospitals, we evaluated timing and measurement considerations relevant to the current requirements to guide hospitals wishing to report perioperative temperature measures using electronic data sources. Anesthesia information management system databases from 4 hospitals were queried to obtain intraoperative temperatures and intervals to the anesthesia end time from discontinuation of temperature monitoring, end of surgery, and extubation. Inclusion criteria included age >16 years, use of a tracheal tube or supraglottic airway, and case duration ≥60 minutes. The end-of-case temperature was determined as the maximum intraoperative temperature recorded within 30 minutes before the anesthesia end time (ie, the temperature that would be used for reporting purposes). The fractions of cases with intervals >30 minutes between the last intraoperative temperature and the anesthesia end time were determined. Among the hospitals, averages (binned by quarters) of 34.5% to 59.5% of cases had intraoperative temperature monitoring discontinued >30 minutes before the anesthesia end time. Even if temperature measurement had been continued until extubation, averages of 5.9% to 20.8% of cases would have exceeded the allowed 30-minute window. Averages of 8.9% to 21.3% of cases had end-of-case intraoperative temperatures <35.5°C (ie, a quality measure failure). Because of timing considerations, a substantial fraction of cases would have been ineligible to use the end-of-case intraoperative temperature for national quality program reporting. Thus, retrieval of postanesthesia care unit temperatures would have been necessary. A
Bjerregaard, Peter; Becker, Ulrik
2013-01-01
Questionnaires are widely used to obtain information on health-related behaviour, and they are more often than not the only method that can be used to assess the distribution of behaviour in subgroups of the population. No validation studies of reported consumption of tobacco or alcohol have been...
Embedding Web-Based Statistical Translation Models in Cross-Language Information Retrieval
Kraaij, W.; Nie, J.Y.; Simard, M.
2003-01-01
Although more and more language pairs are covered by machine translation (MT) services, there are still many pairs that lack translation resources. Cross-language information retrieval (CUR) is an application that needs translation functionality of a relatively low level of sophistication, since
English Collocation Learning through Corpus Data: On-Line Concordance and Statistical Information
Ohtake, Hiroshi; Fujita, Nobuyuki; Kawamoto, Takeshi; Morren, Brian; Ugawa, Yoshihiro; Kaneko, Shuji
2012-01-01
We developed an English Collocations On Demand system offering on-line corpus and concordance information to help Japanese researchers acquire a better command of English collocation patterns. The Life Science Dictionary Corpus consists of approximately 90,000,000 words collected from life science related research papers published in academic…
Performing meta-analysis with incomplete statistical information in clinical trials
Hunter Anthony
2008-08-01
Background Results from clinical trials are usually summarized in the form of sampling distributions. When full information (mean, SEM) about these distributions is given, performing meta-analysis is straightforward. However, when some of the sampling distributions only have mean values, a challenging issue is to decide how to use such distributions in meta-analysis. Currently, the most common approaches are either ignoring such trials or, for each trial with a missing SEM, finding a similar trial and taking its SEM value as the missing SEM. Both approaches have drawbacks. As an alternative, this paper develops and tests two new methods, the first being the prognostic method and the second being the interval method, to estimate any missing SEMs from a set of sampling distributions with full information. A merging method is also proposed to handle clinical trials with partial information to simulate meta-analysis. Methods Both of our methods use the assumption that the samples for which the sampling distributions will be merged are randomly selected from the same population. In the prognostic method, we predict the missing SEMs from the given SEMs. In the interval method, we define intervals that we believe will contain the missing SEMs and then we use these intervals in the merging process. Results Two sets of clinical trials are used to verify our methods. One family of trials compares different drugs for the reduction of low-density lipoprotein (LDL) cholesterol in Type-2 diabetes, and the other concerns the effectiveness of drugs for lowering intraocular pressure (IOP). Both methods are shown to be useful for approximating the conventional meta-analysis including trials with incomplete information. For example, the meta-analysis result of Latanoprost versus Timolol on IOP reduction for six months provided in [1] was 5.05 ± 1.15 (Mean ± SEM) with full information. If the last trial in this study is assumed to be with partial information
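The 'full information' merging the abstract describes can be sketched as standard inverse-variance pooling of (mean, SEM) pairs; the trial values below are invented for illustration and are not from the paper:

```python
def merge(distributions):
    # Inverse-variance (precision-weighted) pooling of sampling distributions,
    # each given as (mean, SEM) -- the 'full information' case.
    weights = [1.0 / sem**2 for _, sem in distributions]
    mean = sum(w * m for (m, _), w in zip(distributions, weights)) / sum(weights)
    sem = (1.0 / sum(weights)) ** 0.5
    return mean, sem

# Hypothetical trials reporting mean IOP reduction (mmHg) with SEM
trials = [(5.3, 1.2), (4.8, 0.9), (5.6, 1.5)]
m, s = merge(trials)
print(round(m, 2), round(s, 2))
```

When an SEM is missing, this pooling cannot be applied directly, which is the gap the paper's prognostic and interval methods address.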
Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em
2017-09-01
Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, that detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a 'patch dynamics' flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more 'microscopic' simulation. We consider, as such 'auxiliary' models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in
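A single-fidelity, one-dimensional sketch of the gap-filling idea (the RBF kernel, lengthscale, and test function are illustrative assumptions, not the paper's multi-level scheme): condition a Gaussian process on the surviving points to estimate the values lost to a simulated failure.

```python
import numpy as np

def rbf(a, b, ls=1.0, var=1.0):
    # Squared-exponential covariance between two 1-D point sets
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_fill(x_obs, y_obs, x_missing, noise=1e-4):
    # GP regression posterior mean at the missing locations
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_missing, x_obs)
    alpha = np.linalg.solve(K, y_obs)
    return Ks @ alpha

x = np.linspace(0, 2 * np.pi, 25)
y = np.sin(x)
keep = np.ones(len(x), bool)
keep[11:14] = False  # simulate a failed processor's lost patch of data
y_est = gp_fill(x[keep], y[keep], x[~keep])
print(np.max(np.abs(y_est - np.sin(x[~keep]))) < 0.1)  # small reconstruction error
```

The same conditioning step generalizes to the multi-level, multi-fidelity setting the abstract describes, with the coarse auxiliary simulator supplying an additional information source.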
Guevara Erra, R.; Mateos, D. M.; Wennberg, R.; Perez Velazquez, J. L.
2016-11-01
It is said that complexity lies between order and disorder. In the case of brain activity, and physiology in general, complexity issues are being considered with increased emphasis. We sought to identify features of brain organization that are optimal for sensory processing, and that may guide the emergence of cognition and consciousness, by analyzing neurophysiological recordings in conscious and unconscious states. We find a surprisingly simple result: normal wakeful states are characterized by the greatest number of possible configurations of interactions between brain networks, representing the highest entropy values. Therefore, the information content is larger in the networks associated with conscious states, suggesting that consciousness could be the result of an optimization of information processing. These findings help to guide, in a more formal sense, inquiry into how consciousness arises from the organization of matter.
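The quantity described, the entropy of the distribution of pairwise connectivity configurations, can be sketched as follows (the binarized connectivity and window counts are illustrative assumptions, not the authors' preprocessing):

```python
import numpy as np
from collections import Counter

def configuration_entropy(connected):
    """Shannon entropy (bits) of observed network configurations.

    `connected` is a (windows x pairs) boolean array recording whether each
    pair of signals was synchronized in each time window.
    """
    configs = Counter(map(tuple, connected.astype(int)))
    total = sum(configs.values())
    p = np.array([c / total for c in configs.values()])
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# A "wakeful" surrogate: pairs connect independently with p=0.5 (max variety).
awake = rng.random((200, 4)) < 0.5
# An "unconscious" surrogate: almost always the same configuration.
uncon = np.zeros((200, 4), dtype=bool)
h_awake, h_uncon = configuration_entropy(awake), configuration_entropy(uncon)
```

With four pairs there are at most 2^4 = 16 configurations, so the entropy is bounded by 4 bits; the wakeful surrogate approaches that bound while the frozen one has zero entropy.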
Dubiner, Moshe
2008-01-01
Consider the problem of finding high dimensional approximate nearest neighbors, where the data is generated by some known probabilistic model. We will investigate a large natural class of algorithms which we call bucketing codes. We will define bucketing information, prove that it bounds the performance of all bucketing codes, and that the bucketing information bound can be asymptotically attained by randomly constructed bucketing codes. For example, suppose we have n Bernoulli(1/2) very long (length d → ∞) sequences of bits. Let n − 2m sequences be completely independent, while the remaining 2m sequences are composed of m independent pairs. The interdependence within each pair is that their bits agree with probability 1/2 < p ≤ 1. Moreover, if one sequence out of each pair belongs to a known set of n^((2p−1)² − ε) sequences, then pairing can be done using order n comparisons!
Jun Zhang
2013-12-01
Divergence functions are the non-symmetric "distance" on the manifold, Μθ, of parametric probability density functions over a measure space, (Χ, μ). Classical information geometry prescribes, on Μθ: (i) a Riemannian metric given by the Fisher information; (ii) a pair of dual connections (giving rise to the family of α-connections) that preserve the metric under parallel transport by their joint actions; and (iii) a family of divergence functions (α-divergence) defined on Μθ × Μθ, which induce the metric and the dual connections. Here, we construct an extension of this differential geometric structure from Μθ (that of parametric probability density functions) to the manifold, Μ, of non-parametric functions on X, removing the positivity and normalization constraints. The generalized Fisher information and α-connections on M are induced by an α-parameterized family of divergence functions, reflecting the fundamental convex inequality associated with any smooth and strictly convex function. The infinite-dimensional manifold, M, has zero curvature for all these α-connections; hence, the generally non-zero curvature of Μθ can be interpreted as arising from an embedding of Μθ into Μ. Furthermore, when a parametric model (after a monotonic scaling) forms an affine submanifold, its natural and expectation parameters form biorthogonal coordinates, and such a submanifold is dually flat for α = ± 1, generalizing the results of Amari's α-embedding. The present analysis illuminates two different types of duality in information geometry, one concerning the referential status of a point (measurable function) expressed in the divergence function ("referential duality") and the other concerning its representation under an arbitrary monotone scaling ("representational duality").
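For orientation, the α-divergence family referred to above is commonly written in the following form for (not necessarily normalized) positive functions p and q; this is a standard textbook form supplied for reference, not quoted from the paper:

```latex
D_{\alpha}(p, q) \;=\; \frac{4}{1-\alpha^{2}} \int_{\mathcal{X}}
\left[ \frac{1-\alpha}{2}\, p(x) \;+\; \frac{1+\alpha}{2}\, q(x)
\;-\; p(x)^{\frac{1-\alpha}{2}}\, q(x)^{\frac{1+\alpha}{2}} \right] \mathrm{d}\mu(x)
```

Its second-order expansion induces the Fisher metric, and in the limits α → ±1 it reduces to the generalized Kullback-Leibler divergences for unnormalized functions.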
Information-theory-based solution of the inverse problem in classical statistical mechanics.
D'Alessandro, Marco; Cilloco, Francesco
2010-08-01
We present a procedure for the determination of the interaction potential from the knowledge of the radial pair distribution function. The method, realized inside an inverse Monte Carlo simulation scheme, is based on the application of the maximum entropy principle of information theory and the interaction potential emerges as the asymptotic expression of the transition probability. Results obtained for high density monoatomic fluids are very satisfactory and provide an accurate extraction of the potential, despite a modest computational effort.
Why relevance theory is relevant for lexicography
Bothma, Theo; Tarp, Sven
2014-01-01
, socio-cognitive and affective relevance. It then shows, with examples, why relevance is important from a user perspective in the extra-lexicographical pre- and post-consultation phases and in the intra-lexicographical consultation phase. It defines an additional type of subjective relevance … that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users' information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new … dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new …
Bamidis, P D; Lithari, C; Konstantinidis, S T
2010-01-01
With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only presented with an abundance of information, but are also continuously confronted with the risks associated with erroneously copying another's material. In parallel, Information Communication Technology (ICT) tools provide researchers with novel and continuously more effective ways to analyze and present their work. Software tools for statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers' and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial on specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces. PMID:21487489
Shirasaki, Masato; Nishimichi, Takahiro; Li, Baojiu; Higuchi, Yuichi
2017-04-01
We investigate the information content of various cosmic shear statistics on the theory of gravity. Focusing on the Hu-Sawicki-type f(R) model, we perform a set of ray-tracing simulations and measure the convergence bispectrum, peak counts and Minkowski functionals. We first show that while the convergence power spectrum does have sensitivity to the current value of extra scalar degree of freedom |fR0|, it is largely compensated by a change in the present density amplitude parameter σ8 and the matter density parameter Ωm0. With accurate covariance matrices obtained from 1000 lensing simulations, we then examine the constraining power of the three additional statistics. We find that these probes are indeed helpful to break the parameter degeneracy, which cannot be resolved from the power spectrum alone. We show that especially the peak counts and Minkowski functionals have the potential to rigorously (marginally) detect the signature of modified gravity with the parameter |fR0| as small as 10⁻⁵ (10⁻⁶) if we can properly model them on small (∼1 arcmin) scale in a future survey with a sky coverage of 1500 deg². We also show that the signal level is similar among the additional three statistics and all of them provide complementary information to the power spectrum. These findings indicate the importance of combining multiple probes beyond the standard power spectrum analysis to detect possible modifications to general relativity.
Yan, Koon-Kiu; Gerstein, Mark
2011-01-01
The presence of web-based communities is a distinctive signature of Web 2.0. The web-based feature means that information propagation within each community is highly facilitated, promoting complex collective dynamics in view of information exchange. In this work, we focus on a community of scientists and study, in particular, how the awareness of a scientific paper is spread. Our work is based on the web usage statistics obtained from the PLoS Article Level Metrics dataset compiled by PLoS. The cumulative number of HTML views was found to follow a long tail distribution which is reasonably well-fitted by a lognormal one. We modeled the diffusion of information by a random multiplicative process, and thus extracted the rates of information spread at different stages after the publication of a paper. We found that the spread of information displays two distinct decay regimes: a rapid downfall in the first month after publication, and a gradual power law decay afterwards. We identified these two regimes with two distinct driving processes: a short-term behavior driven by the fame of a paper, and a long-term behavior consistent with citation statistics. The patterns of information spread were found to be remarkably similar in data from different journals, but there are intrinsic differences for different types of web usage (HTML views and PDF downloads versus XML). These similarities and differences shed light on the theoretical understanding of different complex systems, as well as a better design of the corresponding web applications that is of high potential marketing impact.
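The model described, a random multiplicative process whose cumulative views are lognormally distributed, can be sketched with synthetic data standing in for the PLoS usage statistics (the growth rates below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Random multiplicative process: each day's views multiply the cumulative
# total by a random factor, so log(views) is a sum of i.i.d. terms and the
# cumulative view count is (approximately) lognormally distributed.
n_papers, n_days = 5000, 60
log_growth = rng.normal(loc=0.05, scale=0.2, size=(n_papers, n_days))
views = 100 * np.exp(log_growth.sum(axis=1))   # cumulative HTML views

# Fit a lognormal by matching the moments of log(views).
mu, sigma = np.log(views).mean(), np.log(views).std()
```

With these assumed rates, the fitted log-mean should be close to log(100) + 60 × 0.05 and the fitted log-std close to 0.2 × sqrt(60), which the moment fit recovers.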
Yamaguchi, Tadashi; Hachiya, Hiroyuki; Kamiyama, Naohisa; Moriyasu, Fuminori
2002-05-01
To realize a quantitative diagnosis of liver cirrhosis, we have been analyzing the characteristics of echo amplitude in B-mode images. Distinguishing between liver diseases such as liver cirrhosis and chronic hepatitis is required in the field of medical ultrasonics. In this study, we examine the spatial correlation, with the coefficient of correlation between the frames and the amplitude characteristics of each frame, using the volumetric data of RF echo signals from normal and diseased liver. It is found that there is a relationship between the tissue structure of the liver and the spatial correlation of echo information.
Quantum statistical gravity: time dilation due to local information in many-body quantum systems
Sels, Dries; Wouters, Michiel
2017-08-01
We propose a generic mechanism for the emergence of a gravitational potential that acts on all classical objects in a quantum system. Our conjecture is based on the analysis of mutual information in many-body quantum systems. Since measurements in quantum systems affect the surroundings through entanglement, a measurement at one position reduces the entropy in its neighbourhood. This reduction in entropy can be described by a local temperature, that is directly related to the gravitational potential. A crucial ingredient in our argument is that ideal classical mechanical motion occurs at constant probability. This definition is motivated by the analysis of entropic forces in classical systems.
Whitley, Meredith A.
2014-01-01
While the quality and quantity of research on service-learning has increased considerably over the past 20 years, researchers as well as governmental and funding agencies have called for more rigor in service-learning research. One key variable in improving rigor is using relevant existing theories to improve the research. The purpose of this…
Sabuncu, Mert R.; Van Leemput, Koen
2012-01-01
This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...
2010-04-21
... regarding workshop registration or logistics to ICF staff at 919-293-1621 or EPA_Lead_Wksp@icfi.com , or... highlight key policy issues around which EPA would structure the Pb NAAQS review. In workshop discussions... workshop is to ensure that this review focuses on the key policy-relevant issues and considers the most...
Hyman, Harvey
2012-01-01
This dissertation examines the impact of exploration and learning upon eDiscovery information retrieval; it is written in three parts. Part I contains foundational concepts and background on the topics of information retrieval and eDiscovery. This part informs the reader about the research frameworks, methodologies, data collection, and…
Li, Jia; Zhang, Haibo; Chen, Yongshan; Luo, Yongming; Zhang, Hua
2016-07-01
To quantify the extent of antibiotic contamination and to identify the dominant pollutant sources in the Tiaoxi River Watershed, surface water samples were collected at eight locations and analyzed for four tetracyclines and three sulfonamides using ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS). The observed maximum concentrations of tetracycline (623 ng L(-1)), oxytetracycline (19,810 ng L(-1)), and sulfamethoxazole (112 ng L(-1)) exceeded their corresponding Predicted No Effect Concentration (PNEC) values. In particular, high concentrations of antibiotics were observed in wet summer with heavy rainfall. The maximum concentrations of antibiotics appeared in the vicinity of intensive aquaculture areas. High-resolution land use data were used for identifying diffuse sources of antibiotic pollution in the watershed. Significant correlations between tetracycline and developed (r = 0.93), tetracycline and barren (r = 0.87), oxytetracycline and barren (r = 0.82), and sulfadiazine and agricultural facilities (r = 0.71) were observed. In addition, the density of aquaculture significantly correlated with doxycycline (r = 0.74) and oxytetracycline (r = 0.76), while the density of livestock significantly correlated with sulfadiazine (r = 0.71). Principal Component Analysis (PCA) indicated that doxycycline, tetracycline, oxytetracycline, and sulfamethoxazole were from aquaculture and domestic sources, whereas sulfadiazine and sulfamethazine were from livestock wastewater. Flood or drainage from aquaculture ponds was identified as a major source of antibiotics in the Tiaoxi watershed. A hot-spot map was created based on results of land use analysis and multi-variable statistics, which provided an effective management tool for source identification in watersheds with multiple diffuse sources of antibiotic pollution.
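The PCA step used above for source apportionment follows a standard recipe, sketched here on a synthetic site-by-antibiotic concentration matrix (the data and dimensions are assumptions, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Rows: sampling sites; columns: antibiotic concentrations (e.g. TC, OTC, SMX, ...).
X = rng.lognormal(mean=2.0, sigma=1.0, size=(8, 6))

# Standardize each column, then perform PCA via SVD of the scaled matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()        # variance fraction per component
loadings = Vt                          # rows: components; cols: antibiotics
scores = Z @ Vt.T                      # site coordinates in component space
```

Grouping antibiotics by the sign and size of their loadings on the leading components is what lets compounds sharing a source (aquaculture, domestic, livestock) cluster together.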
Drawnel, Faye Marie; Zhang, Jitao David; Küng, Erich; Aoyama, Natsuyo; Benmansour, Fethallah; Araujo Del Rosario, Andrea; Jensen Zoffmann, Sannah; Delobel, Frédéric; Prummer, Michael; Weibel, Franziska; Carlson, Coby; Anson, Blake; Iacone, Roberto; Certa, Ulrich; Singer, Thomas; Ebeling, Martin; Prunotto, Marco
2017-05-18
Today, novel therapeutics are identified in an environment which is intrinsically different from the clinical context in which they are ultimately evaluated. Using molecular phenotyping and an in vitro model of diabetic cardiomyopathy, we show that by quantifying pathway reporter gene expression, molecular phenotyping can cluster compounds based on pathway profiles and dissect associations between pathway activities and disease phenotypes simultaneously. Molecular phenotyping was applicable to compounds with a range of binding specificities and triaged false positives derived from high-content screening assays. The technique identified a class of calcium-signaling modulators that can reverse disease-regulated pathways and phenotypes, which was validated by structurally distinct compounds of relevant classes. Our results advocate for application of molecular phenotyping in early drug discovery, promoting biological relevance as a key selection criterion early in the drug development cascade. Copyright © 2017 Elsevier Ltd. All rights reserved.
Parisa Allami
2012-12-01
As the World Wide Web provides convenient methods for producing and publishing information, it has become a mediator for publishing scientific information. This environment comprises billions of web pages, each with its own title, content, address, and purpose. Search engines provide a variety of facilities to limit search results and raise the likelihood of relevance in retrieval. One such facility is restricting keywords and search terms to the title or URL, which can increase the relevance of results significantly; search engines claim that results restricted to title and URL are the most relevant. This research compared the relevance of results restricted to title versus URL in the agricultural sciences and the humanities, as judged by users; it also compared the presence of keywords in titles and URLs between the two areas and examined the relationship between the number of search terms and the matching of keywords in titles and URLs. For this purpose, 30 master's students working on their theses were chosen in each area. The results they obtained by restricting their information needs to title and URL were significantly relevant. URL-restricted results were significantly more relevant in the agricultural area, but there was no significant difference between title- and URL-restricted results in the humanities. To compare the number of keywords in titles and URLs, 30 keywords were chosen in each area; no significant difference was found between the two areas. To examine the relationship between the number of search keywords and title-URL matching, 45 keywords were chosen in each area and divided into three groups (one, two, and three keywords). It was determined that the fewer the search keywords, the greater the matching between title and URL, and if the matching
Ali Kharrazi
2016-09-01
Despite its ambiguities, the concept of resilience is of critical importance to researchers, practitioners, and policy-makers in dealing with dynamic socio-ecological systems. In this paper, we critically examine the three empirical approaches of (i) panarchy, (ii) ecological information-based network analysis, and (iii) statistical evidence of resilience against three criteria determined for achieving a comprehensive understanding and application of this concept. These criteria are the ability: (1) to reflect a system's adaptability to shocks; (2) to integrate social and environmental dimensions; and (3) to evaluate system-level trade-offs. Our findings show that none of the three currently applied approaches is strong in handling all three criteria. Panarchy is strong in the first two criteria but has difficulty with normative trade-offs. The ecological information-based approach is strongest in evaluating trade-offs but relies on common dimensions that lead to over-simplifications in integrating the social and environmental dimensions. Statistical evidence provides suggestions that are simplest and easiest to act upon but are generally weak in all three criteria. This analysis confirms the value of these approaches in specific instances but also the need for further research in advancing empirical approaches to the concept of resilience.
Tourassi, Georgia D.; Floyd, Carey E., Jr.
2004-05-01
The purpose of the study was to develop and evaluate a content-based image retrieval (CBIR) approach for computer-assisted diagnosis of masses detected in screening mammograms. The system follows an information theoretic retrieval scheme with a BIRADS-based relevance feedback (RF) algorithm. Initially, a knowledge databank of 365 mammographic regions of interest (ROIs) was created. They were all 512×512-pixel ROIs extracted from DDSM mammograms digitized using the Lumisys digitizer. The ROIs were extracted around the known locations of the annotated masses. Specifically, there were 177 ROIs depicting a biopsy-proven malignant mass and 188 ROIs with a benign mass. Subsequently, the CBIR algorithm was implemented using mutual information (MI) as the similarity metric for image retrieval. The CBIR algorithm formed the basis of a knowledge-based CAD system. Given a databank of mammographic masses with known pathology, a query mass was evaluated. Based on their information content, all similar masses in the databank were retrieved. A relevance feedback algorithm based on BIRADS findings was implemented to determine the relevance factor of the retrieved masses. Finally, a decision index was calculated using the query's k best matches. The decision index effectively combined the similarity metric of the retrieved cases and their relevance factor into a prediction regarding the malignancy status of the mass depicted in the query ROI. ROC analysis was used to evaluate diagnostic performance. Performance improved dramatically with the incorporation of the relevance feedback algorithm. Overall, the CAD system achieved ROC area index AZ = 0.86 ± 0.02 for the diagnosis of masses in screening mammograms.
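The mutual-information similarity metric underlying the retrieval scheme can be computed from a joint gray-level histogram; a generic sketch follows (the bin count and test ROIs are assumptions, not the study's configuration):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """MI (bits) between gray-level distributions of two equally sized ROIs."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                       # skip empty cells (0 log 0 := 0)
    return float((pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(7)
roi = rng.integers(0, 256, size=(64, 64))
# A query compared against itself shares maximal information, while an
# independent ROI shares almost none, so MI ranks the former higher.
mi_self = mutual_information(roi, roi)
mi_other = mutual_information(roi, rng.integers(0, 256, size=(64, 64)))
```

Retrieval then amounts to ranking all databank ROIs by their MI with the query ROI and keeping the k best matches.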
Jeddi, Fatemeh Rangraz; Farzandipoor, Mehrdad; Arabfard, Masoud; Hosseini, Azam Haj Mohammad
2016-01-01
Objective: The purpose of this study was to investigate the current situation of, and present a conceptual model for, a clinical governance information system using UML in two sample hospitals. Background: Although the use of information is one of the fundamental components of clinical governance, information management unfortunately receives little attention. Material and Methods: A cross-sectional study was conducted from October 2012 to May 2013. Data were gathered through questionnaires and interviews in two sample hospitals. Face and content validity of the questionnaire were confirmed by experts. Data were collected from a pilot hospital, revisions were carried out, and the final questionnaire was prepared. Data were analyzed with descriptive statistics in SPSS 16. Results: From the scenario derived from the questionnaires, UML diagrams were produced using Rational Rose 7. The results showed that only 32.14 percent of the hospitals' indicators were calculated. No database had been designed, and 100 percent of the hospitals' clinical governance units required the creation of a database. Conclusion: The clinical governance units of the hospitals do not have access to all the indicators needed to perform their mission. Defining processes, drawing models, and creating a database are essential for designing information systems. PMID:27147804
Research on Information Literacy Education in Connectivist MOOCs
卜冰华
2015-01-01
Through an in-depth analysis of the learning characteristics of connectivist MOOCs, this article constructs a content framework for the information literacy that connectivist MOOC learners should possess, and explores a model for delivering information literacy education on MOOC platforms.
K. A. Pfaffhuber
2012-10-01
precipitation. Mercury deposited through wet processes is more strongly retained by snowpacks than mercury deposited through dry processes. Revolatilization of mercury deposited through wet processes may be inhibited through burial by fresh snowfalls and/or by its more central location, compared to that of mercury deposited through dry deposition, within snowpack snow grains. The two depositions of oxidized mercury together explain 84% of the variability in observed concentrations of mercury in surface snow, 52% of the variability of observed concentrations of mercury in seasonal snowpacks and their meltwater's ionic pulse, and only 20% of the variability of observed concentrations of mercury in long-term snowpack-related records; other environmental controls seemingly gain in relevance as time passes. The concentration of mercury in long-term records is apparently primarily affected by latitude; both the primary sources of anthropogenic mercury and the strong upper-level zonal winds are located in the midlatitudes.
Durnford, D. A.; Dastoor, A. P.; Steen, A. O.; Berg, T.; Ryzhkov, A.; Figueras-Nieto, D.; Hole, L. R.; Pfaffhuber, K. A.; Hung, H.
2012-10-01
deposited through wet processes is more strongly retained by snowpacks than mercury deposited through dry processes. Revolatilization of mercury deposited through wet processes may be inhibited through burial by fresh snowfalls and/or by its more central location, compared to that of mercury deposited through dry deposition, within snowpack snow grains. The two depositions of oxidized mercury together explain 84% of the variability in observed concentrations of mercury in surface snow, 52% of the variability of observed concentrations of mercury in seasonal snowpacks and their meltwater's ionic pulse, and only 20% of the variability of observed concentrations of mercury in long-term snowpack-related records; other environmental controls seemingly gain in relevance as time passes. The concentration of mercury in long-term records is apparently primarily affected by latitude; both the primary sources of anthropogenic mercury and the strong upper-level zonal winds are located in the midlatitudes.
2010-07-01
... each LDV/T or MDPV subject to this subpart: (i) Model year; (ii) Applicable fleet average NOX standard... which the LDV/T or MDPV is certified; and (vii) Information on the point of first sale, including the...
Rapp, Marc Steffen
2010-01-01
While some of the modern performance measures used in managerial accounting rely on cash-flow-based figures, others try to take advantage of the information content of accounting figures. However, whether the additional information content in the accrual components of earnings improves internal performance measurement is an open empirical question. To shed light on this question, I examine the correlation of operating cash flows and earnings with firms' total shareholder returns. Usin...
Lioma, Christina; Larsen, Birger; Petersen, Casper
2016-01-01
What if Information Retrieval (IR) systems did not just retrieve relevant information that is stored in their indices, but could also "understand" it and synthesise it into a single document? We present a preliminary study that makes a first step towards answering this question. Given a query, we train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and, we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all.
Walter, Donald A.; Starn, J. Jeffrey
2013-01-01
Statistical models of nitrate occurrence in the glacial aquifer system of the northern United States, developed by the U.S. Geological Survey, use observed relations between nitrate concentrations and sets of explanatory variables—representing well-construction, environmental, and source characteristics— to predict the probability that nitrate, as nitrogen, will exceed a threshold concentration. However, the models do not explicitly account for the processes that control the transport of nitrogen from surface sources to a pumped well and use area-weighted mean spatial variables computed from within a circular buffer around the well as a simplified source-area conceptualization. The use of models that explicitly represent physical-transport processes can inform and, potentially, improve these statistical models. Specifically, groundwater-flow models simulate advective transport—predominant in many surficial aquifers— and can contribute to the refinement of the statistical models by (1) providing for improved, physically based representations of a source area to a well, and (2) allowing for more detailed estimates of environmental variables. A source area to a well, known as a contributing recharge area, represents the area at the water table that contributes recharge to a pumped well; a well pumped at a volumetric rate equal to the amount of recharge through a circular buffer will result in a contributing recharge area that is the same size as the buffer but has a shape that is a function of the hydrologic setting. These volume-equivalent contributing recharge areas will approximate circular buffers in areas of relatively flat hydraulic gradients, such as near groundwater divides, but in areas with steep hydraulic gradients will be elongated in the upgradient direction and agree less with the corresponding circular buffers. The degree to which process-model-estimated contributing recharge areas, which simulate advective transport and therefore account for
2014-01-01
Recently, a lot of changes in the legislation governing the Brazilian accounting practices initiated the alignment of Brazil to the internationalization process of accounting. In this context, this article aims to verify if the process of convergence to international accounting standards impacted the value relevance of accounting information such as Earnings per Share (LPA) and Equity per Share (PLPA), of the non-financial companies most traded on BM&FBOVESPA. This is done by testing the ...
Bell, Stephen A; Delpech, Valerie; Raben, Dorthe; Casabona, Jordi; Tsereteli, Nino; de Wit, John
2016-02-01
In the context of a shift from exceptionalism to normalisation, this study examines recommendations/evidence in current pan-European/global guidelines regarding pre-test HIV testing and counselling practices in health care settings. It also reviews new research not yet included in guidelines. There is consensus that verbal informed consent must be gained prior to testing, individually, in private, confidentially, in the presence of a health care provider. All guidelines recommend pre-test information/discussion delivered verbally or via other methods (information sheet). There is agreement about a minimum standard of information to be provided before a test, but guidelines differ regarding discussion about issues encouraging patients to think about implications of the result. There is heavy reliance on expert consultation in guideline development. Referenced scientific evidence is often more than ten years old and based on US/UK research. Eight new papers are reviewed. Current HIV testing and counselling guidelines have inconsistencies regarding the extent and type of information that is recommended during pre-test discussions. The lack of new research underscores a need for new evidence from a range of European settings to support the process of expert consultation in guideline development.
Sium, Aman; Giuliani, Meredith; Papadakos, Janet
2015-11-18
Since the early 2000s, web and digital health information and education have progressed in both volume and innovation (Dutta-Bergman 2006; Mano, Computers in Human Behavior 39, 404-412, 2014). A growing number of leading Canadian health institutions (e.g., hospitals, community health centers, and health ministries) are migrating much of their vital public health information and education, once restricted to pamphlets and other physically distributed materials, to online platforms. Examples of these platforms are websites and web pages, eLearning modules, eBooks, streamed classrooms, audiobooks, and online health videos. The steady migration of health information to online platforms is raising important questions for fields of patient education, such as cancer education. These questions include, but are not limited to: (a) are pamphlets still a useful modality for patient information and education when so much is available on the Internet? (b) If so, what should be the relationship between print-based and online health information and education, and when should one modality take precedence over the other? This article responds to these questions within the Canadian health care context.
Arlene August
1998-01-01
This paper examines the process and outcome of a major curriculum update for the Office Information Systems (OIS) major in the Office Information Systems Department in the School of Computer Science and Information Systems (CSIS) at Pace University. The curriculum was updated to better prepare our students for success as end-user specialists in today's flattened organizations. The changes made were based on modules recommended by the Office Systems Research Association (OSRA), recommendations that were both reliable and valid. OSRA's national curriculum was flexible enough to allow us to incorporate regional business demands as well as adhere to CSIS's mission statement. The success of this curriculum, now two years old, is measured by the success of our graduates (B.Sc. degree) in obtaining meaningful employment.
Lyons, L
2016-01-01
Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.
The Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute works to provide information on cancer statistics in an effort to reduce the burden of cancer among the U.S. population.
U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...
Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-08
In large datasets, it is time-consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics that quantify the information in each image and to use those to identify and pick out images of interest.
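The abstract does not name the statistics used; as a minimal sketch (assuming grayscale images and Shannon entropy of the intensity histogram, one common "information per image" measure), images could be scored like this:

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram.

    A simple per-image statistic: a nearly uniform image has low
    entropy, while an image with rich intensity variation has high
    entropy and is more likely to be 'interesting'.
    """
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 log 0 := 0)
    return float(-(p * np.log2(p)).sum())

# A constant image carries no information; random noise carries a lot.
flat  = np.zeros((64, 64))
noise = np.random.default_rng(0).integers(0, 256, (64, 64))
print(image_entropy(flat), image_entropy(noise))
```

Ranking images by such a score (or by several scores combined) lets an analyst triage a large collection without inspecting every image.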
Kern, R.; Hateren, J.H. van; Egelhaaf, M.
2006-01-01
Flying blowflies shift their gaze by saccadic turns of body and head, keeping their gaze basically fixed between saccades. For the head, this results in almost pure translational optic flow between saccades, enabling visual interneurons in the fly motion pathway to extract information about translat
Kinnell, Margaret; Garrod, Penny
This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…
Shirasaki, Masato; Li, Baojiu; Higuchi, Yuichi
2016-01-01
We investigate the information content of various cosmic shear statistics on the theory of gravity. Focusing on the Hu-Sawicki type $f(R)$ model, we perform a set of ray-tracing simulations and measure the convergence bispectrum, peak counts and Minkowski functionals, paying special attention to their complementarity to the standard power spectrum analysis. We first show that while the convergence power spectrum does have sensitivity to the current value of the extra scalar degree of freedom $|f_{\rm R0}|$, it is largely compensated by a change in the present density amplitude parameter $\sigma_{8}$ and the matter density parameter $\Omega_{\rm m0}$. With accurate covariance matrices obtained from 1000 lensing simulations, we then examine the constraining power of the three additional statistics. We find that these probes are indeed helpful to break the parameter degeneracy, which cannot be resolved from the power spectrum alone. We show that especially the peak counts and Minkowski functionals have the potent...
Yu, Yang; Bing-Zhong, Wang; Shuai, Ding
2016-05-01
Utilizing channel reciprocity, the time reversal (TR) technique increases the signal-to-noise ratio (SNR) at the receiver with very low transmitter complexity in complex multipath environments. Existing research on TR multiple-input multiple-output (MIMO) communication focuses on system implementation and network building. The aim of this work is to analyze the influence of antenna coupling on the capacity of a wideband TR MIMO system, which is a realistic question in designing a practical communication system. It turns out that antenna coupling stabilizes the capacity within a small variation range under a statistical wideband channel response. Meanwhile, antenna coupling causes only a slight detriment to the channel capacity in a wideband TR MIMO system. Comparatively, uncorrelated stochastic channels without coupling exhibit a wider range of random capacity distribution, which greatly depends on the statistical channel. The conclusions, drawn from information difference entropy theory, provide a guideline for designing better high-performance wideband TR MIMO communication systems. Project supported by the National Natural Science Foundation of China (Grant Nos. 61331007, 61361166008, and 61401065) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20120185130001).
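The capacity comparison can be illustrated with the standard MIMO capacity formula C = log2 det(I + (SNR/nt) HH^H). The coupling model below is a stand-in assumption (Kronecker-style spatial correlation applied to i.i.d. Rayleigh fading), not the authors' electromagnetic coupling model:

```python
import numpy as np

rng = np.random.default_rng(1)

def mimo_capacity(H, snr):
    """Capacity (bits/s/Hz) of a MIMO channel realization H at a given
    linear SNR, with equal power allocation over the nt transmit antennas."""
    nr, nt = H.shape
    HH = H @ H.conj().T
    return float(np.real(np.log2(np.linalg.det(np.eye(nr) + (snr / nt) * HH))))

# Uncorrelated Rayleigh channels vs. channels with antenna correlation,
# modeled (illustratively) as H_c = R^(1/2) H for an exponential
# correlation matrix R.
nt = nr = 4
R = 0.5 ** np.abs(np.subtract.outer(np.arange(nr), np.arange(nr)))
Rh = np.linalg.cholesky(R)

caps_iid, caps_corr = [], []
for _ in range(500):
    H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    caps_iid.append(mimo_capacity(H, snr=10.0))
    caps_corr.append(mimo_capacity(Rh @ H, snr=10.0))

print(np.mean(caps_iid), np.mean(caps_corr))
```

Under this toy model, spatial correlation lowers the average capacity, consistent with the abstract's point that coupling changes the capacity distribution rather than leaving it untouched.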
Irina E. Zhukovskya
2013-01-01
This paper focuses on improving the statistical branch-based application of electronic document management and network information technology. As a software solution, the paper proposes the use of the new software package of the State Committee on Statistics of the Republic of Uzbekistan, «eStat 2.0», which not only optimizes the work of statistical-sector employees but also serves as a link between all the economic entities of the national economy.
Xuemin Zhuang; Yonggen Luo
2015-01-01
Purpose: The purpose of this article is to study whether there exists a natural relationship between fair value and the corporate external market. A series of special phenomena in the application of fair value aroused our research interest; we present evidence on how competition affects the value relevance of fair value information. Design/methodology/approach: This thesis chooses fair value changes in gains and losses and calculates the ratio of DFVPSit as the alternative variable for fair value....
Cueva, Katie; Revels, Laura; Cueva, Melany; Lanier, Anne P; Dignan, Mark; Viswanath, K; Fung, Teresa T; Geller, Alan C
2017-04-12
To address a desire for timely, medically accurate cancer education in rural Alaska, ten culturally relevant online learning modules were developed with, and for, Alaska's Community Health Aides/Practitioners (CHA/Ps). The project was guided by the framework of Community-Based Participatory Action Research, honored Indigenous Ways of Knowing, and was informed by Empowerment Theory. A total of 428 end-of-module evaluation surveys were completed by 89 unique Alaska CHA/Ps between January and December 2016. CHA/Ps shared that as a result of completing the modules, they were empowered to share cancer information with their patients, families, friends, and communities, as well as engage in cancer risk reduction behaviors such as eating healthier, getting cancer screenings, exercising more, and quitting tobacco. CHA/Ps also reported the modules were informative and respectful of their diverse cultures. These results from end-of-module evaluation surveys suggest that the collaboratively developed, culturally relevant, online cancer education modules have empowered CHA/Ps to reduce cancer risk and disseminate cancer information. As one participant put it, the modules "brought me to tears couple of times, and I think it will help in destroying the silence that surrounds cancer".
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Dombrowsky, W.R. [Kiel Univ. (Germany). Katastrophenforschungsstelle
1997-12-31
Based on the results of empirical research that implemented and evaluated information to the public required by law (Section 11a of the German Federal Immission Control Act, BImSchG), and based on the general findings of crisis and risk communication research, some disturbing elements in the relationship between entrepreneurs, administration and the public are described in terms of cognitive dissonance, prejudice, fears and false expectations. The empirical example of public information in emergencies will evidence the conflicting views on the types, styles, size and depth of such information, as well as the differences in perception, motivation and interest of all parties involved. Finally, the cultural context of risk perception and of coping capabilities will be interrelated with historical changes in risk management to prepare for the understanding that risk and crisis communication has to be more than talking about safety. (orig.) [German original, translated:] Using the example of an implementation and evaluation study on the preparation of incident information pursuant to Section 11a BImSchG for two companies, and drawing on the state of international research on crisis and risk communication, it is shown which cognitive dissonances exist between plant operators, authorities and the public concerning the type, scope and design of hazard information; which prejudices and fears hinder objective communication; which societal factors have so far been largely overlooked; what is considered 'incident- and accident-relevant', and by whom; and which societal and social 'settings', i.e. which human conditions, influence the perception and processing of which information. This empirically confirms the hypothesis that the perception of risks and threats changes over historically short periods (even within one generation) and that there can be no 'one-for-all' strategy of risk and crisis communication, but general
Cartabellotta, A
1998-05-01
Evidence-based Medicine is a product of the electronic information age, and there are several databases useful for practicing it: MEDLINE, EMBASE, specialized compendiums of evidence (Cochrane Library, Best Evidence), and practice guidelines, most of them freely available through the Internet, which offers a growing number of health resources. Because searching for best evidence is a basic step in practicing Evidence-based Medicine, this second review (the first one was published in the issue of March 1998) aims to provide physicians with tools and skills for retrieving relevant biomedical information. Therefore, we discuss strategies for managing information overload, analyze the characteristics, usefulness and limits of medical databases, and explain how to use MEDLINE in day-to-day clinical practice.
Holtrop, Kendal; Chaviano, Casey L; Scott, Jenna C; McNeil Smith, Shardé
2015-11-01
Homeless families in transitional housing face a number of distinct challenges, yet there is little research seeking to guide prevention and intervention work with homeless parents. Informed by the tenets of community-based participatory research, the purpose of this study was to identify relevant components to include in a parenting intervention for this population. Data were gathered from 40 homeless parents through semistructured individual interviews and were analyzed using qualitative content analysis. The resulting 15 categories suggest several topics, approach considerations, and activities that can inform parenting intervention work with homeless families in transitional housing. Study findings are discussed within the context of intervention fidelity versus adaptation, and implications for practice, research, and policy are suggested. This study provides important insights for informing parenting intervention adaptation and implementation efforts with homeless families in transitional housing.
Gheorghe Hrinca
2016-11-01
Biodiversity and the study of spongiform encephalopathies in farm animals are highly topical concerns of the contemporary scientific world. Both themes are very interesting for the life sciences and very important for the applied field of animal breeding. The implementation of these two concepts creates an antithetical paradigm: the achievement of genetic prophylaxis is joined with a decrease of genetic diversity. The paper examines genetic diversity and its evolution in sheep livestock in the European space, in a context in which the European Community has developed very laborious and costly programs targeted both at conserving and enhancing biodiversity and at eradicating scrapie in small ruminants. The paper quantifies the genetic biodiversity of all sheep populations in Europe with a precise method based on a modern concept derived from informational statistics: informational energy. In addition, the paper proposes concrete and viable solutions for achieving these two desiderata at optimal levels, in connection with the perspicacity of the sheep breeder, which consists in the accuracy of the reproduction process and the correct application of the selection criteria.
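The informational energy mentioned here is Onicescu's E = Σ p_i² of a frequency distribution. A minimal sketch with hypothetical allele counts (both the function and the example data are illustrative, not the paper's):

```python
def informational_energy(freqs):
    """Onicescu informational energy of a frequency distribution:
    E = sum(p_i^2). E = 1 for a fixed (monomorphic) locus and
    E = 1/n when all n variants are equally frequent, so lower
    energy means higher diversity."""
    total = sum(freqs)
    return sum((f / total) ** 2 for f in freqs)

# Hypothetical allele counts at one locus in two sheep populations:
diverse      = [30, 25, 25, 20]   # four alleles, fairly even
nearly_fixed = [97, 1, 1, 1]      # almost monomorphic

print(informational_energy(diverse))       # 0.255
print(informational_energy(nearly_fixed))  # 0.9412
```

Tracking E across loci and over time gives exactly the kind of diversity trend the paper quantifies: selection toward scrapie-resistant genotypes drives E upward (diversity downward).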
On the future of astrostatistics: statistical foundations and statistical practice
Loredo, Thomas J
2012-01-01
This paper summarizes a presentation for a panel discussion on "The Future of Astrostatistics" held at the Statistical Challenges in Modern Astronomy V conference at Pennsylvania State University in June 2011. I argue that the emerging needs of astrostatistics may both motivate and benefit from fundamental developments in statistics. I highlight some recent work within statistics on fundamental topics relevant to astrostatistical practice, including the Bayesian/frequentist debate (and ideas for a synthesis), multilevel models, and multiple testing. As an important direction for future work in statistics, I emphasize that astronomers need a statistical framework that explicitly supports unfolding chains of discovery, with acquisition, cataloging, and modeling of data not seen as isolated tasks, but rather as parts of an ongoing, integrated sequence of analyses, with information and uncertainty propagating forward and backward through the chain. A prototypical example is surveying of astronomical populations, ...
Statistical Yearbook of Norway 2012
NONE
2012-07-01
The Statistical Yearbook of Norway 2012 contains statistics on Norway and main figures for the Nordic countries and other countries selected from international statistics. The international overviews are integrated with the other tables and figures. The selection of tables in this edition is mostly the same as in the 2011 edition. The yearbook's 480 tables and figures present the main trends in official statistics in most areas of society. The list of tables and figures and an index at the back of the book provide easy access to relevant information. In addition, source information and Internet addresses below the tables make the yearbook a good starting point for those who are looking for more detailed statistics. The statistics are based on data gathered in statistical surveys and from administrative data, which, in cooperation with other public institutions, have been made available for statistical purposes. Some tables have been prepared in their entirety by other public institutions. The statistics follow approved principles, standards and classifications that are in line with international recommendations and guidelines. Content: 00. General subjects; 01. Environment; 02. Population; 03. Health and social conditions; 04. Education; 05. Personal economy and housing conditions; 06. Labour market; 07. Recreational, cultural and sporting activities; 08. Prices and indices; 09. National Economy and external trade; 10. Industrial activities; 11. Financial markets; 12. Public finances; Geographical survey. (eb)
Perceptions of document relevance
Peter eBruza
2014-07-01
This article presents a study of how humans perceive the relevance of documents. Humans are adept at making reasonably robust and quick decisions about what information is relevant to them, despite the ever increasing complexity and volume of their surrounding information environment. The literature on document relevance has identified various dimensions of relevance (e.g., topicality, novelty, etc.); however, little is understood about how these dimensions may interact. We performed a crowdsourced study of how human subjects judge two relevance dimensions in relation to document snippets retrieved from an internet search engine. The order of the judgement was controlled. For those judgements exhibiting an order effect, a q-test was performed to determine whether the order effects can be explained by a quantum decision model based on incompatible decision perspectives. Some evidence of incompatibility was found, which suggests that incompatible decision perspectives are appropriate for explaining interacting dimensions of relevance.
政府与统计信息产品的供给%On Government and the Supply of Economic Statistical Information Products
朱琴华
2004-01-01
Facing the age of economic globalization and the knowledge economy, we find that economic statistical information products supplied only by the government cannot meet the multiple demands of different entities. Based on the theory of public products, economic statistical information products can be judged to be quasi-public products, and they can be supplied both by government and by the market in a harmonious way. Therefore, in order to meet the increasing new demands, the government should improve its supply efficiency of statistical information and greatly promote the development of market supply as well.
Farnsworth, G.L.; Nichols, J.D.; Sauer, J.R.; Fancy, S.G.; Pollock, K.H.; Shriner, S.A.; Simons, T.R.; Ralph, C. John; Rich, Terrell D.
2005-01-01
Point counts are a standard sampling procedure for many bird species, but lingering concerns still exist about the quality of information produced by the method. It is well known that variation in observer ability and environmental conditions can influence the detection probability of birds in point counts, but many biologists have been reluctant to abandon point counts in favor of more intensive approaches to counting. However, over the past few years a variety of statistical and methodological developments have begun to provide practical ways of overcoming some of the problems with point counts. We describe some of these approaches, and show how they can be integrated into standard point count protocols to greatly enhance the quality of the information. Several tools now exist for estimation of detection probability of birds during counts, including distance sampling, double-observer methods, time-depletion (removal) methods, and hybrid methods that combine these approaches. Many counts are conducted in habitats that make auditory detection of birds much more likely than visual detection. As a framework for understanding detection probability during such counts, we propose separating the probability a bird is detected during a count into two components: (1) the probability a bird vocalizes during the count and (2) the probability this vocalization is detected by an observer. In addition, we propose that some measure of the area sampled during a count is necessary for valid inferences about bird populations. This can be done by employing fixed-radius counts or more sophisticated distance-sampling models. We recommend that any studies employing point counts be designed to estimate detection probability and to include a measure of the area sampled.
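The proposed decomposition, and the correction of a fixed-radius count for imperfect detection, can be sketched as follows (the probabilities and counts are illustrative assumptions, not values from the study):

```python
import math

def detection_probability(p_vocalize, p_detect_given_vocal):
    """Overall per-bird detection probability during a point count,
    decomposed as in the text: Pr(detected) =
    Pr(bird vocalizes during the count) x Pr(vocalization is detected)."""
    return p_vocalize * p_detect_given_vocal

def density_estimate(count, p_detect, radius_m):
    """Birds per hectare from a fixed-radius point count, correcting
    the raw count for imperfect detection and dividing by the area
    actually sampled."""
    area_ha = math.pi * radius_m ** 2 / 10_000.0
    return count / (p_detect * area_ha)

# Hypothetical survey: 80% of birds vocalize during the count window,
# 75% of vocalizations are heard, and 6 birds are counted within 100 m.
p = detection_probability(0.8, 0.75)
print(p, density_estimate(6, p, radius_m=100))
```

The point of the correction is visible in the numbers: ignoring detection probability (treating p as 1) would understate density by the factor 1/p.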
Passage relevance models for genomics search
Frieder Ophir
2009-03-01
Full Text Available Abstract We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and document are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.
STATISTICAL ANALYSIS (REPORTS), PROBABILITY (REPORTS), INFORMATION THEORY, DIFFERENTIAL EQUATIONS, STATISTICAL PROCESSES, STOCHASTIC PROCESSES, MULTIVARIATE ANALYSIS, DISTRIBUTION THEORY, DECISION THEORY, MEASURE THEORY, OPTIMIZATION
Luo, Li; Zhu, Yun; Xiong, Momiao
2012-06-01
Genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing the association of genomic variants, including common, low-frequency, and rare variants. The current strategies for association studies are well developed for identifying the association of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the current variant-by-variant analysis paradigm for GWAS of common variants to the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing the association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole-genome low-coverage pilot data from the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T(2), the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ(2) test, the weighted-sum statistic, and the variable-threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets.
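As a point of reference for the group tests discussed above, here is a minimal sketch of the collapsing (burden-style) idea on simulated data; it illustrates the baseline method being compared, not the authors' genome-information content-based statistic:

```python
import numpy as np

rng = np.random.default_rng(7)

def collapsing_test(genos_cases, genos_controls):
    """Simple collapsing test: each subject is reduced to an indicator
    of carrying at least one rare variant in the region, and carrier
    rates in cases vs. controls are compared with a 2x2 chi-square
    statistic (1 df; compare against the 5% critical value 3.84)."""
    c1 = (genos_cases.sum(axis=1) > 0).sum()
    c0 = (genos_controls.sum(axis=1) > 0).sum()
    n1, n0 = len(genos_cases), len(genos_controls)
    table = np.array([[c1, n1 - c1], [c0, n0 - c0]], float)
    row, col, n = table.sum(1), table.sum(0), table.sum()
    expected = np.outer(row, col) / n
    return float(((table - expected) ** 2 / expected).sum())

# Simulated carrier indicators at 20 rare sites; cases are enriched.
controls = rng.binomial(1, 0.01, size=(1000, 20))
cases    = rng.binomial(1, 0.03, size=(1000, 20))
print(collapsing_test(cases, controls))
```

Because every carried variant counts equally, this statistic is blind to position-specific effect sizes, which is exactly the limitation of group tests the abstract points out.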
Seid, Ahmed; Gadisa, Endalamaw; Tsegaw, Teshome; Abera, Adugna; Teshome, Aklilu; Mulugeta, Abate; Herrero, Merce; Argaw, Daniel; Jorge, Alvar; Kebede, Asnakew; Aseffa, Abraham
2014-05-01
Cutaneous leishmaniasis (CL) is a neglected tropical disease strongly associated with poverty. Treatment is problematic and no vaccine is available. Ethiopia has seen new outbreaks in areas previously not known to be endemic, often with co-infection by the human immunodeficiency virus (HIV) with rates reaching 5.6% of the cases. The present study concerns the development of a risk model based on environmental factors using geographical information systems (GIS), statistical analysis and modelling. Odds ratio (OR) of bivariate and multivariate logistic regression was used to evaluate the relative importance of environmental factors, accepting P ≤ 0.056 as the inclusion level for the model's environmental variables. When estimating risk from the viewpoint of geographical surface, slope, elevation and annual rainfall were found to be good predictors of CL presence based on both probabilistic and weighted overlay approaches. However, when considering Ethiopia as whole, a minor difference was observed between the two methods with the probabilistic technique giving a 22.5% estimate, while that of weighted overlay approach was 19.5%. Calculating the population according to the land surface estimated by the latter method, the total Ethiopian population at risk for CL was estimated at 28,955,035, mainly including people in the highlands of the regional states of Amhara, Oromia, Tigray and the Southern Nations, Nationalities and Peoples' Region, one of the nine ethnic divisions in Ethiopia. Our environmental risk model provided an overall prediction accuracy of 90.4%. The approach proposed here can be replicated for other diseases to facilitate implementation of evidence-based, integrated disease control activities.
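The core of such a risk model, a logistic regression whose exponentiated coefficients are the odds ratios (OR) used to weigh environmental factors, can be sketched in a self-contained way; the data below are simulated stand-ins for the environmental predictors, not the study's GIS layers:

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson.
    Returns coefficients with an intercept column prepended;
    exp(beta_j) is the odds ratio per unit increase of predictor j."""
    X = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        W = p * (1 - p)                            # IRLS weights
        H = X.T @ (W[:, None] * X)                 # observed information
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton step
    return beta

# Hypothetical data: disease presence vs. one standardized environmental
# predictor (say, elevation), simulated with a true log-odds slope of 1.0.
rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
y = (rng.random(5000) < 1.0 / (1.0 + np.exp(-(-1.0 + 1.0 * x)))).astype(float)

beta = logistic_fit(x[:, None], y)
print("odds ratio per unit:", np.exp(beta[1]))
```

In the study's setting the predictors would be raster-derived values (slope, elevation, annual rainfall) sampled at surveyed locations, and the fitted probabilities would then be mapped back over the terrain to produce the risk surface.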
Ranald Macdonald and statistical inference.
Smith, Philip T
2009-05-01
Ranald Roderick Macdonald (1945-2007) was an important contributor to mathematical psychology in the UK, as a referee and action editor for British Journal of Mathematical and Statistical Psychology and as a participant and organizer at the British Psychological Society's Mathematics, statistics and computing section meetings. This appreciation argues that his most important contribution was to the foundations of significance testing, where his concern about what information was relevant in interpreting the results of significance tests led him to be a persuasive advocate for the 'Weak Fisherian' form of hypothesis testing.
Official Statistics In a Modern Society
Ilie DUMITRESCU
2010-10-01
Full Text Available The modern democratic society cannot function efficiently and rigorously in the absence of a solid basis of relevant and reliable statistical data allowing easy and user-friendly access. Representing a "public good" in the contemporary society, official statistical information is meant to serve the whole society under conditions of maximum transparency, impartiality and equal treatment of all categories of data users. Official statistics should adapt itself to the changes taking place in the modern society and should comply with its increased demands for high-quality information. In turn, this imposes on both national and global statistical systems major tasks of structural change in the activity of producing and disseminating official statistics, as well as in the communication with partners upstream in the informational flow, but particularly downstream – the target recipients of statistical data. The article presents a vision of the role, functions and tasks of official statistics in the modern society, set against the major challenges of transforming statistical information into knowledge, promoting statistical literacy and culture, and ensuring the usefulness and large-scale use of statistical data.
Lopera Broto, A. J.; Balbas Gomez, S.
2012-07-01
This weekly publication is intended to bring to all the people who work at the sites of Asco and Vandellos relevant information on safety, since we are all responsible for the safe and reliable operation of our plants.
Ladd, David E.; Law, George S.
2007-01-01
The U.S. Geological Survey (USGS) provides streamflow and other stream-related information needed to protect people and property from floods, to plan and manage water resources, and to protect water quality in the streams. Streamflow statistics provided by the USGS, such as the 100-year flood and the 7-day 10-year low flow, frequently are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. In addition to streamflow statistics, resource managers often need to know the physical and climatic characteristics (basin characteristics) of the drainage basins for locations of interest to help them understand the mechanisms that control water availability and water quality at these locations. StreamStats is a Web-enabled geographic information system (GIS) application that makes it easy for users to obtain streamflow statistics, basin characteristics, and other information for USGS data-collection stations and for ungaged sites of interest. If a user selects the location of a data-collection station, StreamStats will provide previously published information for the station from a database. If a user selects a location where no data are available (an ungaged site), StreamStats will run a GIS program to delineate a drainage basin boundary, measure basin characteristics, and estimate streamflow statistics based on USGS streamflow prediction methods. A user can download a GIS feature class of the drainage basin boundary with attributes including the measured basin characteristics and streamflow estimates.
Chen, Chao; Yan, Xuefeng
2015-06-01
In this paper, an optimized multilayer feed-forward network (MLFN) is developed to construct a soft sensor for controlling naphtha dry point. To overcome the two main flaws in the structure and weight of MLFNs, which are trained by a back-propagation learning algorithm, minimal redundancy maximal relevance-partial mutual information clustering (mPMIc) integrated with least square regression (LSR) is proposed to optimize the MLFN. The mPMIc can determine the location of hidden layer nodes using information in the hidden and output layers, as well as remove redundant hidden layer nodes. These selected nodes are highly related to output data, but are minimally correlated with other hidden layer nodes. The weights between the selected hidden layer nodes and output layer are then updated through LSR. When the redundant nodes from the hidden layer are removed, the ideal MLFN structure can be obtained according to the test error results. In actual applications, the naphtha dry point must be controlled accurately because it strongly affects the production yield and the stability of subsequent operational processes. The mPMIc-LSR MLFN with a simple network size performs better than other improved MLFN variants and existing efficient models.
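The least-squares regression (LSR) step described above refits the hidden-to-output weights in closed form once redundant hidden nodes have been pruned. A minimal pure-Python sketch of that closed-form fit follows; the tiny activation matrix and targets are made-up illustrative numbers, not data from the paper.

```python
# Sketch of the LSR refit of hidden-to-output weights: w = (H^T H)^(-1) H^T y.
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def lsr_output_weights(H, y):
    """Least-squares weights from hidden activations H (rows = samples)."""
    k = len(H[0])
    HtH = [[sum(H[i][a] * H[i][b] for i in range(len(H))) for b in range(k)]
           for a in range(k)]
    Hty = [sum(H[i][a] * y[i] for i in range(len(H))) for a in range(k)]
    return solve(HtH, Hty)

# Activations of 3 retained hidden nodes over 4 samples; targets are exactly
# 1*h1 + 2*h2 + 3*h3, so the recovered weights should be ~[1, 2, 3].
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
y = [1.0, 2.0, 3.0, 6.0]
w = lsr_output_weights(H, y)
```

In the paper's pipeline this step replaces further back-propagation once the mPMIc node selection is fixed, which is why a direct linear solve suffices.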
2010-01-01
... continued financial fitness, or comply with special information requests by Congress, Department officials... for use of domestic and international service segment and market data in accordance with...
Natrella, Mary Gibbons
2005-01-01
Formulated to assist scientists and engineers engaged in army ordnance research and development programs, this well-known and highly regarded handbook is a ready reference for advanced undergraduate and graduate students as well as for professionals seeking engineering information and quantitative data for designing, developing, constructing, and testing equipment. Topics include characterizing and comparing the measured performance of a material, product, or process; general considerations in planning experiments; statistical techniques for analyzing extreme-value data; use of transformations
Wilmar Hernandez
2005-11-01
Full Text Available In the present paper, in order to estimate the response of both a wheel speed sensor and an accelerometer placed in a car under performance tests, robust and optimal multivariable estimation techniques are used. In this case, the disturbances and noises corrupting the relevant information coming from the sensors' outputs are so dangerous that their negative influence on the electrical systems impoverishes the general performance of the car. In short, the solution to this problem is a safety-related problem that deserves our full attention. Therefore, in order to diminish the negative effects of the disturbances and noises on the car's electrical and electromechanical systems, an optimum observer is used. The experimental results show a satisfactory improvement in the signal-to-noise ratio of the relevant signals and demonstrate the importance of the fusion of several intelligent sensor design techniques when designing the intelligent sensors that today's cars need.
Teaching statistics to nursing students: an expert panel consensus.
Hayat, Matthew J; Eckardt, Patricia; Higgins, Melinda; Kim, MyoungJin; Schmiege, Sarah J
2013-06-01
Statistics education is a necessary element of nursing education, and its inclusion is recommended in the American Association of Colleges of Nursing guidelines for nurse training at all levels. This article presents a cohesive summary of an expert panel discussion, "Teaching Statistics to Nursing Students," held at the 2012 Joint Statistical Meetings. All panelists were statistics experts, had extensive teaching and consulting experience, and held faculty appointments in a U.S.-based nursing college or school. The panel discussed degree-specific curriculum requirements, course content, how to ensure nursing students understand the relevance of statistics, approaches to integrating statistics consulting knowledge, experience with classroom instruction, use of knowledge from the statistics education research field to make improvements in statistics education for nursing students, and classroom pedagogy and instruction on the use of statistical software. Panelists also discussed the need for evidence to make data-informed decisions about statistics education and training for nurses.
Pasquaretta, Cristian; Klenschi, Elizabeth; Pansanel, Jérôme; Battesti, Marine; Mery, Frederic; Sueur, Cédric
2016-01-01
Social learning – the transmission of behaviors through observation or interaction with conspecifics – can be viewed as a decision-making process driven by interactions among individuals. Animal group structures change over time and interactions among individuals occur in particular orders that may be repeated following specific patterns, change in their nature, or disappear completely. Here we used a stochastic actor-oriented model built using the RSiena package in R to estimate individual behaviors and their changes through time, by analyzing the dynamics of the interaction network of the fruit fly Drosophila melanogaster during social learning experiments. In particular, we re-analyzed an experimental dataset in which uninformed flies, left free to interact with informed ones, acquired and later used information about oviposition site choice obtained through social interactions. We estimated the degree to which the uninformed flies had successfully acquired the information carried by informed individuals using the proportion of eggs laid by uninformed flies on the medium their conspecifics had been trained to favor. Regardless of the degree of information acquisition measured in uninformed individuals, they always received and started interactions more frequently than informed ones did. However, information was efficiently transmitted (i.e., uninformed flies predominantly laid eggs on the same medium informed ones had learned to prefer) only when the difference in contacts sent between the two fly types was small. Interestingly, we found that the degree of reciprocation, the tendency of individuals to form mutual connections with each other, strongly affected oviposition site choice in uninformed flies. This work highlights the great potential of RSiena and its utility in studies of interaction networks among non-human animals. PMID:27148146
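The reciprocity statistic the abstract highlights can be computed directly from a directed edge list. This is an illustrative sketch, not the authors' RSiena analysis (which models network and behavior co-evolution); the fly-to-fly interactions below are invented example data.

```python
# Reciprocity of a directed interaction network: the fraction of directed
# edges i -> j for which the reverse edge j -> i is also present.
def reciprocity(edges):
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    mutual = sum(1 for (i, j) in edge_set if (j, i) in edge_set)
    return mutual / len(edge_set)

# Hypothetical interactions: A and B interact mutually, A contacts C once.
interactions = [("A", "B"), ("B", "A"), ("A", "C")]
r = reciprocity(interactions)  # 2 of the 3 directed edges are reciprocated
```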
刘恒江; 陈继祥
2004-01-01
This article analyzes the emergence, current status and application value of industrial cluster statistics abroad, then surveys the problems facing industrial cluster statistics in China and the need for their development. On this basis, it offers proposals for developing China's industrial cluster statistics.
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step initial crude images are analyzed for multiple cytological features, statistical analysis is performed and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy"-an automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33 992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system as well as 40 new hits, 14.9% of the total, originally false negatives. Ninety-six percent of true negatives were properly recognized too. A web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features which allows identification of additional phenotypic profiles; thus, further analysis of original crude images is not required.
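A naïve Bayes classifier of the kind the pipeline above uses to integrate chemical and phenotypic features can be sketched compactly. The feature names and the tiny training set below are invented for illustration; they are not from the screening campaign, and the real system operates on many cytological descriptors.

```python
from collections import defaultdict
from math import log

# Minimal Bernoulli naive Bayes with Laplace smoothing over binary features.
def train_nb(samples):
    """samples: list of (feature_set, label)."""
    labels = defaultdict(int)
    counts = defaultdict(lambda: defaultdict(int))
    features = set()
    for feats, label in samples:
        labels[label] += 1
        for f in feats:
            counts[label][f] += 1
            features.add(f)
    return labels, counts, features, len(samples)

def predict_nb(model, feats):
    labels, counts, features, n = model
    best, best_lp = None, float("-inf")
    for label, cnt in labels.items():
        lp = log(cnt / n)  # log prior
        for f in features:  # Laplace-smoothed Bernoulli likelihoods
            p = (counts[label][f] + 1) / (cnt + 2)
            lp += log(p) if f in feats else log(1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical phenotypic descriptors (invented names, not the paper's).
train = [({"ring", "nuclear_shrink"}, "hit"),
         ({"ring", "akt_low"}, "hit"),
         ({"flat"}, "inactive"),
         ({"flat", "bright"}, "inactive")]
model = train_nb(train)
call = predict_nb(model, {"ring"})  # phenotype resembling the hits
```

In the paper the classifier's role is to adjudicate images initially scored as "fuzzy," feeding its calls back into the database in an iterative loop.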
Estimation and inferential statistics
Sahu, Pradip Kumar; Das, Ajit Kumar
2015-01-01
This book focuses on the meaning of statistical inference and estimation. Statistical inference is concerned with the problems of estimation of population parameters and testing hypotheses. Primarily aimed at undergraduate and postgraduate students of statistics, the book is also useful to professionals and researchers in statistical, medical, social and other disciplines. It discusses current methodological techniques used in statistics and related interdisciplinary areas. Every concept is supported with relevant research examples to help readers to find the most suitable application. Statistical tools have been presented by using real-life examples, removing the “fear factor” usually associated with this complex subject. The book will help readers to discover diverse perspectives of statistical theory followed by relevant worked-out examples. Keeping in mind the needs of readers, as well as constantly changing scenarios, the material is presented in an easy-to-understand form.
2011-01-01
There are unstructured abstracts (no more than 256 words) and structured abstracts (no more than 480). The specific requirements for structured abstracts are as follows: An informative, structured abstract of no more than 480 words should accompany each manuscript. Abstracts for original contributions should be structured into the following sections. AIM (no more than 20 words): Only the purpose should be included. Please write the aim in the form "To investigate/study/..."; MATERIALS AND METHODS (no more than 140 words); RESULTS (no more than 294 words): You should present P values where appropriate and must provide relevant data to illustrate how they were obtained, e.g. 6.92 ± 3.86 vs 3.61 ± 1.67, P < 0.001; CONCLUSION (no more than 26 words).
Examining Different Regions of Relevance: From Highly Relevant to Not Relevant.
Spink, Amanda; Greisdorf, Howard; Bateman, Judy
1998-01-01
Proposes a useful concept of relevance as a relationship and an effect on the movement of a user through the iterative stages of their information seeking process, and that users' relevance judgments can be plotted on a Three-Dimensional Spatial Model of Relevance Level, Degree and Time. Discusses implications for the development of information…
Tue, Nguyen Minh; Takahashi, Shin; Suzuki, Go; Isobe, Tomohiko; Viet, Pham Hung; Kobara, Yuso; Seike, Nobuyasu; Zhang, Gan; Sudaryanto, Agus; Tanabe, Shinsuke
2013-01-01
This study investigated the occurrence of polychlorinated biphenyls (PCBs), and several additive brominated flame retardants (BFRs) in indoor dust and air from two Vietnamese informal e-waste recycling sites (EWRSs) and an urban site in order to assess the relevance of these media for human exposure. The levels of polybrominated diphenyl ethers (PBDEs), hexabromocyclododecane (HBCD), 1,2-bis-(2,4,6-tribromophenoxy)ethane (BTBPE) and decabromodiphenyl ethane (DBDPE) in settled house dust from the EWRSs (130-12,000, 5.4-400, 5.2-620 and 31-1400 ng g(-1), respectively) were significantly higher than in urban house dust but the levels of PCBs (4.8-320 ng g(-1)) were not higher. The levels of PCBs and PBDEs in air at e-waste recycling houses (1000-1800 and 620-720 pg m(-3), respectively), determined using passive sampling, were also higher compared with non-e-waste houses. The composition of BFRs in EWRS samples suggests the influence from high-temperature processes and occurrence of waste materials containing older BFR formulations. Results of daily intake estimation for e-waste recycling workers are in good agreement with the accumulation patterns previously observed in human milk and indicate that dust ingestion contributes a large portion of the PBDE intake (60%-88%), and air inhalation to the low-chlorinated PCB intake (>80% for triCBs) due to their high levels in dust and air, respectively. Further investigation of both indoor dust and air as the exposure media for other e-waste recycling-related contaminants and assessment of health risk associated with exposure to these contaminant mixtures is necessary.
Strauss, Soeren; Woodgate, Philip J W; Sami, Saber A; Heinke, Dietmar
2015-12-01
We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRTs). In a CRT, participants are asked to rapidly reach out and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence that attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields), CoRLEGO is able to mimic this leakage effect. Furthermore, if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO that mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO's predictions and lessons for neurobiologically inspired robotics emerging from this work.
Cumming, John McClure
2011-01-01
Caregiver burden and distress have been associated with informal caregivers. Research findings on the specific aspects of the caregiving role that influence burden are mixed. Factors such as amount of time per day giving care and specific characteristics about the disease progression have been linked to caregiver burden and distress. Other…
Human thoracic anatomy relevant to implantable artificial hearts
Jacobs, G.B.; Kiraly, R.J.; Nose, Y.
1976-10-01
The objective of the study is to define the human thorax in a quantitative statistical manner such that the information will be useful to the designers of cardiac prostheses, both total replacement and assist devices. This report pertains specifically to anatomical parameters relevant to the total cardiac prosthesis. This information will also be clinically useful in that the proposed recipient of a cardiac prosthesis can by simple radiography be assured of an adequate fit with the prosthesis prior to the implantation.
Cosmic Statistics of Statistics
Szapudi, I.; Colombi, S.; Bernardeau, F.
1999-01-01
The errors on statistics measured in finite galaxy catalogs are exhaustively investigated. The theory of errors on factorial moments by Szapudi & Colombi (1996) is applied to cumulants via a series expansion method. All results are subsequently extended to the weakly non-linear regime. Together with previous investigations this yields an analytic theory of the errors for moments and connected moments of counts in cells from highly nonlinear to weakly nonlinear scales. The final analytic formu...
Tlusty, Tsvi
2010-09-01
The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. While the biochemical details of this code were unraveled long ago, its origin is still obscure. We review information-theoretic approaches to the problem of the code's origin and discuss the results of a recent work that treats the code in terms of an evolving, error-prone information channel. Our model - which utilizes the rate-distortion theory of noisy communication channels - suggests that the genetic code originated as a result of the interplay of the three conflicting evolutionary forces: the needs for diverse amino-acids, for error-tolerance and for minimal cost of resources. The description of the code as an information channel allows us to mathematically identify the fitness of the code and locate its emergence at a second-order phase transition when the mapping of codons to amino-acids becomes nonrandom. The noise in the channel brings about an error-graph, in which edges connect codons that are likely to be confused. The emergence of the code is governed by the topology of the error-graph, which determines the lowest modes of the graph-Laplacian and is related to the map coloring problem. (c) 2010 Elsevier B.V. All rights reserved.
Bergenholtz, Henning; Gouws, Rufus
2007-01-01
In explanatory dictionaries, both general language dictionaries and dictionaries dealing with languages for special purposes, the lexicographic definition is an important item to present the meaning of a given lemma. Due to a strong linguistic bias, resulting from an approach prevalent in the early phases of the development of theoretical lexicography, a distinction is often made between encyclopaedic information and semantic information in dictionary definitions, and dictionaries had often been criticized when their definitions were dominated by an encyclopaedic approach. This used to be seen as detrimental to the status of a dictionary as a container of linguistic knowledge. This paper shows that, from a lexicographic perspective, such a distinction is not relevant. What is important is that definitions should contain information that is relevant to and needed by the target users of that specific dictionary.
Mathematical statistics with applications
Wackerly, Dennis D; Scheaffer, Richard L
2008-01-01
In their bestselling MATHEMATICAL STATISTICS WITH APPLICATIONS, premiere authors Dennis Wackerly, William Mendenhall, and Richard L. Scheaffer present a solid foundation in statistical theory while conveying the relevance and importance of the theory in solving practical problems in the real world. The authors' use of practical applications and excellent exercises helps you discover the nature of statistics and understand its essential role in scientific research.
Geometric theory of information
2014-01-01
This book brings together geometric tools and their applications for information analysis. It collects current and emerging uses of information geometry in the interdisciplinary fields of manifolds in advanced signal, image and video processing, complex data modeling and analysis, information ranking and retrieval, coding, cognitive systems, optimal control, statistics on manifolds, machine learning, and speech/sound recognition and natural language treatment, which are also substantially relevant for industry.
Wallace, Lorraine S; Chisolm, Deena J; Abdel-Rasoul, Mahmoud; DeVoe, Jennifer E
2013-08-01
This study examined adults' self-reported understanding and formatting preferences of medical statistics, confidence in self-care and ability to obtain health advice or information, and perceptions of patient-health-care provider communication measured through dual survey modes (random digital dial and mail). Even while controlling for sociodemographic characteristics, significant differences in regard to adults' responses to survey variables emerged as a function of survey mode. While the analyses do not allow us to pinpoint the underlying causes of the differences observed, they do suggest that mode of administration should be carefully adjusted for and considered.
Karin eBinder
2015-08-01
Full Text Available In their research articles, scholars often use 2 x 2 tables or tree diagrams including natural frequencies in order to illustrate Bayesian reasoning situations to their peers. Interestingly, the effect of these visualizations on participants' performance has not been tested empirically so far (apart from explicit training studies). In the present article, we report on an empirical study (3 x 2 x 2 design) in which we systematically vary visualization (no visualization vs. 2 x 2 table vs. tree diagram) and information format (probabilities vs. natural frequencies) for two contexts (medical vs. economic context; not a factor of interest). Each of N = 259 participants (students aged 16-18) had to solve two typical Bayesian reasoning tasks (mammography problem and economics problem). The hypothesis is that 2 x 2 tables and tree diagrams – especially when natural frequencies are included – can foster insight into the notoriously difficult structure of Bayesian reasoning situations. In contrast to many other visualizations (e.g., icon arrays, Euler diagrams), 2 x 2 tables and tree diagrams have the advantage that they can be constructed easily. The implications of our findings for teaching Bayesian reasoning will be discussed.
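The mammography problem the study uses can be worked directly in natural frequencies, which is exactly the insight the 2 x 2 table and tree diagram are meant to convey. The numbers below are the classic textbook values (roughly 1% prevalence, 80% sensitivity, 9.6% false-positive rate, rescaled to 1000 women), not necessarily the study's exact figures.

```python
# Bayes' rule read straight off a natural-frequency tree:
# P(disease | positive) = true positives / all positives.
def posterior_from_frequencies(true_pos, false_pos):
    return true_pos / (true_pos + false_pos)

total = 1000
with_disease = 10   # 1% prevalence
true_pos = 8        # 80% sensitivity: 8 of the 10 affected test positive
false_pos = 95      # ~9.6% of the 990 healthy women also test positive
p = posterior_from_frequencies(true_pos, false_pos)  # 8 / 103, roughly 8%
```

The counter-intuitive result (a positive test still means cancer is unlikely) is far easier to see from these counts than from the equivalent conditional probabilities.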
de Matos Simoes, Ricardo; Emmert-Streib, Frank
2011-01-01
The inference of gene regulatory networks from gene expression data is a difficult problem because the performance of the inference algorithms depends on a multitude of different factors. In this paper we study two of these. First, we investigate the influence of discrete mutual information (MI) estimators on the global and local network inference performance of the C3NET algorithm. More precisely, we study 4 different MI estimators (Empirical, Miller-Madow, Shrink and Schürmann-Grassberger) in combination with 3 discretization methods (equal frequency, equal width and global equal width discretization). We observe the best global and local inference performance of C3NET for the Miller-Madow estimator with an equal width discretization. Second, our numerical analysis can be considered as a systems approach because we simulate gene expression data from an underlying gene regulatory network, instead of making a distributional assumption to sample thereof. We demonstrate that despite the popularity of the latter approach, which is the traditional way of studying MI estimators, this is in fact not supported by simulated and biological expression data because of their heterogeneity. Hence, our study provides guidance for an efficient design of a simulation study in the context of network inference, supporting a systems approach.
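The estimator combination the study found best, equal-width discretization followed by the Miller-Madow bias-corrected mutual information, can be sketched compactly. This is an illustrative sketch, not the authors' code; the Miller-Madow correction adds (Kx + Ky - Kxy - 1)/(2N) to the empirical MI, where the K terms count occupied bins, and the toy "expression" values below are invented.

```python
from math import log2

def equal_width_bins(xs, k):
    """Map values to k equal-width bins over their observed range."""
    lo, hi = min(xs), max(xs)
    w = (hi - lo) / k or 1.0  # guard against constant input
    return [min(int((x - lo) / w), k - 1) for x in xs]

def miller_madow_mi(x, y, k=2):
    bx, by = equal_width_bins(x, k), equal_width_bins(y, k)
    n = len(x)
    def counts(seq):
        c = {}
        for s in seq:
            c[s] = c.get(s, 0) + 1
        return c
    cx, cy, cxy = counts(bx), counts(by), counts(list(zip(bx, by)))
    mi = sum(cxy[(a, b)] / n * log2((cxy[(a, b)] / n) /
             ((cx[a] / n) * (cy[b] / n))) for (a, b) in cxy)
    correction = (len(cx) + len(cy) - len(cxy) - 1) / (2 * n)
    return mi + correction

# Perfectly coupled toy values: empirical MI is 1 bit and the correction
# here is (2 + 2 - 2 - 1) / (2 * 8) = 1/16 bit.
x = [0.1, 0.2, 0.1, 0.9, 0.8, 0.9, 0.2, 0.8]
y = [1.0, 1.1, 1.0, 5.0, 4.9, 5.1, 1.2, 4.8]
mi = miller_madow_mi(x, y)  # 1.0625 bits
```

In C3NET such pairwise MI values form the matrix from which each gene keeps only its single strongest edge.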
Christine F. Marton
2011-01-01
Full Text Available Objectives – To compare the performance of the vector space model and the probabilistic weighting model of relevance feedback for the overall purpose of determining the most useful relevance feedback procedures. The amount of improvement that can be obtained from searching several test document collections with only one feedback iteration of each relevance feedback model was measured. Design – The experimental design consisted of 72 different tests: 2 different relevance feedback methods, each with 6 permutations, on 6 test document collections of various sizes. A residual collection method was utilized to ascertain the "true advantage provided by the relevance feedback process" (Salton & Buckley, 1990, p. 293). Setting – Department of Computer Science at Cornell University. Subjects – Six test document collections. Methods – Relevance feedback is an effective technique for query modification that provides significant improvement in search performance. Relevance feedback entails both "term reweighting," the modification of term weights based on term use in retrieved relevant and non-relevant documents, and "query expansion," which is the addition of new terms from relevant documents retrieved (Harman, 1992). Salton and Buckley (1990) evaluated two established relevance feedback models based on the vector space model (a spatial model) and the probabilistic model, respectively. Harman (1992) describes the two key differences between these competing models of relevance feedback: [The vector space model merges] document vectors and original query vectors. This automatically reweights query terms by adding the weights from the actual occurrence of those query terms in the relevant documents, and subtracting the weights of those terms occurring in the non-relevant documents. Queries are automatically expanded by adding all the terms not in the original query that are in the relevant documents and non-relevant documents. They are expanded
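The vector-space feedback procedure described above (reweighting query terms from relevant and non-relevant documents, then expanding the query) is essentially the Rocchio formula. A sparse-vector sketch follows; the term weights are invented, and the alpha/beta/gamma values are conventional defaults rather than the parameters evaluated in the study.

```python
# Rocchio-style feedback: q' = alpha*q + beta*mean(relevant) - gamma*mean(nonrel).
def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    terms = set(query)
    for d in relevant + nonrelevant:
        terms |= set(d)  # query expansion: consider every seen term
    new_q = {}
    for t in terms:
        w = alpha * query.get(t, 0.0)
        if relevant:
            w += beta * sum(d.get(t, 0.0) for d in relevant) / len(relevant)
        if nonrelevant:
            w -= gamma * sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant)
        if w > 0:  # negative weights are conventionally dropped
            new_q[t] = w
    return new_q

q = {"retrieval": 1.0}
rel = [{"retrieval": 0.8, "feedback": 0.6}]
nonrel = [{"retrieval": 0.1, "database": 0.9}]
q2 = rocchio(q, rel, nonrel)
# "retrieval" is reweighted, "feedback" is added, "database" is suppressed
```

One feedback iteration of this kind is what the experiments above measure, scored against the residual collection.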
Bauer, Patricia J.; Larkina, Marina
2017-01-01
In accumulating knowledge, direct modes of learning are complemented by productive processes, including self-generation based on integration of separate episodes. Effects of the number of potentially relevant episodes on integration were examined in 4- to 8-year-olds (N = 121; racially/ethnically heterogeneous sample, English speakers, from large…
Business statistics for dummies
Anderson, Alan
2013-01-01
Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w
Mahalanobis, P C
1965-01-01
Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines latt
Geert Heidema, A.; Thissen, U.; Boer, J.M.A.; Bouwman, F.G.; Feskens, E.J.M.; Mariman, E.C.M.
2009-01-01
In this study, we applied the multivariate statistical tool Partial Least Squares (PLS) to analyze the relative importance of 83 plasma proteins in relation to coronary heart disease (CHD) mortality and the intermediate end points body mass index, HDL-cholesterol and total cholesterol. From a Dutch
Watson, Curtis L.
2010-01-01
This report details an ongoing investigation of the decision-making processes of a group of secondary school students in south-eastern Australia undertaking information search tasks. The study is situated in the field of information seeking and use, and, more broadly, in decision making. Research questions focus on students' decisions about the…
Statistical physics of vaccination
Wang, Zhen; Bauch, Chris T.; Bhattacharyya, Samit; d'Onofrio, Alberto; Manfredi, Piero; Perc, Matjaž; Perra, Nicola; Salathé, Marcel; Zhao, Dawei
2016-12-01
Historically, infectious diseases caused considerable damage to human societies, and they continue to do so today. To help reduce their impact, mathematical models of disease transmission have been studied to help understand disease dynamics and inform prevention strategies. Vaccination-one of the most important preventive measures of modern times-is of great interest both theoretically and empirically. And in contrast to traditional approaches, recent research increasingly explores the pivotal implications of individual behavior and heterogeneous contact patterns in populations. Our report reviews the developmental arc of theoretical epidemiology with emphasis on vaccination, as it led from classical models assuming homogeneously mixing (mean-field) populations and ignoring human behavior, to recent models that account for behavioral feedback and/or population spatial/social structure. Many of the methods used originated in statistical physics, such as lattice and network models, and their associated analytical frameworks. Similarly, the feedback loop between vaccinating behavior and disease propagation forms a coupled nonlinear system with analogs in physics. We also review the new paradigm of digital epidemiology, wherein sources of digital data such as online social media are mined for high-resolution information on epidemiologically relevant individual behavior. Armed with the tools and concepts of statistical physics, and further assisted by new sources of digital data, models that capture nonlinear interactions between behavior and disease dynamics offer a novel way of modeling real-world phenomena, and can help improve health outcomes. We conclude the review by discussing open problems in the field and promising directions for future research.
Norén, Patrik
2013-01-01
Algebraic statistics brings together ideas from algebraic geometry, commutative algebra, and combinatorics to address problems in statistics and its applications. Computer algebra provides powerful tools for the study of algorithms and software. However, these tools are rarely prepared to address statistical challenges, and therefore new algebraic results often need to be developed. This interplay between algebra and statistics fertilizes both disciplines. Algebraic statistics is a relativ...
郝昱文; 卢沙林; 杨宇; 闫小萍
2013-01-01
Objective To develop a hospital information intelligent statistical analysis system that manages the mass data produced by each information system in the hospital, so as to provide decision support for hospital leaders. Methods Business intelligence technology was used to integrate hospital business data and build a data center. Combined with modern statistical analysis methods, a comprehensive query and analysis platform was set up, and on this basis in-depth data analysis and decision support were carried out, making management work simpler and more efficient. Results By integrating resources dominated by the hospital information system (HIS) database, such as medical information, materials management information and personnel management information, a unified report analysis center was built, realizing the seamless integration of the hospital's information systems. The system not only provides data-analysis references for medical workers, but also supports the construction of a performance evaluation system on this basis, and thereby offers support for leaders' decisions. Conclusion Developing a fast, efficient and easy-to-use intelligent statistical analysis platform can quickly improve the level of hospital informatization.
Practical statistics for nursing and health care
Fowler, Jim; Chevannes, Mel
2002-01-01
Nursing is a growing area of higher education, in which an introduction to statistics is an essential component. There is currently a gap in the market for a 'user-friendly' book which is contextualised and targeted for nursing. Practical Statistics for Nursing and Health Care introduces statistical techniques in such a way that readers will easily grasp the fundamentals, enabling them to gain the confidence and understanding to perform their own analysis. It also provides sufficient advice in areas such as clinical trials and epidemiology to enable the reader to critically appraise work published in journals such as the Lancet and British Medical Journal. * Covers all basic statistical concepts and tests * Is user-friendly - avoids excessive jargon * Includes relevant examples for nurses, including case studies and data sets * Provides information on further reading * Starts from first principles and progresses step by step * Includes 'advice on' sections for all of the tests described.
成颖
2011-01-01
This paper constructs a relevance-criteria-oriented model of academic information retrieval system success. The main components of the model are relevance criteria, characteristics of academic retrieval systems, the Information System Success Model (ISSM) and the TEDS model. The relationships in the model are analyzed and research hypotheses are proposed.
Baseline Statistics of Linked Statistical Data
Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe
2014-01-01
We are surrounded by an ever-increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: designing data models, developing infrastructures for data sharing, building tools for data analysis. Statistical datasets curated by National Statistica
A hand-held bus charging and information statistics system
叶鼎晟; 张凯
2012-01-01
A hand-held bus charging and information statistics system is presented. It charges passengers according to the distance they travel, instead of using the traditional sectional fare, and it counts the current number of passengers on board and the number boarding and alighting at each station. Through a wireless data transmission module, this information can be sent to the platform as a reference for waiting passengers. The device is compatible with existing public transit cards, making green public travel convenient. It can store and summarize the passenger information on the bus; by analyzing the traffic and passenger flow information, it also facilitates dispatching and statistics for the bus company.
Fujisawa, Mariko
2016-04-01
Climate forecasts have been developed to assist decision making in sectors averse to, and affected by, climate risks, and agriculture is one of those. In agriculture and food security, climate information is now used on a range of timescales, from days (weather), months (seasonal outlooks) to decades (climate change scenarios). Former researchers have shown that when seasonal climate forecast information was provided to farmers prior to decision making, farmers adapted by changing their choice of planting seeds and timing or area planted. However, it is not always clear that the end-users' needs for climate information are met and there might be a large gap between information supplied and needed. It has been pointed out that even when forecasts were available, they were often not utilized by farmers and extension services because of lack of trust in the forecast or the forecasts did not reach the targeted farmers. Many studies have focused on the use of either seasonal forecasts or longer term climate change prediction, but little research has been done on the medium term, that is, 1 to 10 year future climate information. The agriculture and food system sector is one potential user of medium term information, as land use policy and cropping systems selection may fall into this time scale and may affect farmers' decision making process. Assuming that reliable information is provided and it is utilized by farmers for decision making, it might contribute to resilient farming and indeed to longer term food security. To this end, we try to determine the effect of medium term climate information on farmers' strategic decision making process. We explored the end-users' needs for climate information and especially the possible role of medium term information in agricultural system, by conducting interview surveys with farmers and agricultural experts. In this study, the cases of apple production in South Africa, maize production in Malawi and rice production in Tanzania
User perspectives on relevance criteria
Maglaughlin, Kelly L.; Sonnenwald, Diane H.
2002-01-01
…matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially relevant, and not-relevant judgments, and that most criteria can have either a positive or negative contribution to the relevance of a document. The criteria most frequently mentioned by study participants were content, followed by criteria characterizing the full text document. These findings may have implications for relevance feedback in information retrieval systems, suggesting that systems accept and utilize multiple positive and negative relevance criteria from users. Systems designers may want to focus on supporting content criteria followed by full text criteria as these may provide the greatest cost…
Diez Claudius
2006-10-01
Full Text Available Abstract Background It is not clear how prevalent Internet use is among cardiopathic patients in Germany and what impact it has on health care utilisation. We measured the extent of Internet use among cardiopathic patients and examined the effects that Internet use has on users' knowledge about their cardiac disease, health care matters and their use of the health care system. Methods We conducted a prospective survey among 255 cardiopathic patients at a German university hospital. Results Forty-seven respondents (18%) used the Internet, and 8.8% (n = 23) went online more than 20 hours per month. The most frequent reason for not using the Internet was disinterest (52.3%). Fourteen patients (5.4%) searched for specific disease-related information and rated the retrieved information on an analogue scale (1 = not relevant, 5 = very relevant) with a median of 4.0. Internet use is age- and education-dependent. Only 36 respondents (14.1%) found the Internet useful, whereas the vast majority would not use it. Electronic scheduling of ambulatory visits and postoperative telemedical monitoring were largely rejected. Conclusion We conclude that Internet use is infrequent among our study population and that searching for relevant health- and disease-related information is not well established.
Kanji, Gopal K
2006-01-01
This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.
Marco Giacopetti
2016-06-01
Full Text Available Numerical models were used to study a conceptual model of a mountain aquifer characterized by a lack of data on hydrogeological parameters and boundary conditions, with a single available observational dataset used for calibration. For the first time, a preliminary spatial-temporal analysis was applied to the study area in order to evaluate the real extent of the aquifer. The analysis was based on four models of increasing complexity, using a minimum of two zones and a maximum of five, which increased the number of adjustable parameters from 10 to 22; these were calibrated using the parameter estimation code PEST. Statistical indices and information criteria were calculated for each model and showed comparable results; the information criteria indicated that the model with the lowest number of adjustable parameters was optimal. A comparison of the simulated and observed spring hydrographs showed good shape correspondence but a general overestimation of the discharge, indicating a good fit with the rainfall time series but a probably incorrect extent of the aquifer structure: the recharge contributes more than half of the total outflow at the springs but is not able to completely feed them.
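Model ranking of the kind described here, where information criteria favor the model with the fewest adjustable parameters when fits are comparable, can be sketched with the Akaike information criterion for a least-squares calibration. The observation counts and residuals below are hypothetical illustrations, not the study's values:

```python
import numpy as np

def aic(n_obs, sse, k):
    """Akaike information criterion for a least-squares fit:
    n_obs observations, sse the sum of squared residuals,
    k the number of adjustable parameters. Lower is better."""
    return n_obs * np.log(sse / n_obs) + 2 * k

# Hypothetical calibrations: four models of increasing complexity whose
# residuals barely improve as parameters are added
models = {
    "2 zones": (120, 8.1, 10),
    "3 zones": (120, 7.9, 14),
    "4 zones": (120, 7.8, 18),
    "5 zones": (120, 7.7, 22),
}
scores = {name: aic(*vals) for name, vals in models.items()}
best = min(scores, key=scores.get)  # parsimony wins when fits are comparable
```

With nearly identical residuals, the 2k penalty dominates and the simplest model is selected, mirroring the paper's conclusion.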
新家, 健精
2013-01-01
© 2012 Springer Science+Business Media, LLC. All rights reserved. Article Outline: Glossary; Definition of the Subject and Introduction; The Bayesian Statistical Paradigm; Three Examples; Comparison with the Frequentist Statistical Paradigm; Future Directions; Bibliography
Pestman, Wiebe R
2009-01-01
This textbook provides a broad and solid introduction to mathematical statistics, including the classical subjects hypothesis testing, normal regression analysis, and normal analysis of variance. In addition, non-parametric statistics and vectorial statistics are considered, as well as applications of stochastic analysis in modern statistics, e.g., Kolmogorov-Smirnov testing, smoothing techniques, robustness and density estimation. For students with some elementary mathematical background. With many exercises. Prerequisites from measure theory and linear algebra are presented.
2012-05-01
Wallis, W Allen
2014-01-01
Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. "W. Allen Wallis and Harry V. Roberts have made statistics fascinating." - The New York Times. "The authors have set out, with considerable success, to write a text which would be of interest and value to the student who,
Bailey, Joseph; Field, Richard; Boyd, Doreen
2016-04-01
We assess the scale-dependency of the relationship between biodiversity and novel geodiversity information by studying spatial patterns of native and alien (archaeophytes and neophytes) vascular plant species richness at varying spatial scales across Great Britain. Instead of using a compound geodiversity metric, we study individual geodiversity components (GDCs) to advance our understanding of which aspects of 'geodiversity' are most important and at what scale. Terrestrial native (n = 1,490) and alien (n = 1,331) vascular plant species richness was modelled across the island of Great Britain at two grain sizes and several extent radii. Various GDCs (landforms, hydrology, geology) were compiled from existing national datasets and automatically extracted landform coverage information (e.g. hollows, valleys, peaks), the latter using a digital elevation model (DEM) and geomorphometric techniques. More traditional predictors of species richness (climate, widely-used topography metrics, land cover diversity, and human population) were also incorporated. Boosted Regression Tree (BRT) models were produced at all grain sizes and extents for each species group and the dominant predictors were assessed. Models with and without geodiversity data were compared. Overarching patterns indicated a clear dominance of geodiversity information at the smallest study extent (12.5km radius) and finest grain size (1x1km), which substantially decreased for each increase in extent as the contribution of climatic variables increased. The contribution of GDCs to biodiversity models was chiefly driven by landform information from geomorphometry, but hydrology (rivers and lakes), and to a lesser extent materials (soil, superficial deposits, and geology), were important, also. GDCs added significantly to vascular plant biodiversity models in Great Britain, independently of widely-used topographic metrics, particularly for native species. The wider consideration of geodiversity alongside
Flacke, Johannes; Schüle, Steffen Andreas; Köckler, Heike; Bolte, Gabriele
2016-07-13
Spatial differences in urban environmental conditions contribute to health inequalities within cities. The purpose of the paper is to map environmental inequalities relevant for health in the City of Dortmund, Germany, in order to identify needs for planning interventions. We develop suitable indicators for mapping socioeconomically-driven environmental inequalities at the neighborhood level based on published scientific evidence and inputs from local stakeholders. Relationships between socioeconomic and environmental indicators at the level of 170 neighborhoods were analyzed continuously using Spearman rank correlation coefficients and categorically using chi-squared tests. Reclassified socioeconomic and environmental indicators were then mapped at the neighborhood level in order to determine multiple environmental burdens and hotspots of environmental inequalities related to health. Results show that the majority of environmental indicators correlate significantly, leading to multiple environmental burdens in specific neighborhoods. Some of these neighborhoods also have significantly larger proportions of inhabitants of a lower socioeconomic position, indicating hotspots of environmental inequalities. Suitable planning interventions mainly comprise transport planning and green space management. In the conclusions, we discuss how the analysis can be used to improve state-of-the-art planning instruments, such as clean air action planning or noise reduction planning, towards the consideration of the vulnerability of the population.
Eliazar, Iddo
2017-05-01
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
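Under the assumption that "harmonic intensity" means an intensity proportional to 1/x over a finite interval, such a process can be simulated directly: the total count is Poisson with mean c·ln(b/a), and given the count, each point is drawn by inverting the CDF of the 1/x density. A sketch (the constants are arbitrary):

```python
import numpy as np

def sample_harmonic_poisson(a, b, c, rng):
    """Sample one realization of a Poisson process with intensity c/x
    on the interval (a, b). The expected number of points is c*ln(b/a);
    each point comes from inverting the CDF F(x) = ln(x/a)/ln(b/a)."""
    n = rng.poisson(c * np.log(b / a))
    u = rng.uniform(size=n)
    return a * (b / a) ** u  # inverse-CDF transform of the 1/x density

rng = np.random.default_rng(0)
pts = sample_harmonic_poisson(1.0, 100.0, 50.0, rng)
```

Note that the logarithms of the sampled points are uniform on (ln a, ln b), which reflects the scale invariance the review discusses.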
María Fernanda Víquez Ortiz
2012-08-01
Full Text Available Probability and statistics have been included in the curricula for Cycles I and II of Basic General Education by the Ministry of Public Education (MEP) of Costa Rica since 1995. To analyze how these topics are actually taught, a study was conducted in two educational regions of the country: Heredia and Pérez Zeledón. It examined the university training that in-service teachers in those regions have received in these subjects, as well as the in-service training and updating processes they have undergone. The study revealed the limited university training in these topics, the teachers' dissatisfaction with it, and the scant support they have received from training institutions during their professional practice. Received 14 March 2012 • Revised 1 June 2012 • Accepted 28 June 2012
International migration statistics in Mexico.
Garcia Y Griego, M
1987-01-01
During the past decade, Mexico has experienced both large-scale emigration directly, mostly to the US, and the mass immigration of Central American refugees. The implementation of the US Immigration and Control Act of 1986 and the possible escalation of armed conflicts in Central America may result in expanded inflows either of returning citizens or of new refugee waves. To develop appropriate policy responses, Mexico needs reliable information on international migration flows. This research note reviews available sources of that information--arrival and departure statistics, population censuses, refugee censuses, and survey data--and concludes that most of them are relatively weak. Currently, the published data on entries and departures provide little information on the demographic impact of legal migration, although they suggest that the inflow of foreigners is small. The census corroborates such findings, but it yields inadequate demographic detail. The movement of Mexican nationals, on the other hand, is poorly reflected by both sources. The void they leave has been palliated somewhat by surveys, but the only nationally representative survey on emigration was carried out in the late 1970s and might be a less than ideal basis for current policy formulation. It is hoped that as the relevance of international migration becomes more evident, steps towards the improvement of existing statistical systems may be undertaken. In the absence of such measures, policy-makers and researchers will have to continue relying on ad hoc surveys to answer the most pressing questions on the subject.
Statistical Methods for Astronomy
Feigelson, Eric D
2012-01-01
This review outlines concepts of mathematical statistics, elements of probability theory, hypothesis tests and point estimation for use in the analysis of modern astronomical data. Least squares, maximum likelihood, and Bayesian approaches to statistical inference are treated. Resampling methods, particularly the bootstrap, provide valuable procedures when the distribution functions of statistics are not known. Several approaches to model selection and goodness of fit are considered. Applied statistics relevant to astronomical research are briefly discussed: nonparametric methods for use when little is known about the behavior of the astronomical populations or processes; data smoothing with kernel density estimation and nonparametric regression; unsupervised clustering and supervised classification procedures for multivariate problems; survival analysis for astronomical datasets with nondetections; time- and frequency-domain time series analysis for light curves; and spatial statistics to interpret the spati...
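The bootstrap mentioned above approximates the unknown sampling distribution of a statistic by recomputing it on many resamples of the data drawn with replacement. A minimal percentile-bootstrap sketch; the dataset is invented for illustration:

```python
import numpy as np

def bootstrap_ci(data, statistic, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic
    whose sampling distribution is not known in closed form."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    reps = np.array([
        statistic(rng.choice(data, size=data.size, replace=True))
        for _ in range(n_boot)
    ])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# Hypothetical measurements; a CI for the median, which has no simple
# closed-form sampling distribution
lo, hi = bootstrap_ci([2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8], np.median)
```

The same function works unchanged for any statistic (trimmed mean, correlation on paired data, and so on), which is why resampling is valued when distribution functions are unknown.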
乔建忠
2012-01-01
This paper proposes a new Web page type relevance judgment strategy, based on several statistical characteristics of Web document types, to meet the lightweight online classification requirements of a focused crawler. Using the API provided by WEKA, appropriate training and classification algorithms are devised for the relevance judgment strategy. Experiments on classification accuracy, efficiency, and attribute selection demonstrate the validity of the relevance judgment strategy and identify five Web page statistical characteristics that play a key role in type identification.
Game theory analysis of statistical information distortion in China
卢冶飞
2003-01-01
Statistical bodies are in essence interest bodies. To analyse the fundamental reasons why some statistical data are untrue, we need to consider the interests of the statistical bodies; their choices are the results of interest gaming. Based on this understanding, this article uses game theory to analyse the behaviour of statistical bodies when producing statistical data, thereby identifying the causes of the untrueness of some statistical data. Measures and suggestions are then given.
Risch, N. (Yale Univ. School of Medicine, New Haven, CT (United States)); Ghosh, S.; Todd, J.A.
1993-09-01
Common, familial human disorders generally do not follow Mendelian inheritance patterns, presumably because multiple loci are involved in disease susceptibility. One approach to mapping genes for such traits in humans is to first study an analogous form in an animal model, such as mouse, by using inbred strains and backcross experiments. Here the authors describe methodology for analyzing multiple-locus linkage data from such experimental backcrosses, particularly in light of multilocus genetic models, including the effects of epistasis. They illustrate these methods by using data from backcrosses involving the nonobese diabetic mouse, which serves as an animal model for human insulin-dependent diabetes mellitus. They show that it is likely that a minimum of nine loci contribute to susceptibility, with strong epistasis effects among these loci. Three of the loci actually confer a protective effect in the homozygote, compared with the heterozygote. Further, they discuss the relevance of these studies for analogous studies of the human form of the trait. Specifically, they show that the magnitude of the gene effect in the experimental backcross is likely to correlate only weakly, at best, with the expected magnitude of effect for a human form, because in humans the gene effect will depend more heavily on disease allele frequencies than on the observed penetrance ratios; such allele frequencies are unpredictable. Hence, the major benefit from animal studies may be a better understanding of the disease process itself, rather than identification of genes through comparative mapping in humans by using regions of homology. 12 refs., 7 tabs.
Learning: Statistical Mechanisms in Language Acquisition
Wonnacott, Elizabeth
The grammatical structure of human languages is extremely complex, yet children master this complexity with apparent ease. One explanation is that we come to the task of acquisition equipped with knowledge about the possible grammatical structures of human languages—so-called "Universal Grammar". An alternative is that grammatical patterns are abstracted from the input via a process of identifying reoccurring patterns and using that information to form grammatical generalizations. This statistical learning hypothesis receives support from computational research, which has revealed that even low level statistics based on adjacent word co-occurrences yield grammatically relevant information. Moreover, even as adults, our knowledge and usage of grammatical patterns is often graded and probabilistic, and in ways which directly reflect the statistical makeup of the language we experience. The current chapter explores such evidence and concludes that statistical learning mechanisms play a critical role in acquisition, whilst acknowledging holes in our current knowledge, particularly with respect to the learning of `higher level' syntactic behaviours. Throughout, I emphasize that although a statistical approach is traditionally associated with a strongly empiricist position, specific accounts make specific claims about the nature of the learner, both in terms of learning mechanisms and the information that is primitive to the learning system. In particular, working models which construct grammatical generalizations often assume inbuilt semantic abstractions.
Andrea Vázquez
2011-09-01
Full Text Available Introduction. The handoff is a medical activity in which information and responsibility are transferred between professionals in situations of discontinuity or transitions in patient care. Handoffs are a source of medical errors and adverse events, yet formal training in this specific competency is absent from the curricula of medical residencies. In this context, we implemented the educational project 'Oral and written handoffs in the internal medicine residency program'. Materials and methods. We defined the construct 'relevant information' from five items, one systemic and four cognitive, and assessed the prevalence of relevant-information deficits and their repercussions on clinical practice in a prospective study. Results. In 230 handoff protocols, the prevalence of relevant-information deficits was 31.3% (n = 72), affecting both the systemic item (11%) and the items with substantive content (20%). With relevant information, active responses accounted for 34.6% and passive responses for 65.4%; with relevant-information deficits, active responses accounted for 13.9% and passive for 86.1%. These differences were significant (p …)
Asai, Kikuo; Kondo, Kimio; Kobayashi, Hideaki; Saito, Fumihiko
We developed a prototype system to support telecommunication by using keywords selected by the speaker in a videoconference. In the traditional presentation style, a speaker talks and uses audiovisual materials, and the audience at remote sites looks at these materials. Unfortunately, the audience often loses concentration and attention during the talk. To overcome this problem, we investigate a keyword presentation style, in which the speaker holds keyword cards that enable the audience to see additional information. Although keyword captions were originally intended for use in video materials for learning foreign languages, they can also be used to improve the quality of distance lectures in videoconferences. Our prototype system recognizes printed keywords in a video image at a server, and transfers the data to clients as multimedia functions such as language translation, three-dimensional (3D) model visualization, and audio reproduction. The additional information is collocated to the keyword cards in the display window, thus forming a spatial relationship between them. We conducted an experiment to investigate the properties of the keyword presentation style for an audience. The results suggest the potential of the keyword presentation style for improving the audience's concentration and attention in distance lectures by providing an environment that facilitates eye contact during videoconferencing.
Bais, F Alexander
2007-01-01
We review the interface between (theoretical) physics and information for non-experts. The origin of information as related to the notion of entropy is described, first in the context of thermodynamics and then in the context of statistical mechanics. A close examination of the foundations of statistical mechanics and the need to reconcile the probabilistic and deterministic views of the world leads us to a discussion of chaotic dynamics, where information plays a crucial role in quantifying predictability. We then discuss a variety of fundamental issues that emerge in defining information and how one must exercise care in discussing concepts such as order, disorder, and incomplete knowledge. We also discuss an alternative form of entropy and its possible relevance for nonequilibrium thermodynamics. In the final part of the paper we discuss how quantum mechanics gives rise to the very different concept of quantum information. Entirely new possibilities for information storage and computation are possible due t...
Statistical mechanics of pluripotency.
MacArthur, Ben D; Lemischka, Ihor R
2013-08-01
Recent reports using single-cell profiling have indicated a remarkably dynamic view of pluripotent stem cell identity. Here, we argue that the pluripotent state is not well defined at the single-cell level but rather is a statistical property of stem cell populations, amenable to analysis using the tools of statistical mechanics and information theory.
CMS Statistics Reference Booklet
U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...
Shepperson, L
1997-12-01
This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...
U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...
Every year, the South African Minister of Police releases the crime statistics in ... prove an invaluable source of information for those who seek to better understand and respond to crime ... of Social Development in the JCPS may suggest a.
Goal relevance as a quantitative model of human task relevance.
Tanner, James; Itti, Laurent
2017-03-01
The concept of relevance is used ubiquitously in everyday life. However, a general quantitative definition of relevance has been lacking, especially as pertains to quantifying the relevance of sensory observations to one's goals. We propose a theoretical definition for the information value of data observations with respect to a goal, which we call "goal relevance." We consider the probability distribution of an agent's subjective beliefs over how a goal can be achieved. When a new observation is made, its goal relevance is measured as the Kullback-Leibler divergence between belief distributions before and after the observation. Theoretical predictions about the relevance of different obstacles in simulated environments agreed with the majority response of 38 human participants in 83.5% of trials, beating multiple machine-learning models. Our new definition of goal relevance is general, quantitative, explicit, and allows one to put a number onto the previously elusive notion of relevance of observations to a goal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
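The paper's central quantity, the Kullback-Leibler divergence between belief distributions before and after an observation, can be sketched as follows. The belief numbers below are hypothetical, invented for illustration, not taken from the experiments:

```python
import math

def goal_relevance(prior, posterior):
    """Goal relevance of an observation: the Kullback-Leibler divergence
    D(posterior || prior) between the agent's belief distributions over
    how the goal can be achieved, before and after the observation."""
    return sum(q * math.log(q / p) for p, q in zip(prior, posterior) if q > 0)

# Hypothetical beliefs over three candidate routes to a goal.
prior = [1 / 3, 1 / 3, 1 / 3]    # before observing an obstacle
posterior = [0.6, 0.3, 0.1]      # after observing it

informative = goal_relevance(prior, posterior)   # > 0: beliefs shifted
irrelevant = goal_relevance(prior, prior)        # 0.0: beliefs unchanged
```

An observation that leaves the belief distribution unchanged thus scores exactly zero relevance, matching the intuition the abstract describes.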
Sadovskii, Michael V
2012-01-01
This volume provides a compact presentation of modern statistical physics at an advanced level. Beginning with questions on the foundations of statistical mechanics, all important aspects of statistical physics are included, such as applications to ideal gases, the theory of quantum liquids and superconductivity, and the modern theory of critical phenomena. Beyond that, attention is given to new approaches, such as quantum field theory methods and non-equilibrium problems.
Queiroz, José Wilton; Dias, Gutemberg H; Nobre, Maurício Lisboa; De Sousa Dias, Márcia C; Araújo, Sérgio F; Barbosa, James D; Bezerra da Trindade-Neto, Pedro; Blackwell, Jenefer M; Jeronimo, Selma M B
2010-02-01
Applied Spatial Statistics used in conjunction with geographic information systems (GIS) provide an efficient tool for the surveillance of diseases. Here, using these tools we analyzed the spatial distribution of Hansen's disease in an endemic area in Brazil. A sample of 808 selected from a universe of 1,293 cases was geocoded in Mossoró, Rio Grande do Norte, Brazil. Hansen's disease cases were not distributed randomly within the neighborhoods, with higher detection rates found in more populated districts. Cluster analysis identified two areas of high risk, one with a relative risk of 5.9 (P = 0.001) and the other 6.5 (P = 0.001). A significant relationship between the geographic distribution of disease and the social economic variables indicative of poverty was observed. Our study shows that the combination of GIS and spatial analysis can identify clustering of transmissible disease, such as Hansen's disease, pointing to areas where intervention efforts can be targeted to control disease.
Saleh Ali Alomari
2011-12-01
With the rapid progress of media streaming applications, video streaming can be classified into two types: live video streaming and Video on Demand (VoD). Live video streaming is a service that allows clients to watch many TV channels over the internet and to switch between channels with a single operation. Video on Demand (VoD) is one of the most important applications for the internet of the future and has become an interactive multimedia service that allows users to start watching the video of their choice at any time and anywhere, especially after the rapid deployment of wireless networks and mobile devices. This paper provides statistical information about the Internet, communications, mobile devices, etc. This has led to an increased demand for the development, communication and computational power of many mobile wireless subscribers/mobile devices such as laptops, PDAs, smart phones and notebooks. These techniques are utilized to obtain a video-on-demand service with higher resolution and quality. Another objective of this paper is to see Malaysia ranked as a fully developed country by the year 2020.
Prediction Based on Statistical Fundamental Information of Turing Award
赵芳; 吴琼; 刘彦君
2016-01-01
Drawing on previous research and taking Turing Award winners as its focus, this paper collects information from the Turing Award official website, Wikipedia, Baidu Baike and the Wanfang Database, compiles statistics on the fundamental data, and describes the overall picture from angles such as the number of winners per year, the winners' ages, work institutions and research fields, and the fields in which the award is distributed. On this basis it further analyzes the data and makes predictions about the Turing Award winners of the coming years.
Alomari, Saleh Ali
2011-01-01
With the rapid progress of media streaming applications, video streaming can be classified into two types: live video streaming and Video on Demand (VoD). Live video streaming is a service that allows clients to watch many TV channels over the internet and to switch between channels with a single operation. Video on Demand (VoD) is one of the most important applications for the internet of the future and has become an interactive multimedia service that allows users to start watching the video of their choice at any time and anywhere, especially after the rapid deployment of wireless networks and mobile devices. This paper provides statistical information about the Internet, communications, mobile devices, etc. This has led to an increased demand for the development, communication and computational power of many mobile wireless subscribers/mobile devices such as laptops, PDAs, smart phones and notebooks. These techniques are utilized to obtain...
María Támara Polo Sánchez
2011-01-01
The aim of this article is to examine the attitudes towards people with disabilities held by university students from Social Sciences and Psychology, enrolled in subjects in which information on disability is provided. It also analyzes the influence of contact with disabled people. Attitudes were measured with the Escala de Actitudes Hacia las Personas con Discapacidad of Verdugo, Jenaro & Arias (1995), administered to 470 students from the University of Granada. The results showed that the students held positive attitudes towards people with disabilities, with differences according to degree; maintaining contact with a disabled person was also a relevant factor. These findings are discussed in relation to previous research, and suggestions for future research are addressed.
Goodman, Joseph W
2015-01-01
This book discusses statistical methods that are useful for treating problems in modern optics, and the application of these methods to solving a variety of such problems This book covers a variety of statistical problems in optics, including both theory and applications. The text covers the necessary background in statistics, statistical properties of light waves of various types, the theory of partial coherence and its applications, imaging with partially coherent light, atmospheric degradations of images, and noise limitations in the detection of light. New topics have been introduced i
Histoplasmosis Statistics (CDC, Mycotic Diseases Branch, Foodborne, Waterborne, and Environmental Diseases). How common is histoplasmosis? In the United States, an estimated 60% to ...
Forbes, Catherine; Hastings, Nicholas; Peacock, Brian J.
2010-01-01
A new edition of the trusted guide on commonly used statistical distributions Fully updated to reflect the latest developments on the topic, Statistical Distributions, Fourth Edition continues to serve as an authoritative guide on the application of statistical methods to research across various disciplines. The book provides a concise presentation of popular statistical distributions along with the necessary knowledge for their successful use in data modeling and analysis. Following a basic introduction, forty popular distributions are outlined in individual chapters that are complete with re
Eliazar, Iddo, E-mail: eliazar@post.tau.ac.il
2017-05-15
The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their ‘public relations’ for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford’s law, and 1/f noise. - Highlights: • Harmonic statistics are described and reviewed in detail. • Connections to various statistical laws are established. • Connections to perturbation, renormalization and dynamics are established.
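As a rough illustration of the object reviewed above, a Poisson process with harmonic intensity c/x can be simulated on a finite interval [a, b]: the point count is Poisson with mean c·ln(b/a), and each point follows the density proportional to 1/x. A minimal sketch, with arbitrary parameter values:

```python
import math
import random

random.seed(7)

def harmonic_poisson_points(c, a, b):
    """Sample a Poisson process on [a, b] with harmonic intensity c/x.
    The point count is Poisson with mean c*log(b/a); each point is drawn
    from the density proportional to 1/x by inverse-transform sampling."""
    mean = c * math.log(b / a)
    # Knuth's method for Poisson sampling (fine for small means).
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            break
        k += 1
    # The CDF of 1/x on [a, b] is log(x/a)/log(b/a), so x = a*(b/a)**u.
    return sorted(a * (b / a) ** random.random() for _ in range(k))

pts = harmonic_poisson_points(c=2.0, a=1.0, b=math.e ** 3)  # mean count 6
```

Scaling every sampled point by s > 0 maps the process on [a, b] to the same law on [sa, sb], which is the scale invariance noted in the highlights.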
Glaz, Joseph
2009-01-01
Suitable for graduate students and researchers in applied probability and statistics, as well as for scientists in biology, computer science, pharmaceutical science and medicine, this title brings together a collection of chapters illustrating the depth and diversity of theory, methods and applications in the area of scan statistics.
Petocz, Peter; Sowey, Eric
2008-01-01
In this article, the authors focus on hypothesis testing--that peculiarly statistical way of deciding things. Statistical methods for testing hypotheses were developed in the 1920s and 1930s by some of the most famous statisticians, in particular Ronald Fisher, Jerzy Neyman and Egon Pearson, who laid the foundations of almost all modern methods of…
Randall, Allan D.; Freehafer, Douglas A.
2017-08-02
A variety of watershed properties available in 2015 from geographic information systems were tested in regression equations to estimate two commonly used statistical indices of the low flow of streams, namely the lowest flows averaged over 7 consecutive days that have a 1 in 10 and a 1 in 2 chance of not being exceeded in any given year (7-day, 10-year and 7-day, 2-year low flows). The equations were based on streamflow measurements in 51 watersheds in the Lower Hudson River Basin of New York during the years 1958–1978, when the number of streamflow measurement sites on unregulated streams was substantially greater than in subsequent years. These low-flow indices are chiefly a function of the area of surficial sand and gravel in the watershed; more precisely, 7-day, 10-year and 7-day, 2-year low flows both increase in proportion to the area of sand and gravel deposited by glacial meltwater, whereas 7-day, 2-year low flows also increase in proportion to the area of postglacial alluvium. Both low-flow statistics are also functions of mean annual runoff (a measure of net water input to the watershed from precipitation) and area of swamps and poorly drained soils in or adjacent to surficial sand and gravel (where groundwater recharge is unlikely and riparian water loss to evapotranspiration is substantial). Small but significant refinements in estimation accuracy resulted from the inclusion of two indices of stream geometry, channel slope and length, in the regression equations. Most of the regression analysis was undertaken with the ordinary least squares method, but four equations were replicated by using weighted least squares to provide a more realistic appraisal of the precision of low-flow estimates. The most accurate estimation equations tested in this study explain nearly 84 and 87 percent of the variation in 7-day, 10-year and 7-day, 2-year low flows, respectively, with standard errors of 0.032 and 0.050 cubic feet per second per square mile. The equations
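The basic estimation technique named above, ordinary least squares regression of a low-flow index on watershed properties, can be sketched on synthetic data, assuming numpy is available. The covariates and coefficients below are invented for illustration and are not the Lower Hudson Basin values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic watershed properties for 51 sites (invented values).
n = 51
sand_gravel = rng.uniform(0, 30, n)   # percent of watershed area in sand/gravel
runoff = rng.uniform(10, 30, n)       # mean annual runoff, inches
noise = rng.normal(0, 0.05, n)

# Construct a 7-day low-flow index (cfs per square mile) that truly depends
# on both covariates, then recover the coefficients by ordinary least squares.
low_flow = 0.03 * sand_gravel + 0.01 * runoff + noise

X = np.column_stack([np.ones(n), sand_gravel, runoff])
coef, *_ = np.linalg.lstsq(X, low_flow, rcond=None)  # [intercept, b_sand, b_runoff]
```

The study's weighted-least-squares replication would differ only in giving each site a weight in the normal equations, to reflect unequal estimation precision across sites.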
Statistical mechanics of multiedge networks.
Sagarra, O; Pérez Vicente, C J; Díaz-Guilera, A
2013-12-01
Statistical properties of binary complex networks are well understood and recently many attempts have been made to extend this knowledge to weighted ones. There are, however, subtle yet important considerations to be made regarding the nature of the weights used in this generalization. Weights can be either continuous or discrete magnitudes, and in the latter case, they can additionally have undistinguishable or distinguishable nature. This fact has not been addressed in the literature thus far and has deep implications on the network statistics. In this work we face this problem by introducing multiedge networks as graphs where multiple (distinguishable) connections between nodes are considered. We develop a statistical mechanics framework where it is possible to get information about the most relevant observables given a large spectrum of linear and nonlinear constraints including those depending both on the number of multiedges per link and their binary projection. The latter case is particularly interesting as we show that binary projections can be understood from multiedge processes. The implications of these results are important as many real-agent-based problems mapped onto graphs require this treatment for a proper characterization of their collective behavior.
Yi, Robin H Pugh; Rezende, Lisa F; Huynh, Julie; Kramer, Karen; Cranmer, Melissa; Schlager, Lisa; Dearfield, Craig T; Friedman, Susan J
2017-09-28
Women age 45 years or younger with breast cancer, or who are at high-risk for breast cancer due to previously having the disease or to genetic risk, have distinct health risks and needs from their older counterparts. Young women frequently seek health information through the Internet and mainstream media, but often find it does not address their particular concerns, that it is difficult to evaluate or interpret, or even misleading. To help women better understand media coverage about new research, Facing Our Risk of Cancer Empowered (FORCE) developed the CDC-funded XRAYS (eXamining Relevance of Articles to Young Survivors) program. To assure that the XRAYS program is responsive to the community's needs, FORCE launched a web-based survey to assess where young women seek information about breast cancer, and to learn their unmet information needs. A total of 1,178 eligible women responded to the survey. In general, the breast cancer survivors and high-risk women between ages 18-45 years who responded to this survey, are using multiple media sources to seek information about breast cancer risk, prevention, screening, and treatment. They place trust in several media sources and use them to inform their medical decisions. Only about one-third of respondents to this survey report discussing media sources with their health care providers. Current survey results indicate that, by providing credible information on the quality of evidence and reporting in media reports on cancer, XRAYS is addressing a key need for health information. Results suggest that it will be useful for XRAYS to offer reviews of articles on a broad range of topics that can inform decisions at each stage of risk assessment and treatment.
Ritchie, L David
1991-01-01
This volume thoroughly covers the sub-field of information, and is one of the first in a series which synthesizes the research literature on major concepts in the field of communication. Each concise volume includes a research definition (concept explication) and presents a state-of-the-art analysis of theory and empirical findings related to the concept. After defining the word `information', the author contrasts non-linear and reflexive ideas about human communication with linear perspectives. Information is equated with uncertainty. The result presents a pattern for the process of conceptua
The Application of Topic-Relevance Judgment Based on a Topic Description Model in Web Information Extraction
谭胜; 马静; 吴一占
2011-01-01
Information extraction from massive web sources is an important way to obtain valuable information, and judging the topic relevance of target web page content is one of the key steps in making extraction efficient and accurate. At present, relevance judgment relies mainly on manual screening and document training, which suffer from low efficiency and repeated training. In this paper, we introduce a topic description model for relevance analysis. The topic description model measures topic relevance from the perspective of the extraction task: after analyzing the page content, we compute the weighted frequencies of the model's keywords, using markup information for the weighting, and judge the topic relevance of the page from how these weighted frequencies vary with respect to the task's topic description model. Experiments on defense-product information extraction verify that the method greatly improves the efficiency and accuracy of web information extraction, and we derive principles for setting the parameters.
Jackson, Faye L; Fryer, Robert J; Hannah, David M; Millar, Colin P; Malcolm, Iain A
2017-09-14
The thermal suitability of riverine habitats for cold water adapted species may be reduced under climate change. Riparian tree planting is a practical climate change mitigation measure, but it is often unclear where to focus effort for maximum benefit. Recent developments in data collection, monitoring and statistical methods have facilitated the development of increasingly sophisticated river temperature models capable of predicting spatial variability at large scales appropriate to management. In parallel, improvements in temporal river temperature models have increased the accuracy of temperature predictions at individual sites. This study developed a novel large scale spatio-temporal model of maximum daily river temperature (Twmax) for Scotland that predicts variability in both river temperature and climate sensitivity. Twmax was modelled as a linear function of maximum daily air temperature (Tamax), with the slope and intercept allowed to vary as a smooth function of day of the year (DoY) and further modified by landscape covariates including elevation, channel orientation and riparian woodland. Spatial correlation in Twmax was modelled at two scales; (1) river network (2) regional. Temporal correlation was addressed through an autoregressive (AR1) error structure for observations within sites. Additional site level variability was modelled with random effects. The resulting model was used to map (1) spatial variability in predicted Twmax under current (but extreme) climate conditions (2) the sensitivity of rivers to climate variability and (3) the effects of riparian tree planting. These visualisations provide innovative tools for informing fisheries and land-use management under current and future climate. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Lim, Jung Sub; Lim, Se Won; Ahn, Ju Hyun; Song, Bong Sub; Shim, Kye Shik
2014-01-01
Purpose To construct new Korean reference curves for birth weight by sex and gestational age using contemporary Korean birth weight data and to compare them with the Lubchenco and the 2010 United States (US) intrauterine growth curves. Methods Data of 2,336,727 newborns by the Korean Statistical Information Service (2008-2012) were used. Smoothed percentile curves were created by the Lambda Mu Sigma method using subsample of singleton. The new Korean reference curves were compared with the Lubchenco and the 2010 US intrauterine growth curves. Results Reference of the 3rd, 10th, 25th, 50th, 75th, 90th, and 97th percentiles birth weight by gestational age were made using 2,249,804 (male, 1,159,070) singleton newborns with gestational age 23-43 weeks. Separate birth weight curves were constructed for male and female. The Korean reference curves are similar to the 2010 US intrauterine growth curves. However, the cutoff values for small for gestational age (<10th percentile) of the new Korean curves differed from those of the Lubchenco curves for each gestational age. The Lubchenco curves underestimated the percentage of infants who were born small for gestational age. Conclusion The new Korean reference curves for birth weight show a different pattern from the Lubchenco curves, which were made from white neonates more than 60 years ago. Further research on short-term and long-term health outcomes of small for gestational age babies based on the new Korean reference data is needed. PMID:25346919
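The Lambda Mu Sigma (LMS) method named above defines each smoothed percentile curve from three parameters per gestational age via Cole's formula X(z) = M(1 + LSz)^(1/L). A minimal sketch with hypothetical parameter values, not the Korean reference data:

```python
import math

def lms_centile(L, M, S, z):
    """Measurement value at z-score z under the LMS model (Cole's formula):
    M * (1 + L*S*z)**(1/L), with the log-normal limit when L == 0."""
    if L == 0:
        return M * math.exp(S * z)
    return M * (1 + L * S * z) ** (1 / L)

# Hypothetical LMS parameters for one sex and gestational week (illustrative
# only): L = Box-Cox power (skewness), M = median birth weight in grams,
# S = coefficient of variation.
L, M, S = 0.5, 3200.0, 0.12
z10 = -1.2816                           # z-score of the 10th percentile
sga_cutoff = lms_centile(L, M, S, z10)  # small-for-gestational-age cutoff
```

Evaluating the curve at z = -1.2816 for each gestational age is exactly how a 10th-percentile (small-for-gestational-age) cutoff line is read off an LMS-smoothed reference.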
Ross, Sheldon M
2005-01-01
In this revised text, master expositor Sheldon Ross has produced a unique work in introductory statistics. The text's main merits are the clarity of presentation, contemporary examples and applications from diverse areas, and an explanation of intuition and ideas behind the statistical methods. To quote from the preface, "It is only when a student develops a feel or intuition for statistics that she or he is really on the path toward making sense of data." Ross achieves this goal through a coherent mix of mathematical analysis, intuitive discussions and examples. Ross's clear writin
Ross, Sheldon M
2010-01-01
In this 3rd edition revised text, master expositor Sheldon Ross has produced a unique work in introductory statistics. The text's main merits are the clarity of presentation, contemporary examples and applications from diverse areas, and an explanation of intuition and ideas behind the statistical methods. Concepts are motivated, illustrated and explained in a way that attempts to increase one's intuition. To quote from the preface, "It is only when a student develops a feel or intuition for statistics that she or he is really on the path toward making sense of data." Ross achieves this
Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James
2014-01-01
Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.
Wannier, Gregory H
2010-01-01
Until recently, the field of statistical physics was traditionally taught as three separate subjects: thermodynamics, statistical mechanics, and kinetic theory. This text, a forerunner in its field and now a classic, was the first to recognize the outdated reasons for their separation and to combine the essentials of the three subjects into one unified presentation of thermal physics. It has been widely adopted in graduate and advanced undergraduate courses, and is recommended throughout the field as an indispensable aid to the independent study and research of statistical physics.Designed for
Blakemore, J S
1962-01-01
Semiconductor Statistics presents statistics aimed at complementing existing books on the relationships between carrier densities and transport effects. The book is divided into two parts. Part I provides introductory material on the electron theory of solids, and then discusses carrier statistics for semiconductors in thermal equilibrium. Of course a solid cannot be in true thermodynamic equilibrium if any electrical current is passed; but when currents are reasonably small the distribution function is but little perturbed, and the carrier distribution for such a "quasi-equilibrium" co
Entanglement and nonextensive statistics
1999-01-01
A generalization of the von Neumann mutual information is presented in the context of Tsallis' nonextensive statistics. As an example, entanglement between two (two-level) quantum subsystems is discussed. Important changes occur in the generalized mutual information, which measures the degree of entanglement, depending on the entropic index q.
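To make the construction concrete: the Tsallis entropy S_q = (1 - Σ p_i^q)/(q - 1) recovers the von Neumann/Shannon entropy as q approaches 1, and an additive q-analogue of mutual information is positive for a maximally entangled pure state. The sketch below uses one common additive form, which may differ from the paper's exact definition:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1); it recovers
    the von Neumann/Shannon entropy in the limit q -> 1."""
    if q == 1:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

# Eigenvalue spectra for a maximally entangled two-qubit (Bell) state:
bell_joint = [1.0, 0.0, 0.0, 0.0]  # pure joint state: zero entropy
bell_sub = [0.5, 0.5]              # maximally mixed reduced state

q = 2.0
# An additive q-analogue of mutual information, I_q = S_q(A) + S_q(B) - S_q(AB);
# it is positive here, signalling entanglement between the subsystems.
I_q = 2 * tsallis_entropy(bell_sub, q) - tsallis_entropy(bell_joint, q)
```

Because the joint state is pure, its entropy vanishes for every q, so I_q reduces to twice the subsystem entropy and varies with the entropic index q, as the abstract notes.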
Statistical air quality mapping
Kassteele, van de J.
2006-01-01
This thesis handles statistical mapping of air quality data. Policy makers require more and more detailed air quality information to take measures to improve air quality. Besides, researchers need detailed air quality information to assess health effects. Accurate and spatially highly resolved maps
Tryggestad, Kjell
2004-01-01
The study aims to describe how the inclusion and exclusion of materials and calculative devices construct the boundaries and distinctions between statistical facts and artifacts in economics. My methodological approach is inspired by John Graunt's (1667) Political arithmetic and more recent work within constructivism and the field of Science and Technology Studies (STS). The result of this approach is here termed reversible statistics, reconstructing the findings of a statistical study within economics in three different ways. It is argued that all three accounts are quite normal, albeit in different ways. The presence and absence of diverse materials, both natural and political, is what distinguishes them from each other. Arguments are presented for a more symmetric relation between the scientific statistical text and the reader. I will argue that a more symmetric relation can be achieved...
Department of Homeland Security — Accident statistics available on the Coast Guard’s website by state, year, and one variable to obtain tables and/or graphs. Data from reports has been loaded for...
Serdobolskii, Vadim Ivanovich
2007-01-01
This monograph presents the mathematical theory of statistical models described by an essentially large number of unknown parameters, comparable with the sample size or even much larger. In this sense, the proposed theory can be called "essentially multiparametric". It is developed on the basis of the Kolmogorov asymptotic approach, in which the sample size increases along with the number of unknown parameters. This theory opens a way for the solution of central problems of multivariate statistics, which up until now have not been solved. Traditional statistical methods based on the idea of infinite sampling often break down in the solution of real problems and, depending on the data, can be inefficient, unstable and even inapplicable. In this situation, practical statisticians are forced to use various heuristic methods in the hope that they will find a satisfactory solution. The mathematical theory developed in this book presents a regular technique for implementing new, more efficient versions of statistical procedures. ...
杨春梅; 袁丹江
2015-01-01
This paper analyzes the problems of informed consent in medical device clinical trials, including poorly written consent forms, nonstandard signing, content that is a mere formality, and loss of signed informed consent forms. It also discusses measures to improve the writing of informed consent forms, the quality of ethical review and file management, to regulate researchers' behavior, and to strengthen awareness of protecting subjects' right to informed consent.
Statistical Software Engineering
2007-11-02
Institute acts under the responsibility given to the National Academy of Sciences by its congressional charter to be an adviser to the federal government...research in software visualization—the use of typography, graphic design, animation, and cinematography to facilitate the understanding and...components. One of its major functions is to act as the central repository for all relevant project data (statistical or nonstatistical). Thus this
Relevance of integrating Information Communication Technology ...
... Communication Technology (ICT) into programmes of teacher formation in Nigerian ... The paper defined ICT in pedagogical context and so established its ... tool need to be reduced significantly or eliminated for effective teacher formation.
Relevance of test information in horse breeding
Ducro, B.J.
2011-01-01
The aims of this study were 1) to determine the role of test results of young horses in selection for sport performance, 2) to assess the genetic diversity of a closed horse breed and 3) the consequences of inbreeding for male reproduction. The study was performed using existing databases
Competitive Analysis of Online Reverse Auctions with Statistical Information
徐金红; 马银戌; 蒋微青
2014-01-01
For online reverse auctions in which sellers' bids follow a probability distribution, we combine online algorithms with average-case competitive analysis to derive the average-case optimal single price and the competitive performance of the single-price strategy, and we propose an average-case competitive-analysis strategy for online reverse auctions of unlimitedly divisible goods. We build a model of online reverse auctions based on this strategy and, by solving the model, obtain the buyer's competitive demand curve. Moreover, by comparing the average-case strategies with conventional worst-case competitive analysis, which ignores the statistical information in the bids, we conclude that using this statistical information improves the competitive performance of the strategies.
Statistical tests for associations between two directed acyclic graphs.
Robert Hoehndorf
Biological data, and particularly annotation data, are increasingly being represented in directed acyclic graphs (DAGs). However, while relevant biological information is implicit in the links between multiple domains, annotations from these different domains are usually represented in distinct, unconnected DAGs, making links between the represented domains difficult to determine. We develop a novel family of general statistical tests for the discovery of strong associations between two directed acyclic graphs. Our method takes into consideration the topology of the input graphs and the specificity and relevance of associations between nodes. We apply our method to the extraction of associations between biomedical ontologies in an extensive use case. Through a manual and an automatic evaluation, we show that our tests discover biologically relevant relations. The suite of statistical tests we develop for this purpose is implemented and freely available for download.
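The general idea behind such cross-DAG association tests can be illustrated with a deliberately simplified sketch: propagate each item's annotations up to all ancestors in its DAG (the "true-path rule"), then score a pair of nodes, one from each DAG, with a one-sided hypergeometric enrichment test on their co-annotated items. All function names and data structures below are hypothetical illustrations, not the paper's API, and the sketch omits the topology- and specificity-weighting that the actual tests use.

```python
from math import comb
from itertools import chain

def ancestors(dag, node):
    # dag maps each node to its list of parents; return node plus all
    # nodes reachable by repeatedly following parent links.
    seen, stack = {node}, [node]
    while stack:
        for parent in dag.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def propagate(dag, annotations):
    # True-path rule: an item annotated with a node is implicitly
    # annotated with every ancestor of that node as well.
    return {item: set(chain.from_iterable(ancestors(dag, n) for n in nodes))
            for item, nodes in annotations.items()}

def hypergeom_sf(k, K, n, N):
    # P(X >= k) for X ~ Hypergeometric(N, K, n), computed exactly.
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

def association_p(items, ann1, ann2, a, b):
    # One-sided enrichment p-value for items co-annotated with node a
    # (from DAG 1) and node b (from DAG 2), after propagation.
    with_a = {i for i in items if a in ann1.get(i, ())}
    with_b = {i for i in items if b in ann2.get(i, ())}
    k = len(with_a & with_b)
    return hypergeom_sf(k, len(with_a), len(with_b), len(items))
```

A small p-value for a node pair suggests the two annotations co-occur on items more often than chance would predict; in practice one would also correct for the many node pairs tested, which this sketch leaves out.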
Introduction to statistics using interactive MM*Stat elements
Härdle, Wolfgang Karl; Rönz, Bernd
2015-01-01
MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...
Fuzzy statistical decision-making theory and applications
Kabak, Özgür
2016-01-01
This book offers a comprehensive reference guide to fuzzy statistics and fuzzy decision-making techniques. It provides readers with all the necessary tools for making statistical inference in the case of incomplete information or insufficient data, where classical statistics cannot be applied. The respective chapters, written by prominent researchers, explain a wealth of both basic and advanced concepts including: fuzzy probability distributions, fuzzy frequency distributions, fuzzy Bayesian inference, fuzzy mean, mode and median, fuzzy dispersion, fuzzy p-value, and many others. To foster a better understanding, all the chapters include relevant numerical examples or case studies. Taken together, they form an excellent reference guide for researchers, lecturers and postgraduate students pursuing research on fuzzy statistics. Moreover, by extending all the main aspects of classical statistical decision-making to its fuzzy counterpart, the book presents a dynamic snapshot of the field that is expected to stimu...
Jana, Madhusudan
2015-01-01
This self-contained book on statistical mechanics is written in a lucid manner, keeping in mind the examination systems of the universities. The need to study the subject and its relation to thermodynamics are discussed in detail. Starting from the Liouville theorem, statistical mechanics is gradually and thoroughly developed. All three types of statistical distribution functions are derived separately, along with their ranges of application and limitations. The non-interacting ideal Bose gas and Fermi gas are discussed thoroughly. The properties of liquid He-II and the corresponding models are depicted. White dwarfs, condensed-matter physics, and transport phenomena (thermal and electrical conductivity, the Hall effect, magnetoresistance, viscosity, diffusion, etc.) are discussed. A basic understanding of the Ising model is given to explain phase transitions. The book ends with detailed coverage of the method of ensembles (namely microcanonical, canonical, and grand canonical) and their applications. Various numerical and conceptual problems ar...
Schwabl, Franz
2006-01-01
The completely revised new edition of the classical book on Statistical Mechanics covers the basic concepts of equilibrium and non-equilibrium statistical physics. In addition to a deductive approach to equilibrium statistics and thermodynamics based on a single hypothesis - the form of the microcanonical density matrix - this book treats the most important elements of non-equilibrium phenomena. Intermediate calculations are presented in complete detail. Problems at the end of each chapter help students to consolidate their understanding of the material. Beyond the fundamentals, this text demonstrates the breadth of the field and its great variety of applications. Modern areas such as renormalization group theory, percolation, stochastic equations of motion and their applications to critical dynamics, kinetic theories, as well as fundamental considerations of irreversibility, are discussed. The text will be useful for advanced students of physics and other natural sciences; a basic knowledge of quantum mechan...
Rohatgi, Vijay K
2003-01-01
Unified treatment of probability and statistics examines and analyzes the relationship between the two fields, exploring inferential issues. Numerous problems, examples, and diagrams--some with solutions--plus clear-cut, highlighted summaries of results. Advanced undergraduate to graduate level. Contents: 1. Introduction. 2. Probability Model. 3. Probability Distributions. 4. Introduction to Statistical Inference. 5. More on Mathematical Expectation. 6. Some Discrete Models. 7. Some Continuous Models. 8. Functions of Random Variables and Random Vectors. 9. Large-Sample Theory. 10. General Meth
Mandl, Franz
1988-01-01
The Manchester Physics Series General Editors: D. J. Sandiford; F. Mandl; A. C. Phillips Department of Physics and Astronomy, University of Manchester Properties of Matter B. H. Flowers and E. Mendoza Optics Second Edition F. G. Smith and J. H. Thomson Statistical Physics Second Edition F. Mandl Electromagnetism Second Edition I. S. Grant and W. R. Phillips Statistics R. J. Barlow Solid State Physics Second Edition J. R. Hook and H. E. Hall Quantum Mechanics F. Mandl Particle Physics Second Edition B. R. Martin and G. Shaw The Physics of Stars Second Edition A. C. Phillips Computing for Scient
Levine-Wissing, Robin
2012-01-01
All Access for the AP® Statistics Exam Book + Web + Mobile Everything you need to prepare for the Advanced Placement® exam, in a study system built around you! There are many different ways to prepare for an Advanced Placement® exam. What's best for you depends on how much time you have to study and how comfortable you are with the subject matter. To score your highest, you need a system that can be customized to fit you: your schedule, your learning style, and your current level of knowledge. This book, and the online tools that come with it, will help you personalize your AP® Statistics prep