WorldWideScience

Sample records for relevant statistical information

  1. Statistical approach for selection of biologically informative genes.

    Science.gov (United States)

    Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N

    2018-05-20

Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Many of the gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been adjudged through post-selection classification accuracy computed through a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, i.e. Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also found to be quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes.
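The maximum-relevance-minimum-redundancy idea behind the approach above can be sketched in a few lines. This is an illustrative greedy MRMR selector using absolute Pearson correlation as the relevance and redundancy measure; the function name and scoring rule are assumptions for demonstration, not the Boot-MRMR bootstrap procedure or the BootMRMR package itself.

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy maximum-relevance-minimum-redundancy feature selection.

    Relevance: |Pearson correlation| between each feature (gene) and the
    class label. Redundancy: mean |correlation| with already selected
    features. A simplified illustration of the MRMR idea only.
    """
    n, p = X.shape
    # relevance of each column with respect to the label
    relevance = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
    selected = [int(np.argmax(relevance))]  # start with the most relevant gene
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(p):
            if j in selected:
                continue
            redundancy = np.mean(
                [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - redundancy  # "difference" MRMR criterion
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

On synthetic data where one gene duplicates another, the redundancy penalty keeps the duplicate out of the selected set even though its relevance is high.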

  2. Has Financial Statement Information become Less Relevant?

    DEFF Research Database (Denmark)

    Thinggaard, Frank; Damkier, Jesper

This paper presents insights into the question of whether accounting information based on the EU’s Accounting Directives has become less value-relevant to investors over time. The study is based on a research design first used by Francis and Schipper (1999), where value-relevance is measured... The sample is based on non-financial companies listed on the Copenhagen Stock Exchange in the period 1984-2002. Our analyses show that all the applied accounting measures are value-relevant, as investment strategies based on the information earn positive market-adjusted returns in our sample period... The results provide some indication of a decline in the value-relevance of earnings information in the 1984-2001 period, and mixed, but not statistically reliable, evidence for accounting measures where book value information and asset values are also extracted from financial statements. The results seem...

  3. Which Type of Risk Information to Use for Whom? Moderating Role of Outcome-Relevant Involvement in the Effects of Statistical and Exemplified Risk Information on Risk Perceptions.

    Science.gov (United States)

    So, Jiyeon; Jeong, Se-Hoon; Hwang, Yoori

    2017-04-01

    The extant empirical research examining the effectiveness of statistical and exemplar-based health information is largely inconsistent. Under the premise that the inconsistency may be due to an unacknowledged moderator (O'Keefe, 2002), this study examined a moderating role of outcome-relevant involvement (Johnson & Eagly, 1989) in the effects of statistical and exemplified risk information on risk perception. Consistent with predictions based on elaboration likelihood model (Petty & Cacioppo, 1984), findings from an experiment (N = 237) concerning alcohol consumption risks showed that statistical risk information predicted risk perceptions of individuals with high, rather than low, involvement, while exemplified risk information predicted risk perceptions of those with low, rather than high, involvement. Moreover, statistical risk information contributed to negative attitude toward drinking via increased risk perception only for highly involved individuals, while exemplified risk information influenced the attitude through the same mechanism only for individuals with low involvement. Theoretical and practical implications for health risk communication are discussed.

  4. A Compositional Relevance Model for Adaptive Information Retrieval

    Science.gov (United States)

    Mathe, Nathalie; Chen, James; Lu, Henry, Jr. (Technical Monitor)

    1994-01-01

    There is a growing need for rapid and effective access to information in large electronic documentation systems. Access can be facilitated if information relevant in the current problem solving context can be automatically supplied to the user. This includes information relevant to particular user profiles, tasks being performed, and problems being solved. However most of this knowledge on contextual relevance is not found within the contents of documents, and current hypermedia tools do not provide any easy mechanism to let users add this knowledge to their documents. We propose a compositional relevance network to automatically acquire the context in which previous information was found relevant. The model records information on the relevance of references based on user feedback for specific queries and contexts. It also generalizes such information to derive relevant references for similar queries and contexts. This model lets users filter information by context of relevance, build personalized views of documents over time, and share their views with other users. It also applies to any type of multimedia information. Compared to other approaches, it is less costly and doesn't require any a priori statistical computation, nor an extended training period. It is currently being implemented into the Computer Integrated Documentation system which enables integration of various technical documents in a hypertext framework.

  5. Statistical significance versus clinical relevance.

    Science.gov (United States)

    van Rijn, Marieke H C; Bech, Anneke; Bouyer, Jean; van den Brand, Jan A J G

    2017-04-01

In March this year, the American Statistical Association (ASA) posted a statement on the correct use of P-values, in response to a growing concern that the P-value is commonly misused and misinterpreted. We aim to translate these warnings given by the ASA into a language more easily understood by clinicians and researchers without a deep background in statistics. Moreover, we intend to illustrate the limitations of P-values, even when used and interpreted correctly, and bring more attention to the clinical relevance of study findings using two recently reported studies as examples. We argue that P-values are often misinterpreted. A common mistake is saying that P < 0.05 means that the null hypothesis is false, and P ≥ 0.05 means that the null hypothesis is true. The correct interpretation of a P-value of 0.05 is that if the null hypothesis were indeed true, a similar or more extreme result would occur 5% of the time upon repeating the study in a similar sample. In other words, the P-value informs about the likelihood of the data given the null hypothesis and not the other way around. A possible alternative related to the P-value is the confidence interval (CI). It provides more information on the magnitude of an effect and the imprecision with which that effect was estimated. However, there is no magic bullet to replace P-values and stop erroneous interpretation of scientific results. Scientists and readers alike should make themselves familiar with the correct, nuanced interpretation of statistical tests, P-values and CIs. © The Author 2017. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
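The ASA's point about the correct reading of P < 0.05 can be checked by simulation: when the null hypothesis is true, a "significant" result still appears in roughly 5% of repeated studies. A minimal sketch (the sample size and number of simulated studies are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_studies = 50, 10_000
false_positives = 0
for _ in range(n_studies):
    a = rng.normal(size=n)  # "control" group; the null is true by construction
    b = rng.normal(size=n)  # "treatment" group drawn from the same distribution
    se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    z = (a.mean() - b.mean()) / se
    if abs(z) > 1.96:       # two-sided test at alpha = 0.05
        false_positives += 1
rate = false_positives / n_studies
print(f"rejection rate under the null: {rate:.3f}")  # close to 0.05
```

The rejection rate hovers around 0.05, which is exactly what the correct interpretation predicts: P < 0.05 does not show the null is false, it occurs by chance about one time in twenty even when it is true.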

  6. [Clinical research IV. Relevancy of the statistical test chosen].

    Science.gov (United States)

    Talavera, Juan O; Rivas-Ruiz, Rodolfo

    2011-01-01

When we look at the difference between two therapies or the association of a risk factor or prognostic indicator with its outcome, we need to evaluate the accuracy of the result. This assessment is based on a judgment that uses information about the study design and the statistical management of the information. This paper specifically mentions the relevance of the statistical test selected. Statistical tests are chosen mainly on the basis of two characteristics: the objective of the study and the type of variables. The objective can be divided into three test groups: a) those in which you want to show differences between groups, or within a group before and after a maneuver; b) those that seek to show the relationship (correlation) between variables; and c) those that aim to predict an outcome. The types of variables are divided into two: quantitative (continuous and discontinuous) and qualitative (ordinal and dichotomous). For example, if we seek to demonstrate differences in age (quantitative variable) among patients with systemic lupus erythematosus (SLE) with and without neurological disease (two groups), the appropriate test is the "Student t test for independent samples." But if the comparison is about the frequency of females (binomial variable), then the appropriate statistical test is the χ² test.
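The two examples from the abstract can be reproduced with standard statistical routines. The data below are hypothetical, invented purely to illustrate which test matches which variable type:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Quantitative variable (age) in two independent groups -> Student t test.
# Hypothetical ages for SLE patients with and without neurological disease:
age_with_nd = rng.normal(45, 10, 80)
age_without_nd = rng.normal(40, 10, 90)
t_stat, t_p = stats.ttest_ind(age_with_nd, age_without_nd)

# Dichotomous variable (frequency of females) in the same two groups
# -> chi-squared test on a 2x2 contingency table (counts are invented):
table = np.array([[60, 20],   # females / males, with neurological disease
                  [70, 20]])  # females / males, without neurological disease
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(f"t test p = {t_p:.3f}, chi-squared p = {chi_p:.3f}")
```

The point is the pairing, not the numbers: a continuous outcome across two groups calls for the t test, a binomial outcome for the χ² test.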

  7. Quantum information theory and quantum statistics

    International Nuclear Information System (INIS)

    Petz, D.

    2008-01-01

    Based on lectures given by the author, this book focuses on providing reliable introductory explanations of key concepts of quantum information theory and quantum statistics - rather than on results. The mathematically rigorous presentation is supported by numerous examples and exercises and by an appendix summarizing the relevant aspects of linear analysis. Assuming that the reader is familiar with the content of standard undergraduate courses in quantum mechanics, probability theory, linear algebra and functional analysis, the book addresses graduate students of mathematics and physics as well as theoretical and mathematical physicists. Conceived as a primer to bridge the gap between statistical physics and quantum information, a field to which the author has contributed significantly himself, it emphasizes concepts and thorough discussions of the fundamental notions to prepare the reader for deeper studies, not least through the selection of well chosen exercises. (orig.)

  8. Detecting clinically relevant new information in clinical notes across specialties and settings.

    Science.gov (United States)

    Zhang, Rui; Pakhomov, Serguei V S; Arsoniadis, Elliot G; Lee, Janet T; Wang, Yan; Melton, Genevieve B

    2017-07-05

Automated methods for identifying clinically relevant new versus redundant information in electronic health record (EHR) clinical notes are useful for clinicians and researchers involved in patient care and clinical research, respectively. We evaluated methods to automatically identify clinically relevant new information in clinical notes, and compared the quantity of redundant information across specialties and clinical settings. Statistical language models augmented with semantic similarity measures were evaluated as a means to detect and quantify clinically relevant new and redundant information over longitudinal clinical notes for a given patient. A corpus of 591 progress notes over 40 inpatient admissions was annotated for new information longitudinally by physicians to generate a reference standard. Note redundancy between various specialties was evaluated on 71,021 outpatient notes and 64,695 inpatient notes from 500 solid organ transplant patients (April 2015 through August 2015). Our best method achieved performance of 0.87 recall, 0.62 precision, and 0.72 F-measure. Addition of semantic similarity metrics compared to baseline improved recall but otherwise resulted in similar performance. While outpatient and inpatient notes had relatively similar levels of high redundancy (61% and 68%, respectively), redundancy differed by author specialty, with mean redundancy of 75%, 66%, 57%, and 55% observed in pediatric, internal medicine, psychiatry and surgical notes, respectively. Automated techniques with statistical language models for detecting redundant versus clinically relevant new information in clinical notes do not improve with the addition of semantic similarity measures. While levels of redundancy seem relatively similar in the inpatient and ambulatory settings at Fairview Health Services, clinical note redundancy appears to vary significantly with different medical specialties.
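The redundancy quantification described above can be approximated crudely with word n-gram overlap. This sketch is a simplified stand-in for the statistical-language-model scoring the authors actually used; the function name, the trigram choice, and the example notes are all assumptions for illustration:

```python
def redundant_fraction(prior_notes, new_note, n=3):
    """Fraction of word n-grams in a new note that already appear in the
    patient's prior notes; a crude proxy for language-model redundancy
    scoring (illustrative only)."""
    def ngrams(text, n):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    seen = set()
    for note in prior_notes:
        seen |= ngrams(note, n)
    new = ngrams(new_note, n)
    if not new:
        return 0.0
    return len(new & seen) / len(new)

prior = ["patient stable on current medication regimen"]
new = "patient stable on current medication regimen started physical therapy today"
print(redundant_fraction(prior, new))  # half the trigrams are copied forward
```

Here half of the new note repeats the prior note verbatim; only "started physical therapy today" would count as clinically relevant new information.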

  9. Information Needs/Relevance

    OpenAIRE

    Wildemuth, Barbara M.

    2009-01-01

A user's interaction with a DL is often initiated as the result of the user experiencing an information need of some kind. Aspects of that experience and how it might affect the user's interactions with the DL are discussed in this module. In addition, users continuously make decisions about and evaluations of the materials retrieved from a DL, relative to their information needs. Relevance judgments, and their relationship to the user's information needs, are discussed in this module.

  10. Is Information Still Relevant?

    Science.gov (United States)

    Ma, Lia

    2013-01-01

    Introduction: The term "information" in information science does not share the characteristics of those of a nomenclature: it does not bear a generally accepted definition and it does not serve as the bases and assumptions for research studies. As the data deluge has arrived, is the concept of information still relevant for information…

  11. Relevance: An Interdisciplinary and Information Science Perspective

    Directory of Open Access Journals (Sweden)

    Howard Greisdorf

    2000-01-01

Although relevance has represented a key concept in the field of information science for evaluating information retrieval effectiveness, the broader context established by interdisciplinary frameworks could provide greater depth and breadth to on-going research in the field. This work provides an overview of the nature of relevance in the field of information science with a cursory view of how cross-disciplinary approaches to relevance could represent avenues for further investigation into the evaluative characteristics of relevance as a means for enhanced understanding of human information behavior.

  12. 46 CFR 560.5 - Receipt of relevant information.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 9 2010-10-01 2010-10-01 false Receipt of relevant information. 560.5 Section 560.5... FOREIGN PORTS § 560.5 Receipt of relevant information. (a) In making its decision on matters arising under... submissions should be supported by affidavits of fact and memorandum of law. Relevant information may include...

  13. Signal Enhancement as Minimization of Relevant Information Loss

    OpenAIRE

    Geiger, Bernhard C.; Kubin, Gernot

    2012-01-01

We introduce the notion of relevant information loss for the purpose of casting the signal enhancement problem in information-theoretic terms. We show that many algorithms from machine learning can be reformulated using relevant information loss, which allows their application to the aforementioned problem. As a particular example we analyze principal component analysis for dimensionality reduction, discuss its optimality, and show that the relevant information loss can indeed vanish if the r...

  14. Dynamic statistical information theory

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

In recent years we extended Shannon's static statistical information theory to dynamic processes and established a Shannon dynamic statistical information theory, whose core is the evolution law of dynamic entropy and dynamic information. We also proposed a corresponding Boltzmann dynamic statistical information theory. Based on the fact that the state variable evolution equations of the respective dynamic systems, i.e. the Fokker-Planck equation and the Liouville diffusion equation, can be regarded as their information symbol evolution equations, we derived the nonlinear evolution equations of Shannon dynamic entropy density and dynamic information density and the nonlinear evolution equations of Boltzmann dynamic entropy density and dynamic information density, which describe respectively the evolution law of dynamic entropy and dynamic information. The evolution equations of these two kinds of dynamic entropies and dynamic information densities show in unison that the time rate of change of the dynamic entropy densities is caused by their drift, diffusion and production in state variable space inside the systems and in coordinate space in the transmission processes; and that the time rate of change of the dynamic information densities originates from their drift, diffusion and dissipation in state variable space inside the systems and in coordinate space in the transmission processes. Entropy and information have thus been combined with the state and its law of motion of the systems. Furthermore we presented the formulas of the two kinds of entropy production rates and information dissipation rates, and the expressions of the two kinds of drift information flows and diffusion information flows. We proved that the two kinds of information dissipation rates (or the decrease rates of the total information) were equal to their corresponding entropy production rates (or the increase rates of the total entropy) in the same dynamic system. We obtained the formulas of the two kinds of dynamic mutual information and dynamic channel

  15. Natural brain-information interfaces: Recommending information by relevance inferred from human brain signals

    Science.gov (United States)

    Eugster, Manuel J. A.; Ruotsalo, Tuukka; Spapé, Michiel M.; Barral, Oswald; Ravaja, Niklas; Jacucci, Giulio; Kaski, Samuel

    2016-01-01

    Finding relevant information from large document collections such as the World Wide Web is a common task in our daily lives. Estimation of a user’s interest or search intention is necessary to recommend and retrieve relevant information from these collections. We introduce a brain-information interface used for recommending information by relevance inferred directly from brain signals. In experiments, participants were asked to read Wikipedia documents about a selection of topics while their EEG was recorded. Based on the prediction of word relevance, the individual’s search intent was modeled and successfully used for retrieving new relevant documents from the whole English Wikipedia corpus. The results show that the users’ interests toward digital content can be modeled from the brain signals evoked by reading. The introduced brain-relevance paradigm enables the recommendation of information without any explicit user interaction and may be applied across diverse information-intensive applications. PMID:27929077

  16. 49 CFR 556.9 - Public inspection of relevant information.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 6 2010-10-01 2010-10-01 false Public inspection of relevant information. 556.9... NONCOMPLIANCE § 556.9 Public inspection of relevant information. Information relevant to a petition under this... Administration, 400 Seventh Street, SW., Washington, DC 20590. Copies of available information may be obtained in...

  17. Software Helps Retrieve Information Relevant to the User

    Science.gov (United States)

    Mathe, Natalie; Chen, James

    2003-01-01

The Adaptive Indexing and Retrieval Agent (ARNIE) is a code library, designed to be used by an application program, that assists human users in retrieving desired information in a hypertext setting. Using ARNIE, the program implements a computational model for interactively learning what information each human user considers relevant in context. The model, called a "relevance network," incrementally adapts retrieved information to users' individual profiles on the basis of feedback from the users regarding specific queries. The model also generalizes such knowledge for subsequent derivation of relevant references for similar queries and profiles, thereby assisting users in filtering information by relevance. ARNIE thus enables users to categorize and share information of interest in various contexts. ARNIE encodes the relevance and structure of information in a neural network dynamically configured with a genetic algorithm. ARNIE maintains an internal database, wherein it saves associations, and from which it returns associated items in response to a query. A C++ compiler for the platform on which ARNIE will be utilized is necessary for creating the ARNIE library but is not necessary for the execution of the software.

  18. The Development of Relevance in Information Retrieval

    Directory of Open Access Journals (Sweden)

    Mu-hsuan Huang

    1997-12-01

This article attempts to investigate the notion of relevance in information retrieval. It discusses various definitions of relevance from historical viewpoints and the characteristics of relevance judgments. It also introduces empirical results of important related research. (Article content in Chinese)

  19. PRIS-STATISTICS: Power Reactor Information System Statistical Reports. User's Manual

    International Nuclear Information System (INIS)

    2013-01-01

    The IAEA developed the Power Reactor Information System (PRIS)-Statistics application to assist PRIS end users with generating statistical reports from PRIS data. Statistical reports provide an overview of the status, specification and performance results of every nuclear power reactor in the world. This user's manual was prepared to facilitate the use of the PRIS-Statistics application and to provide guidelines and detailed information for each report in the application. Statistical reports support analyses of nuclear power development and strategies, and the evaluation of nuclear power plant performance. The PRIS database can be used for comprehensive trend analyses and benchmarking against best performers and industrial standards.

  20. Evaluating automatic attentional capture by self-relevant information.

    Science.gov (United States)

    Ocampo, Brenda; Kahan, Todd A

    2016-01-01

    Our everyday decisions and memories are inadvertently influenced by self-relevant information. For example, we are faster and more accurate at making perceptual judgments about stimuli associated with ourselves, such as our own face or name, as compared with familiar non-self-relevant stimuli. Humphreys and Sui propose a "self-attention network" to account for these effects, wherein self-relevant stimuli automatically capture our attention and subsequently enhance the perceptual processing of self-relevant information. We propose that the masked priming paradigm and continuous flash suppression represent two ways to experimentally examine these controversial claims.

  1. Evolutionary relevance facilitates visual information processing.

    Science.gov (United States)

    Jackson, Russell E; Calvillo, Dusti P

    2013-11-03

    Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  2. Evolutionary Relevance Facilitates Visual Information Processing

    Directory of Open Access Journals (Sweden)

    Russell E. Jackson

    2013-07-01

Visual search of the environment is a fundamental human behavior that perceptual load affects powerfully. Previously investigated means for overcoming the inhibitions of high perceptual load, however, generalize poorly to real-world human behavior. We hypothesized that humans would process evolutionarily relevant stimuli more efficiently than evolutionarily novel stimuli, and evolutionary relevance would mitigate the repercussions of high perceptual load during visual search. Animacy is a significant component to evolutionary relevance of visual stimuli because perceiving animate entities is time-sensitive in ways that pose significant evolutionary consequences. Participants completing a visual search task located evolutionarily relevant and animate objects fastest and with the least impact of high perceptual load. Evolutionarily novel and inanimate objects were located slowest and with the highest impact of perceptual load. Evolutionary relevance may importantly affect everyday visual information processing.

  3. Perceived Relevance of an Introductory Information Systems Course to Prospective Business Students

    Directory of Open Access Journals (Sweden)

    Irene Govender

    2013-12-01

The study is designed to examine students’ perceptions of the introductory Information Systems (IS) course. It was an exploratory study in which 67 students participated. A quantitative approach was followed making use of questionnaires for the collection of data. Using the theory of reasoned action as a framework, the study explores the factors that influence non-IS major students’ perceived relevance of the IS introductory course. The analysis of collected data included descriptive and inferential statistics. Using multiple regression analysis, the results suggest that overall, the independent variables, relevance of the content, previous IT knowledge, relevance for professional practice, IT preference in courses and peers’ influence may account for 72% of the explanatory power for the dependent variable, perceived relevance of the IS course. In addition, the results have shown some strong predictors (IT preference and peers’ influence) that influence students’ perceived relevance of the IS course. Practical work was found to be a strong mediating variable toward positive perceptions of IS. The results of this study suggest that students do indeed perceive the introductory IS course to be relevant and match their professional needs, but more practical work would enhance their learning. Implications for theory and practice are discussed as a result of the behavioural intention to perceive the IS course to be relevant and eventually to recruit more IS students.

  4. Statistical Symbolic Execution with Informed Sampling

    Science.gov (United States)

    Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco

    2014-01-01

Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
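The core estimation step described above, Monte Carlo sampling combined with Bayesian estimation of the probability of reaching a target event, can be sketched independently of any symbolic execution machinery. The program under test here is a hypothetical stand-in, and the conjugate Beta-Bernoulli update is a standard simplification of the paper's estimator:

```python
import random

def estimate_event_probability(run_once, samples=10_000, a=1.0, b=1.0):
    """Monte Carlo estimate of the probability that a run of the program
    hits a target event (e.g., an assert violation), with a Beta(a, b)
    prior; returns the posterior mean of the Beta-Bernoulli update."""
    hits = sum(run_once() for _ in range(samples))
    return (a + hits) / (a + b + samples)

random.seed(0)
# hypothetical program: the target event occurs on ~3% of inputs
prob = estimate_event_probability(lambda: random.random() < 0.03)
print(f"estimated probability of the target event: {prob:.3f}")
```

Informed sampling, in these terms, amounts to steering `run_once` away from already-quantified high-probability paths and exactly accounting for the pruned mass, which is what makes the paper's estimator converge faster than this naive sampler.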

  5. Information theory and statistics

    CERN Document Server

    Kullback, Solomon

    1968-01-01

    Highly useful text studies logarithmic measures of information and their application to testing statistical hypotheses. Includes numerous worked examples and problems. References. Glossary. Appendix. 1968 2nd, revised edition.

  6. Types of Lexicographical Information Needs and their Relevance for Information Science

    Directory of Open Access Journals (Sweden)

    Bergenholtz, Henning

    2017-09-01

In some situations, you need information in order to solve a problem that has occurred. In information science, user needs are often described through very specific examples rather than through a classification of the types of situations in which information needs occur. Furthermore, information science often describes general human needs, typically with a reference to Maslow's classification of needs (1954), instead of actual information needs. Lexicography has also focused on information needs, but has developed a more abstract classification of types of information needs, though (until more recent research into lexicographical functions) with a particular interest in linguistic uncertainties and the lack of knowledge and skills in relation to one or several languages. In this article, we suggest a classification of information needs in which a tripartition has been made according to the different types of situations: communicative needs, cognitive needs, and operative needs. This is a classification that is relevant and useful in general in our modern information society, and therefore also relevant for information science, including lexicography.

  7. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    Science.gov (United States)

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. 
In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogeneous.
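The two diagnostics contrasted above, the c-statistic and decile-based calibration, can be sketched in a few lines of plain Python. The helper names and any data fed to them are illustrative only, not the study's cohorts or models.

```python
# Hypothetical sketch of the two model checks contrasted in the abstract:
# the c-statistic (area under the ROC curve) and predicted-vs-observed
# event counts over risk deciles. Function names are our own.

def c_statistic(y, p):
    """Probability that a randomly chosen event outranks a non-event
    (ties count half). y: 0/1 outcomes, p: predicted risks."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    wins = sum((e > ne) + 0.5 * (e == ne) for e in events for ne in nonevents)
    return wins / (len(events) * len(nonevents))

def calibration_by_decile(y, p, bins=10):
    """Per risk-decile (predicted events, observed events) pairs, for a
    graphical calibration check like the one described above."""
    order = sorted(range(len(p)), key=lambda i: p[i])
    size = len(p) // bins
    table = []
    for b in range(bins):
        idx = order[b * size:(b + 1) * size] if b < bins - 1 else order[b * size:]
        table.append((sum(p[i] for i in idx), sum(y[i] for i in idx)))
    return table
```

A model can score a high c-statistic yet calibrate poorly (or vice versa), which is why the abstract treats the two as complementary.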

  8. Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada.

    Science.gov (United States)

    Smylie, Janet; Firestone, Michelle

Canada is known internationally for excellence in both the quality and public policy relevance of its health and social statistics. There is a double standard, however, with respect to the relevance and quality of statistics for Indigenous populations in Canada. Indigenous-specific health and social statistics gathering is informed by unique ethical, rights-based, policy and practice imperatives regarding the need for Indigenous participation and leadership in Indigenous data processes throughout the spectrum of indicator development, data collection, management, analysis and use. We demonstrate how current Indigenous data quality challenges, including misclassification errors and non-response bias, systematically contribute to a significant underestimate of inequities in health determinants, health status, and health care access between Indigenous and non-Indigenous people in Canada. The major quality challenge underlying these errors and biases is the lack of Indigenous-specific identifiers that are consistent and relevant in major health and social data sources. The recent removal of an Indigenous identity question from the Canadian census has resulted in further deterioration of an already suboptimal system. A revision of core health data sources to include relevant, consistent, and inclusive Indigenous self-identification is urgently required. These changes need to be carried out in partnership with Indigenous peoples and their representative and governing organizations.

  9. Determinants of Judgments of Explanatory Power: Credibility, Generality, and Statistical Relevance

    Science.gov (United States)

    Colombo, Matteo; Bucher, Leandra; Sprenger, Jan

    2017-01-01

    Explanation is a central concept in human psychology. Drawing upon philosophical theories of explanation, psychologists have recently begun to examine the relationship between explanation, probability and causality. Our study advances this growing literature at the intersection of psychology and philosophy of science by systematically investigating how judgments of explanatory power are affected by (i) the prior credibility of an explanatory hypothesis, (ii) the causal framing of the hypothesis, (iii) the perceived generalizability of the explanation, and (iv) the relation of statistical relevance between hypothesis and evidence. Collectively, the results of our five experiments support the hypothesis that the prior credibility of a causal explanation plays a central role in explanatory reasoning: first, because of the presence of strong main effects on judgments of explanatory power, and second, because of the gate-keeping role it has for other factors. Highly credible explanations are not susceptible to causal framing effects, but they are sensitive to the effects of normatively relevant factors: the generalizability of an explanation, and its statistical relevance for the evidence. These results advance current literature in the philosophy and psychology of explanation in three ways. First, they yield a more nuanced understanding of the determinants of judgments of explanatory power, and the interaction between these factors. Second, they show the close relationship between prior beliefs and explanatory power. Third, they elucidate the nature of abductive reasoning. PMID:28928679

  10. Testing the idea of privileged awareness of self-relevant information.

    Science.gov (United States)

    Stein, Timo; Siebold, Alisha; van Zoest, Wieske

    2016-03-01

Self-relevant information is prioritized in processing. Some have suggested the mechanism driving this advantage is akin to the automatic prioritization of physically salient stimuli in information processing (Humphreys & Sui, 2015). Here we investigate whether self-relevant information is prioritized for awareness under continuous flash suppression (CFS), as has been found for physical salience. Gabor patches with different orientations were first associated with the labels You or Other. Participants were more accurate in matching the self-relevant association, replicating previous findings of self-prioritization. However, breakthrough into awareness from CFS did not differ between self- and other-associated Gabors. These findings demonstrate that self-relevant information has no privileged access to awareness. Rather than modulating the initial visual processes that precede and lead to awareness, the advantage of self-relevant information may better be characterized as prioritization at later processing stages.

  11. Value Relevance of Accounting Information in the United Arab Emirates

    Directory of Open Access Journals (Sweden)

    Jamal Barzegari Khanagha

    2011-01-01

This paper examines the value relevance of accounting information in the pre- and post-periods of International Financial Reporting Standards (IFRS) implementation, using the regression and portfolio approaches for a sample of UAE companies. The results obtained from a combination of regression and portfolio approaches show that accounting information is value relevant in the UAE stock market. A comparison of the results for the periods before and after adoption, based on both regression and portfolio approaches, shows a decline in the value relevance of accounting information after the reform in accounting standards. This could be interpreted to mean that following IFRS in the UAE did not improve the value relevance of accounting information. However, results based on the portfolio approach show that cash flows' incremental information content increased in the post-IFRS period.

  12. 50 CFR 424.13 - Sources of information and relevant data.

    Science.gov (United States)

    2010-10-01

When considering any revision of the lists, the Secretary shall..., administrative reports, maps or other graphic materials, information received from experts on the subject, and...

  13. Alpha power gates relevant information during working memory updating.

    Science.gov (United States)

    Manza, Peter; Hau, Chui Luen Vera; Leung, Hoi-Chung

    2014-04-23

    Human working memory (WM) is inherently limited, so we must filter out irrelevant information in our environment or our mind while retaining limited important relevant contents. Previous work suggests that neural oscillations in the alpha band (8-14 Hz) play an important role in inhibiting incoming distracting information during attention and selective encoding tasks. However, whether alpha power is involved in inhibiting no-longer-relevant content or in representing relevant WM content is still debated. To clarify this issue, we manipulated the amount of relevant/irrelevant information using a task requiring spatial WM updating while measuring neural oscillatory activity via EEG and localized current sources across the scalp using a surface Laplacian transform. An initial memory set of two, four, or six spatial locations was to be memorized over a delay until an updating cue was presented indicating that only one or three locations remained relevant for a subsequent recognition test. Alpha amplitude varied with memory maintenance and updating demands among a cluster of left frontocentral electrodes. Greater postcue alpha power was associated with the high relevant load conditions (six and four dots cued to reduce to three relevant) relative to the lower load conditions (four and two dots reduced to one). Across subjects, this difference in alpha power was correlated with condition differences in performance accuracy. In contrast, no significant effects of irrelevant load were observed. These findings demonstrate that, during WM updating, alpha power reflects maintenance of relevant memory contents rather than suppression of no-longer-relevant memory traces.

  14. 76 FR 34075 - Request for Information (RFI) To Identify and Obtain Relevant Information From Public or Private...

    Science.gov (United States)

    2011-06-10

    ... Relevant Information From Public or Private Entities With an Interest in Biovigilance; Extension AGENCY... and obtain relevant information regarding the possible development of a public-private partnership... Identify and Obtain Relevant Information from Public or Private Entities with an Interest in Biovigilance...

  15. [Pitfalls in informed consent: a statistical analysis of malpractice law suits].

    Science.gov (United States)

    Echigo, Junko

    2014-05-01

In medical malpractice law suits, the notion of informed consent is often relevant in assessing whether negligence can be attributed to the medical practitioner who has caused injury to a patient. Furthermore, it is not rare for courts to award damages for a lack of appropriate informed consent alone. In this study, two results emerged from a statistical analysis of medical malpractice law suits. One, unexpectedly, was that the severity of a patient's illness made no significant difference to whether damages were awarded. The other was that cases involving treatment not covered by national medical insurance arose significantly more often than cases involving insured treatment. In cases where damages were awarded, the courts required more disclosure and written documentation of information by medical practitioners, especially about complications and adverse effects that the patient might suffer.

  16. Information trimming: Sufficient statistics, mutual information, and predictability from effective channel states

    Science.gov (United States)

    James, Ryan G.; Mahoney, John R.; Crutchfield, James P.

    2017-06-01

    One of the most basic characterizations of the relationship between two random variables, X and Y , is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y ) can be replaced by its minimal sufficient statistic about Y (or X ) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X 's minimal sufficient statistic preserves about Y is exactly the information that Y 's minimal sufficient statistic preserves about X . We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
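The central identity above, that replacing a variable by a sufficient statistic preserves mutual information, can be checked numerically on a toy joint distribution. The distribution and the statistic `x % 2` below are invented for illustration and are not the paper's processes or causal states.

```python
from collections import defaultdict
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits from an empirical list of (x, y) samples."""
    n = len(pairs)
    px, py, pxy = defaultdict(float), defaultdict(float), defaultdict(float)
    for x, y in pairs:
        px[x] += 1 / n
        py[y] += 1 / n
        pxy[(x, y)] += 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# Toy joint: Y depends on X only through X mod 2, so X mod 2 is a
# sufficient statistic for Y. (Counts are invented for illustration.)
samples = [(x, y) for x in range(4) for y in (0, 1)
           for _ in range(3 if y == x % 2 else 1)]
# "Trimming": replace X by its sufficient statistic X mod 2.
trimmed = [(x % 2, y) for x, y in samples]
```

Computing `mutual_information` on both lists gives the same value, the numerical counterpart of the preservation result the abstract generalizes to both variables at once.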

  17. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.

  18. Support Vector Machines: Relevance Feedback and Information Retrieval.

    Science.gov (United States)

    Drucker, Harris; Shahrary, Behzad; Gibbon, David C.

    2002-01-01

Compares support vector machines (SVMs) to the Rocchio, Ide regular, and Ide dec-hi algorithms in information retrieval (IR) of text documents using relevance feedback. If the preliminary search is so poor that one has to search through many documents to find at least one relevant document, then SVM is preferred. Includes nine tables. (Contains 24…
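For context, the Rocchio baseline that this record compares SVMs against is a simple vector-space update: move the query toward relevant documents and away from non-relevant ones. The weights alpha, beta, gamma below are conventional illustrative values, not those used in the paper.

```python
# Minimal sketch of Rocchio relevance feedback over term-weight vectors.
# Weights (alpha, beta, gamma) are illustrative defaults, not the paper's.

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Return an updated query vector: keep alpha of the original, add beta
    times the relevant centroid, subtract gamma times the non-relevant
    centroid, clamping negative weights to zero."""
    dims = len(query)
    new_q = []
    for i in range(dims):
        rel = sum(d[i] for d in relevant) / len(relevant) if relevant else 0.0
        non = sum(d[i] for d in nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
        new_q.append(max(0.0, alpha * query[i] + beta * rel - gamma * non))
    return new_q
```

Unlike an SVM, which learns a separating hyperplane from the judged documents, this update never needs an optimizer, which is partly why the comparison in the record is interesting when feedback is sparse.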

  19. System for selecting relevant information for decision support.

    Science.gov (United States)

    Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana

    2013-01-01

    We implemented a prototype of a decision support system called SIR which has a form of a web-based classification service for diagnostic decision support. The system has the ability to select the most relevant variables and to learn a classification rule, which is guaranteed to be suitable also for high-dimensional measurements. The classification system can be useful for clinicians in primary care to support their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs an information extraction from a high-dimensional set containing both clinical and gene expression data.

  20. Aspects of statistical spectroscopy relevant to effective-interaction theory

    International Nuclear Information System (INIS)

    French, J.B.

    1975-01-01

The three aspects of statistical spectroscopy discussed in this paper are: the information content of complex spectra; procedures for spectroscopy in huge model spaces, useful in effective-interaction theory; and practical ways of identifying and calculating measurable parameters of the effective Hamiltonian and other operators, and of comparing different effective Hamiltonians. (4 figures) (U.S.)

  1. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    Science.gov (United States)

    Mathe, Nathalie; Chen, James

    1994-01-01

Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.

  2. A content relevance model for social media health information.

    Science.gov (United States)

    Prybutok, Gayle Linda; Koh, Chang; Prybutok, Victor R

    2014-04-01

    Consumer health informatics includes the development and implementation of Internet-based systems to deliver health risk management information and health intervention applications to the public. The application of consumer health informatics to educational and interventional efforts such as smoking reduction and cessation has garnered attention from both consumers and health researchers in recent years. Scientists believe that smoking avoidance or cessation before the age of 30 years can prevent more than 90% of smoking-related cancers and that individuals who stop smoking fare as well in preventing cancer as those who never start. The goal of this study was to determine factors that were most highly correlated with content relevance for health information provided on the Internet for a study group of 18- to 30-year-old college students. Data analysis showed that the opportunity for convenient entertainment, social interaction, health information-seeking behavior, time spent surfing on the Internet, the importance of available activities on the Internet (particularly e-mail), and perceived site relevance for Internet-based sources of health information were significantly correlated with content relevance for 18- to 30-year-old college students, an educated subset of this population segment.

  3. Advances in statistical multisource-multitarget information fusion

    CERN Document Server

    Mahler, Ronald PS

    2014-01-01

    This is the sequel to the 2007 Artech House bestselling title, Statistical Multisource-Multitarget Information Fusion. That earlier book was a comprehensive resource for an in-depth understanding of finite-set statistics (FISST), a unified, systematic, and Bayesian approach to information fusion. The cardinalized probability hypothesis density (CPHD) filter, which was first systematically described in the earlier book, has since become a standard multitarget detection and tracking technique, especially in research and development.Since 2007, FISST has inspired a considerable amount of research

  4. Kinematic and dynamic pair collision statistics of sedimenting inertial particles relevant to warm rain initiation

    International Nuclear Information System (INIS)

    Rosa, Bogdan; Parishani, Hossein; Ayala, Orlando; Wang, Lian-Ping; Grabowski, Wojciech W

    2011-01-01

In recent years, the direct numerical simulation (DNS) approach has become a reliable tool for studying turbulent collision-coalescence of cloud droplets relevant to warm rain development. It has been shown that small-scale turbulent motion can enhance the collision rate of droplets either by enhancing the relative velocity and collision efficiency or by inertia-induced droplet clustering. A hybrid DNS approach incorporating DNS of air turbulence, disturbance flows due to droplets, and the droplet equation of motion has been developed to quantify these effects of air turbulence. Due to the computational complexity of the approach, a major challenge is to increase the range of scales or the size of the computational domain so that all scales affecting droplet pair statistics are simulated. Here we discuss our ongoing work in this direction, improving the parallel scalability of the code and studying the effect of large-scale forcing on pair statistics relevant to turbulent collision. New results at higher grid resolutions show a saturation of pair and collision statistics with increasing flow Reynolds number, for given Kolmogorov scales and small droplet sizes. Furthermore, we examine the orientation dependence of pair statistics, which reflects an interesting coupling of gravity and droplet clustering.

  5. Cogito ergo video: Task-relevant information is involuntarily boosted into awareness.

    Science.gov (United States)

    Gayet, Surya; Brascamp, Jan W; Van der Stigchel, Stefan; Paffen, Chris L E

    2015-01-01

    Only part of the visual information that impinges on our retinae reaches visual awareness. In a series of three experiments, we investigated how the task relevance of incoming visual information affects its access to visual awareness. On each trial, participants were instructed to memorize one of two presented hues, drawn from different color categories (e.g., red and green), for later recall. During the retention interval, participants were presented with a differently colored grating in each eye such as to elicit binocular rivalry. A grating matched either the task-relevant (memorized) color category or the task-irrelevant (nonmemorized) color category. We found that the rivalrous stimulus that matched the task-relevant color category tended to dominate awareness over the rivalrous stimulus that matched the task-irrelevant color category. This effect of task relevance persisted when participants reported the orientation of the rivalrous stimuli, even though in this case color information was completely irrelevant for the task of reporting perceptual dominance during rivalry. When participants memorized the shape of a colored stimulus, however, its color category did not affect predominance of rivalrous stimuli during retention. Taken together, these results indicate that the selection of task-relevant information is under volitional control but that visual input that matches this information is boosted into awareness irrespective of whether this is useful for the observer.

  6. Bootstrapping agency: How control-relevant information affects motivation.

    Science.gov (United States)

    Karsh, Noam; Eitam, Baruch; Mark, Ilya; Higgins, E Tory

    2016-10-01

How does information about one's control over the environment (e.g., having an own-action effect) influence motivation? The control-based response selection framework was proposed to predict and explain such findings. Its key tenet is that control-relevant information modulates both the frequency and speed of responses by determining whether a perceptual event is an outcome of one's actions or not. To test this framework empirically, the current study examines whether and how temporal and spatial contiguity/predictability, previously established as being important for one's sense of agency, modulate motivation from control. In 5 experiments, participants responded to a cue, potentially triggering a perceptual effect. Temporal (Experiments 1a-c) and spatial (Experiments 2a and b) contiguity/predictability between actions and their potential effects were experimentally manipulated. The influence of these control-relevant factors was measured, both indirectly (through their effect on explicit judgments of agency) and directly on response time and response frequency. The pattern of results was highly consistent with the control-based response selection framework in suggesting that control-relevant information reliably modulates the impact of "having an effect" on different levels of action selection. We discuss the implications of this study for the notion of motivation from control and for the empirical work on the sense of agency.

  7. Research methodology in dentistry: Part II — The relevance of statistics in research

    Science.gov (United States)

    Krithikadatta, Jogikalmat; Valarmathi, Srinivasan

    2012-01-01

The lifeline of original research depends on adept statistical analysis. However, there have been reports of statistical misconduct in studies that could arise from an inadequate understanding of the fundamentals of statistics. There have been several such reports across the medical and dental literature. This article aims at encouraging the reader to approach statistics from its logic rather than its theoretical perspective. The article also provides information on statistical misuse in the Journal of Conservative Dentistry between the years 2008 and 2011. PMID:22876003

  8. Fuzzy Mutual Information Based min-Redundancy and Max-Relevance Heterogeneous Feature Selection

    Directory of Open Access Journals (Sweden)

    Daren Yu

    2011-08-01

Feature selection is an important preprocessing step in pattern classification and machine learning, and mutual information is widely used to measure relevance between features and decision. However, it is difficult to directly calculate relevance between continuous or fuzzy features using mutual information. In this paper we introduce the fuzzy information entropy and fuzzy mutual information for computing relevance between numerical or fuzzy features and decision. The relationship between fuzzy information entropy and differential entropy is also discussed. Moreover, we combine fuzzy mutual information with "min-Redundancy-Max-Relevance", "Max-Dependency" and "min-Redundancy-Max-Dependency" algorithms. The performance and stability of the proposed algorithms are tested on benchmark data sets. Experimental results show the proposed algorithms are effective and stable.
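The crisp (non-fuzzy) min-Redundancy-Max-Relevance criterion that these fuzzy variants extend can be sketched with plain discrete mutual information: greedily pick the feature maximizing relevance to the target minus average redundancy with already-selected features. The feature data in any example run are invented.

```python
from collections import Counter
from math import log2

def mi(a, b):
    """Mutual information (bits) between two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum((c / n) * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def mrmr(features, target, k):
    """Greedy min-Redundancy Max-Relevance selection on discrete features;
    a sketch of the classical criterion, not the paper's fuzzy version."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        def score(i):
            relevance = mi(features[i], target)
            redundancy = (sum(mi(features[i], features[j]) for j in selected)
                          / len(selected)) if selected else 0.0
            return relevance - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The fuzzy extension in the record replaces `mi` with fuzzy mutual information so that continuous features need not be discretized first.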

  9. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

    Science.gov (United States)

    Petry, Frederick E.; And Others

    1993-01-01

Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…

  10. Statistical Literacy in the Data Science Workplace

    Science.gov (United States)

    Grant, Robert

    2017-01-01

    Statistical literacy, the ability to understand and make use of statistical information including methods, has particular relevance in the age of data science, when complex analyses are undertaken by teams from diverse backgrounds. Not only is it essential to communicate to the consumers of information but also within the team. Writing from the…

  11. Disclosure of Non-Financial Information: Relevant to Financial Analysts?

    OpenAIRE

    ORENS, Raf; LYBAERT, Nadine

    2013-01-01

    The decline in the relevance of financial statement information to value firms leads to calls from organizational stakeholders to convey non-financial information in order to be able to judge firms' financial performance and value. This literature review aims to report extant literature findings on the use of corporate non-financial information by sell-side financial analysts, the information intermediaries between corporate management and investors. Prior studies highlight that financial ana...

  12. Information sharing during diagnostic assessments: what is relevant for parents?

    Science.gov (United States)

    Klein, Sheryl; Wynn, Kerry; Ray, Lynne; Demeriez, Lori; LaBerge, Patricia; Pei, Jacqueline; St Pierre, Cherie

    2011-05-01

This descriptive qualitative study facilitates the application of family-centered care within a tertiary care interdisciplinary neurodevelopmental diagnostic assessment clinic by furthering an understanding of parent perceptions of the relevance of diagnostic information provision. An interdisciplinary assessment team completed an open-ended questionnaire to describe parent information provision. Parents from 9 families completed in-depth parent interviews following clinic attendance to discuss perceptions of information received. Interviews were audiotaped, transcribed, and coded by related themes. Parents did not perceive the information in the way professionals expected. Parents acknowledged receipt of comprehensive information relevant to the diagnosis but indicated that not all their needs were met. During the interviews, parents described the assessment process, preassessment information, and "steps in their journey." They noted that a strength-based approach and a focus on parental competency would support their coping efforts. Results underscore the need for professionals to be attentive to parents' individualized needs.

  13. Impact of Non Accounting Information on The Value Relevance of Accounting Information: The Case of Jordan

    Directory of Open Access Journals (Sweden)

    DHIAA SHAMKI

    2013-07-01

The paper presents empirical evidence about the impact of a firm's shareholder number, as non-accounting information, on the value relevance of its earnings and book value of equity, as accounting information, for Jordanian industrial firms for the period from 1993 to 2002. Employing return regression analysis and using shareholder numbers in two proxies, namely local and foreign shareholder numbers, the findings of the study are fourfold. First, individual earnings are value relevant while book value is irrelevant. Second, combining earnings with book value leads both of them to be irrelevant. Third, extending the local shareholder number has a significant impact on the value relevance of individual and combined earnings. Fourth, extending the foreign shareholder number has a significant impact on the value relevance of individual book value and combined earnings. Since studies on the value relevance of these variables have neglected Jordan (and the Middle Eastern region), this study is the first, especially in Jordan, that tries to fill this gap by examining the impact of shareholder numbers on the value relevance of earnings and book value to indicate firm value.

  14. Mathematical statistics and stochastic processes

    CERN Document Server

    Bosq, Denis

    2013-01-01

Generally, books on mathematical statistics are restricted to the case of independent identically distributed random variables. In this book, however, both this case AND the case of dependent variables, i.e. statistics for discrete and continuous time processes, are studied. This second case is very important for today's practitioners. Mathematical Statistics and Stochastic Processes is based on decision theory and asymptotic statistics and contains up-to-date information on the relevant topics of theory of probability, estimation, confidence intervals, non-parametric statistics and rob

  15. The Relevance of Information and Communication Technologies in ...

    African Journals Online (AJOL)

The Relevance of Information and Communication Technologies in Libraries Services and Librarianship Profession in the 21st Century ... This paper therefore examines the importance of ICT in librarianship as a ...

  16. Passage relevance models for genomics search

    Directory of Open Access Journals (Sweden)

    Frieder Ophir

    2009-03-01

We present a passage relevance model for integrating syntactic and semantic evidence of biomedical concepts and topics using a probabilistic graphical model. Component models of topics, concepts, terms, and documents are represented as potential functions within a Markov Random Field. The probability of a passage being relevant to a biologist's information need is represented as the joint distribution across all potential functions. Relevance model feedback of top-ranked passages is used to improve distributional estimates of query concepts and topics in context, and a dimensional indexing strategy is used for efficient aggregation of concept and term statistics. By integrating multiple sources of evidence, including dependencies between topics, concepts, and terms, we seek to improve genomics literature passage retrieval precision. Using this model, we are able to demonstrate statistically significant improvements in retrieval precision using a large genomics literature corpus.

  17. Temporal and Statistical Information in Causal Structure Learning

    Science.gov (United States)

    McCormack, Teresa; Frosch, Caren; Patrick, Fiona; Lagnado, David

    2015-01-01

    Three experiments examined children's and adults' abilities to use statistical and temporal information to distinguish between common cause and causal chain structures. In Experiment 1, participants were provided with conditional probability information and/or temporal information and asked to infer the causal structure of a 3-variable mechanical…

  18. Encryption of covert information into multiple statistical distributions

    International Nuclear Information System (INIS)

    Venkatesan, R.C.

    2007-01-01

    A novel strategy to encrypt covert information (code) via unitary projections into the null spaces of ill-conditioned eigenstructures of multiple host statistical distributions, inferred from incomplete constraints, is presented. The host pdf's are inferred using the maximum entropy principle. The projection of the covert information is dependent upon the pdf's of the host statistical distributions. The security of the encryption/decryption strategy is based on the extreme instability of the encoding process. A self-consistent procedure to derive keys for both symmetric and asymmetric cryptography is presented. The advantages of using a multiple pdf model to achieve encryption of covert information are briefly highlighted. Numerical simulations exemplify the efficacy of the model

  19. The relevance of accounting information enclosed in performance indicators

    Directory of Open Access Journals (Sweden)

    Mihaela-Cristina Onica

    2012-12-01

    Full Text Available This research study analyzes the relevance of accounting information reflected through the elaboration of firm performance variables and their administration, given the necessity for performance to be administrated. The subject of the theme is embedded in the current development of accounting norms at the national, European (Directives) and international (IAS/IFRS) levels. The analyzed topic is based upon the capability of accounting to generate information; through synthesis calculus, the nature, the characteristics and the informational valences of the financial performance of an organization are settled. Accounting information is the basis of the decision process. The role of accounting in ensuring the relevance and comparability of information has increased significantly, being already indispensable. A real solution emerged for eliminating communication misunderstandings that result from disputes in the perception and interpretation of economic information arising from nation-specific norms. Economic communication is demanding for the firm not only in its expression but in thinking and in the process of method conceptualization of organization and administration. A detailed analysis of the financial situation, employing annual financial analysis procedures and underlining the factors influencing performance and risks, is considered a starting point for addressing the issue. The introduced variables ensure a whole vision of firm activity and an appropriate strategy for the significance of results.

  20. Information Geometric Complexity of a Trivariate Gaussian Statistical Model

    Directory of Open Access Journals (Sweden)

    Domenico Felice

    2014-05-01

    Full Text Available We evaluate the information geometric complexity of entropic motion on low-dimensional Gaussian statistical manifolds in order to quantify how difficult it is to make macroscopic predictions about systems in the presence of limited information. Specifically, we observe that the complexity of such entropic inferences not only depends on the amount of available pieces of information but also on the manner in which such pieces are correlated. Finally, we uncover that, for certain correlational structures, the impossibility of reaching the most favorable configuration from an entropic inference viewpoint seems to lead to an information geometric analog of the well-known frustration effect that occurs in statistical physics.

  1. Electronic patient records in action: Transforming information into professionally relevant knowledge.

    Science.gov (United States)

    Winman, Thomas; Rystedt, Hans

    2011-03-01

    The implementation of generic models for organizing information in complex institutions like those in healthcare creates a gap between standardization and the need for locally relevant knowledge. The present study addresses how this gap can be bridged by focusing on the practical work of healthcare staff in transforming information in EPRs into knowledge that is useful for everyday work. Video recording of shift handovers on a rehabilitation ward serves as the empirical case. The results show how extensive selections and reorganizations of information in EPRs are carried out in order to transform information into professionally relevant accounts. We argue that knowledge about the institutional obligations and professional ways of construing information are fundamental for these transitions. The findings point to the need to consider the role of professional knowledge inherent in unpacking information in efforts to develop information systems intended to bridge between institutional and professional boundaries in healthcare. © The Author(s) 2011.

  2. Coordination of the National Statistical System in the Information Security Context

    Directory of Open Access Journals (Sweden)

    O. H.

    2017-12-01

    Full Text Available The need for building the national statistical system (NSS) as the framework for coordination of statistical works is substantiated. NSS is defined on the basis of a system approach. It is emphasized that the essential conditions underlying NSS are strategic planning, reliance on internationally adopted methods and due consideration to the country-specific environment. The role of the state coordination policy in organizing statistical activities in the NSS framework is highlighted, and key objectives of the integrated national policy on coordination of statistical activities are given. Threats arising from the non-existence of NSS in a country are shown: an "irregular" pattern of statistical activities, resulting from the absence of common legal, methodological and organizational grounds; high costs involved in the finished information product in parallel with its low quality; impossibility of administering statistical information security in a coherent manner, i.e. complying with the rules on confidentiality of data, preventing intentional distortion of information, and complying with the rules for handling data constituting state secrets. An extensive review of NSS functional objectives is made: to ensure the system development of the official statistics; to ensure confidentiality and protection of individual data; to establish interdepartmental mechanisms for control and protection of secret statistical information; to broaden and regulate the access to statistical data and their effective use. The need for creating the National Statistical Commission is grounded.

  3. Annual statistical information 1996; Informe estatistico anual 1996

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This annual statistical report aims to disseminate information on the evolution of the generation, transmission and distribution systems and on the electric power market of the Parana State, Brazil, in 1996. Electric power consumption in the distribution area of the Parana Power Company (COPEL) grew by about 6.7%. Electric power production in the COPEL plants was 42.2% higher than in 1995, owing to the outflows verified in the Iguacu river and to the long period of reduced inflows experienced by the Southern region reservoirs during the year. This report presents statistical data on the following topics: a) electric power balance of the Parana State; b) electric power balance of COPEL - own generation, certain interchange, electric power requirements, direct distribution and the electric system. 6 graphs, 3 maps, 61 tabs.; e-mail: splcnmr at mail.copel.br

  4. The influence of narrative v. statistical information on perceiving vaccination risks.

    Science.gov (United States)

    Betsch, Cornelia; Ulshöfer, Corina; Renkewitz, Frank; Betsch, Tilmann

    2011-01-01

    Health-related information found on the Internet is increasing and impacts patient decision making, e.g. regarding vaccination decisions. In addition to statistical information (e.g. incidence rates of vaccine adverse events), narrative information is also widely available such as postings on online bulletin boards. Previous research has shown that narrative information can impact treatment decisions, even when statistical information is presented concurrently. As the determinants of this effect are largely unknown, we will vary features of the narratives to identify mechanisms through which narratives impact risk judgments. An online bulletin board setting provided participants with statistical information and authentic narratives about the occurrence and nonoccurrence of adverse events. Experiment 1 followed a single factorial design with 1, 2, or 4 narratives out of 10 reporting adverse events. Experiment 2 implemented a 2 (statistical risk 20% vs. 40%) × 2 (2/10 vs. 4/10 narratives reporting adverse events) × 2 (high vs. low richness) × 2 (high vs. low emotionality) between-subjects design. Dependent variables were perceived risk of side-effects and vaccination intentions. Experiment 1 shows an inverse relation between the number of narratives reporting adverse events and vaccination intentions, which was mediated by the perceived risk of vaccinating. Experiment 2 showed a stronger influence of the number of narratives than of the statistical risk information. High (vs. low) emotional narratives had a greater impact on the perceived risk, while richness had no effect. The number of narratives influences risk judgments and can potentially override statistical information about risk.

  5. Task-relevant information is prioritized in spatiotemporal contextual cueing.

    Science.gov (United States)

    Higuchi, Yoko; Ueda, Yoshiyuki; Ogawa, Hirokazu; Saiki, Jun

    2016-11-01

    Implicit learning of visual contexts facilitates search performance-a phenomenon known as contextual cueing; however, little is known about contextual cueing under situations in which multidimensional regularities exist simultaneously. In everyday vision, different information, such as object identity and location, appears simultaneously and interacts with each other. We tested the hypothesis that, in contextual cueing, when multiple regularities are present, the regularities that are most relevant to our behavioral goals would be prioritized. Previous studies of contextual cueing have commonly used the visual search paradigm. However, this paradigm is not suitable for directing participants' attention to a particular regularity. Therefore, we developed a new paradigm, the "spatiotemporal contextual cueing paradigm," and manipulated task-relevant and task-irrelevant regularities. In four experiments, we demonstrated that task-relevant regularities were more responsible for search facilitation than task-irrelevant regularities. This finding suggests our visual behavior is focused on regularities that are relevant to our current goal.

  6. Iterative Filtering of Retrieved Information to Increase Relevance

    Directory of Open Access Journals (Sweden)

    Robert Zeidman

    2007-12-01

    Full Text Available Efforts have been underway for years to find more effective ways to retrieve information from large knowledge domains. This effort is now being driven particularly by the Internet and the vast amount of information that is available to unsophisticated users. In the early days of the Internet, some effort involved allowing users to enter Boolean equations of search terms into search engines, for example, rather than just a list of keywords. More recently, effort has focused on understanding a user's desires from past search histories in order to narrow searches. Also there has been much effort to improve the ranking of results based on some measure of relevancy. This paper discusses using iterative filtering of retrieved information to focus in on useful information. This work was done for finding source code correlation and the author extends his findings to Internet searching and e-commerce. The paper presents specific information about a particular filtering application and then generalizes it to other forms of information retrieval.

  7. Online drug databases: a new method to assess and compare inclusion of clinically relevant information.

    Science.gov (United States)

    Silva, Cristina; Fresco, Paula; Monteiro, Joaquim; Rama, Ana Cristina Ribeiro

    2013-08-01

    Evidence-Based Practice requires health care decisions to be based on the best available evidence. The model "Information Mastery" proposes that clinicians should use sources of information that have previously evaluated relevance and validity, provided at the point of care. Drug databases (DB) allow easy and fast access to information and have the benefit of more frequent content updates. Relevant information, in the context of drug therapy, is that which supports safe and effective use of medicines. Accordingly, the European Guideline on the Summary of Product Characteristics (EG-SmPC) was used as a standard to evaluate the inclusion of relevant information contents in DB. To develop and test a method to evaluate relevancy of DB contents, by assessing the inclusion of information items deemed relevant for effective and safe drug use. Hierarchical organisation and selection of the principles defined in the EG-SmPC; definition of criteria to assess inclusion of selected information items; creation of a categorisation and quantification system that allows score calculation; calculation of relative differences (RD) of scores for comparison with an "ideal" database, defined as the one that achieves the best quantification possible for each of the information items; pilot test on a sample of 9 drug databases, using 10 drugs frequently associated in literature with morbidity-mortality and also being widely consumed in Portugal. Main outcome measure: calculation of individual and global scores for clinically relevant information items of drug monographs in databases, using the categorisation and quantification system created. A--Method development: selection of sections, subsections, relevant information items and corresponding requisites; system to categorise and quantify their inclusion; score and RD calculation procedure. 
B--Pilot test: calculated scores for the 9 databases; globally, all databases evaluated significantly differed from the "ideal" database; some DB performed
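The scoring idea described in this abstract, quantified inclusion of information items compared against an "ideal" database via relative differences (RD), can be sketched roughly as follows. The item names, weights and inclusion fractions below are invented for illustration and are not taken from the paper:

```python
# Hypothetical sketch of database scoring against an "ideal" database.
# Each information item carries a weight; a database's score is the
# weighted sum of how completely it includes each item (0..1), and its
# relative difference (RD) measures how far it falls short of the ideal.
ITEM_WEIGHTS = {"indications": 2, "contraindications": 2,
                "interactions": 2, "adverse_reactions": 2, "posology": 1}

def db_score(inclusion):
    """inclusion maps item -> fraction of requisites met (0..1)."""
    return sum(ITEM_WEIGHTS[item] * frac for item, frac in inclusion.items())

# The "ideal" database achieves full inclusion of every item.
IDEAL = db_score({item: 1.0 for item in ITEM_WEIGHTS})

def relative_difference(score):
    return (IDEAL - score) / IDEAL

score = db_score({"indications": 1.0, "contraindications": 0.5,
                  "interactions": 1.0, "adverse_reactions": 0.5,
                  "posology": 1.0})
print(relative_difference(score))  # fraction short of the ideal (≈ 0.22)
```

A larger RD means the database omits more of the clinically relevant content defined by the standard.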

  8. Communicating stereotype-relevant information: is factual information subject to the same communication biases as fictional information?

    Science.gov (United States)

    Goodman, Ruth L; Webb, Thomas L; Stewart, Andrew J

    2009-07-01

    Factual information is more frequently read and discussed than fictional information. However, research on the role of communication in shaping stereotypes has focused almost exclusively on fictional narratives. In Experiments 1 and 2 a newspaper article containing information about heroin users was communicated along chains of 4 people. No stereotype-consistency bias was observed. Instead, a greater proportion of stereotype-inconsistent information was communicated than was stereotype-consistent or -neutral information. Three further experiments investigated explanations for the difference between the communication of fictional and factual information. Experiment 3 ruled out the possibility that participants' beliefs about the validity of the information could influence the way that it is communicated. Experiments 4 and 5 divided information into concrete (a specific event or fact) or abstract (opinion). A stereotype-consistency bias emerged only for abstract information. In summary, linguistic abstraction moderates whether stereotype-consistency biases emerge in the communication of stereotype-relevant factual information.

  9. Why relevance theory is relevant for lexicography

    DEFF Research Database (Denmark)

    Bothma, Theo; Tarp, Sven

    2014-01-01

    This article starts by providing a brief summary of relevance theory in information science in relation to the function theory of lexicography, explaining the different types of relevance, viz. objective system relevance and the subjective types of relevance, i.e. topical, cognitive, situational...... that is very important for lexicography as well as for information science, viz. functional relevance. Since all lexicographic work is ultimately aimed at satisfying users’ information needs, the article then discusses why the lexicographer should take note of all these types of relevance when planning a new...... dictionary project, identifying new tasks and responsibilities of the modern lexicographer. The article furthermore discusses how relevance theory impacts on teaching dictionary culture and reference skills. By integrating insights from lexicography and information science, the article contributes to new...

  10. Age differences in attention toward decision-relevant information: education matters.

    Science.gov (United States)

    Xing, Cai; Isaacowitz, Derek

    2011-01-01

    Previous studies suggested that older adults are more likely to engage in heuristic decision-making than young adults. This study used an eye-tracking technique to examine young adults' and highly educated older adults' attention toward two types of decision-relevant information: a heuristic cue vs. factual cues. Surprisingly, highly educated older adults showed a reversed age pattern-they looked more toward factual cues than did young adults. This age difference disappeared after controlling for educational level. Additionally, education correlated with attentional pattern to decision-relevant information. We interpret this finding as an indication of the power of education: education may modify what are thought to be "typical" age differences in decision-making, and education may influence young and older people's decision-making via different paths.

  11. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Directory of Open Access Journals (Sweden)

    Anita Lindmark

    Full Text Available When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. 
This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence.The study emphasizes the importance of combining clinical relevance and level of statistical

  12. The Importance of Integrating Clinical Relevance and Statistical Significance in the Assessment of Quality of Care--Illustrated Using the Swedish Stroke Register.

    Science.gov (United States)

    Lindmark, Anita; van Rompaye, Bart; Goetghebeur, Els; Glader, Eva-Lotta; Eriksson, Marie

    2016-01-01

    When profiling hospital performance, quality indicators are commonly evaluated through hospital-specific adjusted means with confidence intervals. When identifying deviations from a norm, large hospitals can have statistically significant results even for clinically irrelevant deviations while important deviations in small hospitals can remain undiscovered. We have used data from the Swedish Stroke Register (Riksstroke) to illustrate the properties of a benchmarking method that integrates considerations of both clinical relevance and level of statistical significance. The performance measure used was case-mix adjusted risk of death or dependency in activities of daily living within 3 months after stroke. A hospital was labeled as having outlying performance if its case-mix adjusted risk exceeded a benchmark value with a specified statistical confidence level. The benchmark was expressed relative to the population risk and should reflect the clinically relevant deviation that is to be detected. A simulation study based on Riksstroke patient data from 2008-2009 was performed to investigate the effect of the choice of the statistical confidence level and benchmark value on the diagnostic properties of the method. Simulations were based on 18,309 patients in 76 hospitals. The widely used setting, comparing 95% confidence intervals to the national average, resulted in low sensitivity (0.252) and high specificity (0.991). There were large variations in sensitivity and specificity for different requirements of statistical confidence. Lowering statistical confidence improved sensitivity with a relatively smaller loss of specificity. Variations due to different benchmark values were smaller, especially for sensitivity. This allows the choice of a clinically relevant benchmark to be driven by clinical factors without major concerns about sufficiently reliable evidence. 
The study emphasizes the importance of combining clinical relevance and level of statistical confidence when
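The core of the benchmarking rule described in this abstract, flagging a hospital only when its risk exceeds a clinically chosen benchmark (expressed relative to the population risk) with a chosen level of one-sided statistical confidence, can be sketched as follows. This is a minimal illustration using a plain normal approximation on raw proportions; the study itself works with case-mix adjusted risks, and all numbers below are invented:

```python
# Sketch: flag a hospital as an outlier only if its observed risk
# exceeds benchmark = relative_benchmark * population_risk with the
# required one-sided statistical confidence (normal approximation).
from statistics import NormalDist

def flag_outlier(events, n, population_risk,
                 relative_benchmark=1.2, confidence=0.80):
    p_hat = events / n                              # observed risk
    se = (p_hat * (1 - p_hat) / n) ** 0.5           # standard error
    benchmark = relative_benchmark * population_risk
    z = (p_hat - benchmark) / se                    # one-sided z statistic
    return z > NormalDist().inv_cdf(confidence)

# A small hospital with an elevated rate may not be flagged at high
# confidence, while lowering the confidence level improves sensitivity:
print(flag_outlier(events=30, n=60, population_risk=0.35, confidence=0.95))  # False
print(flag_outlier(events=30, n=60, population_risk=0.35, confidence=0.60))  # True
```

The trade-off the paper quantifies shows up directly: the confidence level controls how much evidence is demanded before flagging, while the relative benchmark encodes what deviation counts as clinically relevant.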

  13. Information Geometry, Inference Methods and Chaotic Energy Levels Statistics

    OpenAIRE

    Cafaro, Carlo

    2008-01-01

    In this Letter, we propose a novel information-geometric characterization of chaotic (integrable) energy level statistics of a quantum antiferromagnetic Ising spin chain in a tilted (transverse) external magnetic field. Finally, we conjecture our results might find some potential physical applications in quantum energy level statistics.

  14. Phase synchronization of delta and theta oscillations increase during the detection of relevant lexical information

    Directory of Open Access Journals (Sweden)

    Enzo eBrunetti

    2013-06-01

    Full Text Available During monitoring of the discourse, the detection of the relevance of incoming lexical information could be critical for its incorporation to update mental representations in memory. Because, in these situations, the relevance for lexical information is defined by abstract rules that are maintained in memory, a central aspect to elucidate is how an abstract level of knowledge maintained in mind mediates the detection of the lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information, based on 'kept in mind' rules deriving from more abstract semantic information. We tested our hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities derived from implicit aspects. We used a categorization task, in which the semantic relevance was previously defined based on the congruency between a kept in mind category (abstract knowledge), and the lexical-semantic information presented. Our results show that during the detection of the relevant lexical information, phase synchronization of neuronal oscillations selectively increases in delta and theta frequency bands during the interval of semantic analysis. These increments were independent of the semantic category maintained in memory, had a temporal profile specific for each subject, and were mainly induced, as they had no effect on the evoked mean global field power. Also, recruitment of an increased number of pairs of electrodes was a robust observation during the detection of semantic contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule, could be mediated by increasing the global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended number of cortical regions.

  15. Phase synchronization of delta and theta oscillations increase during the detection of relevant lexical information.

    Science.gov (United States)

    Brunetti, Enzo; Maldonado, Pedro E; Aboitiz, Francisco

    2013-01-01

    During monitoring of the discourse, the detection of the relevance of incoming lexical information could be critical for its incorporation to update mental representations in memory. Because, in these situations, the relevance for lexical information is defined by abstract rules that are maintained in memory, a central aspect to elucidate is how an abstract level of knowledge maintained in mind mediates the detection of the lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information, based on "kept in mind" rules deriving from more abstract semantic information. We tested our hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities derived from implicit aspects. We used a categorization task, in which the semantic relevance was previously defined based on the congruency between a kept in mind category (abstract knowledge), and the lexical semantic information presented. Our results show that during the detection of the relevant lexical information, phase synchronization of neuronal oscillations selectively increases in delta and theta frequency bands during the interval of semantic analysis. These increments occurred irrespective of the semantic category maintained in memory, had a temporal profile specific for each subject, and were mainly induced, as they had no effect on the evoked mean global field power. Also, recruitment of an increased number of pairs of electrodes was a robust observation during the detection of semantic contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule, could be mediated by increasing the global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended number of cortical regions.

  16. A protocol for classifying ecologically relevant marine zones, a statistical approach

    Science.gov (United States)

    Verfaillie, Els; Degraer, Steven; Schelfaut, Kristien; Willems, Wouter; Van Lancker, Vera

    2009-06-01

    Mapping ecologically relevant zones in the marine environment has become increasingly important. Biological data are however often scarce and alternatives are being sought in optimal classifications of abiotic variables. The concept of 'marine landscapes' is based on a hierarchical classification of geological, hydrographic and other physical data. This approach is however subject to many assumptions and subjective decisions. An objective protocol for zonation is being proposed here where abiotic variables are subjected to a statistical approach, using principal components analysis (PCA) and a cluster analysis. The optimal number of clusters (or zones) is being defined using the Calinski-Harabasz criterion. The methodology has been applied on datasets of the Belgian part of the North Sea (BPNS), a shallow sandy shelf environment with a sandbank-swale topography. The BPNS was classified into 8 zones that represent well the natural variability of the seafloor. The internal cluster consistency was validated with a split-run procedure, with more than 99% correspondence between the validation and the original dataset. The ecological relevance of 6 out of the 8 zones was demonstrated, using indicator species analysis. The proposed protocol, as exemplified for the BPNS, can easily be applied to other areas and provides a strong knowledge basis for environmental protection and management of the marine environment. A SWOT-analysis, showing the strengths, weaknesses, opportunities and threats of the protocol was performed.
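The statistical protocol described in this abstract (PCA on standardized abiotic variables, cluster analysis, and selection of the number of zones by the Calinski-Harabasz criterion) can be sketched as follows, assuming scikit-learn; the synthetic data and variable choices are illustrative, not the authors' BPNS dataset:

```python
# Sketch of the zonation protocol: standardize abiotic variables,
# reduce them with PCA, cluster the component scores, and choose the
# number of clusters (zones) maximizing the Calinski-Harabasz score.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
# Hypothetical abiotic grid: depth, median grain size, current speed, slope
X = rng.normal(size=(500, 4))
X[:250] += 3.0  # introduce structure so clustering is meaningful

X_std = StandardScaler().fit_transform(X)
scores = PCA(n_components=2).fit_transform(X_std)

best_k, best_ch = None, -np.inf
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    ch = calinski_harabasz_score(scores, labels)
    if ch > best_ch:
        best_k, best_ch = k, ch

print(f"optimal number of zones: {best_k}")
```

The paper's split-run validation could then be approximated by re-running the clustering on random halves of the grid and comparing the resulting labels.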

  17. The relevance of music information representation metadata from the perspective of expert users

    Directory of Open Access Journals (Sweden)

    Camila Monteiro de Barros

    Full Text Available The general goal of this research was to verify which metadata elements of music information representation are relevant for its retrieval from the perspective of expert music users. Based on a bibliographical research, a comprehensive metadata set of music information representation was developed and transformed into a questionnaire for data collection, which was applied to students and professors of the Graduate Program in Music at the Federal University of Rio Grande do Sul. The results show that the most relevant information for expert music users is related to identification and authorship responsibilities. The respondents from Composition and Interpretative Practice areas agree with these results, while the respondents from Musicology/Ethnomusicology and Music Education areas also consider the metadata related to the historical context of composition relevant.

  18. the effect of current and relevant information sources on the use

    African Journals Online (AJOL)

    Admin

    reported similar findings at Yaba College of Technology, Lagos. However, in a ... values. In other words, current information sources resulted in the use of the library. Jam (1992) identified lack of relevant information sources to be one of the problems facing library users and has ... Bachelor's degree holders. That those with.

  19. Elementary statistics for effective library and information service management

    CERN Document Server

    Egghe, Leo

    2001-01-01

    This title describes how best to use statistical data to produce professional reports on library activities. The authors cover data gathering, sampling, graphical representation of data and summary statistics from data, and also include a section on trend analysis. A full bibliography and a subject index make this a key title for any information professional.

  20. Task-Relevant Information Modulates Primary Motor Cortex Activity Before Movement Onset.

    Science.gov (United States)

    Calderon, Cristian B; Van Opstal, Filip; Peigneux, Philippe; Verguts, Tom; Gevers, Wim

    2018-01-01

    Monkey neurophysiology research supports the affordance competition hypothesis (ACH), which proposes that cognitive information useful for action selection is integrated in sensorimotor areas. In this view, action selection emerges from the simultaneous representation of competing action plans, biased in parallel by relevant task factors. This biased competition would take place up to primary motor cortex (M1). Although the ACH is plausible in environments affording choices between actions, its relevance for human decision making is less clear. To address this issue, we designed a functional magnetic resonance imaging (fMRI) experiment modeled after monkey neurophysiology studies, in which human participants processed cues conveying predictive information about upcoming button presses. Our results demonstrate that, as predicted by the ACH, predictive information (i.e., the relevant task factor) biases activity of primary motor regions. Specifically, first, activity before movement onset in contralateral M1 increases, relative to activity in ipsilateral M1, as the competition is biased in favor of a specific button press. Second, motor regions were more tightly coupled with fronto-parietal regions when competition between potential actions was high, again suggesting that motor regions are part of the biased competition network. Our findings support the idea that action planning dynamics as proposed in the ACH are valid in both human and non-human primates.

  1. Relevant Information and Informed Consent in Research: In Defense of the Subjective Standard of Disclosure.

    Science.gov (United States)

    Dranseika, Vilius; Piasecki, Jan; Waligora, Marcin

    2017-02-01

    In this article, we seek to contribute to the debate on the requirement of disclosure in the context of informed consent for research. We defend the subjective standard of disclosure and describe ways to implement this standard in research practice. We claim that the researcher should make an effort to find out what kinds of information are likely to be relevant for those consenting to research. This invites researchers to take empirical survey information seriously, attempt to understand the cultural context, talk to patients to be better able to understand what can be potentially different concerns and interests prevalent in the target population. The subjective standard of disclosure should be seen as a moral ideal that perhaps can never be perfectly implemented but still can and should be used as a normative ideal guiding research practice. In the light of these discussions, we call for more empirical research on what considerations are likely to be perceived as relevant by potential research participants recruited from different socio-economic and cultural groups.

  2. 77 FR 42339 - Improving Contracting Officers' Access to Relevant Integrity Information

    Science.gov (United States)

    2012-07-18

    ... contracting officers' access to relevant information about contractor business ethics in the Federal Awardee... ability to evaluate the business ethics of prospective contractors and protect the Government from...

  3. The Common Body of Knowledge: A Framework to Promote Relevant Information Security Research

    Directory of Open Access Journals (Sweden)

    Kenneth J. Knapp

    2007-03-01

    This study proposes using an established common body of knowledge (CBK) as one means of organizing information security literature. Consistent with calls for more relevant information systems (IS) research, this industry-developed framework can motivate future research towards topics that are important to the security practitioner. In this review, forty-eight articles from ten IS journals from 1995 to 2004 are selected and cross-referenced to the ten domains of the information security CBK. Further, we distinguish articles as empirical research, frameworks, or tutorials. Generally, this study identified a need for additional empirical research in every CBK domain, including topics related to legal aspects of information security. Specifically, this study identified a need for additional IS security research relating to applications development, physical security, operations security, and business continuity. The CBK framework is inherently practitioner oriented and using it will promote relevancy by steering IS research towards topics important to practitioners. This is important considering the frequent calls by prominent information systems scholars for more relevant research. Few research frameworks have emerged from the literature that specifically classify the diversity of security threats and range of problems that businesses today face. With the recent surge of interest in security, the need for a comprehensive framework that also promotes relevant research can be of great value.

  4. Dynamical and statistical aspects of intermediate energy heavy ion collisions

    International Nuclear Information System (INIS)

    Knoll, J.

    1987-01-01

    The lectures presented deal with three different topics relevant to the discussion of nuclear collisions at medium to high energies. The first lecture concerns a subject of general interest: the description of statistical systems and their dynamics by the concept of missing information. It presents an excellent framework for formulating statistical theories in such a way that they carefully keep track of the known (relevant) information while maximizing the ignorance about the irrelevant, unknown information. The last two lectures deal with very topical questions of intermediate energy heavy-ion collisions: the multi-fragmentation dynamics of highly excited nuclear systems, and so-called subthreshold particle production. All three subjects are self-contained and can be read without knowledge of the others. (orig.)
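The "missing information" referred to in this abstract is Shannon entropy in the sense of the maximum-entropy principle: among all distributions consistent with the known constraints, one selects the distribution of maximal entropy, i.e. maximal ignorance about everything not pinned down by the constraints. A minimal sketch (illustrative only, not taken from the lectures):

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p_i * log(p_i), natural log."""
    return -sum(x * math.log(x) for x in p if x > 0)

# With no constraint beyond normalization, the uniform distribution
# over 4 states maximizes entropy, reaching H = log(4); any peaked
# distribution encodes extra (unwarranted) information and has lower H.
uniform = [0.25] * 4
peaked = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))          # ~1.386, i.e. log(4)
print(entropy(peaked) < entropy(uniform))
```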

  5. Pre-service primary school teachers’ knowledge of informal statistical inference

    NARCIS (Netherlands)

    de Vetten, Arjen; Schoonenboom, Judith; Keijzer, Ronald; van Oers, Bert

    2018-01-01

    The ability to reason inferentially is increasingly important in today’s society. It is hypothesized here that engaging primary school students in informal statistical reasoning (ISI), defined as making generalizations without the use of formal statistical tests, will help them acquire the

  6. THE RELEVANCE OF ECONOMIC INFORMATION IN ANALYZING THE ECONOMIC PERFORMANCE

    Directory of Open Access Journals (Sweden)

    PATRUTA MIRCEA IOAN

    2017-12-01

    The performance analysis is based on an informational system which provides financial information in various formats and with various areas of applicability. We intend to formulate a set of important characteristics of financial information, along with identifying a set of relevant financial rates and indicators used to assess the performance level of a company. Economic performance can be interpreted in different ways at each level of analysis. Generally, it refers to economic growth, increased productivity and profitability. The growth of labor productivity, or increased production per worker, is a measure of the efficient use of resources in value creation.

  7. Information gathering for the Transportation Statistics Data Bank

    International Nuclear Information System (INIS)

    Shappert, L.B.; Mason, P.J.

    1981-10-01

    The Transportation Statistics Data Bank (TSDB) was developed in 1974 to collect information on the transport of Department of Energy (DOE) materials. This computer program may be used to provide the framework for collecting more detailed information on DOE shipments of radioactive materials. This report describes the type of information that is needed in this area and concludes that the existing system could be readily modified to collect and process it. The additional needed information, available from bills of lading and similar documents, could be gathered from DOE field offices and transferred in a standard format to the TSDB system. Costs of the system are also discussed briefly.

  8. Pengaruh Participation Budgeting, Information Asimetry dan Job Relevant Information terhadap Budget Slack pada Institusi Pendidikan (Studi pada Institusi Pendidikan Universitas Kristen Maranatha)

    OpenAIRE

    K. S., Christine Dwi; Agustina, Lidya

    2010-01-01

    The purpose of this research is to analyze and examine the hypothesized effect of participation budgeting on job relevant information, with information asymmetry as a moderating variable, and the effect of participation budgeting and information asymmetry on budget slack, with job relevant information as a mediating variable. The respondents of this research are 30 structural staff of programs and ministries in Maranatha Christian University who have participated in budgeting. This method that...

  9. Relevant Scatterers Characterization in SAR Images

    Science.gov (United States)

    Chaabouni, Houda; Datcu, Mihai

    2006-11-01

    Recognizing scenes in single-look, meter-resolution Synthetic Aperture Radar (SAR) images requires the capability to identify relevant signal signatures under conditions of variable image acquisition geometry and arbitrary object poses and configurations. Among the methods to detect relevant scatterers in SAR images, we can mention internal coherence. Splitting the SAR spectrum in azimuth generates a series of images which preserve high coherence only for particular object scattering. The detection of relevant scatterers can be done by correlation analysis or Independent Component Analysis (ICA) methods. The present article reviews the state of the art in SAR internal correlation analysis and proposes further extensions using elements of information-theoretic inference applied to complex valued signals. The set of azimuth look images is analyzed using mutual information measures, and an equivalent channel capacity is derived. The localization of the "target" requires analysis in a small image window, resulting in imprecise estimation of the second order statistics of the signal. For better precision, a Hausdorff measure is introduced. The method is applied to detect and characterize relevant objects in urban areas.

  10. Age Differences in Attention toward Decision-Relevant Information: Education Matters

    Science.gov (United States)

    Xing, Cai; Isaacowitz, Derek

    2011-01-01

    Previous studies suggested that older adults are more likely to engage in heuristic decision-making than young adults. This study used eye tracking technique to examine young adults' and highly educated older adults' attention toward two types of decision-relevant information: heuristic cue vs. factual cues. Surprisingly, highly educated older…

  11. The pricing relevance of insider information; Die Preiserheblichkeit von Insiderinformationen

    Energy Technology Data Exchange (ETDEWEB)

    Kruse, Dominik

    2011-07-01

    The publication attempts to describe the discussion to date concerning the feature of pricing relevance and to develop it further with the aid of new research approaches. First, a theoretical outline is presented of the elementary regulation problem of insider trading, its historical development, and the regulation goals of the WpHG. This is followed by an analysis of the concrete specifications of the law. In view of the exemplary character of US law, a country with long experience in capital market regulation, the materiality doctrine of US insider law is examined in some detail. The goals and development of the doctrine are reviewed in the light of court rulings. The third part outlines the requirements of German law for forecasting the pricing relevance of insider information, while the final part presents a critical review of the current regulations on pricing relevance. (orig./RHM)

  12. Spatially Compact Neural Clusters in the Dorsal Striatum Encode Locomotion Relevant Information.

    Science.gov (United States)

    Barbera, Giovanni; Liang, Bo; Zhang, Lifeng; Gerfen, Charles R; Culurciello, Eugenio; Chen, Rong; Li, Yun; Lin, Da-Ting

    2016-10-05

    An influential striatal model postulates that neural activities in the striatal direct and indirect pathways promote and inhibit movement, respectively. Normal behavior requires coordinated activity in the direct pathway to facilitate intended locomotion and indirect pathway to inhibit unwanted locomotion. In this striatal model, neuronal population activity is assumed to encode locomotion relevant information. Here, we propose a novel encoding mechanism for the dorsal striatum. We identified spatially compact neural clusters in both the direct and indirect pathways. Detailed characterization revealed similar cluster organization between the direct and indirect pathways, and cluster activities from both pathways were correlated with mouse locomotion velocities. Using machine-learning algorithms, cluster activities could be used to decode locomotion relevant behavioral states and locomotion velocity. We propose that neural clusters in the dorsal striatum encode locomotion relevant information and that coordinated activities of direct and indirect pathway neural clusters are required for normal striatal controlled behavior. VIDEO ABSTRACT. Published by Elsevier Inc.
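The abstract states that cluster activities could be used to decode behavioral states via machine-learning algorithms but does not specify the method. As an illustration only (the authors' actual pipeline is not given here), a nearest-centroid classifier can recover a behavioral state from hypothetical cluster-activity vectors:

```python
def nearest_centroid_fit(samples, labels):
    """Compute one centroid (mean activity vector) per behavioral state."""
    centroids = {}
    for label in set(labels):
        vecs = [s for s, l in zip(samples, labels) if l == label]
        dim = len(vecs[0])
        centroids[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    return centroids

def nearest_centroid_predict(centroids, sample):
    """Assign the state whose centroid is closest in squared Euclidean distance."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, sample))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical activity of three neural clusters during two behavioral states.
train = [([0.9, 0.1, 0.8], "run"), ([0.8, 0.2, 0.7], "run"),
         ([0.1, 0.9, 0.2], "rest"), ([0.2, 0.8, 0.1], "rest")]
samples, labels = zip(*train)
centroids = nearest_centroid_fit(list(samples), list(labels))
print(nearest_centroid_predict(centroids, [0.85, 0.15, 0.75]))
```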

  13. Changing Zaire to Congo: the fate of no-longer relevant mnemonic information.

    Science.gov (United States)

    Eriksson, Johan; Stiernstedt, Mikael; Öhlund, Maria; Nyberg, Lars

    2014-11-01

    In an ever-changing world there is constant pressure to revise long-term memory, such as when people or countries change names. What happens to the old, pre-existing information? One possibility is that old associations are gradually weakened and eventually lost. Alternatively, old and no-longer-relevant information may still be an integral part of memory traces. To test the hypothesis that old mnemonic information still becomes activated when people correctly retrieve new, currently relevant information, brain activity was measured with fMRI while participants performed a cued-retrieval task. Paired associates (symbol-sound and symbol-face pairs) were first learned over two days. Half of the associations were then updated during the next two days, followed by fMRI scanning on day 5 and again 18 months later. As expected, retrieval reactivated sensory cortex related to the most recently learned association (visual cortex for symbol-face pairs, auditory cortex for symbol-sound pairs). Critically, retrieval also reactivated sensory cortex related to the no-longer-relevant associate. Eighteen months later, only non-updated symbol-face associations were intact. Intriguingly, a subset of the updated associations was now treated as though the original association had taken over, in that memory performance was significantly worse than chance and activity in sensory cortex for the original but not the updated associate correlated (negatively) with performance. Moreover, the degree of "residual" reactivation during day 5 inversely predicted memory performance 18 months later. Thus, updating of long-term memory involves adding new information to already existing networks, in which old information can stay resilient for a long time. Copyright © 2014. Published by Elsevier Inc.

  14. How Long Should Routine EEG Be Recorded to Get Relevant Information?

    Science.gov (United States)

    Doudoux, Hannah; Skaare, Kristina; Geay, Thomas; Kahane, Philippe; Bosson, Jean L; Sabourdy, Cécile; Vercueil, Laurent

    2017-03-01

    The optimal duration of routine EEG (rEEG) has not been determined on a clinical basis. This study aims to determine the time required to obtain relevant information during rEEG with respect to the clinical request. All rEEGs performed over 3 months in unselected patients older than 14 years in an academic hospital were analyzed retrospectively. The latency required to obtain relevant information was determined for each rEEG by 2 independent readers blinded to the clinical data. EEG final diagnoses and latencies were analyzed with respect to the main clinical requests: subacute cognitive impairment, spells, transient focal neurologic manifestation or patients referred by epileptologists. From 430 rEEGs performed in the targeted period, 364 were analyzed: 92% of the pathological rEEGs were provided within the first 10 minutes of recording. Slowing background activity was diagnosed from the beginning, whereas interictal epileptiform discharges were recorded over time. Moreover, the time elapsed to demonstrate a pattern differed significantly in the clinical groups: in patients with subacute cognitive impairment, EEG abnormalities appeared within the first 10 minutes, whereas in the other groups, data could be provided over time. Patients with subacute cognitive impairment differed from those in the other groups significantly in the elapsed time required to obtain relevant information during rEEG, suggesting that 10-minute EEG recordings could be sufficient, arguing in favor of individualized rEEG. However, this conclusion does not apply to intensive care unit patients.

  15. Informing Evidence Based Decisions: Usage Statistics for Online Journal Databases

    Directory of Open Access Journals (Sweden)

    Alexei Botchkarev

    2017-06-01

    Objective – The primary objective was to examine online journal database usage statistics for a provincial ministry of health in the context of evidence based decision-making. In addition, the study highlights the implementation of the Journal Access Centre (JAC) that is housed and powered by the Ontario Ministry of Health and Long-Term Care (MOHLTC) to inform health systems policy-making. Methods – This was a prospective case study using descriptive analysis of the JAC usage statistics of journal articles from January 2009 to September 2013. Results – JAC enables ministry employees to access approximately 12,000 journals with full-text articles. JAC usage statistics for the 2011-2012 calendar years demonstrate a steady level of activity in terms of searches, with monthly averages of 5,129. In 2009-2013, a total of 4,759 journal titles were accessed, including 1,675 journals with full-text. Usage statistics demonstrate that the actual consumption was over 12,790 full-text downloaded articles, or approximately 2,700 articles annually. Conclusion – JAC's steady level of activity, revealed by the study, reflects continuous demand for JAC services and products. It shows that access to online journal databases has become part of routine government knowledge management processes. MOHLTC's broad area of responsibilities with dynamically changing priorities translates into diverse information needs among its employees and a large set of required journals. Usage statistics indicate that MOHLTC information needs cannot be mapped to a reasonably compact set of "core" journals with a subsequent subscription to those.
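The monthly-average figure reported in this abstract is plain descriptive statistics. A minimal sketch of the calculation, using hypothetical monthly search counts (the underlying JAC data are not reproduced here; these numbers are invented for illustration):

```python
# Hypothetical monthly search counts for one calendar year.
monthly_searches = [4980, 5210, 5105, 5300, 4875, 5240,
                    5020, 5150, 5310, 4990, 5230, 5138]

# Descriptive statistic: the mean number of searches per month.
monthly_average = sum(monthly_searches) / len(monthly_searches)
print(round(monthly_average))
```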

  16. Earlier saccades to task-relevant targets irrespective of relative gain between peripheral and foveal information.

    Science.gov (United States)

    Wolf, Christian; Schütz, Alexander C

    2017-06-01

    Saccades bring objects of interest onto the fovea for high-acuity processing. Saccades to rewarded targets show shorter latencies that correlate negatively with expected motivational value. Shorter latencies are also observed when the saccade target is relevant for a perceptual discrimination task. Here we tested whether saccade preparation is equally influenced by informational value as it is by motivational value. We defined informational value as the probability that information is task-relevant times the ratio between postsaccadic foveal and presaccadic peripheral discriminability. Using a gaze-contingent display, we independently manipulated peripheral and foveal discriminability of the saccade target. Latencies of saccades with perceptual task were reduced by 36 ms in general, but they were not modulated by the information saccades provide (Experiments 1 and 2). However, latencies showed a clear negative linear correlation with the probability that the target is task-relevant (Experiment 3). We replicated that the facilitation by a perceptual task is spatially specific and not due to generally heightened arousal (Experiment 4). Finally, the facilitation only emerged when the perceptual task is in the visual but not in the auditory modality (Experiment 5). Taken together, these results suggest that saccade latencies are not equally modulated by informational value as by motivational value. The facilitation by a perceptual task only arises when task-relevant visual information is foveated, irrespective of whether the foveation is useful or not.

  17. Proactive Support of Internet Browsing when Searching for Relevant Health Information.

    Science.gov (United States)

    Rurik, Clas; Zowalla, Richard; Wiesner, Martin; Pfeifer, Daniel

    2015-01-01

    Many people use the Internet as one of the primary sources of health information. This is due to the high volume and easy access of freely available information regarding diseases, diagnoses and treatments. However, users may find it difficult to retrieve information which is easily understandable and does not require a deep medical background. In this paper, we present a new kind of Web browser add-on, in order to proactively support users when searching for relevant health information. Our add-on not only visualizes the understandability of displayed medical text but also provides further recommendations of Web pages which hold similar content but are potentially easier to comprehend.

  18. Mathematical and statistical applications in life sciences and engineering

    CERN Document Server

    Adhikari, Mahima; Chaubey, Yogendra

    2017-01-01

    The book includes articles from eminent international scientists discussing a wide spectrum of topics of current importance in mathematics and statistics and their applications. It presents state-of-the-art material along with a clear and detailed review of the relevant topics and issues concerned. The topics discussed include message transmission, colouring problem, control of stochastic structures and information dynamics, image denoising, life testing and reliability, survival and frailty models, analysis of drought periods, prediction of genomic profiles, competing risks, environmental applications and chronic disease control. It is a valuable resource for researchers and practitioners in the relevant areas of mathematics and statistics.

  19. Statistical Yearbook of Norway 2012

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-07-01

    The Statistical Yearbook of Norway 2012 contains statistics on Norway and main figures for the Nordic countries and other countries selected from international statistics. The international overviews are integrated with the other tables and figures. The selection of tables in this edition is mostly the same as in the 2011 edition. The yearbook's 480 tables and figures present the main trends in official statistics in most areas of society. The list of tables and figures and an index at the back of the book provide easy access to relevant information. In addition, source information and Internet addresses below the tables make the yearbook a good starting point for those who are looking for more detailed statistics. The statistics are based on data gathered in statistical surveys and from administrative data, which, in cooperation with other public institutions, have been made available for statistical purposes. Some tables have been prepared in their entirety by other public institutions. The statistics follow approved principles, standards and classifications that are in line with international recommendations and guidelines. Content: 00. General subjects; 01. Environment; 02. Population; 03. Health and social conditions; 04. Education; 05. Personal economy and housing conditions; 06. Labour market; 07. Recreational, cultural and sporting activities; 08. Prices and indices; 09. National Economy and external trade; 10. Industrial activities; 11. Financial markets; 12. Public finances; Geographical survey.(eb)

  1. Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.

    Science.gov (United States)

    Harman, Donna; And Others

    1991-01-01

    Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…
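The abstract does not give the prototype's ranking formula; tf-idf term weighting is a standard statistical ranking of that era and serves here as an illustration only, not as the system's actual method:

```python
import math
from collections import Counter

def rank_documents(query, docs):
    """Return document indices sorted by a simple tf-idf score
    against the query, most relevant first."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)

    def idf(term):
        # Smoothed inverse document frequency: rare terms weigh more.
        df = sum(1 for toks in tokenized if term in toks)
        return math.log((n + 1) / (df + 1)) + 1

    scores = []
    for toks in tokenized:
        tf = Counter(toks)  # term frequencies in this document
        scores.append(sum(tf[t] * idf(t) for t in query.lower().split()))
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

docs = [
    "distributed architecture for information retrieval",
    "statistical ranking improves retrieval quality",
    "ranking and statistical ranking methods for retrieval",
]
print(rank_documents("statistical ranking", docs))
```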

  2. Prioritising the relevant information for learning and decision making within orbital and ventromedial prefrontal cortex.

    Science.gov (United States)

    Walton, Mark E; Chau, Bolton K H; Kennerley, Steven W

    2015-02-01

    Our environment and internal states are frequently complex, ambiguous and dynamic, meaning we need selection mechanisms to ensure we are basing our decisions on currently relevant information. Here, we review evidence that the orbitofrontal cortex (OFC) and ventromedial prefrontal cortex (VMPFC) play conserved, critical but distinct roles in this process. While OFC may use specific sensory associations to enhance task-relevant information, particularly in the context of learning, VMPFC plays a role in ensuring irrelevant information does not impinge on the decision at hand.

  3. A Parallel Relational Database Management System Approach to Relevance Feedback in Information Retrieval.

    Science.gov (United States)

    Lundquist, Carol; Frieder, Ophir; Holmes, David O.; Grossman, David

    1999-01-01

    Describes a scalable, parallel, relational database-driven information retrieval engine. To support portability across a wide range of execution environments, all algorithms adhere to the SQL-92 standard. By incorporating relevance feedback algorithms, accuracy is enhanced over prior database-driven information retrieval efforts. Presents…
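The relevance feedback algorithms are not specified in this abstract; the classic Rocchio formulation is one standard instance of the idea (an assumption for illustration, not necessarily the authors' algorithm): the query vector is moved toward the centroid of judged-relevant documents and away from the centroid of judged-non-relevant ones.

```python
def rocchio(query_vec, relevant, nonrelevant,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback update. Vectors are dicts mapping
    term -> weight; terms whose updated weight is non-positive are
    dropped, as is conventional."""
    terms = set(query_vec)
    for d in relevant + nonrelevant:
        terms |= set(d)
    updated = {}
    for t in terms:
        pos = sum(d.get(t, 0.0) for d in relevant) / len(relevant) if relevant else 0.0
        neg = sum(d.get(t, 0.0) for d in nonrelevant) / len(nonrelevant) if nonrelevant else 0.0
        w = alpha * query_vec.get(t, 0.0) + beta * pos - gamma * neg
        if w > 0.0:
            updated[t] = w
    return updated

# Hypothetical judged documents: feedback expands the query with
# "network" (from the relevant doc) and suppresses "cooking".
query = {"security": 1.0}
relevant = [{"security": 1.0, "network": 1.0}]
nonrelevant = [{"security": 0.5, "cooking": 1.0}]
expanded = rocchio(query, relevant, nonrelevant)
print(sorted(expanded))
```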

  4. Video coding and decoding devices and methods preserving ppg relevant information

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a video encoding device (10) for encoding video data and a corresponding video decoding device, wherein during decoding PPG relevant information shall be preserved. For this purpose the video coding device (10) comprises a first encoder (20) for encoding input video

  5. Acoustic fine structure may encode biologically relevant information for zebra finches.

    Science.gov (United States)

    Prior, Nora H; Smith, Edward; Lawson, Shelby; Ball, Gregory F; Dooling, Robert J

    2018-04-18

    The ability to discriminate changes in the fine structure of complex sounds is well developed in birds. However, the precise limit of this discrimination ability and how it is used in the context of natural communication remains unclear. Here we describe natural variability in acoustic fine structure of male and female zebra finch calls. Results from psychoacoustic experiments demonstrate that zebra finches are able to discriminate extremely small differences in fine structure, which are on the order of the variation in acoustic fine structure that is present in their vocal signals. Results from signal analysis methods also suggest that acoustic fine structure may carry information that distinguishes between biologically relevant categories including sex, call type and individual identity. Combined, our results are consistent with the hypothesis that zebra finches can encode biologically relevant information within the fine structure of their calls. This study provides a foundation for our understanding of how acoustic fine structure may be involved in animal communication.

  6. A Planetary Defense Gateway for Smart Discovery of relevant Information for Decision Support

    Science.gov (United States)

    Bambacus, Myra; Yang, Chaowei Phil; Leung, Ronald Y.; Barbee, Brent; Nuth, Joseph A.; Seery, Bernard; Jiang, Yongyao; Qin, Han; Li, Yun; Yu, Manzhu

    2017-01-01

    Presentation discussing the background, framework architecture, current results, ongoing research, and conclusions of a planetary defense gateway for smart discovery of relevant information for decision support.

  7. Reasoning about Informal Statistical Inference: One Statistician's View

    Science.gov (United States)

    Rossman, Allan J.

    2008-01-01

    This paper identifies key concepts and issues associated with the reasoning of informal statistical inference. I focus on key ideas of inference that I think all students should learn, including at secondary level as well as tertiary. I argue that a fundamental component of inference is to go beyond the data at hand, and I propose that statistical…

  8. Use of Statistical Information for Damage Assessment of Civil Engineering Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Andersen, P.

    This paper considers the problem of damage assessment of civil engineering structures using statistical information. The aim of the paper is to review how researchers recently have tried to solve the problem. It is pointed out that the problem consists of not only how to use the statistical...

  9. Disseminating relevant health information to underserved audiences: implications of the Digital Divide Pilot Projects.

    Science.gov (United States)

    Kreps, Gary L

    2005-10-01

    This paper examines the influence of the digital divide on disparities in health outcomes for vulnerable populations, identifying implications for medical and public libraries. The paper describes the results of the Digital Divide Pilot Projects demonstration research programs funded by the National Cancer Institute to test new strategies for disseminating relevant health information to underserved and at-risk audiences. The Digital Divide Pilot Projects field-tested innovative systemic strategies for helping underserved populations access and utilize relevant health information to make informed health-related decisions about seeking appropriate health care and support, resisting avoidable and significant health risks, and promoting their own health. The paper builds on the Digital Divide Pilot Projects by identifying implications for developing health communication strategies that libraries can adopt to provide digital health information to vulnerable populations.

  10. Video coding and decoding devices and methods preserving PPG relevant information

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a video encoding device (10, 10', 10") and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder

  11. Video coding and decoding devices and methods preserving ppg relevant information

    NARCIS (Netherlands)

    2013-01-01

    The present invention relates to a video encoding device (10, 10', 10'') and method for encoding video data and to a corresponding video decoding device (60, 60') and method. To preserve PPG relevant information after encoding without requiring a large amount of additional data for the video encoder

  12. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research.

  13. Public health information and statistics dissemination efforts for Indonesia on the Internet.

    Science.gov (United States)

    Hanani, Febiana; Kobayashi, Takashi; Jo, Eitetsu; Nakajima, Sawako; Oyama, Hiroshi

    2011-01-01

    To elucidate current issues related to health statistics dissemination efforts on the Internet in Indonesia and to propose a new dissemination website as a solution. A cross-sectional survey was conducted. Sources of statistics were identified using link relationships and Google™ search. For each site, the menus used to locate statistics, the mode of presentation and means of access to statistics, and the available statistics were assessed. Assessment results were used to derive a design specification; a prototype system was developed and evaluated with a usability test. 49 sources were identified on 18 governmental, 8 international and 5 non-governmental websites. Of the 49 menus identified, 33% used non-intuitive titles and led to inefficient searches; 69% of them were on government websites. Of the 31 websites, only 39% and 23% used graphs/charts and maps for presentation, respectively. Further, only 32%, 39% and 19% provided query, export and print features. While >50% of the sources reported morbidity, risk factor and service provision statistics, statistics dissemination in Indonesia is largely supported by non-governmental and international organizations, and the existing information may not be very useful because it is: a) not widely distributed, b) difficult to locate, and c) not effectively communicated. Actions are needed to ensure information usability, and one such action is the development of a statistics portal website.

  14. NetNorM: Capturing cancer-relevant information in somatic exome mutation data with gene networks for cancer stratification and prognosis.

    Science.gov (United States)

    Le Morvan, Marine; Zinovyev, Andrei; Vert, Jean-Philippe

    2017-06-01

    Genome-wide somatic mutation profiles of tumours can now be assessed efficiently and promise to move precision medicine forward. Statistical analysis of mutation profiles is however challenging due to the low frequency of most mutations, the varying mutation rates across tumours, and the presence of a majority of passenger events that hide the contribution of driver events. Here we propose a method, NetNorM, to represent whole-exome somatic mutation data in a form that enhances cancer-relevant information, using a gene network as background knowledge. We evaluate its relevance for two tasks: survival prediction and unsupervised patient stratification. Using data from 8 cancer types from The Cancer Genome Atlas (TCGA), we show that it improves over the raw binary mutation data and network diffusion for these two tasks. In doing so, we also provide a thorough assessment of the prognostic power of somatic mutations, which has been overlooked by previous studies because of the sparse and binary nature of mutations.
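
The network-diffusion baseline this record compares against can be sketched in a few lines. The tiny gene network, mutation profile, and parameter values below are invented for illustration; NetNorM itself uses a different (normalization-based) transformation.

```python
# Toy sketch of network diffusion (random walk with restart) over a gene
# network: a patient's sparse binary mutation profile is smoothed so that
# network neighbours of mutated genes also receive part of the signal.
neighbours = {            # invented toy undirected gene network
    "TP53": ["MDM2", "ATM"],
    "MDM2": ["TP53"],
    "ATM": ["TP53", "CHEK2"],
    "CHEK2": ["ATM"],
}
f0 = {"TP53": 1.0, "MDM2": 0.0, "ATM": 0.0, "CHEK2": 0.0}  # raw binary mutations

alpha = 0.5               # weight on propagated signal vs. original profile
f = dict(f0)
for _ in range(50):       # iterate f <- alpha * W f + (1 - alpha) * f0
    f = {
        g: alpha * sum(f[nb] / len(neighbours[nb]) for nb in neighbours[g])
           + (1 - alpha) * f0[g]
        for g in f0
    }

for gene, score in sorted(f.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: {score:.3f}")
```

After convergence the mutated gene retains the highest score and its network neighbours receive progressively smaller scores, turning the binary profile into a dense, network-informed one.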

  15. Conference: Statistical Physics and Biological Information

    International Nuclear Information System (INIS)

    Gross, David J.; Hwa, Terence

    2001-01-01

    In the spring of 2001, the Institute for Theoretical Physics (ITP) ran a 6-month scientific program on Statistical Physics and Biological Information. This program was organized by Walter Fitch (UC Irvine), Terence Hwa (UC San Diego), Luca Peliti (University Federico II, Naples), Gary Stormo (Washington University School of Medicine) and Chao Tang (NEC). Overall scientific supervision was provided by David Gross, Director, ITP. The ITP has online conference/program proceedings consisting of audio and transparencies of almost all of the talks held during this program. Over 100 talks are available at http://online.kitp.ucsb.edu/online/infobio01/

  16. Disseminating relevant health information to underserved audiences: implications of the Digital Divide Pilot Projects*

    Science.gov (United States)

    Kreps, Gary L.

    2005-01-01

    Objective: This paper examines the influence of the digital divide on disparities in health outcomes for vulnerable populations, identifying implications for medical and public libraries. Method: The paper describes the results of the Digital Divide Pilot Projects demonstration research programs funded by the National Cancer Institute to test new strategies for disseminating relevant health information to underserved and at-risk audiences. Results: The Digital Divide Pilot Projects field-tested innovative systemic strategies for helping underserved populations access and utilize relevant health information to make informed health-related decisions about seeking appropriate health care and support, resisting avoidable and significant health risks, and promoting their own health. Implications: The paper builds on the Digital Divide Pilot Projects by identifying implications for developing health communication strategies that libraries can adopt to provide digital health information to vulnerable populations. PMID:16239960

  17. An introduction to inferential statistics: A review and practical guide

    International Nuclear Information System (INIS)

    Marshall, Gill; Jonker, Leon

    2011-01-01

    Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so that radiographers can understand the work of others, undertake their own research, and base their practice on evidence. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms, and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.
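
As a minimal illustration of the inferential ideas this article reviews (a null hypothesis and a p value), here is a permutation test on two invented samples; the data and the 0.05 threshold are purely for demonstration and are not from the paper.

```python
# Two-sided permutation test on the difference in group means.
import random

group_a = [1.2, 1.4, 1.1, 1.6, 1.3, 1.5, 1.2, 1.4]  # invented sample 1
group_b = [1.5, 1.7, 1.6, 1.9, 1.8, 1.6, 1.7, 2.0]  # invented sample 2

def perm_test(a, b, n_perm=10000, seed=42):
    """p-value: how often a random relabelling of the pooled data produces
    a mean difference at least as extreme as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    n_a = len(a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

p_value = perm_test(group_a, group_b)
print(f"permutation p-value = {p_value:.4f}")  # small p -> reject equal means
```

A permutation test makes no normality assumption, which is one reason such non-parametric approaches appear alongside t-tests in introductory treatments like this one.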

  18. An introduction to inferential statistics: A review and practical guide

    Energy Technology Data Exchange (ETDEWEB)

    Marshall, Gill, E-mail: gill.marshall@cumbria.ac.u [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom); Jonker, Leon [Faculty of Health, Medical Sciences and Social Care, University of Cumbria, Lancaster LA1 3JD (United Kingdom)

    2011-02-15

    Building on the first part of this series regarding descriptive statistics, this paper demonstrates why it is advantageous for radiographers to understand the role of inferential statistics in deducing conclusions from a sample and their application to a wider population. This is necessary so that radiographers can understand the work of others, undertake their own research, and base their practice on evidence. This article explains p values and confidence intervals. It introduces the common statistical tests that comprise inferential statistics, and explains the use of parametric and non-parametric statistics. To do this, the paper reviews relevant literature and provides a checklist of points to consider before and after applying statistical tests to a data set. The paper provides a glossary of relevant terms, and the reader is advised to refer to this when any unfamiliar terms are used in the text. Together with the information provided on descriptive statistics in an earlier article, it can be used as a starting point for applying statistics in radiography practice and research.

  19. Statistical Language Models and Information Retrieval: Natural Language Processing Really Meets Retrieval

    NARCIS (Netherlands)

    Hiemstra, Djoerd; de Jong, Franciska M.G.

    2001-01-01

    Traditionally, natural language processing techniques for information retrieval have always been studied outside the framework of formal models of information retrieval. In this article, we introduce a new formal model of information retrieval based on the application of statistical language models.
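
The query-likelihood language-modelling approach that this line of work formalizes can be sketched as follows; the two toy documents, the query, and the smoothing weight are invented for illustration.

```python
# Query-likelihood retrieval: model each document as a unigram word
# distribution, smooth it with the collection model (Jelinek-Mercer),
# and rank documents by the probability they assign to the query.
docs = {
    "d1": "statistical language models for retrieval".split(),
    "d2": "natural language processing techniques".split(),
}
query = "language models".split()
lam = 0.5  # smoothing weight between document and collection model

collection = [w for words in docs.values() for w in words]

def p(word, words):
    return words.count(word) / len(words)

def score(doc_words):
    s = 1.0
    for w in query:
        s *= lam * p(w, doc_words) + (1 - lam) * p(w, collection)
    return s

ranking = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
print(ranking)  # → ['d1', 'd2']
```

Smoothing with the collection model keeps a document from scoring zero when it misses one query term, which is the key practical ingredient of these models.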

  20. Effect of the Adoption of IFRS on the Information Relevance of Accounting Profits in Brazil

    Directory of Open Access Journals (Sweden)

    Mateus Alexandre Costa dos Santos

    2014-12-01

    Full Text Available This study aimed to assess the effect of adopting the International Financial Reporting Standards (IFRS) in Brazil on the information relevance of accounting profits of publicly traded companies. International studies have shown that the adoption of IFRS improves the quality of accounting information compared with domestic accounting standards. Corresponding evidence is sparse in Brazil. Information relevance is understood herein as a multidimensional attribute that is closely related to the quality and usefulness of the information conveyed by accounting profits. The associative capacity and information timeliness of accounting profits in relation to share prices were examined. Furthermore, the level of conditional conservatism present in accounting profits was also analyzed because, according to Basu (1997), this aspect is related to timeliness. The study used pooled regressions and panel data models to analyze the quarterly accounting profits of 246 companies between the first quarter of 1999 and the first quarter of 2013, resulting in 9,558 quarter-company observations. The results indicated that the adoption of IFRS in Brazil (1) increased the associative capacity of accounting profits; (2) reduced information timeliness to non-significant levels; and (3) had no effect on conditional conservatism. The joint analysis of the empirical evidence from the present study precludes conclusively stating that the adoption of IFRS in Brazil contributed to an increase in the information relevance of accounting profits of publicly traded companies.

  1. Behavioral and Event-Related-Potential Correlates of Processing Congruent and Incongruent Self-Relevant Information

    Science.gov (United States)

    Clark, Sheri L.

    2013-01-01

    People want to be viewed by others as they view themselves. Being confronted with self-relevant information that is either congruent or incongruent with one's self-view has been shown to differentially affect subsequent behavior, memory for the information, and evaluation of the source of the information. However, no research has examined…

  2. Suppression of no-longer relevant information in Working Memory: An alpha-power related mechanism?

    Science.gov (United States)

    Poch, Claudia; Valdivia, María; Capilla, Almudena; Hinojosa, José Antonio; Campo, Pablo

    2018-03-27

    Selective attention can enhance Working Memory (WM) performance by selecting relevant information, while preventing distracting items from encoding or from further maintenance. Alpha oscillatory modulations are a correlate of visuospatial attention. Specifically, an enhancement of alpha power is observed in the posterior cortex ipsilateral to the locus of attention, along with a suppression in the contralateral hemisphere. An influential model proposes that the alpha enhancement is functionally related to the suppression of information. However, whether ipsilateral alpha power represents a mechanism through which no-longer-relevant WM representations are inhibited has not yet been explored. Here we examined whether the amount of distractors to be suppressed during WM maintenance is functionally related to lateralized alpha-power activity. We measured EEG activity while participants (N = 36) performed a retro-cue task in which the WM load was varied across the relevant/irrelevant post-cue hemifield. We found that alpha activity was lateralized with respect to the locus of attention but did not track post-cue irrelevant load. Additionally, non-lateralized alpha activity increased with post-cue relevant load. We propose that the alpha lateralization associated with retro-cuing might be related to a general orienting mechanism toward the relevant representation. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Bayesian Information Criterion as an Alternative way of Statistical Inference

    Directory of Open Access Journals (Sweden)

    Nadejda Yu. Gubanova

    2012-05-01

    Full Text Available The article treats the Bayesian information criterion as an alternative to traditional methods of statistical inference based on NHST. The comparison of ANOVA and BIC results for a psychological experiment is discussed.
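
A sketch of the kind of model comparison BIC supports, using invented two-condition data (this is not the author's code): the model with the lower BIC is preferred, here contrasting a single common mean against separate group means, which parallels a one-way ANOVA question.

```python
# BIC = k * ln(n) - 2 * ln(L_hat): penalized fit for model selection.
import math

g1 = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9]  # invented condition-1 scores
g2 = [5.0, 4.8, 5.3, 4.9, 5.1, 5.2]  # invented condition-2 scores

def gauss_loglik(xs, mu, sigma):
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in xs)

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

data = g1 + g2
n = len(data)

# Model 0: one common mean (k = 2 parameters: mu, sigma).
mu0 = sum(data) / n
s0 = math.sqrt(sum((x - mu0) ** 2 for x in data) / n)
bic0 = bic(gauss_loglik(data, mu0, s0), 2, n)

# Model 1: separate group means, shared sigma (k = 3 parameters).
m1, m2 = sum(g1) / len(g1), sum(g2) / len(g2)
resid = [(x - m1) ** 2 for x in g1] + [(x - m2) ** 2 for x in g2]
s1 = math.sqrt(sum(resid) / n)
ll1 = gauss_loglik(g1, m1, s1) + gauss_loglik(g2, m2, s1)
bic1 = bic(ll1, 3, n)

print(f"BIC one mean: {bic0:.1f}, BIC two means: {bic1:.1f}")  # lower wins
```

Unlike an NHST p value, the BIC difference directly quantifies relative evidence between the two models.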

  4. Concepts and recent advances in generalized information measures and statistics

    CERN Document Server

    Kowalski, Andres M

    2013-01-01

    Since the introduction of the information measure widely known as Shannon entropy, quantifiers based on information theory and concepts such as entropic forms and statistical complexities have proven to be useful in diverse scientific research fields. This book contains introductory tutorials suitable for the general reader, together with chapters dedicated to the basic concepts of the most frequently employed information measures or quantifiers and their recent applications to different areas, including physics, biology, medicine, economics, communication and social sciences. As these quantif

  5. Task relevance of emotional information affects anxiety-linked attention bias in visual search.

    Science.gov (United States)

    Dodd, Helen F; Vogt, Julia; Turkileri, Nilgun; Notebaert, Lies

    2017-01-01

    Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Collecting Safeguards Relevant Trade Information: The IAEA Procurement Outreach Programme

    International Nuclear Information System (INIS)

    Schot, P.; El Gebaly, A.; Tarvainen, M.

    2010-01-01

    The increasing awareness of the activities of transnational procurement networks that covertly acquire sensitive nuclear-related dual-use equipment prompted an evolution of safeguards methodologies. One of the responses to this challenge by the Department of Safeguards in the IAEA was to establish the Trade and Technology Unit (TTA) in November 2004 to analyse and report on these covert nuclear-related trade activities. To obtain information relevant to this analysis, TTA is engaging States that might be willing to provide this information to the Secretariat on a voluntary basis. This paper gives an overview of current activities, sums up the results achieved and discusses suggestions made by Member States to further improve this programme. (author)

  7. Book value, earnings, dividends, and audit quality on the value relevance of accounting information among Nigerian listed firms

    Directory of Open Access Journals (Sweden)

    Muhammad Yusuf Alkali

    2018-04-01

    Full Text Available The objective of this paper is to determine the effect of International Financial Reporting Standards (IFRS) as a new accounting reporting regime among Nigerian listed firms. This study uses book value, earnings and dividends to fill in the gap, using a sample of 126 Nigerian listed firms in the stock market from 2009 to 2013 (pre- and post-IFRS adoption). Data were collected from Thomson Reuters, Bankscope and DataStream databases and annual reports. The study adopted the Ohlson (1995) [Ohlson, J. (1995). Earnings, book value, and dividends in equity valuation. Contemporary Accounting Research, 11(2), 661–687.] price model, which has been frequently used in studies of the quality of accounting information. The study finds that book value, earnings and dividends combined do not have a statistically significant effect on the quality of accounting information after IFRS adoption. This could be possible, as dividends do not provide a significant effect in the presence of earnings. Furthermore, Big 4 audit quality had an effect on the quality of accounting information after IFRS adoption. The findings of this study therefore add to the literature on the decreasing quality of accounting information in an emerging-market setting like Nigeria. The implication of the study for policy makers, regulators, and government is that accounting information does not provide value relevance among Nigerian listed firms after IFRS adoption.

  8. Fisher statistics for analysis of diffusion tensor directional information.

    Science.gov (United States)

    Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P

    2012-04-30

    A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample, and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and the associated p-value, giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant differences were detected, demonstrating the utility of this framework for the statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
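
The descriptive Fisher statistics named here (mean direction and concentration parameter kappa) can be sketched as follows. The direction vectors are invented, approximately unit-length values standing in for ROI principal diffusion directions; the kappa estimator shown is the standard large-concentration approximation, not necessarily the exact formulation the authors used.

```python
# Fisher descriptive statistics for a sample of 3-D unit direction vectors.
import math

dirs = [                      # hypothetical (x, y, z) unit vectors
    (0.95, 0.20, 0.24),
    (0.92, 0.30, 0.25),
    (0.97, 0.15, 0.19),
    (0.94, 0.25, 0.23),
]

def fisher_stats(vectors):
    """Return the mean direction and the Fisher concentration kappa,
    using the common estimator kappa ~ (n - 1) / (n - R) for tight clusters."""
    n = len(vectors)
    rx = sum(v[0] for v in vectors)
    ry = sum(v[1] for v in vectors)
    rz = sum(v[2] for v in vectors)
    r = math.sqrt(rx ** 2 + ry ** 2 + rz ** 2)   # resultant length R
    mean_dir = (rx / r, ry / r, rz / r)
    kappa = (n - 1) / (n - r)                    # large kappa = tight cluster
    return mean_dir, kappa

mean_dir, kappa = fisher_stats(dirs)
print("mean direction:", mean_dir, "kappa:", round(kappa, 1))
```

A large kappa indicates tightly clustered directions, which is what makes coherent white-matter tracts such as the corpus callosum well suited to this analysis.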

  9. A Statistical Texture Feature for Building Collapse Information Extraction from SAR Images

    Science.gov (United States)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed-building information, due to its versatility and almost all-weather, day-and-night working capability. Since the inherent statistical distribution of speckle in SAR images has not previously been used to extract collapsed-building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution estimated from SAR images is used to reflect the uniformity of the target and thereby extract collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but can also be applied to extract collapsed-building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake acquired on April 21, 2010 are used to present and analyse the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analysed, which provides decision support for data selection in collapsed-building information extraction.

  10. The impact of intangibles on the value relevance of accounting information: Evidence from French companies

    Directory of Open Access Journals (Sweden)

    Bilal Kimouche

    2016-03-01

    Full Text Available Purpose: The paper aims to explore whether intangible items recognised in financial statements are value-relevant to investors in the French context, and whether these items affect the value relevance of accounting information. Design/methodology/approach: Empirical data were collected from a sample of French listed companies over the nine-year period 2005 to 2013. Starting from Ohlson's (1995) model, correlation analysis and multiple linear regressions were applied. Findings: We find that intangibles and traditional accounting measures as a whole are value-relevant. However, the amortization and impairment charges of intangibles and cash flows do not affect the market values of French companies, unlike the other variables, which affect market values positively and substantially. Also, goodwill and book values are more associated with market values than intangible assets and earnings, respectively. Finally, we find that intangibles have improved the value relevance of accounting information. Practical implications: French legislators must give more attention to intangibles in order to enrich the content of financial statements and increase the pertinence of accounting information. Auditors must give more attention to the examination of intangibles in order to certify the amounts related to intangibles in financial statements and hence enrich their reliability, which provides adequate guarantees for investors to use them in decision making. Originality/value: The paper used recently available financial data and proposed an improvement concerning the measurement of the incremental value relevance of intangible items.

  11. Paradigms for adaptive statistical information designs: practical experiences and strategies.

    Science.gov (United States)

    Wang, Sue-Jane; Hung, H M James; O'Neill, Robert

    2012-11-10

    In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived to increase efficiency and to be more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs is in studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals, and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information

  12. Power analysis as a tool to identify statistically informative indicators for monitoring coral reef disturbances.

    Science.gov (United States)

    Van Wynsberge, Simon; Gilbert, Antoine; Guillemot, Nicolas; Heintz, Tom; Tremblay-Boyer, Laura

    2017-07-01

    Extensive biological field surveys are costly and time consuming. To optimize sampling and ensure regular monitoring on the long term, identifying informative indicators of anthropogenic disturbances is a priority. In this study, we used 1800 candidate indicators by combining metrics measured from coral, fish, and macro-invertebrate assemblages surveyed from 2006 to 2012 in the vicinity of an ongoing mining project in the Voh-Koné-Pouembout lagoon, New Caledonia. We performed a power analysis to identify a subset of indicators which would best discriminate temporal changes due to a simulated chronic anthropogenic impact. Only 4% of the tested indicators were likely to detect a 10% annual decrease in values with sufficient power (>0.80). Corals generally yielded higher statistical power than macro-invertebrates and fishes because of lower natural variability and higher occurrence. For the same reasons, higher taxonomic ranks provided higher power than lower taxonomic ranks. Nevertheless, a number of families of common sedentary or sessile macro-invertebrates and fishes also performed well in detecting changes: Echinometridae, Isognomidae, Muricidae, Tridacninae, Arcidae, and Turbinidae for macro-invertebrates and Pomacentridae, Labridae, and Chaetodontidae for fishes. Interestingly, these families did not provide high power in all geomorphological strata, suggesting that the ability of indicators to detect anthropogenic impacts was closely linked to reef geomorphology. This study provides a first operational step toward identifying statistically relevant indicators of anthropogenic disturbances in New Caledonia's coral reefs, which can be useful in similar tropical reef ecosystems where little information is available regarding the responses of ecological indicators to anthropogenic disturbances.
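
The kind of power analysis this study performs can be sketched with a small Monte Carlo simulation. The monitoring design (7 annual surveys), the permutation trend test, and the coefficient-of-variation values below are illustrative assumptions, not the authors' procedure; the point is only that noisier indicators yield far lower power to detect the same 10% annual decline.

```python
# Monte Carlo power of detecting a 10% annual decline in an indicator,
# as a function of the indicator's observation noise (coefficient of variation).
import random

def simulated_power(cv, years=7, decline=0.10, n_sim=200, n_perm=200,
                    alpha=0.05, seed=1):
    """Fraction of simulated monitoring series in which a permutation
    test on the fitted linear slope is significant at level `alpha`."""
    rng = random.Random(seed)
    t = list(range(years))

    def slope(xs, ys):
        mx = sum(xs) / len(xs)
        my = sum(ys) / len(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = sum((x - mx) ** 2 for x in xs)
        return num / den

    detections = 0
    for _ in range(n_sim):
        # indicator declines geometrically, with multiplicative noise
        y = [(1 - decline) ** yr * (1 + rng.gauss(0, cv)) for yr in t]
        obs = abs(slope(t, y))
        perm = y[:]
        extreme = 0
        for _ in range(n_perm):
            rng.shuffle(perm)       # permutation null: no trend over years
            if abs(slope(t, perm)) >= obs:
                extreme += 1
        if extreme / n_perm < alpha:
            detections += 1
    return detections / n_sim

p_low_noise = simulated_power(0.1)   # stable indicator (e.g. coral cover)
p_high_noise = simulated_power(0.5)  # highly variable indicator
print(f"power, cv=0.1: {p_low_noise:.2f}; power, cv=0.5: {p_high_noise:.2f}")
```

This mirrors the study's main finding qualitatively: low natural variability is what makes an indicator statistically informative for long-term monitoring.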

  13. In the Dark: Young Men's Stories of Sexual Initiation in the Absence of Relevant Sexual Health Information

    Science.gov (United States)

    Kubicek, Katrina; Beyer, William J.; Weiss, George; Iverson, Ellen; Kipke, Michele D.

    2010-01-01

    A growing body of research has investigated the effectiveness of abstinence-only sexual education. There remains a dearth of research on the relevant sexual health information available to young men who have sex with men (YMSM). Drawing on a mixed-methods study with 526 YMSM, this study explores how and where YMSM receive relevant information on…

  14. Feature-selective Attention in Frontoparietal Cortex: Multivoxel Codes Adjust to Prioritize Task-relevant Information.

    Science.gov (United States)

    Jackson, Jade; Rich, Anina N; Williams, Mark A; Woolgar, Alexandra

    2017-02-01

    Human cognition is characterized by astounding flexibility, enabling us to select appropriate information according to the objectives of our current task. A circuit of frontal and parietal brain regions, often referred to as the frontoparietal attention network or multiple-demand (MD) regions, are believed to play a fundamental role in this flexibility. There is evidence that these regions dynamically adjust their responses to selectively process information that is currently relevant for behavior, as proposed by the "adaptive coding hypothesis" [Duncan, J. An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820-829, 2001]. Could this provide a neural mechanism for feature-selective attention, the process by which we preferentially process one feature of a stimulus over another? We used multivariate pattern analysis of fMRI data during a perceptually challenging categorization task to investigate whether the representation of visual object features in the MD regions flexibly adjusts according to task relevance. Participants were trained to categorize visually similar novel objects along two orthogonal stimulus dimensions (length/orientation) and performed short alternating blocks in which only one of these dimensions was relevant. We found that multivoxel patterns of activation in the MD regions encoded the task-relevant distinctions more strongly than the task-irrelevant distinctions: The MD regions discriminated between stimuli of different lengths when length was relevant and between the same objects according to orientation when orientation was relevant. The data suggest a flexible neural system that adjusts its representation of visual objects to preferentially encode stimulus features that are currently relevant for behavior, providing a neural mechanism for feature-selective attention.

  15. Analysis of the Relevance of Information Content of the Value Added Statement in the Brazilian Capital Markets

    Directory of Open Access Journals (Sweden)

    Márcio André Veras Machado

    2015-04-01

    Full Text Available The usefulness of financial statements depends, fundamentally, on the degree of relevance of the information they disclose to users. Thus, studies that measure the relevance of accounting information to the users of financial statements are of some importance. One line of research within this subject is ascertaining the relevance and importance of accounting information for the capital markets: if a particular item of accounting information is minimally reflected in the price of a share, it is because this information has relevance, at least at a certain level of significance, for investors and analysts of the capital markets. The present study aims to analyze the relevance, in the Brazilian capital markets, of the information content of the Value Added Statement (VAS, referred to in Brazil as the Demonstração do Valor Adicionado, or DVA). It analyzed the ratio between stock price and wealth created per share (WCPS), using linear regressions, for the period 2005-2011, for non-financial listed companies included in Melhores & Maiores ('Biggest & Best'), an annual listing published by Exame magazine in Brazil. As a secondary objective, this article seeks to establish whether WCPS represents a better indication of a company's result than net profit per share (referred to in this study as NPPS). The empirical evidence that was found supports the concept that the VAS has relevant information content, because it shows a capacity to explain variation in the share prices of the companies studied. Additionally, the relationship between WCPS and the stock price was shown to be significant, even after the inclusion of the control variables stockholders' equity per share (abbreviated in this study to SEPS) and NPPS. Finally, the evidence found indicates that the market reacts more to WCPS than to NPPS. Thus, the results obtained give some indication that, for the Brazilian capital markets, WCPS may be a better proxy
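
The price-level regression underlying this kind of value-relevance test can be sketched as follows. All numbers are invented stand-ins for share price and WCPS; the study itself uses pooled regressions over many firm-years with additional controls.

```python
# Simple OLS of share price on wealth created per share (WCPS):
# a significant positive slope and a high R^2 are read as value relevance.
prices = [12.0, 8.5, 15.2, 6.3, 10.1, 18.4, 7.7, 13.9]  # invented prices
wcps   = [ 3.1, 2.0,  4.0, 1.4,  2.6,  4.8, 1.9,  3.6]  # invented WCPS

n = len(prices)
mx = sum(wcps) / n
my = sum(prices) / n
sxx = sum((x - mx) ** 2 for x in wcps)
sxy = sum((x - mx) * (y - my) for x, y in zip(wcps, prices))

beta = sxy / sxx                 # value-relevance coefficient
intercept = my - beta * mx
ss_res = sum((y - (intercept + beta * x)) ** 2 for x, y in zip(wcps, prices))
ss_tot = sum((y - my) ** 2 for y in prices)
r2 = 1 - ss_res / ss_tot         # explanatory power of WCPS for price

print(f"beta = {beta:.2f}, R^2 = {r2:.3f}")
```

Comparing R² from this regression with the analogous regression on NPPS is, in essence, how the study judges which measure the market reacts to more.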

  16. 40 CFR 86.1862-04 - Maintenance of records and submittal of information relevant to compliance with fleet average NOX...

    Science.gov (United States)

    2010-07-01

    ... of information relevant to compliance with fleet average NOX standards. 86.1862-04 Section 86.1862-04...-Cycle Heavy-Duty Vehicles § 86.1862-04 Maintenance of records and submittal of information relevant to..., maintain, and retain the following information in adequately organized and indexed records for each model...

  17. Statistical methods of combining information: Applications to sensor data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Burr, T.

    1996-12-31

    This paper reviews some statistical approaches to combining information from multiple sources. Promising new approaches will be described, and potential applications to combining not-so-different data sources such as sensor data will be discussed. Experiences with one real data set are described.

  18. Strategic relevance and accountability expectations: new perspectives for health care information technology design.

    Science.gov (United States)

    Tan, J K; Modrow, R E

    1999-05-01

    In this article, we discuss the traditional systems analysis perspective on end-user information requirements analysis and extend it to merge with the new accountability expectations perspective to guide the future planning and design of health organization information systems. Underlying the strategic relevance of health care information technology (HCIT) are three critical questions: (1) What is the ideal HCIT model for the health organization in terms of achieving strategic expertise and competitive advantage? Specifically, how does this model link industry performance standards with organizational performance and accountability expectations? (2) How should the limitations of past HCIT models be reconciled to the benefits presented by the superior arrangement of the ideal model in the context of changing accountability expectations? (3) How should alternative HCIT solutions be evaluated in light of evidence-based accountability and organizational performance benchmarking? Insights into these questions will ensure that health care managers, HCIT practitioners and researchers can continue to focus on the most critical issues in harnessing today's fast-paced changing technologies for evolving strategically relevant, performance-based health organization systems.

  19. Relevant cost information for order acceptance decisions

    NARCIS (Netherlands)

    Wouters, M.J.F.

    1997-01-01

    Some economic considerations for order acceptance decisions are discussed. The relevant economic considerations for order acceptance are widely discussed in the literature: only those costs are relevant which would be avoidable by not accepting the order, i.e. incremental costs plus opportunity costs.

  20. Divided attention selectively impairs memory for self-relevant information.

    Science.gov (United States)

    Turk, David J; Brady-van den Bos, Mirjam; Collard, Philip; Gillespie-Smith, Karri; Conway, Martin A; Cunningham, Sheila J

    2013-05-01

    Information that is relevant to oneself tends to be remembered more than information that relates to other people, but the role of attention in eliciting this "self-reference effect" is unclear. In the present study, we assessed the importance of attention in self-referential encoding using an ownership paradigm, which required participants to encode items under conditions of imagined ownership by themselves or by another person. Previous work has established that this paradigm elicits a robust self-reference effect, with more "self-owned" items being remembered than "other-owned" items. Access to attentional resources was manipulated using divided-attention tasks at encoding. A significant self-reference effect emerged under full-attention conditions and was related to an increase in episodic recollection for self-owned items, but dividing attention eliminated this memory advantage. These findings are discussed in relation to the nature of self-referential cognition and the importance of attentional resources at encoding in the manifestation of the self-reference effect in memory.

  1. EVIDENCE FROM THE GERMAN CAPITAL MARKET REGARDING THE VALUE RELEVANCE OF CONSOLIDATED VERSUS PARENT COMPANY FINANCIAL STATEMENTS

    Directory of Open Access Journals (Sweden)

    Muller Victor - Octavian

    2011-07-01

    Full Text Available Financial statements' main objective is to give information on the financial position, performance and changes in financial position of the reporting entity, which is useful to investors and other users in making economic decisions. In order to be useful, financial information needs to be relevant to the decision-making process of users in general, and investors in particular. Hence, the following question arises logically: which of the two sets best serves the information needs of investors (and other categories of users), respectively which of the two sets is more relevant for investors? Of course, the possibility of both sets at the same time best serving the information needs should not be ruled out. In our scientific endeavor we conducted an empirical association study on the problem of market value relevance of consolidated financial statements and of individual financial statements of the parent company, searching for an answer to the above question. In this sense, we analyze the absolute and relative market value relevance of consolidated accounting information of listed companies on the Frankfurt Stock Exchange (one of the three largest stock markets in the European Union) between 2003 and 2008. Through this empirical study we intend to contribute to the relatively limited literature on this topic with a comparative time analysis of the absolute and incremental relevance of financial information supplied by the two categories of financial statements (group and individual). The results obtained indicate a statistically significant superiority of the relevance of consolidated statements (to the detriment of individual ones). However, we could not statistically prove a superior value relevance of information provided together by consolidated and parent company financial statements as opposed to consolidated information alone. On the one hand, these results prove the importance (usefulness) of consolidated financial statements especially for investors on

  2. An analysis of contextual information relevant to medical care unexpectedly volunteered to researchers by asthma patients.

    Science.gov (United States)

    Black, Heather L; Priolo, Chantel; Gonzalez, Rodalyn; Geer, Sabrina; Adam, Bariituu; Apter, Andrea J

    2012-09-01

    To describe and categorize contextual information relevant to patients' medical care unexpectedly volunteered to research personnel as part of a patient advocate (PA) intervention to facilitate access to health care, communication with medical personnel, and self-management of a chronic disease such as asthma. We adapted a patient navigator intervention to overcome barriers to access and communication for adults with moderate or severe asthma. Informed by focus groups of patients and providers, our PAs facilitated preparation for a visit with an asthma provider, attended the visit, confirmed understanding, and assisted with post-visit activities. During meetings with researchers, either for PA activities or for data collection, participants frequently volunteered personal and medical information relevant for achieving successful self-management that was not routinely shared with medical personnel. For this project, researchers journaled information not captured by the structured questionnaires and protocol. Using a qualitative analysis, we describe (1) researchers' journals of these unique communications; (2) their relevance for accomplishing self-management; (3) PAs' formal activities including teach-back, advocacy, and facilitating appointment making; and (4) observations of patients' interactions with the clinical practices. In 83 journals, patients' social support (83%), health (68%), and deportment (69%) were described. PA assistance with navigating the medical system (59%), teach-back (46%), and observed interactions with patient and medical staff (76%) were also journaled. Implicit were ways patients and practices could overcome barriers to access and communication. These journals describe the importance of seeking contextual and medically relevant information from all patients and, especially, those with significant morbidities, prompting patients for barriers to access to health care, and confirming understanding of medical information.

  3. Searchers' relevance judgments and criteria in evaluating Web pages in a learning style perspective

    DEFF Research Database (Denmark)

    Papaeconomou, Chariste; Zijlema, Annemarie F.; Ingwersen, Peter

    2008-01-01

    The paper presents the results of a case study of searchers' relevance criteria used for assessments of Web pages in a perspective of learning style. 15 test persons participated in the experiments based on two simulated work tasks that provided cover stories to trigger their information needs. Two learning styles were examined: Global and Sequential learners. The study applied eye-tracking for the observation of relevance hot spots on Web pages, learning style index analysis and post-search interviews to gain more in-depth information on relevance behavior. Findings reveal that with respect to use ... they are statistically insignificant. When interviewed in retrospect the resulting profiles tend to become even more similar across learning styles, but a shift occurs from instant assessments, with content features of web pages replacing topicality judgments as predominant relevance criteria.

  4. Statistics information of rice EST mapping results - RGP estmap2001 | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Full Text Available RGP estmap2001 - Statistics information of rice EST mapping results (LSDB Archive).

  5. Statistics for clinical nursing practice: an introduction.

    Science.gov (United States)

    Rickard, Claire M

    2008-11-01

    Difficulty in understanding statistics is one of the most frequently reported barriers to nurses applying research results in their practice. Yet the amount of nursing research published each year continues to grow, as does the expectation that nurses will undertake practice based on this evidence. Critical care nurses do not need to be statisticians, but they do need to develop a working knowledge of statistics so they can be informed consumers of research and so practice can evolve and improve. For those undertaking a research project, statistical literacy is required to interact with other researchers and statisticians, so as to best design and undertake the project. This article is the first in a series that guides critical care nurses through statistical terms and concepts relevant to their practice.

  6. Towards brain-activity-controlled information retrieval: Decoding image relevance from MEG signals.

    Science.gov (United States)

    Kauppi, Jukka-Pekka; Kandemir, Melih; Saarinen, Veli-Matti; Hirvenkari, Lotta; Parkkonen, Lauri; Klami, Arto; Hari, Riitta; Kaski, Samuel

    2015-05-15

    We hypothesize that brain activity can be used to control future information retrieval systems. To this end, we conducted a feasibility study on predicting the relevance of visual objects from brain activity. We analyze both magnetoencephalographic (MEG) and gaze signals from nine subjects who were viewing image collages, a subset of which was relevant to a predetermined task. We report three findings: i) the relevance of an image a subject looks at can be decoded from MEG signals with performance significantly better than chance, ii) fusion of gaze-based and MEG-based classifiers significantly improves the prediction performance compared to using either signal alone, and iii) non-linear classification of the MEG signals using Gaussian process classifiers outperforms linear classification. These findings break new ground for building brain-activity-based interactive image retrieval systems, as well as for systems utilizing feedback both from brain activity and eye movements. Copyright © 2015 Elsevier Inc. All rights reserved.
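The fusion result (finding ii) can be illustrated with a toy late-fusion scheme that averages the posterior probabilities of two classifiers; the numbers and the averaging rule below are assumptions for illustration only, not the paper's Gaussian-process method:

```python
# Toy relevance posteriors for 6 images (label 1 = relevant to the task).
labels = [1, 1, 1, 0, 0, 0]
p_meg  = [0.6, 0.4, 0.7, 0.45, 0.3, 0.55]   # hypothetical MEG classifier
p_gaze = [0.55, 0.65, 0.4, 0.35, 0.6, 0.3]  # hypothetical gaze classifier

def accuracy(probs, labels, thr=0.5):
    """Threshold posteriors at `thr` and score against the labels."""
    preds = [1 if p > thr else 0 for p in probs]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Late fusion: average the two posteriors per image. Errors that the two
# signals make on different images can cancel out after averaging.
fused = [0.5 * m + 0.5 * g for m, g in zip(p_meg, p_gaze)]
print(accuracy(p_meg, labels), accuracy(p_gaze, labels), accuracy(fused, labels))
```

On this toy data each single classifier misclassifies two images, while the fused posterior classifies all six correctly, mirroring the qualitative finding that fusion outperforms either signal alone.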

  7. Conference: Statistical Physics and Biological Information; F

    International Nuclear Information System (INIS)

    Gross, David J.; Hwa, Terence

    2001-01-01

    In the spring of 2001, the Institute for Theoretical Physics ran a 6-month scientific program on Statistical Physics and Biological Information. This program was organized by Walter Fitch (UC Irvine), Terence Hwa (UC San Diego), Luca Peliti (University Federico II, Naples), Gary Stormo (Washington University School of Medicine) and Chao Tang (NEC). Overall scientific supervision was provided by David Gross, Director, ITP. The ITP has an online conference/program proceeding which consists of audio and transparencies of almost all of the talks held during this program. Over 100 talks are available on the site at http://online.kitp.ucsb.edu/online/infobio01/

  8. Inclusion probability for DNA mixtures is a subjective one-sided match statistic unrelated to identification information.

    Science.gov (United States)

    Perlin, Mark William

    2015-01-01

    DNA mixtures of two or more people are a common type of forensic crime scene evidence. A match statistic that connects the evidence to a criminal defendant is usually needed for court. Jurors rely on this strength of match to help decide guilt or innocence. However, the reliability of unsophisticated match statistics for DNA mixtures has been questioned. The most prevalent match statistic for DNA mixtures is the combined probability of inclusion (CPI), used by crime labs for over 15 years. When testing 13 short tandem repeat (STR) genetic loci, the CPI(-1) value is typically around a million, regardless of DNA mixture composition. However, actual identification information, as measured by a likelihood ratio (LR), spans a much broader range. This study examined probability of inclusion (PI) mixture statistics for 517 locus experiments drawn from 16 reported cases and compared them with LR locus information calculated independently on the same data. The log(PI(-1)) values were examined and compared with corresponding log(LR) values. The LR and CPI methods were compared in case examples of false inclusion, false exclusion, a homicide, and criminal justice outcomes. Statistical analysis of crime laboratory STR data shows that inclusion match statistics exhibit a truncated normal distribution having zero center, with little correlation to actual identification information. By the law of large numbers (LLN), CPI(-1) increases with the number of tested genetic loci, regardless of DNA mixture composition or match information. These statistical findings explain why CPI is relatively constant, with implications for DNA policy, criminal justice, cost of crime, and crime prevention. Forensic crime laboratories have generated CPI statistics on hundreds of thousands of DNA mixture evidence items. However, this commonly used match statistic behaves like a random generator of inclusionary values, following the LLN rather than measuring identification information. A quantitative
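Why CPI(-1) grows with the number of tested loci regardless of mixture composition can be seen in a toy calculation (the allele frequencies below are invented for illustration, not case data):

```python
def locus_pi(included_allele_freqs):
    """Probability of inclusion at one locus: the chance that a random
    person's two alleles both fall within the mixture's allele set
    (sum of included allele frequencies, squared)."""
    return sum(included_allele_freqs) ** 2

# Hypothetical mixture: at each locus the observed alleles cover 60% of
# population allele frequency, giving a per-locus PI of 0.36.
per_locus = locus_pi([0.25, 0.20, 0.15])

# CPI is the product over loci, so CPI^-1 grows geometrically with the
# locus count, independent of how many contributors the mixture has.
for n_loci in (5, 13):
    cpi = per_locus ** n_loci
    print(n_loci, round(1 / cpi, 1))
```

With these illustrative frequencies, CPI(-1) climbs from a few hundred at 5 loci to over half a million at 13 loci, consistent with the abstract's observation that 13-locus CPI(-1) values cluster "around a million" irrespective of match information.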

  9. A statistical mechanical interpretation of algorithmic information theory: Total statistical mechanical interpretation based on physical argument

    International Nuclear Information System (INIS)

    Tadaki, Kohtaro

    2010-01-01

    The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp. 425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol. 5407, pp. 422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), statistical mechanical entropy S(T), and specific heat C(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature T itself, which is one of the most typical thermodynamic quantities. Namely, we showed that, for each of the thermodynamic quantities Z(T), F(T), E(T), and S(T) above, the computability of its value at temperature T gives a sufficient condition for T in (0,1) to satisfy the condition that the partial randomness of T equals T. In this paper, based on a physical argument on the same level of mathematical strictness as normal statistical mechanics in physics, we develop a total statistical mechanical interpretation of AIT which actualizes a perfect correspondence to normal statistical mechanics. We do this by identifying a microcanonical ensemble in the framework of AIT. As a result, we clarify the statistical mechanical meaning of the thermodynamic quantities of AIT.
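A representative definition, sketched from the cited works (notation may differ slightly from Tadaki's papers): with U a universal prefix-free machine and program length |p| playing the role of energy,

```latex
Z(T) \;=\; \sum_{p \,\in\, \operatorname{dom} U} 2^{-|p|/T},
\qquad
F(T) \;=\; -\,T \log_2 Z(T),
```

so that T = 1 recovers Chaitin's halting probability Ω as Z(1), and temperatures T < 1 weight longer programs more steeply, which is where the correspondence between temperature and partial randomness (compression rate) enters.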

  10. Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory

    Science.gov (United States)

    2016-05-12

    Final Report: Statistical Inference on Memory Structure of Processes and Its Applications to Information Theory (reporting period 15-May-2014 to 14-Feb-2015; report dated 12-05-2016; distribution unlimited). Sponsor: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Keywords: mathematical statistics; time series; Markov chains; random... Three areas

  11. Using language models to identify relevant new information in inpatient clinical notes.

    Science.gov (United States)

    Zhang, Rui; Pakhomov, Serguei V; Lee, Janet T; Melton, Genevieve B

    2014-01-01

    Redundant information in clinical notes within electronic health record (EHR) systems is ubiquitous and may negatively impact the use of these notes by clinicians, and, potentially, the efficiency of patient care delivery. Automated methods to identify redundant versus relevant new information may provide a valuable tool for clinicians to better synthesize patient information and navigate to clinically important details. In this study, we investigated the use of language models for identification of new information in inpatient notes, and evaluated our methods using expert-derived reference standards. The best method achieved precision of 0.743, recall of 0.832 and F1-measure of 0.784. The average proportion of redundant information was similar between inpatient and outpatient progress notes (76.6% (SD=17.3%) and 76.7% (SD=14.0%), respectively). Advanced practice providers tended to have higher rates of redundancy in their notes compared to physicians. Future investigation includes the addition of semantic components and visualization of new information.
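A crude stand-in for the language-model approach is flagging a sentence as new when few of its word bigrams appear in earlier notes; the threshold, helper names, and example notes below are assumptions for illustration, not the study's method:

```python
def bigrams(text):
    """Word bigrams of a lowercased text."""
    words = text.lower().split()
    return set(zip(words, words[1:]))

def is_new(sentence, prior_text, threshold=0.5):
    """Flag a sentence as 'new information' when less than `threshold`
    of its word bigrams already occur in earlier notes. (A toy proxy
    for the language-model scoring evaluated in the study.)"""
    sent, prior = bigrams(sentence), bigrams(prior_text)
    if not sent:
        return False
    overlap = len(sent & prior) / len(sent)
    return overlap < threshold

prior = "Patient stable overnight. Continue current antibiotics."
notes = [
    "Patient stable overnight.",                      # copied forward
    "New left lower lobe infiltrate on chest film.",  # novel finding
]
flags = [is_new(n, prior) for n in notes]
print(flags)  # expect [False, True]
```

Predictions like these could then be scored against an expert reference standard with the precision/recall/F1 metrics the abstract reports.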

  12. Making Statistical Data More Easily Accessible on the Web Results of the StatSearch Case Study

    CERN Document Server

    Rajman, M; Boynton, I M; Fridlund, B; Fyhrlund, A; Sundgren, B; Lundquist, P; Thelander, H; Wänerskär, M

    2005-01-01

    In this paper we present the results of the StatSearch case study that aimed at providing an enhanced access to statistical data available on the Web. In the scope of this case study we developed a prototype of an information access tool combining a query-based search engine with semi-automated navigation techniques exploiting the hierarchical structuring of the available data. This tool enables a better control of the information retrieval, improving the quality and ease of the access to statistical information. The central part of the presented StatSearch tool consists in the design of an algorithm for automated navigation through a tree-like hierarchical document structure. The algorithm relies on the computation of query related relevance score distributions over the available database to identify the most relevant clusters in the data structure. These most relevant clusters are then proposed to the user for navigation, or, alternatively, are the support for the automated navigation process. Several appro...
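The navigation idea (aggregate query-relevance scores over subtrees, then propose the highest-scoring cluster to the user) can be sketched as follows; the tree layout, field names, and scores are invented for illustration:

```python
def cluster_score(node):
    """Aggregate query-relevance over a subtree: sum of document
    scores at this node plus those of all descendant clusters."""
    total = sum(node.get("docs", []))
    for child in node.get("children", {}).values():
        total += cluster_score(child)
    return total

def best_child(node):
    """Propose the most relevant child cluster as the next navigation step."""
    children = node.get("children", {})
    return max(children, key=lambda name: cluster_score(children[name]))

# Toy hierarchy of statistical data with per-document relevance scores.
root = {
    "docs": [],
    "children": {
        "Population": {"docs": [0.9, 0.7], "children": {}},
        "Economy": {"docs": [0.2],
                    "children": {"Trade": {"docs": [0.8, 0.7], "children": {}}}},
    },
}
print(best_child(root))
```

Repeating `best_child` from the chosen node downward gives the semi-automated navigation described: the user is steered toward the subtree whose relevance-score distribution is most concentrated on the query.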

  13. Statistics for library and information services a primer for using open source R software for accessibility and visualization

    CERN Document Server

    Friedman, Alon

    2016-01-01

    Statistics for Library and Information Services, written for non-statisticians, provides logical, user-friendly, and step-by-step instructions to make statistics more accessible for students and professionals in the field of Information Science. It emphasizes concepts of statistical theory and data collection methodologies, but also extends to the topics of visualization creation and display, so that the reader will be able to better conduct statistical analysis and communicate his/her findings. The book is tailored for information science students and professionals. It has specific examples of dataset sets, scripts, design modules, data repositories, homework assignments, and a glossary lexicon that matches the field of Information Science. The textbook provides a visual road map that is customized specifically for Information Science instructors, students, and professionals regarding statistics and visualization. Each chapter in the book includes full-color illustrations on how to use R for the statistical ...

  14. Fuzzy statistical decision-making theory and applications

    CERN Document Server

    Kabak, Özgür

    2016-01-01

    This book offers a comprehensive reference guide to fuzzy statistics and fuzzy decision-making techniques. It provides readers with all the necessary tools for making statistical inference in the case of incomplete information or insufficient data, where classical statistics cannot be applied. The respective chapters, written by prominent researchers, explain a wealth of both basic and advanced concepts including: fuzzy probability distributions, fuzzy frequency distributions, fuzzy Bayesian inference, fuzzy mean, mode and median, fuzzy dispersion, fuzzy p-value, and many others. To foster a better understanding, all the chapters include relevant numerical examples or case studies. Taken together, they form an excellent reference guide for researchers, lecturers and postgraduate students pursuing research on fuzzy statistics. Moreover, by extending all the main aspects of classical statistical decision-making to its fuzzy counterpart, the book presents a dynamic snapshot of the field that is expected to stimu...

  15. On divergence of finite measures and their applicability in statistics and information theory

    Czech Academy of Sciences Publication Activity Database

    Vajda, Igor; Stummer, W.

    2009-01-01

    Roč. 44, č. 2 (2009), s. 169-187 ISSN 0233-1888 R&D Projects: GA MŠk(CZ) 1M0572; GA ČR(CZ) GA102/07/1131 Institutional research plan: CEZ:AV0Z10750506 Keywords : Local and global divergences of finite measures * Divergences of sigma-finite measures * Statistical censoring * Pinsker's inequality, Ornstein's distance * Differential power entropies Subject RIV: BD - Theory of Information Impact factor: 0.759, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/vajda-on divergence of finite measures and their applicability in statistics and information theory.pdf
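Pinsker's inequality, listed among the keywords, is the prototypical link between a divergence and a metric on measures: it bounds total variation distance by the Kullback-Leibler divergence (stated here in its standard form with D in nats),

```latex
D(P \,\|\, Q) \;\ge\; \tfrac{1}{2}\,\|P - Q\|_1^2 .
```

Refinements and analogues of such bounds for general (including sigma-finite) measures are the kind of result the cited paper concerns.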

  16. Toddlers favor communicatively presented information over statistical reliability in learning about artifacts.

    Directory of Open Access Journals (Sweden)

    Hanna Marno

    Full Text Available Observed associations between events can be validated by statistical information of reliability or by testament of communicative sources. We tested whether toddlers learn from their own observation of efficiency, assessed by statistical information on reliability of interventions, or from communicatively presented demonstration, when these two potential types of evidence of validity of interventions on a novel artifact are contrasted with each other. Eighteen-month-old infants observed two adults, one operating the artifact by a method that was more efficient (2/3 probability of success) than that of the other (1/3 probability of success). Compared to the Baseline condition, in which communicative signals were not employed, infants tended to choose the less reliable method to operate the artifact when this method was demonstrated in a communicative manner in the Experimental condition. This finding demonstrates that, in certain circumstances, communicative sanctioning of reliability may override statistical evidence for young learners. Such a bias can serve fast and efficient transmission of knowledge between generations.

  17. Identifying Statistical Dependence in Genomic Sequences via Mutual Information Estimates

    Directory of Open Access Journals (Sweden)

    Wojciech Szpankowski

    2007-12-01

    Full Text Available Questions of understanding and quantifying the representation and amount of information in organisms have become a central part of biological research, as they potentially hold the key to fundamental advances. In this paper, we demonstrate the use of information-theoretic tools for the task of identifying segments of biomolecules (DNA or RNA) that are statistically correlated. We develop a precise and reliable methodology, based on the notion of mutual information, for finding and extracting statistical as well as structural dependencies. A simple threshold function is defined, and its use in quantifying the level of significance of dependencies between biological segments is explored. These tools are used in two specific applications. First, they are used for the identification of correlations between different parts of the maize zmSRp32 gene. There, we find significant dependencies between the 5′ untranslated region in zmSRp32 and its alternatively spliced exons. This observation may indicate the presence of as-yet unknown alternative splicing mechanisms or structural scaffolds. Second, using data from the FBI's combined DNA index system (CODIS), we demonstrate that our approach is particularly well suited for the problem of discovering short tandem repeats—an application of importance in genetic profiling.
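The core quantity can be illustrated with a plug-in mutual-information estimate over paired symbols, much simpler than the paper's full methodology; the sequences below are toy examples, not biological data:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from two equal-length
    symbol sequences, treating aligned positions as paired samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint symbol-pair counts
    px, py = Counter(xs), Counter(ys)   # marginal symbol counts
    mi = 0.0
    for (x, y), c in pxy.items():
        # (c/n) * log2( p(x,y) / (p(x) p(y)) ), written with raw counts
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# A segment shares maximal information with itself (here 2 bits, since
# its four symbols are equally frequent); a statistically unrelated
# pairing gives an estimate of zero.
seg = "ACGTACGTACGT"
print(round(mutual_information(seg, seg), 3))
print(round(mutual_information(seg, "AAAACCCCGGGG"), 3))
```

Comparing such estimates against a significance threshold is the mechanism by which dependent segment pairs are flagged in the described approach.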

  18. Statistical information 1971-76. From the National Institute of Radiation Protection

    International Nuclear Information System (INIS)

    1978-01-01

    This report includes statistical information about the work performed at the National Institute of Radiation Protection, Sweden, during the period 1971-1976, as well as about the different fields causing the intervention by the institute. (E.R.)

  19. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information and aid in the organizing and integrating of it facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver past an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  20. The system for statistical analysis of logistic information

    Directory of Open Access Journals (Sweden)

    Khayrullin Rustam Zinnatullovich

    2015-05-01

    Full Text Available The current problem for managers in logistic and trading companies is the task of improving operational business performance and developing the logistics support of sales. The development of logistics sales supposes development and implementation of a set of works for the development of the existing warehouse facilities, including both a detailed description of the work performed and the timing of its implementation. Logistics engineering of a warehouse complex includes such tasks as: determining the number and types of technological zones, calculation of the required number of loading-unloading places, development of storage structures, development of pre-sales preparation zones, development of specifications of storage types, selection of loading-unloading equipment, detailed planning of the warehouse logistics system, creation of architectural-planning decisions, selection of information-processing equipment, etc. The currently used ERP and WMS systems do not allow us to solve the full list of logistics engineering problems. In this regard, the development of specialized software products, taking into account the specifics of warehouse logistics, and subsequent integration of this software with ERP and WMS systems, seems to be a current task. In this paper we suggest a system of statistical analysis of logistics information, designed to meet the challenges of logistics engineering and planning. The proposed specialized software is designed to improve the efficiency of the operating business and the development of logistics support of sales. The system is based on the methods of statistical data processing, the methods of assessment and prediction of logistics performance, the methods for the determination and calculation of the data required for registration, storage and processing of metal products, as well as the methods for planning the reconstruction and development

  1. New statistical methodology, mathematical models, and data bases relevant to the assessment of health impacts of energy technologies

    International Nuclear Information System (INIS)

    Ginevan, M.E.; Collins, J.J.; Brown, C.D.; Carnes, B.A.; Curtiss, J.B.; Devine, N.

    1981-01-01

    The present research develops new statistical methodology, mathematical models, and data bases of relevance to the assessment of health impacts of energy technologies, and uses these to identify, quantify, and predict adverse health effects of energy-related pollutants. Efforts are in five related areas: (1) evaluation and development of statistical procedures for the analysis of death-rate data, disease-incidence data, and large-scale data sets; (2) development of dose-response and demographic models useful in the prediction of the health effects of energy technologies; (3) application of our methods and models to analyses of the health risks of energy production; (4) a reanalysis of the Tri-State leukemia survey data, focusing on the relationship between myelogenous leukemia risk and diagnostic x-ray exposure; and (5) investigation of human birth weights as a possible early warning system for the effects of environmental pollution.

  2. 76 FR 44337 - Comments and Information Relevant to Mid Decade Review of NORA

    Science.gov (United States)

    2011-07-25

    ... NIOSH-244] Comments and Information Relevant to Mid Decade Review of NORA AGENCY: Department of Health...) is conducting a review of the processes of the National Occupational Research Agenda (NORA). In 2006, NORA entered its second decade with an industry sector-based structure. In 2011, as NORA reaches the...

  3. Inclusion probability for DNA mixtures is a subjective one-sided match statistic unrelated to identification information

    Directory of Open Access Journals (Sweden)

    Mark William Perlin

    2015-01-01

    Full Text Available Background: DNA mixtures of two or more people are a common type of forensic crime scene evidence. A match statistic that connects the evidence to a criminal defendant is usually needed for court. Jurors rely on this strength of match to help decide guilt or innocence. However, the reliability of unsophisticated match statistics for DNA mixtures has been questioned. Materials and Methods: The most prevalent match statistic for DNA mixtures is the combined probability of inclusion (CPI), used by crime labs for over 15 years. When testing 13 short tandem repeat (STR) genetic loci, the CPI⁻¹ value is typically around a million, regardless of DNA mixture composition. However, actual identification information, as measured by a likelihood ratio (LR), spans a much broader range. This study examined probability of inclusion (PI) mixture statistics for 517 locus experiments drawn from 16 reported cases and compared them with LR locus information calculated independently on the same data. The log(PI⁻¹) values were examined and compared with corresponding log(LR) values. Results: The LR and CPI methods were compared in case examples of false inclusion, false exclusion, a homicide, and criminal justice outcomes. Statistical analysis of crime laboratory STR data shows that inclusion match statistics exhibit a truncated normal distribution having zero center, with little correlation to actual identification information. By the law of large numbers (LLN), CPI⁻¹ increases with the number of tested genetic loci, regardless of DNA mixture composition or match information. These statistical findings explain why CPI is relatively constant, with implications for DNA policy, criminal justice, cost of crime, and crime prevention. Conclusions: Forensic crime laboratories have generated CPI statistics on hundreds of thousands of DNA mixture evidence items. However, this commonly used match statistic behaves like a random generator of inclusionary values, following the LLN.
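The law-of-large-numbers behavior described in this record can be sketched numerically. The sketch below is a minimal illustration, not the study's actual computation; the allele frequencies and locus counts are invented:

```python
# Sketch: combined probability of inclusion (CPI) across STR loci.
# Per locus, PI = (sum of frequencies of alleles seen in the mixture)^2;
# CPI is the product over loci, so CPI^-1 grows with the number of tested
# loci alone, regardless of how informative the match actually is.

def locus_pi(allele_freqs):
    """Probability that a random person is 'included' at one locus."""
    return sum(allele_freqs) ** 2

def combined_cpi(loci):
    """Product of per-locus inclusion probabilities."""
    cpi = 1.0
    for freqs in loci:
        cpi *= locus_pi(freqs)
    return cpi

# Hypothetical mixture: at every locus, four alleles each at frequency 0.15.
for n_loci in (5, 13, 20):
    loci = [[0.15, 0.15, 0.15, 0.15]] * n_loci
    print(n_loci, f"{1 / combined_cpi(loci):.3g}")   # CPI^-1 climbs with locus count
```

With 13 loci the inverse CPI lands near a million here, matching the order of magnitude the abstract reports as "typical".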

  4. Introduction to statistics using interactive MM*Stat elements

    CERN Document Server

    Härdle, Wolfgang Karl; Rönz, Bernd

    2015-01-01

    MM*Stat, together with its enhanced online version with interactive examples, offers a flexible tool that facilitates the teaching of basic statistics. It covers all the topics found in introductory descriptive statistics courses, including simple linear regression and time series analysis, the fundamentals of inferential statistics (probability theory, random sampling and estimation theory), and inferential statistics itself (confidence intervals, testing). MM*Stat is also designed to help students rework class material independently and to promote comprehension with the help of additional examples. Each chapter starts with the necessary theoretical background, which is followed by a variety of examples. The core examples are based on the content of the respective chapter, while the advanced examples, designed to deepen students’ knowledge, also draw on information and material from previous chapters. The enhanced online version helps students grasp the complexity and the practical relevance of statistical...

  5. Teaching statistics to nursing students: an expert panel consensus.

    Science.gov (United States)

    Hayat, Matthew J; Eckardt, Patricia; Higgins, Melinda; Kim, MyoungJin; Schmiege, Sarah J

    2013-06-01

    Statistics education is a necessary element of nursing education, and its inclusion is recommended in the American Association of Colleges of Nursing guidelines for nurse training at all levels. This article presents a cohesive summary of an expert panel discussion, "Teaching Statistics to Nursing Students," held at the 2012 Joint Statistical Meetings. All panelists were statistics experts, had extensive teaching and consulting experience, and held faculty appointments in a U.S.-based nursing college or school. The panel discussed degree-specific curriculum requirements, course content, how to ensure nursing students understand the relevance of statistics, approaches to integrating statistics consulting knowledge, experience with classroom instruction, use of knowledge from the statistics education research field to make improvements in statistics education for nursing students, and classroom pedagogy and instruction on the use of statistical software. Panelists also discussed the need for evidence to make data-informed decisions about statistics education and training for nurses. Copyright 2013, SLACK Incorporated.

  6. Knowledge-Intensive Gathering and Integration of Statistical Information on European Fisheries

    NARCIS (Netherlands)

    Klinkert, M.; Treur, J.; Verwaart, T.; Loganantharaj, R.; Palm, G.; Ali, M.

    2000-01-01

    Gathering, maintenance, integration and presentation of statistics are major activities of the Dutch Agricultural Economics Research Institute LEI. In this paper we explore how knowledge and agent technology can be exploited to support the information gathering and integration process. In

  7. Motivated memory: memory for attitude-relevant information as a function of self-esteem

    NARCIS (Netherlands)

    Wiersema, D.V.; van der Pligt, J.; van Harreveld, F.

    2010-01-01

    In this article we offer a new perspective on the contradictory findings in the literature on memory for attitude-relevant information. We propose that biases in memory are most likely to occur when the attitude involved is connected to personally important values and the self; i.e., if the attitude

  8. Geometric theory of information

    CERN Document Server

    2014-01-01

    This book brings together geometric tools and their applications for Information analysis. It collects current and emerging uses of Information Geometry Manifolds in the interdisciplinary fields of Advanced Signal, Image & Video Processing, Complex Data Modeling and Analysis, Information Ranking and Retrieval, Coding, Cognitive Systems, Optimal Control, Statistics on Manifolds, Machine Learning, Speech/sound recognition, and natural language treatment, which are also substantially relevant for industry.

  9. Relevance between the degree of industrial competition and fair value information: Study on the listed companies in China

    Directory of Open Access Journals (Sweden)

    Xuemin Zhuang

    2015-05-01

    Full Text Available Purpose: The purpose of this article is to study whether a natural relationship exists between fair value and the corporate external market. A series of unusual phenomena in the application of fair value aroused our research interest; we present evidence on how competition affects the value relevance of fair value information. Design/methodology/approach: This study uses fair value change gains and losses and calculates the ratio DFVPSit as the proxy variable for fair value. In order to effectively examine the mutual influence between the degree of industry competition and the value relevance of fair value, and to reduce the impact of multicollinearity, we built a regression model on the hypothesis that, other conditions being equal, fair value information has greater value relevance when the degree of industry competition is greater. To test the hypothesis, we compare the absolute values of the DFVPSit coefficients to judge the value relevance of fair value information: the greater the absolute value, the higher the relevance between changes in fair value gains and losses per share and stock prices. Findings: The higher the degree of competition in an industry, the more relevant its fair value information. There is also evidence that fair value information often correlates negatively with stock prices. Originality/value: The main contribution of the article is to show that we need not only to formulate and implement high-quality fair value accounting standards suited both to national conditions and to international practice, but also to further improve companies' external governance mechanisms to promote the value relevance of fair value information.
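The coefficient-comparison design described in this record can be sketched as a simple regression exercise. This is a minimal illustration under invented data: the DFVPS values, prices, and the high/low-competition split are all hypothetical, and plain one-variable least squares stands in for the article's full model:

```python
# Sketch of a value-relevance check: regress share price on per-share
# fair-value gains/losses (DFVPS) and compare |slope| across subsamples.
# A larger |slope| means price reacts more to fair-value information.

def ols_slope(x, y):
    """Slope of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

dfvps_high = [0.5, 1.0, 1.5, 2.0]     # high-competition industry (hypothetical)
price_high = [10.8, 12.1, 13.2, 14.1]
dfvps_low  = [0.5, 1.0, 1.5, 2.0]     # low-competition industry (hypothetical)
price_low  = [11.0, 11.2, 11.1, 11.4]

b_high = ols_slope(dfvps_high, price_high)
b_low  = ols_slope(dfvps_low, price_low)
print(abs(b_high) > abs(b_low))       # -> True: more relevance under competition
```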

  10. Review of Statistical Learning Methods in Integrated Omics Studies (An Integrated Information Science).

    Science.gov (United States)

    Zeng, Irene Sui Lan; Lumley, Thomas

    2018-01-01

    Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inference and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and Bayesian approaches when there is prior knowledge to be integrated, are also included in the commentary. For the completeness of the review, a table of currently available software and packages for omics, drawn from 23 publications, is summarized in the appendix.

  11. Statistical benchmark for BosonSampling

    International Nuclear Information System (INIS)

    Walschaers, Mattia; Mayer, Klaus; Buchleitner, Andreas; Kuipers, Jack; Urbina, Juan-Diego; Richter, Klaus; Tichy, Malte Christopher

    2016-01-01

    Boson samplers—set-ups that generate complex many-particle output states through the transmission of elementary many-particle input states across a multitude of mutually coupled modes—promise the efficient quantum simulation of a classically intractable computational task, and challenge the extended Church–Turing thesis, one of the fundamental dogmas of computer science. However, as in all experimental quantum simulations of truly complex systems, one crucial problem remains: how to certify that a given experimental measurement record unambiguously results from enforcing the claimed dynamics, on bosons, fermions or distinguishable particles? Here we offer a statistical solution to the certification problem, identifying an unambiguous statistical signature of many-body quantum interference upon transmission across a multimode, random scattering device. We show that statistical analysis of only partial information on the output state allows one to characterise the imparted dynamics through particle type-specific features of the emerging interference patterns. The relevant statistical quantifiers are classically computable, define a falsifiable benchmark for BosonSampling, and reveal distinctive features of many-particle quantum dynamics, which go much beyond mere bunching or anti-bunching effects. (fast track communication)

  12. IMPACT OF THE CONVERGENCE PROCESS TO INTERNATIONAL FINANCIAL REPORTING STANDARDS ON THE VALUE RELEVANCE OF FINANCIAL INFORMATION

    Directory of Open Access Journals (Sweden)

    Marcelo Alvaro da Silva Macedo

    2012-11-01

    Full Text Available Law 11.638/07 marked the start of a series of changes in the laws that regulate Brazilian accounting practices. The main reason for these changes is the convergence process of local with international accounting standards. As a result of Law 11.638/07, the legal precedent was established to achieve convergence. In that context, the aim of this study is to analyze the impact of the convergence process with international accounting standards on the relevance of financial information, based on data for 2007, without and with the alterations Law 11.638/07 introduced and according to the CPC Pronouncements, applicable as from 2008 onwards. Therefore, a value relevance study is used, applying regression analysis to annual stock price information (dependent variable) and net profit per share (NPPS) and net equity per share (NEPS) as independent variables. The main results show that financial information on NPPS and NEPS for 2007, with and without the legal alterations, is relevant for the capital market. A comparison between both regressions used in the analysis, however, shows an information gain for financial information that includes the changes introduced in the first phase of the accounting convergence process with the international standards.

  13. Applications of statistical physics and information theory to the analysis of DNA sequences

    Science.gov (United States)

    Grosse, Ivo

    2000-10-01

    DNA carries the genetic information of most living organisms, and the goal of genome projects is to uncover that genetic information. One basic task in the analysis of DNA sequences is the recognition of protein-coding genes. Powerful computer programs for gene recognition have been developed, but most of them are based on statistical patterns that vary from species to species. In this thesis I address the question of whether there exist universal statistical patterns that are different in coding and noncoding DNA of all living species, regardless of their phylogenetic origin. In search of such species-independent patterns, I study the mutual information function of genomic DNA sequences, and find that it shows persistent period-three oscillations. To understand the biological origin of the observed period-three oscillations, I compare the mutual information function of genomic DNA sequences to the mutual information function of stochastic model sequences. I find that the pseudo-exon model is able to reproduce the mutual information function of genomic DNA sequences. Moreover, I find that a generalization of the pseudo-exon model can connect the existence and the functional form of long-range correlations to the presence and the length distributions of coding and noncoding regions. Based on these theoretical studies I am able to find an information-theoretical quantity, the average mutual information (AMI), whose probability distributions are significantly different in coding and noncoding DNA, while they are almost identical in all studied species. These findings show that there exist universal statistical patterns that are different in coding and noncoding DNA of all studied species, and they suggest that the AMI may be used to identify genes in different living species, irrespective of their taxonomic origin.
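The period-three signature described in this record is easy to reproduce on a toy sequence. The sketch below is an illustration under invented phase-dependent base frequencies, not the thesis's actual genomic analysis: a two-letter sequence whose composition depends on position mod 3 stands in for codon structure.

```python
# Sketch: mutual information function I(k) of a codon-structured sequence.
# Base composition that depends on position mod 3 (as in protein-coding DNA)
# makes I(k) peak at k = 3, 6, 9, ... -- the period-three oscillation.
import math
import random
from collections import Counter

random.seed(42)
G_PROB = {0: 0.9, 1: 0.5, 2: 0.1}          # hypothetical phase-dependent bias
seq = "".join("G" if random.random() < G_PROB[i % 3] else "A"
              for i in range(30000))

def mutual_information(seq, k):
    """I(k) in bits between symbols at distance k."""
    pairs = list(zip(seq, seq[k:]))
    n = len(pairs)
    joint = Counter(pairs)
    left = Counter(a for a, _ in pairs)
    right = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2(c * n / (left[a] * right[b]))
               for (a, b), c in joint.items())

for k in range(1, 10):
    print(k, round(mutual_information(seq, k), 4))   # peaks at k = 3, 6, 9
```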

  14. A flexible statistics web processing service--added value for information systems for experiment data.

    Science.gov (United States)

    Heimann, Dennis; Nieschulze, Jens; König-Ries, Birgitta

    2010-04-20

    Data management in the life sciences has evolved from simple storage of data to complex information systems providing additional functionalities like analysis and visualization capabilities, demanding the integration of statistical tools. In many cases the statistical tools are hard-coded within the system. That leads to expensive integration, substitution, or extension of tools, because all changes have to be made in program code. Other systems use generic solutions for tool integration, but adapting these to another system mostly requires rather extensive work. This paper shows a way to provide statistical functionality through a statistics web service, which can be easily integrated into any information system and set up using XML configuration files. The statistical functionality is extendable by simply adding the description of a new application to a configuration file. The service architecture, the data exchange process between client and service, and the adding of analysis applications to the underlying service provider are described. Furthermore, a practical example demonstrates the functionality of the service.

  15. Information transport in classical statistical systems

    Science.gov (United States)

    Wetterich, C.

    2018-02-01

    For "static memory materials" the bulk properties depend on boundary conditions. Such materials can be realized by classical statistical systems which admit no unique equilibrium state. We describe the propagation of information from the boundary to the bulk by classical wave functions. The dependence of wave functions on the location of hypersurfaces in the bulk is governed by a linear evolution equation that can be viewed as a generalized Schrödinger equation. Classical wave functions obey the superposition principle, with local probabilities realized as bilinears of wave functions. For static memory materials the evolution within a subsector is unitary, as characteristic for the time evolution in quantum mechanics. The space-dependence in static memory materials can be used as an analogue representation of the time evolution in quantum mechanics - such materials are "quantum simulators". For example, an asymmetric Ising model on a Euclidean two-dimensional lattice represents the time evolution of free relativistic fermions in two-dimensional Minkowski space.

  16. Creation of reliable relevance judgments in information retrieval systems evaluation experimentation through crowdsourcing: a review.

    Science.gov (United States)

    Samimi, Parnia; Ravana, Sri Devi

    2014-01-01

    Test collections are used to evaluate information retrieval systems in laboratory-based evaluation experimentation. In a classic setting, generating relevance judgments involves human assessors and is a costly and time-consuming task. Researchers and practitioners are still challenged to perform reliable and low-cost evaluations of retrieval systems. Crowdsourcing as a novel method of data acquisition is broadly used in many research fields. It has been proven that crowdsourcing is an inexpensive and quick solution, as well as a reliable alternative, for creating relevance judgments. One of the crowdsourcing applications in IR is judging the relevancy of query-document pairs. In order to have a successful crowdsourcing experiment, the relevance judgment tasks should be designed precisely to emphasize quality control. This paper explores different factors that influence the accuracy of relevance judgments accomplished by workers and how to strengthen the reliability of judgments in crowdsourcing experiments.
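One common quality-control scheme consistent with this discussion can be sketched as follows. Majority voting over redundant labels and screening workers against gold ("honeypot") pairs are generic crowdsourcing practices, not the paper's specific protocol; all labels, pairs, and worker data below are invented:

```python
# Sketch: aggregating crowdsourced relevance judgments for query-document
# pairs by majority vote, plus a gold-pair accuracy check per worker.
from collections import Counter

def majority_vote(labels):
    """Most frequent label among redundant worker judgments."""
    return Counter(labels).most_common(1)[0][0]

def worker_accuracy(worker_labels, gold):
    """Share of gold-standard pairs the worker judged correctly."""
    hits = sum(worker_labels[pair] == rel for pair, rel in gold.items())
    return hits / len(gold)

judgments = {  # (query, doc) -> labels from three workers (invented)
    ("q1", "d1"): ["relevant", "relevant", "nonrelevant"],
    ("q1", "d2"): ["nonrelevant", "nonrelevant", "nonrelevant"],
}
aggregated = {pair: majority_vote(ls) for pair, ls in judgments.items()}
print(aggregated[("q1", "d1")])               # -> relevant

gold = {("q1", "d2"): "nonrelevant"}          # seeded check pair
worker1 = {("q1", "d1"): "relevant", ("q1", "d2"): "nonrelevant"}
print(worker_accuracy(worker1, gold))         # -> 1.0
```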

  17. Subjectivism as an unavoidable feature of ecological statistics

    Directory of Open Access Journals (Sweden)

    Martínez–Abraín, A.

    2014-12-01

    Full Text Available We address here the handling of previous information when performing statistical inference in ecology, both when dealing with model specification and selection, and when dealing with parameter estimation. We compare the perspectives on this problem of the frequentist and Bayesian schools, including objective and subjective Bayesians. We show that the issue of making use of previous information and making a priori decisions is a reality not only for Bayesians but also for frequentists. However, the latter tend to overlook this because of the common difficulty of having previous information available on the magnitude of the effect that is thought to be biologically relevant. This prior information should be fed into a priori power tests when determining the sample sizes needed to couple statistical and biological significance. Ecologists should make a greater effort to use available prior information because this is their most legitimate contribution to the inferential process. Parameter estimation and model selection would benefit if this were done, allowing a more reliable accumulation of knowledge, and hence progress, in the biological sciences.
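The a priori power test advocated in this record can be sketched with the standard normal-approximation sample-size formula for a two-sample comparison; the effect size deemed biologically relevant and the standard deviation below are invented for illustration:

```python
# Sketch: sample size per group for a two-sided two-sample test,
# n = 2 * (sigma * (z_{1-alpha/2} + z_{1-beta}) / delta)^2,
# fed with a prior judgment of the biologically relevant effect delta.
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Normal-approximation sample size per group."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return math.ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# Prior information: effects smaller than 0.5 sd are biologically trivial.
print(n_per_group(delta=0.5, sigma=1.0))   # -> 63 per group
```

Halving the biologically relevant effect roughly quadruples the required sample size, which is why the prior judgment of relevance matters so much.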

  18. INFORMATION TECHNOLOGIES OF THE STATISTICAL DATA ANALYSIS WITHIN THE SYSTEM OF HIGHER PSYCHOLOGICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    Svetlana V. Smirnova

    2013-01-01

    Full Text Available The features of using information technologies in applied statistics in psychology are considered in the article. Requirements for the statistical preparation of psychology students in the conditions of the information society are analyzed.

  19. Deep learning relevance

    DEFF Research Database (Denmark)

    Lioma, Christina; Larsen, Birger; Petersen, Casper

    2016-01-01

    train a Recurrent Neural Network (RNN) on existing relevant information to that query. We then use the RNN to "deep learn" a single, synthetic, and we assume, relevant document for that query. We design a crowdsourcing experiment to assess how relevant the "deep learned" document is, compared...... to existing relevant documents. Users are shown a query and four wordclouds (of three existing relevant documents and our deep learned synthetic document). The synthetic document is ranked on average most relevant of all....

  20. Expert vs. novice differences in the detection of relevant information during a chess game: evidence from eye movements.

    Science.gov (United States)

    Sheridan, Heather; Reingold, Eyal M

    2014-01-01

    The present study explored the ability of expert and novice chess players to rapidly distinguish between regions of a chessboard that were relevant to the best move on the board, and regions of the board that were irrelevant. Accordingly, we monitored the eye movements of expert and novice chess players, while they selected white's best move for a variety of chess problems. To manipulate relevancy, we constructed two different versions of each chess problem in the experiment, and we counterbalanced these versions across participants. These two versions of each problem were identical except that a single piece was changed from a bishop to a knight. This subtle change reversed the relevancy map of the board, such that regions that were relevant in one version of the board were now irrelevant (and vice versa). Using this paradigm, we demonstrated that both the experts and novices spent more time fixating the relevant relative to the irrelevant regions of the board. However, the experts were faster at detecting relevant information than the novices, as shown by the finding that experts (but not novices) were able to distinguish between relevant and irrelevant information during the early part of the trial. These findings further demonstrate the domain-related perceptual processing advantage of chess experts, using an experimental paradigm that allowed us to manipulate relevancy under tightly controlled conditions.

  1. Expert versus novice differences in the detection of relevant information during a chess game: Evidence from eye movements

    Directory of Open Access Journals (Sweden)

    Heather eSheridan

    2014-08-01

    Full Text Available The present study explored the ability of expert and novice chess players to rapidly distinguish between regions of a chessboard that were relevant to the best move on the board, and regions of the board that were irrelevant. Accordingly, we monitored the eye movements of expert and novice chess players, while they selected white’s best move for a variety of chess problems. To manipulate relevancy, we constructed two different versions of each chess problem in the experiment, and we counterbalanced these versions across participants. These two versions of each problem were identical except that a single piece was changed from a bishop to a knight. This subtle change reversed the relevancy map of the board, such that regions that were relevant in one version of the board were now irrelevant (and vice versa). Using this paradigm, we demonstrated that both the experts and novices spent more time fixating the relevant relative to the irrelevant regions of the board. However, the experts were faster at detecting relevant information than the novices, as shown by the finding that experts (but not novices) were able to distinguish between relevant and irrelevant information during the early part of the trial. These findings further demonstrate the domain-related perceptual processing advantage of chess experts, using an experimental paradigm that allowed us to manipulate relevancy under tightly controlled conditions.

  2. The Development of Introductory Statistics Students' Informal Inferential Reasoning and Its Relationship to Formal Inferential Reasoning

    Science.gov (United States)

    Jacob, Bridgette L.

    2013-01-01

    The difficulties introductory statistics students have with formal statistical inference are well known in the field of statistics education. "Informal" statistical inference has been studied as a means to introduce inferential reasoning well before and without the formalities of formal statistical inference. This mixed methods study…

  3. Knowledge-Sharing Intention among Information Professionals in Nigeria: A Statistical Analysis

    Science.gov (United States)

    Tella, Adeyinka

    2016-01-01

    In this study, the researcher administered a survey and developed and tested a statistical model to examine the factors that determine the intention of information professionals in Nigeria to share knowledge with their colleagues. The result revealed correlations between the overall score for intending to share knowledge and other…

  4. Nonparametric statistics with applications to science and engineering

    CERN Document Server

    Kvam, Paul H

    2007-01-01

    A thorough and definitive book that fully addresses traditional and modern-day topics of nonparametric statistics This book presents a practical approach to nonparametric statistical analysis and provides comprehensive coverage of both established and newly developed methods. With the use of MATLAB, the authors present information on theorems and rank tests in an applied fashion, with an emphasis on modern methods in regression and curve fitting, bootstrap confidence intervals, splines, wavelets, empirical likelihood, and goodness-of-fit testing. Nonparametric Statistics with Applications to Science and Engineering begins with succinct coverage of basic results for order statistics, methods of categorical data analysis, nonparametric regression, and curve fitting methods. The authors then focus on nonparametric procedures that are becoming more relevant to engineering researchers and practitioners. The important fundamental materials needed to effectively learn and apply the discussed methods are also provide...

  5. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence of enhanced representation of linguistically relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that the neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally relevant features that contribute optimally to perception. It is our view that long

  6. Identifying relevant group of miRNAs in cancer using fuzzy mutual information.

    Science.gov (United States)

    Pal, Jayanta Kumar; Ray, Shubhra Sankar; Pal, Sankar K

    2016-04-01

    MicroRNAs (miRNAs) act as a major biomarker of cancer. All miRNAs in human body are not equally important for cancer identification. We propose a methodology, called FMIMS, which automatically selects the most relevant miRNAs for a particular type of cancer. In FMIMS, miRNAs are initially grouped by using a SVM-based algorithm; then the group with highest relevance is determined and the miRNAs in that group are finally ranked for selection according to their redundancy. Fuzzy mutual information is used in computing the relevance of a group and the redundancy of miRNAs within it. Superiority of the most relevant group to all others, in deciding normal or cancer, is demonstrated on breast, renal, colorectal, lung, melanoma and prostate data. The merit of FMIMS as compared to several existing methods is established. While 12 out of 15 selected miRNAs by FMIMS corroborate with those of biological investigations, three of them viz., "hsa-miR-519," "hsa-miR-431" and "hsa-miR-320c" are possible novel predictions for renal cancer, lung cancer and melanoma, respectively. The selected miRNAs are found to be involved in disease-specific pathways by targeting various genes. The method is also able to detect the responsible miRNAs even at the primary stage of cancer. The related code is available at http://www.jayanta.droppages.com/FMIMS.html .
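The relevance-minus-redundancy idea behind FMIMS-style selection can be sketched with plain (not fuzzy) mutual information as a stand-in measure. The greedy ranking below is a generic mRMR-style procedure, not the paper's exact algorithm, and the miRNA names and discretized expression labels are invented:

```python
# Sketch: mRMR-style greedy ranking -- pick the feature that maximizes
# (mutual information with the class labels) minus (average mutual
# information with the features already selected, i.e. redundancy).
import math
from collections import Counter

def mi(xs, ys):
    """Mutual information in bits between two discrete label sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in joint.items())

def mrmr_rank(features, labels):
    """Greedy ranking: maximize relevance, minimize redundancy."""
    selected, remaining = [], dict(features)
    while remaining:
        def score(name):
            red = (sum(mi(remaining[name], features[s]) for s in selected)
                   / len(selected)) if selected else 0.0
            return mi(remaining[name], labels) - red
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

labels = [0, 0, 0, 1, 1, 1]                  # normal vs cancer (toy data)
features = {
    "miR-a": [0, 0, 0, 1, 1, 1],             # perfectly relevant
    "miR-b": [0, 0, 0, 1, 1, 0],             # noisy copy of miR-a
    "miR-c": [1, 0, 1, 0, 1, 0],             # irrelevant
}
print(mrmr_rank(features, labels)[0])        # -> miR-a
```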

  7. [Design and implementation of online statistical analysis function in information system of air pollution and health impact monitoring].

    Science.gov (United States)

    Lü, Yiran; Hao, Shuxin; Zhang, Guoqing; Liu, Jie; Liu, Yue; Xu, Dongqun

    2018-01-01

    To implement the online statistical analysis function in the information system of air pollution and health impact monitoring, and to obtain data analysis information in real time. Online statistical analysis was implemented on top of the database software using descriptive statistical methods, time-series analysis, multivariate regression analysis, SQL, and visual tools. The system generates basic statistical tables and summary tables of air pollution exposure and health impact data online; generates trend charts of each data part online, with interactive connection to the database; and generates interface sheets that can be exported directly to R, SAS, and SPSS online. The information system of air pollution and health impact monitoring implements the statistical analysis function online, which can provide real-time analysis results to its users.

  8. Autism Spectrum Disorder Updates - Relevant Information for Early Interventionists to Consider.

    Science.gov (United States)

    Allen-Meares, Paula; MacDonald, Megan; McGee, Kristin

    2016-01-01

    Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by deficits in social communication skills as well as repetitive, restricted or stereotyped behaviors (1). Early interventionists are often found at the forefront of assessment, evaluation, and early intervention services for children with ASD. The role of an early intervention specialist may include assessing developmental history, providing group and individual counseling, working in partnership with families on home, school, and community environments, mobilizing school and community resources, and assisting in the development of positive early intervention strategies (2, 3). The commonality among these roles resides in the importance of providing up-to-date, relevant information to families and children. The purpose of this review is to provide pertinent up-to-date knowledge for early interventionists to help inform practice in working with individuals with ASD, including common behavioral models of intervention.

  9. Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science

    Science.gov (United States)

    Ju, Boryung; Jin, Tao

    2013-01-01

    Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…

  10. Dissemination of statistical information for purposes of efficiency management of socio-economic development

    Directory of Open Access Journals (Sweden)

    Gerasimenko S.

    2013-01-01

    Full Text Available Questions connected with facilitating public access to information about living standards are considered. In particular, it is suggested to use the information of the System of National Accounts together with statistical methods. It is stressed that information about living standards should be supplemented with characteristics of the effectiveness of the management of socio-economic development.

  11. Statistical Information and Uncertainty: A Critique of Applications in Experimental Psychology

    Directory of Open Access Journals (Sweden)

    Donald Laming

    2010-04-01

    Full Text Available This paper presents, first, a formal exploration of the relationships between information (statistically defined), statistical hypothesis testing, the use of hypothesis testing in reverse as an investigative tool, channel capacity in a communication system, uncertainty, the concept of entropy in thermodynamics, and Bayes’ theorem. This exercise brings out the close mathematical interrelationships between different applications of these ideas in diverse areas of psychology. Subsequent illustrative examples are grouped under (a) the human operator as an ideal communications channel, (b) the human operator as a purely physical system, and (c) Bayes’ theorem as an algorithm for combining information from different sources. Some tentative conclusions are drawn about the usefulness of information theory within these different categories. (a) The idea of the human operator as an ideal communications channel has long been abandoned, though it provides some lessons that still need to be absorbed today. (b) Treating the human operator as a purely physical system provides a platform for the quantitative exploration of many aspects of human performance by analogy with the analysis of other physical systems. (c) The use of Bayes’ theorem to calculate the effects of prior probabilities and stimulus frequencies on human performance is probably misconceived, but it is difficult to obtain results precise enough to resolve this question.
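Category (c), Bayes' theorem as an algorithm for combining information from different sources, can be made concrete with a toy calculation; all numbers below are hypothetical:

```python
def posterior(prior, hit_rate, false_alarm_rate):
    """P(signal | detection) by Bayes' theorem."""
    p_detect = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / p_detect

# A rare signal (prior 2%) reported by an observer with a 90% hit rate
# and a 10% false-alarm rate: the posterior stays modest (about 0.16),
# showing how strongly prior probabilities constrain what a report means.
print(posterior(0.02, 0.90, 0.10))
```

The same arithmetic underlies the paper's question of how prior probabilities and stimulus frequencies should, in principle, shape human performance.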

  12. A quantum information approach to statistical mechanics

    International Nuclear Information System (INIS)

    Cuevas, G.

    2011-01-01

    The field of quantum information and computation harnesses and exploits the properties of quantum mechanics to perform tasks more efficiently than their classical counterparts, or that may uniquely be possible in the quantum world. Its findings and techniques have been applied to a number of fields, such as the study of entanglement in strongly correlated systems, new simulation techniques for many-body physics or, generally, to quantum optics. This thesis aims at broadening the scope of quantum information theory by applying it to problems in statistical mechanics. We focus on classical spin models, which are toy models used in a variety of systems, ranging from magnetism, neural networks, to quantum gravity. We tackle these models using quantum information tools from three different angles. First, we show how the partition function of a class of widely different classical spin models (models in different dimensions, different types of many-body interactions, different symmetries, etc) can be mapped to the partition function of a single model. We prove this by first establishing a relation between partition functions and quantum states, and then transforming the corresponding quantum states to each other. Second, we give efficient quantum algorithms to estimate the partition function of various classical spin models, such as the Ising or the Potts model. The proof is based on a relation between partition functions and quantum circuits, which allows us to determine the quantum computational complexity of the partition function by studying the corresponding quantum circuit. Finally, we outline the possibility of applying quantum information concepts and tools to certain models of discrete quantum gravity. The latter provide a natural route to generalize our results, insofar as the central quantity has the form of a partition function, and as classical spin models are used as toy models of matter. (author)

  13. Human capital information in management reports: An analysis of compliance with the characteristic of the relevance of disclosure

    Directory of Open Access Journals (Sweden)

    Ainhoa Saitua

    2015-06-01

    Full Text Available Purpose: The aim of this paper is to assess compliance with the characteristic of the relevance of disclosure in Management Reports, particularly dealing with Human Capital (HC) information. Design/methodology/approach: We codify all instances where narratives of IBEX-35 stock index companies over a five-year period in Spain comply with the recommendations for a "high quality" Management Commentary (MC) in terms of the relevance characteristic of the information disclosed (IASB, 2005). Findings: The analysis results show that a greater quantity of information about HC, in terms of the number of pages devoted to it, is not always indicative of higher quality in terms of relevance if we look for the application of the IASB recommendations. Research limitations/implications: Further research could assess compliance with other qualitative characteristics required by other internationally accepted standards or guidelines. Practical implications: Among the areas that require improvement in HC disclosures, we highlight forward-looking information. Social implications: We propose that an internationally accepted agreement must be struck to unite all the efforts being made to improve narrative information in the MC section, specifically with reference to HC. Originality/value: This work compiles the HC disclosures identified as best practices, which may serve as a reference for other companies.

  14. Assessment of the efficiency of functioning of special-purpose infocommunication systems under conditions of violation of the quality of information relevance

    Science.gov (United States)

    Parinov, A. V.; Korotkikh, L. P.; Desyatov, D. B.; Stepanov, L. V.

    2018-03-01

    The uniqueness of information processing mechanisms in special-purpose infocommunication systems and the increased interest of intruders lead to an increase in the relevance of the problems associated with their protection. The paper considers the issues of building risk-models for the violation of the relevance and value of information in infocommunication systems for special purposes. Also, special attention is paid to the connection between the qualities of relevance and the value of information obtained as a result of the operation of infocommunication systems for special purposes. Analytical expressions for the risk and damage function in the time range in special-purpose infocommunication systems are obtained, which can serve as a mathematical basis for risk assessment. Further, an analytical expression is obtained to assess the chance of obtaining up-to-date information in the operation of infocommunication systems up to the time the information quality is violated. An analytical expression for estimating the chance can be used to calculate the effectiveness of a special-purpose infocommunication system.

  15. An estimator for statistical anisotropy from the CMB bispectrum

    International Nuclear Information System (INIS)

    Bartolo, N.; Dimastrogiovanni, E.; Matarrese, S.; Liguori, M.; Riotto, A.

    2012-01-01

    Various data analyses of the Cosmic Microwave Background (CMB) provide observational hints of statistical isotropy breaking. Some of these features can be studied within the framework of primordial vector fields in inflationary theories, which generally display some level of statistical anisotropy both in the power spectrum and in higher-order correlation functions. Motivated by these observations and the recent theoretical developments in the study of primordial vector fields, we develop the formalism necessary to extract statistical anisotropy information from the three-point function of the CMB temperature anisotropy. We employ a simplified vector field model and parametrize the bispectrum of curvature fluctuations in such a way that all the information about statistical anisotropy is encoded in some parameters λ_LM (which measure the ratio of the anisotropic to the isotropic bispectrum amplitudes). For such a template bispectrum, we compute an optimal estimator for λ_LM and the expected signal-to-noise ratio. We estimate that, for f_NL ≅ 30, an experiment like Planck can be sensitive to a ratio of the anisotropic to the isotropic amplitudes of the bispectrum as small as 10%. Our results are complementary to the information coming from a power spectrum analysis and particularly relevant for those models where statistical anisotropy turns out to be suppressed in the power spectrum but not negligible in the bispectrum.

  16. Do doctors need statistics? Doctors' use of and attitudes to probability and statistics.

    Science.gov (United States)

    Swift, Louise; Miles, Susan; Price, Gill M; Shepstone, Lee; Leinster, Sam J

    2009-07-10

    There is little published evidence on what doctors do in their work that requires probability and statistics, yet the General Medical Council (GMC) requires new doctors to have these skills. This study investigated doctors' use of and attitudes to probability and statistics with a view to informing undergraduate teaching. An email questionnaire was sent to 473 clinicians with an affiliation to the University of East Anglia's Medical School. Of 130 respondents approximately 90 per cent of doctors who performed each of the following activities found probability and statistics useful for that activity: accessing clinical guidelines and evidence summaries, explaining levels of risk to patients, assessing medical marketing and advertising material, interpreting the results of a screening test, reading research publications for general professional interest, and using research publications to explore non-standard treatment and management options. Seventy-nine per cent (103/130, 95 per cent CI 71 per cent, 86 per cent) of participants considered probability and statistics important in their work. Sixty-three per cent (78/124, 95 per cent CI 54 per cent, 71 per cent) said that there were activities that they could do better or start doing if they had an improved understanding of these areas and 74 of these participants elaborated on this. Themes highlighted by participants included: being better able to critically evaluate other people's research; becoming more research-active; having a better understanding of risk; and being better able to explain things to, or teach, other people. Our results can be used to inform how probability and statistics should be taught to medical undergraduates and should encourage today's medical students of the subjects' relevance to their future careers. Copyright 2009 John Wiley & Sons, Ltd.
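The intervals quoted above (e.g. "103/130, 95 per cent CI 71 per cent, 86 per cent") can be approximately reproduced with a simple Wald interval for a proportion; the paper's exact interval method is not stated, so the bounds here may differ from the published ones by about a percentage point:

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    """Approximate two-sided 95% Wald confidence interval for a proportion."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p - half, p + half

# 103 of 130 doctors considered probability and statistics important.
low, high = wald_ci(103, 130)
print(f"{low:.0%} to {high:.0%}")  # prints "72% to 86%"
```

Exact (Clopper-Pearson) or Wilson intervals shift the lower bound slightly, which likely accounts for the 71% quoted in the abstract.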

  17. Statistical techniques to extract information during SMAP soil moisture assimilation

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, the need for bias correction prior to an assimilation of these estimates is reduced, which could result in a more effective use of the independent information provided by the satellite observations. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture estimates (similar to those used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. The skill of the assimilation estimates is assessed based on a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites as well as the International Soil Moisture Network. The NN retrieval assimilation is found to significantly improve the model skill, particularly in areas where the model does not represent processes related to agricultural practices. Additionally, the NN method is compared to assimilation experiments using traditional bias correction techniques. The NN retrieval assimilation is found to more effectively use the independent information provided by SMAP resulting in larger model skill improvements than assimilation experiments using traditional bias correction techniques.

  18. The Need for the Dissemination of Statistical Data and Information

    Directory of Open Access Journals (Sweden)

    Anna-Alexandra Frunza

    2016-01-01

    Full Text Available There is an emphasis nowadays on knowledge, so access to information has increased in relevance in the modern economies, which have developed their competitive advantage through their dynamic response to market changes. The effort for transparency has increased tremendously within the last decades, influenced also by the weight that digital support has provided. The need for the dissemination of statistical data and information has met new challenges in terms of aggregating the practices that both private and public organizations use in order to ensure optimum access for end users. The article stresses some key questions that can be introduced which ease the process of collection and presentation of the results subject to dissemination.

  19. Stimulus-response correspondence effect as a function of temporal overlap between relevant and irrelevant information processing.

    Science.gov (United States)

    Wang, Dong-Yuan Debbie; Richard, F Dan; Ray, Brittany

    2016-01-01

    The stimulus-response correspondence (SRC) effect refers to advantages in performance when stimulus and response correspond in dimensions or features, even if the common features are irrelevant to the task. Previous research indicated that the SRC effect depends on the temporal course of stimulus information processing. The current study investigated how the temporal overlap between relevant and irrelevant stimulus processing influences the SRC effect. In this experiment, the irrelevant stimulus (a previously associated tone) preceded the relevant stimulus (a coloured rectangle). The irrelevant and relevant stimuli onset asynchrony was varied to manipulate the temporal overlap between the irrelevant and relevant stimuli processing. Results indicated that the SRC effect size varied as a quadratic function of the temporal overlap between the relevant stimulus and irrelevant stimulus. This finding extends previous experimental observations that the SRC effect size varies in an increasing or decreasing function with reaction time. The current study demonstrated a quadratic function between effect size and the temporal overlap.

  20. Statistical properties of quantum entanglement and information entropy

    International Nuclear Information System (INIS)

    Abdel-Aty, M.M.A.

    2007-03-01

    Key words: entropy, entanglement, atom-field interaction, trapped ions, cold atoms, information entropy. Objects of research: Pure state entanglement, entropy squeezing mazer. The aim of the work: Study of new entanglement features and new measures for both pure states and mixed states of particle-field interaction, as well as the impact of information entropy on quantum information theory. Method of investigation: Methods of theoretical physics and applied mathematics (statistical physics, quantum optics) are used. Results obtained and their novelty: All the results of the dissertation are new, and many new features have been discovered. In particular, the most general case of pure-state entanglement has been introduced. Although various special aspects of the quantum entropy have been investigated previously, the general features of the dynamics, when a multi-level system and a common environment are considered, have not been treated before, and our work therefore fills a gap in the literature. Specifically: 1) A new entanglement measure based on quantum mutual entropy (mixed-state entanglement), which we call DEM, has been introduced; 2) A new treatment of the atomic information entropy in higher-level systems has been presented, and the problem has been completely solved in the case of a three-level system; 3) A new solution of the interaction between ultracold atoms and a cavity field has been discovered; 4) Some new models of the atom-field interaction have been adopted. Practical value: The subject is of theoretical character. Application region: Results can be used in quantum computer developments, as well as for further developments of quantum information and quantum communications. (author)
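One standard pure-state measure in this area, the entropy of entanglement (the von Neumann entropy of a reduced density matrix), can be sketched for a two-qubit state with real amplitudes; this illustrates the general idea only and is not the DEM measure introduced in the thesis:

```python
from math import log2, sqrt

def von_neumann_entropy(eigenvalues):
    """S(rho) = -sum p log2 p over density-matrix eigenvalues, in bits."""
    return sum(-p * log2(p) for p in eigenvalues if p > 1e-12)

def entanglement_entropy(a, b, c, d):
    """Entropy of entanglement of the pure state a|00> + b|01> + c|10> + d|11>
    (real amplitudes): the entropy of the first qubit's reduced state."""
    r00 = a * a + b * b              # <0|rho_A|0>
    r11 = c * c + d * d              # <1|rho_A|1>
    r01 = a * c + b * d              # off-diagonal element of rho_A
    gap = sqrt((r00 - r11) ** 2 + 4 * r01 ** 2)
    return von_neumann_entropy([(1 + gap) / 2, (1 - gap) / 2])

s = 1 / sqrt(2)
print(entanglement_entropy(s, 0, 0, s))  # Bell state: 1 bit, maximal
print(entanglement_entropy(1, 0, 0, 0))  # product state |00>: 0 bits
```

The closed-form eigenvalues are available here because the reduced density matrix of one qubit is a 2x2 symmetric matrix with unit trace.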

  1. Statistical inference and Aristotle's Rhetoric.

    Science.gov (United States)

    Macdonald, Ranald R

    2004-11-01

    Formal logic operates in a closed system where all the information relevant to any conclusion is present, whereas this is not the case when one reasons about events and states of the world. Pollard and Richardson drew attention to the fact that the reasoning behind statistical tests does not lead to logically justifiable conclusions. In this paper statistical inferences are defended not by logic but by the standards of everyday reasoning. Aristotle invented formal logic, but argued that people mostly get at the truth with the aid of enthymemes--incomplete syllogisms which include arguing from examples, analogies and signs. It is proposed that statistical tests work in the same way--in that they are based on examples, invoke the analogy of a model and use the size of the effect under test as a sign that the chance hypothesis is unlikely. Of existing theories of statistical inference only a weak version of Fisher's takes this into account. Aristotle anticipated Fisher by producing an argument of the form that there were too many cases in which an outcome went in a particular direction for that direction to be plausibly attributed to chance. We can therefore conclude that Aristotle would have approved of statistical inference and there is a good reason for calling this form of statistical inference classical.
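Aristotle's "too many cases in one direction" argument is, in modern terms, a one-sided sign test: the probability that at least k of n outcomes fall the same way if chance alone operates. A minimal sketch with invented counts:

```python
from math import comb

def one_sided_sign_test(k, n):
    """P(at least k of n binary outcomes go one way under fair chance)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 14 of 16 outcomes in the same direction: the tail probability is about
# 0.002, too small to attribute the direction plausibly to chance.
print(one_sided_sign_test(14, 16))
```

This is the weak Fisherian form the paper defends: the size of the tail probability serves as a sign that the chance hypothesis is implausible, without any formal alternative hypothesis.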

  2. Associations between presence of relevant information in referrals to radiology and prevalence rates in patients with suspected pulmonary embolism.

    Science.gov (United States)

    Hedner, Charlotta; Sundgren, Pia C; Kelly, Aine Marie

    2013-09-01

    The purpose of this study was to assess whether the presence of information including the pretest probability (Wells score), other known risk factors, and symptoms given on referrals for computed tomography (CT) pulmonary angiography correlated with prevalence rates for pulmonary embolism (PE), and to evaluate differences between a university and a regional hospital setting regarding patient characteristics, the amount of relevant information provided on referrals, and prevalence rates for pulmonary embolism. Retrospective review of all consecutive referrals (emergency room, inpatient, and outpatient) for CT performed on children and adults for suspected PE from two sites: a tertiary (university) hospital (site 1) and a secondary (regional) hospital (site 2) over a 5-year period. The overall prevalence rate was 510/3641, or 14% of all referrals. A significantly higher proportion of males had a positive CT compared to females (18% versus 12%). Between the amount of relevant information on the referral and the probability of a positive finding, only a slight trend was noted (P = .09). In two categories, "hypoxia" and "signs of deep vein thrombosis," the presence of this information conferred a higher probability of pulmonary embolism. The amount of relevant clinical information on the request did not correlate with prevalence rates, which may reflect a lack of documentation on the part of emergency physicians, who may use a "gestalt" approach. Request forms likely did not capture all relevant patient risks, and many factors may interact with each other, both positively and negatively. Pretest probability estimations were rarely performed, despite their inclusion in major society guidelines. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.

  3. Age and self-relevance effects on information search during decision making.

    Science.gov (United States)

    Hess, Thomas M; Queen, Tara L; Ennis, Gilda E

    2013-09-01

    We investigated how information search strategies used to support decision making were influenced by self-related implications of the task to the individual. Consistent with the notion of selective engagement, we hypothesized that increased self-relevance would result in more adaptive search behaviors and that this effect would be stronger in older adults than in younger adults. We examined search behaviors in 79 younger and 81 older adults using a process-tracing procedure with 2 different decision tasks. The impact of motivation (i.e., self-related task implications) was examined by manipulating social accountability and the age-related relevance of the task. Although age differences in search strategies were not great, older adults were more likely than younger adults to use simpler strategies in contexts with minimal self-implications. Contrary to expectations, young and old alike were more likely to use noncompensatory than compensatory strategies, even when engaged in systematic search, with education being the most important determinant of search behavior. The results support the notion that older adults are adaptive decision makers and that factors other than age may be more important determinants of performance in situations where knowledge can be used to support performance.

  4. Information Theory - The Bridge Connecting Bounded Rational Game Theory and Statistical Physics

    Science.gov (United States)

    Wolpert, David H.

    2005-01-01

    A long-running difficulty with conventional game theory has been how to modify it to accommodate the bounded rationality of all real-world players. A recurring issue in statistical physics is how best to approximate joint probability distributions with decoupled (and therefore far more tractable) distributions. This paper shows that the same information theoretic mathematical structure, known as Product Distribution (PD) theory, addresses both issues. In this, PD theory not only provides a principled formulation of bounded rationality and a set of new types of mean field theory in statistical physics; it also shows that those topics are fundamentally one and the same.

  5. Performance of the S-χ² Statistic for Full-Information Bifactor Models

    Science.gov (United States)

    Li, Ying; Rupp, Andre A.

    2011-01-01

    This study investigated the Type I error rate and power of the multivariate extension of the S-χ² statistic using unidimensional and multidimensional item response theory (UIRT and MIRT, respectively) models as well as full-information bifactor (FI-bifactor) models through simulation. Manipulated factors included test length, sample…

  6. CellBase, a comprehensive collection of RESTful web services for retrieving relevant biological information from heterogeneous sources.

    Science.gov (United States)

    Bleda, Marta; Tarraga, Joaquin; de Maria, Alejandro; Salavert, Francisco; Garcia-Alonso, Luz; Celma, Matilde; Martin, Ainoha; Dopazo, Joaquin; Medina, Ignacio

    2012-07-01

    During the past years, the advances in high-throughput technologies have produced an unprecedented growth in the number and size of repositories and databases storing relevant biological data. Today, there is more biological information than ever but, unfortunately, the current status of many of these repositories is far from optimal. Some of the most common problems are that the information is spread out over many small databases; frequently there are different standards among repositories; and some databases are no longer supported or contain overly specific and unconnected information. In addition, data size is increasingly becoming an obstacle when accessing or storing biological data. All these issues make it very difficult to extract and integrate information from different sources, to analyze experiments, or to access and query this information in a programmatic way. CellBase provides a solution to the growing necessity of integration by easing access to biological data. CellBase implements a set of RESTful web services that query a centralized database containing the most relevant biological data sources. The database is hosted on our servers and is regularly updated. CellBase documentation can be found at http://docs.bioinfo.cipf.es/projects/cellbase.

  7. Effects of self-schema elaboration on affective and cognitive reactions to self-relevant information.

    Science.gov (United States)

    Petersen, L E; Stahlberg, D; Dauenheimer, D

    2000-02-01

    The basic assumption of the integrative self-schema model (ISSM; L.-E. Petersen, 1994; L.-E. Petersen, D. Stahlberg, & D. Dauenheimer, 1996; D. Stahlberg, L.-E. Petersen, & D. Dauenheimer, 1994, 1999) is that self-schema elaboration (schematic vs. aschematic) affects reactions to self-relevant information. This assumption is based on the idea that schematic dimensions occupy a more central position in the cognitive system than aschematic dimensions. In the first study, this basic prediction could be clearly confirmed: The results showed that schematic dimensions possessed stronger cognitive associations with other self-relevant cognitions as well as a higher resistance to change than aschematic dimensions did. In the second study, the main assumptions of the ISSM concerning the affective and cognitive reactions to self-relevant feedback were tested: The ISSM proposes that, on schematic dimensions, reactions to self-relevant feedback will most likely follow principles of self-consistency theory, whereas on aschematic dimensions positive feedback should elicit the most positive reactions that self-enhancement theory would predict. The experimental results clearly confirmed the hypotheses derived from the ISSM for affective reactions. Cognitive reactions, however, were in line with self-consistency principles and were not modified by the elaboration of the self-schema dimension involved.

  8. Applied statistics for agriculture, veterinary, fishery, dairy and allied fields

    CERN Document Server

    Sahu, Pradip Kumar

    2016-01-01

    This book is aimed at a wide range of readers who lack confidence in the mathematical and statistical sciences, particularly in the fields of Agriculture, Veterinary, Fishery, Dairy and other related areas. Its goal is to present the subject of statistics and its useful tools in various disciplines in such a manner that, after reading the book, readers will be equipped to apply the statistical tools to extract otherwise hidden information from their data sets with confidence. Starting with the meaning of statistics, the book introduces measures of central tendency, dispersion, association, sampling methods, probability, inference, designs of experiments and many other subjects of interest in a step-by-step and lucid manner. The relevant theories are described in detail, followed by a broad range of real-world worked-out examples, solved either manually or with the help of statistical packages. In closing, the book also includes a chapter on which statistical packages to use, depending on the user’s respecti...

  9. INFO ANAV, a channel that is consolidated in the communication of information relevant to plant safety

    International Nuclear Information System (INIS)

    Lopera Broto, A. J.; Balbas Gomez, S.

    2012-01-01

    This weekly publication is intended to bring to all the people who work at the Asco and Vandellos sites information relevant to safety, since we are all responsible for the safe and reliable operation of our plants.

  10. Autism Spectrum Disorder Updates – Relevant Information for Early Interventionists to Consider

    Science.gov (United States)

    Allen-Meares, Paula; MacDonald, Megan; McGee, Kristin

    2016-01-01

    Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by deficits in social communication skills as well as repetitive, restricted or stereotyped behaviors (1). Early interventionists are often found at the forefront of assessment, evaluation, and early intervention services for children with ASD. The role of an early intervention specialist may include assessing developmental history, providing group and individual counseling, working in partnership with families on home, school, and community environments, mobilizing school and community resources, and assisting in the development of positive early intervention strategies (2, 3). The commonality among these roles resides in the importance of providing up-to-date, relevant information to families and children. The purpose of this review is to provide pertinent up-to-date knowledge for early interventionists to help inform practice in working with individuals with ASD, including common behavioral models of intervention. PMID:27840812

  11. From Quality to Information Quality in Official Statistics

    Directory of Open Access Journals (Sweden)

    Kenett Ron S.

    2016-12-01

    Full Text Available The term quality of statistical data, developed and used in official statistics and international organizations such as the International Monetary Fund (IMF and the Organisation for Economic Co-operation and Development (OECD, refers to the usefulness of summary statistics generated by producers of official statistics. Similarly, in the context of survey quality, official agencies such as Eurostat, National Center for Science and Engineering Statistics (NCSES, and Statistics Canada have created dimensions for evaluating the quality of a survey and its ability to report ‘accurate survey data’.

  12. Statistical modeling of static strengths of nuclear graphites with relevance to structural design

    International Nuclear Information System (INIS)

    Arai, Taketoshi

    1992-02-01

Use of graphite materials for structural members poses the problem of how to take into account the statistical properties of static strength, especially tensile fracture stresses, in component structural design. The present study comprises comprehensive examinations of the statistical data base and modeling of nuclear graphites. First, the report provides individual samples and their analyses on strengths of IG-110 and PGX graphites for HTTR components. Statistical characteristics of other HTGR graphites are also exemplified from the literature. Most statistical distributions of individual samples are found to be approximately normal. The goodness of fit to normal distributions is more satisfactory with larger sample sizes. Molded and extruded graphites, however, possess a variety of statistical properties depending on samples from different within-log locations and/or different orientations. Second, the previous statistical models, including the Weibull theory, are assessed from the viewpoint of applicability to design procedures. This leads to the conclusion that the Weibull theory and its modified versions are satisfactory only for limited parts of tensile fracture behavior; they are not consistent with the whole of the observations. Only normal statistics are justifiable as practical approaches to discuss specified minimum ultimate strengths as statistical confidence limits for individual samples. Third, the assessment of various statistical models emphasizes the need to develop advanced analytical models that involve the microstructural features of actual graphite materials. Improvements of other structural design methodologies are also presented. (author)
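
    The normal-versus-Weibull comparison described above can be sketched in a few lines. This is an illustrative outline only, using synthetic strength data rather than the IG-110/PGX measurements; the lower percentile used as a stand-in for a "specified minimum strength" is likewise an invented choice.

    ```python
    # Sketch: fit normal and two-parameter Weibull models to a synthetic
    # sample of tensile strengths and compare their log-likelihoods.
    # All numbers are illustrative, not nuclear-graphite data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    strengths = rng.normal(loc=25.0, scale=2.0, size=200)  # MPa, synthetic

    # Maximum-likelihood fits of both candidate distributions.
    mu, sigma = stats.norm.fit(strengths)
    c, loc, scale = stats.weibull_min.fit(strengths, floc=0.0)

    # Goodness of fit via total log-likelihood.
    ll_norm = stats.norm.logpdf(strengths, mu, sigma).sum()
    ll_weib = stats.weibull_min.logpdf(strengths, c, loc, scale).sum()

    # 1st-percentile strength under the normal model, a simple stand-in
    # for a statistical lower limit on ultimate strength.
    s_min = stats.norm.ppf(0.01, mu, sigma)
    print(f"normal logL={ll_norm:.1f}, Weibull logL={ll_weib:.1f}, "
          f"1% strength={s_min:.2f} MPa")
    ```

    With real within-log data one would fit each sampling location and orientation separately, as the abstract's discussion of molded and extruded graphites suggests.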

  13. Historical maintenance relevant information road-map for a self-learning maintenance prediction procedural approach

    Science.gov (United States)

    Morales, Francisco J.; Reyes, Antonio; Cáceres, Noelia; Romero, Luis M.; Benitez, Francisco G.; Morgado, Joao; Duarte, Emanuel; Martins, Teresa

    2017-09-01

A large percentage of transport infrastructures are composed of linear assets, such as roads and rail tracks. The large social and economic relevance of these constructions forces the stakeholders to ensure prolonged health/durability. Even so, inevitable malfunctions, breakdowns, and out-of-service periods arise randomly during the life cycle of the infrastructure. Predictive maintenance techniques tend to diminish the occurrence of unpredicted failures and the execution of corrective interventions by anticipating the adequate interventions to be conducted before failures show up. This communication presents: i) a procedural approach for collecting the relevant information regarding the evolving condition of the assets involved in all maintenance interventions; this reported and stored information constitutes a rich historical database for training Machine Learning algorithms to generate reliable predictions of the interventions to be carried out in future time scenarios; ii) a schematic flow chart of the automatic learning procedure; iii) self-learning rules derived automatically from false positives/negatives. The description, testing, automatic learning approach and the outcomes of a pilot case are presented; finally, some conclusions are outlined regarding the methodology proposed for improving the self-learning predictive capability.

  14. Improving the analysis of designed studies by combining statistical modelling with study design information

    NARCIS (Netherlands)

    Thissen, U.; Wopereis, S.; Berg, S.A.A. van den; Bobeldijk, I.; Kleemann, R.; Kooistra, T.; Dijk, K.W. van; Ommen, B. van; Smilde, A.K.

    2009-01-01

    Background: In the fields of life sciences, so-called designed studies are used for studying complex biological systems. The data derived from these studies comply with a study design aimed at generating relevant information while diminishing unwanted variation (noise). Knowledge about the study

  15. A neural mechanism of dynamic gating of task-relevant information by top-down influence in primary visual cortex.

    Science.gov (United States)

    Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki

    2016-12-01

Visual recognition involves bidirectional information flow, consisting of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of V1 that receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling early visual areas such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. What's New in the Medicine Cabinet?: A Panoramic Review of Clinically Relevant Information for the Busy Dermatologist.

    Science.gov (United States)

    Del Rosso, James Q; Zeichner, Joshua

    2014-01-01

    This article is the first in a periodic series of therapeutic topics with short reviews gleaned from major dermatology meetings, especially Scientific Poster Sessions, and is designed to provide information that may assist the readers in adapting information from the literature to their clinical practice. The topics covered in this issue are discussions of the clinical relevance of newer information about acne pathophysiology, acne in adult women, and topical corticosteroid spray formulations for chronic plaque psoriasis.

  17. 75 FR 21231 - Proposed Information Collection; Comment Request; Marine Recreational Fisheries Statistics Survey

    Science.gov (United States)

    2010-04-23

    ... Collection; Comment Request; Marine Recreational Fisheries Statistics Survey AGENCY: National Oceanic and... Andrews, (301) 713-2328, ext. 148 or [email protected] . SUPPLEMENTARY INFORMATION: I. Abstract Marine recreational anglers are surveyed for catch and effort data, fish biology data, and angler socioeconomic...

  18. Evaluation of relevant information for optimal reflector modeling through data assimilation procedures

    International Nuclear Information System (INIS)

    Argaud, J.P.; Bouriquet, B.; Clerc, T.; Lucet-Sanchez, F.; Poncot, A.

    2015-01-01

The goal of this study is to determine the amount of information needed to obtain a relevant optimisation of parameters by data assimilation for physical models in neutronic diffusion calculations, and to determine what is the best information to reach the optimum accuracy at the cheapest cost. To evaluate the quality of the optimisation, we study the covariance matrix that represents the accuracy of the optimised parameters. This matrix is a classical output of the data assimilation procedure, and it is the main information about the accuracy and sensitivity of the optimal parameter determination. We present some results collected in the field of neutronic simulation for a PWR-type reactor. We seek to optimise the reflector parameters that characterise the neutronic reflector surrounding the whole reactive core. On the basis of the configurations studied, it has been shown that with data assimilation we can determine a global strategy to optimise the quality of the result with respect to the amount of information provided. The consequence of this is a cost reduction in terms of measurement and/or computing time with respect to the basic approach. Another result is that using multi-campaign data rather than data from a single campaign significantly improves the efficiency of the parameter optimisation.
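
    The role of the covariance matrix described above can be illustrated with a linear-Gaussian toy problem: in that setting the analysis-error covariance is A = (B⁻¹ + HᵀR⁻¹H)⁻¹, so adding observations (rows of H, e.g. a second measurement campaign) can only shrink its diagonal. All matrices below are invented, not the paper's reactor model.

    ```python
    # Toy linear-Gaussian data assimilation: watch the parameter-error
    # variances (diagonal of the analysis covariance A) shrink as more
    # observations are provided.
    import numpy as np

    B = np.diag([1.0, 4.0])     # background (prior) covariance, 2 parameters
    H1 = np.array([[1.0, 0.0]]) # campaign 1: one measurement of parameter 0
    R1 = np.eye(1) * 0.5        # its observation-error covariance

    def analysis_cov(B, H, R):
        # A = (B^-1 + H^T R^-1 H)^-1
        return np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

    A1 = analysis_cov(B, H1, R1)

    # Campaign 2 adds a measurement sensitive to both parameters.
    H2 = np.vstack([H1, [[0.5, 1.0]]])
    R2 = np.eye(2) * 0.5
    A2 = analysis_cov(B, H2, R2)

    print(np.diag(B), np.diag(A1).round(3), np.diag(A2).round(3))
    ```

    Comparing the diagonals of A for candidate observation sets is one simple way to decide which extra information buys the most accuracy per measurement.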

  19. Age differences in default and reward networks during processing of personally relevant information.

    Science.gov (United States)

    Grady, Cheryl L; Grigg, Omer; Ng, Charisa

    2012-06-01

    We recently found activity in default mode and reward-related regions during self-relevant tasks in young adults. Here we examine the effect of aging on engagement of the default network (DN) and reward network (RN) during these tasks. Previous studies have shown reduced engagement of the DN and reward areas in older adults, but the influence of age on these circuits during self-relevant tasks has not been examined. The tasks involved judging personality traits about one's self or a well known other person. There were no age differences in reaction time on the tasks but older adults had more positive Self and Other judgments, whereas younger adults had more negative judgments. Both groups had increased DN and RN activity during the self-relevant tasks, relative to non-self tasks, but this increase was reduced in older compared to young adults. Functional connectivity of both networks during the tasks was weaker in the older relative to younger adults. Intrinsic functional connectivity, measured at rest, also was weaker in the older adults in the DN, but not in the RN. These results suggest that, in younger adults, the processing of personally relevant information involves robust activation of and functional connectivity within these two networks, in line with current models that emphasize strong links between the self and reward. The finding that older adults had more positive judgments, but weaker engagement and less consistent functional connectivity in these networks, suggests potential brain mechanisms for the "positivity bias" with aging. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Autism spectrum disorder updates – relevant information for early interventionists to consider

    Directory of Open Access Journals (Sweden)

    Paula Allen-Meares

    2016-10-01

Full Text Available Autism spectrum disorder (ASD) is a pervasive developmental disorder characterized by deficits in social communication skills as well as repetitive, restricted or stereotyped behaviors (1). Early interventionists are often found at the forefront of assessment, evaluation and early intervention services for children with ASD. The role of an early intervention specialist may include assessing developmental history, providing group and individual counseling, working in partnership with families on home, school, and community environments, mobilizing school and community resources and assisting in the development of positive early intervention strategies (2, 3). The commonality amongst these roles resides in the importance of providing up-to-date, relevant information to families and children. The purpose of this review is to provide pertinent up-to-date knowledge for early interventionists to help inform practice in working with individuals with ASD, including common behavioral models of intervention.

  1. Proceedings of the Pacific Rim Statistical Conference for Production Engineering : Big Data, Production Engineering and Statistics

    CERN Document Server

    Jang, Daeheung; Lai, Tze; Lee, Youngjo; Lu, Ying; Ni, Jun; Qian, Peter; Qiu, Peihua; Tiao, George

    2018-01-01

    This book presents the proceedings of the 2nd Pacific Rim Statistical Conference for Production Engineering: Production Engineering, Big Data and Statistics, which took place at Seoul National University in Seoul, Korea in December, 2016. The papers included discuss a wide range of statistical challenges, methods and applications for big data in production engineering, and introduce recent advances in relevant statistical methods.

  2. Entropy statistics and information theory

    NARCIS (Netherlands)

    Frenken, K.; Hanusch, H.; Pyka, A.

    2007-01-01

    Entropy measures provide important tools to indicate variety in distributions at particular moments in time (e.g., market shares) and to analyse evolutionary processes over time (e.g., technical change). Importantly, entropy statistics are suitable to decomposition analysis, which renders the

  3. Radiographic rejection index using statistical process control

    International Nuclear Information System (INIS)

    Savi, M.B.M.B.; Camozzato, T.S.C.; Soares, F.A.P.; Nandi, D.M.

    2015-01-01

The Repeat Analysis Index (IRR) is one of the items contained in the Quality Control Program dictated by the Brazilian law of radiological protection and should be determined frequently, at least every six months. In order to extract more and better information from the IRR, this study presents Statistical Quality Control applied to the reject rate through Statistical Process Control (control chart for attributes p - GC) and the Pareto chart (GP). Data collection was performed for 9 months, and during the last four months data were collected on a daily basis. The control limits (LC) were established and the Minitab 16 software was used to create the charts. The IRR obtained for the period was 8.8% ± 2.3%, and the generated charts were analyzed. Relevant information such as orders for X-ray equipment and processors was cross-checked to identify the relationship between the points that exceeded the control limits and the state of the equipment at the time. The GC demonstrated the ability to predict equipment failures, and the GP showed clearly which causes are recurrent in the IRR. (authors) [pt
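
    A minimal sketch of the control chart for attributes (p-chart) used above, with invented daily counts rather than the study's data; the limits follow the standard p̄ ± 3·sqrt(p̄(1−p̄)/n) formula.

    ```python
    # p-chart sketch for a radiographic reject rate: each day's reject
    # fraction is compared against 3-sigma limits around the overall rate.
    # Counts are invented for illustration.
    import math

    rejected = [12, 9, 15, 30, 8, 11, 14, 10]   # rejected films per day
    produced = [150] * 8                         # films produced per day

    p_bar = sum(rejected) / sum(produced)        # overall reject fraction

    def limits(n, p=p_bar):
        half = 3 * math.sqrt(p * (1 - p) / n)
        return max(0.0, p - half), p + half      # LCL cannot go below 0

    for day, (r, n) in enumerate(zip(rejected, produced), start=1):
        lcl, ucl = limits(n)
        p = r / n
        flag = "ok" if lcl <= p <= ucl else "out of control"
        print(f"day {day}: p={p:.3f}  LCL={lcl:.3f} UCL={ucl:.3f}  {flag}")
    ```

    A point above the upper limit (like the invented 30-reject day here) is the kind of signal the study cross-checked against equipment service records.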

  4. Japanese Quality Assurance System Regarding the Provision of Material Accounting Reports and the Safeguards Relevant Information to the IAEA

    International Nuclear Information System (INIS)

    Goto, Y.; Namekawa, M.; Kumekawa, H.; Usui, A.; Sano, K.

    2015-01-01

The provision of the safeguards-relevant reports and information in accordance with the comprehensive safeguards agreement (CSA) and the additional protocol (AP) is the basis for IAEA safeguards. The government of Japan (Japan Safeguards Office, JSGO) believes that correct reports contribute to effective and efficient safeguards; therefore, a domestic quality assurance system for reporting to the IAEA was already established at the time of the accession to the CSA in 1977. It consists of the Code 10 interpretation (including seminars for operators in Japan), the SSAC's checks for syntax errors, codes and internal consistency (computer-based consistency checks between facilities), and discussions with the IAEA on the facilities' measurement systems for bulk-handling facilities, which contribute to more accurate reports from operators. This spirit has been maintained for the entry into force of the AP. For example, questions and requests for amplification from the IAEA are taken into account in the review of the AP declaration before it is sent to the IAEA, and open-source information such as news articles and scientific literature in Japanese is collected, translated into English, and provided to the IAEA as supplementary information, which may contribute to broadening the IAEA's information sources and to their comprehensive evaluation. The other safeguards-relevant information, such as the mail-box information for SNRI at LEU fuel fabrication plants, is also checked by the JSGO's QC software before posting. The software was developed by JSGO, and it checks the data format, batch IDs, birth/death dates, shipper/receiver information and material description codes. This paper explains the history of the development of the Japanese quality assurance system regarding the reports and the safeguards-relevant information provided to the IAEA. (author)

  5. Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.

    Directory of Open Access Journals (Sweden)

    Andrew F Neuwald

    2016-12-01

Full Text Available Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).

  6. Inference of Functionally-Relevant N-acetyltransferase Residues Based on Statistical Correlations.

    Science.gov (United States)

    Neuwald, Andrew F; Altschul, Stephen F

    2016-12-01

    Over evolutionary time, members of a superfamily of homologous proteins sharing a common structural core diverge into subgroups filling various functional niches. At the sequence level, such divergence appears as correlations that arise from residue patterns distinct to each subgroup. Such a superfamily may be viewed as a population of sequences corresponding to a complex, high-dimensional probability distribution. Here we model this distribution as hierarchical interrelated hidden Markov models (hiHMMs), which describe these sequence correlations implicitly. By characterizing such correlations one may hope to obtain information regarding functionally-relevant properties that have thus far evaded detection. To do so, we infer a hiHMM distribution from sequence data using Bayes' theorem and Markov chain Monte Carlo (MCMC) sampling, which is widely recognized as the most effective approach for characterizing a complex, high dimensional distribution. Other routines then map correlated residue patterns to available structures with a view to hypothesis generation. When applied to N-acetyltransferases, this reveals sequence and structural features indicative of functionally important, yet generally unknown biochemical properties. Even for sets of proteins for which nothing is known beyond unannotated sequences and structures, this can lead to helpful insights. We describe, for example, a putative coenzyme-A-induced-fit substrate binding mechanism mediated by arginine residue switching between salt bridge and π-π stacking interactions. A suite of programs implementing this approach is available (psed.igs.umaryland.edu).

  7. Higher-Order Statistical Correlations and Mutual Information Among Particles in a Quantum Well

    Science.gov (United States)

    Yépez, V. S.; Sagar, R. P.; Laguna, H. G.

    2017-12-01

The influence of wave function symmetry on statistical correlation is studied for the case of three non-interacting spin-free quantum particles in a unidimensional box, in position and in momentum space. The higher-order statistical correlations occurring among the three particles in this quantum system are quantified via higher-order mutual information and compared to the correlation between pairs of variables in this model, and to the correlation in the two-particle system. The results for the higher-order mutual information show that there are states where the symmetric wave functions are more correlated than the antisymmetric ones with the same quantum numbers. This holds in position as well as in momentum space. This behavior is opposite to that observed for the correlation between pairs of variables in this model, and in the two-particle system, where the antisymmetric wave functions are in general more correlated. These results are also consistent with those observed in a system of three uncoupled oscillators. The use of higher-order mutual information as a correlation measure is monitored and examined by considering a superposition of states or systems with two Slater determinants.
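
    For readers unfamiliar with higher-order mutual information, a discrete toy case (unrelated to the quantum-well wave functions) shows how a triple of variables can be correlated while every pair is not: for Z = X XOR Y with fair-coin X and Y, the interaction information I(X;Y;Z) = I(X;Y) − I(X;Y|Z) equals −1 bit even though all pairwise mutual informations vanish.

    ```python
    # Three-way interaction information of the XOR triple, computed from
    # joint and marginal entropies via the inclusion-exclusion formula.
    import itertools, math

    def entropy(pmf):
        return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

    def marginal(pmf, axes):
        out = {}
        for xyz, p in pmf.items():
            key = tuple(xyz[i] for i in axes)
            out[key] = out.get(key, 0.0) + p
        return out

    # Joint pmf of (X, Y, Z) with Z = X XOR Y and uniform X, Y.
    pmf = {(x, y, x ^ y): 0.25 for x, y in itertools.product((0, 1), repeat=2)}

    H = entropy
    hx, hy, hz = (H(marginal(pmf, (i,))) for i in range(3))
    hxy, hxz, hyz = (H(marginal(pmf, ax)) for ax in ((0, 1), (0, 2), (1, 2)))
    hxyz = H(pmf)

    # I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ)
    ii = hx + hy + hz - hxy - hxz - hyz + hxyz
    print(f"I(X;Y;Z) = {ii:.3f} bits")   # -1.0 for the XOR triple
    ```

    The negative value signals a purely triple-wise dependence, exactly the kind of structure that pairwise measures miss.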

  8. Higher-Order Statistical Correlations and Mutual Information Among Particles in a Quantum Well

    International Nuclear Information System (INIS)

    Yépez, V. S.; Sagar, R. P.; Laguna, H. G.

    2017-01-01

The influence of wave function symmetry on statistical correlation is studied for the case of three non-interacting spin-free quantum particles in a unidimensional box, in position and in momentum space. The higher-order statistical correlations occurring among the three particles in this quantum system are quantified via higher-order mutual information and compared to the correlation between pairs of variables in this model, and to the correlation in the two-particle system. The results for the higher-order mutual information show that there are states where the symmetric wave functions are more correlated than the antisymmetric ones with the same quantum numbers. This holds in position as well as in momentum space. This behavior is opposite to that observed for the correlation between pairs of variables in this model, and in the two-particle system, where the antisymmetric wave functions are in general more correlated. These results are also consistent with those observed in a system of three uncoupled oscillators. The use of higher-order mutual information as a correlation measure is monitored and examined by considering a superposition of states or systems with two Slater determinants. (author)

  9. Effect size, confidence intervals and statistical power in psychological research.

    Directory of Open Access Journals (Sweden)

    Téllez A.

    2015-07-01

Full Text Available Quantitative psychological research is focused on detecting the occurrence of certain population phenomena by analyzing data from a sample, and statistics is a particularly helpful mathematical tool that is used by researchers to evaluate hypotheses and make decisions to accept or reject such hypotheses. In this paper, the various statistical tools in psychological research are reviewed. The limitations of null hypothesis significance testing (NHST) and the advantages of using effect size and its respective confidence intervals are explained, as the latter two measurements can provide important information about the results of a study. These measurements also can facilitate data interpretation and easily detect trivial effects, enabling researchers to make decisions in a more clinically relevant fashion. Moreover, it is recommended to establish an appropriate sample size by calculating the optimum statistical power at the moment that the research is designed. Psychological journal editors are encouraged to follow APA recommendations strictly and ask authors of original research studies to report the effect size, its confidence intervals, statistical power and, when required, any measure of clinical significance. Additionally, we must account for the teaching of statistics at the graduate level. At that level, students do not receive sufficient information concerning the importance of using different types of effect sizes and their confidence intervals according to the different types of research designs; instead, most of the information is focused on the various tools of NHST.
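
    A compact sketch of the quantities the abstract recommends reporting: Cohen's d, an approximate confidence interval for it, and statistical power. These are standard large-sample normal approximations, not the article's own procedures, and the group statistics are invented.

    ```python
    # Effect size, CI and power for a two-sided two-sample comparison
    # with equal group sizes, using normal-approximation formulas.
    import math

    def cohen_d(m1, m2, sd1, sd2, n):
        sp = math.sqrt((sd1**2 + sd2**2) / 2)       # pooled SD, equal n
        return (m1 - m2) / sp

    def d_ci(d, n, z=1.96):
        se = math.sqrt(2 / n + d**2 / (4 * n))      # large-sample SE of d
        return d - z * se, d + z * se

    def power(d, n, z=1.96):
        # P(|test statistic| > z) when the true standardized diff is d.
        ncp = d * math.sqrt(n / 2)
        phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
        return (1 - phi(z - ncp)) + phi(-z - ncp)

    d = cohen_d(105.0, 100.0, 10.0, 10.0, n=64)     # invented group stats
    lo, hi = d_ci(d, 64)
    print(f"d={d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], power={power(d, 64):.2f}")
    ```

    Reporting the interval alongside d makes a "significant but trivial" effect easy to spot, which is exactly the article's point.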

  10. ERP investigation of attentional disengagement from suicide-relevant information in patients with major depressive disorder.

    Science.gov (United States)

    Baik, Seung Yeon; Jeong, Minkyung; Kim, Hyang Sook; Lee, Seung-Hwan

    2018-01-01

    Previous studies suggest the presence of attentional bias towards suicide-relevant information in suicidal individuals. However, the findings are limited by their reliance on behavioral measures. This study investigates the role of difficulty in disengaging attention from suicide-relevant stimuli using the P300 component of event-related potentials (ERPs). Forty-four adults with Major Depressive Disorder (MDD) were administered the spatial cueing task using suicide-relevant and negatively-valenced words as cue stimuli. Disengagement difficulty was measured using reaction time and P300 during invalid trials. P300 amplitudes at Pz were higher in suicide-relevant compared to negatively-valenced word condition on invalid trials for participants with low rates of suicidal behavior. However, no such difference was found among participants with high rates of suicidal behavior. P300 amplitudes for suicide-relevant word condition were negatively correlated with "lifetime suicide ideation and attempt" at Pz. No significant results were found for the reaction time data, indicating that the ERP may be more sensitive in capturing the attentional disengagement effect. The groups were divided according to Suicidal Behaviors Questionnaire-Revised (SBQ-R) total score. Neutral stimulus was not included as cue stimuli. Most participants were under medication during the experiment. Our results indicate that patients with MDD and low rates of suicidal behavior show difficulty in disengaging attention from suicide-relevant stimuli. We suggest that suicide-specific disengagement difficulties may be related to recentness of suicide attempt and that acquired capability for suicide may contribute to reduced disengagement difficulties. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Time-Dependent Statistical Analysis of Wide-Area Time-Synchronized Data

    Directory of Open Access Journals (Sweden)

    A. R. Messina

    2010-01-01

Full Text Available Characterization of spatial and temporal changes in the dynamic patterns of a nonstationary process is a problem of great theoretical and practical importance. On-line monitoring of large-scale power systems by means of time-synchronized Phasor Measurement Units (PMUs) provides the opportunity to analyze and characterize inter-system oscillations. Wide-area measurement sets, however, are often relatively large, and may contain phenomena with differing temporal scales. Extracting the relevant dynamics from these measurements is a difficult problem. As the number of observations of real events continues to increase, statistical techniques are needed to help identify relevant temporal dynamics from noise or random effects in measured data. In this paper, a statistically based, data-driven framework that integrates the use of wavelet-based EOF analysis and a sliding window-based method is proposed to identify and extract, in near-real-time, dynamically independent spatiotemporal patterns from time-synchronized data. The method deals with the information in space and time simultaneously, and allows direct tracking and characterization of the nonstationary time-frequency dynamics of oscillatory processes. The efficiency and accuracy of the developed procedures for extracting localized information of power system behavior from time-synchronized phasor measurements of a real event in Mexico is assessed.
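
    The EOF step of the framework described above is, at its core, a singular value decomposition of the space-time data matrix. A toy version with a synthetic oscillation (the four-"PMU" spatial pattern and noise level are invented) might look like:

    ```python
    # Toy EOF (empirical orthogonal function) decomposition via SVD:
    # one coherent oscillation shared by four measurement channels,
    # buried in noise, is recovered as the leading EOF.
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 500)                   # time samples
    pattern = np.array([1.0, -0.5, 0.8, -1.2])    # fixed spatial pattern
    X = np.outer(np.sin(2 * np.pi * 0.8 * t), pattern)  # space-time field
    X += 0.1 * rng.standard_normal(X.shape)       # measurement noise

    X -= X.mean(axis=0)                           # remove temporal mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    explained = s**2 / np.sum(s**2)
    print("variance captured by EOF 1:", round(explained[0], 3))
    # Vt[0] is the dominant spatial pattern; U[:, 0]*s[0] its time series.
    ```

    The paper's contribution is to run this kind of decomposition on wavelet coefficients inside a sliding window, so the patterns can drift as the nonstationary event evolves.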

  12. Central Statistical Office as a source of information that is relevant in determining the state of the public finances of the Republic of Poland. The financial management of the Central Statistical Office

    Directory of Open Access Journals (Sweden)

    Wojciech Bożek

    2016-09-01

Full Text Available In this paper the author discusses the meaning of public statistics in public finances, and the structure and financial activity of the Central Statistical Office (CSO) and other related units of the public sector. Besides, the author indicates examples of legal solutions in the Polish financial order which underline the importance and timeliness of the subject matter undertaken. The author also underlines the meaning of public statistics in the process of efficient public financial management and the transparent administration of public funds. The author concludes that the catalogue of tasks of the CSO, from the perspective of public finance, is extensive and dynamic.

  13. Analyzing Statistical Mediation with Multiple Informants: A New Approach with an Application in Clinical Psychology.

    Science.gov (United States)

    Papa, Lesther A; Litson, Kaylee; Lockhart, Ginger; Chassin, Laurie; Geiser, Christian

    2015-01-01

    Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. The new approach allows for a more comprehensive and effective use of MI data when testing mediation models.
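
    As a simple single-informant baseline for the latent-variable approach described above (not the authors' model), an indirect effect a·b can be estimated with two OLS regressions and a Sobel test; the data here are simulated so that M genuinely mediates X → Y, with variable names echoing the application (impulsivity, frustration tolerance, externalizing problems).

    ```python
    # Mediation sketch: path a (X -> M), paths b and c' (M, X -> Y),
    # indirect effect a*b, and a Sobel z statistic. Simulated data.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 454
    x = rng.standard_normal(n)                       # impulsivity
    m = 0.6 * x + rng.standard_normal(n)             # frustration tolerance
    y = 0.5 * m + 0.1 * x + rng.standard_normal(n)   # externalizing problems

    def ols(X, y):
        X = np.column_stack([np.ones(len(y)), X])    # add intercept
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        cov = np.linalg.inv(X.T @ X) * resid.var(ddof=X.shape[1])
        return beta, np.sqrt(np.diag(cov))

    b_a, se_a = ols(x, m)                            # path a
    b_b, se_b = ols(np.column_stack([m, x]), y)      # paths b and c'

    a, sa = b_a[1], se_a[1]
    b, sb = b_b[1], se_b[1]
    indirect = a * b
    z_sobel = indirect / np.sqrt(a**2 * sb**2 + b**2 * sa**2)
    print(f"indirect effect a*b = {indirect:.3f}, Sobel z = {z_sobel:.2f}")
    ```

    The latent-variable approach in the article replaces the single x, m, y columns with constructs measured by mother, father, and child reports, which is what lets it separate informant-specific variance from the mediated effect.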

  14. Differential impact of relevant and irrelevant dimension primes on rule-based and information-integration category learning.

    Science.gov (United States)

    Grimm, Lisa R; Maddox, W Todd

    2013-11-01

    Research has identified multiple category-learning systems with each being "tuned" for learning categories with different task demands and each governed by different neurobiological systems. Rule-based (RB) classification involves testing verbalizable rules for category membership while information-integration (II) classification requires the implicit learning of stimulus-response mappings. In the first study to directly test rule priming with RB and II category learning, we investigated the influence of the availability of information presented at the beginning of the task. Participants viewed lines that varied in length, orientation, and position on the screen, and were primed to focus on stimulus dimensions that were relevant or irrelevant to the correct classification rule. In Experiment 1, we used an RB category structure, and in Experiment 2, we used an II category structure. Accuracy and model-based analyses suggested that a focus on relevant dimensions improves RB task performance later in learning while a focus on an irrelevant dimension improves II task performance early in learning. © 2013.

  15. THE LEVEL OF KNOWLEDGE IN THE VALUE RELEVANCE LITERATURE

    Directory of Open Access Journals (Sweden)

    Mihaela Alina ROBU

    2014-12-01

    Full Text Available In recent decades, numerous studies have examined the relationship between stock prices or stock returns and financial information. These studies constitute the "value-relevance" literature. Knowledge of this area of interest, through the literature and its main ideas, yields scientific progress. The aim of the study is to provide a qualitative and a quantitative analysis of the level of knowledge in the value relevance literature, in an international context. To achieve this aim, 53 scientific articles published between 2001 and 2013 were selected from the two most-cited journals in the Accounting and Taxation category of the rankings compiled by Google Scholar. Qualitative analysis and quantitative analysis (multiple correspondence analysis as the statistical method) were used. The results reflect the importance of existing problems in the financial markets. The studies are focused on solving these problems, to support investors.

  16. Relevance and reliability of experimental data in human health risk assessment of pesticides.

    Science.gov (United States)

    Kaltenhäuser, Johanna; Kneuer, Carsten; Marx-Stoelting, Philip; Niemann, Lars; Schubert, Jens; Stein, Bernd; Solecki, Roland

    2017-08-01

    Evaluation of data relevance, reliability and contribution to uncertainty is crucial in regulatory health risk assessment if robust conclusions are to be drawn. Whether a specific study is used as key study, as additional information or not accepted depends in part on the criteria according to which its relevance and reliability are judged. In addition to GLP-compliant regulatory studies following OECD Test Guidelines, data from peer-reviewed scientific literature have to be evaluated in regulatory risk assessment of pesticide active substances. Publications should be taken into account if they are of acceptable relevance and reliability. Their contribution to the overall weight of evidence is influenced by factors including test organism, study design and statistical methods, as well as test item identification, documentation and reporting of results. Various reports make recommendations for improving the quality of risk assessments and different criteria catalogues have been published to support evaluation of data relevance and reliability. Their intention was to guide transparent decision making on the integration of the respective information into the regulatory process. This article describes an approach to assess the relevance and reliability of experimental data from guideline-compliant studies as well as from non-guideline studies published in the scientific literature in the specific context of uncertainty and risk assessment of pesticides. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Bohm's mysterious 'quantum force' and 'active information': alternative interpretation and statistical properties

    International Nuclear Information System (INIS)

    Lan, B.L.

    2001-01-01

    An alternative interpretation to Bohm's 'quantum force' and 'active information' is proposed. Numerical evidence is presented, which suggests that the time series of Bohm's 'quantum force' evaluated at the Bohmian position for non-stationary quantum states are typically non-Gaussian stable distributed with a flat power spectrum in classically chaotic Hamiltonian systems. An important implication of these statistical properties is briefly mentioned. (orig.)
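The statistical claim here, that the 'quantum force' series behaves like draws from a non-Gaussian stable law with a flat (white) power spectrum, can be illustrated with a toy comparison. The sketch below uses the Cauchy distribution (the simplest non-Gaussian alpha-stable law) as a stand-in; it does not reproduce the paper's Bohmian trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Gaussian draws versus Cauchy draws (Cauchy = alpha-stable with alpha = 1).
g = rng.standard_normal(n)
c = rng.standard_cauchy(n)

# Heavy tails: the Cauchy series exceeds +/- 5 far more often than the Gaussian.
tail_g = float(np.mean(np.abs(g) > 5))
tail_c = float(np.mean(np.abs(c) > 5))

# "Flat power spectrum": for an iid (white) series the periodogram shows no
# preferred frequency band; its values scatter around a constant level.
centered = c - np.median(c)
spec = np.abs(np.fft.rfft(centered)) ** 2
```

For a standard Cauchy variable, P(|X| > 5) is roughly 0.13, while for a standard normal it is below 1e-6; that contrast is the qualitative signature of non-Gaussian stable tails.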

  18. Geospatial Information Relevant to the Flood Protection Available on The Mainstream Web

    Directory of Open Access Journals (Sweden)

    Kliment Tomáš

    2014-03-01

    Full Text Available Flood protection is one of several disciplines in which geospatial data is a crucial component. Its management, processing, and sharing form the foundation for its efficient use; therefore, special attention is required in the development of effective, precise, standardized, and interoperable models for the discovery and publishing of data on the Web. This paper describes the design of a methodology to discover Open Geospatial Consortium (OGC) services on the Web and collect descriptive information, i.e., metadata, in a geocatalogue. A pilot implementation of the proposed methodology - a geocatalogue of geospatial information provided by OGC services discovered on Google (hereinafter “Geocatalogue”) - was used to search for available resources relevant to the area of flood protection. The result is an analysis of the availability of resources discovered through their metadata collected from the OGC services (WMS, WFS, etc.) and the resources they provide (WMS layers, WFS objects, etc.) within the domain of flood protection.
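A harvester along these lines ultimately reduces to fetching each discovered service's capabilities document and extracting layer metadata for the geocatalogue. A minimal sketch, using a hypothetical inline WMS GetCapabilities fragment instead of a live service:

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical WMS GetCapabilities fragment; a real harvester would
# fetch this over HTTP from each discovered service endpoint.
capabilities = """\
<WMS_Capabilities xmlns="http://www.opengis.net/wms">
  <Capability>
    <Layer>
      <Layer><Name>flood_zones</Name><Title>Flood hazard zones</Title></Layer>
      <Layer><Name>river_network</Name><Title>River network</Title></Layer>
    </Layer>
  </Capability>
</WMS_Capabilities>
"""

ns = {"wms": "http://www.opengis.net/wms"}
root = ET.fromstring(capabilities)

# Collect layer name -> title pairs, the kind of descriptive metadata a
# geocatalogue record would store for each discovered resource.
layers = {
    layer.findtext("wms:Name", namespaces=ns): layer.findtext("wms:Title", namespaces=ns)
    for layer in root.iterfind(".//wms:Layer/wms:Layer", ns)
}
```

A real implementation would also issue DescribeFeatureType/GetCapabilities requests for WFS endpoints and record service-level metadata (provider, abstract, keywords) alongside the layer list.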

  19. When Statistical Literacy Really Matters: Understanding Published Information about the HIV/AIDS Epidemic in South Africa

    Science.gov (United States)

    Hobden, Sally

    2014-01-01

    Information on the HIV/AIDS epidemic in Southern Africa is often interpreted through a veil of secrecy and shame and, I argue, with flawed understanding of basic statistics. This research determined the levels of statistical literacy evident in 316 future Mathematical Literacy teachers' explanations of the median in the context of HIV/AIDS…
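The statistical point at issue, why the median is often a safer summary than the mean for skewed data such as prevalence figures, can be shown with a toy example (the numbers below are invented for illustration):

```python
import statistics

# Hypothetical prevalence rates (percent) for five communities, plus one
# extreme value; the figures are illustrative only.
rates = [1.2, 1.5, 1.8, 2.0, 2.1]
rates_with_outlier = rates + [25.0]

mean_before = statistics.mean(rates)                  # 1.72
median_before = statistics.median(rates)              # 1.8
mean_after = statistics.mean(rates_with_outlier)      # 5.6: dragged up by one value
median_after = statistics.median(rates_with_outlier)  # 1.9: barely moves
```

One atypical observation triples the mean while the median shifts by 0.1, which is why reports of skewed epidemiological data typically quote the median.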

  20. A novel genome-information content-based statistic for genome-wide association analysis designed for next-generation sequencing data.

    Science.gov (United States)

    Luo, Li; Zhu, Yun; Xiong, Momiao

    2012-06-01

    The genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low-frequency, and rare variants. Current strategies for association studies are well developed for identifying associations of common variants with common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the variant-by-variant analysis paradigm for GWAS of common variants to the collective testing of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with disease. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole-genome low-coverage pilot data in the 1000 Genomes Project to calculate the type I error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T², the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ² test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type I error rates and higher power than the other six statistics in both simulated and empirical datasets.
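The collapsing idea that the group tests above build on can be sketched in a few lines: rare variants in a region are collapsed into a single carrier indicator per subject, and carrier frequencies are compared between cases and controls. The simulation below is purely illustrative (the per-site carrier probabilities and the case enrichment are assumptions), not the paper's proposed statistic.

```python
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(1)

# Toy genotype data: rows = subjects, columns = rare variant sites (True = carrier).
n_cases, n_controls, n_sites = 200, 200, 10
# Assumed enrichment of rare alleles in cases (illustrative simulation only).
cases = rng.random((n_cases, n_sites)) < 0.02
controls = rng.random((n_controls, n_sites)) < 0.005

# Collapsing step: a subject counts as a carrier if ANY rare allele is present.
case_carriers = int(np.any(cases, axis=1).sum())
control_carriers = int(np.any(controls, axis=1).sum())

# Compare carrier frequencies between groups with an exact test on the 2x2 table.
table = [[case_carriers, n_cases - case_carriers],
         [control_carriers, n_controls - control_carriers]]
odds_ratio, p_value = fisher_exact(table)
```

The CMC method additionally combines such collapsed indicators with uncollapsed common variants in a multivariate test; the genome-information content-based statistic proposed in the paper further weights variants by genomic location, which this sketch omits.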

  1. Information geometry and sufficient statistics

    Czech Academy of Sciences Publication Activity Database

    Ay, N.; Jost, J.; Le, Hong-Van; Schwachhöfer, L.

    2015-01-01

    Roč. 162, 1-2 (2015), s. 327-364 ISSN 0178-8051 Institutional support: RVO:67985840 Keywords : Fisher quadratic form * Amari-Chentsov tensor * sufficient statistic Subject RIV: BA - General Mathematics Impact factor: 2.204, year: 2015 http://link.springer.com/article/10.1007/s00440-014-0574-8

  2. On the Estimation and Use of Statistical Modelling in Information Retrieval

    DEFF Research Database (Denmark)

    Petersen, Casper

    Automatic text processing often relies on assumptions about the distribution of some property (such as term frequency) in the data being processed. In information retrieval (IR) such assumptions may be attributed to (i) the absence of principled approaches for determining the correct statistical...... that assumptions regarding the distribution of dataset properties can be replaced with an effective, efficient and principled method for determining the best-fitting distribution and that using this distribution can lead to improved retrieval performance....
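A principled method for determining the best-fitting distribution, in the spirit described above, can be sketched as maximum-likelihood fits of several candidate families compared by AIC. The candidate set and the synthetic heavy-tailed sample standing in for a dataset property are assumptions for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for a dataset property such as term frequencies: a heavy-tailed sample.
data = stats.lognorm.rvs(s=1.0, size=2000, random_state=rng)

# Candidate families; the best fit is chosen from the data, not assumed a priori.
candidates = {
    "lognorm": stats.lognorm,
    "expon": stats.expon,
    "gamma": stats.gamma,
}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)             # fix location at 0 for positive data
    loglik = np.sum(dist.logpdf(data, *params))
    aic[name] = 2 * len(params) - 2 * loglik    # Akaike information criterion

best = min(aic, key=aic.get)
```

With data actually drawn from a log-normal, the procedure recovers "lognorm" as the lowest-AIC family; on real term-frequency data the winner is an empirical question, which is the thesis's point.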

  3. Examples of the Application of Nonparametric Information Geometry to Statistical Physics

    Directory of Open Access Journals (Sweden)

    Giovanni Pistone

    2013-09-01

    Full Text Available We review a nonparametric version of Amari’s information geometry in which the set of positive probability densities on a given sample space is endowed with an atlas of charts to form a differentiable manifold modeled on Orlicz Banach spaces. This nonparametric setting is used to discuss the setting of typical problems in machine learning and statistical physics, such as black-box optimization, Kullback-Leibler divergence, Boltzmann-Gibbs entropy and the Boltzmann equation.
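Two of the quantities named above, the Kullback-Leibler divergence and the Boltzmann-Gibbs (Shannon) entropy, reduce to one-liners in the finite discrete case; the distributions below are arbitrary illustrative values:

```python
import numpy as np

# Two discrete distributions on the same support (illustrative values only).
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

kl = float(np.sum(p * np.log(p / q)))        # Kullback-Leibler divergence D(p||q)
entropy = float(-np.sum(p * np.log(p)))      # Boltzmann-Gibbs (Shannon) entropy of p

# D(p||q) >= 0, with equality iff p == q; note it is not symmetric in p and q.
kl_reverse = float(np.sum(q * np.log(q / p)))
```

The asymmetry D(p||q) ≠ D(q||p) is one reason information geometry treats the divergence as a directed quantity rather than a distance.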

  4. Decrease of Fisher information and the information geometry of evolution equations for quantum mechanical probability amplitudes.

    Science.gov (United States)

    Cafaro, Carlo; Alsing, Paul M

    2018-04-01

    The relevance of the concept of Fisher information is increasing in both statistical physics and quantum computing. From a statistical mechanical standpoint, the application of Fisher information in the kinetic theory of gases is characterized by its decrease along the solutions of the Boltzmann equation for Maxwellian molecules in the two-dimensional case. From a quantum mechanical standpoint, the output state in Grover's quantum search algorithm follows a geodesic path obtained from the Fubini-Study metric on the manifold of Hilbert-space rays. Additionally, Grover's algorithm is specified by constant Fisher information. In this paper, we present an information geometric characterization of the oscillatory or monotonic behavior of statistically parametrized squared probability amplitudes originating from special functional forms of the Fisher information function: constant, exponential decay, and power-law decay. Furthermore, for each case, we compute both the computational speed and the availability loss of the corresponding physical processes by exploiting a convenient Riemannian geometrization of useful thermodynamical concepts. Finally, we briefly comment on the possibility of using the proposed methods of information geometry to help identify a suitable trade-off between speed and thermodynamic efficiency in quantum search algorithms.
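For a concrete handle on Fisher information, the sketch below checks the textbook case of a Gaussian location parameter, where I(mu) = 1/sigma^2, by estimating the variance of the score function from Monte Carlo samples (the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fisher information of a Gaussian location parameter: I(mu) = 1 / sigma^2.
mu, sigma, n = 0.0, 2.0, 200_000
x = rng.normal(mu, sigma, size=n)

# Score of one observation: d/dmu log N(x | mu, sigma^2) = (x - mu) / sigma^2.
# The Fisher information is the variance of the score, estimated by Monte Carlo.
score = (x - mu) / sigma**2
fisher_mc = float(np.var(score))
fisher_analytic = 1.0 / sigma**2
```

With 200,000 samples the Monte Carlo estimate agrees with the analytic value 0.25 to about three decimal places, which is the kind of sanity check that makes the more abstract geometric statements above tangible.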

  6. Post-encoding control of working memory enhances processing of relevant information in rhesus monkeys (Macaca mulatta).

    Science.gov (United States)

    Brady, Ryan J; Hampton, Robert R

    2018-06-01

    Working memory is a system by which a limited amount of information can be kept available for processing after the cessation of sensory input. Because working memory resources are limited, it is adaptive to focus processing on the most relevant information. We used a retro-cue paradigm to determine the extent to which monkey working memory possesses control mechanisms that focus processing on the most relevant representations. Monkeys saw a sample array of images, and shortly after the array disappeared, they were visually cued to a location that had been occupied by one of the sample images. The cue indicated which image should be remembered for the upcoming recognition test. By determining whether the monkeys were more accurate and quicker to respond to cued images compared to un-cued images, we tested the hypothesis that monkey working memory focuses processing on relevant information. We found a memory benefit for the cued image in terms of accuracy and retrieval speed with a memory load of two images. With a memory load of three images, we found a benefit in retrieval speed but only after shortening the onset latency of the retro-cue. Our results demonstrate previously unknown flexibility in the cognitive control of memory in monkeys, suggesting that control mechanisms in working memory likely evolved in a common ancestor of humans and monkeys more than 32 million years ago. Future work should be aimed at understanding the interaction between memory load and the ability to control memory resources, and the role of working memory control in generating differences in cognitive capacity among primates. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Exploring the information and communication technology competence and confidence of nursing students and their perception of its relevance to clinical practice.

    Science.gov (United States)

    Levett-Jones, Tracy; Kenny, Raelene; Van der Riet, Pamela; Hazelton, Michael; Kable, Ashley; Bourgeois, Sharon; Luxford, Yoni

    2009-08-01

    This paper profiles a study that explored nursing students' information and communication technology competence and confidence. It presents selected findings that focus on students' attitudes towards information and communication technology as an educational methodology and their perceptions of its relevance to clinical practice. Information and communication technology is integral to contemporary nursing practice. Development of these skills is important to ensure that graduates are 'work ready' and adequately prepared to practice in increasingly technological healthcare environments. This was a mixed methods study. Students (n=971) from three Australian universities were surveyed using an instrument designed specifically for the study, and 24 students participated in focus groups. The focus group data revealed that a number of students were resistant to the use of information and communication technology as an educational methodology and lacked the requisite skills and confidence to engage successfully with this educational approach. Survey results indicated that 26 per cent of students were unsure about the relevance of information and communication technology to clinical practice and only 50 per cent felt 'very confident' using a computer. While the importance of information and communication technology to student's learning and to their preparedness for practice has been established, it is evident that students' motivation is influenced by their level of confidence and competence, and their understanding of the relevance of information and communication technology to their future careers.

  8. Aortic Aneurysm Statistics

    Science.gov (United States)


  9. Use of a New International Classification of Health Interventions for Capturing Information on Health Interventions Relevant to People with Disabilities.

    Science.gov (United States)

    Fortune, Nicola; Madden, Richard; Almborg, Ann-Helene

    2018-01-17

    Development of the World Health Organization's International Classification of Health Interventions (ICHI) is currently underway. Once finalised, ICHI will provide a standard basis for collecting, aggregating, analysing, and comparing data on health interventions across all sectors of the health system. In this paper, we introduce the classification, describing its underlying tri-axial structure, organisation and content. We then discuss the potential value of ICHI for capturing information on met and unmet need for health interventions relevant to people with a disability, with a particular focus on interventions to support functioning and health promotion interventions. Early experiences of use of the Swedish National Classification of Social Care Interventions and Activities, which is based closely on ICHI, illustrate the value of a standard classification to support practice and collect statistical data. Testing of the ICHI beta version in a wide range of countries and contexts is now needed so that improvements can be made before it is finalised. Input from those with an interest in the health of people with disabilities and health promotion more broadly is welcomed.

  10. Hedge Accounting in the Brazilian Stock Market: Effects on the Quality of Accounting Information, Disclosure, and Information Asymmetry

    Directory of Open Access Journals (Sweden)

    Silas Adolfo Potin

    2016-08-01

    Full Text Available This paper investigates, in the Brazilian stock market, the effect of hedge accounting on the quality of financial information, on the disclosure of derivative financial instruments, and on information asymmetry. To measure the quality of accounting information, value relevance metrics and book earnings informativeness were used. A general sample was obtained from Brazilian non-financial companies listed on the Brazilian Securities, Commodities, and Futures Exchange (BM&FBOVESPA), comprising the 150 companies with the highest market value on 01/01/2014. From the general sample, samples were compiled for applying the econometric models of value relevance, informativeness, disclosure, and information asymmetry. The sample for relevance had 758 company-year observations for the period from 2008 to 2013; the sample for informativeness had 701 company-year observations for the period from 2008 to 2013; the sample for disclosure had 100 company-year observations for the period from 2011 to 2012; the sample for information asymmetry had 100 company-year observations, also for the period from 2011 to 2012. In addition to the econometric models, the propensity score matching method was applied to the analyses of the effect of hedge accounting on disclosure and information asymmetry. The evidence found for the influence of hedge accounting indicates a relation that is: (i) positive and significant for the relevance of accounting information and the disclosure of derivatives; (ii) negative and significant for book earnings informativeness. Regarding information asymmetry, although the coefficients had the expected signs, they were not statistically significant.

  11. The foundation of the concept of relevance

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2010-01-01

    that what was regarded as the most fundamental view by Saracevic in 1975 has not since been considered (with very few exceptions). Other views, which are based on less fruitful assumptions, have dominated the discourse on relevance in information retrieval and information science. Many authors have...... reexamined the concept of relevance in information science, but have neglected the subject knowledge view, hence basic theoretical assumptions seem not to have been properly addressed. It is as urgent now as it was in 1975 seriously to consider “the subject knowledge view” of relevance (which may also...... be termed “the epistemological view”). The concept of relevance, like other basic concepts, is influenced by overall approaches to information science, such as the cognitive view and the domain-analytic view. There is today a trend toward a social paradigm for information science. This paper offers...

  12. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
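As a much-reduced illustration of feature-based analysis (not the merge-tree machinery of the paper), the sketch below treats connected components of a thresholded 2D scalar field as "features" and computes per-feature statistics; the field itself is synthetic smoothed noise standing in for, say, a temperature slice:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

# Synthetic stand-in 2D scalar field (e.g. a temperature slice from a simulation).
field = ndimage.gaussian_filter(rng.standard_normal((128, 128)), sigma=4)

# "Features" for a given threshold: connected components of the super-level set.
threshold = 0.05
labels, n_features = ndimage.label(field > threshold)

# Per-feature statistics, analogous to the attributes stored with each feature.
index = range(1, n_features + 1)
means = ndimage.mean(field, labels=labels, index=index)
sizes = ndimage.sum_labels(np.ones_like(field), labels=labels, index=index)
```

Sweeping the threshold and recording how components appear and merge is exactly the information a merge tree encodes once, so that per-feature statistics for any threshold can be answered from the precomputed meta-data.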

  13. Evaluation of undergraduate nursing students' attitudes towards statistics courses, before and after a course in applied statistics.

    Science.gov (United States)

    Hagen, Brad; Awosoga, Olu; Kellett, Peter; Dei, Samuel Ofori

    2013-09-01

    Undergraduate nursing students must often take a course in statistics, yet there is scant research to inform teaching pedagogy. The objectives of this study were to assess nursing students' overall attitudes towards statistics courses - including (among other things) overall fear and anxiety, preferred learning and teaching styles, and the perceived utility and benefit of taking a statistics course - before and after taking a mandatory course in applied statistics. The authors used a pre-experimental research design (a one-group pre-test/post-test research design), by administering a survey to nursing students at the beginning and end of the course. The study was conducted at a University in Western Canada that offers an undergraduate Bachelor of Nursing degree. Participants included 104 nursing students, in the third year of a four-year nursing program, taking a course in statistics. Although students only reported moderate anxiety towards statistics, student anxiety about statistics had dropped by approximately 40% by the end of the course. Students also reported a considerable and positive change in their attitudes towards learning in groups by the end of the course, a potential reflection of the team-based learning that was used. Students identified preferred learning and teaching approaches, including the use of real-life examples, visual teaching aids, clear explanations, timely feedback, and a well-paced course. Students also identified preferred instructor characteristics, such as patience, approachability, in-depth knowledge of statistics, and a sense of humor. Unfortunately, students only indicated moderate agreement with the idea that statistics would be useful and relevant to their careers, even by the end of the course. Our findings validate anecdotal reports on statistics teaching pedagogy, although more research is clearly needed, particularly on how to increase students' perceptions of the benefit and utility of statistics courses for their nursing

  14. Relevant Factors in The Post-Merger Systems Integration and Information Technology in Brazilian Banks

    Directory of Open Access Journals (Sweden)

    Marcel Ginotti Pires

    2017-01-01

    Full Text Available This article discusses the factors present in the post-merger integration of Systems and Information Technology (SIT) that lead to positive and negative results in mergers and acquisitions (M&A). The research comprised three of the largest acquiring banks in Brazil. We adopted two methods of research: qualitative, to operationalize the theoretical concepts, and quantitative, to test the hypotheses. We interviewed six bank executives who held relevant experience in M&A processes. Subsequently, we applied questionnaires to IT professionals who were involved in the SIT integration processes. The results showed that the quality and expertise of the integration teams and the management of the integration were the most relevant factors in the processes, with positive results for increased efficiency and increased SIT capacity. Negative results were due to failures to exploit learning opportunities, the loss of employees, and the negligible documentation of integration procedures.

  15. Elaboration of a guide including relevant project and logistic information: a case study

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Tchaikowisky M. [Faculdade de Tecnologia e Ciencias (FTC), Itabuna, BA (Brazil); Bresci, Claudio T.; Franca, Carlos M.M. [PETROBRAS, Rio de Janeiro, RJ (Brazil)

    2009-07-01

    For the mobilization of every new enterprise it is necessary to quickly obtain as much information as possible regarding the location and availability of infrastructure, logistics, and work-site amenities. This information includes reports prepared for the management of the enterprise (organizational chart, work schedule, objectives, contacts, etc.), as well as geographic features and the socio-economic and cultural characteristics of the area to be developed, such as territorial extension, land aspects, local population, roads and amenities (fuel stations, restaurants and hotels), the infrastructure of the cities (health, education, entertainment, housing, transport, etc.), and, from a logistics standpoint, the distances between cities, estimated travel times, ROW access maps and notable points, among other relevant information. To make this information available to everyone involved in the enterprise, a quick-reference guide containing all the information mentioned above was prepared for GASCAC Spread 2A and made available in all the vehicles used to transport employees and visitors to the spread. With this, everyone quickly received most of the necessary information in one place, in a practical, quick, and precise manner, since the information is always used and controlled by the same person. This study presents the model used in the GASCAC Spread 2A gas pipeline project and the methodology used to draft and update the information. In addition, a file in GIS format was prepared containing all the planning, execution and tracking information necessary for enterprise activities, from social communication to the execution of the works previously mentioned. Part of the GIS file information was uploaded to Google Earth so as to disclose the information to a greater group of people, bearing in mind that this program is free of charge and easy to use. (author)

  16. Baseline Statistics of Linked Statistical Data

    NARCIS (Netherlands)

    Scharnhorst, Andrea; Meroño-Peñuela, Albert; Guéret, Christophe

    2014-01-01

    We are surrounded by an ever-increasing ocean of information; everybody will agree to that. We build sophisticated strategies to govern this information: designing data models, developing infrastructures for data sharing, and building tools for data analysis. Statistical datasets curated by National

  17. Analyzing Statistical Mediation with Multiple Informants: A New Approach with an Application in Clinical Psychology

    Directory of Open Access Journals (Sweden)

    Lesther Papa

    2015-11-01

    Full Text Available Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. Advantages and limitations of the new approach are discussed. The new approach can help clinical researchers overcome limitations of prior techniques. It allows for a more comprehensive and effective use of MI data when testing mediation models.
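For readers unfamiliar with the mechanics, a single-informant mediation test (the building block that the latent-variable MI approach generalizes) can be sketched with ordinary least squares and a percentile bootstrap for the indirect effect. The data below are simulated; the path coefficients are assumptions for illustration, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 454  # same sample size as the study; the data themselves are simulated

# Simulated single-informant stand-ins: X = impulsivity, M = frustration
# tolerance (mediator), Y = externalizing problems.
x = rng.standard_normal(n)
m = 0.5 * x + rng.standard_normal(n)             # assumed a-path
y = 0.4 * m + 0.1 * x + rng.standard_normal(n)   # assumed b-path plus direct effect

def ols_slope(pred, resp):
    """Slope of resp regressed on pred (with intercept), via least squares."""
    A = np.column_stack([np.ones_like(pred), pred])
    return np.linalg.lstsq(A, resp, rcond=None)[0][1]

def indirect_effect(x, m, y):
    a = ols_slope(x, m)                          # a-path: X -> M
    A = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(A, y, rcond=None)[0][2]  # b-path: M -> Y controlling for X
    return a * b

# Percentile bootstrap confidence interval for the indirect effect a*b.
boots = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
mediated = not (lo <= 0 <= hi)                   # CI excluding 0 suggests mediation
```

The MI latent-variable approach replaces each observed x, m, y with a factor estimated from mother, father, and child reports, but the indirect-effect logic tested here is the same.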

  18. Self-referential and anxiety-relevant information processing in subclinical social anxiety: an fMRI study.

    Science.gov (United States)

    Abraham, Anna; Kaufmann, Carolin; Redlich, Ronny; Hermann, Andrea; Stark, Rudolf; Stevens, Stephan; Hermann, Christiane

    2013-03-01

    The fear of negative evaluation is one of the hallmark features of social anxiety. Behavioral evidence thus far largely supports cognitive models which postulate that information processing biases in the face of socially relevant information are a key factor underlying this widespread phobia. So far only one neuroimaging study has explicitly focused on the fear of negative evaluation in social anxiety where the brain responses of social phobics were compared to healthy participants during the processing of self-referential relative to other-referential criticism, praise or neutral information. Only self-referential criticism led to stronger activations in emotion-relevant regions of the brain, such as the amygdala and medial prefrontal cortices (mPFC), in the social phobics. The objective of the current study was to determine whether these findings could be extended to subclinical social anxiety. In doing so, the specificity of this self-referential bias was also examined by including both social and non-social (physical illness-related) threat information as well as a highly health anxious control group in the experimental paradigm. The fMRI findings indicated that the processing of emotional stimuli was accompanied by activations in the amygdala and the ventral mPFC, while self-referential processing was associated with activity in regions such as the mPFC, posterior cingulate and temporal poles. Despite the validation of the paradigm, the results revealed that the previously reported behavioral and brain biases associated with social phobia could not be unequivocally extended to subclinical social anxiety. The divergence between the findings is explored in detail with reference to paradigm differences and conceptual issues.

  19. Self-relevant beauty evaluation: Evidence from an event-related potentials study.

    Science.gov (United States)

    Kong, Fanchang; Zhang, Yan; Tian, Yuan; Fan, Cuiying; Zhou, Zongkui

    2015-03-01

    This study examines the electrophysiological correlates of beauty evaluation when participants performed a self-reference task. Thirteen undergraduates (7 men, 6 women) participated in the event-related potential experiment. Results showed that responses to self-relevant information were faster than to other-relevant information, while no significant differences were observed between self-relevant and mother-relevant information. Both physical and interior beauty words for self-relevant information showed an enhanced late positive component compared with other-relevant information. Physical beauty words for self-relevant information yielded a larger late positive component than for mother-relevant information, but interior beauty words did not. This study indicates that beauty is specific to the person who judges it, although an individual and his or her mother may hold similar views of interior beauty.

  20. An analysis of contingent factors for the detection of strategic relevance in business information technologies

    Directory of Open Access Journals (Sweden)

    Antonio Paños Álvarez

    2005-01-01

    Full Text Available Information Technologies are resources capable of creating competitive advantages for companies. In this analysis, the resource-based perspective has taken on special relevance, since it argues that these advantages should be identified, attained and sustained. This work analyses several contingent factors in the process of assessing these potential advantages. It proposes a portfolio to help select which Information Technologies are valuable for which companies and in which activity areas, and studies how the sector, the technological innovation profile, the size and the financial capacity of a company affect this process

  1. On the statistical analysis of vegetation change: a wetland affected by water extraction and soil acidification

    NARCIS (Netherlands)

    Braak, ter C.J.F.; Wiertz, J.

    1994-01-01

    A case study is presented on the statistical analysis and interpretation of vegetation change without precise information on environmental change. The changes in a vegetation of a Junco-Molinion grassland are evaluated on the basis of relevés of 1977 and 1988 (20 plots) from a small nature reserve

  2. Statistical results 1991-1993 of the Official Personal Dosimetry Service

    International Nuclear Information System (INIS)

    Boerner, E.; Drexler, G.; Wittmann, A.

    1995-01-01

    The report consists of a summary of relevant statistical data from the official personal dosimetry service in 1988-1990 for the Federal States of Bavaria, Hesse, Schleswig-Holstein, and Baden-Wuerttemberg. The data are based on the survey of more than 8000 institutions with over 140,000 occupationally exposed persons and are derived from more than one million single measurements. The report covers information on the institutions and the persons as well as dosimetric values. The measuring method is described briefly with respect to the dosimeters used, their range and the interpretation of values. Information on notional doses and the interpolation of values near the detection limits is given. (HP) [de

  3. The nature of statistics

    CERN Document Server

    Wallis, W Allen

    2014-01-01

    Focusing on everyday applications as well as those of scientific research, this classic of modern statistical methods requires little to no mathematical background. Readers develop basic skills for evaluating and using statistical data. Lively, relevant examples include applications to business, government, social and physical sciences, genetics, medicine, and public health. "W. Allen Wallis and Harry V. Roberts have made statistics fascinating." - The New York Times "The authors have set out with considerable success, to write a text which would be of interest and value to the student who,

  4. User perspectives on relevance criteria

    DEFF Research Database (Denmark)

    Maglaughlin, Kelly L.; Sonnenwald, Diane H.

    2002-01-01

    This study investigates the use of criteria to assess relevant, partially relevant, and not-relevant documents. Study participants identified passages within 20 document representations that they used to make relevance judgments; judged each document representation as a whole to be relevant, partially relevant, or not relevant to their information need; and explained their decisions in an interview. Analysis revealed 29 criteria, discussed positively and negatively, that were used by the participants when selecting passages that contributed or detracted from a document's relevance... matter, thought catalyst), full text (e.g., audience, novelty, type, possible content, utility), journal/publisher (e.g., novelty, main focus, perceived quality), and personal (e.g., competition, time requirements). Results further indicate that multiple criteria are used when making relevant, partially...

  5. Southeast Atlantic Cloud Properties in a Multivariate Statistical Model - How Relevant is Air Mass History for Local Cloud Properties?

    Science.gov (United States)

    Fuchs, Julia; Cermak, Jan; Andersen, Hendrik

    2017-04-01

    This study aims at untangling the impacts of external dynamics and local conditions on cloud properties in the Southeast Atlantic (SEA) by combining satellite and reanalysis data using multivariate statistics. The understanding of clouds and their determinants at different scales is important for constraining the Earth's radiative budget, and thus prominent in climate-system research. In this study, SEA stratocumulus cloud properties are observed not only as the result of local environmental conditions but also as affected by external dynamics and spatial origins of air masses entering the study area. In order to assess to what extent cloud properties are impacted by aerosol concentration, air mass history, and meteorology, a multivariate approach is conducted using satellite observations of aerosol and cloud properties (MODIS, SEVIRI), information on aerosol species composition (MACC) and meteorological context (ERA-Interim reanalysis). To account for the often-neglected but important role of air mass origin, information on air mass history based on HYSPLIT modeling is included in the statistical model. This multivariate approach is intended to lead to a better understanding of the physical processes behind observed stratocumulus cloud properties in the SEA.

  6. A statistic to estimate the variance of the histogram-based mutual information estimator based on dependent pairs of observations

    NARCIS (Netherlands)

    Moddemeijer, R

    In the case of two signals with independent pairs of observations (x(n), y(n)), a statistic to estimate the variance of the histogram-based mutual information estimator was derived earlier. We present such a statistic for dependent pairs. To derive this statistic it is necessary to avail of a
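    The histogram-based (plug-in) mutual information estimator that this variance statistic concerns can be sketched as follows; the sample size and bin count below are illustrative choices, not taken from the paper:

    ```python
    import numpy as np

    def histogram_mi(x, y, bins=16):
        """Estimate mutual information I(X;Y) in nats from paired samples
        using a 2-D histogram (plug-in estimator)."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()                # joint probabilities
        px = pxy.sum(axis=1, keepdims=True)  # marginal of X
        py = pxy.sum(axis=0, keepdims=True)  # marginal of Y
        nz = pxy > 0                         # avoid log(0)
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    x = rng.normal(size=5000)
    y = x + rng.normal(size=5000)            # dependent pair
    z = rng.normal(size=5000)                # independent of x
    assert histogram_mi(x, y) > histogram_mi(x, z)
    ```

    Note that even for independent signals the plug-in estimate is biased upward by roughly (bins - 1)² / (2N) nats, which is one reason a variance statistic for the estimator matters.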

  7. Back to basics: an introduction to statistics.

    Science.gov (United States)

    Halfens, R J G; Meijers, J M M

    2013-05-01

    In the second in the series, Professor Ruud Halfens and Dr Judith Meijers give an overview of statistics, both descriptive and inferential. They describe the first principles of statistics, including some relevant inferential tests.

  8. College Students’ Information Needs and Information Seeking Behaviors regarding Personal Information

    Directory of Open Access Journals (Sweden)

    Yu-Wen Liu

    2017-12-01

    Full Text Available This study analyzed college students' reactions to issues surrounding personal information. Students' needs and seeking behaviors for personal information were assessed. Relevant literature was reviewed to frame the research questions and design the questionnaire items for the survey. Survey subjects were students from a university in northern Taiwan. Statistical analysis of 252 valid responses revealed that some items were rated highly: students reported a strong need for knowledge in the face of security threats to personal information (M = 4.29); they relied heavily on the Internet to acquire knowledge and resources (M = 4.24); and they preferred resources that are clear and easy to understand (M = 4.04). However, most students had little faith in either government or non-governmental organizations to secure their personal information (M < 3.0 for most items). Greater effort by educators and government is needed in the future to improve personal information use and reduce uncertainty in its use.

  9. Does Guiding Toward Task-Relevant Information Help Improve Graph Processing and Graph Comprehension of Individuals with Low or High Numeracy? An Eye-Tracker Experiment.

    Science.gov (United States)

    Keller, Carmen; Junghans, Alex

    2017-11-01

    Individuals with low numeracy have difficulties with understanding complex graphs. Combining the information-processing approach to numeracy with graph comprehension and information-reduction theories, we examined whether high numerates' better comprehension might be explained by their closer attention to task-relevant graphical elements, from which they would expect numerical information to understand the graph. Furthermore, we investigated whether participants could be trained in improving their attention to task-relevant information and graph comprehension. In an eye-tracker experiment ( N = 110) involving a sample from the general population, we presented participants with 2 hypothetical scenarios (stomach cancer, leukemia) showing survival curves for 2 treatments. In the training condition, participants received written instructions on how to read the graph. In the control condition, participants received another text. We tracked participants' eye movements while they answered 9 knowledge questions. The sum constituted graph comprehension. We analyzed visual attention to task-relevant graphical elements by using relative fixation durations and relative fixation counts. The mediation analysis revealed a significant ( P attention to task-relevant information, which did not differ between the 2 conditions. Training had a significant main effect on visual attention ( P attention to task-relevant graphical elements than individuals with low numeracy. With appropriate instructions, both groups can be trained to improve their graph-processing efficiency. Future research should examine (e.g., motivational) mediators between visual attention and graph comprehension to develop appropriate instructions that also result in higher graph comprehension.

  10. Statistics in Schools

    Science.gov (United States)

    Educate your students about the value and everyday use of statistics. The Statistics in Schools program provides resources for teaching and learning with real life data. Explore the site for standards-aligned, classroom-ready activities.

  11. Automated selection of relevant information for notification of incident cancer cases within a multisource cancer registry.

    Science.gov (United States)

    Jouhet, V; Defossez, G; Ingrand, P

    2013-01-01

    The aim of this study was to develop and evaluate a selection algorithm of relevant records for the notification of incident cases of cancer on the basis of the individual data available in a multi-source information system. This work was conducted on data for the year 2008 in the general cancer registry of the Poitou-Charentes region (France). The selection algorithm hierarchizes information according to its level of relevance for tumoral topography and tumoral morphology independently. The selected data are combined to form composite records. These records are then grouped in accordance with the notification rules of the International Agency for Research on Cancer for multiple primary cancers. The evaluation, based on recall, precision and F-measure, compared cases validated manually by the registry's physicians with tumours notified with and without record selection. The analysis involved 12,346 tumours validated among 11,971 individuals. The data used were hospital discharge data (104,474 records), pathology data (21,851 records), healthcare insurance data (7508 records) and cancer care centre data (686 records). The selection algorithm improved performance for notification of tumour topography (F-measure 0.926 with vs. 0.857 without selection) and tumour morphology (F-measure 0.805 with vs. 0.750 without selection). These results show that selecting information according to its origin is effective in reducing the noise generated by imprecise coding. Further research is needed to solve the semantic problems relating to the integration of heterogeneous data and the use of non-structured information.
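    The evaluation metrics named in the abstract combine as F = 2PR / (P + R). A minimal sketch with hypothetical counts (the registry's true counts are not given here):

    ```python
    def f_measure(true_positives, false_positives, false_negatives):
        """Precision, recall and F1 as used to evaluate record selection."""
        precision = true_positives / (true_positives + false_positives)
        recall = true_positives / (true_positives + false_negatives)
        f1 = 2 * precision * recall / (precision + recall)
        return precision, recall, f1

    # Hypothetical counts for notified tumour topographies:
    p, r, f = f_measure(true_positives=11200, false_positives=700, false_negatives=800)
    print(round(f, 3))  # → 0.937
    ```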

  12. Learning Predictive Statistics: Strategies and Brain Mechanisms.

    Science.gov (United States)

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-08-30

    When immersed in a new environment, we are challenged to decipher initially incomprehensible streams of sensory information. However, quite rapidly, the brain finds structure and meaning in these incoming signals, helping us to predict and prepare ourselves for future actions. This skill relies on extracting the statistics of event streams in the environment that contain regularities of variable complexity from simple repetitive patterns to complex probabilistic combinations. Here, we test the brain mechanisms that mediate our ability to adapt to the environment's statistics and predict upcoming events. By combining behavioral training and multisession fMRI in human participants (male and female), we track the corticostriatal mechanisms that mediate learning of temporal sequences as they change in structure complexity. We show that learning of predictive structures relates to individual decision strategy; that is, selecting the most probable outcome in a given context (maximizing) versus matching the exact sequence statistics. These strategies engage distinct human brain regions: maximizing engages dorsolateral prefrontal, cingulate, sensory-motor regions, and basal ganglia (dorsal caudate, putamen), whereas matching engages occipitotemporal regions (including the hippocampus) and basal ganglia (ventral caudate). Our findings provide evidence for distinct corticostriatal mechanisms that facilitate our ability to extract behaviorally relevant statistics to make predictions. SIGNIFICANCE STATEMENT Making predictions about future events relies on interpreting streams of information that may initially appear incomprehensible. Past work has studied how humans identify repetitive patterns and associative pairings. However, the natural environment contains regularities that vary in complexity from simple repetition to complex probabilistic combinations. Here, we combine behavior and multisession fMRI to track the brain mechanisms that mediate our ability to adapt to
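    The two decision strategies contrasted in the abstract, maximizing versus matching, can be illustrated with a toy simulation; the outcome distribution below is an assumption for illustration, not the study's actual sequence statistics:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    p = np.array([0.7, 0.2, 0.1])            # outcome probabilities in one context
    stream = rng.choice(3, size=20000, p=p)  # observed event stream

    # Maximizing: always predict the most probable outcome.
    max_correct = np.mean(stream == np.argmax(p))

    # Matching: predict by sampling from the outcome distribution itself.
    matched = rng.choice(3, size=stream.size, p=p)
    match_correct = np.mean(stream == matched)

    # Expected accuracies: max(p) = 0.7 for maximizing vs sum(p**2) = 0.54 for matching.
    assert max_correct > match_correct
    ```

    This makes concrete why the strategies are behaviorally distinguishable: maximizing is more accurate, while matching reproduces the exact sequence statistics.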

  13. 100 statistical tests

    CERN Document Server

    Kanji, Gopal K

    2006-01-01

    This expanded and updated Third Edition of Gopal K. Kanji's best-selling resource on statistical tests covers all the most commonly used tests with information on how to calculate and interpret results with simple datasets. Each entry begins with a short summary statement about the test's purpose, and contains details of the test objective, the limitations (or assumptions) involved, a brief outline of the method, a worked example, and the numerical calculation. 100 Statistical Tests, Third Edition is the one indispensable guide for users of statistical materials and consumers of statistical information at all levels and across all disciplines.

  14. Business statistics for dummies

    CERN Document Server

    Anderson, Alan

    2013-01-01

    Score higher in your business statistics course? Easy. Business statistics is a common course for business majors and MBA candidates. It examines common data sets and the proper way to use such information when conducting research and producing informational reports such as profit and loss statements, customer satisfaction surveys, and peer comparisons. Business Statistics For Dummies tracks to a typical business statistics course offered at the undergraduate and graduate levels and provides clear, practical explanations of business statistical ideas, techniques, formulas, and calculations, w

  15. Integration of Transport-relevant Data within Image Record of the Surveillance System

    Directory of Open Access Journals (Sweden)

    Adam Stančić

    2016-10-01

    Full Text Available Integration of information collected on the road within the image recorded by the surveillance system forms a unified source of transport-relevant data about the supervised situation. The basic assumption is that the integration procedure changes the image to an extent that is invisible to the human eye, while the integrated data keep identical content. This assumption has been verified by studying the statistical properties of the image and the integrated data using a mathematical model implemented in Python with functions from additional libraries (OpenCV, NumPy, SciPy and Matplotlib). The model has been used to compare metadata input methods with methods of steganographic integration that correct the coefficients of the Discrete Cosine Transform of a JPEG-compressed image. For the steganographic data processing, the steganographic algorithm F5 was used. This review paper analyses the advantages and drawbacks of the integration methods and presents examples of traffic situations in which the resulting unified sources of transport-relevant information could be used.
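    As a rough illustration of parity-based coefficient embedding in the spirit of F5 (this is a deliberately simplified stand-in, not the actual F5 algorithm, and the coefficient values are invented):

    ```python
    import numpy as np

    def embed_bits(coeffs, bits):
        """Embed a bit string into the parities of nonzero quantized DCT
        coefficients, decrementing magnitudes toward zero when the parity
        must change. Real F5 additionally handles 'shrinkage' (a coefficient
        hitting zero) by re-embedding the bit, which is omitted here."""
        out = coeffs.copy()
        idx = np.flatnonzero(out)            # skip zero coefficients, as F5 does
        for pos, bit in zip(idx, bits):
            c = out[pos]
            if (abs(c) & 1) != bit:          # parity mismatch: shrink magnitude by 1
                out[pos] = c - 1 if c > 0 else c + 1
        return out

    coeffs = np.array([12, -7, 0, 3, 0, -2, 5, 1])   # invented quantized coefficients
    stego = embed_bits(coeffs, [1, 0, 1, 1, 0, 0])
    ```

    Shrinking magnitudes rather than flipping a low bit is what keeps the statistical footprint of this family of methods small, which is exactly the property the paper's model is built to measure.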

  16. The Criteria People Use in Relevance Decisions on Health Information: An Analysis of User Eye Movements When Browsing a Health Discussion Forum.

    Science.gov (United States)

    Pian, Wenjing; Khoo, Christopher Sg; Chang, Yun-Ke

    2016-06-20

    People are increasingly accessing health-related social media sites, such as health discussion forums, to post and read user-generated health information. It is important to know what criteria people use when deciding the relevance of information found on health social media websites, in different situations. The study attempted to identify the relevance criteria that people use when browsing a health discussion forum, in 3 types of use contexts: when seeking information for their own health issue, when seeking for other people's health issue, and when browsing without a particular health issue in mind. A total of 58 study participants were self-assigned to 1 of the 3 use contexts or information needs and were asked to browse a health discussion forum, HealthBoards.com. In the analysis, browsing a discussion forum was divided into 2 stages: scanning a set of post surrogates (mainly post titles) in the summary result screen and reading a detailed post content (including comments by other users). An eye tracker system was used to capture participants' eye movement behavior and the text they skim over and focus (ie, fixate) on during browsing. By analyzing the text that people's eyes fixated on, the types of health information used in the relevance judgment were determined. Post-experiment interviews elicited participants' comments on the relevance of the information and criteria used. It was found that participants seeking health information for their own health issue focused significantly more on the poster's symptoms, personal history of the disease, and description of the disease (P=.01, .001, and .02). Participants seeking for other people's health issue focused significantly more on cause of disease, disease terminology, and description of treatments and procedures (P=.01, .01, and .02). In contrast, participants browsing with no particular issue in mind focused significantly more on general health topics, hot topics, and rare health issues (P=.01, .01, and .01

  17. The Criteria People Use in Relevance Decisions on Health Information: An Analysis of User Eye Movements When Browsing a Health Discussion Forum

    Science.gov (United States)

    Khoo, Christopher SG; Chang, Yun-Ke

    2016-01-01

    Background People are increasingly accessing health-related social media sites, such as health discussion forums, to post and read user-generated health information. It is important to know what criteria people use when deciding the relevance of information found on health social media websites, in different situations. Objective The study attempted to identify the relevance criteria that people use when browsing a health discussion forum, in 3 types of use contexts: when seeking information for their own health issue, when seeking for other people’s health issue, and when browsing without a particular health issue in mind. Methods A total of 58 study participants were self-assigned to 1 of the 3 use contexts or information needs and were asked to browse a health discussion forum, HealthBoards.com. In the analysis, browsing a discussion forum was divided into 2 stages: scanning a set of post surrogates (mainly post titles) in the summary result screen and reading a detailed post content (including comments by other users). An eye tracker system was used to capture participants’ eye movement behavior and the text they skim over and focus (ie, fixate) on during browsing. By analyzing the text that people’s eyes fixated on, the types of health information used in the relevance judgment were determined. Post-experiment interviews elicited participants’ comments on the relevance of the information and criteria used. Results It was found that participants seeking health information for their own health issue focused significantly more on the poster’s symptoms, personal history of the disease, and description of the disease (P=.01, .001, and .02). Participants seeking for other people’s health issue focused significantly more on cause of disease, disease terminology, and description of treatments and procedures (P=.01, .01, and .02). In contrast, participants browsing with no particular issue in mind focused significantly more on general health topics, hot

  18. A Formal Approach for RT-DVS Algorithms Evaluation Based on Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Shengxin Dai

    2015-01-01

    Full Text Available Energy saving is a crucial concern in embedded real-time systems. Many RT-DVS algorithms have been proposed to save energy while preserving deadline guarantees. This paper presents a novel approach to evaluating RT-DVS algorithms using statistical model checking. A scalable framework is proposed for RT-DVS algorithm evaluation, in which the relevant components are modeled as stochastic timed automata, and the evaluation metrics, including utilization bound, energy efficiency, battery awareness, and temperature awareness, are expressed as statistical queries. These metrics are evaluated by verifying the corresponding queries with UPPAAL-SMC and analyzing the statistical information provided by the tool. We demonstrate the applicability of our framework via a case study of five classical RT-DVS algorithms.
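    The core idea of statistical model checking, estimating the probability of a property from repeated stochastic runs with a confidence bound, can be sketched independently of UPPAAL-SMC; the task parameters below are hypothetical:

    ```python
    import math
    import random

    def simulate_job(deadline=10.0):
        """One stochastic run: execution time scales inversely with a
        randomly chosen DVS frequency level (hypothetical parameters)."""
        wcet = 6.0                            # worst-case execution time at full speed
        freq = random.uniform(0.5, 1.0)       # DVS frequency level for this run
        return (wcet / freq) <= deadline      # deadline met?

    def estimate_probability(runs=10000):
        """Monte Carlo estimate of P(deadline met) with a 95% confidence half-width."""
        random.seed(42)
        hits = sum(simulate_job() for _ in range(runs))
        p_hat = hits / runs
        half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / runs)
        return p_hat, half_width
    ```

    Here the deadline is met exactly when freq >= 0.6, so the true probability is 0.8; the estimate converges to it as the number of runs grows, which is the trade-off SMC tools expose between precision and simulation budget.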

  19. Validation of survey information on smoking and alcohol consumption against import statistics, Greenland 1993-2010.

    Science.gov (United States)

    Bjerregaard, Peter; Becker, Ulrik

    2013-01-01

    Questionnaires are widely used to obtain information on health-related behaviour, and they are more often than not the only method that can be used to assess the distribution of behaviour in subgroups of the population. No validation studies of reported consumption of tobacco or alcohol have been published from circumpolar indigenous communities. The purpose of the study is to compare information on the consumption of tobacco and alcohol obtained from 3 population surveys in Greenland with import statistics. Estimates of consumption of cigarettes and alcohol using several different survey instruments in cross-sectional population studies from 1993-1994, 1999-2001 and 2005-2010 were compared with import statistics from the same years. For cigarettes, survey results accounted for virtually the total import. Alcohol consumption was significantly under-reported with reporting completeness ranging from 40% to 51% for different estimates of habitual weekly consumption in the 3 study periods. Including an estimate of binge drinking increased the estimated total consumption to 78% of the import. Compared with import statistics, questionnaire-based population surveys capture the consumption of cigarettes well in Greenland. Consumption of alcohol is under-reported, but asking about binge episodes in addition to the usual intake considerably increased the reported intake in this population and made it more in agreement with import statistics. It is unknown to what extent these findings at the population level can be inferred to population subgroups.
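    Reporting completeness as used in the abstract is simply the ratio of the survey-based estimate to the import total. A minimal sketch with round illustrative numbers (not the study's actual totals):

    ```python
    def reporting_completeness(survey_estimate, import_total):
        """Completeness of self-reported consumption relative to import statistics."""
        return survey_estimate / import_total

    # Hypothetical annual alcohol totals (same units) for one survey period:
    weekly_only = reporting_completeness(survey_estimate=40, import_total=100)  # habitual weekly intake
    with_binge = reporting_completeness(survey_estimate=78, import_total=100)   # plus binge episodes
    assert with_binge > weekly_only
    ```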

  20. Contributions to statistics

    CERN Document Server

    Mahalanobis, P C

    1965-01-01

    Contributions to Statistics focuses on the processes, methodologies, and approaches involved in statistics. The book is presented to Professor P. C. Mahalanobis on the occasion of his 70th birthday. The selection first offers information on the recovery of ancillary information and combinatorial properties of partially balanced designs and association schemes. Discussions focus on combinatorial applications of the algebra of association matrices, sample size analogy, association matrices and the algebra of association schemes, and conceptual statistical experiments. The book then examines latt

  1. Types of Lexicographical Information Needs and their Relevance for Information Science

    OpenAIRE

    Bergenholtz, Henning; Agerbo, Heidi

    2017-01-01

    In some situations, you need information in order to solve a problem that has occurred. In information science, user needs are often described through very specific examples rather than through a classification of situation types in which information needs occur. Furthermore, information science often describes general human needs, typically with a reference to Maslow's classification of needs (1954), instead of actual information needs. Lexicography has also focused on information needs, but...

  2. Information Graph Flow: A Geometric Approximation of Quantum and Statistical Systems

    Science.gov (United States)

    Vanchurin, Vitaly

    2018-05-01

    Given a quantum (or statistical) system with a very large number of degrees of freedom and a preferred tensor product factorization of the Hilbert space (or of a space of distributions) we describe how it can be approximated with a very low-dimensional field theory with geometric degrees of freedom. The geometric approximation procedure consists of three steps. The first step is to construct weighted graphs (we call information graphs) with vertices representing subsystems (e.g., qubits or random variables) and edges representing mutual information (or the flow of information) between subsystems. The second step is to deform the adjacency matrices of the information graphs to that of a (locally) low-dimensional lattice using the graph flow equations introduced in the paper. (Note that the graph flow produces very sparse adjacency matrices and thus might also be used, for example, in machine learning or network science where the task of graph sparsification is of a central importance.) The third step is to define an emergent metric and to derive an effective description of the metric and possibly other degrees of freedom. To illustrate the procedure we analyze (numerically and analytically) two information graph flows with geometric attractors (towards locally one- and two-dimensional lattices) and metric perturbations obeying a geometric flow equation. Our analysis also suggests a possible approach to (a non-perturbative) quantum gravity in which the geometry (a secondary object) emerges directly from a quantum state (a primary object) due to the flow of the information graphs.
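    The first step of the procedure, building a weighted information graph from pairwise mutual information and pruning weak edges, can be sketched as follows; the chain of variables and the pruning threshold are illustrative assumptions:

    ```python
    import numpy as np

    def pairwise_mi_graph(samples, bins=8, threshold=0.02):
        """Weighted 'information graph': vertices are random variables, edge
        weights are histogram-based pairwise mutual information estimates;
        edges below the threshold are pruned (a crude sparsification)."""
        n_vars = samples.shape[1]
        adj = np.zeros((n_vars, n_vars))
        for i in range(n_vars):
            for j in range(i + 1, n_vars):
                pxy, _, _ = np.histogram2d(samples[:, i], samples[:, j], bins=bins)
                pxy = pxy / pxy.sum()
                px = pxy.sum(axis=1, keepdims=True)
                py = pxy.sum(axis=0, keepdims=True)
                nz = pxy > 0
                mi = float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
                adj[i, j] = adj[j, i] = mi if mi >= threshold else 0.0
        return adj

    # A chain X0 -> X1 -> X2: neighbours share information, the ends less so.
    rng = np.random.default_rng(0)
    x0 = rng.normal(size=4000)
    x1 = x0 + 2.0 * rng.normal(size=4000)
    x2 = x1 + 2.0 * rng.normal(size=4000)
    adj = pairwise_mi_graph(np.column_stack([x0, x1, x2]))
    ```

    The resulting adjacency matrix is symmetric, and the strongly coupled pair (X1, X2) carries the heaviest edge, which is the kind of local structure the graph flow is then deformed toward.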

  3. Statistical physics of vaccination

    Science.gov (United States)

    Wang, Zhen; Bauch, Chris T.; Bhattacharyya, Samit; d'Onofrio, Alberto; Manfredi, Piero; Perc, Matjaž; Perra, Nicola; Salathé, Marcel; Zhao, Dawei

    2016-12-01

    Historically, infectious diseases caused considerable damage to human societies, and they continue to do so today. To help reduce their impact, mathematical models of disease transmission have been studied to help understand disease dynamics and inform prevention strategies. Vaccination-one of the most important preventive measures of modern times-is of great interest both theoretically and empirically. And in contrast to traditional approaches, recent research increasingly explores the pivotal implications of individual behavior and heterogeneous contact patterns in populations. Our report reviews the developmental arc of theoretical epidemiology with emphasis on vaccination, as it led from classical models assuming homogeneously mixing (mean-field) populations and ignoring human behavior, to recent models that account for behavioral feedback and/or population spatial/social structure. Many of the methods used originated in statistical physics, such as lattice and network models, and their associated analytical frameworks. Similarly, the feedback loop between vaccinating behavior and disease propagation forms a coupled nonlinear system with analogs in physics. We also review the new paradigm of digital epidemiology, wherein sources of digital data such as online social media are mined for high-resolution information on epidemiologically relevant individual behavior. Armed with the tools and concepts of statistical physics, and further assisted by new sources of digital data, models that capture nonlinear interactions between behavior and disease dynamics offer a novel way of modeling real-world phenomena, and can help improve health outcomes. We conclude the review by discussing open problems in the field and promising directions for future research.

  4. Practical Statistics for LHC Physicists: Descriptive Statistics, Probability and Likelihood (1/3)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    These lectures cover those principles and practices of statistics that are most relevant for work at the LHC. The first lecture discusses the basic ideas of descriptive statistics, probability and likelihood. The second lecture covers the key ideas in the frequentist approach, including confidence limits, profile likelihoods, p-values, and hypothesis testing. The third lecture covers inference in the Bayesian approach. Throughout, real-world examples will be used to illustrate the practical application of the ideas. No previous knowledge is assumed.
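    A tiny worked example of the likelihood ideas covered in the first lecture: the maximum-likelihood estimate for a single Poisson count, found by a grid search (the observed count and grid are arbitrary choices for illustration):

    ```python
    import math

    def poisson_loglik(mu, n_obs):
        """Log-likelihood of observing n_obs events when mu are expected."""
        return n_obs * math.log(mu) - mu - math.lgamma(n_obs + 1)

    # For a single Poisson count, the MLE is mu_hat = n_obs:
    n_obs = 7
    mus = [m / 10 for m in range(1, 201)]            # grid 0.1 .. 20.0
    mu_hat = max(mus, key=lambda m: poisson_loglik(m, n_obs))
    assert abs(mu_hat - n_obs) < 0.101
    ```

    The same log-likelihood is the building block for the profile likelihoods and p-values discussed in the later lectures.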

  5. Statistics of 2D solitons

    International Nuclear Information System (INIS)

    Brekke, L.; Imbo, T.D.

    1992-01-01

    The authors study the inequivalent quantizations of (1 + 1)-dimensional nonlinear sigma models with space manifold S^1 and target manifold X. If X is multiply connected, these models possess topological solitons. After providing a definition of spin and statistics for these solitons and demonstrating a spin-statistics correlation, the authors give various examples in which the solitons can have exotic statistics. In some of these models, the solitons may obey a generalized version of fractional statistics called ambistatistics. The relevance of these 2d models to the statistics of vortices in (2 + 1)-dimensional spontaneously broken gauge theories is discussed. The authors close with a discussion concerning the extension of these results to higher dimensions

  6. Protection of safety-relevant information in distributed energy information systems; Schutz sicherheitsrelevanter Informationen in verteilten Energieinformationssystemen

    Energy Technology Data Exchange (ETDEWEB)

    Beenken, Petra

    2010-07-01

    Within the last years there has been an ongoing change in the energy domain. The German Energy Industry Act (EnWG) requires a liberalization that leads to a strict separation of domains such as transportation, supply and conversion of energy. Furthermore, climate and environmental protection as well as cost transparency and energy saving in combination with resource efficiency lead to new challenges for the energy industry. The so-called smart grid vision and the resulting design of an ICT-based information structure for the energy domain will help to reach these goals by integrating renewable energy resources, saving fuels and achieving higher energy efficiency. In order to reach these goals, information about current energy generation, energy storage and energy demand is required. Through efficient networking and fast information exchange by means of an energy information network, efficient energy use can be achieved. The federated networking of such an energy information network, however, can become a weak point for cyber security within the energy domain. The growing number of people involved and of data exchanges creates more potential points of attack than before. Therefore, suitable protection of an energy information network is necessary. Under Section 9 of the EnWG, the protection goal of confidentiality is particularly important. But the implementation of confidentiality must not lead to a violation of availability requirements, which are very important at some points of the energy domain. In addition to the identification of such crucial side effects, the implementation of confidentiality for distributed, decentralized systems is a challenge for the domain. The ENERTRUST security model includes a knowledge base construction, which allows the identification of such side effects or conflicts in the energy domain by applying reasoning techniques. Moreover, it allows the realization of confidentiality from distributed locations through the use and combination of

  7. Use and perceptions of information among family physicians: sources considered accessible, relevant, and reliable.

    Science.gov (United States)

    Kosteniuk, Julie G; Morgan, Debra G; D'Arcy, Carl K

    2013-01-01

    The research determined (1) the information sources that family physicians (FPs) most commonly use to update their general medical knowledge and to make specific clinical decisions, and (2) the information sources FPs found to be most physically accessible, intellectually accessible (easy to understand), reliable (trustworthy), and relevant to their needs. A cross-sectional postal survey of 792 FPs and locum tenens, in full-time or part-time medical practice, currently practicing or on leave of absence in the Canadian province of Saskatchewan was conducted during the period of January to April 2008. Of 666 eligible physicians, 331 completed and returned surveys, resulting in a response rate of 49.7% (331/666). Medical textbooks and colleagues in the main patient care setting were the top 2 sources for the purpose of making specific clinical decisions. Medical textbooks were most frequently considered by FPs to be reliable (trustworthy), and colleagues in the main patient care setting were most physically accessible (easy to access). When making specific clinical decisions, FPs were most likely to use information from sources that they considered to be reliable and generally physically accessible, suggesting that FPs can best be supported by facilitating easy and convenient access to high-quality information.

  8. Pitfalls in the statistical examination and interpretation of the correspondence between physician and patient satisfaction ratings and their relevance for shared decision making research

    Science.gov (United States)

    2011-01-01

    Background The correspondence of satisfaction ratings between physicians and patients can be assessed on different dimensions. One may examine whether they differ between the two groups or focus on measures of association or agreement. The aim of our study was to evaluate methodological difficulties in calculating the correspondence between patient and physician satisfaction ratings and to show the relevance for shared decision making research. Methods We utilised a structured tool for cardiovascular prevention (arriba™) in a pragmatic cluster-randomised controlled trial. Correspondence between patient and physician satisfaction ratings after individual primary care consultations was assessed using the Patient Participation Scale (PPS). We used the Wilcoxon signed-rank test, the marginal homogeneity test, Kendall's tau-b, weighted kappa, percentage of agreement, and the Bland-Altman method to measure differences, associations, and agreement between physicians and patients. Results Statistical measures signal large differences between patient and physician satisfaction ratings with more favourable ratings provided by patients and a low correspondence regardless of group allocation. Closer examination of the raw data revealed a high ceiling effect of satisfaction ratings and only slight disagreement regarding the distributions of differences between physicians' and patients' ratings. Conclusions Traditional statistical measures of association and agreement are not able to capture a clinically relevant appreciation of the physician-patient relationship by both parties in skewed satisfaction ratings. Only the Bland-Altman method for assessing agreement augmented by bar charts of differences was able to indicate this. Trial registration: ISRCTN71348772 PMID:21592337
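    The Bland-Altman computation the authors found most informative reduces to the bias (mean difference) and the 95% limits of agreement. A minimal sketch on made-up paired ratings (not the study's data):

```python
# Bland-Altman agreement sketch: bias and 95% limits of agreement.
# The paired ratings are invented illustrative data.
import statistics

def bland_altman(ratings_a, ratings_b):
    """Return (bias, (lower_limit, upper_limit)) for paired ratings."""
    diffs = [a - b for a, b in zip(ratings_a, ratings_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

patient   = [5, 5, 4, 5, 5, 4, 4, 5]   # hypothetical patient ratings
physician = [4, 5, 4, 4, 5, 3, 4, 5]   # hypothetical physician ratings
bias, (lo, hi) = bland_altman(patient, physician)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

    A positive bias with limits of agreement straddling zero mirrors the paper's finding of more favourable patient ratings with only slight disagreement.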

  9. 75 FR 20843 - Notice of Workshop To Discuss Policy-Relevant Science to Inform EPA's Integrated Plan for the...

    Science.gov (United States)

    2010-04-21

    ... Policy-Relevant Science to Inform EPA's Integrated Plan for the Review of the Lead National Ambient Air.... Environmental Protection Agency (EPA) is announcing that a workshop entitled, ``Workshop to Discuss Policy... workshop will be open to attendance by interested public observers on a first-come, first-served basis up...

  10. Some statistical considerations related to the estimation of cancer risk following exposure to ionizing radiation

    International Nuclear Information System (INIS)

    Land, C.E.; Pierce, D.A.

    1983-01-01

    Statistical theory and methodology provide the logical structure for scientific inference about the cancer risk associated with exposure to ionizing radiation. Although much is known about radiation carcinogenesis, the risk associated with low-level exposures is difficult to assess because it is too small to measure directly. Estimation must therefore depend upon mathematical models which relate observed risks at high exposure levels to risks at lower exposure levels. Extrapolated risk estimates obtained using such models are heavily dependent upon assumptions about the shape of the dose-response relationship, the temporal distribution of risk following exposure, and variation of risk according to variables such as age at exposure, sex, and underlying population cancer rates. Expanded statistical models, which make explicit certain assumed relationships between different data sets, can be used to strengthen inferences by incorporating relevant information from diverse sources. They also allow the uncertainties inherent in information from related data sets to be expressed in estimates which partially depend upon that information. To the extent that informed opinion is based upon a valid assessment of scientific data, the larger context of decision theory, which includes statistical theory, provides a logical framework for the incorporation into public policy decisions of the informational content of expert opinion.
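    The extrapolation issue described above can be made concrete with the common linear-quadratic form risk(D) = αD + βD², fitted at high doses and extrapolated downward. The coefficients and doses below are invented for illustration:

```python
# Linear-quadratic dose-response sketch: risk(D) = alpha*D + beta*D^2.
# alpha, beta, and the doses are hypothetical illustrative values.

def excess_risk(dose, alpha, beta):
    return alpha * dose + beta * dose**2

# Suppose a fit to high-dose data gave these (hypothetical) coefficients:
alpha, beta = 0.05, 0.01        # per Gy and per Gy^2

high = excess_risk(2.0, alpha, beta)    # within the measurable range
low = excess_risk(0.05, alpha, beta)    # extrapolated, model-dependent
linear_only = alpha * 0.05              # pure linear extrapolation
print(high, round(low, 6), linear_only)
```

    At low doses the quadratic term is negligible, so the choice between linear and linear-quadratic shapes dominates the extrapolated estimate only through the fitted coefficients, exactly the model dependence the passage warns about.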

  11. The effects of statistical information on risk and ambiguity attitudes, and on rational insurance decisions

    NARCIS (Netherlands)

    P.P. Wakker (Peter); D.R.M. Timmermans (Danielle); I. Machielse (Irma)

    2007-01-01

    This paper presents a field study into the effects of statistical information concerning risks on willingness to take insurance, with special attention being paid to the usefulness of these effects for the clients (the insured). Unlike many academic studies, we were able to use in-depth

  12. Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.

    Science.gov (United States)

    Frieden, B Roy; Gatenby, Robert A

    2013-10-01

    Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived assuming, as a working hypothesis, that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations include the de Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms eliminates the need to assume it in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
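    As a numerical sanity check on the quantity being maximized, the sketch below estimates the Fisher information of a Gaussian with unknown mean by Monte Carlo and compares it to the closed form I = 1/σ²; the sample size and σ are arbitrary choices:

```python
# Monte Carlo estimate of Fisher information I(theta) = E[score^2]
# for a Gaussian with unknown mean, known sigma. Illustrative values.
import math, random

def fisher_information_gaussian(sigma, n_samples=100_000, eps=1e-4, seed=1):
    """Estimate I(theta) = E[(d/dtheta log f(x; theta))^2] by sampling."""
    rng = random.Random(seed)
    theta = 0.0

    def loglik(x, mu):
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

    total = 0.0
    for _ in range(n_samples):
        x = rng.gauss(theta, sigma)
        # central difference of the log-likelihood (exact for a quadratic)
        score = (loglik(x, theta + eps) - loglik(x, theta - eps)) / (2 * eps)
        total += score ** 2
    return total / n_samples

sigma = 2.0
estimate = fisher_information_gaussian(sigma)
exact = 1.0 / sigma**2
print(round(estimate, 3), exact)
```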

  13. Bengali-English Relevant Cross Lingual Information Access Using Finite Automata

    Science.gov (United States)

    Banerjee, Avishek; Bhattacharyya, Swapan; Hazra, Simanta; Mondal, Shatabdi

    2010-10-01

    CLIR techniques search unrestricted texts, typically extracting terms and relationships from bilingual electronic dictionaries or bilingual text collections and using them to translate query and/or document representations into a compatible set of representations with a common feature set. In this paper, we focus on a dictionary-based approach, using a bilingual data dictionary in combination with statistics-based methods to avoid the problem of ambiguity; the development of the human-computer interface aspects of NLP (Natural Language Processing) is also an aim of this paper. Intelligent web search in a regional language like Bengali depends upon two major aspects: CLIA (Cross Language Information Access) and NLP. In our previous work with IIT Kharagpur, we developed content-based CLIA, where content-based searching is trained on Bengali corpora with the help of a Bengali data dictionary. Here we introduce intelligent search, which recognizes the intended meaning of a sentence and offers a more realistic approach to human-computer interaction.

  14. Comparing the Influence of Title and URL in Information Retrieval Relevance in Search Engines Results between Human Science and Agriculture Science

    Directory of Open Access Journals (Sweden)

    Parisa Allami

    2012-12-01

    As the World Wide Web provides suitable methods for producing and publishing information, it has become a mediator for scientific publishing. This environment comprises billions of web pages, each with its own title, content, address, and purpose. Search engines provide a variety of facilities for limiting search results in order to raise the possibility of relevance in the retrieved results. One of these facilities is the limitation of keywords and search terms to the title or URL, which can increase the relevance of results significantly; search engines claim that results limited to title and URL are the most relevant. This research compared the relevance of results limited to title and URL in the agricultural and humanities areas, as judged by their users; it also compared the presence of keywords in titles and URLs between the two areas, and examined the relationship between the number of search keywords and the matching of keywords in titles and URLs. For this purpose, 30 MA students working on their theses were chosen in each area. The results they obtained by limiting their information needs to title and URL were significantly relevant. URL-limited results were significantly more relevant in the agricultural area, but there was no significant difference between title and URL results in the humanities. For comparing the number of keywords in titles and URLs in the two areas, 30 keywords in each area were chosen; there was no significant difference between the two areas in the number of keywords appearing in website titles and URLs. To examine the relationship between the number of search keywords and the matching of title and URL, 45 keywords in each area were chosen and divided into three groups (one keyword, two keywords, and three keywords). It was determined that the fewer the search keywords, the greater the matching between title and URL, and if the matching
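    The basic matching check underlying the comparison above can be sketched as a small function (the sample title, URL, and keywords are hypothetical):

```python
# Keyword-in-title / keyword-in-URL check, the primitive behind
# title- and URL-limited searching. Sample data is hypothetical.

def keyword_match(keywords, title, url):
    """Return (in_title, in_url): do all keywords appear in each field?"""
    title_l, url_l = title.lower(), url.lower()
    in_title = all(kw.lower() in title_l for kw in keywords)
    in_url = all(kw.lower() in url_l for kw in keywords)
    return in_title, in_url

hits = keyword_match(
    ["soil", "erosion"],
    "Soil Erosion Models in Agriculture",
    "https://example.org/agriculture/soil-erosion-models",
)
print(hits)
```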

  15. Applied statistics for economics and business

    CERN Document Server

    Özdemir, Durmuş

    2016-01-01

    This textbook introduces readers to practical statistical issues by presenting them within the context of real-life economics and business situations. It presents the subject in a non-threatening manner, with an emphasis on concise, easily understandable explanations. It has been designed to be accessible and student-friendly and, as an added learning feature, provides all the relevant data required to complete the accompanying exercises and computing problems, which are presented at the end of each chapter. It also discusses index numbers and inequality indices in detail, since these are of particular importance to students and commonly omitted in textbooks. Throughout the text it is assumed that the student has no prior knowledge of statistics. It is aimed primarily at business and economics undergraduates, providing them with the basic statistical skills necessary for further study of their subject. However, students of other disciplines will also find it relevant.

  16. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    Science.gov (United States)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique - multi-level Gaussian process regression - on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in
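    A heavily simplified sketch of the fill-in idea (single-fidelity, one-dimensional Gaussian process regression with an RBF kernel, rather than the paper's multi-level scheme; data and hyperparameters are invented):

```python
# GP regression sketch: fill in a field value lost to a processor
# failure from the surviving points. Single-fidelity toy version of
# the paper's multi-level scheme; data and length scale are invented.
import numpy as np

def gp_predict(x_train, y_train, x_test, length=0.5, noise=1e-6):
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)   # posterior mean weights
    return k_star @ alpha

x_obs = np.array([0.0, 0.25, 0.75, 1.0])   # surviving grid points
y_obs = np.sin(2 * np.pi * x_obs)          # field values there
x_missing = np.array([0.5])                # point lost to a failure
y_filled = gp_predict(x_obs, y_obs, x_missing)
print(round(float(y_filled[0]), 3))
```

    By the symmetry of the toy data, the reconstructed value at x = 0.5 is (numerically) zero, matching sin(π).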

  17. Labor Informality: General Causes

    Directory of Open Access Journals (Sweden)

    Gustavo Sandoval Betancour

    2016-04-01

    The article examines the main causes of labor informality in order to verify the validity of classical theories that explain unemployment in market economies and its relationship to informality. Methodologically, the empirical part of the project was based on international statistics, comparing the evolution of labor market structure in a combined sample of highly industrialized countries and other less industrialized ones. Empirical evidence supports the conclusion that the classical economic theory of Marxist origin is insufficient to explain the causes of unemployment in contemporary market economies, and that it fails to satisfactorily explain informality. On the contrary, we conclude that the theory in question is more relevant to explaining informality in centrally planned economies, where this phenomenon has been present even more significantly than in free market economies.

  18. Tennessee StreamStats: A Web-Enabled Geographic Information System Application for Automating the Retrieval and Calculation of Streamflow Statistics

    Science.gov (United States)

    Ladd, David E.; Law, George S.

    2007-01-01

    The U.S. Geological Survey (USGS) provides streamflow and other stream-related information needed to protect people and property from floods, to plan and manage water resources, and to protect water quality in the streams. Streamflow statistics provided by the USGS, such as the 100-year flood and the 7-day 10-year low flow, frequently are used by engineers, land managers, biologists, and many others to help guide decisions in their everyday work. In addition to streamflow statistics, resource managers often need to know the physical and climatic characteristics (basin characteristics) of the drainage basins for locations of interest to help them understand the mechanisms that control water availability and water quality at these locations. StreamStats is a Web-enabled geographic information system (GIS) application that makes it easy for users to obtain streamflow statistics, basin characteristics, and other information for USGS data-collection stations and for ungaged sites of interest. If a user selects the location of a data-collection station, StreamStats will provide previously published information for the station from a database. If a user selects a location where no data are available (an ungaged site), StreamStats will run a GIS program to delineate a drainage basin boundary, measure basin characteristics, and estimate streamflow statistics based on USGS streamflow prediction methods. A user can download a GIS feature class of the drainage basin boundary with attributes including the measured basin characteristics and streamflow estimates.
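    One of the statistics mentioned above, the 7-day 10-year low flow (7Q10), starts from the 7-day moving-average minimum of daily discharge; the full 7Q10 additionally requires a frequency analysis over many years of such minima. A sketch on invented daily flows:

```python
# 7-day moving-average minimum of daily discharge, the per-year
# building block of the 7Q10 low-flow statistic. Flows are invented.

def seven_day_low(daily_flows):
    """Minimum 7-day moving average of daily discharge."""
    return min(
        sum(daily_flows[i:i + 7]) / 7.0
        for i in range(len(daily_flows) - 6)
    )

flows = [30, 28, 25, 24, 22, 21, 20, 20, 19, 21, 26, 35, 40, 38]
print(round(seven_day_low(flows), 2))
```

    Repeating this per year and fitting a low-flow frequency distribution to the annual minima would yield the 10-year recurrence value.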

  19. Official Statistics and Statistics Education: Bridging the Gap

    Directory of Open Access Journals (Sweden)

    Gal Iddo

    2017-03-01

    This article aims to challenge official statistics providers and statistics educators to ponder on how to help non-specialist adult users of statistics develop those aspects of statistical literacy that pertain to official statistics. We first document the gap in the literature in terms of the conceptual basis and educational materials needed for such an undertaking. We then review skills and competencies that may help adults to make sense of statistical information in areas of importance to society. Based on this review, we identify six elements related to official statistics about which non-specialist adult users should possess knowledge in order to be considered literate in official statistics: (1) the system of official statistics and its work principles; (2) the nature of statistics about society; (3) indicators; (4) statistical techniques and big ideas; (5) research methods and data sources; and (6) awareness and skills for citizens’ access to statistical reports. Based on this ad hoc typology, we discuss directions that official statistics providers, in cooperation with statistics educators, could take in order to (1) advance the conceptualization of skills needed to understand official statistics, and (2) expand educational activities and services, specifically by developing a collaborative digital textbook and a modular online course, to improve public capacity for understanding of official statistics.

  20. Features of statistical dynamics in a finite system

    International Nuclear Information System (INIS)

    Yan, Shiwei; Sakata, Fumihiko; Zhuo Yizhong

    2002-01-01

    We study features of statistical dynamics in a finite Hamiltonian system composed of a relevant one degree of freedom coupled to an irrelevant multidegree-of-freedom system through a weak interaction. Special attention is paid to how the statistical dynamics changes depending on the number of degrees of freedom in the irrelevant system. It is found that the macrolevel statistical aspects are strongly related to an appearance of the microlevel chaotic motion, and a dissipation of the relevant motion is realized passing through three distinct stages: dephasing, statistical relaxation, and equilibrium regimes. It is clarified that the dynamical description and the conventional transport approach provide us with almost the same macrolevel and microlevel mechanisms only for the system with a very large number of irrelevant degrees of freedom. It is also shown that the statistical relaxation in the finite system is an anomalous diffusion and the fluctuation effects have a finite correlation time.

  1. Statistical methods

    CERN Document Server

    Szulc, Stefan

    1965-01-01

    Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then

  2. Statistical Measures Alone Cannot Determine Which Database (BNI, CINAHL, MEDLINE, or EMBASE) Is the Most Useful for Searching Undergraduate Nursing Topics. A Review of: Stokes, P., Foster, A., & Urquhart, C. (2009). Beyond relevance and recall: Testing new user-centred measures of database performance. Health Information and Libraries Journal, 26(3), 220-231.

    Directory of Open Access Journals (Sweden)

    Giovanna Badia

    2011-03-01

    database for a search topic, which was calculated as a percentage of the total number of unique results found in all four database searches; • availability (the number of relevant full text articles obtained from the database search results, calculated as a percentage of the total number of relevant results found in the database); • retrievability (the number of relevant full text articles obtained from the database search results, calculated as a percentage of the total number of relevant full text articles found from all four database searches); • effectiveness (the probable odds that a database will obtain relevant search results); • efficiency (the probable odds that a database will obtain both unique and relevant search results); and • accessibility (the probable odds that the full text of the relevant references obtained from the database search are available electronically or in print via the user’s library). Students decided whether the search results were relevant to their topic by using a “yes/no” scale. Only record titles were used to make relevancy judgments. Main Results – Friedman’s Test and odds ratios were used to compare the performance of BNI, CINAHL, MEDLINE, and EMBASE when searching for information about nursing topics. These two statistical measures demonstrated the following: • BNI had the best average score for the precision, availability, effectiveness, and accessibility of search results; • CINAHL scored the highest for the novelty, retrievability, and efficiency of results, and ranked second place for all the other criteria; • MEDLINE excelled in the areas of recall and originality, and ranked second place for novelty and retrievability; and • EMBASE did not obtain the highest, or second highest, score for any of the criteria. Conclusion – According to the authors, these results suggest that none of the databases studied can be considered the most useful for searching undergraduate nursing topics. CINAHL and
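    Two of the study's simpler measures reduce to elementary arithmetic; the counts below are hypothetical, not taken from the review:

```python
# Precision and an odds-style "effectiveness" measure for one
# database search. The counts are hypothetical illustrations.

def precision(relevant_retrieved, total_retrieved):
    """Relevant results as a fraction of all retrieved results."""
    return relevant_retrieved / total_retrieved

def odds(successes, failures):
    """Odds of retrieving a relevant result."""
    return successes / failures

retrieved, relevant = 50, 30          # hypothetical counts
prec = precision(relevant, retrieved)
effectiveness_odds = odds(relevant, retrieved - relevant)
print(prec, effectiveness_odds)
```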

  3. [Test Reviews in Child Psychology: Test Users Wish to Obtain Practical Information Relevant to their Respective Field of Work].

    Science.gov (United States)

    Renner, Gerolf; Irblich, Dieter

    2016-11-01

    This study investigated to what extent diagnosticians use reviews of psychometric tests for children and adolescents, how they evaluate their quality, and what they expect concerning content. Test users (n = 323) from different areas of work (notably social pediatrics, early intervention, special education, speech and language therapy) rated test reviews as one of the most important sources of information. Readers of test reviews value practically oriented descriptions and evaluations of tests that are relevant to their respective field of work. They expect independent reviews that critically discuss opportunities and limits of the tests under scrutiny. The results show that authors of test reviews should not only have a background in test theory but should also be familiar with the practical application of tests in various settings.

  4. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters.

    Science.gov (United States)

    Dill, Vanderson; Klein, Pedro Costa; Franco, Alexandre Rosa; Pinho, Márcio Sarroglia

    2018-04-01

    Current state-of-the-art methods for whole and subfield hippocampus segmentation use pre-segmented templates, also known as atlases, in the pre-processing stages. Typically, the input image is registered to the template, which provides prior information for the segmentation process. Using a single standard atlas increases the difficulty in dealing with individuals who have a brain anatomy that is morphologically different from the atlas, especially in older brains. To increase the segmentation precision in these cases, without any manual intervention, multiple atlases can be used. However, registration to many templates leads to a high computational cost. Researchers have proposed to use an atlas pre-selection technique based on meta-information followed by the selection of an atlas based on image similarity. Unfortunately, this method also presents a high computational cost due to the image-similarity process. Thus, it is desirable to pre-select a smaller number of atlases as long as this does not impact on the segmentation quality. To pick out an atlas that provides the best registration, we evaluate the use of three meta-information parameters (medical condition, age range, and gender) to choose the atlas. In this work, 24 atlases were defined, each based on a combination of the three meta-information parameters. These atlases were used to segment 352 volumes from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Hippocampus segmentation with each of these atlases was evaluated and compared to reference segmentations of the hippocampus, which are available from ADNI. The use of atlas selection by meta-information led to a significant gain in the Dice similarity coefficient, which reached 0.68 ± 0.11, compared to 0.62 ± 0.12 when using only the standard MNI152 atlas. Statistical analysis showed that the three meta-information parameters provided a significant improvement in the segmentation accuracy.
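    The Dice similarity coefficient reported above is Dice = 2|A∩B| / (|A| + |B|) over the two voxel sets. A sketch on toy masks:

```python
# Dice similarity coefficient between two segmentation masks,
# represented here as toy sets of voxel coordinates.

def dice(seg_a, seg_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two voxel sets."""
    a, b = set(seg_a), set(seg_b)
    return 2 * len(a & b) / (len(a) + len(b))

auto_mask = {(1, 1), (1, 2), (2, 1), (2, 2), (3, 2)}   # toy automatic mask
reference = {(1, 1), (1, 2), (2, 1), (2, 2), (2, 3)}   # toy reference mask
score = dice(auto_mask, reference)
print(score)
```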

  5. Culturally-Relevant Online Cancer Education Modules Empower Alaska's Community Health Aides/Practitioners to Disseminate Cancer Information and Reduce Cancer Risk.

    Science.gov (United States)

    Cueva, Katie; Revels, Laura; Cueva, Melany; Lanier, Anne P; Dignan, Mark; Viswanath, K; Fung, Teresa T; Geller, Alan C

    2017-04-12

    To address a desire for timely, medically accurate cancer education in rural Alaska, ten culturally relevant online learning modules were developed with, and for, Alaska's Community Health Aides/Practitioners (CHA/Ps). The project was guided by the framework of Community-Based Participatory Action Research, honored Indigenous Ways of Knowing, and was informed by Empowerment Theory. A total of 428 end-of-module evaluation surveys were completed by 89 unique Alaska CHA/Ps between January and December 2016. CHA/Ps shared that as a result of completing the modules, they were empowered to share cancer information with their patients, families, friends, and communities, as well as engage in cancer risk reduction behaviors such as eating healthier, getting cancer screenings, exercising more, and quitting tobacco. CHA/Ps also reported the modules were informative and respectful of their diverse cultures. These results from end-of-module evaluation surveys suggest that the collaboratively developed, culturally relevant, online cancer education modules have empowered CHA/Ps to reduce cancer risk and disseminate cancer information. As one CHA/P wrote, the modules "brought me to tears couple of times, and I think it will help in destroying the silence that surrounds cancer".

  6. Types of lexicographical information needs and their relevance for information science

    DEFF Research Database (Denmark)

    Bergenholtz, Henning; Pedersen, Heidi Agerbo

    2017-01-01

    often describes general human needs, typically with a reference to Maslow’s classification of needs (1954), instead of actual information needs. Lexicography has also focused on information needs, but has developed a more abstract classification of types of information needs, though (until more recent...

  7. Validation of survey information on smoking and alcohol consumption against import statistics, Greenland 1993–2010

    Directory of Open Access Journals (Sweden)

    Peter Bjerregaard

    2013-03-01

    Background. Questionnaires are widely used to obtain information on health-related behaviour, and they are more often than not the only method that can be used to assess the distribution of behaviour in subgroups of the population. No validation studies of reported consumption of tobacco or alcohol have been published from circumpolar indigenous communities. Objective. The purpose of the study is to compare information on the consumption of tobacco and alcohol obtained from 3 population surveys in Greenland with import statistics. Design. Estimates of consumption of cigarettes and alcohol using several different survey instruments in cross-sectional population studies from 1993–1994, 1999–2001 and 2005–2010 were compared with import statistics from the same years. Results. For cigarettes, survey results accounted for virtually the total import. Alcohol consumption was significantly under-reported with reporting completeness ranging from 40% to 51% for different estimates of habitual weekly consumption in the 3 study periods. Including an estimate of binge drinking increased the estimated total consumption to 78% of the import. Conclusion. Compared with import statistics, questionnaire-based population surveys capture the consumption of cigarettes well in Greenland. Consumption of alcohol is under-reported, but asking about binge episodes in addition to the usual intake considerably increased the reported intake in this population and made it more in agreement with import statistics. It is unknown to what extent these findings at the population level can be inferred to population subgroups.
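    The validation arithmetic in the study is a completeness percentage: reported consumption divided by imports. The quantities below are illustrative, chosen only to mirror the reported 51% and 78% figures:

```python
# Reporting completeness: survey-reported consumption as a
# percentage of import statistics. Quantities are illustrative.

def completeness(survey_total, import_total):
    """Reported consumption as a percentage of imports."""
    return 100.0 * survey_total / import_total

# Hypothetical litres of pure alcohol per adult per year:
habitual, habitual_plus_binge, imported = 5.1, 7.8, 10.0
pct_habitual = completeness(habitual, imported)
pct_with_binge = completeness(habitual_plus_binge, imported)
print(round(pct_habitual), round(pct_with_binge))
```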

  8. Imputing historical statistics, soils information, and other land-use data to crop area

    Science.gov (United States)

    Perry, C. R., Jr.; Willis, R. W.; Lautenschlager, L.

    1982-01-01

In foreign crop condition monitoring, satellite-acquired imagery is routinely used. To facilitate interpretation of this imagery, it is advantageous to have estimates of the crop types and their extent for small area units, i.e., grid cells on a map, each representing, at 60 deg latitude, an area nominally 25 by 25 nautical miles in size. The feasibility of imputing historical crop statistics, soils information, and other ancillary data to crop area for a province in Argentina is studied.

  9. Mental Illness Statistics

    Science.gov (United States)

... Research shows that mental illnesses are common in ... of mental illnesses, such as suicide and disability. Statistics Topics: Mental Illness, Any Anxiety Disorder ...

  10. Data and Statistics: Heart Failure

    Science.gov (United States)

    ... Summary Coverdell Program 2012-2015 State Summaries Data & Statistics Fact Sheets Heart Disease and Stroke Fact Sheets ... Roadmap for State Planning Other Data Resources Other Statistic Resources Grantee Information Cross-Program Information Online Tools ...

  11. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Larsen, Gunner Chr.; Hansen, Kurt Schaldemose

    2004-01-01

In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge on wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of high-sampled full-scale time series measurements...... are consistent, given the inevitable uncertainties associated with the model as well as with the extreme value data analysis. Keywords: Statistical model, extreme wind conditions, statistical analysis, turbulence, wind loading, wind shear, wind turbines.

  12. Uterine Cancer Statistics

    Science.gov (United States)

    ... Doing AMIGAS Stay Informed Cancer Home Uterine Cancer Statistics Language: English (US) Español (Spanish) Recommend on Facebook ... the most commonly diagnosed gynecologic cancer. U.S. Cancer Statistics Data Visualizations Tool The Data Visualizations tool makes ...

  13. Analysis of statistical misconception in terms of statistical reasoning

    Science.gov (United States)

    Maryati, I.; Priatna, N.

    2018-05-01

Reasoning skill is needed by everyone to face the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, interpret, and draw conclusions from information. Developing this skill can be done through various levels of education. However, the skill is low because many people, students included, assume that statistics is just the ability to count and use formulas. Students still have a negative attitude toward courses related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course with respect to statistical reasoning skill. The observation was done by analyzing the misconception test results and the statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of the mathematics education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If the minimum value to meet the standard achievement of course competence is 65, the students' mean values are lower than the standard competence. The results of the misconception study indicate which subtopics should be given particular attention. Based on the assessment results, it was found that students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. In statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  14. Statistics Poster Challenge for Schools

    Science.gov (United States)

    Payne, Brad; Freeman, Jenny; Stillman, Eleanor

    2013-01-01

    The analysis and interpretation of data are important life skills. A poster challenge for schoolchildren provides an innovative outlet for these skills and demonstrates their relevance to daily life. We discuss our Statistics Poster Challenge and the lessons we have learned.

  15. Networking—a statistical physics perspective

    Science.gov (United States)

    Yeung, Chi Ho; Saad, David

    2013-03-01

    Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches as they have been developed specifically to deal with nonlinear large-scale systems. This review aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications.

  16. Networking—a statistical physics perspective

    International Nuclear Information System (INIS)

    Yeung, Chi Ho; Saad, David

    2013-01-01

    Networking encompasses a variety of tasks related to the communication of information on networks; it has a substantial economic and societal impact on a broad range of areas including transportation systems, wired and wireless communications and a range of Internet applications. As transportation and communication networks become increasingly more complex, the ever increasing demand for congestion control, higher traffic capacity, quality of service, robustness and reduced energy consumption requires new tools and methods to meet these conflicting requirements. The new methodology should serve for gaining better understanding of the properties of networking systems at the macroscopic level, as well as for the development of new principled optimization and management algorithms at the microscopic level. Methods of statistical physics seem best placed to provide new approaches as they have been developed specifically to deal with nonlinear large-scale systems. This review aims at presenting an overview of tools and methods that have been developed within the statistical physics community and that can be readily applied to address the emerging problems in networking. These include diffusion processes, methods from disordered systems and polymer physics, probabilistic inference, which have direct relevance to network routing, file and frequency distribution, the exploration of network structures and vulnerability, and various other practical networking applications. (topical review)

  17. Practical Statistics

    CERN Document Server

    Lyons, L.

    2016-01-01

Accelerators and detectors are expensive, both in terms of money and human effort. It is thus important to invest effort in performing a good statistical analysis of the data, in order to extract the best information from it. This series of five lectures deals with practical aspects of statistical issues that arise in typical High Energy Physics analyses.

  18. Task-relevant perceptual features can define categories in visual memory too.

    Science.gov (United States)

    Antonelli, Karla B; Williams, Carrick C

    2017-11-01

    Although Konkle, Brady, Alvarez, and Oliva (2010, Journal of Experimental Psychology: General, 139(3), 558) claim that visual long-term memory (VLTM) is organized on underlying conceptual, not perceptual, information, visual memory results from visual search tasks are not well explained by this theory. We hypothesized that when viewing an object, any task-relevant visual information is critical to the organizational structure of VLTM. In two experiments, we examined the organization of VLTM by measuring the amount of retroactive interference created by objects possessing different combinations of task-relevant features. Based on task instructions, only the conceptual category was task relevant or both the conceptual category and a perceptual object feature were task relevant. Findings indicated that when made task relevant, perceptual object feature information, along with conceptual category information, could affect memory organization for objects in VLTM. However, when perceptual object feature information was task irrelevant, it did not contribute to memory organization; instead, memory defaulted to being organized around conceptual category information. These findings support the theory that a task-defined organizational structure is created in VLTM based on the relevance of particular object features and information.

  19. Cancer Data and Statistics Tools

    Science.gov (United States)

    ... Educational Campaigns Initiatives Stay Informed Cancer Data and Statistics Tools Recommend on Facebook Tweet Share Compartir Cancer Statistics Tools United States Cancer Statistics: Data Visualizations The ...

  20. Enhanced statistical damage identification using frequency-shift information with tunable piezoelectric transducer circuitry

    International Nuclear Information System (INIS)

    Zhao, J; Tang, J; Wang, K W

    2008-01-01

    The frequency-shift-based damage detection method entertains advantages such as global detection capability and easy implementation, but also suffers from drawbacks that include low detection accuracy and sensitivity and the difficulty in identifying damage using a small number of measurable frequencies. Moreover, the damage detection/identification performance is inevitably affected by the uncertainty/variations in the baseline model. In this research, we investigate an enhanced statistical damage identification method using the tunable piezoelectric transducer circuitry. The tunable piezoelectric transducer circuitry can lead to much enriched information on frequency shift (before and after damage occurrence). The circuitry elements, meanwhile, can be directly and accurately measured and thus can be considered uncertainty-free. A statistical damage identification algorithm is formulated which can identify both the mean and variance of the elemental property change. Our analysis indicates that the integration of the tunable piezoelectric transducer circuitry can significantly enhance the robustness of the frequency-shift-based damage identification approach under uncertainty and noise

  1. Multivariate statistical methods and data mining in particle physics (4/4)

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.

  2. Multivariate statistical methods and data mining in particle physics (2/4)

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.

  3. Multivariate statistical methods and data mining in particle physics (1/4)

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    The lectures will cover multivariate statistical methods and their applications in High Energy Physics. The methods will be viewed in the framework of a statistical test, as used e.g. to discriminate between signal and background events. Topics will include an introduction to the relevant statistical formalism, linear test variables, neural networks, probability density estimation (PDE) methods, kernel-based PDE, decision trees and support vector machines. The methods will be evaluated with respect to criteria relevant to HEP analyses such as statistical power, ease of computation and sensitivity to systematic effects. Simple computer examples that can be extended to more complex analyses will be presented.

  4. Coulomb disintegration as an information source for relevant processes in nuclear astrophysics

    International Nuclear Information System (INIS)

    Bertulani, C.A.

    1989-01-01

The possibility of obtaining the photodisintegration cross section using the equivalent-photon number method, first deduced and employed for Coulomb disintegration processes, has been suggested. This is very interesting because there exist radiative capture processes, related to photodisintegration through time reversal, that are relevant in astrophysics. In this paper, the recent results of the Karlsruhe and the Texas A and M groups on the Coulomb disintegration of 6 Li and 7 Li and the problems of the method are discussed. The ideas developed in a previous paper (Nucl. Phys. A458 (1986) 188) are confirmed qualitatively. To understand the process quantitatively, it is necessary to use a quantum treatment that would imply the introduction of Coulomb excitation effects of higher orders. The Coulomb disintegration of exotic secondary beams is also studied. Particularly interesting is the question of what kind of nuclear structure information, such as binding energies or momentum distributions, may be obtained. (Author) [es

  5. Information relevant to ensuring that occupational radiation exposures at nuclear power stations will be as low as in reasonably achievable

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    Regulations require that all reasonable efforts must be made to maintain exposure to radiation as far below the limits specified in 10 CFR Part 20 as is reasonably achievable. Information is provided relevant to attaining goals and objectives for planning, designing, constructing, operating and decommissioning a light-water-cooled nuclear power station to meet that criterion. Much of the information presented is also applicable to other than light-water-cooled nuclear power stations

  6. Locating relevant patient information in electronic health record data using representations of clinical concepts and database structures.

    Science.gov (United States)

    Pan, Xuequn; Cimino, James J

    2014-01-01

    Clinicians and clinical researchers often seek information in electronic health records (EHRs) that are relevant to some concept of interest, such as a disease or finding. The heterogeneous nature of EHRs can complicate retrieval, risking incomplete results. We frame this problem as the presence of two gaps: 1) a gap between clinical concepts and their representations in EHR data and 2) a gap between data representations and their locations within EHR data structures. We bridge these gaps with a knowledge structure that comprises relationships among clinical concepts (including concepts of interest and concepts that may be instantiated in EHR data) and relationships between clinical concepts and the database structures. We make use of available knowledge resources to develop a reproducible, scalable process for creating a knowledge base that can support automated query expansion from a clinical concept to all relevant EHR data.
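The two-gap bridging described above can be sketched as a small graph walk: expand a concept of interest to related concepts, then map each concept to the database locations that may hold its instances. The concept names, table/column locations, and the `expand_query` helper below are hypothetical illustrations, not the authors' actual knowledge base:

```python
from collections import deque

# Hypothetical gap-1 knowledge: relationships among clinical concepts.
CONCEPT_GRAPH = {
    "diabetes mellitus": ["hba1c test", "insulin therapy"],
    "hba1c test": [],
    "insulin therapy": [],
}
# Hypothetical gap-2 knowledge: where each concept is instantiated in the EHR.
CONCEPT_LOCATIONS = {
    "diabetes mellitus": ["diagnoses.icd_code"],
    "hba1c test": ["labs.loinc_code"],
    "insulin therapy": ["medications.rxnorm_code"],
}

def expand_query(concept):
    """Breadth-first expansion from a concept of interest to all related
    concepts, collecting the database locations holding their instances."""
    seen, queue, locations = set(), deque([concept]), []
    while queue:
        c = queue.popleft()
        if c in seen:
            continue
        seen.add(c)
        locations.extend(CONCEPT_LOCATIONS.get(c, []))
        queue.extend(CONCEPT_GRAPH.get(c, []))
    return sorted(locations)

locs = expand_query("diabetes mellitus")
```

A query for "diabetes mellitus" thus reaches diagnosis, lab, and medication tables in one pass, which is the sense in which the knowledge structure supports automated query expansion.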

  7. Statistics and Probability at Secondary Schools in the Federal State of Salzburg: An Empirical Study

    Directory of Open Access Journals (Sweden)

    Wolfgang Voit

    2014-12-01

Full Text Available Knowledge about the practical use of statistics and probability in today's mathematics instruction at secondary schools is vital in order to improve the academic education of future teachers. We have conducted an empirical study among school teachers to inform improvements in mathematics instruction and teacher preparation. The study provides a snapshot of the daily practice of instruction at school. Centered around the four following questions, the status of statistics and probability was examined. Where did the current mathematics teachers study? What relevance do statistics and probability have in school? Which contents are actually taught in class? What kind of continuing education would be desirable for teachers? The study population consisted of all teachers of mathematics at secondary schools in the federal state of Salzburg.

  8. Reports on internet traffic statistics

    NARCIS (Netherlands)

    Hoogesteger, Martijn; de Oliveira Schmidt, R.; Sperotto, Anna; Pras, Aiko

    2013-01-01

Internet traffic statistics can provide valuable information to network analysts and researchers about the way networks are used nowadays. In the past, such information was provided by Internet2 in a public website called Internet2 NetFlow: Weekly Reports. The website reported traffic statistics

  9. Detection and statistics of gusts

    DEFF Research Database (Denmark)

    Hannesdóttir, Ásta; Kelly, Mark C.; Mann, Jakob

    In this project, a more realistic representation of gusts, based on statistical analysis, will account for the variability observed in real-world gusts. The gust representation will focus on temporal, spatial, and velocity scales that are relevant for modern wind turbines and which possibly affect...

  10. An information search model for online social Networks - MOBIRSE

    Directory of Open Access Journals (Sweden)

    Miguel Angel Niño Zambrano

    2015-09-01

    Full Text Available Online Social Networks (OSNs have been gaining great importance among Internet users in recent years.  These are sites where it is possible to meet people, publish, and share content in a way that is both easy and free of charge. As a result, the volume of information contained in these websites has grown exponentially, and web search has consequently become an important tool for users to easily find information relevant to their social networking objectives. Making use of ontologies and user profiles can make these searches more effective. This article presents a model for Information Retrieval in OSNs (MOBIRSE based on user profile and ontologies which aims to improve the relevance of retrieved information on these websites. The social network Facebook was chosen for a case study and as the instance for the proposed model. The model was validated using measures such as At-k Precision and Kappa statistics, to assess its efficiency.
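As a rough illustration of the validation measures named above (precision at k for retrieval relevance, Cohen's kappa for chance-corrected agreement), here is a minimal sketch; the document IDs and rater vectors are invented examples, not data from the MOBIRSE study:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for item in top_k if item in relevant) / k

def cohens_kappa(rater_a, rater_b):
    """Agreement between two binary raters, corrected for the agreement
    expected by chance: kappa = (p_obs - p_exp) / (1 - p_exp)."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    p_a = sum(rater_a) / n
    p_b = sum(rater_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Invented example: 2 of the top 5 retrieved documents are relevant.
retrieved = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}
p5 = precision_at_k(retrieved, relevant, 5)  # -> 0.4

# Invented example: two raters judging six results relevant (1) or not (0).
rater_a = [1, 1, 0, 0, 1, 0]
rater_b = [1, 0, 0, 0, 1, 1]
kappa = cohens_kappa(rater_a, rater_b)  # 4/6 observed vs 1/2 expected -> 1/3
```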

  11. Statistical physics of networks, information and complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Ecke, Robert E [Los Alamos National Laboratory

    2009-01-01

In this project we explore the mathematical methods and concepts of statistical physics that are finding abundant applications across the scientific and technological spectrum, from soft condensed matter systems and bio-informatics to economic and social systems. Our approach exploits the considerable similarity of concepts between statistical physics and computer science, allowing for a powerful multi-disciplinary approach that draws its strength from cross-fertilization and multiple interactions of researchers with different backgrounds. The work on this project takes advantage of the newly appreciated connection between computer science and statistics and addresses important problems in data storage, decoding, optimization, the information processing properties of the brain, the interface between quantum and classical information science, the verification of large software programs, modeling of complex systems including disease epidemiology, resource distribution issues, and the nature of highly fluctuating complex systems. Common themes that the project has been emphasizing are (i) neural computation, (ii) network theory and its applications, and (iii) a statistical physics approach to information theory. The project's efforts focus on the general problem of optimization and variational techniques, algorithm development and information theoretic approaches to quantum systems. These efforts are responsible for fruitful collaborations and the nucleation of science efforts that span multiple divisions such as EES, CCS, 0, T, ISR and P. This project supports the DOE mission in Energy Security and Nuclear Non-Proliferation by developing novel information science tools for communication, sensing, and interacting complex networks such as the internet or energy distribution system. The work also supports programs in Threat Reduction and Homeland Security.

  12. Nuclear medicine statistics

    International Nuclear Information System (INIS)

    Martin, P.M.

    1977-01-01

Numerical description of medical and biologic phenomena is proliferating. Laboratory studies on patients now yield measurements of at least a dozen indices, each with its own normal limits. Within nuclear medicine, numerical analysis as well as numerical measurement and the use of computers are becoming more common. While the digital computer has proved to be a valuable tool for measurement and analysis of imaging and radioimmunoassay data, it has created more work in that users now ask for more detailed calculations and for indices that measure the reliability of quantified observations. The following material is presented with the intention of providing a straightforward methodology to determine values for some useful parameters and to estimate the errors involved. The process used is that of asking relevant questions and then providing answers by illustrations. It is hoped that this will help the reader avoid an error of the third kind, that is, the error of statistical misrepresentation or inadvertent deception. This occurs most frequently in cases where the right answer is found to the wrong question. The purposes of this chapter are: (1) to provide some relevant statistical theory, using a terminology suitable for the nuclear medicine field; (2) to demonstrate the application of a number of statistical methods to the kinds of data commonly encountered in nuclear medicine; (3) to provide a framework to assist the experimenter in choosing the method and the questions most suitable for the experiment at hand; and (4) to present a simple approach for a quantitative quality control program for scintillation cameras and other radiation detectors.

  13. Testing statistical hypotheses of equivalence

    CERN Document Server

    Wellek, Stefan

    2010-01-01

    Equivalence testing has grown significantly in importance over the last two decades, especially as its relevance to a variety of applications has become understood. Yet published work on the general methodology remains scattered in specialists' journals, and for the most part, it focuses on the relatively narrow topic of bioequivalence assessment.With a far broader perspective, Testing Statistical Hypotheses of Equivalence provides the first comprehensive treatment of statistical equivalence testing. The author addresses a spectrum of specific, two-sided equivalence testing problems, from the

  14. Statistical Mechanics of Japanese Labor Markets

    Science.gov (United States)

    Chen, He

    We introduce a probabilistic model to analyze job-matching processes of recent Japanese labor markets, in particular, for university graduates by means of statistical physics. To make a model of the market efficiently, we take into account several hypotheses. Namely, each company fixes the (business year independent) number of opening positions for newcomers. The ability of gathering newcomers depends on the result of job matching process in past business years. This fact means that the ability of the company is weakening if the company did not make their quota or the company gathered applicants too much over the quota. All university graduates who are looking for their jobs can access the public information about the ranking of companies. By assuming the above essential key points, we construct the local energy function of each company and describe the probability that an arbitrary company gets students at each business year by a Boltzmann-Gibbs distribution. We evaluate the relevant physical quantities such as the employment rate and Gini index. We discuss social inequalities in labor markets, and provide some ways to improve these situations, such as the informal job offer rate, the job-worker mismatch between students and companies. Graduate School of Information Science and Technology.
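A minimal sketch of the two ingredients named in this abstract: a Boltzmann-Gibbs distribution describing which company an applicant chooses, and a Gini index quantifying the resulting concentration. The energy values and inverse temperature `beta` are invented placeholders, not the authors' calibrated model:

```python
import math

def gibbs_probabilities(energies, beta=1.0):
    """Probability of choosing each company given its local energy
    (lower energy = more attractive), via p_i proportional to exp(-beta * E_i)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

def gini(values):
    """Gini index of a distribution (0 = perfectly even,
    approaching 1 = concentrated on a single company)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

# Hypothetical energies derived from a public company ranking.
energies = [0.0, 1.0, 2.0, 3.0]
probs = gibbs_probabilities(energies, beta=1.0)
g = gini(probs)  # positive: applicants concentrate on low-energy companies
```

Raising `beta` sharpens the concentration on top-ranked companies, which is the mechanism behind the inequality the abstract discusses.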

  15. [Evidence-based medicine. 2. Research of clinically relevant biomedical information. Gruppo Italiano per la Medicina Basata sulle Evidenze--GIMBE].

    Science.gov (United States)

    Cartabellotta, A

    1998-05-01

Evidence-based Medicine is a product of the electronic information age, and there are several databases useful for practicing it--MEDLINE, EMBASE, specialized compendiums of evidence (Cochrane Library, Best Evidence), practice guidelines--most of them freely available through the Internet, which offers a growing number of health resources. Because searching for the best evidence is a basic step in practicing Evidence-based Medicine, this second review (the first one was published in the issue of March 1998) aims to provide physicians with tools and skills for retrieving relevant biomedical information. Therefore, we discuss strategies for managing information overload, analyze the characteristics, usefulness and limits of medical databases, and explain how to use MEDLINE in day-to-day clinical practice.

  16. Relevance as process: judgements in the context of scholarly research

    Directory of Open Access Journals (Sweden)

    Theresa D. Anderson

    2005-01-01

    Full Text Available Introduction. This paper discusses how exploring the research process in-depth and over time contributes to a fuller understanding of interactions with various representations of information. Method. A longitudinal ethnographic study explored decisions made by two informants involved in scholarly research. Relevance assessment and information seeking were observed as part of informants' own ongoing research projects. Fieldwork used methods of discovery that allowed informants to shape the exploration of the practices surrounding the evolving understandings of their topics. Analysis. Inductive analysis was carried out on the qualitative data collected over a two-year period of judgements observed on a document-by-document basis. The paper introduces broad categories that point to the variability and richness of the ways that informants used representations of information resources to make relevance judgements. Results. Relevance judgements appear to be drivers of the search and research processes informants moved through during the observations. Focusing on research goals rather than on retrieval tasks brings us to a fuller understanding of the relationship between ultimate research goals and the articulation of those goals in interactions with information systems. Conclusion. Relevance assessment is a process that unfolds in the doing of a search, the making of judgements and the using of texts and representations of information.

  17. The spread of scientific information: insights from the web usage statistics in PLoS article-level metrics.

    Directory of Open Access Journals (Sweden)

    Koon-Kiu Yan

Full Text Available The presence of web-based communities is a distinctive signature of Web 2.0. The web-based feature means that information propagation within each community is highly facilitated, promoting complex collective dynamics in view of information exchange. In this work, we focus on a community of scientists and study, in particular, how the awareness of a scientific paper is spread. Our work is based on the web usage statistics obtained from the PLoS Article Level Metrics dataset compiled by PLoS. The cumulative number of HTML views was found to follow a long tail distribution which is reasonably well-fitted by a lognormal one. We modeled the diffusion of information by a random multiplicative process, and thus extracted the rates of information spread at different stages after the publication of a paper. We found that the spread of information displays two distinct decay regimes: a rapid downfall in the first month after publication, and a gradual power law decay afterwards. We identified these two regimes with two distinct driving processes: a short-term behavior driven by the fame of a paper, and a long-term behavior consistent with citation statistics. The patterns of information spread were found to be remarkably similar in data from different journals, but there are intrinsic differences for different types of web usage (HTML views and PDF downloads versus XML). These similarities and differences shed light on the theoretical understanding of different complex systems, as well as a better design of the corresponding web applications that is of high potential marketing impact.

  18. The spread of scientific information: insights from the web usage statistics in PLoS article-level metrics.

    Science.gov (United States)

    Yan, Koon-Kiu; Gerstein, Mark

    2011-01-01

    The presence of web-based communities is a distinctive signature of Web 2.0. The web-based feature means that information propagation within each community is highly facilitated, promoting complex collective dynamics in view of information exchange. In this work, we focus on a community of scientists and study, in particular, how the awareness of a scientific paper is spread. Our work is based on the web usage statistics obtained from the PLoS Article Level Metrics dataset compiled by PLoS. The cumulative number of HTML views was found to follow a long tail distribution which is reasonably well-fitted by a lognormal one. We modeled the diffusion of information by a random multiplicative process, and thus extracted the rates of information spread at different stages after the publication of a paper. We found that the spread of information displays two distinct decay regimes: a rapid downfall in the first month after publication, and a gradual power law decay afterwards. We identified these two regimes with two distinct driving processes: a short-term behavior driven by the fame of a paper, and a long-term behavior consistent with citation statistics. The patterns of information spread were found to be remarkably similar in data from different journals, but there are intrinsic differences for different types of web usage (HTML views and PDF downloads versus XML). These similarities and differences shed light on the theoretical understanding of different complex systems, as well as a better design of the corresponding web applications that is of high potential marketing impact.
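The random multiplicative process invoked in both versions of this abstract can be sketched in a few lines: if each paper's view count grows by independent random multiplicative factors, log-views are a sum of i.i.d. terms, so by the central limit theorem the view distribution tends to a lognormal. The drift and volatility values below are arbitrary assumptions for illustration, not fitted PLoS parameters:

```python
import math
import random

def simulate_views(n_papers=5000, n_steps=30, drift=0.2, vol=0.5, seed=42):
    """Simulate cumulative views as a random multiplicative process: each
    step multiplies the running total by exp(N(drift, vol)), so log(views)
    is a sum of i.i.d. normal terms and views are approximately lognormal."""
    rng = random.Random(seed)
    views = []
    for _ in range(n_papers):
        v = 1.0
        for _ in range(n_steps):
            v *= math.exp(rng.gauss(drift, vol))
        views.append(v)
    return views

views = simulate_views()
logs = [math.log(v) for v in views]
mean_log = sum(logs) / len(logs)
var_log = sum((x - mean_log) ** 2 for x in logs) / len(logs)
# Theory: log-views ~ Normal(n_steps * drift, n_steps * vol**2),
# i.e. mean about 6.0 and variance about 7.5 for the values above.
```

Fitting the empirical drift at different times after publication is, in essence, how the abstract's stage-dependent "rates of information spread" are extracted.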

  19. Statistical, Spatial and Temporal Mapping of 911 Emergencies in Ecuador

    Directory of Open Access Journals (Sweden)

    Danilo Corral-De-Witt

    2018-01-01

    Full Text Available A public safety answering point (PSAP) receives alerts and attends to emergencies that occur in its area of responsibility. The analysis of the events related to a PSAP can give us relevant information in order to manage them and to improve the performance of the first response institutions (FRIs) associated with every PSAP. However, current emergency systems are growing dramatically in terms of information heterogeneity and the volume of attended requests. In this work, we propose a system for statistical, spatial, and temporal analysis of incidents registered in a PSAP by using simple, yet robust and compact, event representations. The selected and designed temporal analysis tools include seasonal representations and nonparametric confidence intervals (CIs), which dissociate the main seasonal components and the transients. The spatial analysis tools include a straightforward event location over Google Maps and the detection of heat zones by means of bidimensional geographic Parzen windows with automatic width control in terms of the scales and the number of events in the region of interest. Finally, statistical representations are used for jointly analyzing temporal and spatial data in terms of the “time–space slices”. We analyzed the total number of emergencies that were attended during 2014 by seven FRIs articulated in a PSAP at the Ecuadorian 911 Integrated Security Service. Characteristic weekly patterns were observed in institutions such as the police, health, and transit services, whereas annual patterns were observed in firefighter events. Spatial and spatiotemporal analysis showed some expected patterns together with nontrivial differences among different services, to be taken into account for resource management. The proposed approach allows for a flexible analysis by combining statistical, spatial and temporal information, and it provides 911 service managers with useful and operative information.
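    The heat-zone detection step can be illustrated with a bidimensional Gaussian Parzen-window density estimate. The event coordinates and the bandwidth rule below are assumptions made for illustration, not the paper's exact implementation:

```python
import math
import random

random.seed(1)

# Hypothetical emergency-event coordinates (two clusters of incidents);
# illustrative data, not the Ecuadorian 911 records.
events = [(random.gauss(0.0, 0.05), random.gauss(0.0, 0.05)) for _ in range(200)]
events += [(random.gauss(0.5, 0.03), random.gauss(0.5, 0.03)) for _ in range(100)]
n = len(events)

# Automatic bandwidth that shrinks as events accumulate (a Silverman-style
# n**(-1/6) rule for 2-D data; an assumption, not the paper's exact rule).
h = 0.1 * n ** (-1 / 6)

def parzen_density(x, y):
    """Bidimensional Gaussian Parzen-window density estimate at (x, y)."""
    s = sum(math.exp(-((x - ex) ** 2 + (y - ey) ** 2) / (2 * h * h))
            for ex, ey in events)
    return s / (n * 2 * math.pi * h * h)

# The density peaks near the cluster centres -- the "heat zones".
print(round(parzen_density(0.0, 0.0), 2), round(parzen_density(2.0, 2.0), 6))
```

    Evaluating the estimate on a regular grid and overlaying it on a map yields the heat zones described in the abstract.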

  20. National transportation statistics 2011

    Science.gov (United States)

    2011-04-01

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics : (BTS), National Transportation Statistics presents information on the U.S. transportation system, including : its physical components, safety reco...

  1. National Transportation Statistics 2008

    Science.gov (United States)

    2009-01-08

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record...

  2. National Transportation Statistics 2009

    Science.gov (United States)

    2010-01-21

    Compiled and published by the U.S. Department of Transportation's Bureau of Transportation Statistics (BTS), National Transportation Statistics presents information on the U.S. transportation system, including its physical components, safety record, ...

  3. Explaining citizens’ perceptions of international climate-policy relevance

    International Nuclear Information System (INIS)

    Schleich, Joachim; Faure, Corinne

    2017-01-01

    This paper empirically analyses the antecedents of citizens’ perceptions of the relevance of international climate policy. Its use of representative surveys in the USA, China and Germany controls for different environmental attitudes and socio-economic factors between countries. The findings of the micro-econometric analysis suggest that the perceived relevance of international climate policy is positively affected by its perceived effectiveness, approval of the key topics discussed at international climate conferences, and environmental attitudes, but is not affected by perceived procedural justice. A higher level of perceived trust in international climate policy was positively related to perceived relevance in the USA and in China, but not in Germany. Citizens who felt that they were well informed and that their position was represented at climate summits were more likely to perceive international climate policy as relevant in China in particular. Generally, the results show only weak evidence of socio-demographic effects. - Highlights: • Perceptions of climate-policy relevance increase with perceptions of effectiveness. • In China and the USA, trust increases perceptions of climate-policy relevance. • Environmental attitudes are related to perceptions of climate-policy relevance. • In China, well-informed citizens perceive climate policy as more relevant. • Socio-demographics only weakly affect perceptions of climate-policy relevance.

  4. Identifying and exploiting trait-relevant tissues with multiple functional annotations in genome-wide association studies

    Science.gov (United States)

    Zhang, Shujun

    2018-01-01

    Genome-wide association studies (GWASs) have identified many disease associated loci, the majority of which have unknown biological functions. Understanding the mechanism underlying trait associations requires identifying trait-relevant tissues and investigating associations in a trait-specific fashion. Here, we extend the widely used linear mixed model to incorporate multiple SNP functional annotations from omics studies with GWAS summary statistics to facilitate the identification of trait-relevant tissues, with which to further construct powerful association tests. Specifically, we rely on a generalized estimating equation based algorithm for parameter inference, a mixture modeling framework for trait-tissue relevance classification, and a weighted sequence kernel association test constructed based on the identified trait-relevant tissues for powerful association analysis. We refer to our analytic procedure as the Scalable Multiple Annotation integration for trait-Relevant Tissue identification and usage (SMART). With extensive simulations, we show how our method can make use of multiple complementary annotations to improve the accuracy for identifying trait-relevant tissues. In addition, our procedure allows us to make use of the inferred trait-relevant tissues, for the first time, to construct more powerful SNP set tests. We apply our method for an in-depth analysis of 43 traits from 28 GWASs using tissue-specific annotations in 105 tissues derived from ENCODE and Roadmap. Our results reveal new trait-tissue relevance, pinpoint important annotations that are informative of trait-tissue relationship, and illustrate how we can use the inferred trait-relevant tissues to construct more powerful association tests in the Wellcome trust case control consortium study. PMID:29377896

  5. Identifying and exploiting trait-relevant tissues with multiple functional annotations in genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Xingjie Hao

    2018-01-01

    Full Text Available Genome-wide association studies (GWASs) have identified many disease associated loci, the majority of which have unknown biological functions. Understanding the mechanism underlying trait associations requires identifying trait-relevant tissues and investigating associations in a trait-specific fashion. Here, we extend the widely used linear mixed model to incorporate multiple SNP functional annotations from omics studies with GWAS summary statistics to facilitate the identification of trait-relevant tissues, with which to further construct powerful association tests. Specifically, we rely on a generalized estimating equation based algorithm for parameter inference, a mixture modeling framework for trait-tissue relevance classification, and a weighted sequence kernel association test constructed based on the identified trait-relevant tissues for powerful association analysis. We refer to our analytic procedure as the Scalable Multiple Annotation integration for trait-Relevant Tissue identification and usage (SMART). With extensive simulations, we show how our method can make use of multiple complementary annotations to improve the accuracy for identifying trait-relevant tissues. In addition, our procedure allows us to make use of the inferred trait-relevant tissues, for the first time, to construct more powerful SNP set tests. We apply our method for an in-depth analysis of 43 traits from 28 GWASs using tissue-specific annotations in 105 tissues derived from ENCODE and Roadmap. Our results reveal new trait-tissue relevance, pinpoint important annotations that are informative of trait-tissue relationship, and illustrate how we can use the inferred trait-relevant tissues to construct more powerful association tests in the Wellcome trust case control consortium study.
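    The weighted SNP-set test idea can be sketched with a simplified, SKAT-like weighted score statistic and a permutation p-value. The genotypes, trait, and annotation-derived weights below are all hypothetical; this is a sketch in the spirit of the weighted sequence kernel association test, not the SMART implementation:

```python
import random

random.seed(2)

n, p = 400, 15
# Hypothetical genotypes (0/1/2) for n subjects at p SNPs in one SNP set.
G = [[sum(random.random() < 0.3 for _ in range(2)) for _ in range(p)] for _ in range(n)]
# Annotation-based SNP weights (assumed; in SMART these would be informed
# by the inferred trait-relevant tissues).
w = [random.uniform(0.5, 1.5) for _ in range(p)]
# Continuous trait: a few causal SNPs plus noise.
y = [0.3 * (G[i][0] + G[i][1] + G[i][2]) + random.gauss(0, 1) for i in range(n)]

mean_y = sum(y) / n
r = [yi - mean_y for yi in y]   # residuals from the null (intercept-only) model

def score_stat(res):
    """Weighted SNP-set score statistic Q = sum_j w_j * (G_j' r)^2."""
    return sum(w[j] * sum(G[i][j] * res[i] for i in range(n)) ** 2
               for j in range(p))

Q = score_stat(r)
# Empirical p-value from a permutation null.
worse = 0
for _ in range(200):
    rp = r[:]
    random.shuffle(rp)
    worse += score_stat(rp) >= Q
pval = (worse + 1) / 201
print(f"Q={Q:.0f}, permutation p={pval:.3f}")
```

    Upweighting SNPs in annotations tied to trait-relevant tissues is what gives the test extra power relative to an unweighted SNP-set test.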

  6. Perceived Relevance of Educative Information on Public (Skin) Health: Results of a Representative, Population-Based Telephone Survey

    Directory of Open Access Journals (Sweden)

    Daniela Haluza

    2015-11-01

    Full Text Available Individual skin health attitudes are influenced by various factors, including public education campaigns, mass media, family, and friends. Evidence-based, educative information materials assist communication and decision-making in doctor-patient interactions. The present study aims at assessing the prevailing use of skin health information material and sources and their impact on skin health knowledge, motives to tan, and sun protection. We conducted a questionnaire survey among a representative sample of Austrian residents. Print media and television were perceived as the two most relevant sources for skin health information, whereas the source physician was ranked third. Picking the information source physician increased participants’ skin health knowledge (p = 0.025) and sun-protective behavior (p < 0.001). The study results highlight the demand for targeted health messages to attain lifestyle changes towards photo-protective habits. Providing resources that encourage pro-active counseling in everyday doctor-patient communication could increase skin health knowledge and sun-protective behavior, and thus curb the rise in skin cancer incidence rates.

  7. Readability, relevance and quality of the information in Spanish on the Web for patients with rheumatoid arthritis.

    Science.gov (United States)

    Castillo-Ortiz, Jose Dionisio; Valdivia-Nuno, Jose de Jesus; Ramirez-Gomez, Andrea; Garagarza-Mariscal, Heber; Gallegos-Rios, Carlos; Flores-Hernandez, Gabriel; Hernandez-Sanchez, Luis; Brambila-Barba, Victor; Castaneda-Sanchez, Jose Juan; Barajas-Ochoa, Zalathiel; Suarez-Rico, Angel; Sanchez-Gonzalez, Jorge Manuel; Ramos-Remus, Cesar

    Education is a major health determinant and one of the main independent outcome predictors in rheumatoid arthritis (RA). The use of the Internet by patients has grown exponentially in the last decade. The aim was to assess the characteristics, readability and quality of the information available in Spanish on the Internet regarding rheumatoid arthritis. The search was performed in Google using the phrase "rheumatoid arthritis". Information from the first 30 pages was evaluated according to a pre-established format (relevance, scope, authorship, type of publication and financial objective). The quality and readability of the pages were assessed using two validated tools, DISCERN and INFLESZ, respectively. Data extraction was performed by senior medical students and evaluation was achieved by consensus. The Google search returned 323 hits, but only 63% were considered relevant; 80% of these were information sites (71% discussed exclusively RA, 44% conventional treatment and 12% alternative therapies) and 12.5% had a primary financial interest. Sixty percent of the sites were created by nonprofit organizations and 15% by medical associations. Websites posted by medical institutions from the United States of America were better positioned in Spanish (Arthritis Foundation in 4th position and American College of Rheumatology in 10th) than websites posted by Spanish-speaking countries. There is a risk of disinformation for patients with RA who use the Internet. We identified a window of opportunity for rheumatology medical institutions from Spanish-speaking countries to have a more prominent societal involvement in the education of their patients with RA. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  8. Segmentation of human skull in MRI using statistical shape information from CT data.

    Science.gov (United States)

    Wang, Defeng; Shi, Lin; Chu, Winnie C W; Cheng, Jack C Y; Heng, Pheng Ann

    2009-09-01

    To automatically segment the skull from the MRI data using a model-based three-dimensional segmentation scheme. This study exploited the statistical anatomy extracted from the CT data of a group of subjects by means of constructing an active shape model of the skull surfaces. To construct a reliable shape model, a novel approach was proposed to optimize the automatic landmarking on the coupled surfaces (i.e., the skull vault) by minimizing the description length that incorporated local thickness information. This model was then used to locate the skull shape in MRI of a different group of patients. Compared with performing landmarking separately on the coupled surfaces, the proposed landmarking method constructed models that had better generalization ability and specificity. The segmentation accuracies were measured by the Dice coefficient and the set difference, and compared with the method based on mathematical morphology operations. The proposed approach using the active shape model based on the statistical skull anatomy presented in the head CT data contributes to more reliable segmentation of the skull from MRI data.

  9. Technology for enhancing statistical reasoning at the school level

    NARCIS (Netherlands)

    Biehler, R.; Ben-Zvi, D.; Bakker, A.|info:eu-repo/dai/nl/272605778; Makar, K.

    2013-01-01

    The purpose of this chapter is to provide an updated overview of digital technologies relevant to statistics education, and to summarize what is currently known about how these new technologies can support the development of students’ statistical reasoning at the school level. A brief literature

  10. Statistical Model of Extreme Shear

    DEFF Research Database (Denmark)

    Hansen, Kurt Schaldemose; Larsen, Gunner Chr.

    2005-01-01

    In order to continue cost-optimisation of modern large wind turbines, it is important to continuously increase the knowledge of wind field parameters relevant to design loads. This paper presents a general statistical model that offers site-specific prediction of the probability density function...... by a model that, on a statistically consistent basis, describes the most likely spatial shape of an extreme wind shear event. Predictions from the model have been compared with results from an extreme value data analysis, based on a large number of full-scale measurements recorded with a high sampling rate...

  11. Using small XML elements to support relevance

    NARCIS (Netherlands)

    G. Ramirez Camps (Georgina); T.H.W. Westerveld (Thijs); A.P. de Vries (Arjen)

    2006-01-01

    Small XML elements are often estimated relevant by the retrieval model, but they are not desirable retrieval units. This paper presents a generic model that exploits the information obtained from small elements. We identify relationships between small and relevant elements and use this

  12. A study on the relevance and influence of the existing regulation and risk informed/performance based regulation

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, B. J.; Koh, Y. J.; Kim, H. S.; Koh, S. H.; Kang, D. H.; Kang, T. W. [Cheju National Univ., Jeju (Korea, Republic of)

    2004-02-15

    The goal of this study is to estimate the Relevance and Influence of the Existing Regulation and the RI-PBR to the institutionalization of the regulatory system. This study reviews the current regulatory system and the status of the RI-PBR implementation of the US NRC and Korea based upon SECY Papers, Risk Informed Regulation Implementation Plan (RIRIP) of the US NRC and other domestic studies. Also the recent trends of the individual technologies regarding the RI-PBR and RIA are summarized.

  13. USING STATISTICAL SURVEY IN ECONOMICS

    Directory of Open Access Journals (Sweden)

    Delia TESELIOS

    2012-01-01

    Full Text Available Statistical survey is an effective method of statistical investigation that involves gathering quantitative data. It is often preferred in statistical reports because information about the entire population under study can be obtained by observing only a part of it. Because of the information they provide, surveys are therefore used in many research areas. In economics, statistics are used in decision making, in choosing competitive strategies, in the analysis of economic phenomena and in the formulation of forecasts. The economic study presented in this paper illustrates how simple random sampling is used to analyze the existing parking-space situation in a given locality.
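    The simple-random-sampling idea in this abstract can be sketched directly. The parking-space population, occupancy rate, and sample size below are hypothetical numbers chosen for illustration:

```python
import math
import random

random.seed(3)

# Hypothetical population: 2000 parking spaces, 60% occupied (1 = occupied).
population = [1] * 1200 + [0] * 800
random.shuffle(population)

# Simple random sample without replacement.
n = 200
sample = random.sample(population, n)
p_hat = sum(sample) / n

# 95% confidence interval with a finite-population correction.
N = len(population)
fpc = math.sqrt((N - n) / (N - 1))
se = math.sqrt(p_hat * (1 - p_hat) / n) * fpc
ci = (p_hat - 1.96 * se, p_hat + 1.96 * se)
print(f"estimated occupancy: {p_hat:.2%}, 95% CI: ({ci[0]:.2%}, {ci[1]:.2%})")
```

    Observing 10% of the spaces already pins down the occupancy rate to within a few percentage points, which is the economy of effort the abstract refers to.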

  14. Industrial commodity statistics yearbook 2001. Production statistics (1992-2001)

    International Nuclear Information System (INIS)

    2003-01-01

    This is the thirty-fifth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1992-2001 for about 200 countries and areas

  15. Industrial commodity statistics yearbook 2002. Production statistics (1993-2002)

    International Nuclear Information System (INIS)

    2004-01-01

    This is the thirty-sixth in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title 'The Growth of World Industry' and the next eight editions under the title 'Yearbook of Industrial Statistics'. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. The statistics refer to the ten-year period 1993-2002 for about 200 countries and areas

  16. Industrial commodity statistics yearbook 2000. Production statistics (1991-2000)

    International Nuclear Information System (INIS)

    2002-01-01

    This is the thirty-third in a series of annual compilations of statistics on world industry designed to meet both the general demand for information of this kind and the special requirements of the United Nations and related international bodies. Beginning with the 1992 edition, the title of the publication was changed to Industrial Commodity Statistics Yearbook as the result of a decision made by the United Nations Statistical Commission at its twenty-seventh session to discontinue, effective 1994, publication of the Industrial Statistics Yearbook, volume I, General Industrial Statistics by the Statistics Division of the United Nations. The United Nations Industrial Development Organization (UNIDO) has become responsible for the collection and dissemination of general industrial statistics while the Statistics Division of the United Nations continues to be responsible for industrial commodity production statistics. The previous title, Industrial Statistics Yearbook, volume II, Commodity Production Statistics, was introduced in the 1982 edition. The first seven editions in this series were published under the title The Growth of World Industry and the next eight editions under the title Yearbook of Industrial Statistics. This edition of the Yearbook contains annual quantity data on production of industrial commodities by country, geographical region, economic grouping and for the world. A standard list of about 530 commodities (about 590 statistical series) has been adopted for the publication. Most of the statistics refer to the ten-year period 1991-2000 for about 200 countries and areas

  17. HPV-Associated Cancers Statistics

    Science.gov (United States)


  18. Implementation of the common phrase index method on the phrase query for information retrieval

    Science.gov (United States)

    Fatmawati, Triyah; Zaman, Badrus; Werdiningsih, Indah

    2017-08-01

    With the development of technology, finding information in news text has become easy, because news is distributed not only in print media, such as newspapers, but also in electronic media that can be accessed using a search engine. When searching for relevant documents, a phrase is often used as a query. The number of words that make up the phrase query, and their positions, clearly affect the relevance of the documents produced; as a result, the accuracy of the information obtained is also affected. Based on this problem, the purpose of this research was to analyze the implementation of the common phrase index method for information retrieval. The research was conducted on English news text and implemented in a prototype to determine the relevance level of the documents produced. The system is built with stages of pre-processing, indexing, term-weighting calculation, and cosine-similarity calculation, and it displays the retrieved documents ranked by cosine similarity. System testing was conducted using 100 documents and 20 queries, and the results were used in a two-step evaluation: first, the relevant documents were determined using the kappa statistic; second, the system success rate was determined using precision, recall, and F-measure. The kappa statistic was 0.71, so the relevance judgments were considered reliable enough for system evaluation. The calculation of precision, recall, and F-measure produced a precision of 0.37, a recall of 0.50, and an F-measure of 0.43. From this result, it can be said that the system's success rate in producing relevant documents is low.
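    The evaluation arithmetic in this abstract is easy to reproduce. The raw counts below are hypothetical; they are chosen only so that the computed precision (0.37) and recall (0.50) match the reported values:

```python
# Hypothetical retrieval counts consistent with the reported metrics.
retrieved = 100   # documents returned by the system (assumed)
relevant = 74     # documents judged relevant via kappa agreement (assumed)
true_pos = 37     # relevant documents that were actually retrieved (assumed)

precision = true_pos / retrieved                        # 0.37
recall = true_pos / relevant                            # 0.50
f_measure = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f}, recall={recall:.2f}, F-measure={f_measure:.2f}")
```

    The harmonic mean of 0.37 and 0.50 is 0.43, confirming that the reported F-measure is consistent with the reported precision and recall.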

  19. Value Relevance of Investment Properties: Evidence from the Brazilian Capital Market

    Directory of Open Access Journals (Sweden)

    Ketlyn Alves Gonçalves

    2017-04-01

    Full Text Available This study investigates the relevance to the capital market of the assets recognized as investment properties of companies listed on the BM&F BOVESPA in the period from 2011 to 2014. The research was based on the Ohlson (1995) model, and panel analysis was carried out using linear regression with POLS and fixed- and random-effects estimators. Two hypotheses were tested: (1) that Earning and Equity generate accounting information relevant to investors; and (2) that Earning, Equity and Investment Property generate accounting information relevant to investors, assuming that investment properties have an incremental effect on the relevance of this information relative to earning and equity alone. Both hypotheses were rejected, so it is concluded that Investment Property assets are not value relevant in the determination of share price and do not influence the decision making of users of accounting information. The study adds to the limited literature on the value relevance of Investment Property, permitting a better understanding of the impact of the accounting disclosures used by companies on their market value.

  20. Developing decision-relevant data and information systems for California water through listening and collaboration

    Science.gov (United States)

    Bales, R. C.; Bernacchi, L.; Conklin, M. H.; Viers, J. H.; Fogg, G. E.; Fisher, A. T.; Kiparsky, M.

    2017-12-01

    California's historic drought of 2011-2015 provided excellent conditions for researchers to listen to water-management challenges from decision makers, particularly with regard to data and information needs for improved decision making. Through the UC Water Security and Sustainability Research Initiative (http://ucwater.org/) we began a multi-year dialog with water-resources decision makers and state agencies that provide data and technical support for water management. Near-term products of that collaboration will be both a vision for a 21st-century water data and information system, and near-term steps to meet immediate legislative deadlines in a way that is consistent with the longer-term vision. While many university-based water researchers engage with state and local agencies on both science and policy challenges, UC Water's focus was on: i) integrated system management, from headwaters through groundwater and agriculture, and on ii) improved decision making through better water information systems. This focus aligned with the recognition by water leaders that fundamental changes in the way the state manages water were overdue. UC Water is focused on three "I"s: improved water information, empowering Institutions to use and to create new information, and enabling decision makers to make smart investments in both green and grey Infrastructure. Effective communication with water decision makers has led to engagement on high-priority programs where large knowledge gaps remain, including more-widespread groundwater recharge of storm flows, restoration of mountain forests in important source-water areas, governance structures for groundwater sustainability, and filling information gaps by bringing new technology to bear on measurement and data programs. Continuing engagement of UC Water researchers in public dialog around water resources, through opinion pieces, feature articles, blogs, white papers, social media, video clips and a feature documentary film have

  1. Data analysis for radiological characterisation: Geostatistical and statistical complementarity

    International Nuclear Information System (INIS)

    Desnoyers, Yvon; Dubot, Didier

    2012-01-01

    Radiological characterisation may cover a large range of evaluation objectives during a decommissioning and dismantling (D and D) project: removal of doubt, delineation of contaminated materials, monitoring of the decontamination work and final survey. At each stage, collecting relevant data to be able to draw the conclusions needed is quite a big challenge. In particular two radiological characterisation stages require an advanced sampling process and data analysis, namely the initial categorization and optimisation of the materials to be removed and the final survey to demonstrate compliance with clearance levels. On the one hand the latter is widely used and well developed in national guides and norms, using random sampling designs and statistical data analysis. On the other hand a more complex evaluation methodology has to be implemented for the initial radiological characterisation, both for sampling design and for data analysis. The geostatistical framework is an efficient way to satisfy the radiological characterisation requirements, providing a sound decision-making approach for the decommissioning and dismantling of nuclear premises. The relevance of the geostatistical methodology relies on the presence of a spatial continuity for radiological contamination. Thus geostatistics provides reliable methods for activity estimation, uncertainty quantification and risk analysis, leading to a sound classification of radiological waste (surfaces and volumes). This way, the radiological characterisation of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive or not) surface survey of the contamination is implemented on a regular grid. Finally, in order to assess activity levels and contamination depths, destructive samples are collected at several locations within the premises (based on the surface survey results) and analysed. Combined with
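    The spatial continuity that justifies the geostatistical approach is usually checked with an empirical semivariogram: semivariance that grows with lag distance indicates correlated contamination. A minimal sketch on a simulated survey transect (illustrative data, not real measurements):

```python
import random

random.seed(4)

# Simulated surface-survey transect: activity readings at regular grid
# points, with built-in spatial continuity (an AR(1)-style process).
n = 200
activity = [0.0] * n
for i in range(1, n):
    activity[i] = 0.9 * activity[i - 1] + random.gauss(0, 1)

def semivariogram(z, lag):
    """Empirical semivariance gamma(h) = mean of (z[i+h] - z[i])**2 / 2."""
    pairs = [(z[i + lag] - z[i]) ** 2 for i in range(len(z) - lag)]
    return sum(pairs) / (2 * len(pairs))

# Semivariance rises with lag when contamination is spatially continuous;
# a flat semivariogram would mean geostatistical estimation adds little.
for h in (1, 5, 20):
    print(h, round(semivariogram(activity, h), 2))
```

    In practice the fitted semivariogram model then drives kriging-based activity estimation and the associated uncertainty maps.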

  2. 1997 statistical yearbook

    International Nuclear Information System (INIS)

    1998-01-01

    The international office of energy information and studies (Enerdata) has published the second edition of its 1997 statistical yearbook, which includes consolidated 1996 data with respect to the previous version from June 1997. The CD-ROM comprises the annual worldwide petroleum, natural gas, coal and electricity statistics from 1991 to 1996, with information about production, external trade, consumption, market shares, sectoral distribution of consumption and energy balance sheets. The world is divided into 12 zones (52 countries available). It also contains energy indicators: production and consumption tendencies, supply and production structures, safety of supplies, energy efficiency, and CO2 emissions. (J.S.)

  3. CMS Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — The CMS Center for Strategic Planning produces an annual CMS Statistics reference booklet that provides a quick reference for summary information about health...

  4. Statistical results 1988-1990 of the Official Personal Dosimetry Service and data compilation 1980-1990

    International Nuclear Information System (INIS)

    Boerner, E.; Drexler, G.; Scheibe, D.; Schraube, H.

    1994-01-01

    The report consists of a summary of the relevant statistical data of the official personal dosimetry service in 1988-1990 for the Federal States of Bavaria, Hesse, Schleswig-Holstein and, since 1989, Baden-Wuerttemberg. The data are based on the survey of more than 8000 institutions with over 100000 occupationally exposed persons, and are derived from more than one million single measurements. The report covers information on the institutions and the persons monitored, as well as the dosimetric values. The measuring method is described briefly with respect to the dosimeters used, their range and the interpretation of values. Information on notional doses and the interpolation of values near the detection limits is given. (HP) [de

  5. [Delirium in stroke patients : Critical analysis of statistical procedures for the identification of risk factors].

    Science.gov (United States)

    Nydahl, P; Margraf, N G; Ewers, A

    2017-04-01

    Delirium is a relevant complication following an acute stroke. It is a multifactorial condition with numerous interacting risk factors that mutually influence each other. The risk factors for delirium in stroke patients reported so far are often based on limited clinical studies. The statistical procedures and clinical relevance of delirium-related risk factors in adult stroke patients should therefore be questioned. This secondary analysis includes clinically relevant studies that give evidence for the clinical relevance and statistical significance of delirium-associated risk factors in stroke patients. The quality of the reporting of regression analyses was assessed using Ottenbacher's quality criteria. The delirium-associated risk factors identified were examined for statistical significance using the Bonferroni correction for multiple testing, which controls the rate of false positive findings. This was followed by a literature-based discussion of clinical relevance. Nine clinical studies were included. None of the studies fulfilled all the prerequisites and assumptions required for the reporting of regression analyses according to Ottenbacher. Of the 108 delirium-associated risk factors, a total of 48 (44.4%) were significant, of which 28 (58.3%) were false positive after Bonferroni correction. Following a literature-based discussion of clinical relevance, the assumption of statistical significance and clinical relevance could be upheld for only four risk factors (dementia or cognitive impairment, total anterior infarct, severe infarct and infections). The statistical procedures used in the existing literature are questionable, as are their results. A post-hoc analysis and critical appraisal reduced the number of possible delirium-associated risk factors to just a few clinically relevant factors.
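    The Bonferroni correction applied in this secondary analysis can be sketched in a few lines. The p-values below are hypothetical, chosen only to show how the corrected threshold thins out the set of significant factors:

```python
def bonferroni(p_values, alpha=0.05):
    """Flag each p-value as significant only if p <= alpha / m,
    where m is the number of tests performed (Bonferroni correction)."""
    threshold = alpha / len(p_values)
    return [p <= threshold for p in p_values]

# Hypothetical p-values for 10 candidate delirium risk factors:
pvals = [0.001, 0.004, 0.012, 0.03, 0.049, 0.2, 0.5, 0.04, 0.006, 0.02]
naive = sum(p <= 0.05 for p in pvals)   # significant without correction: 8
corrected = sum(bonferroni(pvals))      # significant after correction: 2
```

    On these invented numbers, 8 of the 10 factors pass the uncorrected 5% threshold but only 2 survive the corrected threshold of 0.005, the same thinning effect the study reports on its 108 risk factors.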

  6. Advancing the Relevance Criteria for Video Search and Visual Summarization

    NARCIS (Netherlands)

    Rudinac, S.

    2013-01-01

    To facilitate finding of relevant information in ever-growing multimedia collections, a number of multimedia information retrieval solutions have been proposed over the past years. The essential element of any such solution is the relevance criterion deployed to select or rank the items from a

  7. Statistical secrecy and multibit commitments

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Pedersen, Torben P.; Pfitzmann, Birgit

    1998-01-01

    We present and compare definitions of "statistically hiding" protocols, and we propose a novel statistically hiding commitment scheme. Informally, a protocol statistically hides a secret if a computationally unlimited adversary who conducts the protocol with the owner of the secret learns almost nothing about it. One definition is based on the L1-norm distance between probability distributions, the other on information theory. We prove that the two definitions are essentially equivalent. We also show that statistical counterparts of definitions of computational secrecy are essentially equivalent to our main definitions. Commitment schemes are an important cryptologic primitive. Their purpose is to commit one party to a certain value, while hiding this value from the other party until some later time. We present a statistically hiding commitment scheme allowing commitment to many bits...

  8. Applied matrix algebra in the statistical sciences

    CERN Document Server

    Basilevsky, Alexander

    2005-01-01

    This comprehensive text offers teachings relevant to both applied and theoretical branches of matrix algebra and provides a bridge between linear algebra and statistical models. Appropriate for advanced undergraduate and graduate students. 1983 edition.

  9. Official statistics and Big Data

    Directory of Open Access Journals (Sweden)

    Peter Struijs

    2014-07-01

    The rise of Big Data changes the context in which organisations producing official statistics operate. Big Data provides opportunities, but in order to make optimal use of Big Data, a number of challenges have to be addressed. This stimulates increased collaboration between National Statistical Institutes, Big Data holders, businesses and universities. In time, this may lead to a shift in the role of statistical institutes in the provision of high-quality and impartial statistical information to society. In this paper, the changes in context, the opportunities, the challenges and the way to collaborate are addressed. The collaboration between the various stakeholders will involve each partner building on and contributing different strengths. For national statistical offices, traditional strengths include, on the one hand, the ability to collect data and combine data sources with statistical products and, on the other hand, their focus on quality, transparency and sound methodology. In the Big Data era of competing and multiplying data sources, they continue to have a unique knowledge of official statistical production methods. And their impartiality and respect for privacy as enshrined in law uniquely position them as a trusted third party. Based on this, they may advise on the quality and validity of information of various sources. By thus positioning themselves, they will be able to play their role as key information providers in a changing society.

  10. CONFIDENCE LEVELS AND/VS. STATISTICAL HYPOTHESIS TESTING IN STATISTICAL ANALYSIS. CASE STUDY

    Directory of Open Access Journals (Sweden)

    ILEANA BRUDIU

    2009-05-01

    Parameter estimation with confidence intervals and the testing of statistical hypotheses are both used in statistical analysis to draw conclusions about a population from an extracted sample. The case study presented in this paper aims to highlight the importance of the sample size used in a study and how it is reflected in the results obtained from confidence intervals and hypothesis testing. While statistical hypothesis testing only gives a "yes" or "no" answer to a question, statistical estimation using confidence intervals provides more information than a test statistic: it shows the high degree of uncertainty arising from small samples and places findings in the "marginally significant" or "almost significant" range (p very close to 0.05).
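    The contrast drawn above can be made concrete with a small sketch. The measurements are hypothetical, and for simplicity a normal (z) quantile is used where a t-interval would be more appropriate for so small a sample:

```python
import math
from statistics import NormalDist, mean, stdev

def z_confidence_interval(sample, conf=0.95):
    """Two-sided z-interval for the mean.  Normality is assumed and the
    normal quantile is used for simplicity; for a sample this small a
    t-interval would be more appropriate."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    m = mean(sample)
    se = stdev(sample) / math.sqrt(len(sample))
    return m - z * se, m + z * se

small = [10.2, 11.5, 9.8, 12.1, 10.9, 11.7]   # hypothetical measurements
lo, hi = z_confidence_interval(small)
# A 5%-level test of H0: mu = 10 rejects iff 10 lies outside the interval,
# but the interval additionally shows how wide the uncertainty is.
reject_h0 = not (lo <= 10.0 <= hi)
```

    The test alone reports only "reject"; the interval's width of well over one unit is what reveals the uncertainty contributed by the small sample.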

  11. Method for statistical data analysis of multivariate observations

    CERN Document Server

    Gnanadesikan, R

    1997-01-01

    A practical guide for multivariate statistical techniques-- now updated and revised In recent years, innovations in computer technology and statistical methodologies have dramatically altered the landscape of multivariate data analysis. This new edition of Methods for Statistical Data Analysis of Multivariate Observations explores current multivariate concepts and techniques while retaining the same practical focus of its predecessor. It integrates methods and data-based interpretations relevant to multivariate analysis in a way that addresses real-world problems arising in many areas of interest

  12. Diffusion-Based Density-Equalizing Maps: an Interdisciplinary Approach to Visualizing Homicide Rates and Other Georeferenced Statistical Data

    Science.gov (United States)

    Mazzitello, Karina I.; Candia, Julián

    2012-12-01

    In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.

  13. An introduction to descriptive statistics: A review and practical guide

    International Nuclear Information System (INIS)

    Marshall, Gill; Jonker, Leon

    2010-01-01

    This paper, the first of two, demonstrates why it is necessary for radiographers to understand basic statistical concepts, both to assimilate the work of others and to carry out their own research. As the emphasis on evidence-based practice increases, it will become more pressing for radiographers to be able to dissect other people's research and to contribute to research themselves. The different types of data that one can come across are covered here, as well as different ways to describe data. Furthermore, the statistical terminology and methods that comprise descriptive statistics are explained, including levels of measurement, measures of central tendency (average) and dispersion (spread), and the concept of normal distribution. This paper reviews relevant literature, provides a checklist of points to consider before progressing with the application of appropriate statistical methods to a data set, and provides a glossary of relevant terms for reference.
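    The descriptive measures the paper explains can be computed directly with Python's standard library; the dose readings below are invented purely for illustration:

```python
from statistics import mean, median, stdev, quantiles

# Hypothetical dose readings (mSv) of the kind a radiographer might summarise:
doses = [0.8, 1.1, 0.9, 1.4, 1.0, 0.9, 2.5, 1.2]

centre = {"mean": mean(doses), "median": median(doses)}          # central tendency
spread = {"sd": stdev(doses), "range": max(doses) - min(doses)}  # dispersion
q1, q2, q3 = quantiles(doses, n=4)  # quartiles split the ordered data in four
```

    Note that the single high reading (2.5) pulls the mean above the median, a quick cue to check the normal-distribution assumption before moving on to parametric methods.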

  14. A comparison of the value relevance of interim and annual financial statements

    Directory of Open Access Journals (Sweden)

    Mbalenhle Zulu

    2017-03-01

    Aim: This study explores whether the value relevance of interim financial statements is higher than the value relevance of annual financial statements. Finally, it investigates whether accounting information published in interim and annual financial statements has incremental value relevance. Setting: Data for the period from 1999 to 2012 were collected from a sample of non-financial companies listed on the Johannesburg Stock Exchange. Method: The Ohlson model was used to investigate the value relevance of accounting information. Results: The results show that the interim book value of equity is value relevant while interim earnings are not. Interim financial statements appear to have higher value relevance than annual financial statements. The value relevance of interim and annual accounting information has remained fairly constant over the sample period. Incremental comparisons provide evidence that the additional book value of equity and earnings that accrue to a company between interim and annual reporting dates are value relevant. Conclusion: The study was conducted over a long sample period (1999–2012), in an era when a technology-driven economy and more timely reporting media could have had an effect on the value relevance of published accounting information. To the best of our knowledge, this is the first study to evaluate and compare the value relevance of published interim and annual financial statements.
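    An Ohlson-type value-relevance regression can be illustrated with a simplified one-regressor sketch; the full model also includes earnings per share, and the firms, prices and book values below are entirely hypothetical:

```python
from statistics import mean

def ols_value_relevance(book_value, price):
    """One-regressor OLS of share price on book value per share -- a
    simplified stand-in for the Ohlson model, which also includes
    earnings as a second regressor.  Returns slope, intercept and R^2,
    with R^2 serving as the usual value-relevance proxy."""
    mx, my = mean(book_value), mean(price)
    sxx = sum((x - mx) ** 2 for x in book_value)
    sxy = sum((x - mx) * (y - my) for x, y in zip(book_value, price))
    syy = sum((y - my) ** 2 for y in price)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

# Entirely hypothetical firms: book value per share and share price.
bvps = [8.0, 10.5, 6.0, 14.0, 5.0, 12.5, 7.5, 9.5]
price = [12.0, 15.5, 9.8, 20.1, 7.4, 18.0, 11.2, 14.6]
slope, intercept, r2 = ols_value_relevance(bvps, price)
```

    Comparing the R^2 obtained from interim figures with that from annual figures is, in essence, how the two sets of statements are judged against each other.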

  15. Statistical decisions under nonparametric a priori information

    International Nuclear Information System (INIS)

    Chilingaryan, A.A.

    1985-01-01

    The basic module of the applied program package for statistical analysis of the ANI experiment data is described. This module carries out the tasks of choosing the theoretical model that most adequately fits the experimental data, selecting events of a definite type, and identifying elementary particles. To solve these problems, Bayesian rules, the leave-one-out test and KNN (K Nearest Neighbour) adaptive density estimation are utilized
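    As an illustration of the KNN adaptive density estimation mentioned above, here is a minimal one-dimensional sketch; the sample values and the choice k = 3 are assumptions made for the example:

```python
def knn_density(x, sample, k=3):
    """One-dimensional k-nearest-neighbour density estimate:
    f(x) ~ k / (n * 2 * r_k), where r_k is the distance to the k-th
    nearest sample point and 2*r_k is the length of the 1-D "ball"."""
    dists = sorted(abs(x - s) for s in sample)
    r_k = dists[k - 1]
    return k / (len(sample) * 2 * r_k)

# A hypothetical sample with one dense cluster and one sparse one:
sample = [1.0, 1.2, 1.3, 1.5, 5.0, 5.1, 5.3]
dense = knn_density(1.25, sample)   # inside the tight cluster
sparse = knn_density(3.0, sample)   # in the empty region between clusters
```

    The bandwidth adapts automatically: r_k is small where points crowd together and large where they are sparse, which is what makes the estimator "adaptive".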

  16. Statistical Pattern Recognition

    CERN Document Server

    Webb, Andrew R

    2011-01-01

    Statistical pattern recognition relates to the use of statistical techniques for analysing data measurements in order to extract information and make justified decisions.  It is a very active area of study and research, which has seen many advances in recent years. Applications such as data mining, web searching, multimedia data retrieval, face recognition, and cursive handwriting recognition, all require robust and efficient pattern recognition techniques. This third edition provides an introduction to statistical pattern theory and techniques, with material drawn from a wide range of fields,

  17. Arizona Public Library Statistics, 1999-2000.

    Science.gov (United States)

    Arizona State Dept. of Library, Archives and Public Records, Phoenix.

    These statistics were compiled from information supplied by Arizona's public libraries. The document is divided according to the following county groups: Apache, Cochise; Coconino, Gila; Graham, Greenlee, La Paz; Maricopa; Mohave, Navajo; Pima, Pinal; Santa Cruz, Yavapai; Yuma. Statistics are presented on the following: general information;…

  18. An Update on Statistical Boosting in Biomedicine.

    Science.gov (United States)

    Mayr, Andreas; Hofner, Benjamin; Waldmann, Elisabeth; Hepp, Tobias; Meyer, Sebastian; Gefeller, Olaf

    2017-01-01

    Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.
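    A minimal componentwise L2-boosting sketch illustrates the implicit variable selection the review describes; the toy data, learning rate and step count are assumptions for the example:

```python
def l2_boost(X, y, steps=100, nu=0.1):
    """Componentwise L2-boosting with simple linear base-learners.  Each
    step fits every candidate predictor to the current residuals by least
    squares, updates only the best-fitting one (shrunk by the learning
    rate nu), and leaves never-selected predictors at zero -- implicit
    variable selection."""
    n, p = len(X), len(X[0])
    coef = [0.0] * p
    offset = sum(y) / n
    resid = [yi - offset for yi in y]
    for _ in range(steps):
        best = None
        for j in range(p):
            xj = [row[j] for row in X]
            b = sum(v * r for v, r in zip(xj, resid)) / sum(v * v for v in xj)
            err = sum((r - b * v) ** 2 for r, v in zip(resid, xj))
            if best is None or err < best[2]:
                best = (j, b, err)
        j, b, _ = best
        coef[j] += nu * b
        resid = [r - nu * b * row[j] for r, row in zip(resid, X)]
    return offset, coef

# Toy data: y depends on the first predictor only; the second is noise.
X = [[-2.0, 1.0], [-1.0, -1.0], [0.0, 1.0], [1.0, -1.0], [2.0, 1.0]]
y = [-6.0, -3.0, 0.0, 3.0, 6.0]
offset, coef = l2_boost(X, y)
```

    The shrinkage factor nu regularizes the coefficient path, and the irrelevant second predictor is simply never selected, so its coefficient stays at zero.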

  19. Mathematical model of statistical identification of information support of road transport

    Directory of Open Access Journals (Sweden)

    V. G. Kozlov

    2016-01-01

    In this paper, based on the statistical identification method and the theory of self-organizing systems, a multifactor model of the relationship between road transport and the training system is built. The background information for the model is represented by a number of parameters of average annual road transport operations and of information provision, including parameters of the training complex system (inputs), road management parameters and output parameters. Two criteria are specified: a model stability criterion and a correlation test. The program determines their minimum, which identifies the single model of optimal complexity. For the predetermined number of parameters, the mathematical relationship of each output parameter with the others is established. To improve the accuracy and regularity of the forecast, interpolation nodes are allocated in the test data sequence; the other data form the training sequence. The model is built on the principle of selection: it is run with gradual complication of the mathematical description and an exhaustive search of all possible variants of the models against the specified criteria. Advantages of the proposed model: it adequately reflects the actual process; it allows any additional input parameters to be entered and their impact on the individual output parameters of road transport to be determined; it allows the values of key parameters to be changed in turn in a certain ratio and the corresponding changes in the output parameters of road transport to be determined; and it allows the output parameters of road transport operations to be predicted.

  20. Medical facility statistics in Japan.

    Science.gov (United States)

    Hamajima, Nobuyuki; Sugimoto, Takuya; Hasebe, Ryo; Myat Cho, Su; Khaing, Moe; Kariya, Tetsuyoshi; Mon Saw, Yu; Yamamoto, Eiko

    2017-11-01

    Medical facility statistics provide essential information to policymakers, administrators, academics, and practitioners in the field of health services. In Japan, the Health Statistics Office of the Director-General for Statistics and Information Policy at the Ministry of Health, Labour and Welfare generates these statistics. Although the statistics are widely available in both Japanese and English, the methodology described in the technical reports is primarily in Japanese and is not fully described in English. This article aims to describe these processes for readers in the English-speaking world. The Health Statistics Office routinely conducts two surveys, called the Hospital Report and the Survey of Medical Institutions. The subjects of the former are all the hospitals and clinics with long-term care beds in Japan. It comprises a Patient Questionnaire focusing on the numbers of inpatients, admissions, discharges, and outpatients in one month, and an Employee Questionnaire, which asks about the number of employees as of October 1. The Survey of Medical Institutions consists of the Dynamic Survey, which focuses on the opening and closing of facilities every month, and the Static Survey, which focuses on staff, facilities, and services as of October 1, as well as the number of inpatients as of September 30 and the total number of outpatients during September. All hospitals, clinics, and dental clinics are requested to submit the Static Survey questionnaire every three years. These surveys are useful tools for collecting essential information, as well as occasions to implicitly inform facilities of developments in government policy.

  1. Arizona Public Library Statistics, 2000-2001.

    Science.gov (United States)

    Elliott, Jan, Comp.

    These statistics were compiled from information supplied by Arizona's public libraries. The document is divided according to the following county groups: Apache, Cochise; Coconino, Gila; Graham, Greenlee, La Paz; Maricopa; Mohave, Navajo; Pima, Pinal; Santa Cruz, Yavapai; and Yuma. Statistics are presented on the following: general information;…

  2. Data and Statistics: Women and Heart Disease

    Science.gov (United States)


  3. WPRDC Statistics

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Data about the usage of the WPRDC site and its various datasets, obtained by combining Google Analytics statistics with information from the WPRDC's data portal.

  4. Statistical properties of curved polymer

    Indian Academy of Sciences (India)

    respective ground states decide the conformational statistics of the polymer. For semiflexible polymers, the relevant non-dimensional quantity is lp/L, where lp is the persistence length (which is proportional to the bending modulus k) and L is the contour length of the polymer. In the limit, lp/L ≪ 1, the polymer behaves as.

  5. The Relevant Physical Trace in Criminal Investigation

    Directory of Open Access Journals (Sweden)

    Durdica Hazard

    2016-01-01

    A criminal investigation requires the forensic scientist to search for and interpret vestiges of a criminal act that happened in the past. The forensic scientist is one of the many stakeholders who take part in the information quest within the criminal justice system. She reads the investigation scene in search of physical traces that should enable her to tell the story of the offense/crime that allegedly occurred. The challenge for any investigator is to detect and recognize relevant physical traces in order to provide clues for investigation and intelligence purposes, and that will constitute sound and relevant evidence for the court. This article shows how important it is to consider the relevancy of physical traces from the beginning of the investigation and what might influence the evaluation process. The exchange and management of information between the investigation stakeholders are important. Relevancy is a dimension that needs to be understood from the standpoints of law enforcement personnel and forensic scientists with the aim of strengthening investigation and ultimately the overall judicial process.

  6. Statistical analysis of field data for aircraft warranties

    Science.gov (United States)

    Lakey, Mary J.

    Air Force and Navy maintenance data collection systems were researched to determine their scientific applicability to the warranty process. New and unique algorithms were developed to extract failure distributions, which were then used to characterize how selected families of equipment typically fail. Families of similar equipment were identified in terms of function, technology and failure patterns. Statistical analyses and applications such as goodness-of-fit tests, maximum likelihood estimation and the derivation of confidence intervals for the probability density function parameters were applied to characterize the distributions and their failure patterns. Statistical and reliability theory, with relevance to equipment design and operational failures, was also a determining factor in characterizing the failure patterns of the equipment families. Inferences about the families with relevance to warranty needs were then made.
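    Maximum likelihood estimation with a confidence interval for a failure distribution, as applied above, can be sketched for the simplest case of exponential lifetimes; the failure times are invented for illustration:

```python
import math
from statistics import NormalDist

def exp_mle_ci(times, conf=0.95):
    """MLE of the failure rate under an exponential lifetime model,
    lambda_hat = n / sum(t), with a large-sample (Wald) confidence
    interval using se(lambda_hat) ~ lambda_hat / sqrt(n)."""
    n = len(times)
    lam = n / sum(times)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    half = z * lam / math.sqrt(n)
    return lam, (lam - half, lam + half)

# Invented times-to-failure (operating hours) for one equipment family:
t = [120.0, 45.0, 300.0, 80.0, 210.0, 95.0, 160.0, 60.0]
lam, (lo, hi) = exp_mle_ci(t)
```

    With only eight failures the interval is wide, roughly 0.002 to 0.013 failures per hour here, which is exactly the kind of uncertainty statement a warranty analysis needs alongside the point estimate.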

  7. Improving Nigerian health policymakers' capacity to access and utilize policy relevant evidence: outcome of information and communication technology training workshop.

    Science.gov (United States)

    Uneke, Chigozie Jesse; Ezeoha, Abel Ebeh; Uro-Chukwu, Henry; Ezeonu, Chinonyelum Thecla; Ogbu, Ogbonnaya; Onwe, Friday; Edoga, Chima

    2015-01-01

    Information and communication technology (ICT) tools are known to facilitate communication, the processing of information and the sharing of knowledge by electronic means. In Nigeria, the lack of adequate capacity in the use of ICT among health sector policymakers constitutes a major impediment to the uptake of research evidence into the policymaking process. The objective of this study was to improve the knowledge and capacity of policymakers to access and utilize policy-relevant evidence. A modified "before and after" intervention study design was used, in which outcomes were measured on the target participants both before the intervention was implemented and after. A 4-point Likert scale graded by degree of adequacy (1 = grossly inadequate, 4 = very adequate) was employed. The study was conducted in Ebonyi State, south-eastern Nigeria, and the participants were career health policymakers. A two-day intensive ICT training workshop was organized for policymakers, with 52 participants in attendance. Topics covered included: (i) intersectoral partnership/collaboration; (ii) engaging ICT in evidence-informed policymaking; (iii) use of ICT for evidence synthesis; (iv) capacity development in the use of computers, the internet and other ICT. The pre-workshop mean of knowledge and capacity for the use of ICT ranged from 2.19-3.05, while the post-workshop mean ranged from 2.67-3.67 on the 4-point scale. The percentage increase in the mean of knowledge and capacity at the end of the workshop ranged from 8.3%-39.1%. The findings of this study suggest that policymakers' ICT competence relevant to evidence-informed policymaking can be enhanced through a training workshop.

  8. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    Science.gov (United States)

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  9. THE INTEGRATED SHORT-TERM STATISTICAL SURVEYS: EXPERIENCE OF NBS IN MOLDOVA

    Directory of Open Access Journals (Sweden)

    Oleg CARA

    2012-07-01

    The users’ rising need for relevant, reliable, coherent and timely data for the early diagnosis of economic vulnerability and of turning points in business cycles, especially during a financial and economic crisis, calls for a prompt answer, coordinated by statistical institutions. High-quality short-term statistics are of special interest for emerging market economies, such as the Moldovan one, which are extremely vulnerable when facing economic recession. Answering the challenge of producing a coherent and adequate image of economic activity, using the system of indicators and definitions efficiently applied at the level of the European Union, the National Bureau of Statistics (NBS) of the Republic of Moldova has launched the development of an integrated system of short-term statistics (STS) based on advanced international experience. Thus, in 2011, NBS implemented the integrated statistical survey on STS based on consistent concepts, harmonized with the EU standards. The integration of the production processes, which were previously separated, is based on a common technical infrastructure, standardized procedures and techniques for data production. The achievement of this complex survey with a holistic approach has allowed the consolidation of statistical data quality, comparable at European level, and a significant reduction of the information burden on business units, especially small ones. The reformation of STS based on the integrated survey has been possible thanks to the consistent methodological and practical support given to NBS by the National Institute of Statistics (INS) of Romania, for which we would like to thank our Romanian colleagues.

  10. Android Smartphone Relevance to Military Weather Applications

    Science.gov (United States)

    2011-10-01

    Android Smartphone Relevance to Military Weather Applications; David Sauter, Computational and Information Sciences Directorate, ARL (ARL-TR-5793, October 2011). The device uses a lithium-ion battery that may be replaced by the user (unlike Apple iPod Touch devices), thus spare batteries can be carried. If there is only sporadic...

  11. Some challenges with statistical inference in adaptive designs.

    Science.gov (United States)

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have attracted a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of the sample size or related statistical information, and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  12. Women and Science: Issues and Resources [and] Women and Information Technology: A Selective Bibliography.

    Science.gov (United States)

    Searing, Susan, Comp.; Shult, Linda, Comp.

    Two bibliographies list over 120 books, journal articles, reference materials, statistical sources, organizations, and media relevant to women's roles in science and in information technology. The first bibliography emphasizes books, most of which were published in the late 1970's and the 1980's, that present a feminist critique of scientific…

  13. Digital immunohistochemistry platform for the staining variation monitoring based on integration of image and statistical analyses with laboratory information system.

    Science.gov (United States)

    Laurinaviciene, Aida; Plancoulaine, Benoit; Baltrusaityte, Indra; Meskauskas, Raimundas; Besusparis, Justinas; Lesciute-Krilaviciene, Daiva; Raudeliunas, Darius; Iqbal, Yasir; Herlin, Paulette; Laurinavicius, Arvydas

    2014-01-01

    Digital immunohistochemistry (IHC) is one of the most promising applications brought by new generation image analysis (IA). While conventional IHC staining quality is monitored by semi-quantitative visual evaluation of tissue controls, IA may require more sensitive measurement. We designed an automated system to digitally monitor IHC multi-tissue controls, based on SQL-level integration of laboratory information system with image and statistical analysis tools. Consecutive sections of TMA containing 10 cores of breast cancer tissue were used as tissue controls in routine Ki67 IHC testing. Ventana slide label barcode ID was sent to the LIS to register the serial section sequence. The slides were stained and scanned (Aperio ScanScope XT), IA was performed by the Aperio/Leica Colocalization and Genie Classifier/Nuclear algorithms. SQL-based integration ensured automated statistical analysis of the IA data by the SAS Enterprise Guide project. Factor analysis and plot visualizations were performed to explore slide-to-slide variation of the Ki67 IHC staining results in the control tissue. Slide-to-slide intra-core IHC staining analysis revealed rather significant variation of the variables reflecting the sample size, while Brown and Blue Intensity were relatively stable. To further investigate this variation, the IA results from the 10 cores were aggregated to minimize tissue-related variance. Factor analysis revealed association between the variables reflecting the sample size detected by IA and Blue Intensity. Since the main feature to be extracted from the tissue controls was staining intensity, we further explored the variation of the intensity variables in the individual cores. MeanBrownBlue Intensity ((Brown+Blue)/2) and DiffBrownBlue Intensity (Brown-Blue) were introduced to better contrast the absolute intensity and the colour balance variation in each core; relevant factor scores were extracted. Finally, tissue-related factors of IHC staining variance were

  14. Statistical analysis of coding for molecular properties in the olfactory bulb

    Directory of Open Access Journals (Sweden)

    Benjamin eAuffarth

    2011-07-01

    Full Text Available The relationship between molecular properties of odorants and neural activities is arguably one of the most important issues in olfaction, and the rules governing this relationship are still not clear. In the olfactory bulb (OB), glomeruli relay olfactory information to second-order neurons, which in turn project to cortical areas. We investigate the relevance of odorant properties, the spatial localization of glomerular coding sites, and the size of coding zones in a dataset of 2-deoxyglucose images of glomeruli over the entire OB of the rat. We relate molecular properties to activation of glomeruli in the OB using a nonparametric statistical test and a support-vector machine classification study. Our method permits systematic mapping of the topographic representation of various classes of odorants in the OB. Our results suggest many localized coding sites for particular molecular properties, and some molecular properties that could form the basis for a spatial map of olfactory information. We found that alkynes, alkanes, alkenes, and amines affect activation maps very strongly compared to other properties, and that amines, sulfur-containing compounds, and alkynes have small zones and high relevance to activation changes, while aromatics, alkanes, and carboxylic acids recruit very large zones in the dataset. Results suggest a local spatial encoding for molecular properties.
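    The record's nonparametric testing step can be illustrated with a generic permutation test on the difference of group means. The activation values and the amine/other grouping below are invented toy data, not the authors' 2-deoxyglucose measurements.

```python
import random

def perm_test(group_a, group_b, n_perm=2000, seed=0):
    """Two-sided permutation test on the difference of means
    (a generic nonparametric test; distribution-free)."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    k = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if d >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one smoothing avoids p == 0

# Hypothetical glomerular activation for odorants with/without an amine
# group; a clearly separated toy case yields a small p-value.
amine = [0.9, 0.8, 0.85, 0.95, 0.88]
other = [0.2, 0.3, 0.25, 0.15, 0.22]
p = perm_test(amine, other)
assert p < 0.05
```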

  15. Improving information access by relevance and topical feedback

    NARCIS (Netherlands)

    Kaptein, R.; Kamps, J.; Hopfgartner, F.

    2008-01-01

    One of the main bottlenecks in providing more effective information access is the poverty of the query end. With an average query length of about two terms, users provide only a highly ambiguous statement of the, often complex, underlying information need. Implicit and explicit feedback can provide

  16. Birth Defects Data and Statistics

    Science.gov (United States)

    ... and critical. Read below for the latest national statistics on the occurrence of birth defects in the ...

  17. Transportation statistics annual report 2010

    Science.gov (United States)

    2011-01-01

    The Transportation Statistics Annual Report (TSAR) presents data and information compiled by the Bureau of Transportation Statistics (BTS), a component of the U.S. Department of Transportation's (USDOT's) Research and Innovative Technology Admini...

  18. Transportation statistics annual report 2009

    Science.gov (United States)

    2009-01-01

    The Transportation Statistics Annual Report (TSAR) presents data and information selected by the Bureau of Transportation Statistics (BTS), a component of the U.S. Department of Transportation's (USDOT's) Research and Innovative Technology Administra...

  19. Understanding common statistical methods, Part I: descriptive methods, probability, and continuous data.

    Science.gov (United States)

    Skinner, Carl G; Patel, Manish M; Thomas, Jerry D; Miller, Michael A

    2011-01-01

    Statistical methods are pervasive in medical research and general medical literature. Understanding general statistical concepts will enhance our ability to critically appraise the current literature and ultimately improve the delivery of patient care. This article intends to provide an overview of the common statistical methods relevant to medicine.
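    The descriptive methods such an overview covers (mean, median, standard deviation) can be computed directly with Python's standard library. The blood-pressure readings below are hypothetical toy data.

```python
import statistics

# Toy sample: eight systolic blood pressure readings (hypothetical).
bp = [118, 122, 130, 125, 119, 128, 135, 121]

print("mean   :", statistics.mean(bp))            # 124.75
print("median :", statistics.median(bp))          # 123.5
print("stdev  :", round(statistics.stdev(bp), 2)) # sample standard deviation
```

    Comparing mean and median is the simplest check for skewed data; when the two diverge strongly, the median is usually the more robust summary.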

  20. Order-specific fertility estimates based on perinatal statistics and statistics on out-of-hospital births

    OpenAIRE

    Kreyenfeld, Michaela; Peters, Frederik; Scholz, Rembrandt; Wlosnewski, Ines

    2014-01-01

    Until 2008, German vital statistics did not provide information on biological birth order. We have tried to close part of this gap by providing order-specific fertility rates generated from Perinatal Statistics and statistics on out-of-hospital births for the period 2001-2008. This investigation was published in Comparative Population Studies (CPoS) (see Kreyenfeld, Scholz, Peters and Wlosnewski 2010). The CPoS paper describes how data from the Perinatal Statistics and statistics on out...

  1. METHODS FOR ASSESSING THREATS TO THE SECURITY OF CONFIDENTIAL INFORMATION IN INFORMATION AND TELECOMMUNICATIONS SYSTEMS

    Directory of Open Access Journals (Sweden)

    E. V. Belokurova

    2015-01-01

    Full Text Available The article discusses different approaches to assessing the security of confidential information in information and telecommunication systems of various purposes, in the presence of internal and external threats to its integrity and availability. Ensuring the security of confidential information in such systems against external and internal threats is, at present, a problem of particular relevance. This is confirmed by analysis of the available statistical information on the impact of threats on the security of information circulating in information and telecommunications systems. Leakage of confidential information, intellectual property and know-how results in significant material and moral damage to the owner of the restricted information. The paper presents the structure of the relevant indicators and criteria, and shows that the most promising are analytical criteria. However, their use to assess the level of security of confidential information is difficult due to the lack of appropriate mathematical models. The complexity of the problem lies in the fact that existing traditional mathematical models are not always appropriate for the stated objectives. Therefore, it is necessary to develop mathematical models designed to assess the security of confidential information and the impact of threats on the information and telecommunication system.

  2. A Hybrid Approach to Finding Relevant Social Media Content for Complex Domain Specific Information Needs.

    Science.gov (United States)

    Cameron, Delroy; Sheth, Amit P; Jaykumar, Nishita; Thirunarayan, Krishnaprasad; Anand, Gaurish; Smith, Gary A

    2014-12-01

    While contemporary semantic search systems offer to improve classical keyword-based search, they are not always adequate for complex domain specific information needs. The domain of prescription drug abuse, for example, requires knowledge of both ontological concepts and "intelligible constructs" not typically modeled in ontologies. These intelligible constructs convey essential information that include notions of intensity, frequency, interval, dosage and sentiments, which could be important to the holistic needs of the information seeker. In this paper, we present a hybrid approach to domain specific information retrieval that integrates ontology-driven query interpretation with synonym-based query expansion and domain specific rules, to facilitate search in social media on prescription drug abuse. Our framework is based on a context-free grammar (CFG) that defines the query language of constructs interpretable by the search system. The grammar provides two levels of semantic interpretation: 1) a top-level CFG that facilitates retrieval of diverse textual patterns, which belong to broad templates and 2) a low-level CFG that enables interpretation of specific expressions belonging to such textual patterns. These low-level expressions occur as concepts from four different categories of data: 1) ontological concepts, 2) concepts in lexicons (such as emotions and sentiments), 3) concepts in lexicons with only partial ontology representation, called lexico-ontology concepts (such as side effects and routes of administration (ROA)), and 4) domain specific expressions (such as date, time, interval, frequency and dosage) derived solely through rules. Our approach is embodied in a novel Semantic Web platform called PREDOSE, which provides search support for complex domain specific information needs in prescription drug abuse epidemiology. When applied to a corpus of over 1 million drug abuse-related web forum posts, our search framework proved effective in retrieving
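    The two-level CFG idea described here can be miniaturized as a single top-level template whose slots are interpreted by low-level rules. The grammar below is invented for illustration and is far simpler than PREDOSE's actual query language; the rule names and sample query are hypothetical.

```python
import re

# Toy two-level grammar: a top-level template
#   DOSAGE := AMOUNT UNIT "of" DRUG FREQUENCY
# with low-level rules interpreting each slot.
RULES = {
    "AMOUNT": r"\d+",
    "UNIT": r"(?:mg|ml|tabs?)",
    "DRUG": r"[a-z]+",
    "FREQUENCY": r"(?:daily|twice a day|every \d+ hours)",
}

TEMPLATE = re.compile(
    rf"^(?P<amount>{RULES['AMOUNT']}) (?P<unit>{RULES['UNIT']}) "
    rf"of (?P<drug>{RULES['DRUG']}) (?P<freq>{RULES['FREQUENCY']})$"
)

def interpret(query):
    """Return the slot bindings for a query, or None if it doesn't parse."""
    m = TEMPLATE.match(query)
    return m.groupdict() if m else None

parsed = interpret("30 mg of oxycodone twice a day")
print(parsed)  # {'amount': '30', 'unit': 'mg', 'drug': 'oxycodone', 'freq': 'twice a day'}
```

    The point of the two levels is separation of concerns: the template captures the broad textual pattern, while each slot rule can be swapped for an ontology lookup, a lexicon, or a domain-specific expression without touching the template.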

  3. From statistical mechanics out of equilibrium to transport equations

    International Nuclear Information System (INIS)

    Balian, R.

    1995-01-01

    These lecture notes give a synthetic view of the foundations of non-equilibrium statistical mechanics. The purpose is to establish the transport equations satisfied by the relevant variables, starting from the microscopic dynamics. The Liouville representation is introduced, and a projection associates, for a given choice of relevant observables, a reduced density operator with any density operator. An exact integro-differential equation for the relevant variables is thereby derived. A short-memory approximation then yields the transport equations. A relevant entropy which characterizes the coarseness of the description is associated with each level of description. As an illustration, the classical gas, with its three levels of description and with the Chapman-Enskog method, is discussed. (author). 3 figs., 5 refs

  4. Use of demonstrations and experiments in teaching business statistics

    OpenAIRE

    Johnson, D. G.; John, J. A.

    2003-01-01

    The aim of a business statistics course should be to help students think statistically and to interpret and understand data, rather than to focus on mathematical detail and computation. To achieve this, students must be thoroughly involved in the learning process, and encouraged to discover for themselves the meaning, importance and relevance of statistical concepts. In this paper we advocate the use of experiments and demonstrations as aids to achieving these goals. A number of demonstrations...

  5. Quantum formalism for classical statistics

    Science.gov (United States)

    Wetterich, C.

    2018-06-01

    In static classical statistical systems the problem of information transport from a boundary to the bulk finds a simple description in terms of wave functions or density matrices. While the transfer matrix formalism is a type of Heisenberg picture for this problem, we develop here the associated Schrödinger picture that keeps track of the local probabilistic information. The transport of the probabilistic information between neighboring hypersurfaces obeys a linear evolution equation, and therefore the superposition principle for the possible solutions. Operators are associated to local observables, with rules for the computation of expectation values similar to quantum mechanics. We discuss how non-commutativity naturally arises in this setting. Also other features characteristic of quantum mechanics, such as complex structure, change of basis or symmetry transformations, can be found in classical statistics once formulated in terms of wave functions or density matrices. We construct for every quantum system an equivalent classical statistical system, such that time in quantum mechanics corresponds to the location of hypersurfaces in the classical probabilistic ensemble. For suitable choices of local observables in the classical statistical system one can, in principle, compute all expectation values and correlations of observables in the quantum system from the local probabilistic information of the associated classical statistical system. Realizing a static memory material as a quantum simulator for a given quantum system is not a matter of principle, but rather of practical simplicity.

  6. Spina Bifida Data and Statistics

    Science.gov (United States)

    Spina bifida ... the spine. Read below for the latest national statistics on spina bifida in the United States. In ...

  7. Statistical performance and information content of time lag analysis and redundancy analysis in time series modeling.

    Science.gov (United States)

    Angeler, David G; Viedma, Olga; Moreno, José M

    2009-11-01

    Time lag analysis (TLA) is a distance-based approach used to study temporal dynamics of ecological communities by measuring community dissimilarity over increasing time lags. Despite its increased use in recent years, its performance in comparison with other more direct methods (i.e., canonical ordination) has not been evaluated. This study fills this gap using extensive simulations and real data sets from experimental temporary ponds (true zooplankton communities) and landscape studies (landscape categories as pseudo-communities) that differ in community structure and anthropogenic stress history. Modeling time with a principal coordinate of neighborhood matrices (PCNM) approach, the canonical ordination technique (redundancy analysis; RDA) consistently outperformed the other statistical tests (i.e., TLAs, Mantel test, and RDA based on linear time trends) using all real data. In addition, the RDA-PCNM revealed different patterns of temporal change, and the strength of each individual time pattern, in terms of adjusted variance explained, could be evaluated. It also identified species contributions to these patterns of temporal change. This additional information is not provided by distance-based methods. The simulation study revealed better Type I error properties of the canonical ordination techniques compared with the distance-based approaches when no deterministic component of change was imposed on the communities. The simulation also revealed that strong emphasis on uniform deterministic change and low variability at other temporal scales is needed to result in decreased statistical power of the RDA-PCNM approach relative to the other methods. Based on the statistical performance of and information content provided by RDA-PCNM models, this technique serves ecologists as a powerful tool for modeling temporal change of ecological (pseudo-)communities.

  8. Statistical physics of crime: a review.

    Science.gov (United States)

    D'Orsogna, Maria R; Perc, Matjaž

    2015-03-01

    Containing the spread of crime in urban societies remains a major challenge. Empirical evidence suggests that, if left unchecked, crimes may be recurrent and proliferate. On the other hand, eradicating a culture of crime may be difficult, especially under extreme social circumstances that impair the creation of a shared sense of social responsibility. Although our understanding of the mechanisms that drive the emergence and diffusion of crime is still incomplete, recent research highlights applied mathematics and methods of statistical physics as valuable theoretical resources that may help us better understand criminal activity. We review different approaches aimed at modeling and improving our understanding of crime, focusing on the nucleation of crime hotspots using partial differential equations, self-exciting point processes and agent-based modeling, adversarial evolutionary games, and the network science behind the formation of gangs and large-scale organized crime. We emphasize that the statistical physics of crime can usefully inform the design of successful crime prevention strategies, as well as improve the accuracy of expectations about how different policing interventions should impact malicious human activity that deviates from social norms. We also outline possible directions for future research, related to the effects of social and coevolving networks and to the hierarchical growth of criminal structures due to self-organization. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. An Update on Statistical Boosting in Biomedicine

    Directory of Open Access Journals (Sweden)

    Andreas Mayr

    2017-01-01

    Full Text Available Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized), defining the type of regression setting. In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.

  10. Image Statistics

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, Laura Jean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    In large datasets, it is time consuming or even impossible to pick out interesting images. Our proposed solution is to find statistics to quantify the information in each image and use those to identify and pick out images of interest.
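    One simple statistic of the kind this record proposes is the Shannon entropy of an image's grey-level histogram, which scores how much variation (information) an image contains and so can rank images for interest. A minimal sketch on toy pixel lists; the grey levels are invented.

```python
import math
from collections import Counter

def image_entropy(pixels):
    """Shannon entropy (bits) of a grey-level histogram: one simple
    statistic for ranking how much information an image carries."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

flat = [0] * 16              # uniform image: carries no information
textured = list(range(16))   # all 16 grey levels distinct: maximal spread
assert image_entropy(flat) == 0.0
assert image_entropy(textured) == 4.0   # log2(16) bits
```

    Ranking a dataset by such per-image statistics lets low-information frames (blank, saturated, constant) be filtered out before anyone looks at them.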

  11. The temporal-relevance temporal-uncertainty model of prospective duration judgment.

    Science.gov (United States)

    Zakay, Dan

    2015-12-15

    A model aimed at explaining prospective duration judgments in real life settings (as well as in the laboratory) is presented. The model is based on the assumption that situational meaning is continuously being extracted by humans' perceptual and cognitive information processing systems. Time is one of the important dimensions of situational meaning. Based on the situational meaning, a value for Temporal Relevance is set. Temporal Relevance reflects the importance of temporal aspects for enabling adaptive behavior in a specific moment in time. When Temporal Relevance is above a certain threshold a prospective duration judgment process is evoked automatically. In addition, a search for relevant temporal information is taking place and its outcomes determine the level of Temporal Uncertainty which reflects the degree of knowledge one has regarding temporal aspects of the task to be performed. The levels of Temporal Relevance and Temporal Uncertainty determine the amount of attentional resources allocated for timing by the executive system. The merit of the model is in connecting timing processes with the ongoing general information processing stream. The model rests on findings in various domains which indicate that cognitive-relevance and self-relevance are powerful determinants of resource allocation policy. The feasibility of the model is demonstrated by analyzing various temporal phenomena. Suggestions for further empirical validation of the model are presented. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Water Quality attainment Information from Clean Water Act Statewide Statistical Surveys

    Data.gov (United States)

    U.S. Environmental Protection Agency — Designated uses assessed by statewide statistical surveys and their state and national attainment categories. Statewide statistical surveys are water quality...

  13. Water Quality Stressor Information from Clean Water Act Statewide Statistical Surveys

    Data.gov (United States)

    U.S. Environmental Protection Agency — Stressors assessed by statewide statistical surveys and their state and national attainment categories. Statewide statistical surveys are water quality assessments...

  14. Dramatic lives and relevant becomings

    DEFF Research Database (Denmark)

    Henriksen, Ann-Karina; Miller, Jody

    2012-01-01

    of marginality into positions of relevance. The analysis builds on empirical data from Copenhagen, Denmark, gained through ethnographic fieldwork with the participation of 20 female informants aged 13–22. The theoretical contribution proposes viewing conflicts as multi-linear, multi-causal and non...

  15. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    Science.gov (United States)

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

    This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The real-time EEG analysis for event detection automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.

  16. Breakthroughs in statistics

    CERN Document Server

    Johnson, Norman

    This is the third volume of a collection of seminal papers in the statistical sciences written during the past 110 years. These papers have each had an outstanding influence on the development of statistical theory and practice over the last century. Each paper is preceded by an introduction written by an authority in the field providing background information and assessing its influence. Volume III concentrates on articles from the 1980s while including some earlier articles not included in Volumes I and II. Samuel Kotz is Professor of Statistics in the College of Business and Management at the University of Maryland. Norman L. Johnson is Professor Emeritus of Statistics at the University of North Carolina. Also available: Breakthroughs in Statistics Volume I: Foundations and Basic Theory Samuel Kotz and Norman L. Johnson, Editors 1993. 631 pp. Softcover. ISBN 0-387-94037-5 Breakthroughs in Statistics Volume II: Methodology and Distribution Samuel Kotz and Norman L. Johnson, Edi...

  17. Knowing Where They Went: Six Years of Online Access Statistics via the Online Catalog for Federal Government Information

    Science.gov (United States)

    Brown, Christopher C.

    2011-01-01

    As federal government information is increasingly migrating to online formats, libraries are providing links to this content via URLs or persistent URLs (PURLs) in their online public access catalogs (OPACs). Clickthrough statistics that accumulated as users visited links to online content in the University of Denver's library OPAC were gathered…

  18. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection

    Science.gov (United States)

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-01-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copy of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces. PMID:21487489

  19. Revisiting Information Technology tools serving authorship and editorship: a case-guided tutorial to statistical analysis and plagiarism detection.

    Science.gov (United States)

    Bamidis, P D; Lithari, C; Konstantinidis, S T

    2010-12-01

    With the number of scientific papers published in journals, conference proceedings, and international literature ever increasing, authors and reviewers are not only facilitated with an abundance of information, but unfortunately continuously confronted with risks associated with the erroneous copy of another's material. In parallel, Information Communication Technology (ICT) tools provide to researchers novel and continuously more effective ways to analyze and present their work. Software tools regarding statistical analysis offer scientists the chance to validate their work and enhance the quality of published papers. Moreover, from the reviewers and the editor's perspective, it is now possible to ensure the (text-content) originality of a scientific article with automated software tools for plagiarism detection. In this paper, we provide a step-by-step demonstration of two categories of tools, namely, statistical analysis and plagiarism detection. The aim is not to come up with a specific tool recommendation, but rather to provide useful guidelines on the proper use and efficiency of either category of tools. In the context of this special issue, this paper offers a useful tutorial to specific problems concerned with scientific writing and review discourse. A specific neuroscience experimental case example is utilized to illustrate the young researcher's statistical analysis burden, while a test scenario is purpose-built using open access journal articles to exemplify the use and comparative outputs of seven plagiarism detection software pieces.

  20. Learning predictive statistics from temporal sequences: Dynamics and strategies.

    Science.gov (United States)

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-10-01

    Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics-that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
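    The context-based statistics described above can be reproduced with a small first-order Markov generator, alongside a zero-order "frequency" source for contrast. The symbol set and transition table below are invented for illustration, not the study's actual stimuli.

```python
import random

SYMBOLS = "ABCD"

# Zero-order "frequency" source: some symbols are simply more probable.
freq_seq = "".join(random.Random(1).choices(SYMBOLS, weights=[4, 3, 2, 1], k=20))

# First-order "context" source: the next symbol's probability is
# contingent on the preceding symbol (rows are distributions over next).
CONTEXT = {
    "A": [0.1, 0.7, 0.1, 0.1],
    "B": [0.1, 0.1, 0.7, 0.1],
    "C": [0.1, 0.1, 0.1, 0.7],
    "D": [0.7, 0.1, 0.1, 0.1],
}

def context_sequence(n, seed=0):
    rng = random.Random(seed)
    seq = ["A"]
    for _ in range(n - 1):
        seq.append(rng.choices(SYMBOLS, weights=CONTEXT[seq[-1]])[0])
    return "".join(seq)

# A learner that tracks context can recover P(B | A) ~= 0.7 from data;
# a learner that tracks only frequencies cannot.
seq = context_sequence(10_000)
after_a = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == "A"]
p_b_given_a = after_a.count("B") / len(after_a)
assert abs(p_b_given_a - 0.7) < 0.05
```

    The maximizing strategy described in the record corresponds to always predicting the argmax of such a conditional distribution, while matching corresponds to sampling predictions from it.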

  1. Extraction of Pluvial Flood Relevant Volunteered Geographic Information (VGI) by Deep Learning from User Generated Texts and Photos

    Directory of Open Access Journals (Sweden)

    Yu Feng

    2018-01-01

    Full Text Available In recent years, pluvial floods caused by extreme rainfall events have occurred frequently. Especially in urban areas, they lead to serious damages and endanger the citizens’ safety. Therefore, real-time information about such events is desirable. With the increasing popularity of social media platforms, such as Twitter or Instagram, information provided by voluntary users becomes a valuable source for emergency response. Many applications have been built for disaster detection and flood mapping using crowdsourcing. Most of the applications so far have merely used keyword filtering or classical language processing methods to identify disaster relevant documents based on user generated texts. As the reliability of social media information is often under criticism, the precision of information retrieval plays a significant role for further analyses. Thus, in this paper, high quality eyewitnesses of rainfall and flooding events are retrieved from social media by applying deep learning approaches on user generated texts and photos. Subsequently, events are detected through spatiotemporal clustering and visualized together with these high quality eyewitnesses in a web map application. Analyses and case studies are conducted during flooding events in Paris, London and Berlin.

  2. Statistical coding and decoding of heartbeat intervals.

    Science.gov (United States)

    Lucena, Fausto; Barros, Allan Kardec; Príncipe, José C; Ohnishi, Noboru

    2011-01-01

    The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning proprieties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.
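    The redundancy-reduction idea can be demonstrated in an elementary way: successive heartbeat (RR) intervals are serially correlated, and even simple differencing removes most of that redundancy. This is a crude stand-in for the paper's learned neural filters; the AR(1) process and its parameters are invented for illustration.

```python
import math
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic RR intervals (ms) with strong serial correlation: an AR(1)
# process around an 800 ms mean (parameters invented).
rng = random.Random(42)
rr = [800.0]
for _ in range(4999):
    rr.append(0.9 * rr[-1] + 0.1 * 800.0 + rng.gauss(0, 10))

raw_corr = pearson(rr[:-1], rr[1:])          # lag-1 correlation ~ 0.9
diffs = [b - a for a, b in zip(rr, rr[1:])]  # first differences
diff_corr = pearson(diffs[:-1], diffs[1:])   # ~ -(1 - 0.9)/2 = -0.05
assert raw_corr > 0.8           # successive intervals are redundant
assert abs(diff_corr) < 0.2     # differencing removes most redundancy
```

    A code that transmits the (nearly decorrelated) differences instead of the raw intervals carries the same information with far less statistical redundancy, which is the efficient-coding intuition the paper builds on.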

  3. Statistical coding and decoding of heartbeat intervals.

    Directory of Open Access Journals (Sweden)

    Fausto Lucena

    Full Text Available The heart integrates neuroregulatory messages into specific bands of frequency, such that the overall amplitude spectrum of the cardiac output reflects the variations of the autonomic nervous system. This modulatory mechanism seems to be well adjusted to the unpredictability of the cardiac demand, maintaining a proper cardiac regulation. A longstanding theory holds that biological organisms facing an ever-changing environment are likely to evolve adaptive mechanisms to extract essential features in order to adjust their behavior. The key question, however, has been to understand how the neural circuitry self-organizes these feature detectors to select behaviorally relevant information. Previous studies in computational perception suggest that a neural population enhances information that is important for survival by minimizing the statistical redundancy of the stimuli. Herein we investigate whether the cardiac system makes use of a redundancy reduction strategy to regulate the cardiac rhythm. Based on a network of neural filters optimized to code heartbeat intervals, we learn a population code that maximizes the information across the neural ensemble. The emerging population code displays filter tuning proprieties whose characteristics explain diverse aspects of the autonomic cardiac regulation, such as the compromise between fast and slow cardiac responses. We show that the filters yield responses that are quantitatively similar to observed heart rate responses during direct sympathetic or parasympathetic nerve stimulation. Our findings suggest that the heart decodes autonomic stimuli according to information theory principles analogous to how perceptual cues are encoded by sensory systems.

  4. Surveys Assessing Students' Attitudes toward Statistics: A Systematic Review of Validity and Reliability

    Science.gov (United States)

    Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.

    2012-01-01

    Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…

  5. The Concise Encyclopedia of Statistics

    CERN Document Server

    Dodge, Yadolah

    2008-01-01

    The Concise Encyclopedia of Statistics presents the essential information about statistical tests, concepts, and analytical methods in language that is accessible to practitioners and students of the vast community using statistics in medicine, engineering, physical science, life science, social science, and business/economics. The reference is alphabetically arranged to provide quick access to the fundamental tools of statistical methodology and biographies of famous statisticians. The more than 500 entries include definitions, history, mathematical details, limitations, examples, references,

  6. A new class of information complexity (ICOMP) criteria with an application to customer profiling and segmentation

    OpenAIRE

    Bozdogan, Hamparsum

    2010-01-01

    This paper introduces several forms of a new class of information-theoretic measure of complexity criterion called ICOMP as a decision rule for model selection in statistical modeling to help provide new approaches relevant to statistical inference. The practical utility and the importance of ICOMP is illustrated by providing a real numerical example in data mining of mobile phone data for customer profiling and segmentation of mobile phone customers using a novel multi-class support vector m...
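    For orientation, one widely quoted form of ICOMP from Bozdogan's earlier work penalizes lack of fit with the covariance complexity of the estimated inverse Fisher information matrix (a sketch of the general ICOMP framework; the specific new forms introduced in this paper may differ):

```latex
\mathrm{ICOMP}(\hat{F}^{-1}) \;=\; -2\ln L(\hat{\theta}) + 2\,C_1(\hat{F}^{-1}),
\qquad
C_1(\Sigma) \;=\; \frac{s}{2}\ln\!\frac{\operatorname{tr}(\Sigma)}{s} \;-\; \frac{1}{2}\ln\det(\Sigma),
```

    where s is the rank of Σ; among candidate models, the one with the smallest ICOMP is preferred.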

  7. EDI Performance Statistics

    Data.gov (United States)

    U.S. Department of Health & Human Services — This section contains statistical information and reports related to the percentage of electronic transactions being sent to Medicare contractors in the formats...

  8. Assessing Hospital Physicians' Acceptance of Clinical Information Systems: A Review of the Relevant Literature

    Directory of Open Access Journals (Sweden)

    Bram Pynoo

    2013-06-01

    Full Text Available In view of the tremendous potential benefits of clinical information systems (CIS) for the quality of patient care, it is hard to understand why not every CIS is embraced by its targeted users, the physicians. The aim of this study is to propose a framework for assessing hospital physicians' CIS acceptance that can serve as guidance for future research in this area. To this end, a review of the relevant literature was performed in the ISI Web of Science database. Eleven studies were retained from an initial dataset of 797 articles. Results show that, just as in business settings, there are four core groups of variables that influence physicians' acceptance of a CIS: its usefulness and ease of use, social norms, and factors in the working environment that facilitate use of the CIS (such as providing computers/workstations, compatibility between the new and existing systems, ...). We also identified some additional variables as predictors of CIS acceptance.

  9. There’s an App for That? Highlighting the Difficulty in Finding Clinically Relevant Smartphone Applications

    Directory of Open Access Journals (Sweden)

    Warren Wiechmann, MD, MBA

    2016-03-01

    Full Text Available Introduction: The use of personal mobile devices in the medical field has grown quickly, and a large proportion of physicians use their mobile devices as an immediate resource for clinical decision-making, prescription information and other medical information. The iTunes App Store (Apple, Inc.) contains approximately 20,000 apps in its “Medical” category, providing a robust repository of resources for clinicians; however, this represents only 2% of the entire App Store. The App Store does not have strict criteria for identifying content specific to practicing physicians, making the identification of clinically relevant content difficult. The objective of this study is to quantify the characteristics of existing medical applications in the iTunes App Store that could be used by emergency physicians, residents, or medical students. Methods: We found applications related to emergency medicine (EM) by searching the iTunes App Store for 21 terms representing core content areas of EM, such as “emergency medicine,” “critical care,” “orthopedics,” and “procedures.” Two physicians independently reviewed descriptions of these applications in the App Store and categorized each as one of the following: Clinically Relevant, Book/Published Source, Non-English, Study Tools, or Not Relevant. A third physician reviewer resolved disagreements about categorization. Descriptive statistics were calculated. Results: We found a total of 7,699 apps from the 21 search terms, of which 17.8% were clinical, 9.6% were based on a book or published source, 1.6% were non-English, 0.7% were clinically relevant patient education resources, and 4.8% were study tools. Most significantly, 64.9% were considered not relevant to medical professionals. Clinically relevant apps make up approximately 6.9% of the App Store’s “Medical” category and 0.1% of the overall App Store. Conclusion: Clinically relevant apps represent only a small percentage (6.9% of the total App

  10. Information processing in bacteria: memory, computation, and statistical physics: a key issues review

    International Nuclear Information System (INIS)

    Lan, Ganhui; Tu, Yuhai

    2016-01-01

    preserving information, it does not reveal the underlying mechanism that leads to the observed input-output relationship, nor does it tell us much about which information is important for the organism and how biological systems use information to carry out specific functions. To do that, we need to develop models of the biological machineries, e.g. biochemical networks and neural networks, to understand the dynamics of biological information processes. This is a much more difficult task. It requires deep knowledge of the underlying biological network—the main players (nodes) and their interactions (links)—in sufficient detail to build a model with predictive power, as well as quantitative input-output measurements of the system under different perturbations (both genetic variations and different external conditions) to test the model predictions to guide further development of the model. Due to the recent growth of biological knowledge thanks in part to high throughput methods (sequencing, gene expression microarray, etc) and the development of quantitative in vivo techniques such as various fluorescence technologies, these requirements are starting to be realized in different biological systems. The possibility of close interaction between quantitative experimentation and theoretical modeling has made systems biology an attractive field for physicists interested in quantitative biology. In this review, we describe some of the recent work in developing a quantitative predictive model of bacterial chemotaxis, which can be considered as the hydrogen atom of systems biology. Using statistical physics approaches, such as the Ising model and Langevin equation, we study how bacteria, such as E. coli, sense and amplify external signals, how they keep a working memory of the stimuli, and how they use these data to compute the chemical gradient. In particular, we will describe how E. coli cells avoid cross-talk in a heterogeneous receptor cluster to keep a ligand-specific memory. We will also

  13. Pattern recognition by the use of multivariate statistical evaluation of macro- and micro-PIXE results

    International Nuclear Information System (INIS)

    Tapper, U.A.S.; Malmqvist, K.G.; Loevestam, N.E.G.; Swietlicki, E.; Salford, L.G.

    1991-01-01

    The importance of statistical evaluation of multielemental data is illustrated using the data collected in a macro- and micro-PIXE analysis of human brain tumours. By employing a multivariate statistical classification methodology (SIMCA), it was shown that the total information collected from each specimen separates three types of tissue: highly malignant, less malignant, and normal brain tissue. This makes a classification of a given specimen possible based on the elemental concentrations. Partial least squares regression (PLS), a multivariate regression method, made it possible to study the relative importance of the nine examined trace elements, the dry/wet weight ratio, and the age of the patient in predicting the survival time after operation for patients with the highly malignant form, astrocytomas grade III-IV. The elemental maps from a microprobe analysis were also subjected to multivariate analysis. This showed that the information in the six elemental maps could be presented in three maps containing all the relevant information. The intensity in these maps is proportional to the value (score) of the actual pixel along the calculated principal components. (orig.)
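    The idea of condensing several elemental maps into a few principal-component score maps can be sketched in a few lines (an illustrative reconstruction using power iteration, not the SIMCA/PLS software used in the study; the pixel values are invented):

```python
# Sketch: project multi-element pixel data onto its first principal component.

def transpose(m):
    return [list(col) for col in zip(*m)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def first_pc(cov, iters=200):
    """Dominant eigenvector and eigenvalue of a covariance matrix by power iteration."""
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = matvec(cov, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(a * b for a, b in zip(v, matvec(cov, v)))
    return v, eigval

# Each row is one pixel; columns are (hypothetical) elemental intensities.
pixels = [[2.0, 1.0], [4.0, 3.0], [6.0, 2.0], [8.0, 5.0], [10.0, 4.0]]
means = [sum(col) / len(col) for col in transpose(pixels)]
centered = [[x - m for x, m in zip(row, means)] for row in pixels]
n = len(centered)
cols = transpose(centered)
cov = [[sum(a * b for a, b in zip(ci, cj)) / (n - 1) for cj in cols] for ci in cols]

pc, eigval = first_pc(cov)
# One "score map": each pixel's projection onto the first principal component.
scores = [sum(a * b for a, b in zip(row, pc)) for row in centered]
print(pc, eigval, scores)
```

    With real PIXE data the same projection, repeated for the top three components, yields the three score maps described in the abstract.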

  14. The minimal work cost of information processing

    Science.gov (United States)

    Faist, Philippe; Dupuis, Frédéric; Oppenheim, Jonathan; Renner, Renato

    2015-07-01

    Irreversible information processing cannot be carried out without some inevitable thermodynamic work cost. This fundamental restriction, known as Landauer's principle, is increasingly relevant today, as the energy dissipation of computing devices impedes further improvement of their performance. Here we determine the minimal work required to carry out any logical process, for instance a computation. It is given by the entropy of the discarded information conditioned on the output of the computation. Our formula precisely accounts for the statistically fluctuating work requirement of the logical process. It enables the explicit calculation of practical scenarios, such as computational circuits or quantum measurements. On the conceptual level, our result gives a precise and operational connection between thermodynamic and information entropy, and explains the emergence of the entropy state function in macroscopic thermodynamics.
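    In its familiar asymptotic form, the bound behind this result can be sketched as follows (a simplified textbook statement for orientation; the paper itself works with one-shot, smooth conditional entropies rather than the Shannon entropy):

```latex
W_{\min} \;\geq\; k_{\mathrm{B}} T \ln 2 \,\cdot\, H(X_{\mathrm{discarded}} \mid Y_{\mathrm{output}})
```

    Erasing one completely unknown bit (H = 1 bit) then costs at least k_B T ln 2, recovering Landauer's original bound.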

  15. Surveying managers to inform a regionally relevant invasive Phragmites australis control research program.

    Science.gov (United States)

    Rohal, C B; Kettenring, K M; Sims, K; Hazelton, E L G; Ma, Z

    2018-01-15

    more pertinent to manager needs and trusted by managers. Such an approach that integrates manager surveys to inform management experiments could be adapted to any developing research program seeking to be relevant to management audiences. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Reports on internet traffic statistics

    OpenAIRE

    Hoogesteger, Martijn; de Oliveira Schmidt, R.; Sperotto, Anna; Pras, Aiko

    2013-01-01

    Internet traffic statistics can provide valuable information to network analysts and researchers about how networks are used today. In the past, such information was provided by Internet2 in a public website called Internet2 NetFlow: Weekly Reports. The website reported traffic statistics from the Abilene network on a weekly basis. At that time, the network connected 230 research institutes with a 10Gb/s link. Although these reports were limited to the behavior of Abilene's users,...

  17. Pilot information needs survey regarding climate relevant technologies

    International Nuclear Information System (INIS)

    Van Berkel, R.; Van Roekel, A.

    1997-02-01

    The objective of this pilot survey was to arrive at a preliminary understanding of the initial technology and technology information needs in non-Annex II countries in order to support international efforts to facilitate the transfer of technologies and know-how conducive to mitigating and adapting to climate change. The study encompassed two main components, i.e. the development of a survey instrument and the execution of a pilot survey among selected non-Annex II countries. The survey instrument addresses the present status of enabling activities; technology and technology information needs; and issues related to information supply and accessibility. The survey was distributed to national focal points in 20 non-Annex II countries and to at least 35 other stakeholders in five of these non-Annex II countries. A total of 27 completed questionnaires were received, covering 10 non-Annex II countries. 3 refs

  19. Simple statistical model for branched aggregates

    DEFF Research Database (Denmark)

    Lemarchand, Claire; Hansen, Jesper Schmidt

    2015-01-01

    We propose a statistical model that can reproduce the size distribution of any branched aggregate, including amylopectin, dendrimers, molecular clusters of monoalcohols, and asphaltene nanoaggregates. It is based on the conditional probability for one molecule to form a new bond with a molecule, given that it already has bonds with others. The model is applied here to asphaltene nanoaggregates observed in molecular dynamics simulations of Cooee bitumen. The variation with temperature of the probabilities deduced from this model is discussed in terms of statistical mechanics arguments. The relevance of the statistical model in the case of asphaltene nanoaggregates is checked by comparing the predicted value of the probability for one molecule to have exactly i bonds with the same probability directly measured in the molecular dynamics simulations. The agreement is satisfactory.
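    A conditional-bond model of this kind can be sketched in a few lines (our own illustration with invented probabilities, not the paper's fitted values): p[i] is a hypothetical probability that a molecule forms an (i+1)-th bond given that it already has i bonds, and the predicted size distribution follows by chaining these probabilities.

```python
import random
from collections import Counter

# Hypothetical conditional bond probabilities (invented for illustration):
# p[i] = P(form an (i+1)-th bond | already has i bonds).
p = [0.6, 0.3, 0.1]

def sample_bond_count(p, rng):
    """Draw one molecule's bond count from the chain of conditional probabilities."""
    n = 0
    for pi in p:
        if rng.random() >= pi:
            break
        n += 1
    return n

rng = random.Random(0)
counts = Counter(sample_bond_count(p, rng) for _ in range(100_000))

# Analytic prediction: P(exactly i bonds) = (1 - p[i]) * prod_{j<i} p[j]
pred = {}
prod = 1.0
for i in range(len(p) + 1):
    pred[i] = prod * ((1.0 - p[i]) if i < len(p) else 1.0)
    if i < len(p):
        prod *= p[i]

for i in sorted(pred):
    print(i, round(counts[i] / 100_000, 3), round(pred[i], 3))
```

    Comparing the simulated frequencies with the analytic prediction mirrors the consistency check described in the abstract, where the predicted probability of having exactly i bonds is compared with the probability measured in the molecular dynamics simulations.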

  20. Mineral statistics yearbook 1994

    International Nuclear Information System (INIS)

    1994-01-01

    A summary of mineral production in Saskatchewan was compiled and presented as a reference manual. Statistical information on fuel minerals such as crude oil, natural gas, liquefied petroleum gas and coal, and on industrial and metallic minerals such as potash, sodium sulphate, salt and uranium, was provided in a wide variety of tables. Production statistics and the disposition and value of sales of industrial and metallic minerals were also made available. Statistical data on drilling of oil and gas reservoirs and crown land disposition were also included. figs., tabs

  1. The value relevance of environmental emissions

    Directory of Open Access Journals (Sweden)

    Melinda Lydia Nelwan

    2016-07-01

    Full Text Available This study examines whether environmental performance has value relevance by investigating the relations between environmental emissions and stock prices for U.S. public companies. Previous studies argued that the conjectured relations between accounting performance measures and environmental performance do not have a strong theoretical basis, and that models relating market performance measures to environmental performance do not adequately consider the relevance of accounting performance to market value. Therefore, this study examines whether publicly reported environmental emissions provide incremental information to accounting earnings in pricing companies' stocks. This is done for the complete set of industries covered by Toxics Release Inventory (TRI) reporting for the period 2007 to 2010. Using the Ohlson model, modified to include different types of emissions, it is found that ground emissions (underground injection and land emissions) are value relevant, but other emission types (air and water) and transferred-out emissions appear not to provide incremental information in the valuation model. The results of this study raise the concern that different types of emissions are assessed differently by the market, confirming that studies should not aggregate such measures.

  2. Contribution statistics can make to "strengthening forensic science"

    CSIR Research Space (South Africa)

    Cooper, Antony K

    2009-08-01

    Full Text Available draw on inputs from other countries and much of the report is relevant to forensic science in other countries. The report makes thirteen detailed recommendations, several of which will require statistics and statisticians for their implementation...

  3. Reducing the memory size in the study of statistical properties of the pseudo-random number generators, focused on solving problems of cryptographic information protection

    International Nuclear Information System (INIS)

    Chugunkov, I.V.

    2014-01-01

    The report describes an approach based on calculating the number of missing sets, which reduces the memory needed to implement statistical tests. Information about the procedure for estimating the test statistics derived with this approach is also provided [ru
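    The "missing sets" idea can be illustrated as follows (a loose sketch under our own assumptions, not the report's actual algorithm): draw m-bit patterns from a generator, count how many of the 2^m possible patterns never occur, and compare with the expectation for an ideal uniform source. Tracking only a presence flag per pattern needs far less memory than a full table of occurrence counts.

```python
import random

m = 12            # pattern length in bits (assumed, for illustration)
n = 1 << 14       # number of patterns drawn from the generator
space = 1 << m    # 2**m possible patterns

rng = random.Random(42)
seen = bytearray(space)          # presence flags: one byte per possible pattern
for _ in range(n):
    seen[rng.getrandbits(m)] = 1

missing = space - sum(seen)
# For an ideal uniform generator, E[missing] = 2^m * (1 - 2^-m)^n ~ 2^m * exp(-n / 2^m).
expected = space * (1.0 - 1.0 / space) ** n
print(missing, round(expected, 1))
```

    A generator whose missing-pattern count deviates strongly from the expectation (here about 75) would fail this kind of test; a formal version would turn the deviation into a test statistic with a known null distribution.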

  4. Matrix algebra theory, computations and applications in statistics

    CERN Document Server

    Gentle, James E

    2017-01-01

    This textbook for graduate and advanced undergraduate students presents the theory of matrix algebra for statistical applications, explores various types of matrices encountered in statistics, and covers numerical linear algebra. Matrix algebra is one of the most important areas of mathematics in data science and in statistical theory, and the second edition of this very popular textbook provides essential updates and comprehensive coverage of critical topics. Part I offers a self-contained description of relevant aspects of the theory of matrix algebra for applications in statistics. It begins with fundamental concepts of vectors and vector spaces; covers basic algebraic properties of matrices and analytic properties of vectors and matrices in multivariate calculus; and concludes with a discussion of operations on matrices in solutions of linear systems and in eigenanalysis. Part II considers various types of matrices encountered in statistics, such as...

  5. The Development of On-Line Statistics Program for Radiation Oncology

    International Nuclear Information System (INIS)

    Kim, Yoon Jong; Lee, Dong Hoon; Ji, Young Hoon; Lee, Dong Han; Jo, Chul Ku; Kim, Mi Sook; Ru, Sung Rul; Hong, Seung Hong

    2001-01-01

    Purpose: To develop an on-line statistics program that records radiation oncology information and shares it over the internet, thereby supplying basic reference data for administrative plans to improve radiation oncology. Materials and methods: In the past, radiation oncology statistics had been collected on paper forms from about 52 hospitals. Now the data can be entered through internet web browsers. The statistics program uses the Windows NT 4.0 operating system, Internet Information Server 4.0 (IIS 4.0) as the web server, and a Microsoft Access database (MDB). Structured Query Language (SQL), Visual Basic, VBScript and JavaScript are used to display the statistics by year and by hospital. Results: The program shows the present status of manpower, research, therapy machines, techniques, brachytherapy, clinical statistics, radiation safety management, institutions, quality assurance and radioisotopes in radiation oncology departments. The database consists of 38 input windows and 6 output windows; statistical output windows can be added continuously according to user needs. Conclusion: We have developed a statistics program to process all of the data in departments of radiation oncology as reference information. Users can easily enter the data through internet web browsers and share the information

  6. Data and Statistics

    Science.gov (United States)

    Sickle cell ... 1999 through 2002. This drop coincided with the introduction in 2000 of a vaccine that protects against ...

  7. Transport statistics 1996

    CSIR Research Space (South Africa)

    Shepperson, L

    1997-12-01

    Full Text Available This publication contains transport and related statistics on roads, vehicles, infrastructure, passengers, freight, rail, air, maritime and road traffic, and international comparisons. The information compiled in this publication has been gathered...

  8. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    Science.gov (United States)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems that require exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For some applications, directly involving the ratio between precipitation and some other quantity, lack of complete information can result in poor understanding of basic physical and chemical dynamics involving precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets. The goal was to obtain a suitable and versatile method to reduce the errors caused by the lack of data in specific time windows. Our analyses included both a classical chronological pairing approach between rainfall stations and a probability-based approach, which accounted for the probability of exceedance of rain depths measured at two or multiple stations. Our analyses showed that it is not clear a priori which method performs best; rather, the selection should be based on the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is to discuss the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which have an important control on the occurrence of extreme rainfall events. The latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions.
The methods were
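    The practical effect of the tail choice can be sketched with two toy quantile functions calibrated to the same mean (a hypothetical illustration of why tails matter for extremes, not the study's method; in practice one would fit the named distributions, e.g. with scipy.stats):

```python
import math

# Both models are calibrated to the same (hypothetical) mean daily rainfall of 10 mm.
mean = 10.0

def q_exponential(p, mean):
    """Quantile of an exponential (light-tailed) rainfall model."""
    return -mean * math.log(1.0 - p)

def q_pareto(p, alpha, mean):
    """Quantile of a Lomax/Pareto-type (heavy-tailed) model with shape alpha > 1,
    parameterized so its mean equals the given mean."""
    scale = mean * (alpha - 1.0)
    return scale * ((1.0 - p) ** (-1.0 / alpha) - 1.0)

for p in (0.99, 0.999, 0.9999):
    print(p, round(q_exponential(p, mean), 1), round(q_pareto(p, 2.5, mean), 1))
```

    Even with identical means, the heavy-tailed model assigns far larger magnitudes to rare events, which is exactly why the choice of distributional tail propagates into recharge-discharge estimates.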

  9. A Statistical and Spectral Model for Representing Noisy Sounds with Short-Time Sinusoids

    Directory of Open Access Journals (Sweden)

    Myriam Desainte-Catherine

    2005-07-01

    Full Text Available We propose an original model for noise analysis, transformation, and synthesis: the CNSS model. Noisy sounds are represented with short-time sinusoids whose frequencies and phases are random variables. This spectral and statistical model represents information about the spectral density of frequencies. This perceptually relevant property is modeled by three mathematical parameters that define the distribution of the frequencies. This model also represents the spectral envelope. The mathematical parameters are defined and the analysis algorithms to extract these parameters from sounds are introduced. Then algorithms for generating sounds from the parameters of the model are presented. Applications of this model include tools for composers, psychoacoustic experiments, and pedagogy.

  10. Statistical Methods for Particle Physics (4/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.
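    As a minimal example of the kind of statistical test these lectures introduce (our own toy illustration, not taken from the lecture material): in a counting experiment with known expected background b and observed count n, the frequentist p-value is the Poisson tail probability.

```python
import math

def poisson_pmf(k, mu):
    """Poisson probability mass function."""
    return math.exp(-mu) * mu**k / math.factorial(k)

def p_value(n_obs, b):
    """P(N >= n_obs) under the background-only hypothesis N ~ Poisson(b)."""
    return 1.0 - sum(poisson_pmf(k, b) for k in range(n_obs))

# Hypothetical numbers: expected background 3.0 events, 9 events observed.
p = p_value(9, 3.0)
print(round(p, 4))  # prints 0.0038
```

    A p-value of about 0.0038 corresponds to roughly a 2.7σ one-sided excess; the lectures cover how such tests are extended to include systematic uncertainties and Bayesian treatments.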

  11. Statistical Methods for Particle Physics (1/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  12. Statistical Methods for Particle Physics (2/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  13. Statistical Methods for Particle Physics (3/4)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    The series of four lectures will introduce some of the important statistical methods used in Particle Physics, and should be particularly relevant to those involved in the analysis of LHC data. The lectures will include an introduction to statistical tests, parameter estimation, and the application of these tools to searches for new phenomena. Both frequentist and Bayesian methods will be described, with particular emphasis on treatment of systematic uncertainties. The lectures will also cover unfolding, that is, estimation of a distribution in binned form where the variable in question is subject to measurement errors.

  14. Improving Statistical Literacy in Schools in Australia

    OpenAIRE

    Trewin, Dennis

    2005-01-01

    We live in the information age. Statistical thinking is a life skill that all Australian children should have. The Statistical Society of Australia (SSAI) and the Australian Bureau of Statistics (ABS) have been working on a strategy to ensure Australian school children acquire a sufficient understanding and appreciation of how data can be acquired and used so they can make informed judgements in their daily lives, as children and then as adults. There is another motive for our work i...

  15. Statistical methods in nonlinear dynamics

    Indian Academy of Sciences (India)

    Sensitivity to initial conditions in nonlinear dynamical systems leads to exponential divergence of trajectories that are initially arbitrarily close, and hence to unpredictability. Statistical methods have been found to be helpful in extracting useful information about such systems. In this paper, we review briefly some statistical ...

  16. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    Science.gov (United States)

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided a better description of the probability distribution for seven of the 11 insect datasets than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate normal-distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. It is therefore recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
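
    The model comparison described above can be sketched in a few lines: fit a Poisson and a negative binomial model to an overdispersed count sample and compare them by AIC. The data are simulated (the study's datasets are not available here), and the negative binomial fit uses quick method-of-moments estimates rather than full maximum likelihood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated zero-heavy, overdispersed counts (variance >> mean) standing in
# for the insect data, which are not available here.
counts = rng.negative_binomial(n=0.5, p=0.2, size=200)

def aic(loglik, k):
    """Akaike's information criterion, 2k - 2 log L: smaller is better."""
    return 2 * k - 2 * loglik

# Poisson: the maximum-likelihood rate is simply the sample mean (1 parameter).
lam = counts.mean()
ll_pois = stats.poisson.logpmf(counts, lam).sum()

# Negative binomial: quick method-of-moments estimates (2 parameters).
m, v = counts.mean(), counts.var(ddof=1)
p = m / v              # valid because v > m (overdispersion)
n = m * p / (1 - p)
ll_nb = stats.nbinom.logpmf(counts, n, p).sum()

print(f"AIC, Poisson:           {aic(ll_pois, 1):.1f}")
print(f"AIC, negative binomial: {aic(ll_nb, 2):.1f}")  # far lower for these data
```

    With strong overdispersion the negative binomial wins decisively; on equidispersed data the extra parameter would be penalized and the Poisson model preferred.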

  17. Robust Control Methods for On-Line Statistical Learning

    Directory of Open Access Journals (Sweden)

    Capobianco Enrico

    2001-01-01

    Full Text Available Ensuring that the data processing in an experiment is not affected by the presence of outliers is relevant for statistical control and learning studies. Learning schemes should thus be tested for their capacity to handle outliers in the observed training set, so as to achieve reliable estimates with respect to the crucial bias and variance aspects. We describe possible ways of endowing neural networks with statistically robust properties by defining feasible error criteria. It is convenient to cast neural nets in state-space representations and to apply both Kalman filter and stochastic approximation procedures in order to suggest statistically robustified solutions for on-line learning.
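
    A minimal sketch of the robustness idea, not the paper's neural-network or Kalman-filter machinery: a stochastic-approximation estimate of a location parameter, with and without a Huber-style clipped residual. The data stream, learning rate and clipping threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Stream of measurements around a true level of 5.0, with 10% gross outliers
# (all values here are invented for illustration).
data = rng.normal(loc=5.0, scale=0.5, size=2000)
data[rng.random(2000) < 0.10] = 100.0

def online_location(stream, delta=None, lr=0.01):
    """On-line stochastic-approximation estimate of a location parameter.
    With delta set, the residual is clipped Huber-style, which bounds the
    influence any single outlier can exert on the estimate."""
    m = 0.0
    for z in stream:
        r = z - m
        if delta is not None:
            r = float(np.clip(r, -delta, delta))
        m += lr * r
    return m

print(online_location(data))             # dragged far above 5.0 by the outliers
print(online_location(data, delta=1.0))  # robust: stays close to 5.0
```

    Clipping the residual is the simplest way to bound the influence function of each incoming sample, which is the property the robust error criteria above are after.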

  18. Statistical science: a grammar for research.

    Science.gov (United States)

    Cox, David R

    2017-06-01

    I greatly appreciate the invitation to give this lecture, with its century-long history. The title is a warning that the lecture is rather discursive and not highly focused and technical. The theme is simple: statistical thinking provides a unifying set of general ideas and specific methods relevant whenever appreciable natural variation is present. To be most fruitful, these ideas should merge seamlessly with subject-matter considerations. By contrast, there is sometimes a temptation to regard formal statistical analysis as a ritual to be added after the serious work has been done, a ritual to satisfy convention, referees, and regulatory agencies. I want implicitly to refute that idea.

  19. The Relevance of Hyperbaric Oxygen to Combat Medicine

    Science.gov (United States)

    2001-06-01

    Defense Technical Information Center Compilation Part Notice ADP011081, "The Relevance of Hyperbaric Oxygen to Combat Medicine", a component part of the compilation "Operational Medical Issues in Hypo- and Hyperbaric Conditions" [les Questions médicales à caractère opérationnel liées aux conditions hypobares ou hyperbares]; the compilation comprises component part numbers ADP011059 through ADP011100.

  20. A survey of statistical downscaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    1997-12-31

    The derivation of regional information from integrations of coarse-resolution General Circulation Models (GCMs) is generally referred to as downscaling. The most relevant statistical downscaling techniques are described here, and some particular examples are worked out in detail. They are classified into three main groups: linear methods, classification methods and deterministic non-linear methods. Their performance in a particular example, winter rainfall in the Iberian peninsula, is compared with that of a simple downscaling analog method. It is found that the analog method performs as well as the more complicated methods. Downscaling analysis can also be used as a tool to validate the regional performance of global climate models, by analyzing the covariability of the simulated large-scale climate and the regional climates. (orig.) [Deutsch, translated: The derivation of regional information from integrations of coarsely resolved climate models is referred to as 'Regionalisierung'. This contribution describes the most important statistical downscaling methods and gives some detailed examples. Downscaling methods can be classified into three main groups: linear methods, classification methods and non-linear deterministic methods. These methods are applied to rainfall on the Iberian peninsula and compared with the results of a simple analog model. It is found that the results of the more complicated methods can essentially also be obtained with the analog method. A further application of downscaling methods is the validation of global climate models, in which the simulated and the observed covariability between the large-scale and the regional climate are compared. (orig.)]
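
    The analog method mentioned above can be sketched simply: given a new large-scale circulation state, find the most similar state in a historical library and predict the local variable observed on that day. The predictor fields and rainfall proxy below are synthetic stand-ins, not real reanalysis or station data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training library: large-scale circulation states (e.g. a few
# leading EOF coefficients of the pressure field) paired with the local
# rainfall observed on the same day. Purely synthetic numbers.
X_lib = rng.normal(size=(500, 5))                    # 500 days, 5 predictors
y_lib = 2.0 * X_lib[:, 0] + rng.normal(scale=0.3, size=500)

def analog_downscale(x_new, X, y):
    """Predict the local variable from the closest historical analog
    (smallest Euclidean distance in the large-scale predictor space)."""
    i = np.argmin(np.linalg.norm(X - x_new, axis=1))
    return y[i]

x_today = rng.normal(size=5)                         # today's large-scale state
print(analog_downscale(x_today, X_lib, y_lib))       # predicted local rainfall proxy
```

    The prediction is always a historically observed value, which is exactly why the analog method preserves realistic local statistics without any explicit model.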

  1. A survey of statistical downscaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Zorita, E.; Storch, H. von [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Hydrophysik

    1998-12-31

    The derivation of regional information from integrations of coarse-resolution General Circulation Models (GCMs) is generally referred to as downscaling. The most relevant statistical downscaling techniques are described here, and some particular examples are worked out in detail. They are classified into three main groups: linear methods, classification methods and deterministic non-linear methods. Their performance in a particular example, winter rainfall in the Iberian peninsula, is compared with that of a simple downscaling analog method. It is found that the analog method performs as well as the more complicated methods. Downscaling analysis can also be used as a tool to validate the regional performance of global climate models, by analyzing the covariability of the simulated large-scale climate and the regional climates. (orig.) [Deutsch, translated: The derivation of regional information from integrations of coarsely resolved climate models is referred to as 'Regionalisierung'. This contribution describes the most important statistical downscaling methods and gives some detailed examples. Downscaling methods can be classified into three main groups: linear methods, classification methods and non-linear deterministic methods. These methods are applied to rainfall on the Iberian peninsula and compared with the results of a simple analog model. It is found that the results of the more complicated methods can essentially also be obtained with the analog method. A further application of downscaling methods is the validation of global climate models, in which the simulated and the observed covariability between the large-scale and the regional climate are compared. (orig.)]

  2. Statistics II essentials

    CERN Document Server

    Milewski, Emil G

    2012-01-01

    REA's Essentials provide quick and easy access to critical information in a variety of different fields, ranging from the most basic to the most advanced. As its name implies, these concise, comprehensive study guides summarize the essentials of the field covered. Essentials are helpful when preparing for exams, doing homework and will remain a lasting reference source for students, teachers, and professionals. Statistics II discusses sampling theory, statistical inference, independent and dependent variables, correlation theory, experimental design, count data, chi-square test, and time se

  3. Intelligent medical information filtering.

    Science.gov (United States)

    Quintana, Y

    1998-01-01

    This paper describes an intelligent information filtering system to assist users to be notified of updates to new and relevant medical information. Among the major problems users face is the large volume of medical information that is generated each day, and the need to filter and retrieve relevant information. The Internet has dramatically increased the amount of electronically accessible medical information and reduced the cost and time needed to publish. The opportunity of the Internet for the medical profession and consumers is to have more information to make decisions and this could potentially lead to better medical decisions and outcomes. However, without the assistance from professional medical librarians, retrieving new and relevant information from databases and the Internet remains a challenge. Many physicians do not have access to the services of a medical librarian. Most physicians indicate on surveys that they do not prefer to retrieve the literature themselves, or visit libraries because of the lack of recent materials, poor organisation and indexing of materials, lack of appropriate and available material, and lack of time. The information filtering system described in this paper records the online web browsing behaviour of each user and creates a user profile of the index terms found on the web pages visited by the user. A relevance-ranking algorithm then matches the user profiles to the index terms of new health care web pages that are added each day. The system creates customised summaries of new information for each user. A user can then connect to the web site to read the new information. Relevance feedback buttons on each page ask the user to rate the usefulness of the page to their immediate information needs. Errors in relevance ranking are reduced in this system by having both the user profile and medical information represented in the same representation language using a controlled vocabulary. This system also updates the user profiles

  4. Brief guidelines for methods and statistics in medical research

    CERN Document Server

    Ab Rahman, Jamalludin

    2015-01-01

    This book serves as a practical guide to methods and statistics in medical research. It includes step-by-step instructions on using SPSS software for statistical analysis, as well as relevant examples to help those readers who are new to research in health and medical fields. Simple texts and diagrams are provided to help explain the concepts covered, and print screens for the statistical steps and the SPSS outputs are provided, together with interpretations and examples of how to report on findings. Brief Guidelines for Methods and Statistics in Medical Research offers a valuable quick reference guide for healthcare students and practitioners conducting research in health related fields, written in an accessible style.

  5. Fisher information and statistical inference for phase-type distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Esparza, Luz Judith R; Nielsen, Bo Friis

    2011-01-01

    This paper is concerned with statistical inference for both continuous and discrete phase-type distributions. We consider maximum likelihood estimation, where traditionally the expectation-maximization (EM) algorithm has been employed. Certain numerical aspects of this method are revised and we...

  6. METHODS AND TOOLS TO DEVELOP INNOVATIVE STRATEGIC MANAGEMENT DECISIONS BASED ON THE APPLICATION OF ADVANCED INFORMATION AND COMMUNICATION TECHNOLOGIES IN THE STATISTICAL BRANCH OF UZBEKISTAN

    OpenAIRE

    Irina E. Zhukovskya

    2013-01-01

    This paper focuses on improving the application of electronic document management and network information technology in the statistical branch. As a software solution, the use of the State Committee on Statistics of the Republic of Uzbekistan's new software «eStat 2.0» is proposed, which not only optimizes the work of statistical-sector employees but also serves as a link between all the economic entities of the national economy.

  7. Relevance of brands and beef quality differentials for the consumer at the time of purchase

    Directory of Open Access Journals (Sweden)

    Carla Mecca Giacomazzi

    Full Text Available ABSTRACT The objective of this study was to identify the purchase habits and preferences of beef consumers, their level of knowledge of brands and of products with quality differentials (certifications, packaging, premium lines), and the relevance of different attributes in the purchase decision, and to group consumers according to their purchase-decision profile. The methodology consisted of an information-collecting instrument applied to 271 beef consumers. The data collected were analyzed using descriptive statistics, chi-square analysis and correspondence analysis, relating the socio-demographic profile of the respondents to the other variables collected. Chi-square and correspondence analyses showed that younger consumers with lower levels of income and education are influenced by posters and advertisements at the point of sale, are unaware of differentiated and branded products, and do not choose branded beef at the time of purchase. Consumers over 60 years showed a more conservative purchase profile and were not influenced by point-of-sale promotion. The most valued attributes are appearance, price and type of cut, with brand and certifications being of little relevance as aids to the purchase decision.

  8. Wind energy statistics

    International Nuclear Information System (INIS)

    Holttinen, H.; Tammelin, B.; Hyvoenen, R.

    1997-01-01

    The recording, analysis and publishing of wind energy production statistics has been reorganized in cooperation between VTT Energy, the Finnish Meteorological Institute (FMI Energy) and the Finnish Wind Energy Association (STY), supported by the Ministry of Trade and Industry (KTM). VTT Energy has developed a database that contains both monthly data and information on the wind turbines, sites and operators involved. The monthly production figures, together with component failure statistics, are collected from the operators by VTT Energy, which produces the final wind energy statistics published in Tuulensilmae and reported to energy statistics in Finland and abroad (Statistics Finland, Eurostat, IEA). To be able to verify the annual and monthly wind energy potential against the average wind energy climate, a production index is adopted. The index gives the expected wind energy production at various areas in Finland, calculated using real wind speed observations, air density and a power curve for a typical 500 kW wind turbine. FMI Energy has produced the average figures for four weather stations using data from 1985-1996, and produces the monthly figures. (orig.)

  9. A Response to White and Gorard: Against Inferential Statistics: How and Why Current Statistics Teaching Gets It Wrong

    Science.gov (United States)

    Nicholson, James; Ridgway, Jim

    2017-01-01

    White and Gorard make important and relevant criticisms of some of the methods commonly used in social science research, but go further by criticising the logical basis for inferential statistical tests. This paper comments briefly on matters we broadly agree on with them and more fully on matters where we disagree. We agree that too little…

  10. Earnings Management, Value Relevance Of Earnings and Book Value of Equity

    OpenAIRE

    Subekti, Imam

    2013-01-01

    Previous studies examining the relationship between earnings management and the value relevance of accounting information show that earnings management decreases the value relevance of accounting information. Generally, these studies apply accruals earnings management. In contrast, the present study applies integrated earnings management proxies, i.e. real and accruals earnings management. Real earnings management proxies are measured by abnormal cash flow of operation, abnormal production cost, and abnorm...

  11. AP statistics crash course

    CERN Document Server

    D'Alessio, Michael

    2012-01-01

    AP Statistics Crash Course - Gets You a Higher Advanced Placement Score in Less Time Crash Course is perfect for the time-crunched student, the last-minute studier, or anyone who wants a refresher on the subject. AP Statistics Crash Course gives you: Targeted, Focused Review - Study Only What You Need to Know Crash Course is based on an in-depth analysis of the AP Statistics course description outline and actual Advanced Placement test questions. It covers only the information tested on the exam, so you can make the most of your valuable study time. Our easy-to-read format covers: exploring da

  12. Statistical Power in Meta-Analysis

    Science.gov (United States)

    Liu, Jin

    2015-01-01

    Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…

  13. Statistically significant dependence of the Xaa-Pro peptide bond conformation on secondary structure and amino acid sequence

    Directory of Open Access Journals (Sweden)

    Leitner Dietmar

    2005-04-01

    Full Text Available Background: A reliable prediction of the Xaa-Pro peptide bond conformation would be a useful tool for many protein structure calculation methods. We have analyzed the Protein Data Bank and show that the combined use of sequential and structural information has predictive value for the assessment of the cis versus trans peptide bond conformation of Xaa-Pro within proteins. For the analysis of the data sets, different statistical methods such as the calculation of the Chou-Fasman parameters and occurrence matrices were used. Furthermore, we analyzed the relationship between the relative solvent accessibility and the relative occurrence of prolines in the cis and in the trans conformation. Results: One of the main results of the statistical investigations is the ranking of the secondary structure and sequence information with respect to the prediction of the Xaa-Pro peptide bond conformation. We observed a significant impact of secondary structure information on the occurrence of the Xaa-Pro peptide bond conformation, while the sequence information of amino acids neighboring proline is of little predictive value for the conformation of this bond. Conclusion: In this work, we present an extensive analysis of the occurrence of the cis and trans proline conformations in proteins. Based on the data set, we derived patterns and rules for a possible prediction of the proline conformation. Upon adoption of the Chou-Fasman parameters, we are able to derive statistically relevant correlations between the secondary structure of amino acid fragments and the Xaa-Pro peptide bond conformation.
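
    A Chou-Fasman-style propensity of the kind used above can be illustrated on toy data: the rate of the cis conformation in a given sequence context divided by the overall cis rate, so that values above 1 indicate a context favouring cis. The counts below are invented, not real Protein Data Bank statistics.

```python
# Toy dataset of (preceding residue, Xaa-Pro peptide-bond conformation) pairs.
# These counts are invented for illustration only.
bonds = [("GLY", "cis"), ("GLY", "trans"), ("TYR", "cis"), ("ALA", "trans"),
         ("ALA", "trans"), ("TYR", "cis"), ("GLY", "trans"), ("ALA", "trans")]

overall_cis_rate = sum(1 for _, c in bonds if c == "cis") / len(bonds)

def cis_propensity(residue):
    """Chou-Fasman-style propensity: the cis rate observed in this sequence
    context divided by the overall cis rate (>1 means the context favours cis)."""
    ctx = [c for r, c in bonds if r == residue]
    return (sum(1 for c in ctx if c == "cis") / len(ctx)) / overall_cis_rate

print(cis_propensity("TYR"))  # > 1: this toy context favours the cis form
print(cis_propensity("ALA"))  # 0.0: never cis in the toy data
```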

  14. Statistics of high-level scene context.

    Science.gov (United States)

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level, where scenes are described by the list of objects contained within them; and the structural level, where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics
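
    The bag-of-words level described above can be sketched with a toy nearest-centroid linear classifier over object-count vectors; the object vocabulary and scene counts below are invented for illustration, not drawn from the LabelMe data.

```python
import numpy as np

# Toy "bag of objects" scene descriptions: each scene is reduced to counts over
# a small object vocabulary. Vocabulary and counts are invented for illustration.
vocab = ["tree", "car", "sink", "bed"]                 # column order of the counts
scenes = {
    "street":  np.array([[1, 4, 0, 0], [2, 3, 0, 0], [0, 5, 0, 0]]),
    "kitchen": np.array([[0, 0, 2, 0], [0, 0, 1, 0], [0, 1, 3, 0]]),
}

# Nearest-centroid classifier: one mean count vector per category.
centroids = {cat: X.mean(axis=0) for cat, X in scenes.items()}

def classify(counts):
    """Assign a new count vector to the category with the closest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(counts - centroids[c]))

print(classify(np.array([1, 3, 0, 0])))  # -> street
print(classify(np.array([0, 0, 2, 1])))  # -> kitchen
```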

  15. Computations Underlying Social Hierarchy Learning: Distinct Neural Mechanisms for Updating and Representing Self-Relevant Information.

    Science.gov (United States)

    Kumaran, Dharshan; Banino, Andrea; Blundell, Charles; Hassabis, Demis; Dayan, Peter

    2016-12-07

    Knowledge about social hierarchies organizes human behavior, yet we understand little about the underlying computations. Here we show that a Bayesian inference scheme, which tracks the power of individuals, better captures behavioral and neural data compared with a reinforcement learning model inspired by rating systems used in games such as chess. We provide evidence that the medial prefrontal cortex (MPFC) selectively mediates the updating of knowledge about one's own hierarchy, as opposed to that of another individual, a process that underpinned successful performance and involved functional interactions with the amygdala and hippocampus. In contrast, we observed domain-general coding of rank in the amygdala and hippocampus, even when the task did not require it. Our findings reveal the computations underlying a core aspect of social cognition and provide new evidence that self-relevant information may indeed be afforded a unique representational status in the brain. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Statistics for business

    CERN Document Server

    Waller, Derek L

    2008-01-01

    Statistical analysis is essential to business decision-making and management, but the underlying theory of data collection, organization and analysis is one of the most challenging topics for business students and practitioners. This user-friendly text and CD-ROM package will help you to develop strong skills in presenting and interpreting statistical information in a business or management environment. Based entirely on using Microsoft Excel rather than more complicated applications, it includes a clear guide to using Excel with the key functions employed in the book, a glossary of terms and

  17. The clinical relevance and newsworthiness of NIHR HTA-funded research: a cohort study.

    Science.gov (United States)

    Wright, D; Young, A; Iserman, E; Maeso, R; Turner, S; Haynes, R B; Milne, R

    2014-05-07

    To assess the clinical relevance and newsworthiness of UK National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme-funded reports. Retrospective cohort study. The cohort included 311 NIHR HTA Programme-funded reports published in HTA in the period 1 January 2007-31 December 2012. The McMaster Online Rating of Evidence (MORE) system independently identified the clinical relevance and newsworthiness of NIHR HTA publications and non-NIHR HTA publications. The MORE system involves over 4000 physicians rating publications on a scale of relevance (the extent to which articles are relevant to practice) and a scale of newsworthiness (the extent to which articles contain news or something clinicians are unlikely to know). The proportion of reports published in HTA meeting MORE inclusion criteria and mean average relevance and newsworthiness ratings were calculated and compared with publications from the same studies published outside HTA and with non-NIHR HTA-funded publications. 286/311 (92.0%) of NIHR HTA reports were assessed by MORE, of which 192 (67.1%) passed MORE criteria. The average clinical relevance rating for NIHR HTA reports was 5.48, statistically higher than the 5.32 rating for non-NIHR HTA publications (mean difference=0.16, 95% CI 0.04 to 0.29, p=0.01). Average newsworthiness ratings were similar between NIHR HTA reports and non-NIHR HTA publications (4.75 and 4.70, respectively; mean difference=0.05, 95% CI -0.18 to 0.07, p=0.402). NIHR HTA-funded original research reports were rated statistically higher for newsworthiness than reviews (5.05 compared with 4.64; mean difference=0.41, 95% CI 0.18 to 0.64, p=0.001). Funding research of clinical relevance is important in maximising the value of research investment. The NIHR HTA Programme is successful in funding projects that generate outputs of clinical relevance.
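
    Mean differences with 95% confidence intervals of the kind reported above can be reproduced in form (not in the actual values, since the rating data are not available) with a Welch unequal-variance interval; the two rating samples below are simulated stand-ins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical relevance ratings for two groups of publications; the study's
# actual rating data are not available, so these samples are purely illustrative.
a = rng.normal(5.48, 1.0, size=192)   # e.g. one group of reports
b = rng.normal(5.32, 1.0, size=400)   # e.g. a comparison group

def welch_ci(x, y, level=0.95):
    """Mean difference x - y with a Welch (unequal-variance) confidence interval."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    se = np.sqrt(vx / nx + vy / ny)
    # Welch-Satterthwaite degrees of freedom
    df = (vx / nx + vy / ny) ** 2 / (
        (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    t = stats.t.ppf((1 + level) / 2, df)
    d = x.mean() - y.mean()
    return d, d - t * se, d + t * se

print(welch_ci(a, b))  # (mean difference, lower bound, upper bound)
```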

  18. Spatially characterizing visitor use and its association with informal trails in Yosemite Valley meadows.

    Science.gov (United States)

    Walden-Schreiner, Chelsey; Leung, Yu-Fai

    2013-07-01

    Ecological impacts associated with nature-based recreation and tourism can compromise park and protected area goals if left unrestricted. Protected area agencies are increasingly incorporating indicator-based management frameworks into their management plans to address visitor impacts. Development of indicators requires empirical evaluation of indicator measures and examining their ecological and social relevance. This study addresses the development of the informal trail indicator in Yosemite National Park by spatially characterizing visitor use in open landscapes and integrating use patterns with informal trail condition data to examine their spatial association. Informal trail and visitor use data were collected concurrently during July and August of 2011 in three, high-use meadows of Yosemite Valley. Visitor use was clustered at statistically significant levels in all three study meadows. Spatial data integration found no statistically significant differences between use patterns and trail condition class. However, statistically significant differences were found between the distance visitors were observed from informal trails and visitor activity type with active activities occurring closer to trail corridors. Gender was also found to be significant with male visitors observed further from trail corridors. Results highlight the utility of integrated spatial analysis in supporting indicator-based monitoring and informing management of open landscapes. Additional variables for future analysis and methodological improvements are discussed.

  19. Spatially Characterizing Visitor Use and Its Association with Informal Trails in Yosemite Valley Meadows

    Science.gov (United States)

    Walden-Schreiner, Chelsey; Leung, Yu-Fai

    2013-07-01

    Ecological impacts associated with nature-based recreation and tourism can compromise park and protected area goals if left unrestricted. Protected area agencies are increasingly incorporating indicator-based management frameworks into their management plans to address visitor impacts. Development of indicators requires empirical evaluation of indicator measures and examining their ecological and social relevance. This study addresses the development of the informal trail indicator in Yosemite National Park by spatially characterizing visitor use in open landscapes and integrating use patterns with informal trail condition data to examine their spatial association. Informal trail and visitor use data were collected concurrently during July and August of 2011 in three, high-use meadows of Yosemite Valley. Visitor use was clustered at statistically significant levels in all three study meadows. Spatial data integration found no statistically significant differences between use patterns and trail condition class. However, statistically significant differences were found between the distance visitors were observed from informal trails and visitor activity type with active activities occurring closer to trail corridors. Gender was also found to be significant with male visitors observed further from trail corridors. Results highlight the utility of integrated spatial analysis in supporting indicator-based monitoring and informing management of open landscapes. Additional variables for future analysis and methodological improvements are discussed.

  20. METHODS AND TOOLS TO DEVELOP INNOVATIVE STRATEGIC MANAGEMENT DECISIONS BASED ON THE APPLICATION OF ADVANCED INFORMATION AND COMMUNICATION TECHNOLOGIES IN THE STATISTICAL BRANCH OF UZBEKISTAN

    Directory of Open Access Journals (Sweden)

    Irina E. Zhukovskya

    2013-01-01

    Full Text Available This paper focuses on improving the application of electronic document management and network information technology in the statistical branch. As a software solution, the use of the State Committee on Statistics of the Republic of Uzbekistan's new software «eStat 2.0» is proposed, which not only optimizes the work of statistical-sector employees but also serves as a link between all the economic entities of the national economy.

  1. Analysis and Evaluation of Statistical Models for Integrated Circuits Design

    Directory of Open Access Journals (Sweden)

    Sáenz-Noval J.J.

    2011-10-01

    Full Text Available Statistical models for integrated circuits (ICs) allow us to estimate the percentage of acceptable devices in a batch before fabrication. Currently, the Pelgrom model is the most widely accepted statistical model in the industry; however, it was derived from a micrometer technology, which does not guarantee reliability in nanometric manufacturing processes. This work considers three of the most relevant statistical models in the industry and evaluates their limitations and advantages in analog design, so that the designer has a better criterion for making a choice. Moreover, it shows how several statistical models can be used for each of the stages and design purposes.
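
    The Pelgrom model referred to above can be sketched in its simplest first-order form: the standard deviation of the threshold-voltage mismatch between two identically drawn transistors scales as A_Vt / sqrt(W*L), so quadrupling the gate area halves the mismatch. The matching coefficient below is an assumed, illustrative value, not a real process parameter.

```python
import math

# Pelgrom's mismatch model in its simplest form: the standard deviation of the
# threshold-voltage difference between two identically drawn transistors is
#     sigma(dVt) = A_Vt / sqrt(W * L)
# A_VT below is a hypothetical technology constant, chosen only for illustration.
A_VT = 3.5e-3  # V*um

def sigma_dvt(w_um, l_um):
    """Threshold-voltage mismatch sigma for a gate width/length in micrometers."""
    return A_VT / math.sqrt(w_um * l_um)

print(sigma_dvt(1.0, 1.0))  # sigma for a 1 um x 1 um device pair
print(sigma_dvt(2.0, 2.0))  # quadrupled area: exactly half the sigma above
```

    The area trade-off is the practical point: matching improves only with the square root of gate area, so halving mismatch costs four times the silicon.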

  2. Generalized statistics and the formation of a quark-gluon plasma

    International Nuclear Information System (INIS)

    Teweldeberhan, A.M.; Miller, H.G.; Tegen, R.

    2003-01-01

    The aim of this paper is to investigate the effect of a non-extensive form of statistical mechanics proposed by Tsallis on the formation of a quark-gluon plasma (QGP). We suggest to account for the effects of the dominant part of the long-range interactions among the constituents in the QGP by a change in the statistics of the system in this phase, and we study the relevance of this statistics for the phase transition. The results show that small deviations (≈ 10%) from Boltzmann–Gibbs statistics in the QGP produce a noticeable change in the phase diagram, which can, in principle, be tested experimentally. (author)
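The Tsallis generalization replaces the Boltzmann exponential with a q-exponential; q = 1 recovers ordinary Boltzmann-Gibbs statistics, and q ≈ 1.1 corresponds to the roughly 10% deviation discussed above. A hedged numerical sketch (energy and temperature values are illustrative):

```python
import math

def q_exponential(x: float, q: float) -> float:
    """Tsallis q-exponential e_q(x) = [1 + (1 - q) x]_+^{1/(1-q)};
    recovers exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

# Boltzmann vs. Tsallis statistical weight for a state of energy E
# at temperature T (units with k_B = 1; values chosen for illustration).
E, T = 2.0, 1.0
boltzmann = q_exponential(-E / T, 1.0)  # ordinary exp(-E/T)
tsallis = q_exponential(-E / T, 1.1)    # q = 1.1: ~10% deviation from Boltzmann-Gibbs
print(boltzmann, tsallis)
```

For q > 1 the q-exponential decays more slowly than the exponential, so high-energy states carry more weight, which is how a small change in q can visibly shift a phase diagram.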

  3. Biometrics in the Medical School Curriculum: Making the Necessary Relevant.

    Science.gov (United States)

    Murphy, James R.

    1980-01-01

    Because a student is more likely to learn and retain course content perceived as relevant, an attempt was made to change medical students' perceptions of a biometrics course by introducing statistical methods as a means of solving problems in the interpretation of clinical lab data. Retrospective analysis of student course evaluations indicates a…

  4. [Mood-congruent effect in self-relevant information processing: a study using an autobiographical memory recall task].

    Science.gov (United States)

    Itoh, M

    2000-10-01

    The pattern of the mood-congruent effect in an autobiographical memory recall task was investigated. Each subject was randomly assigned to one of three experimental conditions: positive mood, negative mood (induced with music), and control groups (no specific mood). Subjects were then presented with a word at a time from a list of trait words, which were pleasant or unpleasant. They decided whether they could recall any of their autobiographical memories related to the word, and responded with "yes" or "no" buttons as rapidly and accurately as possible. After the task, they were given five minutes for an incidental free recall test. Results indicated that the mood-congruent effect was found regardless of whether there was an autobiographical memory related to the word or not in both positive and negative mood states. The effect of moods on self-relevant information processing was discussed.

  5. CMS Statistics Reference Booklet

    Data.gov (United States)

    U.S. Department of Health & Human Services — The annual CMS Statistics reference booklet provides a quick reference for summary information about health expenditures and the Medicare and Medicaid health...

  6. Statistics of extremes theory and applications

    CERN Document Server

    Beirlant, Jan; Segers, Johan; Teugels, Jozef; De Waal, Daniel; Ferro, Chris

    2006-01-01

    Research in the statistical analysis of extreme values has flourished over the past decade: new probability models, inference and data analysis techniques have been introduced; and new application areas have been explored. Statistics of Extremes comprehensively covers a wide range of models and application areas, including risk and insurance: a major area of interest and relevance to extreme value theory. Case studies are introduced providing a good balance of theory and application of each model discussed, incorporating many illustrated examples and plots of data. The last part of the book covers some interesting advanced topics, including time series, regression, multivariate and Bayesian modelling of extremes, the use of which has huge potential.
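The block-maxima approach with a Gumbel (or, more generally, GEV) fit is the classical entry point to the models such a book covers. A minimal stdlib sketch using a method-of-moments Gumbel fit on simulated data (the data and the simple fitting method are illustrative, not taken from the book):

```python
import math
import random

def block_maxima(series, block_size):
    """Split a series into consecutive blocks and keep each block's maximum --
    the classical first step in extreme-value analysis."""
    return [max(series[i:i + block_size])
            for i in range(0, len(series) - block_size + 1, block_size)]

def gumbel_fit_moments(maxima):
    """Method-of-moments fit of the Gumbel distribution:
    beta = s * sqrt(6) / pi, mu = mean - gamma * beta."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - 0.5772156649 * beta  # Euler-Mascheroni constant
    return mu, beta

random.seed(0)
data = [random.expovariate(1.0) for _ in range(10000)]  # light-tailed parent
maxima = block_maxima(data, 100)
mu, beta = gumbel_fit_moments(maxima)
print(f"location ~ {mu:.2f}, scale ~ {beta:.2f}")
```

For an exponential(1) parent with blocks of 100, theory predicts the maxima are approximately Gumbel with location near ln(100) ≈ 4.6 and scale near 1, which the fit should roughly recover.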

  7. Information technology skills and training needs of health information management professionals in Nigeria: a nationwide study.

    Science.gov (United States)

    Taiwo Adeleke, Ibrahim; Hakeem Lawal, Adedeji; Adetona Adio, Razzaq; Adisa Adebisi, AbdulLateef

    There is a lack of effective health information management systems in Nigeria due to the prevalence of cumbersome, paper-based and disjointed health data management systems. This can make informed healthcare decision making difficult. This study examined the information technology (IT) skills, utilisation and training needs of Nigerian health information management professionals. We deployed a cross-sectional structured questionnaire to determine the IT skills and training needs of health information management professionals who have leadership roles in the nation's healthcare information systems (n=374). Ownership of a computer, level of education and age were found to be associated with knowledge and perception of IT. The vast majority of participants (98.8%) acknowledged the importance and relevance of IT in healthcare information systems, and many expressed a desire for further IT training, especially in statistical analysis. Despite this, few (8.1%) worked in settings where such systems operate, and there is an IT skills gap among these professionals that is not compatible with their roles in healthcare information systems. To rectify this anomaly they require continuing professional development education, especially in health IT. Government intervention to provide IT infrastructure, putting a computerised healthcare information system into practice, would therefore be a worthwhile undertaking.

  8. Teaching biology through statistics: application of statistical methods in genetics and zoology courses.

    Science.gov (United States)

    Colon-Berlingeri, Migdalisel; Burrowes, Patricia A

    2011-01-01

    Incorporation of mathematics into biology curricula is critical to underscore for undergraduate students the relevance of mathematics to most fields of biology and the usefulness of developing quantitative process skills demanded in modern biology. At our institution, we have made significant changes to better integrate mathematics into the undergraduate biology curriculum. The curricular revision included changes in the suggested course sequence, addition of statistics and precalculus as prerequisites to core science courses, and incorporating interdisciplinary (math-biology) learning activities in genetics and zoology courses. In this article, we describe the activities developed for these two courses and the assessment tools used to measure the learning that took place with respect to biology and statistics. We distinguished the effectiveness of these learning opportunities in helping students improve their understanding of the math and statistical concepts addressed and, more importantly, their ability to apply them to solve a biological problem. We also identified areas that need emphasis in both biology and mathematics courses. In light of our observations, we recommend best practices that biology and mathematics academic departments can implement to train undergraduates for the demands of modern biology.

  9. the relevance of libraries and information communicaton technology

    African Journals Online (AJOL)

    GRACE

    Information and Communications Technology (ICT) could be used to improve .... accountable, efficient and effective interaction between the public, business and ... agencies, research institutions and private organizations, such as print and ...

  10. A global approach to estimate irrigated areas - a comparison between different data and statistics

    Science.gov (United States)

    Meier, Jonas; Zabel, Florian; Mauser, Wolfram

    2018-02-01

    Agriculture is the largest global consumer of water. Irrigated areas constitute 40 % of the total area used for agricultural production (FAO, 2014a). Information on their spatial distribution is highly relevant for regional water management and food security. Spatial information on irrigation is highly important for policy and decision makers, who are facing the transition towards more efficient, sustainable agriculture. However, the mapping of irrigated areas still represents a challenge for land use classifications, and existing global data sets differ strongly in their results. The following study tests an existing irrigation map based on statistics and extends the irrigated area using ancillary data. The approach processes and analyzes multi-temporal normalized difference vegetation index (NDVI) SPOT-VGT data and agricultural suitability data - both at a spatial resolution of 30 arcsec - incrementally in a multiple decision tree. It covers the period from 1999 to 2012. The results globally show an 18 % larger irrigated area than existing approaches based on statistical data. The largest differences compared to the official national statistics are found in Asia, particularly in China and India. The additional areas are mainly identified within already known irrigated regions where irrigation is denser than previously estimated. The validation with global and regional products shows the large divergence of existing data sets with respect to size and distribution of irrigated areas, caused by spatial resolution, the considered time period, and the input data and assumptions made.
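The vegetation index underlying the multi-temporal SPOT-VGT analysis is NDVI, computed per pixel from near-infrared and red reflectance. A toy sketch of one NDVI-based decision step (the function name and thresholds are invented for illustration; they are not those of the study's decision tree):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

def looks_irrigated(peak_ndvi: float, cropland_suitability: float,
                    ndvi_threshold: float = 0.6,
                    suitability_threshold: float = 0.3) -> bool:
    """Toy decision step: a pixel with a high seasonal NDVI peak on land
    suitable for cropping is a candidate irrigated area (thresholds illustrative)."""
    return peak_ndvi >= ndvi_threshold and cropland_suitability >= suitability_threshold

print(ndvi(0.55, 0.08))  # dense green vegetation -> high NDVI (~0.75)
```

The real approach combines such rules incrementally in a multiple decision tree over the 1999-2012 time series, which is what lets it flag dense irrigation inside regions the statistics already identify.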

  11. AUTOMATIC SUMMARIZATION OF WEB FORUMS AS SOURCES OF PROFESSIONALLY SIGNIFICANT INFORMATION

    Directory of Open Access Journals (Sweden)

    K. I. Buraya

    2016-07-01

    Full Text Available Subject of Research. The competitive advantage of a modern specialist is the widest possible coverage of information sources useful for obtaining relevant, professionally significant information (PSI). Among these sources, professional web forums occupy a significant place. The paper considers the problem of automatic forum text summarization, i.e. identification of those fragments that contain professionally relevant information. Method. The research is based on statistical analysis of forum texts by means of machine learning. Six web forums, covering technologies of various subject domains, were selected for the study. The forums were marked up by experts. Using various machine learning methods, models were designed reflecting the functional relationship between the estimated characteristics of PSI extraction quality and the features of posts. The cumulative NDCG metric and its dispersion were used to assess model quality. Main Results. We have shown that request context plays an important role in assessing PSI extraction efficiency. Request contexts characteristic of PSI extraction have been selected, reflecting various interpretations of users' information needs, designated by the terms relevance and informational content. Rating scales corresponding to worldwide approaches have been designed for their estimation. We have experimentally confirmed that the results of forum summarization carried out manually by experts depend significantly on request context. We have shown that, in the overall assessment of PSI extraction efficiency, relevance is described rather well by a linear combination of features, while the informational content assessment already requires a nonlinear combination. At the same time, in the relevance assessment the leading role is played by features connected with keywords, and at an informational content
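NDCG, the quality metric named in the abstract, compares the discounted gain of a produced ranking against that of the ideal ordering of the same items. A minimal implementation:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of graded relevance scores:
    each score is discounted by log2(rank + 1)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """NDCG: DCG of the ranking divided by the DCG of the ideal (sorted) ranking,
    so a perfect ordering scores 1.0."""
    ideal_dcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal_dcg if ideal_dcg > 0 else 0.0

print(ndcg([3, 2, 3, 0, 1]))  # imperfect ordering -> NDCG < 1
print(ndcg([3, 3, 2, 1, 0]))  # ideal ordering -> NDCG == 1
```

Averaging NDCG over many queries, and looking at its dispersion as the abstract does, gives both a central quality estimate and a sense of how stable the summarizer is across request contexts.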

  12. A benchmark for statistical microarray data analysis that preserves actual biological and technical variance.

    Science.gov (United States)

    De Hertogh, Benoît; De Meulder, Bertrand; Berger, Fabrice; Pierre, Michael; Bareke, Eric; Gaigneaux, Anthoula; Depiereux, Eric

    2010-01-11

    Recent reanalysis of spike-in datasets underscored the need for new and more accurate benchmark datasets for statistical microarray analysis. We present here a fresh method using biologically-relevant data to evaluate the performance of statistical methods. Our novel method ranks the probesets from a dataset composed of publicly-available biological microarray data and extracts subset matrices with precise information/noise ratios. Our method can be used to determine the capability of different methods to better estimate variance for a given number of replicates. The mean-variance and mean-fold change relationships of the matrices revealed a closer approximation of biological reality. Performance analysis refined the results from benchmarks published previously. We show that the Shrinkage t test (close to Limma) was the best of the methods tested, except when two replicates were examined, where the Regularized t test and the Window t test performed slightly better. The R scripts used for the analysis are available at http://urbm-cluster.urbm.fundp.ac.be/~bdemeulder/.
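The Shrinkage t family that the benchmark evaluates stabilizes per-gene variance estimates by pulling them toward a target variance estimated across all genes. A simplified sketch (the fixed shrinkage weight and the externally supplied target variance are simplifications made here for illustration; the actual Shrinkage t and Limma estimate both from the data):

```python
import statistics

def shrinkage_t(group1, group2, pooled_target_var, shrink=0.5):
    """Simplified variance-shrinkage t-statistic: the per-gene pooled variance
    is pulled toward a target variance estimated across all genes. With few
    replicates this guards against tiny chance variances inflating t."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.fmean(group1), statistics.fmean(group2)
    # Classical pooled sample variance for this gene.
    sp2 = ((n1 - 1) * statistics.variance(group1) +
           (n2 - 1) * statistics.variance(group2)) / (n1 + n2 - 2)
    # Convex combination of gene-level and cross-gene variance estimates.
    var_shrunk = shrink * pooled_target_var + (1 - shrink) * sp2
    return (m1 - m2) / (var_shrunk * (1 / n1 + 1 / n2)) ** 0.5

print(shrinkage_t([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], pooled_target_var=1.0))
```

With only two or three replicates per group, the shrunken denominator is what keeps genes with accidentally tiny sample variance from dominating the top of the ranked list.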

  13. Institutional Support : Institute of Statistical, Social and Economic ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The Institute of Statistical, Social and Economic Research (ISSER) established in 1969 is a semi-autonomous university-based research centre located at the University of Ghana, Legon, Accra. ISSER has a strong track record of undertaking high-quality policy-relevant research. This grant - the largest being awarded under ...

  14. Topics from Australian Conferences on Teaching Statistics

    CERN Document Server

    Phillips, Brian; Martin, Michael

    2014-01-01

    The first OZCOTS conference in 1998 was inspired by papers contributed by Australians to the 5th International Conference on Teaching Statistics. In 2008, as part of the program of one of the first National Senior Teaching Fellowships, the 6th OZCOTS was held in conjunction with the Australian Statistical Conference, with Fellowship keynotes and contributed papers, optional refereeing and proceedings. This venture was so successful that the 7th and 8th OZCOTS were similarly run, conjoined with Australian Statistical Conferences in 2010 and 2012. Authors of papers from these OZCOTS conferences were invited to develop chapters for refereeing and inclusion in this volume. There are sections on keynote topics, undergraduate curriculum and learning, professional development, postgraduate learning, and papers from OZCOTS 2012. Because OZCOTS aim to unite statisticians and statistics educators, the approaches this volume takes are immediately relevant to all who have a vested interest in good teaching practices. Glo...

  15. An evidence perspective on topical relevance types and its implications for exploratory and task-based retrieval

    Directory of Open Access Journals (Sweden)

    Xiaoli Huang

    2006-01-01

    Full Text Available Introduction. The concept of relevance lies at the heart of intellectual access and information retrieval, indeed of reasoning and communication in general; in turn, topical relevance lies at the heart of relevance. The common view of topical relevance is limited to topic matching, resulting in information retrieval systems' failure to detect more complex topical connections which are needed to respond to diversified user situations and tasks. Method. Based on the role a piece of information plays in the overall structure of an argument, we have identified four topical relevance types: Direct, Indirect (circumstantial), Context, and Comparison. In the process of creating a speech retrieval test collection, graduate history students made 27,000 topical relevance assessments between Holocaust survivor interview segments and real user topics, using the four relevance types, each on a scale of 0 to 4. They recorded justifications for their assessments and kept detailed Topic Notes. Analysis. We analysed these relevance assessments using a grounded theory approach to arrive at a finer classification of topical relevance types. Results. For example, indirect relevance (a piece of information is connected to the topic indirectly, through inference or circumstantial evidence) was refined into Generic Indirect Relevance, Backward Inference (abduction), Forward Inference (deduction), and Inference from Cases (induction), with each subtype being further illustrated and explicated by examples. Conclusion. Each of these refined types of topical relevance plays a special role in reasoning, making a conclusive argument, or performing a task. Incorporating them into information retrieval systems allows users more flexibility and a better focus on their tasks. They can also be used in teaching reasoning skills.

  16. Energy statistics manual

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-07-01

    Detailed, complete, timely and reliable statistics are essential to monitor the energy situation at a country level as well as at an international level. Energy statistics on supply, trade, stocks, transformation and demand are indeed the basis for any sound energy policy decision. For instance, the market of oil -- which is the largest traded commodity worldwide -- needs to be closely monitored in order for all market players to know at any time what is produced, traded, stocked and consumed and by whom. In view of the role and importance of energy in world development, one would expect that basic energy information to be readily available and reliable. This is not always the case and one can even observe a decline in the quality, coverage and timeliness of energy statistics over the last few years.

  17. Probabilistic anatomical labeling of brain structures using statistical probabilistic anatomical maps

    International Nuclear Information System (INIS)

    Kim, Jin Su; Lee, Dong Soo; Lee, Byung Il; Lee, Jae Sung; Shin, Hee Won; Chung, June Key; Lee, Myung Chul

    2002-01-01

    The use of the statistical parametric mapping (SPM) program has increased for the analysis of brain PET and SPECT images. The Montreal Neurological Institute (MNI) coordinate system is used in the SPM program as a standard anatomical framework. While most researchers consult the Talairach atlas to report the localization of the activations detected in the SPM program, there is a significant disparity between MNI templates and the Talairach atlas. That disparity between Talairach and MNI coordinates makes the interpretation of SPM results time-consuming, subjective and inaccurate. The purpose of this study was to develop a program to provide objective anatomical information for each x-y-z position in the ICBM coordinate system. The program was designed to provide the anatomical information for a given x-y-z position in MNI coordinates based on the statistical probabilistic anatomical map (SPAM) images of ICBM. When an x-y-z position is given to the program, the names of the anatomical structures with non-zero probability and the probabilities that the given position belongs to those structures are tabulated. The program was coded in IDL and Java for easy porting to any operating system or platform. The utility of this program was shown by comparing its results to those of the SPM program. A preliminary validation study was performed by applying the program to the analysis of a PET brain activation study of human memory in which the anatomical information on the activated areas was previously known. Real-time retrieval of probabilistic information with 1 mm spatial resolution was achieved using the programs. The validation study showed the relevance of this program: the probability that the activated area for memory belonged to the hippocampal formation was more than 80%. These programs will be useful for interpreting the results of image analyses performed in MNI coordinates, as done in the SPM program.
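The program described above is essentially a probabilistic lookup: each MNI coordinate maps to the probability of membership in each labelled structure. A minimal sketch of such a query (the coordinates, structure names and probabilities below are invented for illustration, not taken from the SPAM data):

```python
# Hypothetical SPAM-style probability maps keyed by (x, y, z) in MNI space.
# In the real program these come from the ICBM probabilistic atlas volumes.
SPAM = {
    (-28, -20, -12): {"hippocampal formation": 0.82, "parahippocampal gyrus": 0.11},
    (44, 12, 30): {"inferior frontal gyrus": 0.64, "precentral gyrus": 0.22},
}

def query(x, y, z):
    """Return the structures with non-zero probability at an MNI coordinate,
    sorted by descending probability, as the program tabulates them."""
    probs = SPAM.get((x, y, z), {})
    return sorted(probs.items(), key=lambda kv: -kv[1])

for name, p in query(-28, -20, -12):
    print(f"{name}: {p:.0%}")
```

Because the answer is a table of probabilities rather than a single atlas label, a reader can see directly how confidently an activation can be assigned to, say, the hippocampal formation.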

  18. A perceptual space of local image statistics.

    Science.gov (United States)

    Victor, Jonathan D; Thengone, Daniel J; Rizvi, Syed M; Conte, Mary M

    2015-12-01

    Local image statistics are important for visual analysis of textures, surfaces, and form. There are many kinds of local statistics, including those that capture luminance distributions, spatial contrast, oriented segments, and corners. While sensitivity to each of these kinds of statistics has been well studied, much less is known about visual processing when multiple kinds of statistics are relevant, in large part because the dimensionality of the problem is high and different kinds of statistics interact. To approach this problem, we focused on binary images on a square lattice - a reduced set of stimuli which nevertheless taps many kinds of local statistics. In this 10-parameter space, we determined psychophysical thresholds to each kind of statistic (16 observers) and all of their pairwise combinations (4 observers). Sensitivities and isodiscrimination contours were consistent across observers. Isodiscrimination contours were elliptical, implying a quadratic interaction rule, which in turn determined ellipsoidal isodiscrimination surfaces in the full 10-dimensional space, and made predictions for sensitivities to complex combinations of statistics. These predictions, including the prediction of a combination of statistics that was metameric to random, were verified experimentally. Finally, check size had only a mild effect on sensitivities over the range from 2.8 to 14 min, but sensitivities to second- and higher-order statistics were substantially lower at 1.4 min. In sum, local image statistics form a perceptual space that is highly stereotyped across observers, in which different kinds of statistics interact according to simple rules. Copyright © 2015 Elsevier Ltd. All rights reserved.
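The quadratic interaction rule implied by elliptical isodiscrimination contours can be sketched as a quadratic form: squared perceptual distance from the random texture is x'Qx over the statistic coordinates, so the unit-distance set is an ellipse (an ellipsoid in the full space). A toy example (the matrix entries are illustrative, not the paper's fitted sensitivities):

```python
def quadratic_discriminability(coords, q_matrix):
    """Quadratic interaction rule: squared perceptual distance of a texture
    from random is x^T Q x, so isodiscrimination contours (distance = const)
    are ellipses/ellipsoids in the statistic space."""
    n = len(coords)
    return sum(coords[i] * q_matrix[i][j] * coords[j]
               for i in range(n) for j in range(n))

# Two statistic axes with unequal sensitivities and a mild interaction term.
Q = [[4.0, 0.5],
     [0.5, 1.0]]
print(quadratic_discriminability([0.5, 0.0], Q))  # pure first statistic
print(quadratic_discriminability([0.3, 0.4], Q))  # mixture of both statistics
```

Once Q is fitted from pairwise thresholds, the same form predicts sensitivity to any complex combination of statistics, which is how predictions for the full 10-dimensional space follow from pairwise measurements.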

  19. Tag-Based Social Image Search: Toward Relevant and Diverse Results

    Science.gov (United States)

    Yang, Kuiyuan; Wang, Meng; Hua, Xian-Sheng; Zhang, Hong-Jiang

    Recent years have witnessed the great success of social media websites. Tag-based image search is an important approach to accessing image content of interest on these websites. However, the existing ranking methods for tag-based image search frequently return results that are irrelevant or lacking in diversity. This chapter presents a diverse relevance ranking scheme which simultaneously takes relevance and diversity into account by exploring the content of images and their associated tags. First, it estimates the relevance scores of images with respect to the query term based on both the visual information of images and the semantic information of their associated tags. Then semantic similarities of social images are estimated based on their tags. Based on the relevance scores and the similarities, the ranking list is generated by a greedy ordering algorithm which optimizes Average Diverse Precision (ADP), a novel measure extended from the conventional Average Precision (AP). Comprehensive experiments and user studies demonstrate the effectiveness of the approach.
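The greedy ordering step can be sketched with an MMR-style criterion: at each position, pick the item whose relevance, penalized by its similarity to items already ranked, is highest. This is a hedged approximation; the chapter's exact ADP-optimizing rule differs in detail:

```python
def greedy_diverse_ranking(relevance, similarity, trade_off=0.5):
    """Greedy diverse ranking sketch: at each step select the remaining item
    maximizing relevance minus its maximum similarity to already-selected items
    (an MMR-style criterion; the ADP-optimizing greedy rule differs in detail)."""
    remaining = set(relevance)
    ranking = []
    while remaining:
        def score(i):
            max_sim = max((similarity.get(frozenset((i, j)), 0.0) for j in ranking),
                          default=0.0)
            return trade_off * relevance[i] - (1 - trade_off) * max_sim
        best = max(remaining, key=score)
        ranking.append(best)
        remaining.discard(best)
    return ranking

# Toy data: "a" and "b" are near-duplicates; diversity demotes "b".
relevance = {"a": 0.9, "b": 0.85, "c": 0.4}
similarity = {frozenset(("a", "b")): 0.95, frozenset(("a", "c")): 0.1,
              frozenset(("b", "c")): 0.1}
print(greedy_diverse_ranking(relevance, similarity))
```

With the diversity penalty active, the near-duplicate "b" drops below the less relevant but distinct "c", which is precisely the behavior a pure relevance ranking cannot produce.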

  20. Is statistical significance clinically important?--A guide to judge the clinical relevance of study findings

    NARCIS (Netherlands)

    Sierevelt, Inger N.; van Oldenrijk, Jakob; Poolman, Rudolf W.

    2007-01-01

    In this paper we describe several issues that influence the reporting of statistical significance in relation to clinical importance, since misinterpretation of p values is a common issue in orthopaedic literature. Orthopaedic research is tormented by the risks of false-positive (type I error) and